Essex logo VASE logo

Demonstrations of VASE Lab projects

We believe that the best way to prove the effectiveness of a piece of work is to produce a working demonstration. Most of our demonstrations can't be shown effectively on the web; they need to be seen in person. However, a few of them can.

Facial features

There is a great deal of interest these days in finding faces in images and in tracking facial features. We looked into this ourselves during the 1990s in the context of video coding. The particular approach we explored was so-called model-based coding, in which we tracked the motion of facial features in a sequence, inferred what was happening in 3D, and used 3D graphics to animate the result. The original work at Essex was carried out by Munevver Kokuer (the subject of the first sequence below), who produced coded imagery that could be transferred over an analogue modem and lip-read at the far end. Her work was extended by Ali Al-Qayedi in the late 1990s. Ali introduced the idea of animation agents using Tcl, and was able to achieve minuscule data rates. These sequences are from Ali's work; they are all rather short, which reflects what was achievable on the hardware of the time.

Talking sequence
62 frames, CIF size (352 × 288 pixels)
Original sequence
Tracking facial features in 2-D
Tracking the head in 3-D
Tracking the mouth corners
MBC-coded sequence

Miss America sequence
109 frames, CIF size (352 × 288 pixels)
Original sequence
Tracking facial features in 2-D
Tracking the head in 3-D
Tracking the mouth corners
MBC-coded sequence

Peter sequence
100 frames, CIF size (352 × 288 pixels)
Original sequence
Tracking the head in 3-D
Tracking the mouth corners
MBC-coded sequence

Eckehard sequence
100 frames, CIF size (352 × 288 pixels)
Original sequence
Tracking the head in 3-D
Tracking the mouth corners
MBC-coded sequence

VRML modelling

Your web browser needs to have a VRML1 or VRML2 (aka VRML97) viewer available in order to view the models described below. If, when you follow a link to a model, it appears in your browser as text rather than a 3D world, your browser isn't set up to handle VRML. These models were produced by Dr Christine Clark as an aside from her main research. They were produced astonishingly quickly — for example, the campus model below took about half a working week. This is partly a testament to Christine's modelling and programming expertise and partly because she wrote code to do much of the grunt work, producing VRML code via a series of procedure calls.

We started generating VRML models long before there were any tools to help us. When we started work on our first model, of the Essex campus, we wrote a script in the Tcl programming language which generated VRML using information measured from the architect's drawings. This approach allowed the information to be represented much more concisely than writing VRML directly.
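To give a flavour of the approach, here is a minimal sketch of how a Tcl procedure can emit VRML 1.0 for a simple solid, so that a building footprint measured from the drawings becomes a single procedure call. The procedure name and parameters are illustrative, not the actual code used for the campus model.

```tcl
# Sketch: emit a VRML 1.0 node for a box-shaped building.
# Dimensions would come from measurements of the architect's drawings.
proc vrmlBox {width height depth} {
    return "Separator {\n  Cube { width $width height $height depth $depth }\n}"
}

# One call per building replaces a hand-written block of VRML.
puts [vrmlBox 10 4 6]
```

Composing many such calls in a loop, with transforms for position and orientation, is far more concise and less error-prone than writing the VRML by hand.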

We have subsequently extended and enhanced the scheme as other models were devised. We are now able to make use of a "library" of object-generating scripts, including some quite sophisticated ones such as the column generator used in the temple model, which can generate columns with any number of flutes to arbitrary accuracy.
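The column generator can be sketched along these lines: approximate the fluted cross-section by sampling points around a circle whose radius is modulated to carve out the flutes, with accuracy controlled by the number of points per flute. This is a hypothetical illustration, assuming a cosine flute profile; the names, parameters and profile are not the lab's actual library code.

```tcl
# Sketch: sample a fluted column cross-section.
# radius        - nominal column radius
# flutes        - number of flutes around the circumference
# fluteDepth    - how deeply each flute is carved
# pointsPerFlute - sampling density; higher means more accurate
proc flutedSection {radius flutes fluteDepth pointsPerFlute} {
    set pi 3.14159265358979
    set n [expr {$flutes * $pointsPerFlute}]
    set points {}
    for {set i 0} {$i < $n} {incr i} {
        set theta [expr {2.0 * $pi * $i / $n}]
        # Modulate the radius to carve a concave flute per period.
        set r [expr {$radius - $fluteDepth * (0.5 + 0.5 * cos($flutes * $theta))}]
        lappend points [list [expr {$r * cos($theta)}] [expr {$r * sin($theta)}]]
    }
    return $points
}
```

The resulting point list would then be extruded along the column's axis and written out as VRML faces, in the same way as the simpler generators.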