ISCANIT: Recognising Intention in Real-Time for Visually Mediated Interaction


EPSRC GR/L89624
July 1998-July 2000

About the Project

The ISCANIT project undertakes research supporting Visually Mediated Interaction (VMI) and aims to advance the state of the art by developing generic view-based models of head and body behaviour. These models are used to recognise intentions for active camera control. We have developed active computer vision systems that interpret dynamic scenes in terms of subjects' behaviour and intention in order to mediate interaction. A prototype system, VIGOUR, has been built to perform real-time tracking of multiple people and behavioural analysis of up to three individuals simultaneously within typical indoor office or home environments.
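
To make the shape of such a system concrete, the sketch below outlines a minimal real-time tracking and behaviour-analysis loop in Python. It is an illustration only, not the VIGOUR implementation: the Track structure, the detect_people and classify_behaviour callables, and the association threshold are all assumptions made for the example.

    # Illustrative sketch (not the actual VIGOUR code) of a real-time
    # multi-person tracking and behaviour-analysis cycle: detect people in
    # a frame, associate detections with existing tracks, then classify
    # each subject's behaviour from its recent motion history.

    from dataclasses import dataclass, field

    MAX_SUBJECTS = 3  # the prototype analyses at most three people at once


    @dataclass
    class Track:
        """State kept for one tracked person (hypothetical structure)."""
        position: tuple                               # (x, y) head position
        history: list = field(default_factory=list)   # recent positions
        behaviour: str = "idle"                       # last behaviour label


    def step(frame, tracks, detect_people, classify_behaviour):
        """One tracking/analysis cycle for a single captured frame."""
        for det in detect_people(frame):              # candidate positions
            nearest = min(tracks, key=lambda t: dist(t.position, det),
                          default=None)
            if nearest is not None and dist(nearest.position, det) < 50:
                # Greedy nearest-neighbour association to an existing track.
                nearest.position = det
                nearest.history.append(det)
            elif len(tracks) < MAX_SUBJECTS:
                # Unmatched detection starts a new track, up to the cap.
                tracks.append(Track(position=det, history=[det]))
        for track in tracks:
            # Label the subject's behaviour (e.g. pointing, waving)
            # from the track's recent motion history.
            track.behaviour = classify_behaviour(track.history)
        return tracks


    def dist(a, b):
        """Euclidean distance between two image points."""
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
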

The main research strategy is incremental prototyping: developing ongoing demonstrations and identifying key problems along the way.
 

What is "Visually Mediated Interaction"?

While the main purpose of the project is to advance state-of-the-art research into modelling and recognition of human behaviour, VMI has been chosen as the vehicle application to demonstrate developments.  VMI involves parsimonious visual communication between remote parties, with a computational system intelligently editing the visual information to focus on important areas in the scene.  An example would be an automatic cameraman system that understands a visual scene well enough to focus the camera on a speaker or other important event.  Our chief example application has been a video conference involving several people at one end, but using only a single pan-tilt-zoom camera to perform close-ups on the participants.  At this early stage of the technology, the participants assist the computer system by using pointing and waving gestures to indicate their intentions.
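
The sketch below illustrates how a recognised gesture might drive the active camera in this setting. It is a hypothetical illustration, not the project's actual control code: the camera methods (pan_relative, tilt_relative, zoom_to), the gains, and the gesture semantics are invented for the example.

    # Illustrative sketch of gesture-driven pan-tilt-zoom control -- not the
    # project's actual camera interface. A recognised gesture selects a
    # subject, and the camera is steered until that subject is centred,
    # then zoomed for a close-up.

    PAN_GAIN = 30.0   # degrees of pan per unit of normalised image offset
    TILT_GAIN = 20.0  # degrees of tilt per unit of normalised image offset
    DEADBAND = 0.05   # ignore small offsets so the camera does not jitter


    def frame_subject(camera, subject_xy, image_size, gesture):
        """Steer the camera so the gesturing subject ends up centred.

        A waving gesture requests a close-up; other behaviours leave the
        zoom alone (these semantics are invented for illustration).
        """
        (x, y), (w, h) = subject_xy, image_size
        dx = x / w - 0.5   # horizontal offset from centre, in [-0.5, 0.5]
        dy = y / h - 0.5   # vertical offset from centre, in [-0.5, 0.5]
        if abs(dx) > DEADBAND:
            camera.pan_relative(PAN_GAIN * dx)    # pan towards the subject
        if abs(dy) > DEADBAND:
            camera.tilt_relative(TILT_GAIN * dy)  # tilt towards the subject
        if gesture == "waving":
            camera.zoom_to(2.0)                   # move in for a close-up

A deadband of this kind is a common way to stop a closed-loop pan-tilt controller from oscillating around its target.
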

The following MPEG example demonstrates the initial concept.  Note that this example is NOT computer-controlled: a human is operating the camera with a remote-control device.

  MPEG example, click to download (4.7 MB)
 
 

Outcomes of the ISCANIT Project

There are three main areas of contribution from the project:

  1. Research: there have been published contributions to the computer vision field on three main topics.

  2. Demonstration: the prototype system VIGOUR (Visual Interaction based on Gestures and behaviOUR) was developed to integrate these results into a working real-time VMI system.  Final and intermediate demonstrations can be seen here.

  3. Dissemination: in January 2001 we will host a workshop on Understanding Human Gestures and Behaviour.  The meeting will take place under the auspices of the British Machine Vision Association and will provide a forum for disseminating results to both the research and industrial communities.

People Involved in the ISCANIT Project 

This EPSRC-funded two-year project is a collaboration between the Department of Computer Science, Queen Mary and Westfield College, University of London, and the School of Cognitive and Computing Sciences at the University of Sussex (COGS). 

QMW 

Sussex

Related Links