BEWARE Project

BEWARE: Behaviour based Enhancement of Wide-Area Situational Awareness in a Distributed Network of CCTV Cameras


BEWARE is a project funded by EPSRC and MOD to develop models for video-based people tagging (consistent labelling) and behaviour monitoring across a distributed network of CCTV cameras, in order to enhance global situational awareness over a wide area.

Large networks of CCTV cameras now collect colossal amounts of video data. Many of these networks deploy not only fixed cameras but also mobile cameras over wireless connections, and an increasing number of the cameras are either PTZ-controllable or embedded smart cameras. A multi-camera system has the potential to gain better viewpoints, resulting in both improved imaging quality and more relevant details being captured. However, more is not necessarily better: such a system can also cause information overload and confusion if the data content is not analysed in real time to drive the right camera selection and capture decisions. Moreover, current PTZ cameras are mostly controlled manually by operators using ad hoc criteria. BEWARE aims to develop automated systems that monitor people's behaviour cooperatively across a distributed network of cameras and make on-the-fly decisions for more effective content selection during data capture. Specifically, we are:

  1. Developing a model for robust detection and tagging of people over wide areas spanning different physical sites, captured by a distributed network of cameras, e.g. monitoring the activities of a person travelling through one or more cities.
  2. Developing a model for enhancing global situational awareness by correlating behaviours across a network of cameras located at different physical sites, and for real-time detection of abnormal behaviour in public spaces across camera views. The model must be able to cope with changes both in visual context and in the definition of abnormality, e.g. what counts as abnormal needs to be modelled according to time of day, location, and scene context (a minimal, purely illustrative sketch of such context-dependent thresholding follows this list).
  3. Developing a model for automatically selecting and controlling Pan-Tilt-Zoom (PTZ) and embedded smart cameras (including wireless ones) in a surveillance network so that they zoom in on people according to behaviour analysis from a global situational awareness model, thereby actively sampling higher-quality visual evidence on the fly in a global context. For example, when a car enters a restricted zone and has also been spotted stopping unusually elsewhere, the best-placed PTZ or embedded smart camera should be activated to perform adaptive image content selection and capture higher-resolution imagery of, e.g., the driver's face (a sketch of such a camera-selection step is also given after this list).
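
As a minimal illustration of the context dependence described in item 2 (the context keys, thresholds, and scores below are hypothetical, invented only for this sketch and not part of the project's published design), an abnormality score could be compared against a threshold that varies with time of day and location rather than a single global cut-off:

```python
from datetime import datetime

# Hypothetical per-context thresholds: the same behaviour score that is normal
# on a weekday afternoon may be flagged overnight in a car park.
THRESHOLDS = {
    ("station_entrance", "day"):   0.9,
    ("station_entrance", "night"): 0.6,
    ("car_park", "day"):           0.8,
    ("car_park", "night"):         0.4,
}

def time_band(ts: datetime) -> str:
    """Coarse day/night split; a learned model would use finer temporal context."""
    return "day" if 6 <= ts.hour < 20 else "night"

def is_abnormal(score: float, location: str, ts: datetime) -> bool:
    """Flag a behaviour only if its score exceeds the threshold for this context."""
    threshold = THRESHOLDS.get((location, time_band(ts)), 0.8)  # fallback threshold
    return score > threshold

# Example: the same score of 0.7 is normal by day but flagged at night.
print(is_abnormal(0.7, "car_park", datetime(2014, 1, 15, 14, 0)))  # False
print(is_abnormal(0.7, "car_park", datetime(2014, 1, 15, 2, 0)))   # True
```

In the project itself the thresholds and contexts would be learned from data rather than hand-specified; the sketch only shows why a single fixed notion of "abnormal" is insufficient across times, locations, and scene contexts.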
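
Similarly, as a purely illustrative sketch of the camera-selection step described in item 3 (all names, fields, and the scoring rule here are hypothetical assumptions, not the project's method), the following ranks candidate PTZ cameras for a flagged target by combining distance, zoom range, and current availability:

```python
from dataclasses import dataclass
import math

@dataclass
class Camera:
    """Hypothetical description of a PTZ/smart camera in the network."""
    cam_id: str
    x: float                # camera position in a shared ground-plane frame
    y: float
    max_zoom_range: float   # distance (m) beyond which useful close-ups are unlikely
    available: bool         # False if already tasked with another target

@dataclass
class Alert:
    """A behaviour alert raised by the global situational-awareness model."""
    target_x: float
    target_y: float
    priority: float         # e.g. higher for a vehicle entering a restricted zone

def score(camera: Camera, alert: Alert) -> float:
    """Toy scoring rule: prefer available cameras that are close enough to zoom in."""
    if not camera.available:
        return 0.0
    dist = math.hypot(camera.x - alert.target_x, camera.y - alert.target_y)
    if dist > camera.max_zoom_range:
        return 0.0
    # Closer cameras score higher; weight by alert priority.
    return alert.priority * (1.0 - dist / camera.max_zoom_range)

def select_camera(cameras: list[Camera], alert: Alert) -> Camera | None:
    """Pick the best-placed camera for the alert, or None if no camera qualifies."""
    best = max(cameras, key=lambda c: score(c, alert), default=None)
    return best if best and score(best, alert) > 0.0 else None

if __name__ == "__main__":
    cams = [
        Camera("gate-ptz-1", 0.0, 0.0, 80.0, True),
        Camera("carpark-ptz-2", 120.0, 40.0, 60.0, True),
    ]
    alert = Alert(target_x=30.0, target_y=10.0, priority=1.0)
    chosen = select_camera(cams, alert)
    print(chosen.cam_id if chosen else "no suitable camera")
```

In the actual system such a decision would be driven by the learned global behaviour model rather than a fixed geometric rule; the sketch only indicates where an automated selection step could replace manual PTZ control.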


Updated January 2014