Date: Wednesday 5th February 2014
Time: 4-5pm
Place: Informatics Teaching Lab (ITL) — Top floor meeting room
Speaker: Ravi Garg
Title: Dense non-rigid motion capture from monocular video
Abstract: Accurate recovery of the dense 3D shape of deformable and articulated objects from monocular video sequences is a challenging computer vision problem, with immense applicability to domains ranging from virtual reality, animation and motion re-targeting to image-guided surgery.
Rigid scene capture is now a mature field: algorithms exist to reconstruct indoor scenes in real time using a single camera, and multi-view geometry has evolved to support city-scale reconstructions with reasonable accuracy. However, the rigidity assumption is too restrictive, and interesting real scenes are often dynamic.
In this seminar he will present a method to densely reconstruct highly deforming smooth surfaces using only a single video as input, without the need for any prior models or shape templates. He will focus on the well-explored low-rank prior for deformable shapes and propose its convex relaxation, introducing the first variational energy minimisation approach to non-rigid reconstruction.
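As a rough sketch of the idea (the notation below is illustrative, not necessarily the speaker's exact formulation): with \mathbf{S} \in \mathbb{R}^{3F \times P} the matrix stacking the per-frame 3D shapes \mathbf{S}_f of P surface points over F frames, the non-convex low-rank prior is relaxed by replacing the rank with the nuclear norm \|\mathbf{S}\|_* (the sum of singular values, the tightest convex surrogate of rank), giving a variational energy of the form

  \min_{\mathbf{S}} \; \sum_{f=1}^{F} \bigl\| \mathbf{W}_f - \mathbf{P}_f \mathbf{S}_f \bigr\|_F^2 \;+\; \tau \, \| \mathbf{S} \|_* ,

where \mathbf{W}_f are the observed 2D tracks, \mathbf{P}_f the per-frame camera projection matrices, and \tau a trade-off weight; the presented method may add further regularisation terms not shown here.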
He will argue for the importance of long-range 2D trajectories for several vision problems and explain how subspace constraints can be used to exploit the redundancy present in the motion of real scenes for dense video registration.
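In loose, illustrative notation, the subspace constraint says that the stacked 2D trajectories of all pixels, collected as columns of \mathbf{U} \in \mathbb{R}^{2F \times P}, are well approximated within a low-dimensional trajectory subspace:

  \mathbf{U} \approx \mathbf{\Phi}\,\mathbf{C}, \qquad \mathbf{\Phi} \in \mathbb{R}^{2F \times K}, \quad K \ll 2F,

so dense registration only needs to estimate K coefficients per pixel (the columns of \mathbf{C}) rather than an unconstrained 2D displacement in every frame; this is what makes the redundancy in the motion of real scenes exploitable. The particular choice of basis \mathbf{\Phi} is an assumption here and is part of what the talk is expected to cover.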
He will also advocate the use of GPU-portable and scalable energy minimisation algorithms as a route towards practical dense non-rigid motion capture from a single video in the presence of occlusions and illumination changes.
Finally, he will talk about their multiple-model fitting framework for piecewise-rigid scene modelling and show its application to dense multi-rigid reconstruction.