Face recognition using photometric stereo (Photoface)
The Photoface project covers two EPSRC-funded grants that started in April 2007.
Aims
- Use high-speed photometric stereo to rapidly capture facial geometry.
- Capture a new 3D face database for testing within the project and for the benefit of the worldwide face recognition research community.
- Apply novel and existing state of the art face recognition algorithms to the dataset.
- Capture skin reflectance data in order to generate synthetic poses of any face captured by the device.
Project stages
Stages one to three were researched in partnership with the Communications and Signal Processing Group at Imperial College London, the Home Office Centre for Applied Science and Technology, and General Dynamics UK.
For stage four, we worked with the University of Central Lancashire.
1. Face reconstruction
We constructed a purpose-built face reconstruction device. As detailed in our 2008 CVIU paper, both visible and near-infrared light sources are feasible, with the latter giving marginally superior reconstructions.
The device uses five light sources and a camera operating at 210 fps. The total capture time is of the order of 30 ms, with high-speed synchronisation based on Field Programmable Gate Array (FPGA) technology.
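To make the reconstruction step concrete, the lines below are a minimal MATLAB sketch of the Lambertian least-squares estimation that underlies photometric stereo, assuming calibrated unit light directions. It is illustrative only and is not the project's released code.

% Minimal Lambertian photometric stereo sketch (illustrative only).
% images: h-by-w-by-k stack of differently lit frames (double, k >= 3)
% L:      k-by-3 matrix of calibrated unit light direction vectors
function [N, albedo] = simple_photometric_stereo(images, L)
    [h, w, k] = size(images);
    I = reshape(images, h * w, k)';               % k-by-(h*w) intensity matrix
    G = L \ I;                                    % least-squares solve of L*G = I
    mag = sqrt(sum(G.^2, 1));                     % per-pixel magnitude of G
    albedo = reshape(mag, h, w);                  % magnitude gives the albedo
    N = reshape((G ./ max(mag, eps))', h, w, 3);  % unit surface normals
end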
2. Photoface database
One area of particular interest was the construction of a database of raw face images. This unique database is very different from existing databases in several respects:
- Each record consists of four images of a face, corresponding to each of the light sources of our photometric stereo rig.
- The volunteers were imaged on many occasions over a period of months, allowing extensive testing of our new methods as people change over time. These changes may be due to expression/mood, pose, hair (including facial hair), tanning or injury.
- The data was collected from a real working environment (General Dynamics UK, South Wales), rather than in controlled laboratory settings. This is in line with the laboratory’s aim to develop machine vision techniques for real-world applications.
This unique 3D face database is amongst the largest currently available, containing 3187 sessions of 453 subjects, captured in two recording periods of approximately six months each.
The Photoface device was located in an unsupervised corridor, allowing real-world, unconstrained capture. Each session comprises four differently lit colour photographs of the subject, from which surface normal and albedo estimates can be calculated (a photometric stereo MATLAB implementation is included). This allows for many testing scenarios and data fusion modalities.
Eleven facial landmarks have been manually located on each session for alignment purposes.
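For illustration, the MATLAB sketch below shows one standard way such landmarks could be used for alignment: a least-squares similarity transform (Kabsch/Umeyama style) between a session's landmarks and a reference set. This is an assumption about usage, not the alignment code supplied with the database.

% Illustrative 2D similarity alignment of one session's landmarks to a
% reference set (hypothetical helper, not the supplied tooling).
% P, Q: 11-by-2 matrices of corresponding landmark coordinates (source, target)
function [R, s, t] = align_landmarks(P, Q)
    muP = mean(P, 1);  muQ = mean(Q, 1);
    Pc = P - muP;      Qc = Q - muQ;            % centre both landmark sets
    [U, ~, V] = svd(Pc' * Qc);                  % optimal rotation (Kabsch)
    R = V * U';
    if det(R) < 0                               % avoid reflections
        V(:, end) = -V(:, end);
        R = V * U';
    end
    s = trace(R * (Pc' * Qc)) / sum(Pc(:).^2);  % least-squares scale
    t = muQ' - s * R * muP';                    % translation (column-vector form)
end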
Additionally, the Photoface Query Tool is supplied (implemented in MATLAB). This allows subsets of the database to be extracted according to selected metadata, such as gender, facial hair, pose and expression.
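As a rough illustration of this kind of metadata filtering (not the Query Tool itself, and with invented file and field names), a subset could be selected in MATLAB along these lines:

% Hypothetical metadata-based subset selection, in the spirit of the Query Tool.
sessions = readtable('photoface_metadata.csv');           % one row per session
keep = strcmp(sessions.gender, 'male') & ...
       strcmp(sessions.expression, 'neutral') & ...
       ~sessions.facial_hair;
subset = sessions(keep, :);                                % matching sessions only
fprintf('%d of %d sessions selected\n', height(subset), height(sessions));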
The Photoface database is available to download for research purposes. Please see our 2011 CVPR Workshop paper or email Gary Atkinson at gary.atkinson@uwe.ac.uk.
3. Face recognition
This part of the project aimed to optimise recognition algorithms for the acquired data and considered effects such as the:
- specific reconstruction methods that optimise recognition rates
- inclusion of advanced photometric stereo methods (for example, to account for shadow and specularity)
- choice of subspace mapping (a minimal projection sketch follows this list).
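As a minimal illustration of the subspace mapping idea, the MATLAB sketch below projects vectorised face data onto a PCA basis; it stands in for, rather than reproduces, the project's tuned recognition pipeline.

% Minimal PCA ("eigenface"-style) subspace mapping sketch (illustrative only).
% X: n-by-d matrix, one vectorised face (e.g. normals or albedo) per row.
function [W, mu] = pca_subspace(X, num_dims)
    mu = mean(X, 1);
    Xc = X - mu;                                % centre the data
    [~, ~, V] = svd(Xc, 'econ');                % principal directions
    W = V(:, 1:num_dims);                       % d-by-num_dims projection basis
end

% Gallery and probe faces can then be projected and matched by nearest neighbour:
% G = (gallery - mu) * W;  p = (probe - mu) * W;
% [~, idx] = min(sum((G - p).^2, 2));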
Further work concentrated on novel methods of dimensionality reduction for face recognition. This involved a psychologically inspired approach: identifying the specific facial pixels and optimal resolutions that humans use, and emulating this with machine vision - see our BMVC 2011 workshop paper.
We also discovered that surface normals are particularly well compressed using the ridgelet transform, whilst maintaining highly discriminating information. Indeed, we achieved 100% recognition with this approach on major subsets of our database, as reported in our Pattern Recognition 2012 paper.
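The sketch below gives a rough, simplified stand-in for this idea - Radon projections followed by a 1D wavelet transform along each projection, keeping only the largest coefficients - and is not the exact ridgelet implementation from the paper (it also assumes the Image Processing and Wavelet Toolboxes are available).

% Simplified ridgelet-style compression of one normal-map component Nz.
theta = 0:1:179;
R = radon(Nz, theta);                    % Radon projections at 1-degree steps
C = [];
for k = 1:numel(theta)
    c = wavedec(R(:, k), 3, 'db4');      % 1D wavelet transform of each projection
    C = [C; c(:)];                       %#ok<AGROW>
end
[~, order] = sort(abs(C), 'descend');
keepIdx = order(1:round(0.05 * numel(C)));   % retain the top 5% of coefficients
feature = zeros(size(C));
feature(keepIdx) = C(keepIdx);               % sparse, highly discriminating feature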
Finally, in collaboration with the University of Bath, we designed a recognition algorithm based on the nose ridge shape.
4. Reflectance analysis
For some applications, it may be useful to compare 3D (or 2.5D) data to 2D images. In these cases it is necessary to use the 2.5D data to render images whose illumination conditions match the 2D images. A video illustrating our ability to re-render images in this way can be viewed as either a greyscale AVI video or a colour AVI video.
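Under a Lambertian assumption, the underlying re-rendering step can be sketched very simply: combine the recovered albedo and surface normals with a chosen light direction. The MATLAB lines below are illustrative only and assume N (h-by-w-by-3 unit normals) and albedo (h-by-w) from the photometric stereo stage.

% Minimal Lambertian re-rendering under a new light direction (illustrative only).
light = [0.3, 0.2, 0.93];
light = light / norm(light);                             % unit light direction
shading = N(:, :, 1) * light(1) + N(:, :, 2) * light(2) + N(:, :, 3) * light(3);
rendered = albedo .* max(shading, 0);                    % clamp back-facing normals
imshow(rendered, []);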
These illustrate the usefulness of the reflectance data that emerges from photometric stereo - namely the surface albedo map. Our next EPSRC project looked to reliably capture Bidirectional Reflectance Distribution Function (BRDF) data for each face scanned by the system. This can then be used both to render synthetic face images and to enhance the quality of the reconstruction. More details to follow.
Grant nos. EPSRC EP/E028659/1, EP/I003061/1