A team of researchers from Princeton University and Adobe Research has detailed a new project in which they use a 3D computer model of a head and a virtual ‘full perspective’ camera to manipulate the perspective of a single portrait. The manipulations simulate various shooting distances and the warps typically seen at those depths, potentially allowing software adjustments that create selfies with corrected perspective distortion.
A demo system (currently in beta) on lead researcher Ohad Fried’s website allows you to upload your own images to explore the technology.
The front-facing lenses found in smartphone cameras are typically wide-angle with a fixed focal length, to make them as flexible as possible, but the close-up nature of selfies tends to produce distortions such as enlarged noses or sloping foreheads. Interestingly, these distortions can change how the individuals are perceived: subjects in portraits taken at close distances are often described with words like ‘approachable’ and ‘peaceful,’ while subjects in portraits taken at longer distances are more often described as ‘smart,’ ‘strong,’ and ‘attractive.’
While it might be beneficial to take selfies at longer distances and longer focal lengths to eliminate the distortion, there is no practical way to do so with present phone technology. This newly developed technology could change that, however, with the researchers explaining: ‘our framework allows one to simulate a distant camera when the original shot was a selfie, and vice versa, in order to achieve various artistic goals.’
The researchers based their method on existing approaches to manipulating images, including the type of technology used in face-swapping apps. The key difference was using a ‘full perspective’ virtual camera model rather than a more simplistic, ‘weak perspective’ model, enabling them to compensate for the wider range of perspective adjustments needed for portraits taken at very close distances. This new method is able to estimate the camera distance and edit the perceived camera distance. Its modeling of depth also allows slight changes in the position of the virtual camera, allowing the photos to be slightly ‘re-posed’.
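To see why the camera model matters, here is a minimal sketch (not the researchers' code; the point values and distances are illustrative assumptions) contrasting full-perspective projection, which divides by each point's own depth, with weak perspective, which divides every point by a single average depth and therefore cannot represent close-range distortion at all:

```python
# Illustrative sketch: full vs. weak perspective projection of two face
# points at a typical selfie distance. All values are hypothetical.

def full_perspective(x, z, f=1.0):
    """Full perspective: each point is scaled by its own depth, x' = f*x/z."""
    return f * x / z

def weak_perspective(x, z_mean, f=1.0):
    """Weak perspective: all points share one average depth z_mean."""
    return f * x / z_mean

# Hypothetical face points (metres): nose tip and ear, both 4 cm off-axis,
# with the camera ~30 cm from the nose and the ear 12 cm further back.
x = 0.04
nose_depth, ear_depth = 0.30, 0.42
z_mean = (nose_depth + ear_depth) / 2

# Full perspective: the nearer nose projects larger than the farther ear.
ratio_full = full_perspective(x, nose_depth) / full_perspective(x, ear_depth)
# -> 1.4, i.e. the nose appears ~40% enlarged relative to the ear

# Weak perspective: both points get the same scale, so the distortion
# the researchers want to estimate and edit simply is not modelled.
ratio_weak = weak_perspective(x, z_mean) / weak_perspective(x, z_mean)
# -> 1.0
```

At longer shooting distances the per-point depths converge toward the average, so the two models agree; it is only in the close-up selfie regime that the full-perspective model becomes necessary.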
The technology promises more than just correcting selfie perspective. The ability to slightly adjust perspective and map facial features to a 3D model allows the creation of stereo pairs of images (3D anaglyphs) from a single photo, and could make it possible to animate changes in facial expressions.