The current generation is the selfie generation. Sad, happy, depressed, angry or just hungry, people click selfies to express every emotion. So, given how far the fad has spread across almost every generation, it comes as no surprise that several companies are developing new apps and technologies to help people click better selfies and garner praise from their friends on social media.
If you and your gang (group of friends) are regular selfie clickers, you have probably faced this situation: in group selfies, the person holding the phone ends up with distorted facial features. The part of the face closest to the phone camera usually ends up looking much bigger in the selfie than it is in reality.
It’s an unfortunate fact, but the physics of perspective means the only way to avoid these facial distortions and end up with a flattering selfie is to move the camera as far from your face as possible. There are only two ways to achieve this: use a selfie stick, or ask other people to click your pictures wherever you go. Carrying a selfie stick with you 24×7 is tedious, and so is asking strangers to take your photos, so neither option is all that viable in the real world. But don’t you worry, there’s a cool new third option, developed by a team of researchers from Adobe and Princeton University, that can up your selfie game by quite a few notches.
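To see why distance matters so much, here is a back-of-the-envelope sketch under a simple pinhole-camera assumption (the numbers, like a nose sitting roughly 3 cm closer to the camera than the ears, are illustrative, not from the paper). Apparent size scales as 1/distance, so the closer the camera, the more the nose is magnified relative to the rest of the face:

```python
# Hypothetical numbers for illustration: assume the nose tip is ~3 cm
# closer to the camera than the ears. Under a pinhole model, apparent
# size scales as 1/distance, so the nose-to-ear magnification ratio
# tells us how exaggerated the nose looks at each shooting distance.

def nose_exaggeration(camera_to_ears_cm, depth_offset_cm=3.0):
    """Ratio of nose magnification to ear magnification."""
    nose_dist_cm = camera_to_ears_cm - depth_offset_cm
    return camera_to_ears_cm / nose_dist_cm

# Arm's-length selfie (~30 cm): nose appears ~11% larger than it should.
print(round(nose_exaggeration(30.0), 3))   # 1.111
# Selfie-stick distance (~90 cm): exaggeration drops to ~3%.
print(round(nose_exaggeration(90.0), 3))   # 1.034
```

The same arithmetic explains why portrait photographers step well back from their subjects: at a few metres the depth differences within a face become negligible compared to the camera distance.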
The research team has developed a selfie technology that can digitally adjust the perspective of a portrait after the lens has captured it. The technology could give selfie takers a simple slider in an app, with which they could easily adjust the level of facial distortion themselves.
For those wondering how this is possible, the process is conceptually simple. The technology lets users adjust facial distortions by mapping the 2D source image onto a 3D head model, which is then rendered at variable distances from a virtual camera. By estimating the camera distance in the original picture, the team can work out how the 2D image needs to be warped to match the way a person’s facial features change as the virtual camera moves closer to or farther from the face.
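The core idea can be sketched in a few lines. This is not Adobe’s actual pipeline (which fits a full 3D head model to the photo); it is a toy version with hand-picked 3D landmarks, projecting them with a pinhole camera at two distances while rescaling focal length so the overall face size stays constant, then reading off how each projected point moves. That per-point displacement is exactly the kind of 2D warp the slider would apply:

```python
import numpy as np

def project(points, cam_dist, focal):
    """Pinhole projection of Nx3 points (x, y, depth offset from face center)."""
    z = cam_dist + points[:, 2]            # distance of each landmark from camera
    return focal * points[:, :2] / z[:, None]

# Toy facial landmarks in cm; negative z means closer to the camera.
landmarks = np.array([
    [ 0.0, -2.0, -3.0],   # nose tip, 3 cm in front of the face center
    [-7.0,  0.0,  4.0],   # left ear, 4 cm behind
    [ 7.0,  0.0,  4.0],   # right ear
    [ 0.0,  6.0,  1.0],   # forehead
])

# Render at arm's length and at a simulated step-back distance; scaling
# the focal length with the distance keeps the overall face size fixed.
near = project(landmarks, cam_dist=30.0, focal=30.0)
far  = project(landmarks, cam_dist=90.0, focal=90.0)

warp = far - near   # 2D displacement field: where each pixel should move

# The nose tip shrinks toward the face center while the ears spread
# slightly outward -- the characteristic "dolly zoom" correction.
print(warp.round(3))
```

Applying that displacement field to every pixel of the source image (interpolated between the landmarks) is what produces the corrected portrait without retaking the photo.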
You can give the system a try at the team’s web-based demo here.