The 58th Grammy Awards will go down in history not only as a major show-business event but also as the first demonstration of real-time projection face mapping on a moving object.
During Lady Gaga’s show, in which the singer performed a medley of David Bowie songs, animated images were projected onto her face. Japanese designer Nobumichi Asai, one of the so-called “fathers” of facial mapping, explained in an interview that this first demonstration of facial projection on Lady Gaga’s face was preceded by many rehearsals and technology tweaks.
Officially, projection face mapping is little more than a year old. In October 2014, Nobumichi Asai, together with makeup artist Hiroto Kuwahara and French artist Paul Lacroix, presented Omote, a real-time face-tracking and projection-mapping technology. A team of programmers and photographers helped build the project around computer-generated imagery, and Omote has since grown from run-of-the-mill CGI effects into the creation of entire virtual worlds.
The face of the seated singer was outfitted with sensors that allowed the system to track its exact position. Different images were then projected onto the face, changing it beyond recognition in real time. Whenever the singer turned her head, the computer corrected the image on the fly and projected it precisely onto her face.
The video became an online hit, collecting more than 6.3 million views and triggering an avalanche of comments and discussion. Omote was called a new digital makeup system that can change a person’s appearance instantly, from skin colour to eye colour. Some even argue that face mapping, combined with AR technology, could replace plastic surgery in the future.
In January 2015, Nobumichi Asai presented his new 3D projection system called Face Hacking.
How It Works

First, the face with all its particularities is scanned with a 3D scanner, producing a high-precision 3D model with clearly defined contours. Artists then adapt the animated imagery to the facial relief. Using sensors at control points, the system determines the face’s inclination and turning angle, monitoring every motion. Once the full scan is in place, the program superimposes the video on the face and transforms it to follow the motion.
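The geometric core of this pipeline, re-projecting the scanned model’s control points for each tracked head pose so the image lands on the moving face, can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (a pinhole projector model, a head rotation about the vertical axis only, made-up focal length and control points); it is not Asai’s actual implementation.

```python
import math

def rotate_y(point, theta):
    """Rotate a 3D point about the vertical axis by theta radians (a head turn)."""
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def project(point, focal=800.0, cx=640.0, cy=360.0):
    """Pinhole projection: map a camera-space 3D point to projector pixels."""
    x, y, z = point
    return (focal * x / z + cx, focal * y / z + cy)

def warp_control_points(model_points, head_angle, head_distance):
    """Re-project the scanned model's control points for the current tracked
    pose, so the animated texture follows the moving face."""
    pixels = []
    for p in model_points:
        x, y, z = rotate_y(p, head_angle)                   # tracked rotation
        pixels.append(project((x, y, z + head_distance)))   # translate, project
    return pixels

# Toy face model: nose tip and two cheek points (metres), head 1.5 m from the
# projector and turned 10 degrees since the last frame.
model = [(0.0, 0.0, 0.0), (-0.06, 0.02, 0.03), (0.06, 0.02, 0.03)]
pixels = warp_control_points(model, math.radians(10), 1.5)
print(pixels[0])  # nose tip on the optical axis stays at the image centre
```

In a real system this step runs per frame at very low latency, and the pose would come from the sensor-based tracker rather than a hard-coded angle.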
The new technology can reproduce virtually anything on the human face, changing it beyond recognition with any makeup, image, or colour. The system registers and handles emotional expressions, changes in colour, makeup, and even texture and material.
Sephora’s marketing experts were the first to use face mapping to promote their brand of cosmetic products.
In the movie industry, discussions have arisen about replacing complicated practical makeup with face mapping. The new technology would drastically reduce the time needed for makeup, which in complicated cases can take up to four hours to apply. The line between fantasy and reality is disappearing, and today we are starting to take to heart Asai’s words that “the face is a very interesting and promising media vehicle”. Why not?