Researchers in Russia have developed a machine-learning system that can produce individualized talking-head models from only a handful of personal images. With the new technology, even portrait paintings can come to life: the system is a form of AI that processes information in ways loosely inspired by the human brain.
In three YouTube videos posted on May 21, the Mona Lisa appears to speak and move her head. Seeing her in "real life" is genuinely startling.
The researchers trained the algorithm to recognize the general shapes of facial features and how those features move relative to one another. It then applies that knowledge to still images, producing a realistic video sequence of new facial expressions from a single shot.
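The pipeline described above, as reported for this line of work, has two conceptual stages: a few source frames of one person are condensed into an identity representation, and a generator then combines that identity with facial-landmark positions taken from a driving sequence to render new frames. The sketch below illustrates only that data flow in NumPy; the fixed random projections are toy stand-ins (the actual system uses trained deep networks), and all sizes and function names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 16          # toy frame resolution (illustrative only)
N_LANDMARKS = 68    # a common facial-landmark count
EMB_DIM = 8         # toy identity-embedding size

# Fixed random projections standing in for the trained embedder and
# generator networks; the real system learns these as deep CNNs.
P_embed = rng.standard_normal((EMB_DIM, H * W + N_LANDMARKS * 2))
P_gen = rng.standard_normal((H * W, EMB_DIM + N_LANDMARKS * 2))

def embed(frame, landmarks):
    """Map one (frame, landmarks) pair to an identity embedding."""
    x = np.concatenate([frame.ravel(), landmarks.ravel()])
    return P_embed @ x

def few_shot_identity(frames, landmark_sets):
    """Average the per-frame embeddings over the few source shots."""
    embs = [embed(f, l) for f, l in zip(frames, landmark_sets)]
    return np.mean(embs, axis=0)

def generate(identity, driving_landmarks):
    """Render a frame from the identity plus a target landmark pose."""
    z = np.concatenate([identity, driving_landmarks.ravel()])
    return (P_gen @ z).reshape(H, W)

# Few-shot setup: K = 3 source frames of one person.
K = 3
frames = [rng.random((H, W)) for _ in range(K)]
lmks = [rng.random((N_LANDMARKS, 2)) for _ in range(K)]

identity = few_shot_identity(frames, lmks)
new_pose = rng.random((N_LANDMARKS, 2))  # landmarks from a driving video
frame_out = generate(identity, new_pose)
print(frame_out.shape)
```

Because the identity is computed once and reused for every driving pose, the same few source images can animate an arbitrarily long sequence of expressions, which is the property that lets a single portrait "come to life."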
The AI "learned" facial movement from three human participants, producing three markedly different animations. In the video, lead author Egor Zakharov, an engineer at the Skolkovo Institute of Science and Technology and the Samsung AI Center (both in Moscow), explains how differences in the source subjects' appearance and behavior lent distinct "personalities" to the "living portraits."
However, the scientists point out that Hollywood has been producing fake footage (better known as "special effects") for a century, and that deep networks with comparable capabilities have been available for several years. They believe their work will help democratize special-effects technology, while acknowledging that such democratization can carry serious risks.