Recent advances in AI “deepfakes” have shown promise in creating realistic fake videos from large datasets. But new research from Samsung demonstrates that it is possible to create a “talking head” from just a single portrait photo.
Samsung researchers in Moscow recently published a paper, “Few-Shot Adversarial Learning of Realistic Neural Talking Head Models,” which demonstrates a new AI algorithm capable of creating “deepfake” portrait videos from a single input image.
This is exciting because previous AI systems required an extensive amount of sample data to “learn” the input face. Because this approach can scale down to a single image, it can realistically animate subjects for which only one image exists, such as the Mona Lisa.
Naturally, the more images the AI is given, the more realistic the results become and the closer they get to the ground truth. This research is part of a growing field of “few-shot” learning, which focuses on achieving good results from fewer input data points.
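The core idea behind few-shot learning can be illustrated with a toy sketch: a model that has already been broadly pre-trained is quickly adapted to a new task from a single example. The snippet below is a hypothetical, simplified illustration of that adaptation step using a one-parameter linear model and gradient descent; it is not Samsung’s actual algorithm, which uses adversarial neural networks.

```python
# Toy sketch of few-shot adaptation (hypothetical illustration only).
# A "pre-trained" weight w is fine-tuned to a new task from a single
# sample (x, y), mirroring the idea of adapting to one portrait photo.

def finetune_one_shot(w, x, y, lr=0.1, steps=20):
    """Adapt weight w to fit the single sample (x, y) via gradient descent."""
    for _ in range(steps):
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. w
        w -= lr * grad
    return w

# Assume pre-training left us at w0 = 2.0; the new task's true weight is 3.0.
w0 = 2.0
w_adapted = finetune_one_shot(w0, x=1.0, y=3.0)
print(w_adapted)  # converges toward 3.0 after a few steps
```

In the paper’s setting, the analogue of the “pre-training” is meta-learning on a large corpus of talking-head videos, so that only a handful of gradient steps (or even a single feed-forward pass) are needed to specialize to a new, unseen face.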