An AI model that understands the interplay between human body language, speech and emotion enables digital avatars to move more realistically when speaking
By Jeremy Hsu
An artificial intelligence model can make virtual avatars gesture naturally to match spoken words – possibly paving the way for AI-generated newsreaders or influencers that move more realistically as they speak.
As humans talk, we gesture to help convey our meaning. But when video game characters or digital avatars attempt similar behaviour, they often make generic movements regardless of what they are actually saying. To make virtual figures gesture more realistically, researchers first had to teach an AI model the connection between speech and body language – and the emotions…