On Tuesday, Google launched Veo 3, a new AI video synthesis model that can do something no major AI video generator has done before: generate a synchronized audio track. From 2022 to 2024, we saw early steps in AI video generation, but each video was silent and typically very short. Now you can hear voices, dialog, and sound effects in eight-second high-definition video clips.
Shortly after the launch, people began asking the most obvious benchmarking question: How good is Veo 3 at faking Oscar-winning actor Will Smith eating spaghetti?
First, a brief recap. The spaghetti benchmark in AI video traces its origins back to March 2023, when we first covered an early example of horrific AI-generated video using an open source video synthesis model called ModelScope. The spaghetti example later became well-known enough that Smith parodied it almost a year later in February 2024.
Here's what the original viral video looked like:
One thing people forget is that at the time, the model behind the Smith example wasn't the best AI video generator out there; a video synthesis model called Gen-2 from Runway had already achieved superior results (though it was not yet publicly accessible). But the ModelScope result was funny and weird enough to stick in people's memories as an early poor example of video synthesis, handy for future comparisons as AI models progressed.
AI app developer Javi Lopez first came to the rescue for curious spaghetti fans earlier this week with Veo 3, performing the Smith test and posting the results on X. But as you'll notice below when you watch, the soundtrack has a curious quality: The faux Smith appears to be crunching on the spaghetti.
On X, Javi Lopez ran "Will Smith eating spaghetti" in Google's Veo 3 AI video generator and received this result.
It's a glitch in Veo 3's experimental ability to apply sound effects to video, likely because the training data used to create Google's AI models featured many examples of chewing mouths with crunching sound effects. Generative AI models are pattern-matching prediction machines, and they need to be shown enough examples of various types of media to generate convincing new outputs. If a concept is over-represented or under-represented in the training data, you'll see unusual generation results, such as jabberwockies.
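The over-representation effect described above can be illustrated with a deliberately simplified sketch (this is not Veo 3's actual architecture, and the frequencies are invented for illustration): a toy "model" that predicts a sound effect for a visual concept by sampling in proportion to how often each pairing appeared in its training data. If "crunch" audio dominates the chewing examples, the model will usually emit a crunch, even for soft foods like spaghetti.

```python
import random
from collections import Counter

# Hypothetical training data: (visual concept, sound effect) pairs.
# "crunch" is deliberately over-represented alongside chewing mouths.
training_pairs = (
    [("chewing", "crunch")] * 80
    + [("chewing", "slurp")] * 15
    + [("chewing", "silence")] * 5
)

# Count how often each sound co-occurred with the "chewing" concept.
counts = Counter(sound for concept, sound in training_pairs
                 if concept == "chewing")

def predict_sound(counts, rng):
    """Sample a sound effect in proportion to its training frequency."""
    sounds = list(counts.keys())
    weights = list(counts.values())
    return rng.choices(sounds, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [predict_sound(counts, rng) for _ in range(1000)]

# The over-represented pairing dominates the generated outputs.
print(Counter(samples).most_common())
```

Run it and "crunch" comes out roughly 80 percent of the time: the sketch never "decides" that spaghetti is crunchy, it just reproduces the statistics it was shown, which is the same dynamic behind the faux Smith's crunchy pasta.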