Not to be outdone by Meta’s Make-A-Video, Google today detailed its work on Imagen Video, an artificial intelligence system that can generate video clips from a text prompt (e.g., “a teddy bear washing dishes”). While the results aren’t perfect (the clips the system produces tend to suffer from artifacts and noise), Google claims that Imagen Video is a step toward a system with a “high degree of control” and knowledge of the world, including the ability to generate footage in a range of artistic styles.
As my colleague Devin Coldewey noted in his article on Make-A-Video, text-to-video systems are not new. Earlier this year, a team of researchers from Tsinghua University and the Beijing Academy of Artificial Intelligence released CogVideo, which can translate text into short clips of reasonably high fidelity. But Imagen Video appears to be a significant leap over the previous state of the art, showing an ability to handle captions that existing systems would struggle to interpret.
“It’s definitely an improvement,” Matthew Guzdial, an assistant professor at the University of Alberta who studies artificial intelligence and machine learning, told TechCrunch via email. “As you can see from the video examples, even though the comms team picks the best outputs, there’s still the odd blur and artifacts. So this is definitely not going to be directly used in animation or TV anytime soon. But that, or something like it, could certainly be built into tools to help speed things up.”
Imagen Video builds on Google’s Imagen, an image generation system comparable to OpenAI’s DALL-E 2 and Stability AI’s Stable Diffusion. Imagen is what’s known as a “diffusion” model, which creates new data (e.g., images or video) by learning to “destroy” existing samples with added noise and then “recover” them. As the model is fed more samples, it gets better at recovering the data it previously destroyed, and that same denoising process can then be used to generate entirely new data.
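The destroy-and-recover idea can be made concrete with a toy sketch. The snippet below is an illustration of the general diffusion principle, not Imagen Video’s actual architecture or noise schedule: data is corrupted by blending it with Gaussian noise, and a sample can be recovered exactly when the injected noise is known (in a real model, a neural network learns to predict that noise).

```python
import numpy as np

def make_schedule(steps=100, beta_min=1e-4, beta_max=0.02):
    """Illustrative linear noise schedule: per-step noise levels (betas)
    and the cumulative signal-retention terms (alpha_bars)."""
    betas = np.linspace(beta_min, beta_max, steps)
    alpha_bars = np.cumprod(1.0 - betas)
    return betas, alpha_bars

def destroy(x0, t, alpha_bars, rng):
    """Forward (noising) process: blend the clean sample x0 with noise,
    keeping less signal the larger the step t."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def recover(xt, t, eps_hat, alpha_bars):
    """Reverse step: given a (predicted) noise eps_hat, invert the blend
    to estimate the clean sample."""
    return (xt - np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alpha_bars[t])

rng = np.random.default_rng(0)
_, alpha_bars = make_schedule()
x0 = rng.standard_normal((8, 8))   # stand-in for an image or video frame
xt, eps = destroy(x0, t=50, alpha_bars=alpha_bars, rng=rng)
x0_rec = recover(xt, t=50, eps_hat=eps, alpha_bars=alpha_bars)
# With the true noise, recovery is exact; training teaches a model to
# approximate eps from xt alone, which is what enables generation.
```

During generation, the model starts from pure noise and applies the learned recovery step repeatedly, which is how it hallucinates samples that were never in the training set.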
As the Google research team behind Imagen Video explains in a paper, the system takes a text description and generates a 16-frame video at three frames per second and a resolution of 24 by 48 pixels. The system then upscales the clip and “predicts” additional frames, producing a final 128-frame, 24-frames-per-second video at 720p (1280×768).
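The arithmetic behind that cascade is worth spelling out. The sketch below derives the scale factors implied by the figures quoted above (not the paper’s individual model stages); note that the clip’s duration stays the same, and only its temporal and spatial density increases.

```python
# Base output and final output, as quoted in the article.
base = {"frames": 16, "fps": 3, "height": 24, "width": 48}
final = {"frames": 128, "fps": 24, "height": 768, "width": 1280}

temporal_factor = final["frames"] / base["frames"]   # 8x more frames
fps_factor = final["fps"] / base["fps"]              # 8x higher frame rate
spatial_factor = (final["height"] / base["height"],  # 32x taller,
                  final["width"] / base["width"])    # ~26.7x wider

# Because frame count and frame rate grow by the same factor,
# the clip length is unchanged at roughly 5.3 seconds.
duration_base = base["frames"] / base["fps"]
duration_final = final["frames"] / final["fps"]
```

In other words, the upscaling stages add detail in space and smoothness in time rather than extending the video.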
Google says Imagen Video was trained on 14 million video-text pairs and 60 million image-text pairs, as well as the publicly available LAION-400M image-text dataset, which allowed it to generalize to a range of aesthetics. In experiments, the researchers found that Imagen Video could generate videos in the style of Van Gogh paintings or watercolors. Perhaps most impressively, they claim that Imagen Video demonstrated an understanding of depth and three-dimensionality, allowing it to create videos such as drone flythroughs that rotate around and capture objects from different angles without distorting them.
In a significant improvement over the image generation systems available today, Imagen Video can also render text correctly. While both Stable Diffusion and DALL-E 2 struggle to translate prompts like “a logo for ‘Diffusion’” into readable type, Imagen Video pulls it off without a hitch, at least judging by the samples in the paper.
This is not to say that Imagen Video is without limitations. As with Make-A-Video, even the cherry-picked clips are jittery and distorted in parts, as Guzdial alluded to, with objects blending together in unnatural and physically impossible ways. The researchers also note that the data used to train the system contained problematic content, which could lead Imagen Video to produce graphic, violent, or sexually explicit clips. Google says it won’t release the Imagen Video model or its source code “until these concerns are mitigated.”
Still, with text-to-video technology progressing rapidly, it may not be long before an open source model emerges, both supercharging creativity and presenting an intractable challenge where deepfakes and disinformation are concerned.