Runway Research just announced its upcoming generative AI model, Gen-2, which gives creatives the ability to transform words into videos.
Runway's Gen-2 video generation model is trained on "an internal dataset of 240M images and a custom dataset of 6.4M video clips." In other words, the dataset is huge, though there's no indication as to whether this custom dataset was made by scraping the web for videographers' work. Maybe not the best thing to use if you're planning to monetise your generated content.
Still, Gen-2 looks like a pretty powerful evolution of the Gen-1 tool many are already using for storyboarding and pre-production visuals. It's similar to Meta's AI video generator, though the Gen-2 model looks to have added some really interesting modes.
Soon the tool will let you generate video through text alone. Not only that, the company is adding a few other modes to its plethora of video editing tools, including one that lets you feed the algorithm a video clip, which it can then rework in other styles.
One exciting example in the teaser video shows a panning shot across some books standing on a table. The Gen-2 model has transformed it into a night-time city scene where the books have become skyscrapers, and while it's not the most realistic shot, it looks to be a powerful tool for visualising ideas.
"Generate videos with nothing but words. If you can say it, now you can see it. Introducing, Text to Video. With Gen-2. Learn more at https://t.co/PsJh664G0Q" (Runway, March 20, 2023; video: pic.twitter.com/6qEgcZ9QV4)
The accompanying research paper goes into much more detail about the process, and it's clear the tool has come a long way since its conception.
Read more on pcgamer.com