Meta has announced Movie Gen, a new AI model that generates video and audio clips from user prompts. The tool creates both visuals and sound effects, and it could change how media content is produced. According to Meta, the model can produce video clips lasting up to 16 seconds and audio clips up to 45 seconds, with the generated video and sound synchronised seamlessly, offering new possibilities for content creators.
In a recent blog post, Meta demonstrated the tool with clips that included scenes of animals swimming and people engaging in creative activities. The AI can also transform existing content, such as placing pom-poms in the hands of a runner or turning a dry parking lot into a water-filled scene with a skateboarder. Meta emphasised that Movie Gen's capabilities extend beyond generating new clips: it can also edit existing videos, giving creators more options for customisation.
Meta Movie Gen is on the scene! Our breakthrough generative AI research for media enables:
- turning text into video
- creation of personalized video
- precision video editing
- audio creation
And while it's just research today, we can't wait to see all the ways people enhance… pic.twitter.com/I4Bq9if3eK
The company further highlighted that the AI can generate background music and sound effects, enhancing the overall quality of the content. Meta cited blind tests showing Movie Gen's performance as competitive with offerings from other AI leaders, including OpenAI and ElevenLabs. However, Meta has no plans to make the tool openly available for developers, citing the need for careful risk evaluation. Instead, the company plans to work directly with the entertainment industry and content creators to integrate the model into future products.