Whenever Madonna sings the 1980s hit “La Isla Bonita” on her concert tour, moving images of swirling, sunset-tinted clouds play on the giant arena screens behind her.
To get that ethereal look, the pop legend embraced a still-uncharted branch of generative artificial intelligence – the text-to-video tool. Type some words — say, “surreal cloud sunset” or “waterfall in the jungle at dawn” — and an instant video is made.
Following in the footsteps of AI chatbots and still-image generators, some AI video enthusiasts say the emerging technology could one day upend entertainment, enabling you to choose your own movie with customizable story lines and endings. But the technology has a long way to go before it can do that, and plenty of ethical pitfalls along the way.
For early adopters like Madonna, who's long pushed art's boundaries, it was more of an experiment. She nixed an earlier version of “La Isla Bonita” concert visuals that used more conventional computer graphics to evoke a tropical mood.
“We tried CGI. It looked pretty bland and cheesy and she didn't like it,” said Sasha Kasiuha, content director for Madonna's Celebration Tour that continues through late April. “And then we decided to try AI.”
ChatGPT-maker OpenAI gave a glimpse of what sophisticated text-to-video technology might look like when the company recently showed off Sora, a new tool that's not yet publicly available. Madonna's team tried a different product from New York-based startup Runway, which helped pioneer the technology by releasing its first public text-to-video model last March. The company released a more advanced “Gen-2” version in June.
Runway CEO Cristóbal Valenzuela said that while some see these tools as a “magical device that you type a word and somehow it conjures exactly what you had in your head,” they are most effective in the hands of creative professionals seeking an upgrade to the decades-old digital editing software they already use.
He said Runway can't yet make a full-length documentary.