OpenAI unveils text-to-video model Sora

OpenAI has unveiled Sora, its text-to-video model. Sora can generate realistic and imaginative scenes from text instructions.
The AI model can create videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.
OpenAI announced that Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.
Regarding the model's weaknesses, the company stated: “Simulating complex interactions between objects and multiple characters is often challenging for the model, sometimes resulting in humorous generations.”
“We’ll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology. Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time,” said OpenAI in a post.

The featured image above is a screenshot of a video generated from the following prompt: A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.

@adgully