OpenAI's Sora program shows how anyone will be able to create realistic videos from the comfort of their armchair with simple text prompts.
The footage looks as though it was shot in Tokyo, but it was generated from a simple text prompt. PHOTO: screenshot from X
Sora, unveiled by OpenAI last Thursday, can produce footage that appears to have been shot by a drone over a snow-covered Tokyo, waves crashing against the cliffs of Big Sur, or a woman enjoying her birthday party.
Experts say the new artificial intelligence program could wipe out entire industries such as film production and lead to a surge in deepfake videos ahead of the US presidential election, writes the Daily Mail.
“Generative AI tools are evolving so quickly, and combined with social media, that is leading to a vulnerable point in our democracy. It couldn't have happened at a worse time,” Oren Etzioni, founder of TruMedia.org, told CBS.
“As we try to solve this problem, we face one of the most important choices in history,” he added.
The quality of AI-generated images, sound, and video has grown rapidly over the past year, with companies like OpenAI, Google, Meta, and Stability AI rushing to create more advanced and accessible tools.
“Sora can generate complex scenes with multiple characters, specific types of movement, and precise subject and background details,” OpenAI explains on its website. “The model understands not only what the user asked for in the text, but also how those things exist in the physical world.”
The program is currently being tested and evaluated for potential security risks, with no date available for a public release yet.
The company has released examples that are unlikely to be offensive, but experts warn that the new technology could trigger a new wave of highly realistic deepfakes.
Sora “will make it even easier for bad actors to generate high-quality video deepfakes and give them more flexibility to create videos that could be used for offensive purposes,” Dr. Andrew Newell, chief scientific officer of identity verification firm iProov, told CBS. “Actors, or people who make short videos for video games, educational purposes or advertising, will be the most affected,” Newell warned.
Deepfake videos, including those of a sexual nature, are becoming a growing problem, both for private individuals and those with a public profile.
“We will be taking several important safety steps before making Sora available in OpenAI's products,” the company wrote. “We are working with red teamers, experts in areas such as misinformation, hateful content and bias. We are also building tools to help detect misleading content, such as a detection classifier that can tell when a video was generated by Sora,” OpenAI added.
Deepfakes gained attention this year when AI-generated sexual images of Taylor Swift circulated on social media.
Even President Joe Biden has not escaped deepfakes, which have animated his face and cloned his voice.
On Friday, several major tech companies signed a pact to take “reasonable precautions” to prevent the use of artificial intelligence tools to disrupt democratic elections around the world.