By BARBARA ORTUTAY and MATT O'BRIEN
Updated 1:44 PM PST, November 11, 2025
The tech industry is moving fast and breaking things again. This time, thanks to artificial intelligence video-generation platforms like OpenAI's Sora 2, what is breaking is humanity's shared reality and control of our likeness before and after death.
The typical Sora video, made on OpenAI's app and spread onto TikTok, Instagram, X and Facebook, is designed to be amusing enough for you to click and share. It could be Queen Elizabeth II rapping or something more ordinary and believable. One popular Sora genre is fake doorbell camera footage that captures something slightly uncanny (say, a boa constrictor on the porch or an alligator approaching an unfazed child) and ends with a mild shock, like a grandma shouting as she beats the animal with a broom.
But a growing chorus of advocacy groups, academics and experts is raising alarms about the dangers of letting people create AI videos of just about anything they can type into a prompt, leading to the proliferation of nonconsensual images and realistic deepfakes in a sea of less harmful "AI slop." OpenAI has cracked down on AI creations of public figures (among them Michael Jackson, Martin Luther King Jr. and Mister Rogers) doing outlandish things, but only after an outcry from family estates and an actors' union.
The nonprofit Public Citizen is now demanding OpenAI withdraw Sora 2 from the public, writing in a Tuesday letter to the company and CEO Sam Altman that the app's hasty release, timed so that it could launch ahead of competitors, shows a "consistent and dangerous pattern of OpenAI rushing to market with a product that is either inherently unsafe or lacking in needed guardrails." Sora 2, the letter says, shows a "reckless disregard" for product safety, as well as for people's rights to their own likeness and for the stability of democracy. The group also sent the letter to the U.S. Congress.
OpenAI didn't respond to requests for comment Tuesday.
"Our biggest concern is the potential threat to democracy," said Public Citizen tech policy advocate J.B. Branch in an interview. "I think we're entering a world in which people can't really trust what they see. And we're starting to see strategies in politics where the first image, the first video that gets released, is what people remember."
Branch, who wrote Tuesday's letter, also sees broader threats to people's privacy that disproportionately affect vulnerable populations online.
OpenAI blocks nudity, but Branch said that "women are seeing themselves being harassed online" in other ways, such as with fetishized niche content that makes it through the app's restrictions. The news outlet 404 Media reported Friday on a flood of Sora-made videos of women being strangled.
OpenAI introduced its new Sora app on iPhones more than a month ago. It launched on Android phones last week in the U.S., Canada and several Asian countries, including Japan and South Korea.
Much of the strongest pushback has come from Hollywood and other entertainment interests, including the Japanese manga industry. OpenAI announced its first big changes just days after the release, saying "overmoderation is super frustrating" for users but that it's important to be conservative "while the world is still adjusting to this new technology."
That was followed by publicly announced agreements with Martin Luther King Jr.'s family on Oct. 16, preventing "disrespectful depictions" of the civil rights leader while the company worked on better safeguards, and another on Oct. 20 with "Breaking Bad" actor Bryan Cranston, the SAG-AFTRA union and talent agencies.
"That's all well and good if you're famous," Branch said. "It's sort of just a pattern that OpenAI has where they're willing to respond to the outrage of a very small population. They're willing to release something and apologize afterwards. But a lot of these issues are design choices that they can make before releasing."
OpenAI has faced similar complaints about its flagship product, ChatGPT. Seven new lawsuits filed last week in California courts claim the chatbot drove people to suicide and harmful delusions even when they had no prior mental health issues. Filed on behalf of six adults and one teenager by the Social Media Victims Law Center and Tech Justice Law Project, the lawsuits claim that OpenAI knowingly released GPT-4o prematurely last year, despite internal warnings that it was dangerously sycophantic and psychologically manipulative. Four of the victims died by suicide.
Public Citizen was not involved in the lawsuits, but Branch said he sees parallels in Sora's hasty release.
He said they're "putting the pedal to the floor without regard for harms. Much of this seems foreseeable. But they'd rather get a product out there, get people downloading it, get people who are addicted to it rather than doing the right thing and stress-testing these things beforehand and worrying about the plight of everyday users."
OpenAI spent last week responding to complaints from a Japanese trade association representing famed animation studios like Hayao Miyazaki's Studio Ghibli and video game makers like Bandai Namco and Square Enix. OpenAI said many anime fans want to interact with their favorite characters, but the company has also put guardrails in place to prevent well-known characters from being generated without the consent of the people who own the copyrights.
"We're engaging directly with studios and rightsholders, listening to feedback, and learning from how people are using Sora 2, including in Japan, where cultural and creative industries are deeply valued," OpenAI said in a statement about the trade group's letter last week.
BARBARA ORTUTAY
Ortutay writes about social media and the internet for The Associated Press.
MATT O'BRIEN
O'Brien covers the business of technology and artificial intelligence for The Associated Press.