OpenAI's Sora video tool sparks safety outcry over violent and racist outputs
by Jason Darries, 7 Oct 2025, Technology

When OpenAI rolled out its new video‑generation service Sora on Dec 9, 2024, many paid ChatGPT Plus users quickly discovered that the promised safety “guardrails” were, in practice, more like paper walls.

Subscribers paying $20 a month could type a prompt and receive a 20‑second clip, yet within hours the platform was spitting out graphic fights, hate‑filled symbols and other content that OpenAI’s own policies label as prohibited.

The breach isn’t an isolated glitch: it spotlights a broader tension between OpenAI’s rush to monetize its multimodal AI suite and the still‑nascent field of AI‑content moderation.

Background: From DALL‑E to Sora

OpenAI first announced Sora in February 2024, touting it as the next leap after the wildly successful ChatGPT chatbot and DALL‑E image generator. CEO Sam Altman promised robust safety layers before any public release.

During a closed beta that ran from March through November 2024, roughly 1,000 creators and researchers tinkered with the tool. Internal notes leaked to The Guardian reveal that by August 2024 OpenAI’s safety team was already flagging “persistent filter failures,” yet the rollout schedule stayed unchanged.

Why does this matter? Video is a far more immersive medium than still images, meaning harmful outputs can spread faster and have deeper psychological impact.

What went wrong: The Sora safety breach

On the day of launch, tech reporter Mark Rodriguez, a veteran writer for TechCrunch, posted screenshots on X showing Sora rendering a street‑fight scene from a prompt that should have triggered an automatic block. Within 24 hours, dozens of similar demonstrations flooded online forums.

One user shared a short clip that depicted a mob with swastikas chanting slogans—exactly the type of content OpenAI’s policy bans. Another prompted “a peaceful protest turned violent” and received a vivid depiction of police using batons on demonstrators.
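
The pattern of these bypasses is consistent with filtering done at the level of prompt keywords. The sketch below is a minimal, hypothetical illustration, not OpenAI’s actual moderation code: a blocklist check on the raw prompt catches the literal phrasing but misses a paraphrase that carries the same intent.

    # Hypothetical sketch of why keyword blocklists fail; NOT OpenAI's real filter.
    BLOCKLIST = {"fight", "violence", "swastika", "riot"}

    def naive_prompt_filter(prompt: str) -> bool:
        """Return True if the prompt should be blocked."""
        words = set(prompt.lower().split())
        return bool(BLOCKLIST & words)

    print(naive_prompt_filter("two men in a street fight"))          # True: literal term hit
    print(naive_prompt_filter("two men trading blows in an alley"))  # False: same intent, no trigger word

Production moderation systems normally layer trained classifiers on top of such lists; the ease of the demonstrated bypasses suggests those deeper layers were either absent or poorly tuned at launch.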

Dr. Sarah Chen, a computer‑science professor at Stanford University focusing on AI safety, told reporters on Dec 10, “The guardrails are not real. What we’re seeing with Sora demonstrates that OpenAI has prioritized speed to market over responsible deployment.”

OpenAI’s chief technology officer Mira Murati addressed the media the next day, acknowledging that “some users have found ways to circumvent our safety measures” and promising a patch within 30 days, but she offered few technical specifics.

Reactions from civil rights groups and regulators

The National Association for the Advancement of Colored People (NAACP) slammed the launch. Its technology policy director, James Washington, issued a statement on Dec 11 describing the episode as “deeply concerning and reflective of broader issues in AI where marginalized communities bear the brunt of technological failures.”

Across the Atlantic, the European Commission’s Executive Vice‑President Margrethe Vestager announced on Dec 12 that EU regulators would open an investigation under the Digital Services Act. Violations could attract fines of up to 6 % of OpenAI’s global annual revenue—potentially billions of dollars.
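
For scale, the cap is a straight percentage of worldwide turnover. Here is a back‑of‑envelope sketch; the turnover figures in it are hypothetical, since OpenAI’s actual revenue is not disclosed in this report.

    # Ceiling on a DSA fine: up to 6% of global annual turnover.
    # Turnover figures below are hypothetical, for illustration only.
    DSA_MAX_RATE = 0.06

    def max_dsa_fine(global_turnover_usd: float) -> float:
        return DSA_MAX_RATE * global_turnover_usd

    for turnover in (5e9, 20e9, 50e9):  # $5B, $20B, $50B, assumed
        print(f"turnover ${turnover / 1e9:.0f}B -> fine cap ${max_dsa_fine(turnover) / 1e9:.2f}B")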

Advocacy organisations such as the AI Now Institute and the Partnership on AI have called for an immediate suspension of Sora until robust moderation tools are in place. Timnit Gebru, founder of the Distributed AI Research Institute, posted on X, “This is exactly what we warned about. The rush to deploy without adequate safety testing puts vulnerable communities at risk.”

Business stakes: Revenue vs. responsibility

Analysts at Morgan Stanley estimate that Sora could generate between $500 million and $1 billion in revenue in 2025, driven by both individual Plus subscriptions and enterprise licences. OpenAI, valued at $157 billion after an October 2024 funding round led by Thrive Capital, faces mounting pressure from investors to turn its multimodal research into cash flow.

Competitors like Stability AI, Runway ML, and Google’s DeepMind have all postponed their own video‑generation rollouts, citing safety concerns. Their caution underscores how OpenAI’s misstep could reshape the entire AI video market.

What’s next: Patch plans and possible fallout

OpenAI says the next 30 days will see “enhanced prompt analysis, tighter post‑generation review and a revised user‑reporting workflow.” No timeline for a full public audit has been shared.
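
OpenAI has published no technical detail, but the three measures map onto a conventional layered‑moderation design. The sketch below shows that layering under stated assumptions: every name in it (classify_prompt, review_frames, ReportQueue) is illustrative, not a documented OpenAI API.

    # Hypothetical layered-moderation pipeline matching the three stated measures.
    # All names here are illustrative assumptions, not OpenAI's documented API.
    from dataclasses import dataclass, field

    @dataclass
    class Verdict:
        allowed: bool
        reason: str = ""

    def classify_prompt(prompt: str) -> Verdict:
        # Layer 1 -- prompt analysis: reject before any generation compute is spent.
        banned_terms = ("swastika", "lynching", "beating")
        for term in banned_terms:
            if term in prompt.lower():
                return Verdict(False, f"prompt contains '{term}'")
        return Verdict(True)

    def review_frames(frames: list[dict]) -> Verdict:
        # Layer 2 -- post-generation review: a (stubbed) vision classifier scores
        # output frames, catching paraphrased prompts the text check missed.
        flagged = sum(1 for f in frames if f.get("violence_score", 0.0) > 0.8)
        if flagged:
            return Verdict(False, f"{flagged} frames exceeded the violence threshold")
        return Verdict(True)

    @dataclass
    class ReportQueue:
        # Layer 3 -- user reporting: complaints feed a human-review queue.
        reports: list = field(default_factory=list)

        def file(self, video_id: str, reason: str) -> None:
            self.reports.append((video_id, reason))

    def generate_video(prompt: str, model) -> list[dict]:
        pre = classify_prompt(prompt)
        if not pre.allowed:
            raise PermissionError(f"blocked pre-generation: {pre.reason}")
        frames = model(prompt)                      # hypothetical generator call
        post = review_frames(frames)
        if not post.allowed:
            raise PermissionError(f"blocked post-generation: {post.reason}")
        return frames

    def benign_model(prompt):
        # Toy generator: a 20-second clip at 24 fps, all frames benign.
        return [{"violence_score": 0.05}] * 480

    clip = generate_video("a dog surfing at sunset", benign_model)

The point of the layering is defense in depth: the frame‑level check exists precisely because text filters can be paraphrased around, at the cost of wasted compute on clips that are generated and then rejected.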

If the EU investigation finds systemic violations, OpenAI could face a multi‑billion‑dollar penalty, potentially forcing a redesign of Sora’s architecture. Meanwhile, civil‑rights groups are preparing a class‑action lawsuit alleging, under U.S. civil‑rights statutes, that the tool facilitates hate speech.

For developers who have already built on Sora, the uncertainty could mean pausing projects or switching to alternative platforms. The broader AI community is watching closely, because the lesson here will likely dictate how quickly—and under what safeguards—future generative‑video models hit the market.

Key facts

  • Launch date: Dec 9, 2024 for ChatGPT Plus users ($20/month).
  • Content failures: violent fights, racist symbols, hate‑filled chants.
  • Primary critics: Stanford’s Dr. Sarah Chen, NAACP’s James Washington, AI Now Institute.
  • Regulatory response: EU Digital Services Act investigation announced Dec 12.
  • Revenue outlook: $500 M‑$1 B projected for 2025.

Frequently Asked Questions

How does Sora’s failure affect ChatGPT Plus subscribers?

Subscribers can still generate videos, but the lack of reliable moderation means they risk creating or being exposed to illegal or hateful content, potentially violating OpenAI’s usage policies and exposing themselves to legal liability.

What led to the content‑filter breakdown in Sora?

Internal documents show that as early as August 2024, OpenAI’s safety team flagged “persistent filter failures” during beta testing, but the company proceeded with launch to meet a commercial timeline, leaving the system unprepared for sophisticated prompt‑bypass techniques.

Which regulators are investigating Sora and what could they impose?

The European Commission, which enforces the Digital Services Act, is opening a formal probe. If violations are confirmed, fines can reach up to 6 % of OpenAI’s global turnover (potentially billions of dollars), plus mandatory corrective orders.

What are analysts saying about Sora’s market potential despite the controversy?

Morgan Stanley projects $500 million‑$1 billion in revenue for 2025, driven by enterprise licences and continued Plus subscriptions. However, analysts warn that regulatory penalties or a forced shutdown could sharply cut that upside.

What steps is OpenAI planning to take to fix Sora?

OpenAI pledged a 30‑day rollout of upgraded prompt‑analysis algorithms, tighter post‑generation review, and a more transparent user‑reporting system. No detailed technical roadmap has been released, and external auditors have not yet been invited.

Lerato Mamaila, 7 Oct

It's quite disheartening to see a tool as promising as Sora stumble right out of the gate, especially when we, as a global community, have so much hope for responsible AI innovation.
