OpenAI’s Sora app from ChatGPT maker tests limits of copyright

By Advanced AI Editor | October 2, 2025

Sam Altman singing in a toilet. James Bond playing Altman in high-stakes poker. Pikachu storming Normandy’s beaches. Mario jumping from his virtual world into real life.

Those are just some of the lifelike videos rocketing through the internet a day after OpenAI released Sora, an app at the intersection of social media and artificial intelligence-powered media generation. Within that first day, it became the most popular app in the iOS App Store’s Photo & Video category.

Powered by OpenAI’s upgraded Sora 2 media generation AI model, the app allows users to create high-definition videos from simple text prompts. After it processes one-time video and audio recordings of users’ likenesses, Sora allows users to embed lifelike “cameos” of themselves, their friends and others who give their permission.

The app is a recipe for virality. But many of the videos published within the first day of Sora’s debut have also set off alarm bells among copyright and deepfake experts.

Users have so far reported being able to feature video game characters like Lara Croft or Nintendo heavyweights like Mario, Luigi and even Princess Peach in their AI creations.

One user inserted Ronald McDonald into a saucy scene from the romantic reality TV show “Love Island.”

The Wall Street Journal reported Monday that the app would enable users to feature material protected by copyright unless the copyright holders opted out of having their work appear.

However, the report said, blanket opt-outs did not appear to be an option, instead requiring copyright holders to submit examples of offending content.

Sora 2 builds on OpenAI’s original Sora model, which was released to the public in December. Unlike the original Sora, Sora 2 now enables users to create videos with matching dialogue and sound effects.

AI models ingest large swaths of information in the “training” process as they learn how to respond to users’ queries. That data forms the basis for models’ responses to future user requests. For example, Google’s Veo 3 video generation model was trained on YouTube videos, much to the dismay of some YouTube creators.

OpenAI has not said exactly which data its models draw from, but the appearance of copyrighted characters indicates that copyright-protected material went into building the Sora 2 system. China’s ByteDance and its Seedance video generation model have also attracted recent copyright scrutiny.

OpenAI faces legal action over copyright infringement claims, including a high-profile lawsuit featuring authors including Ta-Nehisi Coates and Jodi Picoult and newspapers like The New York Times. OpenAI competitor Anthropic recently agreed to pay $1.5 billion to settle claims from authors who alleged that Anthropic illegally downloaded and used their books to train its AI models.

In an interview, Mark McKenna, a law professor and the faculty director of the UCLA Institute for Technology, Law, and Policy, drew a stark line between using copyrighted data as an input to train models and generating outputs that depict copyright-protected information.

“If OpenAI is taking an aggressive approach that says they’re going to allow outputs of your copyright-protected material unless you opt out, that strikes me as not likely to work. That’s not how copyright law works. You don’t have to opt out of somebody else’s rules,” McKenna said.

“The early indications show that training AI models on legitimately acquired copyright material can be considered fair use. There’s a very different question about the outputs of these systems,” he continued. “Outputting visual material is a harder copyright question than just the training of models.”

As McKenna sees it, that approach is a calculated risk. “The opt-out is clearly a ‘move fast and break things’ mindset,” he said. “And the aggressive response by some of the studios is ‘No, we’re not going to go along with that.’”

Disney, Warner Bros. and Sony Music Entertainment did not reply to requests for comment.

In addition to copyright issues, some observers were unsettled by one of the most popular first-day creations, which depicted OpenAI CEO Sam Altman stealing valuable computer components from Target, illustrating how easily Sora 2 can show real people committing crimes they never committed.

Sora 2’s high-quality outputs arrive amid concerns about illicit or harmful creations, from gory scenes and child-safety risks to the model’s role in spreading deepfakes.

OpenAI includes techniques to indicate Sora 2’s creations are AI-generated as concerns grow about the ever-blurrier line between reality and computer-generated content.

All videos created in the Sora app or downloaded from sora.com will carry moving watermarks, while invisible metadata will flag Sora-generated videos as AI-created.

However, the metadata can be easily removed. OpenAI’s own documentation says the metadata approach “is not a silver bullet to address issues of provenance. It can easily be removed either accidentally or intentionally,” like when users upload images to social media websites.
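
How easily that metadata disappears is simple to demonstrate. The sketch below is a hypothetical illustration in Python using Pillow: plain EXIF stands in for a provenance record (it is not OpenAI’s actual provenance scheme), and an ordinary re-encode of the kind upload pipelines routinely perform drops the tag.

    # Minimal sketch, for illustration only: plain EXIF stands in for a
    # provenance record; this is not OpenAI's actual provenance format.
    import io
    from PIL import Image

    # Create a small image and attach a metadata tag marking it as AI-generated.
    original = Image.new("RGB", (64, 64), "red")
    exif = original.getexif()
    exif[0x010E] = "generated-by: example-ai-model"  # ImageDescription tag

    tagged = io.BytesIO()
    original.save(tagged, format="JPEG", exif=exif)

    # Re-encode the file the way a typical upload pipeline might,
    # without copying the metadata across.
    reencoded = io.BytesIO()
    Image.open(io.BytesIO(tagged.getvalue())).save(reencoded, format="JPEG")

    stripped = Image.open(io.BytesIO(reencoded.getvalue()))
    print("metadata after re-encode:", dict(stripped.getexif()) or "none")

The same failure mode applies to richer provenance formats: once a platform strips or re-encodes the file container, whatever was embedded in it is gone, which is why the additional layers described below matter.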

Siwei Lyu, a professor of computer science and director of the University at Buffalo’s Media Forensic Lab and Center for Information Integrity, agreed that multiple layers of authentication are key to proving that content originated from Sora.

“OpenAI claimed they have other responsible use measures, such as the inclusion of visible and invisible watermarks, and tracing tools for Sora-made images and audio. These complement the metadata and provide an additional layer of protection,” Lyu said.

“However, their effectiveness requires additional testing. The invisible watermark and tracing tools can only be tested internally, so it is hard to judge how well they work at this point,” he added.

OpenAI addressed those limitations in its technical safety report, writing that “we will continue to improve the provenance ecosystem to help bring more transparency to content created from our tools.” OpenAI did not immediately reply to a request for comment.

Though the Sora app is available for download, the service itself remains invitation-only as OpenAI gradually expands access.


