This week, OpenAI released its latest AI video generation model, Sora 2, touting it as a “big leap forward” for the space. As Sora hits the public, it will have to compete in a crowded market, including against a major rival that is rapidly gaining steam: the Chinese company ByteDance, which owns TikTok.
In the past few months, ByteDance released Seedance, an AI video generator that many users are already calling the best in the world, and a new version of Seedream, an elite image model. Its LLM, Doubao, has 150 million monthly active users, according to the analytics website Aicpb.com.
ByteDance’s AI advancements are a prime example of how Chinese AI companies are quickly catching up to American ones, despite chip export controls. Because their models are high quality and also cheaper, they are winning over consumers around the world, including in America. But while these models are enthralling many users, they also carry the concerns that plague many cutting-edge models: they allow anyone to cheaply create deepfakes that are indistinguishable from reality, and to freely reproduce copyrighted material.
Reaching the Frontier
Over the past year, ByteDance has assembled top AI talent, hiring a former vice president of Google DeepMind to lead foundational AI research and luring engineers and researchers away from Alibaba and other start-ups, the Financial Times reported in December. It has also invested billions of dollars in infrastructure, including advanced Nvidia chips.
ByteDance released the first iteration of its video model Seedance in June, and a new image generator, Seedream 4.0, in September. The models can be accessed in the U.S. through third-party platforms.
Jobin Jonny, a designer based in Kerala, India, first discovered Seedream in late August, and was particularly impressed with how it imagined the face of someone from his region. “The generated face carried the exact features and details of a real Kerala man,” he says.

Jonny says that Seedance is now his favorite video model, especially for how it captures physics and natural movement. It doesn’t hurt that Seedance is much cheaper: On the third-party platform Freepik, it costs half as many credits as Google’s Veo 3. On social media, several AI influencers have encouraged their followers to switch to ByteDance’s products because of their lower prices.
Tiezhen Wang, an engineer at the machine learning platform Hugging Face, tested the tools. He says he designed an “amazing” poster with Seedream, and that Seedance “shines on image-to-video tasks, preserving style and character consistency. . . . ByteDance has clearly moved into the frontier of AI across multimodal generation.”
Eric Lu, the co-founder of the online video-editing platform Kapwing, has offered AI image generation to his customers for several years, starting with Stable Diffusion. When Seedance and Seedream came out, his team ran internal tests comparing their prompt adherence, image quality, speed, and cost to those of American competitors. “And it wasn’t close—the models are better in every way,” he says.
Lu quickly switched Kapwing’s default AI models from American offerings to Seedance and Seedream. “It was almost a no-brainer, because we save money, but also give our users a better quality output,” he says.
“Unrestricted” AI
But this rise in quality carries broader implications. First, it shows that Chinese companies have successfully worked around the U.S. chip export controls designed to slow them down. The Information reported in December that ByteDance has been accessing advanced Nvidia chips by renting them outside of China, and the company has been rapidly expanding its data center usage in Malaysia.
The low price of ByteDance’s tools also enables many new users to turn to AI to create realistic images. The affordability and accessibility of these tools could upend workflows in advertising, marketing, and the stock footage industry. “Why buy clips when you can generate any shot you need instantly?” one X user wrote in a thread about Seedance.
As these hyper-realistic AI tools spread, threats of deepfakes and misinformation loom large. In June, TIME found that Google’s video model Veo 3 generated realistic clips that contained misleading or inflammatory information about news events. After TIME contacted Google about these videos, the company said it would begin adding a visible watermark to videos generated with the tool.
Read More: Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud
When TIME tested many of the same prompts used with Google’s model on Seedance through CapCut’s Dreamina tool, the model rejected many of them on the grounds that they violated community guidelines. Still, like its competitors, it produced decently realistic footage that could conceivably be shared as misinformation on social media, like this video created via Seedance of U.S. soldiers delivering aid to Palestinian refugees. A representative for ByteDance did not respond to a request for comment.
The realism of ByteDance’s models also raises questions around copyright and likeness. Chinese scholars have contended that China has taken a regulatory approach of “moderate leniency” toward training models on copyrighted material. This shows up in the models’ outputs: One X user, for example, posted a Seedream image showing Heath Ledger’s Joker, Margot Robbie’s Harley Quinn, and Michelle Pfeiffer’s Catwoman together at a dive bar. Another created an image featuring Spider-Man, Batman, and Superman.
Lu, at Kapwing, says that Seedance and Seedream appear especially willing to recreate copyrighted characters, whether it be Mickey Mouse or the Minions. “I think that in the States, there’s a lot more scrutiny on some of these big labs in terms of where they’re sourcing the content that they’re training on,” he says. “I think in China, there is an unrestricted ability of researchers to get the data that they need and train on that.”
Selina Xu, China and AI Policy Lead in the Office of Eric Schmidt, says that it is “expected” that ByteDance and other Chinese companies train their models on user-generated video data from their social media platforms. She adds that video generation models are a “growing revenue stream for AI companies.”
TIME was able to create an image of a “young Brad Pitt and Leonardo DiCaprio shaking hands” through Seedream on Kapwing. Some members of Congress, including Senator Marsha Blackburn, are attempting to pass legislation that would protect the voice and visual likenesses of individuals and creators from digital replicas created without their consent. But such legislation remains far from passing.
Meanwhile, American companies are beginning to pay attention to these Chinese AI giants, forcing the Chinese firms to grapple more publicly with copyright protections. In September, Disney, Warner Bros. Discovery, and NBCUniversal sued the Chinese company MiniMax for “willful and brazen” copyright infringement.

“Heat to the Fire”
American labs have argued that because their Chinese counterparts take a lax attitude toward copyright, American companies should also be able to train on copyrighted material, and that the new images they create are transformative and protected under fair use. Earlier this year, OpenAI announced it would relax its rules around content moderation, leading to a wave of Studio Ghibli memes flooding the internet.
Read More: How Those Studio Ghibli Memes Are a Sign of OpenAI’s Trump-Era Shift
“I’m not convinced this is being driven by Chinese companies. OpenAI opened the floodgates, to some extent, back in March,” says Maribeth Rauh, an AI ethics researcher at the AI Accountability Lab at Trinity College Dublin. She says that the ability of ByteDance’s models to create likenesses of copyrighted characters and real people “unfortunately adds heat to the fire of scrambling to get ahead at any cost, and regardless of any kind of law or ethical implications.”
Rauh has many concerns about the spread of deepfake tools, including that they could lead to increased harassment and misinformation, and threaten users’ data privacy. “People are having very revealing interactions with these models: the kind of images that they’re interested in generating, how they tweak them, or if they’re putting in images of likeness of real people,” she says. “That’s all data that would be at risk.”
Katharine Trendacosta, director of policy and advocacy at the Electronic Frontier Foundation, argues that education is key to mitigating deepfake risks. “We’ve reached this weird point where simultaneously anything can be generated, but no one believes anything anymore,” she says. “But we never solve the underlying problem. We just keep targeting the new technology, and not media literacy or teaching analytical skills or how to evaluate sources.”