- OpenAI and the White House have accused DeepSeek of using ChatGPT to cheaply train its new chatbot.
- Experts in tech law say OpenAI has little recourse under copyright and contract law.
- OpenAI's terms of use may apply but are largely unenforceable, they say.
Today, OpenAI and the White House accused DeepSeek of something akin to theft.
In a flurry of press statements, they said the Chinese upstart had bombarded OpenAI's chatbots with queries and hoovered up the resulting data trove to quickly and cheaply train a model that's now almost as good.
The Trump administration's top AI czar said this training process, known as "distilling," amounted to copyright theft. OpenAI, meanwhile, told Business Insider and other outlets that it's investigating whether "DeepSeek may have inappropriately distilled our models."
OpenAI is not saying whether the company plans to pursue legal action, instead promising what a spokesperson called "aggressive, proactive countermeasures to protect our technology."
But could it? Could it sue DeepSeek on "you stole our content" grounds, much like the grounds OpenAI was itself sued on in an ongoing copyright case filed in 2023 by The New York Times and other news outlets?
BI posed this question to experts in technology law, who said challenging DeepSeek in the courts would be an uphill battle for OpenAI now that the content-appropriation shoe is on the other foot.
OpenAI would have a hard time proving an intellectual property or copyright claim, these lawyers said.
"The concern is whether ChatGPT outputs" - meaning the answers it produces in action to queries - "are copyrightable at all," Mason Kortz of Harvard Law School stated.
That's due to the fact that it's uncertain whether the responses ChatGPT spits out certify as "imagination," he said.
"There's a doctrine that says innovative expression is copyrightable, but truths and concepts are not," Kortz, who teaches at Harvard's Cyberlaw Clinic, said.
"There's a big question in copyright law right now about whether the outputs of a generative AI can ever make up innovative expression or if they are always vulnerable realities," he included.
Could OpenAI roll those dice anyway and claim that its outputs are protected?
That's unlikely, the attorneys said.
OpenAI is already on the record in The New York Times' copyright case arguing that training AI is a permissible "fair use" exception to copyright protection.
If they do a 180 and tell DeepSeek that training is not a fair use, "that might come back to kind of bite them," Kortz said. "DeepSeek could say, 'Hey, weren't you just saying that training is fair use?'"
There may be a distinction between the Times and DeepSeek cases, Kortz added.
"Maybe it's more transformative to turn news articles into a model" - as the Times accuses OpenAI of doing - "than it is to turn outputs of a model into another model," as DeepSeek is said to have done, Kortz said.
"But this still puts OpenAI in a pretty predicament with regard to the line it's been toeing concerning fair use," he included.
A breach-of-contract claim is more likely
A breach-of-contract claim is much likelier than an IP-based claim, though it comes with its own set of problems, said Anupam Chander, who teaches technology law at Georgetown University.
The terms of service for Big Tech chatbots like those developed by OpenAI and Anthropic forbid using their content as training fodder for a competing AI model.
"So perhaps that's the lawsuit you might potentially bring - a contract-based claim, not an IP-based claim," Chander stated.
"Not, 'You copied something from me,' however that you benefited from my design to do something that you were not permitted to do under our agreement."
There may be a hitch, Chander and Kortz said. OpenAI's regards to service need that a lot of claims be fixed through arbitration, not lawsuits. There's an exception for lawsuits "to stop unauthorized usage or abuse of the Services or copyright infringement or misappropriation."
There's a larger hitch, though, experts said.
"You need to know that the dazzling scholar Mark Lemley and a coauthor argue that AI regards to usage are likely unenforceable," Chander stated. He was describing a January 10 paper, "The Mirage of Expert System Terms of Use Restrictions," by Stanford Law's Mark A. Lemley and Peter Henderson of Princeton University's Center for Infotech Policy.
To date, "no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief," the paper says.
"This is likely for good reason: we believe that the legal enforceability of these licenses is doubtful," it adds. That's in part because model outputs "are largely not copyrightable" and because laws like the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act "offer limited recourse," it says.
"I believe they are likely unenforceable," Lemley told BI of OpenAI's regards to service, "since DeepSeek didn't take anything copyrighted by OpenAI and since courts generally won't enforce arrangements not to contend in the absence of an IP right that would prevent that competitors."
Lawsuits between parties in different countries, each with its own legal and enforcement systems, are always challenging, Kortz said.
Even if OpenAI cleared all the above hurdles and won a judgment from a US court or arbitrator, "in order to get DeepSeek to turn over money or stop doing what it's doing, the enforcement would come down to the Chinese legal system," he said.
Here, OpenAI would be at the mercy of another extremely complex area of law - the enforcement of foreign judgments and the balancing of individual and corporate rights and national sovereignty - that stretches back to before the founding of the US.
"So this is a long, complicated, fraught process," Kortz added.
Could OpenAI have protected itself better from a distilling attack?
"They could have utilized technical steps to block repetitive access to their website," Lemley stated. "But doing so would also interfere with normal clients."
He added: "I don't think they could, or should, have a valid legal claim against the scraping of uncopyrightable information from a public website."
Representatives for DeepSeek did not immediately respond to a request for comment.
"We understand that groups in the PRC are actively working to use techniques, including what's known as distillation, to attempt to reproduce sophisticated U.S. AI designs," Rhianna Donaldson, an OpenAI spokesperson, informed BI in an emailed declaration.