The GenAI Liability Problem
Contract law is all about predictability. So how do you write a contract for something that’s inherently unpredictable?
Written by Evan Schuman | 5 min • April 17, 2025
The essence of contract law is simple: Both parties deliver what they’re supposed to — and the contract spells out the consequences for failing to do so.
In other words, contracts operate under an assumption of predictability. Everyone knows what they’re getting and what they’re responsible for providing.
But attorneys fighting on behalf of enterprise clients who are spending billions of dollars on generative AI deployments are finding that the very nature of GenAI runs contrary to that assumption.
This puts IT executives and their attorneys in a no-win predicament. With this much money at stake, both need contracts to protect them. But the various attributes of GenAI models — on top of the different ways companies plan to use those models — make such contracts all but unenforceable.
What, for example, can a GenAI model maker truly guarantee? GenAI models often deviate from the programmer’s intent, hallucinating without warning, disregarding human instructions and offering five different answers to an identical question asked five times. They improvise.
How can you warranty a program that seems to have a mind of its own?
Further complicating the matter is what’s referred to as an “intersection” issue. Enterprises will almost never use a GenAI model in the form it’s delivered. The model maker will program and train it, but the enterprise licensee will fine-tune the model for its own needs. The queries submitted to the model — from employees, contractors, partners or customers — also affect its behavior, and therefore its reliability.
Let’s say a GenAI model outputs a mistake that leads to a loss of life or the loss of significant revenue. It’s difficult — if not impossible — to precisely pinpoint whether the error was caused by the original AI training and coding (and is therefore the model maker’s fault), the fine-tuning (the enterprise licensee’s fault) or the query phrasing (the employees’ or customers’ fault).
"The nature of GenAI makes standard contract legal approaches irrelevant. "
Altogether, these issues make writing a contract to protect a GenAI investment uncharted territory.
Mark Rasch, a former federal prosecutor who now specializes in data compliance and cybersecurity, says that the nature of GenAI makes standard contract legal approaches irrelevant.
“The problem with AI is we don’t know exactly what it does. More importantly, we don’t know how it does it. And therefore, we can’t warrant anything about it,” Rasch said. “Generative AI developers are never going to agree to a contract where they have to warrant and represent that the thing will work. They are just not going to do that. It’s the nature of generative AI.”
That problem is compounded by sales reps for the model makers who make bold and unsubstantiated claims about GenAI’s capabilities.
“I want them to warrant and represent that they’ll assume liability for any damages to us or to any third party if it doesn’t [do what it’s supposed to],” Rasch added. “We want the warranty to represent that the product will do what they claim. That’s when they back off and say, ‘Well, I never claimed it would do that. I simply said it would help do that.’”
As a non-GenAI example, Rasch pointed to the traditional Google search service.
“Google is simply directing you to content related to your query. They are not saying that it is all the content or the right content or even the best content,” Rasch said. “They are merely directing you to some content using their algorithm.”
Also struggling with GenAI contracts is Liz Harding, a technology attorney with the law firm Greenberg Traurig in Denver.
Harding said model makers’ lawyers are arguing that they can’t be held responsible when models hallucinate or rely on false information. They’re essentially saying, “You can’t rely on these outputs as your source of truth,” she said.
"GenAI models are fundamentally different from any other software enterprises are used to licensing. "
Enterprises “have the obligation to run the risk analysis and make their own determination as to whether they are willing to take the risk,” Harding said. “They know the output is not necessarily 100 percent accurate, so we’re seeing a lot of disclaimers going into those contracts.”
Another significant challenge is that GenAI models are fundamentally different from any other software enterprises are used to licensing. Software has historically been purpose-specific, such as using Excel for spreadsheet calculations or Oracle for database work.
But when an enterprise licenses a GenAI model, the negotiating executive doesn’t know exactly what it will be used for. The model will be shared with all departments, and each business unit will come up with its own use cases. The beauty of GenAI is its versatility.
For lawyers, though, that versatility can quickly become a curse. Since they can’t be specific about how the model will be used (or how the AI was built and trained), there are severe limits on the liability protections they can reasonably demand in a contract to mitigate risk and ensure safety.
Rasch stressed that there’s a massive difference between the model doing precisely what it’s told and the model doing what the enterprise intends for it to do.
As an example, Rasch posed the following hypothetical: What if a retailer used GenAI to send automated, personalized marketing messages based on demographics?
Without knowing what data the model was trained on, that retailer could be exposing itself to a potential PR and legal disaster. What if the GenAI model sent messages to customers of one demographic group and used an offensive name for that group?
It might sound outlandish, but if the language model was trained on lyrics from popular songs that used that offensive term repeatedly, Rasch said, it’s entirely possible.
“That would cause huge harm and damage,” Rasch said. That retailer is “going to come back [to the model maker] and say, ‘Hey! This thing was supposed to send personalized marketing emails to customers and it screwed up.’”
The model maker would likely reply, “‘But those were marketing emails and they were personalized,’” Rasch said. “What you now have to do in these contracts is deliver a clearer definition of the assignment. What is it intended to do and what are the roles and responsibilities?”
Rasch added that enterprises — and their corporate counsel — are not used to software that changes by the hour.
“What are the liabilities of each of the parties when the product that is being sold is different on the day that it’s delivered, different on the date it is used, and different every day thereafter?” Rasch asked. “And how it is used will also be different every day.”
Considering all of the potential risks GenAI contracts present, how can enterprises best protect themselves? The answer is tricky.
GenAI specialists offer a few general guidelines, but each comes with its own caveat.
The reality is there’s a price to pay for GenAI’s seemingly infinite capabilities: uncertainty. Tied to those capabilities are a myriad of potential liabilities, from which lawyers — at least for the moment — can’t protect you.
That’s a conundrum even GenAI can’t solve.