Researchers have developed a new artificial intelligence tool designed to analyze legal documents and flag clauses that are risky, unfair, or legally unenforceable. The system, created by a team at New York University, aims to help tenants, employees, and others without legal training understand the potential pitfalls hidden within complex contracts before they sign. These documents often contain ambiguous or unreasonable terms that leave individuals vulnerable to unjust expenses or constraints, tilting the balance of power heavily in favor of landlords and employers who draft them.
The tool, named ContractNerd, uses large language models to scrutinize leases and employment agreements for problematic language. It automatically sorts contract terms into four main groups: legally sound clauses, unenforceable clauses, legal but risky clauses, and clauses that are missing altogether. By highlighting these terms, the creators intend to give both the drafters of contracts and the signing parties a way to spot potential legal conflicts and create fairer agreements. The system is seen as a way to democratize access to legal insights, helping people identify and question clauses that might otherwise lead to future disputes.
How the AI System Works
ContractNerd operates by applying artificial intelligence to dissect the language of legal agreements. The underlying technology relies on large language models, or LLMs, which are sophisticated AI systems trained on vast amounts of text to understand context, nuance, and legal terminology. The system’s primary function is to analyze contractual text and classify different clauses based on their potential legal standing and the level of risk they introduce for the signing party. The process is designed to be comprehensive, going beyond a simple check for illegal terms.
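To make the classification step concrete, the sketch below shows one way such an LLM query might be framed. This is an assumption for illustration only: ContractNerd's actual prompts and pipeline are not publicly described, and detecting missing clauses would require a separate, document-level pass rather than a per-clause prompt.

```python
# Illustrative only: a prompt template one might use to have an LLM classify
# a single clause. ContractNerd's real prompts are not public.
CLASSIFY_PROMPT = """\
You are reviewing a {contract_type} governed by the laws of {jurisdiction}.
Classify the following clause as one of: legally sound, unenforceable, or
legal but risky. If risky, rate it high, medium, or low risk, and explain briefly.

Clause: {clause}
"""

def build_prompt(contract_type: str, jurisdiction: str, clause: str) -> str:
    """Fill in the template; the result would be sent to an LLM of choice."""
    return CLASSIFY_PROMPT.format(contract_type=contract_type,
                                  jurisdiction=jurisdiction,
                                  clause=clause)

print(build_prompt("residential lease", "New York City",
                   "Tenant waives the right to a jury trial in any dispute."))
```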
The tool sorts every clause into one of four distinct classifications. The first category is “unenforceable clauses,” which are terms that would likely not hold up in court because they violate specific laws or public policies. An example would be an overly broad non-compete agreement in a state that prohibits such restrictions. The second is “legally sound clauses,” which are considered standard and enforceable. A third category, “missing clauses,” identifies common protections or terms that should be in a contract but are absent. Finally, the system flags “legal but risky clauses,” which are terms that, while not illegal, could put a tenant or employee at a disadvantage. These risky clauses are further stratified into three tiers, “high risk,” “medium risk,” and “low risk,” giving users a clearer picture of the potential danger.
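As a further illustration, this taxonomy could be represented as a small data model. The Python sketch below is an assumption for exposition, not ContractNerd's internal representation; every name in it is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class ClauseCategory(Enum):
    """The four classifications described above (hypothetical names)."""
    LEGALLY_SOUND = auto()    # standard and enforceable
    UNENFORCEABLE = auto()    # likely void under law or public policy
    LEGAL_BUT_RISKY = auto()  # enforceable, but disadvantages the signer
    MISSING = auto()          # an expected protection that is absent

class RiskTier(Enum):
    """Sub-classification applied only to legal-but-risky clauses."""
    HIGH = auto()
    MEDIUM = auto()
    LOW = auto()

@dataclass
class ClauseAssessment:
    text: str                        # the clause as written (empty if missing)
    category: ClauseCategory
    risk: Optional[RiskTier] = None  # set only for LEGAL_BUT_RISKY clauses
    rationale: str = ""              # plain-language explanation for the user
```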
Identifying Problematic Language
The AI was specifically engineered to catch the kind of ambiguous and biased language that can lead to disputes. For instance, many leases include phrasing that requires a tenant to provide “written notice of intent to vacate at a reasonable time.” While this sounds straightforward, the word “reasonable” is not defined, creating ambiguity that a landlord could potentially exploit. ContractNerd is designed to identify such undefined terms that could be interpreted unfairly. This lack of specificity can lead to unexpected costs, legal challenges, or even the threat of eviction for tenants who misinterpret their obligations.
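ContractNerd itself relies on LLMs for this analysis, but a much simpler heuristic conveys the idea of catching undefined terms. In the sketch below, the word list and the detection rule are both assumptions made for illustration: it flags vague words that appear in a clause but are never defined in the contract.

```python
import re

# A few terms that commonly create ambiguity in leases; this word list is
# an illustrative assumption, not ContractNerd's.
VAGUE_TERMS = ["reasonable", "promptly", "timely", "satisfactory", "adequate"]

def flag_undefined_terms(clause: str, definitions: set[str]) -> list[str]:
    """Return vague terms used in a clause but never defined in the contract."""
    found = []
    for term in VAGUE_TERMS:
        if re.search(rf"\b{term}\b", clause, re.IGNORECASE) and term not in definitions:
            found.append(term)
    return found

clause = "Tenant shall provide written notice of intent to vacate at a reasonable time."
print(flag_undefined_terms(clause, definitions=set()))
# -> ['reasonable']  # flagged because the lease never defines the term
```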
In the context of employment agreements, the tool is adept at spotting overreaching restrictions. A common example found in many contracts is a clause stating that an “employee agrees not to work for any business in the United States for two years following termination.” Such a broad non-compete clause is unenforceable in many states because it places an undue burden on the former employee, potentially preventing them from earning a livelihood in their field. By flagging these clauses, the tool can alert a potential signatory that a specific term is likely void, giving them the opportunity to negotiate its removal or modification before accepting a job offer.
Development and Data Sources
The project was led by Dennis Shasha, a professor of computer science at New York University’s Courant Institute of Mathematical Sciences. The development team also included NYU graduate students Musonda Sinkala and Yuge Duan, along with undergraduate Haowen Yuan. Their goal was to create a system that could help level the playing field between those who write contracts and those who must sign them, often without the resources to hire a legal expert for a review. Shasha noted that while most people have to sign contracts at some point, very few possess the legal training needed to understand them fully.
Training and Knowledge Base
To ensure its analysis is relevant and accurate, the AI was trained on data from several authoritative legal sources. These include Thomson Reuters Westlaw, a comprehensive legal research database, and Justia, a website that archives standard rental agreements and other legal documents. The system also incorporates information from Agile Legal, which provides a collection of templates for common clauses. Crucially, the tool is designed to factor in local and state laws, which is essential because the enforceability of certain contract terms can vary significantly from one jurisdiction to another. The initial prototype of ContractNerd focuses specifically on leases and employment agreements in New York City and Chicago.
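Because enforceability varies by jurisdiction, a tool of this kind needs per-jurisdiction rules. The sketch below illustrates the general idea with a hard-coded lookup table; the structure and the rule values are placeholders for illustration, not statements of actual law, and a real system would derive them from statutes and case law rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JurisdictionRules:
    """Placeholder per-jurisdiction parameters; values are illustrative only."""
    noncompete_enforceable: bool
    max_security_deposit_months: int

# Hypothetical rule table keyed by jurisdiction.
RULES = {
    "new_york_city": JurisdictionRules(noncompete_enforceable=True,
                                       max_security_deposit_months=1),
    "chicago": JurisdictionRules(noncompete_enforceable=True,
                                 max_security_deposit_months=1),
}

def check_deposit(jurisdiction: str, deposit_months: float) -> str:
    """Classify a security-deposit clause against the local cap."""
    rules = RULES[jurisdiction]
    if deposit_months > rules.max_security_deposit_months:
        return "unenforceable: deposit exceeds the local statutory cap"
    return "legally sound"

print(check_deposit("new_york_city", deposit_months=2))
```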
Performance and Evaluation
To validate the effectiveness of their AI, the NYU researchers conducted a series of tests comparing ContractNerd’s performance against other existing contract-analysis tools. In the first major evaluation, the systems were tasked with predicting which clauses from real-world legal cases would be deemed unenforceable by the courts. The results showed that ContractNerd achieved the highest accuracy scores among all the AI systems tested, correctly identifying more unenforceable clauses than its competitors.
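The exact scoring protocol is not spelled out here, but the headline metric, agreement with actual court rulings, amounts to simple classification accuracy. A minimal sketch, assuming boolean labels where True means a clause was held unenforceable:

```python
def enforceability_accuracy(predictions: list[bool],
                            court_outcomes: list[bool]) -> float:
    """Fraction of clauses where the tool's call matched the court's ruling.

    A generic accuracy metric for illustration; the researchers' exact
    scoring protocol is not described here.
    """
    assert len(predictions) == len(court_outcomes)
    correct = sum(p == o for p, o in zip(predictions, court_outcomes))
    return correct / len(predictions)

# True = clause held unenforceable (invented example data)
preds    = [True, True, False, True, False]
outcomes = [True, False, False, True, False]
print(f"accuracy = {enforceability_accuracy(preds, outcomes):.0%}")  # 80%
```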
User-Based Assessments
In a second comparison, the researchers brought in an independent panel of laypersons—individuals without legal expertise—to evaluate the output from ContractNerd and its strongest competitor from the first test, a tool named goHeather. To prevent bias, the evaluators were not told which tool produced which analysis. The panel was asked to rate the outputs based on three key criteria: relevance, which measured how directly the analysis addressed the clause’s intent; accuracy, which assessed the factual correctness of the legal points; and completeness, which judged whether all key aspects of the clause were covered. Across the board, the layperson panel consistently rated ContractNerd’s analysis as superior.
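Mechanically, a blinded comparison of this kind reduces to averaging per-criterion ratings with the tool identities masked. The sketch below uses invented ratings purely to show those mechanics; it does not reproduce the study's data.

```python
from statistics import mean

# Hypothetical blinded ratings (1-5) from laypersons on the three criteria;
# tools are anonymized as "A" and "B" so raters cannot tell which is which.
ratings = {
    "A": {"relevance": [4, 5, 4], "accuracy": [5, 4, 4], "completeness": [4, 4, 5]},
    "B": {"relevance": [3, 3, 4], "accuracy": [3, 4, 3], "completeness": [3, 3, 3]},
}

for tool, scores in ratings.items():
    summary = ", ".join(f"{crit}={mean(vals):.2f}" for crit, vals in scores.items())
    print(f"tool {tool}: {summary}")
```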
Future Implications and Goals
The creators of ContractNerd see the tool as more than just a legal-tech application; they envision it as an aid that promotes fairness in contractual relationships. Professor Shasha emphasized that contracts should not only be about legality but also about equity between the two parties involved. “We see ContractNerd as an aid that can help guide users in determining if a contract is both legal and fair, potentially heading off both risky agreements and future legal disputes,” he stated. The tool is intended to empower users by providing them with the necessary information to advocate for themselves before entering into a binding agreement.
While some might view such a tool as a potential replacement for human lawyers, the developers position it as a supplementary aid designed to make legal understanding more accessible. The research team plans to continue improving the system and has expressed a desire to expand its geographic reach beyond its current focus on New York and Chicago. The ultimate objective is to foster a more equitable legal landscape where individuals are not disadvantaged by complex language and hidden risks in everyday contracts. The project highlights a growing trend in the fusion of artificial intelligence and law, with technology offering the potential to bridge the gap between complex legal standards and the general public.