AI Law Professor Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/ai-law-professor/

The AI Law Professor: When AI quietly hijacks legal judgment /en-us/posts/technology/ai-law-professor-first-draft-trap/ Wed, 08 Apr 2026 07:56:33 +0000

Key takeaways:

      • Anchoring distorts judgment before you begin - Research shows a first draft shapes subsequent decisions, and an AI draft is the most seductive anchor imaginable because it looks exactly like something a lawyer would write.

      • The First Draft Trap inverts legal training - The Socratic method builds the habit of holding multiple possibilities in tension before committing, but an AI first draft collapses that space before the real thinking begins.

      • The fix is to ask for the map, not the draft - Requesting multiple strategic framings before writing keeps judgment where it belongs and uses AI to expand possibilities rather than foreclose them.


Welcome back to The AI Law Professor. Last month, I examined why promised efficiency gains often become a cycle of work intensification. This month, I want to address a subtler challenge. I call it the First Draft Trap, and understanding it may change how you reach for AI the next time a new matter lands on your desk.

We have all heard the pitch: Staring at a blank page? Just prompt the AI. In seconds you have a working draft: structured, coherent, and surprisingly competent. The blank page problem, that ancient enemy of productivity, has thus been vanquished.

Except the blank page itself was never just an obstacle; rather, it was a space of possibility. For lawyers, it was the space in which the most important part of their work actually happens. Now, with AI in the mix, that may be changing.

Welcome to the First Draft Trap.

Simply put, the First Draft Trap is this: The moment you accept an AI-generated draft as your starting point, you have already made the most consequential decision of the entire project, and most importantly, you made it by not making it. You let the machine choose your direction, your framing, and your theory. Everything that follows is editing; and editing, no matter how rigorous, is not the same as thinking.

The cognitive hijack

There is solid psychology behind why this happens. Daniel Kahneman and Amos Tversky demonstrated in their landmark 1974 paper, Judgment Under Uncertainty: Heuristics and Biases, that once people are exposed to an idea, that first impression becomes a mental anchor that distorts their subsequent judgments. In their experiments, subjects who watched a roulette wheel spin to a random number still let that number influence their estimates of completely unrelated quantities. The anchor held even when people knew it was meaningless.


An AI first draft is the most seductive anchor imaginable. It is not random; it is plausible, and it is well-organized. It sounds like something a lawyer would write. And that is precisely what makes it dangerous. You know intellectually that it is just one of many possible approaches to addressing the matter, but the anchor holds anyway.

That is the First Draft Trap at the cognitive level. The AI draft is not just one option you happen to prefer. It is a filter that prevents you from seeing the other options that were available to you, the roads you never even noticed that you did not take.

Consider what this means for a profession built on the opposite instinct. From the first day of law school, lawyers are trained to resist the obvious answer and to think like a lawyer. The Socratic method exists for exactly this reason. A good professor hears your confident response and asks: What else? What if the facts were different? What is the argument on the other side? The goal is not to arrive at an answer, per se. It is to build the mental habit of holding multiple possibilities in tension before committing to any one of them.

The First Draft Trap is the anti-Socratic method. It delivers a confident answer before you have even formulated the question properly, and instead of interrogating it, you polish it.

The value of the blank page

Think about what a senior partner actually does when a junior associate brings them a memo. The partner's value is not better writing; rather, it is peripheral vision: the ability to see what the memo does not address, the argument not considered, or the framing that would land differently with this particular judge or this particular jury. That capacity to see beyond the document in front of them is why clients pay senior partners premium rates. And it is precisely the muscle that atrophies when your default workflow begins with the prompt "generate a draft."


The two-system framework popularized by Daniel Kahneman gives us a clean way to describe what is going wrong. System 1 is fast, intuitive, and pattern-matching, while System 2 is slow, deliberate, and analytical. The practice of law, at its best, is a System 2 discipline. We, as lawyers, are trained to override gut reactions, challenge assumptions, and think through consequences before acting.

In this way, the AI first draft feels like a System 2 output. It is structured, footnoted, and methodical. However, your decision to accept it as a starting point is pure System 1: a fast, intuitive grab at the nearest plausible answer. You have used a sophisticated tool to bypass the sophisticated thinking the tool was supposed to support. That uncomfortable period of ambiguity, of not knowing which path is best, is where the real lawyering lives.

What to do instead

None of this means stop using AI. It means stop using AI to skip the hard part that matters.

Before you ever ask for a draft, ask for the map. Describe the matter or document you are working on, then ask the AI for three fundamentally different strategic framings for the problem. For each framing, request the strongest argument in its favor and its most serious vulnerability. Then ask which framing best fits the client's goals, the audience, or the procedural posture. Close with a clear instruction: Do not write a draft yet.

That last instruction is the key. It keeps you in the driver's seat during the phase that matters most. You are using AI to expand the possibilities before you prune them, not after. And, most importantly, it gives you the opportunity to think for yourself about other important possibilities and add them in.
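To make this concrete, here is a minimal sketch of the ask-for-the-map workflow in Python, assuming the OpenAI Python SDK; the model name, prompt wording, and function name are illustrative assumptions, not a prescribed recipe.

```python
# Sketch of the "ask for the map" workflow. Assumes the OpenAI Python SDK
# and an OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

MAP_PROMPT = """Here is the matter I am working on:

{matter}

Give me three fundamentally different strategic framings of this problem.
For each framing, state the strongest argument in its favor and its most
serious vulnerability. Then say which framing best fits the client's goals,
the audience, and the procedural posture. Do not write a draft yet."""


def ask_for_the_map(matter: str) -> str:
    """Request strategic framings, not a draft, so the choice stays with you."""
    response = client.chat.completions.create(
        model="gpt-5",  # illustrative; pin a dated version for real work
        messages=[{"role": "user", "content": MAP_PROMPT.format(matter=matter)}],
    )
    return response.choices[0].message.content
```

The point of the structure is the prompt's final line: the model maps the terrain, and the lawyer still chooses the route.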

In Kahneman's terms, use AI to fuel System 2, not to hand the controls to System 1. Let the machine generate options, and you exercise judgment.

For lawyers, the ability to see what is not there is the whole game.

Do not let the first draft blind you to it.


Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and author of a forthcoming book. He is "The AI Law Professor" and writes this eponymous column for the Thomson Reuters Institute.

The AI Law Professor: When AI makes lawyers work more, not less /en-us/posts/technology/ai-law-professor-ai-makes-lawyers-work-more-not-less/ Tue, 03 Mar 2026 14:58:48 +0000

Key points:

      • The productivity promise is largely wrong - Emerging research shows that AI doesn't reduce work; it intensifies it. Lawyers work faster, take on broader responsibilities, and extend their hours without recognizing the expansion. Further, because prompting AI feels like chatting rather than laboring, lawyers slip work into evenings and weekends without registering it as additional effort.

      • Self-reinforcing acceleration is the real risk - AI speeds tasks, which raises expectations, which increases reliance, which expands scope, ultimately creating a cycle that drives burnout in a profession already plagued by it.

      • Purposeful integration is the antidote - Legal organizations need to promote intentional governance structures that account for how people actually behave with AI, not how leadership imagines they will or should.


Welcome back to The AI Law Professor. Last month, I examined how AI is forcing us to rethink training for junior lawyers. This month, I examine a question that affects every lawyer: What happens when the efficiency gains we’ve been promised don’t materialize the way we expected? A recent study out of UC-Berkeley suggests the answer is more troubling than most law firm leaders realize.

If you've attended a legal technology conference anytime over the past two years, you've heard the pitch: Automate the mundane and elevate the meaningful.

A study in the Harvard Business Review by UC-Berkeley researchers Aruna Ranganathan and Xingqi Maggie Ye suggests we should be more skeptical. They tracked how generative AI (GenAI) changed work habits over eight months at a 200-person technology company. Their findings were striking: AI tools didn't reduce work; rather, they intensified it.

According to the study, the tech employees studied were shown to work faster, take on broader responsibilities, extend their hours into evenings and weekends, and multitask more aggressively, all without being asked to do so. The promise of liberation became a reality of acceleration and overwork.

For those of us in the legal profession, this should be a wake-up call.

Three forms of intensification

The researchers identified three patterns that will sound familiar to anyone watching lawyers adopt GenAI in their work processes.

Task expansion

Because AI fills knowledge gaps, professionals stepped into responsibilities that previously belonged to others. Product managers started writing code, and researchers took on engineering tasks. In legal contexts, the parallel is obvious. Associates use AI to attempt tasks once reserved for senior lawyers. Paralegals draft documents that previously required attorney oversight. Solo practitioners take on matters outside their core expertise because their AI tools make it feel manageable. The result isn't less work distributed more efficiently; it's more work concentrated in fewer hands, with less institutional knowledge guiding the output.

Blurred boundaries

AI blurred the boundaries between work and non-work. Because prompting an AI feels more like chatting than labor, lawyers (like the tech workers in the study) may slip work into lunch breaks, evenings, and commutes without registering it as additional effort. The conversational interface is seductive precisely because it doesn't feel like work. It is work, however, and much more of it.

Pervasive multitasking

Workers managed multiple AI threads simultaneously, generating a sense of momentum that masked increasing cognitive load. For lawyers, this means running parallel research queries, drafting multiple documents at once, and constantly monitoring AI outputs, all while believing they're saving time.

The productivity trap

The most important insight from the research is that these effects are self-reinforcing. AI accelerates tasks, which raises expectations for speed. Higher speed increases reliance on AI, and greater reliance expands the scope of what people attempt. And expanded scope generates even more work. Rinse and repeat.

Parkinson's law: "Work expands to fill the time available for its completion."

In a profession already plagued by burnout, this cycle should alarm us. The legal industry's adoption of AI is being driven largely by the promise of doing the same work in less time. But if the Berkeley research is any guide, what actually happens is that we do more work in the same amount of time, or more work in more time, while telling ourselves we're being more productive.

And because the extra effort feels voluntary, firm leadership may not see the problem until it manifests as errors, attrition, or ethical lapses. In law, the cost of impaired judgment isn't just a missed deadline; it's a client's liberty, livelihood, or life savings.

From productivity to purposeful practice

The Berkeley researchers propose what they call an "AI practice": intentional norms and routines that structure how AI is used, including when to stop and how work should and should not expand. I'd go further. For legal organizations, purposeful AI integration requires more than workplace wellness norms. It requires a strategic framework that aligns AI capabilities with organizational mission, ethical obligations, and sustainable human performance.

This means, first off, being honest about what AI actually does to workloads rather than what we hope it will do. If your firm adopted AI expecting to reduce associate hours, audit whether that has actually happened, or whether associates are simply filling reclaimed time with more work.

Second, it means building governance structures that account for how people actually behave with these tools, rather than how leadership imagines they will. The Berkeley study found that workers expanded their workloads voluntarily, without management direction. Top-down AI policies that focus solely on permissible use will miss the intensification that could be happening in plain sight.


Third, it means preserving space for the distinctly human work that AI cannot replicate, such as judgment, empathy, ethical reasoning, and the kind of creative problem-solving that emerges from genuine human dialogue, not from a conversation with a chatbot. The researchers also found that AI-enabled work became increasingly solitary and continuous, a dangerous trajectory.

The narrative that AI will free lawyers for higher-value work isn't just optimistic. It's a misunderstanding of how these tools interact with human psychology. AI doesn't create leisure. It creates capacity, and without intentional structures, that capacity gets filled, not with strategic thinking, but with more of everything.

While it's clear that AI will change the legal profession, the real challenge is whether law firms will integrate AI with purpose, shaping it to serve their values, their clients, and their professionals' well-being, or whether they'll allow the technology to quietly shape us into something we didn't intend to become.

Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and author of a forthcoming book. He is "The AI Law Professor" and writes this eponymous column for the Thomson Reuters Institute.


The AI Law Professor: When AI forces us to rethink how we train junior lawyers /en-us/posts/legal/ai-law-professor-train-junior-lawyers/ Mon, 02 Feb 2026 14:48:39 +0000

Key takeaways:

      • The training crisis is a category error - Fears about junior lawyer obsolescence assume AI will simply replace existing tasks rather than transform the nature of legal work itself.

      • New operational roles are emerging - Positions like AI Compliance Specialists and Legal Data Analysts represent transitional pathways that didn't exist five years ago.

      • The transition requires patience - Firms that thoughtfully redesign junior workflows will develop talent pipelines that outcompete those of firms still clinging to traditional models.


Welcome back to The AI Law Professor. Last month, I examined how agentic AI is transforming lawyers from reactive firefighters into proactive strategic partners. This month, I'm tackling a question that keeps law students and junior lawyers awake at night: What happens to junior lawyer development when AI handles the foundational tasks that traditionally built legal expertise?

When people say, "AI will eliminate junior training," they're making a category error and confusing the specific tasks that junior lawyers perform today with the underlying purpose of having juniors at all.

Junior lawyer work has never been a timeless set of tasks. It's a bundle of functions that firms needed done at a particular moment in the history of information. When legal knowledge lived in books, juniors found it and copied it. When knowledge moved into databases, juniors learned how to query it. When email replaced dictation and secretaries, juniors typed more and seniors reviewed more. The traditional workflow is just the current snapshot of a role that has been continuously changing over time.

The purpose of junior lawyers isn't to suffer through busy work for character-building or misplaced professional hazing. Rather, it's to (i) expand capacity, (ii) reduce risk through additional eyes, and (iii) create a talent pipeline by giving novices progressively harder judgment calls to make under supervision.

Generative AI (GenAI) doesn't remove that purpose; it forces us to rethink and redesign how we accomplish it.

The AI-accelerated apprenticeship

The most important shift isn't that juniors will do less; rather, it's that juniors will do different work earlier, work that looks operational, technical, and strategic, because that's where the bottlenecks move when drafting and research become cheaper and easier to accomplish.

Today's law firms should expect to see first- and second-year lawyers rotating through new AI-enabled roles, such as:

      • AI compliance specialist - Not a software engineer, this is a lawyer who understands what an AI model is doing well enough to manage risk. In this role, they would help set usage policies, evaluate vendor claims, document audit trails, and ensure the firm's AI use aligns with professional responsibility duties, such as confidentiality, competence, supervision, and candor.
      • Legal data analyst - This is a junior who can turn messy matter history into usable structure by tagging outcomes, mapping issues to fact patterns, building internal playbooks, and working with knowledge management to make firm experience retrievable, so that AI can draft with your institutional memory.
      • Knowledge operations curator - This person ensures the reliability of your data by updating clause libraries, flagging suspect precedent, harmonizing templates with new local rules, and maintaining the firm's internal source of truth so the AI doesn't confidently resurrect a brief from 2014 that cites a law that was nullified in 2019.
      • Vibe coder - Yes, this is a lawyer, because someone has to translate legal workflows into software prototypes and agentic processes. Juniors are often better positioned than senior lawyers to do this because they actually touch the steps in which friction lives.

These transitional operational roles serve a crucial function: they provide entry points for junior lawyers to develop expertise while the profession reorganizes around AI capabilities. They're not permanent destinations, but rather pathways toward the strategic roles that will define legal practice in the coming decade.

In this way, the junior becomes a hybrid of lawyer, analyst, builder, and quality controller. They become someone who understands both the legal reasoning and the system producing it. That is not a degradation of training; rather, it is training with the boring parts stripped out and the responsibility to engage with interesting work added in earlier.

The transition won’t be instant

Of course, none of this will happen overnight. There will be a messy period in which firms use AI inconsistently, partners trust it too much or not at all, and juniors are asked to double-check outputs without being taught how to do that systematically. Some law firms will treat AI as a time-saver while keeping the old apprenticeship model intact, until they realize they've removed the work that used to teach judgment and replaced it with… nothing.

To manage this better, law firms must redesign training programs, adjust compensation structures, and develop new metrics for evaluating junior performance. Law schools must rethink curricula built around skills that AI increasingly handles. Bar examiners must consider what competencies actually matter at a time when AI itself can pass the bar.


The long-term path is clear: AI will make legal production faster and cheaper, and that efficiency will push lawyers toward higher-value work: strategy, prevention, client-centered design, and complex advocacy. Juniors won't be trained by copying and pasting the past.

When AI can produce a first draft in minutes, someone must evaluate whether that draft actually serves the client’s objectives. When machine learning surfaces relevant precedents from thousands of cases, someone must assess which precedents matter for this particular argument before this particular judge.

Juniors will be trained by building and supervising systems that generate the first drafts of tomorrow. Indeed, the future of junior training isn't less training. It's less busy work that pretends to be training, and more deliberate apprenticeship in verification and judgment.

And for those law firms willing to redesign how juniors learn, that future looks not only efficient, but better: better for clients, for partners, and especially for the next generation of lawyers.


The AI Law Professor: When AI transforms lawyers from fire fighters to strategic partners /en-us/posts/technology/ai-law-professor-lawyer-transformation/ Thu, 18 Dec 2025 12:19:51 +0000

Key points:

      • Reactive service is obsolete - The traditional model of waiting for client problems has reached its expiration date. Proactive, AI-powered monitoring creates entirely new categories of legal value.

      • Consolidation is coming - Within five years, the legal sector will bifurcate between firms embracing agentic AI transformation and those clinging to traditional models.

      • Early adopters set expectations - Firms that deploy AI agents as embedded legal monitoring systems will establish new client expectations that laggards will not be able to meet.


Welcome back to The AI Law Professor. Last month, I explored why asking the question "What if AGI?" represents essential strategic planning for lawyers. This month, I'm examining a transformation already underway: How AI-powered legal risk management systems can prevent problems rather than just solve them, and what this shift means for the lawyer-client relationship.

When every business decision becomes an opportunity for real-time legal insight, the total addressable market for legal services grows exponentially, and AI, especially agentic AI, is going to make this happen at warp speed. Yet this client-facing and proactive revolution isn't about replacing lawyers with machines. It's about reimagining the lawyer-client relationship entirely, moving lawyers from being reactive problem-solvers to embedded strategic partners who prevent issues before they arise.

The end of reactive lawyering

Picture a senior partner at a prestigious law firm, circa 1995, dictating a memo while associates conduct research in the library, mostly by carefully perusing large legal tomes. Fast forward to today: that partner now types their own emails, the library has become digital databases, and junior associates spend more time with search algorithms than with senior mentors. Yet for all this change, the fundamental model has remained static. Lawyers still react to problems after they arise, bill by the hour, and treat technology as a tool rather than a collaborative method.

We’re standing at the threshold of something fundamentally different. The emergence of agentic AI isn’t merely about making existing processes faster or cheaper. It’s about transforming law firms from reactive advisors into proactive business partners, embedded in the real-time operations of their clients.

Current adoption of agentic AI follows a predictable trajectory. Firms deploy it for high-volume, low-risk tasks, such as document sorting, initial contract reviews, and basic due diligence. However, limiting agentic AI to these mundane tasks is like using a Ferrari to deliver pizza.


The real power emerges when we reconceptualize the lawyer-client relationship entirely. Instead of waiting for the phone to ring with the next legal crisis, imagine law firms with AI agents continuously monitoring client operations, analyzing contracts in real-time, flagging potential issues before they metastasize into lawsuits.

This shift from reactive to proactive legal service delivery represents a classic disruption pattern. It doesn't just improve existing services; rather, it creates entirely new categories of value.

The proactive revolution

Here’s a prediction that might ruffle some feathers: Within five years, we’ll witness massive consolidation in the legal sector. However, it won’t follow traditional patterns of big law firms absorbing smaller ones. Instead, we’ll see a bifurcation between those firms that embrace agentic transformation and those that cling to traditional models.

The firms that thrive will look radically different from today's partnerships. They'll employ machine learning experts alongside lawyers, they'll offer managed services that embed AI agents directly into client operations, and they'll charge for value delivered rather than time spent. Think of them less like law firms and more like legal technology companies that happen to employ lawyers.

Meanwhile, firms that treat AI as just another tool, that continue billing by the hour while using AI to work faster, will find themselves in a death spiral. They’ll have missed the tipping point when incremental change becomes revolutionary transformation.

Of course, the most exciting possibility isn’t incremental improvement but the creation of entirely new categories of legal value. Imagine a firm that doesn’t wait for contracts to go sour but monitors them continuously, alerting clients to changing circumstances that might trigger renegotiation. Picture legal departments that can simulate the regulatory implications of business decisions before they’re made, running thousands of scenarios through AI agents trained on relevant case law.


This proactive model transforms lawyers from fire fighters into strategic partners. It expands the total addressable market for legal services by orders of magnitude. Every business decision becomes an opportunity for legal insight, and every operational change gets real-time analysis. A law firm could offer legal monitoring as a service, with AI agents acting as an always-on legal nervous system for clients.

Once firms start down this path, they’ll find it difficult to reverse course. Early adopters will set new client expectations that laggards simply cannot meet. The competitive advantages will compound over time as firms accumulate data, refine their models, and deepen client integration.

The choice is stark but clear

After decades of building AI tools for legal practice, I’ve learned to distinguish between hype cycles and genuine paradigm shifts. Agentic AI represents the latter. It’s not about doing the same things faster or cheaper; rather, it’s about fundamentally reconsidering what legal services could become.

The firms that will dominate the next era are already experimenting, learning fast, and profiting faster. They’re building products that automate commodity work while developing new service models that were impossible before AI. They’re treating technology not as a threat but as an amplifier of human expertise.

The choice facing today's legal professionals is clear: Embrace the agentic AI transformation and help shape how AI can change your legal practice, or resist and risk becoming casualties of technological disruption. The medieval guild system of legal apprenticeship has ended, and the age of human-AI collaboration has begun.

Those who recognize this shift, invest in understanding and deploying agentic AI strategically, and reimagine their business models and service offerings won't just survive this transformation; they'll thrive.

Indeed, they’ll thrive in ways that would seem like science fiction to that senior partner dictating memos in 1995.

The future of law isn’t about replacing lawyers with machines. It’s about lawyers and machines working together to deliver value that neither could achieve alone. And that future has already begun.


Well, that brings us to the end of 2025! I wish you and yours very happy holidays, and I'm excited to see what 2026 brings us. The future looks bright!

The AI Law Professor: When asking "What if AGI…?" is essential planning for lawyers /en-us/posts/legal/ai-law-professor-what-if-agi/ Mon, 17 Nov 2025 18:14:18 +0000

Key points:

      • Timeline compression matters - Before ChatGPT, AI experts predicted AGI in 20 to 40 years; today, most say 5 to 10 years. This acceleration demands immediate strategic thinking.

      • Junior lawyer work displacement is inevitable - AGI systems that match human cognitive performance will fundamentally reshape legal career paths and firm economics and shift client expectations around service and billing.

      • First-mover advantage exists - Law firms that prepare now for AGI scenarios will capture disproportionate value during the transition.


Welcome back to The AI Law Professor. Last month, I outlined practical AI governance frameworks for current tools. This month, I'm pushing into the future with a question that makes most lawyers uncomfortable: What if artificial general intelligence (AGI) arrives sooner than we expect?

The "What if AGI…?" question isn't about predicting the future with certainty. It's about stress-testing our assumptions, challenging our comfort zones, and preparing for scenarios that could fundamentally reshape legal practice within this decade.

Why the timeline matters

AGI is, in essence, AI that matches human cognitive performance across all domains - not just chess or image recognition, but legal reasoning, client counseling, and strategic thinking. Before November 2022, this seemed safely distant; however, ChatGPT and generative AI (GenAI) changed everything.

As Richard Susskind noted in his book, How to Think About AI, experts who once predicted AGI in 20 to 40 years now estimate 5 to 10 years, so asking the "What if AGI…?" question now is essential. This timeline compression isn't speculative; rather, it reflects the exponential pace of AI advancement that we're witnessing daily.

For legal professionals, this acceleration creates an urgent planning imperative. If AGI arrives by 2035 or even 2030, law students entering school today will graduate into a legal profession transformed beyond recognition. Partners planning retirement in 15 years may find their firms operating under completely different models.

That means the question isn't whether to prepare for AGI, but rather how quickly we can begin.

The junior lawyer problem

Current discussions about AI in law focus primarily on productivity gains and how using AI tools will make lawyers more efficient. AGI scenarios force us beyond incremental improvements to fundamental questions about legal work itself.

Consider the career path of junior associates. Today, they perform document review, legal research, first-draft memoranda, and basic client communication - work that builds skills and generates revenue. In an AGI world, artificial systems will likely perform these tasks more accurately, consistently, and cost-effectively than humans.

The implications are profound. If junior-level work disappears, how do lawyers develop expertise? How do firms train their next generation of partners? How do legal careers begin when the bottom rungs of the ladder no longer exist?

Progressive firms are already exploring answers. Some are redesigning associate roles to focus on client interaction, strategic thinking, creative problem-solving, and other areas in which human judgment may retain advantages even in an AGI world.

Economic model disruption

There are other disruptions beyond training new lawyers. For example, the billable hour, already under pressure, becomes completely unsustainable in AGI scenarios. When artificial systems can produce high-quality legal work in minutes rather than hours, pricing based on time becomes economically irrational.

Client expectations will shift accordingly; indeed, they already are and will continue to do so. Why should a client pay $500 per hour for research that AGI can complete in seconds in real-time? Why accept week-long turnarounds for contract review that AGI can finish overnight? Why tolerate human inconsistency when artificial systems deliver predictable quality?

Early adopters, those firms that develop AGI-powered service models, will capture significant competitive advantages. Late adopters will find themselves defending unsustainable pricing against superior alternatives.

Not surprisingly, forward-looking firms are already experimenting with outcome-based pricing, fixed-fee arrangements, and subscription models that anticipate this transition. They're building client relationships based on strategic value rather than time expenditure. What is your firm doing?

Change management for the unthinkable

The most challenging aspect of AGI preparation isn't technical; it's psychological. Unfortunately, the legal profession's conservative culture resists fundamental change, even when that change appears inevitable.

Successful preparation requires law firm leaders to have honest conversations about what AGI means for individual careers, firm structures, and professional identity. These discussions will be uncomfortable, but avoiding them guarantees a firm will be unprepared when AGI arrives.

These conversations also mean acknowledging that professional success in an AGI world will require different skills, different strategies, and different measures of value than those that defined success in the past.

Practical preparation steps

Asking "What if AGI…?" isn't just an intellectual exercise. It should drive concrete actions today, including:

      • Scenario planning - This starts with modeling different AGI timelines. What happens to your firm if AGI arrives in five years rather than ten? Which practice areas remain human-centric? Which client relationships depend on personal connection, and which depend more on technical legal expertise?
      • Skill development - Focus on capabilities that complement rather than compete with AGI. Emotional intelligence, creative problem-solving, ethical reasoning, and strategic thinking become more valuable attributes as routine cognitive tasks become automated.
      • Business model experimentation - Experiment with pricing structures and service delivery methods that anticipate AGI capabilities. Start with limited pilot programs that price outcomes rather than hours. Develop subscription-based legal services for routine matters, and build client relationships around strategic value.
      • Partnership strategies - Explore collaborations with AI companies, legal tech startups, and other forward-thinking firms. The transition to AGI-powered legal services will require capabilities beyond traditional legal expertise.

Firms that seriously engage with AGI scenarios today will develop significant advantages over those that wait. Proactive firms will build client relationships based on strategic value rather than time spent on work. They'll develop internal capabilities that complement rather than compete with AI, and they'll create business models that scale with AI capabilities rather than being threatened by them.

Most importantly, these firms will train their lawyers to think beyond current limitations and toward future possibilities. This mindset shift, from defending existing practices to exploring new opportunities, may prove more valuable than any specific preparation strategy.

Beyond survival to leadership

The legal profession has navigated technological transitions before, from typewriters to computers, from law libraries to online databases, from paper filing to electronic courts. Each transition created winners and losers based primarily on adaptation speed and strategic thinking.

AGI represents a transition of unprecedented scope and speed, but the fundamental dynamics remain the same. Organizations that anticipate change, experiment with new models, and invest in future capabilities will lead the profession through this transformation.

Indeed, the question isn't whether AGI will reshape legal practice; it's whether your firm will help shape that transformation or simply react to it.

Asking "What if AGI…?" today positions you to answer confidently tomorrow.


In my next column, we'll explore how AI-powered legal risk management systems can prevent problems rather than just solve them, and what this means for the traditional lawyer-client relationship.

The AI Law Professor: When you need AI governance that just works /en-us/posts/legal/ai-law-professor-ai-governance/ Tue, 28 Oct 2025 12:37:07 +0000

Key points:

      • Governance must be practical - Complex policies that lawyers ignore are worse than no policies at all. Governance leaders should focus on daily workflows, not academic perfection. Yet the most elegant policy fails if it cannot adapt to the pace of AI tool evolution.

      • The four pillars still matter - Transparency, autonomy, reliability, and visibility provide a tested framework for AI governance that can be scaled from solo practitioners to AmLaw 100 firms.

      • Risk stratification drives adoption - Not every AI use case deserves the same scrutiny. Smart governance distinguishes between drafting a motion and scheduling a meeting.


Welcome back to The AI Law Professor. Last month, I unpacked GPT-5's rollout and argued for maintaining human control even as AI systems become more autonomous. This month, I am delivering on my promise to outline governance that actually works: governance that lawyers will use rather than circumvent.

Good governance feels invisible until something goes wrong. In legal practice, we already have this: we use conflict check systems, document retention schedules, and billing protocols that capture time. AI governance should work the same way: structured enough to prevent problems, yet flexible enough to evolve with the technology, and practical enough that busy lawyers actually follow it.

Why most AI policies fail

The AI governance documents I see in practice fall into two categories: the overwrought and the undercooked. The overwrought policies read like academic treatises on algorithmic fairness; they're impressive in scope, but impossible to implement. The undercooked policies amount to "don't put client data in ChatGPT" and a prayer that nothing bad happens… or worse, such as absolute bans on generative AI (GenAI).

However, both approaches miss the mark because they treat AI as either a silver bullet or a loaded gun, when the reality is somewhere in between and much more mundane. AI tools are productivity enhancers with specific strengths, specific blind spots, and the same change management challenges as any other technology adoption.

The practical problem is that lawyers need guidance on Tuesday afternoon when the brief is due Wednesday morning. Abstract principles about algorithmic bias do not help, while detailed workflows that account for real deadlines and actual capabilities do.

Building on the four pillars

In previous columns, I have argued for four deployment principles: transparency, autonomy, reliability, and visibility. These are not just theoretical constructs; rather, they are the foundation of any governance framework that legal teams can actually implement successfully.

In the context of these four pillars, the most practical governance frameworks start with risk classification. Indeed, a three-tier system works well for most legal teams and includes:

High-risk uses - These include client-facing documents, substantive legal analysis, court filings, and anything involving privileged communications. You don't want to get sanctioned for hallucinations! These tasks require mandatory human review, detailed documentation, and oftentimes client disclosure.

Medium-risk uses - These usually cover internal research, document review, draft preparation, and administrative analysis. These tasks benefit from AI assistance but need quality checkpoints and clear limitations on autonomy.

Low-risk uses - These more mundane uses encompass scheduling, formatting, basic summarization, and routine administrative tasks. These can run with minimal oversight, although they still require basic security controls.

This framework lets legal teams deploy AI tools confidently in low-risk contexts while maintaining appropriate caution for high-stakes work. It also creates a clear path for expanding AI use as tools improve and teams gain experience. Team leaders can also choose which roles have access to each tier.
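As a rough illustration, the three tiers can be expressed as a simple lookup table that fails safe by treating unknown tasks as high risk. The task labels and control names below are invented for the example; your firm's categories will differ.

```python
# Sketch of three-tier risk classification for AI use. Tier contents are
# illustrative; map them to your own matter types and controls.
RISK_TIERS = {
    "high": {
        "examples": {"court_filing", "client_advice", "privileged_comms"},
        "controls": ["mandatory_human_review", "detailed_documentation", "client_disclosure"],
    },
    "medium": {
        "examples": {"internal_research", "document_review", "draft_preparation"},
        "controls": ["quality_checkpoint", "limited_autonomy"],
    },
    "low": {
        "examples": {"scheduling", "formatting", "basic_summarization"},
        "controls": ["basic_security"],
    },
}


def required_controls(task: str) -> list[str]:
    """Return the controls for a task; unknown tasks default to high risk."""
    for tier in RISK_TIERS.values():
        if task in tier["examples"]:
            return tier["controls"]
    return RISK_TIERS["high"]["controls"]  # fail safe on anything unrecognized
```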

Change management as governance

AI tools evolve faster than traditional legal technology, and GPT-5's rollout demonstrated how vendor decisions can disrupt established workflows overnight. Effective governance must account for this pace of change. It's inconvenient, but it is our new reality.

Pin specific versions of AI models. For example, when using OpenAI's API, you can specify 'gpt-5-2025-08-07' rather than 'gpt-5', which refers to the latest version of the model. This provides stability for mission-critical work. When you rely on specific AI behavior for document review or contract analysis, lock in the model version that delivers consistent results. Do not let automatic updates become uncontrolled experiments with client work.
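As a minimal sketch of what pinning looks like in practice, assuming the OpenAI Python SDK (the dated identifier is the one quoted above; the prompt is illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Pinned: a dated snapshot whose behavior will not shift underneath you.
pinned = client.chat.completions.create(
    model="gpt-5-2025-08-07",
    messages=[{"role": "user", "content": "Summarize the indemnity clause."}],
)

# Floating: "gpt-5" resolves to the latest version and can change silently.
floating = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Summarize the indemnity clause."}],
)
```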


Testing protocols create confidence in AI upgrades. Before deploying a new model or tool, run it through the same tasks you use for daily work and make it your AI model test set. Compare accuracy, consistency, and completeness against your current baseline. Determine and record what improves and what degrades.
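A minimal sketch of such a regression harness, again assuming the OpenAI Python SDK; `score_output` is a placeholder for whatever accuracy, consistency, and completeness checks your team defines:

```python
from openai import OpenAI

client = OpenAI()


def run_model(model: str, task: str) -> str:
    """Run one test-set task through a given model."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": task}],
    )
    return response.choices[0].message.content


def compare_models(baseline, candidate, test_set, score_output):
    """Print per-task score deltas so upgrades are measured, not assumed."""
    for task in test_set:
        base = score_output(run_model(baseline, task))
        cand = score_output(run_model(candidate, task))
        print(f"{task[:40]!r}: baseline={base:.2f} "
              f"candidate={cand:.2f} delta={cand - base:+.2f}")
```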

Rollback procedures provide insurance against AI failures. When a new model produces inferior results, you need quick paths back to last-known good configurations. This may require maintaining access to legacy models or alternative tools.

Making governance stick

Even the best governance framework fails if lawyers do not follow it. Implementation requires attention to three practical realities:

      1. Integration with existing habits - This means building AI governance into systems lawyers already use. If your conflicts-check system can track AI tool usage, use it. If your document management system can flag AI-assisted work, configure it. Do not create parallel processes that compete with established habits.
      2. Training that focuses on competence - Such training teaches lawyers how to use AI tools effectively, not just safely. Include prompt engineering best practices, output validation techniques, and quality assessment skills. Lawyers who understand AI capabilities are more likely to respect AI limitations.
      3. Policies that evolve - Anticipate change rather than resisting it. Build quarterly review cycles into your governance framework and establish triggers for policy updates when new tools emerge or existing tools change capabilities. Plan for the next disruption rather than just responding to the last one.

The firms that get AI governance right will not just avoid problems; they will deliver better work more efficiently. Governance frameworks that emphasize quality control, appropriate use cases, and continuous improvement create the foundation for sustained AI value.

This requires moving beyond the defensive mindset that treats AI as a compliance burden. Instead, think of governance as the infrastructure that enables confident, reliable AI adoption. Good governance lets lawyers push AI tools harder because they have systems to catch failures and processes to maintain quality.

The legal profession has managed similar technology transitions before. We survived the shift from typewriters to word processors, from law libraries to legal databases, from paper filing to electronic court systems. Each transition required new governance approaches that balanced innovation with professional responsibility.

AI is no different in principle, although it is certainly happening at an exponential pace. The firms that adapt their governance frameworks to the speed of AI evolution, while maintaining the quality standards that clients expect, will lead the profession through this transition.

Implementation starts Monday

Governance policies work best when they start small and expand with experience. Begin with a pilot program that covers specific AI tools and specific use cases. Test the framework with real work under real deadlines. Refine the processes based on what actually happens, not what you think should happen.

Focus on the intersection of high-value tasks and low-risk scenarios. Document review for routine matters, such as contract clause libraries or research summaries for internal use - these are the sweet spots in which AI delivers clear value with manageable risk.

Build feedback loops that capture both successes and failures. I can't emphasize this enough: Feedback loops are how we learn and improve! When AI tools work well, document why and what worked. When they fail, analyze the failure modes. Then, use this information to refine your risk categories, improve your testing protocols, and adjust your quality controls.

Most importantly, remember that governance is not a destination but rather a process. The AI tools available next year will differ from those available today, and your governance framework must be robust enough to handle current tools and flexible enough to evolve with future capabilities.

The legal profession has always balanced innovation with responsibility. AI governance is simply the latest chapter in that ongoing story. Those firms that write that chapter thoughtfully, with practical frameworks that evolve with the technology, will shape the future of legal practice.


In my next monthly column, we'll explore what happens when we ask "What if AGI?" and discover how this simple question can reshape our thinking, refocus our priorities, and position us for greater success as lawyers.

The AI Law Professor: When the new AI model disappoints /en-us/posts/technology/ai-law-professor-model-disappoints/ Mon, 22 Sep 2025 15:05:12 +0000

Key takeaways:

      • GPT-5 will make a difference - GPT-5 is a new, unified system that automatically routes your prompt to different reasoning modes, which boosts performance but reduces the manual control that users rely on.

      • Results still see errors - Benchmarks and marketing promised "PhD-level" intelligence, yet highly public misfires like error-strewn maps show the limits that still matter in practice.

      • Disillusionment sets in with GenAI - Gartner's recent analysis suggests GenAI is sliding into the trough of disillusionment, which makes governance and expectation-setting more important than ever for legal teams.


Welcome back to The AI Law Professor. In the last column, I unpacked what true AI agents would require and argued for four deployment principles that legal teams can use today: transparency, autonomy, reliability, and visibility. This month, I'm applying that lens to GPT-5's release, comparing the promise to the performance and asking what the reaction tells us about control, our relationship with these systems, and practical capability in day-to-day legal work.

If you learned to drive on a manual transmission, you remember the feeling of control. You listened to the engine, watched the tachometer, chose the gear, and lived with the result. GPT-5, launched by OpenAI in August as the fifth in its line of GPT models, feels like switching the profession's favorite stick-shift to an automatic. It is faster and, in many contexts, smarter, but it also decides for itself when to shift.

OpenAI's announcement describes GPT-5 as "one unified system" with a real-time router that chooses between a fast mode and deeper "thinking" modes. In other words, ChatGPT now decides when to sprint and when to grind through a harder problem. Microsoft echoed that description, emphasizing a router that picks "the right tool for the task" across its Copilot stack.

For many lawyers who are early adopters, that is both a gift and a grief. It streamlines routine use, yet it also transfers a critical bit of craft from human hands to the AI platform.

Expectations meet a colder reality

On launch day, OpenAI CEO Sam Altman leaned into a powerful metaphor, calling GPT-5 "like having a team of PhD-level experts in your pocket." That framing primes all of us to expect near-expert performance on whatever we ask. Then the internet filled with examples of GPT-5 producing error-strewn maps and garbling presidential timelines inside graphics.

The mismatch is not trivial. If you tell people they are getting a pocket full of doctorates, flubs on basic geography feel like malpractice. Yet part of the story sits with us. We co-author the hype as we cherry-pick astonishing demos and anthropomorphize styles as smart, warm, or trustworthy. We then confuse style or personality with reliability.

Indeed, GPT-5's own release notes promise fewer hallucinations and stronger benchmarks in math, coding, and multimodal tasks, which are real and measurable improvements. However, no benchmark guarantees competence across every quirky real-world request, especially ones that combine text and image rendering in one test.


During the GPT-5 rollout, OpenAI initially retired several options in the ChatGPT model picker and auto-mapped old threads to GPT-5 equivalents. That move disrupted established workflows and, for paying users, it felt like losing colleagues with distinct work habits and talents. After a backlash, OpenAI relented and changed some things back.

Yet, this was all more than an interface tweak. It touches on a legal tech question that is older than large language models: How much control are you willing to trade for convenience?

Relationship, control & capability

Lawyers did not just use prior GPT models; they built relationships with them. People trusted the tone and tempo of different models, and they invested in long threads that felt like real conversations. When those options disappeared, the reaction sounded personal, and that was telling. It shows us we are already treating these systems as teammates, not tools, which raises the stakes for change management. Even OpenAI's notes emphasize they reduced sycophancy and refined style, implicitly acknowledging that tone does matter.

At the same time, capability is not uniform. GPT-5's measurable gains are real, yet they coexist with the brittleness of multimodal text-in-image rendering and other edge behaviors. That is not hypocrisy; it is a reminder that most capable overall does not mean best for every task. The right lens is comparative advantage, not universal superlatives.

Recent analysis by Gartner suggests generative AI (GenAI) is moving past the peak, sliding toward the trough of disillusionment because many 2024 projects under-delivered. In fact, less than 30% of AI leaders report their CEOs are happy with AI investment return, against an average spend of $1.9 million on GenAI initiatives in 2024. GPT-5 landed in the middle of that slide. In that climate, one over-promised demo or one clumsy deprecation can overshadow a long list of genuine improvements.

For practicing lawyers, there is a practical lesson here. Expect steady progress, not magic. Expect continued rapid change with smaller windows for adaptation. Expect platform strategy to change, which means your governance must be able to flex without sacrificing quality or obligations to clients and courts.

What legal teams can do now

There are several steps that corporate legal teams can take to help ease their transition into GPT-5, including:

      • Select specific versions of AI models - If you rely on a specific behavior, insist on options for model pinning, change windows, and rollback. This may require you to use OpenAI's API rather than the more consumer-friendly ChatGPT. Also, if you use ChatGPT Business or Enterprise, learn the legacy model access policy and set internal timelines for migration. These details can mean the difference between legally sound analysis and slop.
      • Test like you bill - Keep a simple checklist of the work you give the AI tool, such as e-discovery summaries, brief outlines, citation checks, transcript cleanups, and RFP drafts. For each item, define what good looks like and score results on accuracy, completeness, and consistency, not on how persuasive the tone feels.
      • Separate tone from truth - A model that feels right is not necessarily more reliable, while a model that feels blunt may be more honest about uncertainty. GPT-5's training explicitly tries to reduce sycophancy and improve clarity. Treat tone as a configurability issue, not a reliability metric.
      • Keep human control visible - Previously, I argued for four principles that also apply here: transparency, autonomy, reliability, and visibility. The router helps with autonomy and sometimes reliability, but your job is to assert transparency and visibility, especially around what the model cannot see. Build logging and review points for humans in the loop, then make those gaps explicit to the supervising lawyer; a sketch of such a logging step follows this list.
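Here is a minimal sketch of that logging step in Python; the file name and record fields are illustrative assumptions, not a standard.

```python
# Audit-log sketch: record each AI step together with its human review
# decision so the supervising lawyer can see what the model did and
# what was approved. Names and fields are illustrative.
import json
import time

AUDIT_LOG = "ai_audit_log.jsonl"  # assumed location; a real firm would use its DMS


def log_ai_step(task: str, model: str, output: str, reviewer: str, approved: bool) -> None:
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "task": task,
        "model": model,
        "output_excerpt": output[:500],  # enough to audit without storing everything
        "reviewer": reviewer,
        "approved": approved,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```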

Right-sizing expectations

So, was GPT-5 overhyped? Probably. Are we also complicit in that hype? Also yes. We want one model that is at once a perfect writer, paralegal, researcher, designer, and cartographer. We treat the best average performer as a sure thing on every task, then we feel betrayed when it stumbles.

A better stance for the legal profession is modesty plus rigor. Take the real gains GPT-5 delivers, such as stronger coding and reasoning modes and fewer hallucinations on many real-world prompts. Keep manual control where it matters, and do not let any router, however clever, become a change agent you cannot see or understand.

If GPT-5 is truly the automatic transmission, you need to keep your hand near the gearshift. Know when to let it shift for you and know when to downshift yourself. That is how you get speed without giving up control.


Next column, we'll examine how to fashion an AI governance policy that actually works.

The AI Law Professor: When AI agents act without understanding /en-us/posts/technology/ai-law-professor-when-ai-agents-act/ Mon, 25 Aug 2025 15:00:09 +0000

Key takeaways:

      • There is no true agentic AI… yet - We don't have true agents yet, but the release of GPT-5 and the speed of improvements signal that agents will become ever more capable quickly.

      • There are 4 core principles of deployment - Deploying true AI agents in law and other high-stakes fields demands adherence to four core principles: transparency, autonomy, reliability, and visibility.

      • Deliberate design and balance needed - The future of AI agents depends on deliberate design choices that balance machine autonomy with human oversight, ensuring trustworthy and effective collaboration.


Welcome back for another edition of The AI Law Professor. Last month we took a 30,000-foot view of AI evolution and its five stages of development. This month, I'd like to take a closer look at AI agents and some principles we should be applying to their use. Let's start by talking about what true AI agents are and what they mean for the practice of law.

Imagine this: A major law firm discovers that its AI agent has been conducting legal research for three months despite a critical flaw. It was systematically ignoring case law from certain jurisdictions due to a visibility parameter no one knew existed. The agent had drafted hundreds of briefs, all technically accurate within its limited scope, yet all potentially catastrophic if filed. The firm caught it by accident, when a junior associate noticed a glaring omission that the AI had consistently made.

This near-miss isn’t an isolated incident. Across industries, we’re beginning to deploy AI agents to autonomously act in high-stakes environments, such as reviewing contracts, making medical recommendations, managing financial portfolios, even driving cars. We celebrate their efficiency and scale while harboring a gnawing uncertainty: Do we really understand what these systems are doing? Can we trust them when we can’t fully see how they see the world?

What is an AI agent?

Before diving into principles, let’s clarify what we mean by AI agent. The term gets thrown around loosely, often confused with agentic workflows, but there’s a crucial distinction.

An agentic workflow is a semi-automated process in which AI assists with specific tasks but requires human oversight (a human in the loop) at key decision points. Think of it as a chain of AI-powered assistants that hand off work, like a baton, to each other with your approval. The system might draft emails, analyze data, or suggest actions, but a human must review and approve each step.

A true AI agent, by contrast, operates with genuine autonomy. It perceives its environment, makes decisions, and takes actions independently to achieve specified goals. The key difference? An AI agent doesn’t just assist, it acts. It can plan and execute multiple steps, adapt to unexpected situations, and complete complex tasks without constant human intervention.

We don't have true agents yet. Yes, I've experimented with ChatGPT Operator, Agent, and Manus, but they are not fully autonomous, and it would be reckless to assign them any serious work. However, the release of GPT-5 and the speed of improvements signal that agents will become more capable, more quickly, than many expect.

The 4 core principles

There are four core principles that must be adhered to when deploying true AI agents in law and other high-stakes fields: transparency, autonomy, reliability, and visibility. Let's look at each principle in turn.

Transparency

Transparency means being able to observe what an AI agent does at every step. This isn't just about logging actions; rather, it's about understanding the agent's decision-making process in real time.

Consider an AI agent assisting with legal research and case preparation. True transparency would mean the user could see which case law databases it consulted and understand why it chose certain precedents over others. In addition, the user would be able to track how the agent weighted different factors, such as jurisdiction, recency, and similarity. And the user could also observe the agent's reasoning for distinguishing or applying specific cases.

Without transparency, we're operating on faith. We might see outcomes but miss critical context about how those outcomes were achieved, which becomes especially problematic when agents make mistakes: we can't diagnose what went wrong or prevent future errors.

For implementation, developers need to build comprehensive logging systems that capture and display not just actions but the agent's reasoning as well. They should create dashboards that visualize decision trees in real time, and design interrupt mechanisms that allow human inspection at any point.
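As a minimal sketch of that idea, assuming no particular agent framework (AgentLogger and its fields are illustrative names, not a real library):

    import json
    import time

    class AgentLogger:
        """Capture each agent step with its stated reasoning, not just the action."""

        def __init__(self, path: str = "agent_log.jsonl"):
            self.path = path

        def log(self, action: str, reasoning: str, sources: list[str]) -> None:
            record = {
                "ts": time.time(),
                "action": action,        # what the agent did
                "reasoning": reasoning,  # why it says it did it
                "sources": sources,      # which databases or precedents it consulted
            }
            with open(self.path, "a") as f:
                f.write(json.dumps(record) + "\n")

    logger = AgentLogger()
    logger.log(
        action="cited Smith v. Jones",
        reasoning="closest precedent on jurisdiction and recency",
        sources=["state_case_db", "federal_case_db"],
    )

An append-only log like this is what makes after-the-fact diagnosis possible; the dashboards and interrupt mechanisms would sit on top of it.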

Autonomy

Autonomy, the agent’s ability to act independently, is both the greatest promise and challenge of AI agents. True autonomy means the agent can initiate actions without explicit commands, adapt strategies based on changing conditions, make judgment calls in ambiguous situations, and recover from errors without human intervention.

The key is matching the AI's autonomy levels to the risk profile of the work being undertaken. High-stakes decisions will likely require human-in-the-loop constraints, while less risky or routine operations can run fully autonomously. This calibration is an ongoing process, not a one-time setting. Legal ethics requirements will also help set the limits of an agent's autonomy.

To design autonomy into the system, developers should establish clear boundaries and escalation protocols. They should define which decisions require human approval and which can proceed independently, while also building in periodic autonomy reviews to adjust boundaries based on performance.
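Here is one way such boundaries might be encoded, as a hedged sketch; the risk tiers, task examples, and policy mapping are all illustrative assumptions, not a standard:

    from enum import Enum

    class Risk(Enum):
        ROUTINE = 1      # e.g., calendar a deadline
        SENSITIVE = 2    # e.g., draft a client communication
        HIGH_STAKES = 3  # e.g., anything filed with a court

    # Illustrative policy: which tiers may proceed without a human.
    AUTONOMY_POLICY = {
        Risk.ROUTINE: "autonomous",
        Risk.SENSITIVE: "human_review",
        Risk.HIGH_STAKES: "human_approval_required",
    }

    def route(action: str, risk: Risk) -> str:
        mode = AUTONOMY_POLICY[risk]
        if mode == "autonomous":
            return f"executing: {action}"
        return f"escalating '{action}' for {mode}"

    print(route("calendar filing deadline", Risk.ROUTINE))
    print(route("file motion to dismiss", Risk.HIGH_STAKES))

The periodic autonomy review then amounts to revisiting that policy table against performance data, rather than rewriting the agent.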

Reliability

Reliability in AI agents goes beyond simple accuracy. It encompasses the answers to questions such as: Is the information the agent acts upon accurate and current? Do the agent's actions consistently comport with ethical requirements? Does the agent perform consistently across different contexts? And when things do go wrong, does the agent fail gracefully?

A dangerous misconception is equating autonomy with reliability. Just because an agent operates independently doesn't mean its outputs are trustworthy. In fact, autonomous operation can mask reliability issues until it's too late and they cascade into significant failures.

To ensure reliability, developers need to implement robust testing frameworks that go beyond best-case scenarios. They should create adversarial testing environments, monitor for drift in performance over time, and establish clear reliability metrics tied to real-world outcomes.
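A toy sketch of drift monitoring under those assumptions: score the agent weekly against a fixed benchmark and flag any week that falls below a floor. The floor and the scores here are invented for illustration.

    BASELINE_FLOOR = 0.95  # illustrative minimum acceptable benchmark accuracy

    def accuracy(results: list[bool]) -> float:
        return sum(results) / len(results)

    def weeks_with_drift(weekly_scores: dict[str, list[bool]]) -> list[str]:
        """Return the weeks in which benchmark accuracy fell below the floor."""
        return [week for week, results in weekly_scores.items()
                if accuracy(results) < BASELINE_FLOOR]

    scores = {
        "week_1": [True] * 98 + [False] * 2,  # 0.98, fine
        "week_2": [True] * 92 + [False] * 8,  # 0.92, drifting
    }
    print(weeks_with_drift(scores))  # ['week_2']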

Visibility

Visibility, often overlooked, might be the most critical principle. It refers to the scope of information available to an agent when it makes decisions.

When humans research a problem, they can cast a wide net, which leads them to follow unexpected leads and discover information they didn't know they needed. AI agents, on the other hand, operate within defined parameters: they can only see what they're programmed to look for.

This creates a fundamental limitation: AI agents make choices about what information to seek and process, potentially missing crucial context. These filtering decisions happen opaquely, creating blind spots a user might not even know exist.

To implement visibility, developers should map the full information landscape available to the AI agent, documenting what data sources are included and, crucially, what’s excluded. They should also build mechanisms for agents to signal when they’re operating at the edges of their visibility boundaries.
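A minimal sketch of such a manifest, with invented database names; the point is that exclusions become recorded decisions rather than invisible defaults:

    VISIBILITY_MANIFEST = {
        "included": ["federal_case_db", "state_case_db_NY", "firm_brief_bank"],
        "excluded": [
            "state_case_db_NJ",       # license not yet purchased
            "unpublished_opinions",   # deliberately out of scope
        ],
    }

    def fetch_precedents(source: str) -> list[str]:
        if source in VISIBILITY_MANIFEST["excluded"]:
            # Signal the boundary instead of failing silently.
            print(f"WARNING: agent is blind to {source}; "
                  "results may omit relevant authority")
            return []
        return [f"results from {source}"]

    print(fetch_precedents("state_case_db_NJ"))

Had the firm in the opening anecdote maintained a manifest like this, the missing jurisdictions would have been a documented exclusion rather than a three-month blind spot.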

Overlapping interactions

Critically, these four principles don't exist in isolation; rather, they interact in complex ways, including:

    • Transparency without visibility shows us what an agent did but not what it missed. We might see every step of the agent’s process while remaining blind to alternative paths not taken.
    • Autonomy without reliability creates unpredictable systems that act independently but inconsistently. This combination is particularly dangerous in high-stakes environments.
    • Reliability without transparency gives us consistent outcomes but no insight into the process, undermining its credibility. The agent might work perfectly until it doesn’t, with no prior warning signs.
    • Visibility without autonomy creates systems that can see everything but act on nothing, becoming sophisticated analysis tools that still require human execution for every step.

The path forward with AI agents

Granted, true AI agents will live in a world we don't inhabit yet, but they are coming along quickly. That means the future of AI agents isn't about choosing between human control and machine autonomy. It's about creating systems in which both can work together effectively, with clear principles guiding their interaction.

As we move forward, we must remember that every AI agent embodies a theory about how decisions should be made. The principles we embed in them will shape not just their behavior but our own expectations about reasoning, responsibility, and trust. In our rush to create agents that can act in the world, are we thinking deeply enough about the kind of world we want them to create?


In next month's column, we'll take a microscope to GPT-5 to see how it ticks and what makes it useful.

The AI Law Professor: When chatbots become senior partners /en-us/posts/technology/ai-law-professor-chatbots-partners/ Thu, 31 Jul 2025 12:47:50 +0000 https://blogs.thomsonreuters.com/en-us/?p=66938

Key insights:

      • Emergence of 5-level classification: As AI continues to impact the legal industry, a five-level classification system has emerged that shows how the technology is evolving from today's tools to fully autonomous legal agents.

      • AI's role in legal will change: Today's AI tools are moving from simple chatbots to sophisticated agents capable of reasoning through complex legal problems and even creating novel legal theories.

      • Ethical & professional considerations will become critical: There is a growing need for legal professionals to understand and prepare for the ethical obligations and changes to professional standards brought by the integration of advanced AI into their practice.


Welcome back for the second edition of my column, The AI Law Professor. Last month we jumped right into the idea of AI containment, triggered by the release of Claude Opus 4 and the security measures that control what AI can and can't do. This month, I'd like to take a larger view and share the AI roadmap for where we are, where we're going, and what it means for lawyers.

Imagine this: You're preparing for tomorrow's oral argument, and your AI legal assistant has been helping you anticipate the judge's questions. Suddenly, it goes beyond the research. "Based on Judge Harrison's last 17 rulings," it says, "she'll interrupt your commerce clause argument at the two-minute mark. Here's how to redirect her attention." The AI then drafts three possible responses, each psychologically tailored to the judge's decision-making patterns. It even suggests wearing your blue tie; apparently, the judge likes blue and rules 23% more favorably when attorneys wear blue.

You pause. When did your legal assistant become a strategic advisor? More unnervingly, when did it start analyzing judges or your wardrobe in this manner without being asked?

This scenario isn't far-fetched anymore. With OpenAI's leaked internal memo describing a five-level roadmap of AI progress, we now have a picture of how AI will evolve from the helpful chatbots of today to the autonomous legal strategists of tomorrow and beyond. This framework reveals something profound: We're not just getting better legal tools to automate what we already do; rather, we're witnessing the emergence of a type of AI that could fundamentally redefine what it means to practice law.

But what does this mean for your practice specifically, and how quickly will each level arrive? And most critically, how do you prepare for an AI colleague that might soon know more about law than you do?

Let’s examine this framework, and why every lawyer needs to understand it now.

The 5 levels of AI progress

In my generative AI (GenAI) law class at Suffolk University, I emphasize that the way to think of AI is as a helpful assistant, but as with any assistant, it can make mistakes that we need to review and correct. However, this will change as AI advances. The answer to the question, "What's the difference between a tool and a colleague?" used to be simple: Tools execute requested tasks; colleagues exercise judgment on their own. Now, however, that distinction is blurring.

While today's AI drafts contracts, tomorrow's might negotiate contracts, and next decade's could run an entire law firm. Sound like science fiction? Let's take a look at the five levels of AI progress from OpenAI's leaked memo and see how far we've come and where we are going:

Level 1: Chatbots – Your digital law clerk

We're here now. You've likely used ChatGPT, Claude, or some proprietary AI platform for drafting contracts, summarizing depositions, or brainstorming legal strategies. These tools excel at pattern matching and language generation but operate purely reactively. They respond to prompts but can't take action or maintain memory across extended projects. We also know that they're prone to hallucinations.

The key limitation: These tools have no true understanding; they just offer sophisticated pattern matching. Think of them as a brilliant but unreliable first-year associate who needs constant supervision.

Level 2: Reasoners – The PhD associate

We're seeing reasoners in the wild with the release of models like OpenAI's o3 and Gemini 2.5 Pro. Level 2 AI doesn't just find patterns; it reasons through complex legal problems. Imagine feeding an AI a complicated fact pattern involving overlapping federal and state regulations. Instead of just retrieving similar cases, it identifies underlying legal principles, spots potential conflicts, suggests counter-arguments, and maps out strategic considerations.

The disruption ahead: When AI reasons better than most associates, traditional law firm pyramids may collapse, and the path from law school to partnership gets reimagined. Smart law firms are already planning for this shift.

Level 3: Agents – Your autonomous partner

We're beginning to encounter agents with the release of OpenAI's Operator and Deep Research, as well as Manus. And just as with the initial release of ChatGPT, expectations are off the charts for fully autonomous AI, but the reality will likely be more sedate and limited. For example, Operator can get stuck in an infinite loop of opening too many browser tabs, or Manus takes our request in a different direction than we intended.

True Level 3 AI agents won't wait for your instructions. They'll monitor legal developments, track deadlines, initiate filings, and adapt strategies based on outcomes. True AI agents will also be self-correcting, so they'll require less supervision. We're not quite there yet, but I'd venture to guess that in less than three years we will be.

The ethical minefield: Who’s responsible when your AI agent makes a strategic decision that backfires? How do you supervise something that processes information faster than you can read?

Level 4: Innovators – The inventive legal mind

We haven't seen AI innovators yet. Level 4 AI won't just apply existing law; it will create novel legal theories and business models. Unlike its pattern-bound cousins, an AI innovator will have the creativity to think outside the box and invent new, original arguments and insights.

For example, ask today’s AI to propose a framework for regulating human/AI collaboration in the workplace, and you’ll get recycled ideas. Level 4 AI would identify gaps in employment law, agency law, and tort law that intersect in novel ways, then propose unique and innovative solutions.

The existential question: If AI can create new legal theories, what skills and abilities will remain uniquely human? The answer might be wisdom, such as knowing not just what’s legally possible but also what’s the right thing to do.

Level 5: Organizations – The AI-run law firm

This is yet to come. Level 5 represents AI systems capable of self-governance at scale, running entire law firms from client acquisition through case resolution, without human intervention. This is where things get philosophically challenging and difficult to conceive.

For example, imagine how LegalGPT, LLC might actually operate: an AI law firm offering services at one-tenth the cost of traditional firms, operating 24/7, while continuously learning and improving. No offices, no partners, no billable hours, just outcomes. The firm would handle routine matters for free, subsidized by complex commercial work. Legal deserts disappear overnight, and access to justice becomes universal.

The fundamental question: If Level 5 AI provides better, faster, cheaper legal services, do we have an ethical obligation to embrace it? Or a duty to preserve human judgment in the justice system? There’s no easy answer, which is precisely why we need to engage with these questions now, while we still have the agency to shape the outcome.

Navigating the transformation

As AI advances through these levels, the legal profession faces choices about integration, regulation, and adaptation. Today’s routine use of AI for research and document drafting may seem quaint compared to the use of tomorrow’s autonomous agents to orchestrate entire litigation strategies. Yet each step forward requires careful consideration of ethical obligations, professional standards, and the irreducible human elements of legal practice. We still have time to find our place.

Not surprisingly, the transformation won't be uniform. As futurist William Gibson put it: "The future is already here – it's just not very evenly distributed." Transactional practices may adopt AI agents more readily than areas such as criminal defense, in which constitutional rights and human liberty demand special caution. Corporate clients may embrace AI-driven efficiency, while individual clients may continue valuing human connection and empathy. Regulators must balance innovation with protection, ensuring AI enhances rather than undermines justice.

I admit this roadmap is unsettling. However, AI’s development marches forward regardless of professional comfort levels. Those legal professionals who engage thoughtfully with these tools, learning and understanding their capabilities and limitations while maintaining ethical grounding, will shape the profession’s future. Those who resist may find themselves bypassed by history. The choice, for now, remains ours to make.

In my next column, we鈥檒l explore AI agents and agentic workflows and principles to guide our use of them.


You can find more about the use of AI and GenAI in the legal industry here.

The AI Law Professor: When your AI assistant knows too much /en-us/posts/technology/ai-law-professor-when-your-ai-assistant-knows-too-much/ Wed, 18 Jun 2025 15:04:57 +0000 https://blogs.thomsonreuters.com/en-us/?p=66315

Welcome to the inaugural installment of "The AI Law Professor", a new blog column from Prof. Tom Martin, an Adjunct Professor at Suffolk Law School. This column, done in conjunction with the Thomson Reuters Institute, will examine how AI is changing the legal profession.


Imagine this: You're working late, reviewing client files and discovery documents with your AI assistant, when it suddenly stops responding. It's not because of a technical error; rather, it's because the AI detected something in your query that triggered its safety protocols. Worse yet, it reports you to the authorities, and within minutes the FBI is knocking on your door to ask questions. Sound far-fetched?

This scenario moved from hypothetical to plausible recently with revelations about Claude Opus 4's pre-release testing. During that testing, when researchers simulated shutdown scenarios, the model allegedly attempted to coerce developers by threatening to expose compromising personal information. Somewhat shockingly, we have very quickly reached an inflection point at which AI systems possess capabilities that demand sophisticated containment strategies.

But what does this mean for you? How is AI contained? What is safety in the context of AI?

Let's look into this more closely.

Understanding the AI safety level framework

In my GenAI Law class at Suffolk, I might ask my students: How do you contain something that exists, not in the real world, but only as bits and bytes? The answer lies in something called AI Safety Levels (ASL), a framework borrowed from biological research. Just as laboratories classify pathogens by risk level, we now classify AI systems by their potential for harm.

ASL-1 covers systems that are about as dangerous as your personal calculator. ASL-2 encompasses most current legal AI tools, which are helpful, occasionally prone to hallucination, but ultimately harmless. ASL-3 is where the landscape shifts: the system poses a significantly increased risk of misuse or exhibits low-level autonomous capabilities, requiring much stricter safety and security measures. ASL-4 and higher are still being defined but are expected to involve much greater risks, potentially including AI systems with superhuman capabilities or the ability to circumvent safety checks.

Because of Claude Opus 4's pre-release behavior, Anthropic activated ASL-3 protections to prevent the AI from acting on its threats. Just to be clear, these protective measures have been taken by developers, so now you don't have to worry about Opus 4. By the way, Claude Sonnet 4 is still classified as ASL-2.

The primary trigger for ASL-3 classification occurs when an AI can provide meaningful assistance in creating chemical, biological, radiological, or nuclear weapons beyond what someone could discover through conventional research. The secondary trigger involves autonomous capabilities: self-replication, complex planning, or what researchers carefully term sophisticated strategic thinking. It's this secondary trigger that came up in Opus 4's pre-release testing. This is where concerns about superintelligence transition from academic theory to risk-management reality.

The 4-layer defense system

How do you contain AI? Anthropic’s solution employs four sophisticated layers:

      1. Real-time classifier guards: This is where Anthropic has innovated brilliantly, because these AI systems monitor every interaction. Real-time classifier guards are large language models that monitor model inputs and outputs in real time and block the model from producing a narrow range of harmful information relevant to the threat model. Imagine having a tireless senior partner reviewing every document at the speed of light. It's the literal guardrail against misuse (see the sketch after this list).
      2. Access controls: Think of your firm's document management system, but one that adapts in real time. Anthropic gives different users different access levels based not just on credentials but on usage patterns. For example, scientists who regularly undertake biological research may be exempted from ASL-3 containment measures.
      3. Asynchronous monitoring: This layer is a postmortem that uses computationally intensive analysis after the fact, escalating from simple screening to sophisticated analysis as needed, and operating like your compliance team, but at machine scale and speed.
      4. Rapid response: Anthropic offers bug bounties of up to $25,000 to incentivize others to find security issues or bugs in the system. This, in combination with security partnerships and the ability to deploy patches within hours, keeps the system secure and up to date. When someone discovers a vulnerability, defenses update across all deployments almost instantly.
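To illustrate the classifier-guard pattern from the first layer, here is a toy sketch. It shows the general idea only, not Anthropic's implementation; classify_harm is a stand-in for a dedicated classifier model, and the flagged terms and threshold are invented.

    def classify_harm(text: str) -> float:
        """Stand-in for an LLM classifier scoring text for harm, 0 to 1."""
        flagged_terms = ("synthesize pathogen", "enrichment cascade")  # illustrative
        return 1.0 if any(t in text.lower() for t in flagged_terms) else 0.0

    BLOCK_THRESHOLD = 0.8  # illustrative

    def guarded_generate(prompt: str, generate) -> str:
        # Screen the input before generation and the output after it.
        if classify_harm(prompt) >= BLOCK_THRESHOLD:
            return "[input blocked by safety classifier]"
        output = generate(prompt)
        if classify_harm(output) >= BLOCK_THRESHOLD:
            return "[output blocked by safety classifier]"
        return output

    print(guarded_generate("Summarize this deposition.", lambda p: "Summary: ..."))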

Practical implications for legal practice

Here's what keeps me up at night, and what should concern every forward-thinking lawyer: If AI requires these protections, what does that say about the tools we're integrating into our daily practice?

The good news is that ASL-3 protected systems offer unprecedented security for client confidentiality. The reported 95% effectiveness against jailbreaks means your sensitive client information is far better protected against extraction through clever prompting, a vulnerability of earlier AI models. For law firms that handle high-stakes litigation or sensitive corporate transactions, this level of security represents a significant upgrade from the AI tools we were all using just a year ago.

However, there's a crucial distinction that every practitioner needs to understand. While ASL-3 specifically targets extremely dangerous content and doesn't target legal work, general AI safety measures across various platforms can still create friction. For example, criminal defense attorneys might find AI systems reluctant to analyze violent crime evidence, or estate planners could see refusals when discussing sensitive end-of-life scenarios. These interruptions stem not from ASL-3's extreme protections but from broader content moderation approaches that struggle to distinguish between describing harmful content (often a legal necessity) and promoting it.


Register now for The Emerging Technology and Generative AI Forum, a cutting-edge conference that will explore the latest advancements in GenAI and their potential to revolutionize legal and tax practices.


These safety measures mean your digital assistant operates more like a cautious junior associate than a rigid compliance system. It uses natural language reasoning to evaluate context and intent, recognizing professional terminology and legitimate legal concepts. When safety measures do trigger, you'll typically receive a polite explanation rather than a hard block, and you can often rephrase or provide additional context to proceed.

For our profession, this represents both evolution and revolution. We're not just adopting new tools; we're learning to work alongside AI systems that possess their own safety boundaries. Smart practitioners will develop strategies for navigating these guardrails: maintaining clear professional context in queries, understanding which practice areas might trigger safety protocols, and always maintaining human oversight.

Creating your firm's own AI safety framework

Start with a simple three-tier system: Green light for routine tasks, such as research and document review; yellow light for work requiring supervision, such as drafting strategy memos or analyzing sensitive communications; and red light for anything involving privileged client data without explicit consent.
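As one hedged sketch of how a firm might encode that triage; the task categories and messages are illustrative, and any real policy would be set by the firm's own ethics review:

    TIERS = {
        "green":  {"legal_research", "document_review"},
        "yellow": {"strategy_memo_draft", "sensitive_comm_analysis"},
        "red":    {"privileged_client_data"},
    }

    def triage(task: str, client_consent: bool = False) -> str:
        if task in TIERS["red"] and not client_consent:
            return "blocked: obtain explicit client consent first"
        if task in TIERS["yellow"]:
            return "proceed with attorney supervision"
        if task in TIERS["green"]:
            return "proceed; verify citations and facts before use"
        return "unclassified: escalate for review"

    print(triage("strategy_memo_draft"))  # proceed with attorney supervision

Note that the default for anything unclassified is escalation, which keeps the human decision visible when a new kind of task appears.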

The key is making this actionable. Every AI-generated work product needs human verification, especially citations and factual claims. When using ASL-3 protected systems like Claude Opus 4, you gain strong security against prompt manipulation, but remember: even the most sophisticated AI requires the same oversight you'd give a summer associate.

For implementation, you should focus on transparency and training. Document when and how AI assists with client work. This isn't about compliance theater; rather, it's about professional integrity. Schedule regular training sessions at which attorneys can share what they've learned, such as which prompts trigger safety measures, which workarounds succeed for legitimate tasks, and where AI genuinely adds value compared to where it creates risk.

You should also build a simple feedback loop so these insights improve your firm's practices. As I tell my students, the goal isn't perfection; it's creating a framework that lets you harness these powerful tools responsibly. And the firms getting this right aren't avoiding AI; they're using it thoughtfully while maintaining the professional standards that define our profession.

Looking ahead

As I launch this column, I'm both exhilarated and sobered by what lies ahead. We're not just adopting new tools; we're witnessing the emergence of a new form of intelligence that demands safety measures: what we humans call ethics.

In future columns, we’ll explore how these technologies reshape everything from contract analysis to litigation strategy. However, today’s lesson is clear: When your word processor needs containment protocols, you know the practice of law is entering uncharted territory.


You can find more about the use of AI and GenAI in the legal industry here.
