Design Thinking Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/design-thinking/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

AI in Practice: Using Design Thinking to build a culture of innovation for the AI era
/en-us/posts/technology/ai-in-practice-design-thinking/ (Mon, 13 Oct 2025)

Key takeaways:

      • Human-centered AI strategy is essential – Successful AI adoption in law firms requires a strategy that is not static but adaptive and human-centered. Firms must continuously evolve their approach to keep pace with technological advancements and prioritize the human experience in their transformation.

      • Design Thinking drives sustainable change – Incorporating Design Thinking into AI strategy helps foster innovation, cultural buy-in, and effective change management. This ensures that adoption is not just about technology, but also about engaging and empowering people across the organization.

      • Focus on collaboration and continuous learning – Creating cross-functional teams, running ideation workshops, and establishing feedback loops are practical steps to build a culture of innovation. By teaching Design Thinking as a core skill, law firms strengthen their capacity for curiosity, adaptability, and continuous improvement, allowing them to thrive in the AI era.


It’s here and it’s happening. The days of AI as a buzzword are behind us. We have entered an era in which AI is woven into the fabric of daily life, work, and business. And for law firms and legal professionals, this isn’t just about technology; it’s about rethinking how we work, collaborate, and deliver value.

Not surprisingly, 8 of 10 professionals surveyed predict that “AI will have a transformational or high impact on their work in the coming five years,” according to the 2025 Future of Professionals report from Thomson Reuters. Some legal industry leaders even suggest AI is more transformative to the legal profession than the introduction of the billable hour or email.

So how should law firms and corporate legal teams prepare their attorneys, professionals, and business models for what’s ahead? The answer isn’t black and white or formulaic. Like AI itself, the approach must evolve. A successful AI strategy isn’t static; it’s adaptive, iterative, and human-centered.

That is the crux of an effective AI strategy: the ability to evolve and pivot along with the pace of change and rapid advancements in technology. Of course, this raises a common question: If everything will change anyway, why bother? Mostly because history tells us that waiting on the sidelines is rarely a winning strategy.

Today, organizations must embrace AI, and the key to unlocking sustainable adoption, change, and opportunity may well lie in incorporating the core principles of Design Thinking.

Why Design Thinking?

Design Thinking is an approach to fostering innovation, solutions, and ideas incorporating a human-centric philosophy. It starts with identifying an opportunity or a problem, then moves through phases: brainstorming, developing and testing potential ideas, and ultimately implementing and scaling solutions.

Human experiences, perspectives, and empathy are not only the key to unlocking the benefits of Design Thinking; they are also mission-critical for AI. Indeed, AI is only as strong as the humans guiding it. That’s why it’s essential to humanize the process by which we figure out how to use emerging technology and to bring others along during the process to reduce fear, increase engagement, and evolve together.

The Future of Professionals report shows that organizations with clearly crafted, transparent, and tangible strategic AI plans are “almost twice as likely to already be experiencing revenue growth as a result of their AI investment.” But how can law firms merge the very clear need to have an AI strategy with the need for professionals to actually adopt and use the tools?

The strategy itself needs to include an adoption plan, and adoption requires cultural buy-in. Recognizing that every firm and organization is different, running a Design Thinking session around AI adoption with various user groups is a great way to account for the cultural and individual human experience that comes with change management.

Seyfarth’s approach: SEYmultaneous Advancement

At Seyfarth Shaw, we recognized the need for a new way of working that develops future-ready skills, fosters innovation, and creates a high-performance culture that empowers all employees to lead through change. SEYmultaneous Advancement is our firmwide approach to embed innovation and Design Thinking into the DNA of our people strategy.

The goal of the initiative is to avoid siloed innovation and instead foster a culture in which learning happens in all directions – across titles, practice areas, and departments. It breaks down traditional barriers between lawyers and business professionals and embeds innovation as a shared competency across all roles.

Collaborating across functions

With an approach rooted in Design Thinking, a standout feature is our live AI demo and ideation sessions, during which attorneys and non-attorneys share real use cases, brainstorm solutions, and learn from one another. Led by attorneys rather than traditional tech support, these sessions make AI tools more accessible and foster a culture of experimentation and peer learning.

The initiative also promotes a generational exchange of knowledge: junior attorneys explore new tech use cases while senior attorneys contribute subject matter expertise, creating a two-way learning model that accelerates both technical fluency and legal acumen.

Those who use and train AI in silos will likely not have the same level of success – or as varied success – as companies that embed a culture of collaborative ideation into AI adoption and use. Collaborative, group approaches to ideation not only generate more ideas; they also help create broad awareness of the limitations and risks associated with AI.

Getting started: Putting Design Thinking to work

Law firms and corporate legal departments can begin leveraging Design Thinking today by taking the following steps:

Approach:

      • Start with empathy and listen to pain points across roles.
      • Use collaborative workshops to define problems, not top-down memos.
      • Encourage “yes, and” thinking to open creative possibilities.

Action items:

      • Create cross-functional teams (partners, associates, IT and operational professionals) to experiment with AI and share their findings.
      • Run ideation workshops in which participants can be celebrated and encouraged to suggest use cases without fear of being judged.
      • Pilot in sprints – start with smaller, low-risk adoption tests before wider rollouts.
      • Establish feedback loops by implementing systems and regular channels for feedback throughout the process, not only at the end.
      • Identify champions across every office, level, and department to drive adoption, embrace ideas, and bring people together.

Beyond technology: Building a muscle

When law firms incorporate Design Thinking, it’s not simply a framework for onboarding new technology but a muscle that strengthens every dimension of their legal practice. Learning to empathize, define, ideate, prototype, and test doesn’t only make attorneys and professionals better at adopting AI skills; it makes them better problem-solvers, strategists, and innovators.

In client service, for example, Design Thinking helps attorneys move from “What’s the legal answer?” to “What’s the best solution for this particular client’s context and need?” That shift in perspective builds stronger connection and trust between lawyers and their clients.

Further, Design Thinking can spark curiosity and experimentation in business development, whether it be reimagining how services are packaged or designing more engaging client experiences. Professionals who creatively differentiate how to interface with clients will stand out.

Ultimately, fostering a culture of innovation where Design Thinking is taught as its own skill increases a law firm鈥檚 capacity for curiosity, adaptability, and continuous learning. AI will no doubt continue to accelerate the pace of change, but it is this human-centered, design-driven mindset that will determine which professionals and organizations thrive and evolve in the future of law.

And those organizations that thrive won’t just adopt AI; they’ll design their future with intention.


You can find out more about Design Thinking here.

Competitor or collaborator? Navigating legal tech’s role in document drafting
/en-us/posts/legal/document-drafting/ (Mon, 06 Oct 2025)

Key takeaways:

      • Competitive approaches create bottlenecks and dissatisfaction – Resisting AI integration in legal document drafting leads to inefficient workflows, increased errors, and an inability to meet evolving client expectations.

      • Collaborative approaches boost efficiency and value – Embracing AI as a collaborative tool streamlines document processes, enhances accuracy, and allows legal professionals to focus on higher-value strategic work.

      • AI integration elevates document drafting through automation – Leveraging AI in document drafting automates repetitive tasks, improves consistency, and enables lawyers to provide more strategic, data-informed contributions to their organizations.


The legal profession stands at an inflection point. As AI transforms document drafting and review work, legal professionals face a fundamental choice: Compete against these tools or collaborate with them. This isn’t merely about adopting new software; rather, it’s about reimagining how legal services operate.

The legal industry is witnessing a significant shift from the traditional pen-holder approach to document management toward a more collaborative model. When lawyers compete with AI, they resist integration and view these tools as threats to human expertise. Collaboration, by contrast, treats AI as technology that amplifies legal professionals rather than replacing them.

Current data from the Federal Bar Association reveals the stakes: a growing share of legal professionals now use generative AI (GenAI) at work, up from 27% last year, and nearly 80% of firms plan to leverage GenAI within five years. The question isn’t whether AI will transform legal practice; the question is whether legal professionals will shape that transformation or be shaped by it.

The competition approach: What happens when legal resists

Legal professionals who compete with AI demonstrate predictable resistance patterns. They delay technology adoption, maintain paper-heavy processes, and stick rigidly to traditional workflows. This approach introduces bottlenecks in complex documents requiring multiple specialists’ input. This pen-holder model, in which one person integrates various perspectives into a cohesive document, becomes increasingly inefficient under competitive approaches.

Indeed, the data tells a stark story. Despite advances in digital document management, 86% of attorneys still rely on paper documents. This preference creates operational bottlenecks and increases the risk of document loss or damage. In fact, continued reliance on paper documents in a digital world turns retrieval into a time-consuming process that clients increasingly will not tolerate.

Not surprisingly, document processing suffers measurably under these competitive strategies. Manual review processes take longer and produce more errors compared to AI-assisted approaches. Quality control remains entirely dependent on human oversight, becoming increasingly expensive and time-consuming for high-volume work like contract reviews and due diligence projects.

The market consequences are equally clear. Clients now expect law firms to use AI wherever and whenever possible to improve efficiency so their outside lawyers can focus on strategic thinking. Firms that take a competitive stance toward AI struggle to meet these evolving expectations: response times lag behind those of technologically advanced competitors, and pricing becomes less competitive as operational costs remain elevated while market rates adjust to AI-enhanced efficiency standards.

When legal speaks only in traditional terms rather than data-driven insights, it becomes the black box everyone struggles to navigate.

The collaboration approach: When AI enhances lawyers’ efforts

Collaborative approaches integrate AI as workflow enhancement rather than replacement technology. Co-authoring legal documents allows participants to view and edit documents simultaneously, ensuring changes are immediately visible to everyone involved. This creates hybrid workflows that combine human judgment with machine-processing capabilities while maintaining professional oversight.

The transition requires cultural adaptation alongside technological implementation. Successful collaboration demands that firms move beyond the ingrained preference for presenting polished final drafts toward embracing real-time collaborative processes. Training focuses on AI literacy while maintaining professional responsibility standards, which, in turn, helps legal professionals understand how to leverage technology without compromising quality or ethics.

The Federal Bar Association research demonstrates measurable benefits from collaborative AI adoption. At the firm level, 61% of respondents report that AI adoption has “somewhat” increased efficiency, while another 21% note significant efficiency improvements. Among practitioners using AI tools, 45% say they incorporate technology into daily workflows and 40% say they use it weekly. These users primarily leverage AI for drafting correspondence, brainstorming, and research tasks that previously consumed a disproportionate amount of time.

Collaboration also fundamentally transforms resource allocation. Repetitive tasks like document review and drafting become automated, freeing attorneys to focus on higher-value strategic work. Quality control evolves to incorporate both human expertise and algorithmic verification, creating more robust review processes than either humans or AI can achieve independently.

Further, modern collaboration platforms support real-time communication, task assignment, document sharing, and collaborative editing regardless of geographical barriers. This technological infrastructure supports the kind of seamless collaboration that clients increasingly expect from modern legal services providers.

AI tool categories serve distinct collaborative functions. For example, contract analysis tools excel at extracting terms, identifying risks, and comparing provisions across sets of documents while humans provide strategic interpretation. Document drafting assistance provides template optimization, consistency checking, and compliance verification while lawyers maintain creative control. And due diligence platforms organize repositories, extract relevant information, and flag issues requiring human attention, enabling comprehensive review within compressed timeframes.

Smart legal contract management leverages advanced technology to redefine the drafting, executing, and enforcing of agreements. When legal teams understand that contracts really are how businesses run and contain valuable data that often goes unnoticed, they can transform themselves from document creators into strategic advisors.

The more that legal teams can track data points and use them to drive decision making, the more leadership values their contributions and understands their strategic importance.

Strategic implications: What the data shows

Direct comparison between these competitive and collaborative strategies reveals substantial operational differences. Collaborative implementations consistently demonstrate productivity advantages and enhanced accuracy across document categories.

Organization size significantly influences adoption success – Firms with 51 or more lawyers report 39% GenAI adoption rates, according to the Federal Bar Association research, benefiting from dedicated technology teams and comprehensive training programs. Solo practitioners need streamlined solutions with minimal learning curves, but the fundamental benefits of collaboration remain consistent across firm sizes.

Implementation timing matters – With more than two-thirds (67%) of law firms planning document management system upgrades by 2025, AI-driven features become essential for supporting strategic goals. Gradual implementation approaches achieve higher acceptance than rapid deployment strategies, but early adopters will be the ones to gain experience and client-relationship advantages in evolving legal service markets.

Risk management remains paramount – Technology adoption introduces security, confidentiality, and professional responsibility considerations that collaborative approaches must address through robust protocols and ethical compliance frameworks. The goal isn't efficiency at any cost but rather enhanced delivery of legal services that maintains professional standards while meeting modern client expectations.

The path forward

With 79% of law firm professionals incorporating AI tools into daily work, the profession has moved beyond asking whether to adopt AI toward determining how to implement it strategically.

Legal professionals should evaluate AI integration through structured analysis that considers practice requirements, client expectations, and competitive positioning needs. Success demands understanding that legal technology isn’t just about automation but about visibility and strategic value creation.


You can learn more about how the legal industry is adapting to the impact of GenAI here.

AI & human rights: The importance of explainability by design for digital agency
/en-us/posts/sustainability/explainability-by-design-digital-agency/ (Thu, 15 May 2025)

AI systems increasingly shape access to rights, services, and opportunities, which makes the ability to understand, evaluate, and respond to AI-driven decisions a structural requirement for exercising human rights. This condition, called digital agency, ensures that individuals retain autonomy and accountability in environments governed by automated systems.

Rosenberg, a recognized AI governance and data protection expert and Co-Founder of Women in AI Governance, calls for the formal recognition of digital agency as a fundamental human right. Securing digital agency requires embedding explainability into AI systems at the design level, making system outputs understandable, accessible, and actionable. Without digital agency, individuals are exposed to systems that decide without visibility, affect without consent, and deny the possibility of meaningful redress.



Today, many AI systems operate without meaningful explanation, creating an explainability gap that prevents individuals from recognizing or responding to the impact AI-driven decisions may have on their lives. This unchecked deployment of opaque AI can systematically displace individual agency, creating environments in which decisions are made without visibility or contest, Rosenberg warns.

Current legal frameworks, including the European Union’s AI Act, attempt to mitigate systemic risks through classification and documentation requirements. However, they do not secure operational explainability for individuals affected by AI-driven decisions. Rosenberg argues that recognizing digital agency as a human right is essential to correcting this failure. She advocates embedding explainability into AI systems as a condition for preserving autonomy within increasingly automated governance structures.

Preserving digital agency through explainability

AI governance frameworks often conflate transparency with explainability, although the two concepts serve different functions. Transparency provides limited information about a system’s existence or purpose, while explainability ensures that individuals can understand how decisions are made, what influences them, and how they can respond. Most legal frameworks mandate transparency but do not compel explainability, leaving individuals without the means to navigate or challenge AI-driven outcomes.

Embedding explainability by design requires systems to support functional understanding from the outset. Rosenberg defines this threshold as minimum viable explainability: ensuring that AI systems make influencing factors and decision outcomes intelligible enough for individuals to assess, understand, and act upon meaningfully, if necessary. Systems designed without explainability embed opacity as a structural feature, cutting individuals off from seeing how decisions affect them, questioning outcomes, and seeking correction when needed.
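Rosenberg’s “minimum viable explainability” threshold can be made concrete with a small, purely hypothetical sketch: a system returns the factors that influenced a decision alongside the outcome and a channel for redress, so the affected individual can assess and contest it. The class and field names below are illustrative assumptions, not drawn from any standard or from Rosenberg’s framework.

```python
# Hypothetical sketch of explainability by design: the decision object carries
# its influencing factors and a redress channel, rather than a bare outcome.
from dataclasses import dataclass, field


@dataclass
class ExplainedDecision:
    outcome: str
    # Signed weights indicating how much each factor pushed the decision
    # (scale is illustrative only).
    influencing_factors: dict = field(default_factory=dict)
    contest_channel: str = "appeals@example.org"  # where the individual can respond

    def top_factor(self) -> str:
        """Return the factor with the largest absolute influence on the outcome."""
        return max(self.influencing_factors,
                   key=lambda k: abs(self.influencing_factors[k]))


decision = ExplainedDecision(
    outcome="loan_denied",
    influencing_factors={"income_ratio": -0.62, "credit_history": -0.21},
)
print(decision.outcome, "mainly due to", decision.top_factor())
```

The design choice is the point: opacity becomes impossible to ship by accident, because the decision type itself cannot be constructed without its explanation.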

Mandating minimum viable explainability ensures that individuals retain agency within AI-mediated environments. Digital agency must serve as the foundation of regulatory frameworks because, without such agency, legal protections remain abstract and unenforceable, Rosenberg explains.

Learning from the history of privacy

The human right to privacy was recognized internationally in 1948, but it did not meaningfully shape digital regulation until systemic harms emerged. AI systems now operate in a similarly underregulated space, Rosenberg says, which necessitates anchoring AI regulation to digital agency; without this foundation, she warns, systemic harms will again outpace regulatory response.

In the area of privacy, for example, the United Nations’ Special Rapporteur role helped consolidate regulatory momentum already underway. A Special Rapporteur for AI & Human Rights would be tasked with accelerating global recognition and protections that have yet to fully emerge. Establishing this role requires a UN Human Rights Council resolution that has not been formally proposed, reflecting the delayed global response to technologies already impacting individual rights.

Privacy protections emerged reactively, and digital agency protections must be built proactively to prevent further erosion of autonomy. Recognizing digital agency as a human right is a crucial step to ensure that digital agency protections are established before dependencies erode autonomy beyond repair.

Enshrining digital agency

As AI evolves, protecting human agency becomes imperative. However, recognition must come first: enshrining digital agency as a human right will create the foundation for systemic accountability.

To get there, we need to pursue a three-part strategy that includes:

      1. Recognizing the right to digital agency – Concerned individuals and organizations need to advocate for the establishment of a UN Special Rapporteur for AI Governance and the formal recognition of digital agency as a protected human right. Advocates should also mobilize support from human rights organizations, policymakers, and legal experts to initiate and advance a UN Human Rights Council resolution affirming digital agency as fundamental to autonomy and dignity.
      2. Establishing minimum viable explainability standards – Next, supporters should define standards for AI systems that set clear guidelines for what individuals need to preserve agency. International collaboration is essential to develop these standards and integrate them into certification and compliance processes.
      3. Mandating explainability by design – Requiring that new AI systems embed explainability from the outset, ensuring usability and intelligibility, is a critical step. Regulatory frameworks must ensure that explainability becomes a baseline condition for AI deployment, with voluntary leadership strengthening early adoption.

Today, AI is reshaping the systems that govern individuals, determine rights, and affect autonomy. Protecting digital agency ensures that individuals can understand, navigate, and challenge the decisions that shape their lives. Securing digital agency now is essential to ensuring that technology strengthens human dignity rather than eroding it.


You can find more information here about where current regulations are headed concerning AI and its impact.

Legalweek 2025: Collaborative contracting takes a village of people, plus technology
/en-us/posts/technology/legalweek-2025-collaborative-contracting/ (Thu, 17 Apr 2025)

NEW YORK – Think about the number of agreements you sign in a day. It could be a privacy agreement when signing up for a new website, a services agreement when signing up for a lawn care service, or even signing for the check at a restaurant. The number of agreements a single person makes in a single week, month, or year can be staggering.

Now scale that up to an entire company. Take pharmaceutical company Organon, for instance. In 2021, Organon was spun off from pharma giant Merck, and suddenly its much smaller legal department had to learn how to handle contracts 鈥 and lots of them. Organon had pre-existing agreements in 37 different countries, all in different formats, and needed to figure out how to simplify and consolidate this hoard of contracts.

“Nobody knows where anything is – for every template we have, there are probably 10 to 15 versions of that template floating around,” recalls Stacy Lettie, Chief of Staff to the General Counsel at Organon. “That in itself creates an inefficiency that is so hard to overcome, it’s almost a little daunting.”

However, even the greatest challenges can be overcome. At Legalweek 2025, Lettie and Jamal Brown, Head of Legal Operations and Knowledge Management at JPMorgan Chase, explained how to simplify the complex when it comes to managing the explosion of contracts.

Their takeaway: As in life, it takes a village – and this village includes a combination of people plus technology.

The right tool for the job

Originally, Lettie and the Organon team used a mostly manual process to compare and contrast contract templates. At one point, she says, the team took over a whole conference room, printed out as many templates as they could find, and sorted them into piles that could be compared against one another.

Now, however, she says that technology provides another option, and it’s just a matter of finding the right tool for the job. “We need to lean into the technology to solve that inefficiency because that is one of the most solvable problems that we have in contract management,” Lettie explains. “But also, it doesn’t need to be perfect; it doesn’t need to be a template that solves everything. Let’s make it good enough.”

Indeed, AI technology is becoming a regular starting point for tasks – in fact, 82% of corporate C-suites report having used AI as a starting point for tasks, according to data from Thomson Reuters’ 2024 Future of Professionals Report.

However, not all technology is created equally, the Legalweek panel warns.

Brown says that at JPMorgan, for example, the team has experimented with two separate AI tools for contracting – and gotten two very different results. The first he called “a Cadillac – it was best-in-class and had every feature and functionality.” However, it provided a number of solutions to problems the department didn’t necessarily have. In response, the legal department decided to develop “a smaller, medium-value solution that does one thing really well.” And because this solution attacks a single problem, it has been a better value.

“My recommendation is, don’t boil the ocean in the first instance that you build,” Brown notes.

The people side of contract tech

With so many different types of contracts to deal with, however, technology is not the only consideration. Brown and Lettie also discussed how to balance standardization and customization 鈥 and importantly, how to make attorneys feel empowered to prioritize what鈥檚 important.

Lettie notes that at Organon, the legal department does not actually own the contracting processes; the business side does. The legal team provides the templates and the playbook, but those templates are not always followed, and the business side sometimes accepts the client side’s contract as the basis of the agreement.

What results is not a technological question, but a business one. “I felt that our younger lawyers in particular didn’t feel empowered,” Lettie says. “They had no basis to say, ‘No, I’m not going to review that.’”




In this case, she explains, tying contracting decisions to the business at large has helped her adopt a strong stance in dealing with the business. “There are certainly things you need to guard against, but honestly, if your non-disclosure is eight years versus three years, who knows, who cares?” she says, adding that it’s not worth an attorney’s time when they need to provide value to the business. “Reviewing an NDA is not any value to anybody.”

And it’s in these kinds of human-centric decisions that technology can play a value-added role, especially as the technology continues to evolve. “There is no reason to even really be having that my paper/your paper discussion,” Lettie notes. Today, contract technology can take a template, turn it into a playbook, and run it against that third-party contract. “It comes in, it goes in the engine, you get a comparison, you sign it or you don’t, and you move on with your day. It shouldn’t take longer than 30 minutes.”
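The template-against-third-party comparison step Lettie describes can be sketched with Python’s standard difflib. The clause text below is hypothetical, and a real contract engine would go further, applying a playbook of preferred and fallback positions rather than a raw text diff.

```python
# Minimal sketch: diff our template against the counterparty's paper to surface
# the clauses a reviewer actually needs to look at (e.g., an eight-year vs.
# three-year term). Clause text is hypothetical.
import difflib

our_template = [
    "Term: three (3) years.",
    "Governing law: New York.",
]
their_paper = [
    "Term: eight (8) years.",
    "Governing law: New York.",
]

# unified_diff marks removed template language with "-" and the counterparty's
# replacement language with "+"; unchanged clauses appear as context lines.
diff = list(difflib.unified_diff(our_template, their_paper,
                                 fromfile="our_template",
                                 tofile="their_paper",
                                 lineterm=""))
for line in diff:
    print(line)
```

Only the divergent term clause is flagged for review; the matching governing-law clause passes through as context, which is the triage behavior a reviewer wants.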

Brown agrees, noting that in evaluating solutions, he comes back to the question: “What do I want my lawyers to be working on? Working on a multi-million-dollar M&A deal, or working on a single paper?”

To help free up that time, JPMorgan Chase’s legal team has developed a suite of 11 GenAI models, most slated for knowledge management but usable across the entire product line. This provides a host of new capabilities, such as the ability to ask direct questions about contracts and documents. “We built models that have data across all of our products and services globally,” Brown adds. “It makes for a more intelligent way for the solution to interact with our internal professionals.”

That scale of technology build may not be right for every legal department, but both Lettie and Brown agree that legal departments should be thinking not just about what the technology can do, but about how it fits into the overall collaborative team picture. Departments also need to examine their ability to accept failure if a tool does not work.

Brown tells a story from two years ago about a contract vendor that had a fantastic pitch to solve a crucial problem. However, the in-house trial wasn’t going well. Rather than push forward unnecessarily, the in-house team decided to take a step back. “Thank god we did that,” Brown says, “because we were able to recover and prepare for the next wave, which was [Chat]GPT.”

The result is a lesson: The whole team needs to be on board to truly innovate. “Start small and fail fast,” Brown says today. “Do not be afraid to let leadership know that something’s not going right.”


You can find more coverage of Legalweek events over the years here.

How best to integrate climate-conscious clauses in supply chain contracts
/en-us/posts/international-trade-and-supply-chain/climate-conscious-clauses-supply-chain-contracts/ (Thu, 02 Feb 2023)

As companies increasingly use climate-conscious clauses in their supply chain contracts, several factors will play an important role, including companies’ implementation of public greenhouse gas (GHG) emissions targets or pledges and the increasing standardization of climate-related terminology.

While these contract clauses are not yet commonplace, companies should be aware that adding these provisions will introduce a host of new concepts, terminology, and practical implications, which may make their contract drafting and review process more complex.

There are two basic issues to consider when drafting or reviewing climate-conscious clauses in supply chain contracts, such as sale of goods contracts. First, the parties must consider the enforceability of the clauses; and then, the parties must draft the clauses to work well together with the rest of the contract.

Enforceability of climate-conscious clauses

When drafting or reviewing climate-conscious clauses, counsel must first consider their enforceability. For example, the parties should pay attention to climate-conscious clauses that set out their own contractual remedies, such as liquidated damages provisions. A liquidated damages clause requires the breaching party to pay a predetermined amount to the non-breaching party for the types of breaches that are specified in the clause. The predetermined amount can be a fixed amount, or an amount based on a predetermined formula.
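
The fixed-or-formula structure of a liquidated damages clause can be sketched in a few lines of Python. Everything here, including the class name, the per-ton rate, and the sample figures, is hypothetical and purely illustrative of how a predetermined formula might compute the payment:

```python
from dataclasses import dataclass

@dataclass
class LiquidatedDamagesClause:
    """Hypothetical model of a liquidated damages clause: a fixed sum,
    a per-unit formula (e.g., per ton of emissions over a cap), or both."""
    fixed_amount: float = 0.0
    rate_per_unit: float = 0.0  # e.g., dollars per ton above the agreed cap

    def damages(self, units_in_breach: float = 0.0) -> float:
        # Predetermined amount = fixed sum plus any formula-based component.
        return self.fixed_amount + self.rate_per_unit * units_in_breach

# A clause charging $50 per ton of GHG emissions above the contractual cap:
clause = LiquidatedDamagesClause(rate_per_unit=50.0)
print(clause.damages(units_in_breach=120))  # 120 tons over the cap -> 6000.0
```

The same structure accommodates a purely fixed amount by setting only `fixed_amount`.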

Liquidated damages clauses are only enforceable if they reflect the parties’ compensatory rather than punitive intent. The primary purpose of these clauses must be to compensate the non-breaching party for losses, not to punish the breaching party. This means that climate-conscious liquidated damages clauses that require the breaching party to make payment to the non-breaching party’s favorite environmental nonprofit organization (rather than directly to the non-breaching party) may be unenforceable.


For a liquidated damages clause to be enforceable, it also generally must specify that the liquidated damages are the exclusive remedy for the specified type of breach.

Internal consistency in contracts

Before inserting any new clauses into a contract form, counsel first should check how well they work with the contract's existing clauses. For example, most contracts include a general termination provision that allows a party to terminate the contract if the other party breaches it. These provisions are typically tailored to include different notice, cure period, and other requirements for different kinds of termination-triggering events. In addition to breach of contract, these may include, for example, a party's insolvency or change in control.

Broadly drafted general termination provisions typically include catch-all language to capture all breaches of contract that are not more explicitly set out as a termination-triggering event in the clause. Many broadly drafted general termination provisions may therefore already cover breaches of newly included climate-conscious obligations.

Problems can arise if, in addition to a general termination clause, the contract also includes a dedicated clause providing early termination rights for breach of climate-conscious obligations with its own requirements. Unless climate-related breaches are specifically carved out from the general provision, it may be unclear which provision applies.

A thorough review and comparison of the contract’s climate-conscious and other clauses will enable the parties to detect these and other unintended inconsistencies. Indeed, other unintended inconsistencies can arise if the contract includes such items as:

      • Different standards to determine whether different types of breaches have occurred. For example, the contract might include a materiality qualifier for the breach of the supplier’s delivery obligations but not for the breach of the supplier’s climate-conscious obligations.
      • A dedicated limitation of liability clause that aims to limit the types or amounts of damages recoverable for the breach of climate-conscious obligations in addition to a general limitation of liability clause.
      • A dedicated indemnification provision for breach of climate-conscious obligations in addition to a general indemnification provision.
      • Special price adjustment provisions that are triggered by climate-related events as well as a general price adjustment clause.
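
The kind of consistency review described above can be illustrated with a minimal Python sketch; the clause inventory, field names, and carve-out flag are all hypothetical, standing in for whatever clause tagging a real review process would use:

```python
# Hypothetical clause inventory: each entry tags a clause with its topic and
# whether it is a general provision or a dedicated climate-conscious one.
clauses = [
    {"topic": "termination", "scope": "general", "carves_out_climate": False},
    {"topic": "termination", "scope": "climate", "carves_out_climate": False},
    {"topic": "limitation_of_liability", "scope": "general", "carves_out_climate": True},
    {"topic": "limitation_of_liability", "scope": "climate", "carves_out_climate": False},
]

def find_overlaps(clauses):
    """Flag topics where a dedicated climate clause coexists with a general
    clause that does not carve out climate-related breaches."""
    by_topic = {}
    for c in clauses:
        by_topic.setdefault(c["topic"], []).append(c)
    flagged = []
    for topic, group in by_topic.items():
        has_dedicated_climate = any(c["scope"] == "climate" for c in group)
        general_without_carveout = any(
            c["scope"] == "general" and not c["carves_out_climate"] for c in group
        )
        if has_dedicated_climate and general_without_carveout:
            flagged.append(topic)
    return flagged

print(find_overlaps(clauses))  # ['termination']
```

Here the termination topic is flagged because both a general and a dedicated climate clause apply with no carve-out, while the limitation-of-liability pair is consistent.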

As climate-conscious clauses in supply chain contracts become more commonplace, companies should understand how adding these provisions may complicate their contract drafting and review process, and they should prepare for that now.


This article was written in conjunction with the Practical Law Commercial Transactions group. For more information on including climate-conscious clauses in supply chain contracts, you can contact us here.

]]>
https://blogs.thomsonreuters.com/en-us/international-trade-and-supply-chain/climate-conscious-clauses-supply-chain-contracts/feed/ 0
Using human-centered design to power AI for contract analysis /en-us/posts/legal/human-centered-ai-contract-analysis/ https://blogs.thomsonreuters.com/en-us/legal/human-centered-ai-contract-analysis/#respond Thu, 12 May 2022 12:52:46 +0000 https://blogs.thomsonreuters.com/en-us/?p=51065 The incredible developments happening today in artificial intelligence (AI) and natural language processing (NLP) are enabling increasingly sophisticated and innovative uses in legal tech, regulatory tech, tax & accounting, and corporate work. Further, these innovations are challenging our understanding of knowledge work on one hand, and of collaboration between AI systems and human experts on the other.

The AI industry has started to use the umbrella term human-centered AI to describe the methods and research questions around such concepts as:

      • humans-in-the-loop systems;
      • how AI features are explained and understood;
      • research on trust and mental models on the interaction with AI systems;
      • balancing human domain expertise and AI analysis;
      • collective intelligence; and
      • collaborative decision-making.

Yet, how can a human-centered approach to design, data science experimentation, and agile development be applied to a real-world use case, such as AI-powered contract analysis?

Empowering contract analysis

Contract analysis itself encompasses various activities around contract review, clause extraction, comparisons of positions, deviation detection, and risk assessment.

In our research, we studied how review and reporting on contracts is structured into finding answers to specific questions, such as which entities are involved in a contract or what the different parties' obligations are, the answers to which are based on the interpretation and assessment of related legal language. State-of-the-art information extraction methods that apply NLP techniques to identify and extract specific clauses, positions, or obligations can greatly assist such task-driven review.
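
As a toy illustration of task-driven extraction, the sketch below maps review questions to simple regular expressions. A production system would rely on trained NLP models rather than hand-written patterns, and the contract text, questions, and patterns here are invented:

```python
import re

# A toy contract excerpt; real systems would process full documents.
contract = """
1. Parties. This Agreement is between Acme Corp and Beta LLC.
2. Term. The initial term is 24 months.
3. Obligations. The Supplier shall deliver goods within 30 days.
"""

# Hypothetical review questions mapped to simple extraction patterns.
patterns = {
    "Which entities are involved?": r"between (.+?) and (.+?)\.",
    "What are the supplier's obligations?": r"The Supplier shall (.+?)\.",
}

def extract(contract_text, question):
    """Answer a review question by running its associated pattern."""
    match = re.search(patterns[question], contract_text)
    return match.groups() if match else None

print(extract(contract, "Which entities are involved?"))
# ('Acme Corp', 'Beta LLC')
```

The point is the shape of the interaction, specific questions driving targeted extraction, not the extraction technique itself.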

However, this review and analysis is not a one-way street. While a reviewer benefits from AI assistance, user input, such as annotation, acceptance or rejection of AI-powered suggestions, or flagging of potential legal issues, can serve as valuable feedback into the system. Ideally, an end user might engage in a dialog with the machine that not only speeds up the review but uses that feedback to improve extraction and analysis algorithms.
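
That feedback loop can be sketched as a simple log of reviewer decisions. The class, method names, and suggestion IDs below are hypothetical; a real system would persist these events and feed accepted and rejected labels back into model training:

```python
from collections import Counter

class FeedbackLog:
    """Hypothetical sketch: record reviewer actions on AI suggestions so
    they can later serve as training signal for the extraction models."""

    def __init__(self):
        self.events = []

    def record(self, suggestion_id, action):
        # action is one of "accept", "reject", or "flag"
        self.events.append({"suggestion": suggestion_id, "action": action})

    def acceptance_rate(self):
        """Share of accepted suggestions among accept/reject decisions."""
        counts = Counter(e["action"] for e in self.events)
        decided = counts["accept"] + counts["reject"]
        return counts["accept"] / decided if decided else 0.0

log = FeedbackLog()
log.record("clause-17", "accept")
log.record("clause-18", "reject")
log.record("clause-19", "accept")
print(log.acceptance_rate())  # 2 accepts out of 3 decisions
```

A metric like the acceptance rate also gives the team a rough signal of how well suggestions match reviewer expectations over time.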

While the initial definition and design of an AI-powered system ideally starts with an in-depth understanding of the problem space, user needs, and user goals, successful product innovation builds on lean experimentation and co-creation with domain experts and end users. In this way, we can structure the design process in a participatory, human-centered way that enables various stakeholders to contribute to, evaluate, and shape the requirements and design of the system.

Moving to lean discovery

Design and AI communities are sharpening their toolkits for problem discovery and solution exploration. Various methods and techniques borrowed from Human-Computer Interaction (HCI) and User Experience (UX) can also be applied to design and experimentation methods for AI innovation.

First, however, we start with a focus on understanding the information flow and the aspects of distributed cognition that occur between the different stakeholders and end users involved in contract review and analysis.

Through shadowing and co-creation workshops, user researchers elicit crucial details about contract review processes, specific cognitive tasks, and the handling of information, as well as common pain points.

When looking at contract analysis workflows, we might observe core activities in more detail, such as the comparison of a contract under review to guidance and other documents, as well as to legal professionals' own expertise and knowledge. Lawyers or paralegals might review contracts based on internal documents, such as a heads of terms, to identify acceptable and unacceptable positions in comparison to a client-specific playbook. They may also compare a contract to standard or precedent contracts.

Innovation and research teams can benefit strongly from working closely with subject matter experts, such as legal experts on commercial real estate. Getting a grasp of the legalese and terminology involved in the review, as well as the legal weight of specific terms, such as, in a lease review, the difference between putting or keeping a premises in "good condition," "good repair," or "substantial repair," might prove particularly useful for framing AI research questions that could help provide the right answers to legal professionals.

Enabling rapid experimentation

Interdisciplinary teams of data scientists, designers, and engineers can explore various alternative solutions and evaluate different aspects of this ongoing process. Data scientists research state-of-the-art AI techniques; engineers explore aspects of production and how algorithms are put into action; and designers evaluate requirements and investigate how to best translate capabilities to the end-user.

A guiding principle for this kind of human/AI collaboration focuses experimentation on AI-assisted workflows that "keep the human in the loop." As Ben Shneiderman points out, automation while maintaining some level of human control is not a contradiction. While automating tasks such as searching for relevant clauses, clause classification, and fact extraction, ideally as much control as possible still resides with the legal professional who is the end user of the system. AI features need to be made accessible and comprehensible for a non-technical audience, so that it remains entirely up to the legal professional to decide which suggestions to use in a report or further analysis. Ideally, this process would be easy to use and able to fall back on more manual workflows and simpler mechanisms, such as simple keyword search or Ctrl-F.
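
The Ctrl-F-style fallback mentioned here can be illustrated in a few lines of Python. The sample contract text is invented, and a real product would of course layer a mechanism like this beneath its AI-assisted search:

```python
def keyword_search(text, term):
    """Ctrl-F-style fallback: return (line_number, line) pairs for every
    line containing the search term, matched case-insensitively."""
    term = term.lower()
    return [(i, line) for i, line in enumerate(text.splitlines(), 1)
            if term in line.lower()]

# Invented sample text for illustration.
contract = ("1. Term. The initial term is 24 months.\n"
            "2. Termination. Either party may terminate for material breach.")
print(keyword_search(contract, "termination"))
# [(2, '2. Termination. Either party may terminate for material breach.')]
```

Keeping such a simple mechanism available gives the legal professional a trusted manual path whenever the AI-assisted route is unclear or unconvincing.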

Imagine the notion of a task-driven review that essentially lets the user easily select a number of questions he or she wants to analyze in a contract. Selecting specific review tasks and consciously assigning them to the machine can serve as a mediator, both to explain the capability of the system and to allow easy interaction between the user and the underlying AI models. Human-centered design methodology provides a framework to run experiments, making use of mock-ups and semi-functional prototypes that allow end users to explore data science questions, as well as the interaction with and display of model output.

User testing and productionization

It can be particularly helpful to evaluate any contract analysis system as early as possible, with input from domain experts and legal professionals. Indeed, you might want to focus on the quality of the AI models, the end-user experience, and the perceived quality of the system. Experimental user studies showed a gain in efficiency and increased levels of perceived task support when using an AI-powered system, as compared to a manual workflow. And of course, early user testing with law firms and legal professionals can inform the design and iterations of any new product.

However, it is crucial to leave enough lead time for designers, researchers, and business stakeholders to flesh out the details of the solution and define the requirements in the first place, before they move on to development. With sufficient resources and time, further questions can be explored as the development cycle goes forward.

The idea of "starting small and scaling up later" is yet another core concept that might help teams focus resources early on. In the context of contract analysis, teams might want to focus on specific document review use cases, such as due diligence, re-papering, or contract negotiation, or on specific practice areas or domains, such as real estate, service license agreements, or employment records. Once a system works, it can be scaled to other use cases, and capabilities can always be added post-launch.

The future of human-centered AI

AI systems offer fantastic opportunities to support and assist professional workflows. It is crucial, however, not to ignore the incredible value of human expertise and professionals' ability to relate information to a broader context and to "connect the dots."

Taking a human-centered approach, we can inform the way forward into a future for knowledge work and professional services that builds on collective intelligence and human/AI collaboration.

For legal tech innovation, this human-centered approach and a focus on systems that "keep the human in the loop" seem particularly appropriate. Legal evaluation, risk assessment, and legal advice require assisting systems that are explainable and interpretable. Audit trails and design patterns that allow end users to overwrite machine suggestions, offer feedback, and re-train models will go far to support better understanding of and trust in the process, and ultimately increase the adoption of AI systems that assist legal work.

]]>
https://blogs.thomsonreuters.com/en-us/legal/human-centered-ai-contract-analysis/feed/ 0