On-Demand Webinars Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/on-demand-webinars/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

The efficiency imperative: AI as a tool for improving the way lawyers practice /en-us/posts/ai-in-courts/improving-lawyers-practice/ Wed, 18 Mar 2026 17:45:16 +0000

Key insights:

      • AI brings improved efficiency: AI accelerates tasks like document review and research, freeing lawyers to pursue higher-value work for clients.

      • AI does the work of a team of lawyers: AI levels the playing field for small law firms and solo practitioners by providing additional capacity without adding headcount, allowing fewer lawyers to do the work of many.

      • Yet AI still needs guardrails: Lawyers must remain accountable, with human oversight and review to ensure that AI outputs are accurate and that nuance and professional judgment are preserved.


AI is no longer a theoretical concept for legal professionals, nor is it a nice-to-have for law firms seeking to impress their clients with improved efficiency and cost savings. The practical question now is how to adopt AI in ways that improve lawyers' speed and capacity without compromising accuracy, confidentiality, or professional judgment.

The strongest near-term value shows up where modern practice is most strained: high-volume inputs and relentless timelines. In that environment, AI can be most helpful as an accelerant for the first pass through large bodies of material.

The possibilities, opportunities, and challenges of using AI in this way were discussed by a panel of experts in a recent webinar from a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

One panelist, Mark Francis, a partner at Holland & Knight, described one way that AI can be an enormous help. “Anything where we’re dealing with large volume of materials that need to be reviewed [such as] large sets of documents, large sets of legal research, large sets of discovery. Obviously, AI can be leveraged in all of those circumstances.” That framing is important because it anchors AI’s utility in a familiar workflow: review, triage, and synthesis at scale.

AI also has a role earlier in the workflow than many attorneys expect. In addition to sorting and summarizing, it can help generate starting structures. For lawyers drafting motions, client advisories, demand letters, contract markups, or internal investigation memos, the hardest step can be getting traction from a blank page. “It’s really good at content or idea generation,” Francis said, adding that lawyers can ask AI to “generate some ideas for me on this topic, or generate an outline of a document to cover a particular issue.”


“AI is definitely going to benefit some of the small law firms who cannot actually afford the workforce. AI can be an extension when it comes to the automation.”


Of course, that does not mean letting an AI model decide what the law is; rather, it means using AI to produce an initial outline, identify possible issues to consider, or propose alternate ways to organize an argument. Then, the attorney should apply their own judgment to accept, reject, refine, and verify the AI’s output.

For legal teams, the ideal mindset is that AI can compress the time between intake and a workable first draft, whether that draft is a research plan, a deposition outline, a set of contract fallback positions, or a motion framework. However, speed is only valuable if it enables careful lawyering rather than shortcuts.

Efficiency that scales down, not just up

AI’s impact is not limited to large law firms with dedicated tech and innovation budgets. In fact, the benefits may be most transformative for smaller legal organizations that feel every hour of administrative drag and every unstaffed matter. Panelist Ashwini Jarral, a Strategic Advisor at IGIS, underscored how broad AI adoption already is. “AI is already being used in a lot of legal research, contract analysis, and in office operations,” Jarral explained. “Whether that’s in a small law firm or a large law firm, everybody can benefit from that automation with this AI.”

For many practices, that list maps directly onto the work that consumes lawyers’ time without always adding commensurate value: repetitive research steps, first-pass contract review, intake and scheduling, matter administration, and other operational tasks.

Historically, scale favored organizations that could hire more associates, paralegals, and support staff to push volume through the pipeline. Now, AI offers a different form of leverage: additional capacity without adding headcount. “It is definitely going to also benefit some of the small law firms who cannot actually afford the workforce,” Jarral said, adding that “AI can be an extension when it comes to the automation.” For a solo or small firm, that extension can show up as faster first-pass review of contracts, quicker summarization of records, more consistent intake workflows, and reduced time spent on repetitive back-office tasks.

At the same time, it is crucial to be clear-eyed about what is being automated. While AI can help deliver efficiency, it does not offer legal judgment itself. The legal profession still must decide, matter by matter, what level of review is required and what risks are acceptable.


“Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”


And that’s where implementation discipline becomes a strategic differentiator. Law firms that treat AI as a general-purpose shortcut tend to create risk, while firms that treat AI as a workflow component, with guardrails, review steps, and clear accountability, are more likely to capture value without compromising quality.

The non-negotiable: lawyers remain accountable

Any serious conversation about AI in legal practice must address these limits, panelists agreed. The Hon. Linda Kevins, a Justice on the Supreme Court in the 10th Judicial District of New York (Suffolk County), offered the most direct articulation of the boundary line: “Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”

Indeed, legal work is saturated with nuance. The same set of facts can carry different weight depending on jurisdiction, judge, forum, procedural posture, and the client’s goals and risk tolerance. Even when the law is clear, the right action often is not. Striving for true justice requires judgment about timing, framing, business consequences, reputational risk, and settlement dynamics. Those are not merely inputs for an AI to process; they are human decisions that define legal representation.

As the webinar made clear, this is the point at which responsible use becomes practical rather than abstract. If AI is used for research support, contract analysis, or document review, lawyers need an explicit approach to verification and oversight. The outputs may look polished and sound confident, but confidence is not accuracy, and professional responsibility does not shift to a vendor or an AI model. Human review is not a ceremonial or perfunctory formality. Rather, it is the core control that protects clients and the court, and it is the inflection point that turns AI from a novelty into a defensible tool.

In practice, keeping a human in the loop means deciding where AI can assist and where it cannot. It also means reserving an attorney’s time for the decisions that carry legal and ethical consequences, and building repeatable habits that prevent teams from drifting into overreliance on AI, especially under deadline pressure.

The legal profession can capture real benefits from AI, including speed, scalability, and improved access, but only if it adopts the technology in a way that preserves what Justice Kevins highlighted: training, nuance, and human accountability.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here.

Webinar: World Day Against Trafficking in Persons /en-us/posts/events/webinar-world-day-against-trafficking-in-persons/ Fri, 23 May 2025 15:17:48 +0000 Join the Thomson Reuters Institute for a comprehensive virtual training session in observance of World Day Against Trafficking in Persons. This event will take place on Wednesday, July 30th at 10:00 AM CST, bringing together experts from various sectors united in the fight against human trafficking.

Our distinguished panel includes experts from the Thomson Reuters Social Impact Institute and nonprofit partners Spotlight and New Friends New Life. Additionally, a representative from U.S. Homeland Security Investigations will provide insights into the collaborative efforts required to tackle this critical issue.

This training aims to educate participants on the complexities of human trafficking, the impact on victims, and effective strategies for prevention and intervention. Attendees will gain valuable knowledge on the roles of various stakeholders, including non-profit organizations, law enforcement, and the community, in addressing trafficking and supporting survivors.

We invite individuals and organizations committed to making a difference to join this informative session, as we work together to create a world free from exploitation.

Watch the full recording below!

Deepfakes on trial: How judges are navigating AI evidence authentication /en-us/posts/ai-in-courts/deepfakes-evidence-authentication/ Thu, 08 May 2025 17:03:17 +0000 https://blogs.thomsonreuters.com/en-us/?p=65811 AI-generated evidence presents significant challenges for courts today, as judges and attorneys grapple with determining the authenticity, validity, and reliability of digital content that may have been artificially created or manipulated. The rapid advancement of generative AI (GenAI) technology has outpaced the development of reliable detection tools, and now GenAI is testing traditional evidentiary frameworks through sophisticated deepfakes and AI-altered materials that are increasingly difficult to distinguish from genuine evidence.

There are significant challenges involved with relying on automated tools to detect and authenticate evidence, says Maura Grossman, Research Professor at the University of Waterloo in Ontario, Canada. “We aren’t at the place right now where we can count on the reliability of the automated tools,” she explains, adding that most computer scientists consider this a tricky problem.

Defining types of AI evidence

AI-generated evidence falls into two distinct categories. First, acknowledged AI-generated evidence is openly disclosed as created or modified by AI, such as accident reconstruction videos or expert analysis tools. These applications are transparent about their AI origins and their creation or modification methods, which allows courts to evaluate them as such.

Second, unacknowledged AI-generated evidence is presented as authentic and unconnected to any AI creation or manipulation when it is, in fact, AI-generated or manipulated. Such examples include deepfake videos, fabricated receipts, and manipulated photographs. This type of manipulated evidence poses significant challenges for detection and authentication in court proceedings.

A joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI) recently published two bench cards as practical resources for judges who may face the evidentiary challenges presented by deepfakes and AI-generated evidence. These tools help judges make real-time decisions when confronted with potentially AI-generated materials. The bench cards provide structured questions about evidence sources, chain of custody, and potential alterations to help guide judicial evaluation.

Authenticating unacknowledged AI-generated evidence

The current legal framework for authentication of AI-generated evidence sets a fairly low bar for admissibility, according to a Senior Specialist Legal Editor at Thomson Reuters Practical Law. Generally, evidence is admissible when a party provides enough information that a reasonable jury could find the evidence is more likely than not authentic. This is often done by offering extrinsic evidence. For example, a party seeking to authenticate a voice recording may offer the testimony of a witness who is familiar with the speaker’s voice. Federal Rule of Evidence (FRE) 901(b) offers several more examples of authentication methods.

Judges usually make the authentication determination under FRE 104(a), deeming evidence authenticated if a reasonable jury could find it more likely than not genuine. Indeed, judges have the authority and responsibility to act as gatekeepers in court trials to make preliminary decisions about evidence admissibility before it goes to a jury and to assess witness credibility, which remains crucial in evaluating evidence.

However, in some circumstances, the jury must determine whether the evidence is authentic. Specifically, when a party disputes the authenticity of evidence and there is sufficient evidence for a reasonable jury to find in favor of either party, a question of fact exists. In this instance, FRE 104(b) requires the court to leave the authentication determination to the jury.

Judge Yew of the Santa Clara County (Calif.) Superior Court says that the tools judges already possess to determine authenticity are useful, but the landscape is evolving. In particular, the so-called liar’s dividend, in which authentic evidence is falsely claimed to be AI-generated, is a current challenge for which the existing rules may not be sufficient.

Dr. Grossman agrees, noting that courts will need to develop strategies to address this issue, including requiring parties to provide evidence to support their claims that evidence is fake. “I think the courts will see [the liar’s dividend] sooner than the deepfakes.”

Recent cases addressing AI and evidence

Several significant court decisions have shaped the treatment of AI-generated evidence, including State of Washington v. Puloka, in which the court excluded AI-enhanced video evidence due to lack of reliability. In contrast, a California state court rejected a challenge to video evidence in Huang v. Tesla that centered on the vague possibility that the video could have been a deepfake.

The increasing sophistication of deepfakes, both audio and video, poses significant challenges for judges and attorneys in detecting and authenticating evidence. “GenAI generates content using two algorithms, one that creates content and one that distinguishes it from reality, creating a constant feedback loop that improves the AI’s ability to generate realistic fakes,” Dr. Grossman explains.

Key questions judges should ask

To address the challenges of AI-generated evidence, Dr. Grossman suggests that judges ask themselves three key questions:

      • Is the evidence too good to be true?
      • Is the original copy or device missing?
      • Is there a complicated or implausible explanation for its unavailability or disappearance?

In addition, Judge Yew advises judges to consider the credibility of the witness and order in-person appearances when necessary. And Dr. Grossman says that substantial scientific work is necessary before courts should trust AI tools.

Megan Carpenter, Dean and Professor of Law at the University of New Hampshire’s Franklin Pierce School of Law, is pushing for the creation of a comprehensive framework for the evaluation and ongoing development of AI-powered legal tools. “Legal AI tools should undergo the same sort of rigorous training and testing that humans undergo to do the same kind of work,” Carpenter says, adding that this approach would ensure reliability while adapting to evolving technology.


Check out the next webinar in the series here.

Chatbots for justice: Building AI-powered legal solutions step by step /en-us/posts/ai-in-courts/chatbots-for-justice-building-ai-powered-legal-solutions/ Wed, 12 Mar 2025 22:39:16 +0000 Low-income people in the United States can’t afford adequate legal help in 92% of civil matters, and the promise of AI could potentially make legal services more affordable, according to the Legal Services Corporation. In fact, several court systems and nonprofits are demonstrating this promise, a couple of which were recently highlighted in a webinar series hosted by the National Center for State Courts (NCSC).

For example, the People’s Law School developed the chatbot Beagle+ to assist people with step-by-step guidance on everyday legal problems. Jackson, Digital & Content Lead at the People’s Law School, led the efforts to create Beagle+ with technical assistance from McGrath, Founder of Tangowork. And the Alaska Court System (ACS) used a grant from the NCSC to develop an AI-powered chatbot called the Alaska Virtual Assistant, or AVA. Jeannie Sato, Director of Access to Justice Services at ACS, worked with Martin, CEO and Founder of LawDroid, to develop the tool.

How courts can successfully experiment with AI

Jackson, McGrath, Sato, and Martin all offered step-by-step guidance on how courts and nonprofits can experiment with and use AI successfully within court systems.

Step 1: Determine the problem

When starting a generative AI (GenAI) legal assistance project, it is crucial to first pinpoint the specific legal needs and challenges faced by your target audience. McGrath noted that he sees several common examples, including providing public access to legal information, creating internal resources like bench books for judges, and automating court document preparation.

To properly identify the problem, conduct thorough user research to understand pain points related to accessing and applying legal information. For instance, Martin suggests starting by speaking with court staff. “I think we sometimes get caught up in the excitement about wanting to throw AI at the problem and create a solution,” Martin explains. “And there are many use cases, but I think the part that’s really important is to meet with your staff, meet with everyone who’s being impacted by the burden of work, and then determine, based on that, what is the best choice.”

Taking the time upfront to clearly define the problem will help ensure that any AI solution being developed is truly meeting a demonstrated need.

Step 2: Craft a vision

Shifting from problem identification to crafting a vision for the GenAI-powered solution is crucial. The People’s Law School’s Beagle+ chatbot illustrated this well. “Begin with the end in mind,” says Jackson. “When you begin a project, keep in mind what you’re trying to achieve and what success looks like because that’s going to be different for each person.”

Jackson further described how in 2018, the initial vision was to create a chatbot capable of intelligently answering questions about consumer and debt law in British Columbia. Today, while that vision is realized, the ability of GenAI technology to adapt and improve over time necessitates a continuous and evolving vision.

Step 3: Allocate realistic resources

Assessing available resources is crucial before embarking on a GenAI project, with a realistic evaluation considering such factors as existing legal content, technological capabilities, staff expertise and capacity, and budget.

It’s important to examine the state of the organization’s existing legal information, including its documents and web pages, to determine their quality and consistency. Indeed, conflicting information across sources can often confuse GenAI models.

For staff capacity, Sato explains how the ACS started with a small team of people, which included the court administrator, the chief technology officer, a webmaster, and two to three staff attorneys, who were necessary for content review, testing, and feedback. It is not uncommon for an initial project to consume about 30% of each team member’s time.

Technological expertise is also a key consideration in resource assessment. In fact, Martin says this underscores the importance of working with a technology partner that can help navigate the different choices available, including AI model selection, vector databases, and embedding strategies. While some may consider self-hosting large language models (LLMs) to reduce costs, the expenses for setup and maintenance often outweigh the benefits compared with using established services like OpenAI.
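To make the vector-database and embedding choices concrete, the core retrieval step can be sketched in a few lines of Python. This is a toy illustration only: the bag-of-words "embedding" below is a stand-in for a real embedding model, and the document snippets are invented for the example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' -- a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank stored documents by similarity to the query, as a vector database would."""
    query_vec = embed(query)
    ranked = sorted(documents, key=lambda doc: cosine(query_vec, embed(doc)), reverse=True)
    return ranked[:top_k]

# Invented self-help snippets standing in for a court's knowledge base
docs = [
    "How to file a small claims case in district court",
    "Probate estate forms for self-represented litigants",
    "Court holiday schedule and office hours",
]
print(retrieve("filing a small claims case", docs, top_k=1))
```

A production system would swap `embed` for calls to an embedding API and store the vectors in a dedicated vector database, but the ranking logic is essentially the same.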

Financial resources are also a consideration, of course; however, it is worth noting that the cost of OpenAI tokens is often surprisingly low compared to other project expenses. For the creators of Beagle+, for example, using OpenAI’s tool has cost no more than $75 per month, according to Tangowork’s McGrath.


Courts can explore the possibilities of AI tools in tackling their specific legal challenges through careful, structured experimentation.


Addressing common concerns

Our experts say that two common concerns often arise when considering the use of GenAI to close justice gaps: the first is the need for multilingual capabilities, and the second is how to handle inaccurate AI-generated information, known as hallucinations.

“Advanced LLMs like GPT-4 demonstrate impressive multilingual capabilities and are able to understand and respond in numerous languages on-the-fly without requiring additional training or configuration,” explains McGrath. “Multilingual support is a key advantage of modern LLMs, enabling chatbots to serve diverse populations with minimal additional development effort.”

However, hallucinations are a significant concern when using LLMs for legal applications. Fortunately, the combination of several advanced strategies can mitigate hallucinations:

      • First, grounding responses in retrieved context through techniques like retrieval-augmented generation (RAG) can help tether outputs to verified source material.
      • Second, careful prompt engineering and relevancy scoring can further constrain responses.
      • And finally, automated checks that compare model outputs to source documents can flag potential hallucinations.
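As a minimal sketch of that last strategy, an automated check can compare an answer against the source material and flag sentences with little support. This example uses simple word overlap and invented source and answer text; real systems typically rely on embedding similarity or entailment models instead:

```python
def unsupported_sentences(answer: str, sources: list[str], threshold: float = 0.5) -> list[str]:
    """Flag answer sentences whose words are mostly absent from the sources.

    A crude stand-in for an automated hallucination check: each sentence is
    scored by the fraction of its words that appear in the source material.
    """
    source_words = {w.strip(".,").lower() for w in " ".join(sources).split()}
    flagged = []
    for sentence in answer.split(". "):
        words = [w.strip(".,").lower() for w in sentence.split()]
        if not words:
            continue
        support = sum(w in source_words for w in words) / len(words)
        if support < threshold:
            flagged.append(sentence)
    return flagged

# Invented source passage and chatbot answer for illustration
sources = ["Tenants must receive 30 days notice before a rent increase."]
answer = "Tenants must receive 30 days notice. Landlords may raise rent by phone at any time."
print(unsupported_sentences(answer, sources))  # flags only the second, unsupported sentence
```

Sentences flagged this way would be routed to the human-review step described below rather than shown to users unverified.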

At the same time, manual expert review, known colloquially as keeping a human in the loop, remains crucial even with automated safeguards in place. Therefore, it is key to periodically sample responses for human verification and to focus more intensive review on higher-risk conversations.

Creating a successful AI-powered chatbot for legal information requires careful consideration of the several steps cited above. By following these actions and staying up to date with the latest developments in AI technology, courts and organizations working to close the justice gap can create effective and responsible chatbots that provide valuable legal information to those who need it most.


You can register here for the upcoming NCSC webinar on March 19.

Webinar: Enhancing Metrics for the General Counsel’s Office: Elevating Your Department’s Story (Part II) /en-us/posts/events/webinar-enhancing-metrics-for-the-general-counsels-office-elevating-your-departments-story-part-ii/ Thu, 20 Feb 2025 17:37:12 +0000 https://blogs.thomsonreuters.com/en-us/?post_type=lei_events&p=65033 In today’s dynamic business environment, legal departments play a critical role in protecting the organization’s interests and enabling its strategic goals. However, traditional performance metrics, often focused solely on cost and time, can inadvertently paint a picture of legal as a cost center, obscuring the true value and strategic contributions of the team. Are your current metrics truly reflecting the impact your legal department has on the bottom line and the overall success of the business?

This exclusive two-part webinar series, “Enhancing Metrics for the General Counsel’s Office,” is designed to equip General Counsel and legal leaders with the knowledge and practical tools to effectively demonstrate the real value of their legal departments. We’ll delve into how to move beyond simply tracking expenses and hours, and instead, focus on metrics that showcase the proactive, strategic, and business-enabling work of your legal team.

In this series, you will learn how to:

  • Optimize existing metrics: Move beyond cost and time to showcase both efficiency and effectiveness.
  • Transform your metrics: Incorporate “Protect and Enable” metrics, highlighting the proactive and strategic contributions of your legal team.
  • Communicate your value: Distill complex data into a concise and compelling narrative for senior leadership, including a practical exercise on creating a powerful one-slide summary.

Webinar: Enhancing Metrics for the General Counsel’s Office: Aligning Your Value to the Business (Part I) /en-us/posts/events/webinar-enhancing-metrics-for-the-general-counsels-office-aligning-your-value-to-the-business-part-i/ Thu, 20 Feb 2025 17:31:58 +0000 https://blogs.thomsonreuters.com/en-us/?post_type=lei_events&p=65027 In today’s dynamic business environment, legal departments play a critical role in protecting the organization’s interests and enabling its strategic goals. However, traditional performance metrics, often focused solely on cost and time, can inadvertently paint a picture of legal as a cost center, obscuring the true value and strategic contributions of the team. Are your current metrics truly reflecting the impact your legal department has on the bottom line and the overall success of the business?

This exclusive two-part webinar series, “Enhancing Metrics for the General Counsel’s Office,” is designed to equip General Counsel and legal leaders with the knowledge and practical tools to effectively demonstrate the real value of their legal departments. We’ll delve into how to move beyond simply tracking expenses and hours, and instead, focus on metrics that showcase the proactive, strategic, and business-enabling work of your legal team.

In this series, you will learn how to:

  • Optimize existing metrics: Move beyond cost and time to showcase both efficiency and effectiveness.
  • Transform your metrics: Incorporate “Protect and Enable” metrics, highlighting the proactive and strategic contributions of your legal team.
  • Communicate your value: Distill complex data into a concise and compelling narrative for senior leadership, including a practical exercise on creating a powerful one-slide summary.

Chatbots for justice: The impact of AI-driven tech tools for pro se litigants /en-us/posts/ai-in-courts/chatbots-pro-se-litigants/ Wed, 12 Feb 2025 15:14:13 +0000 Access to justice is a fundamental pillar of a fair and equitable society, yet only one in four respondents to a recent survey agreed that courts are doing enough to help individuals navigate the court system without an attorney. Many of these pro se litigants still face substantial barriers to accessing legal assistance.

However, AI-powered chatbots now offer a promising solution by providing timely, tailored legal information to those in need; two early examples are the chatbots Beagle+ and AVA.

Beagle+ makes Canadian law accessible in plain language

Beagle+ is a chatbot powered by generative AI (GenAI) and developed by the People’s Law School. The chatbot assists people with step-by-step guidance on everyday legal problems by allowing users to input their legal concerns in their own words. The chatbot responds with appropriate information, links to relevant resources, and potential next steps. Jackson, Digital & Content Lead at the People’s Law School, led the efforts to create Beagle+ with technical assistance from McGrath, Founder of Tangowork. Jackson and McGrath worked together to launch Beagle+ in early 2024.

Central to the success of Beagle+ is its thoughtful design and user-centric approach. The team prioritized creating a system that is both empathetic and informative with a primary focus on providing users with clear, actionable guidance. The chatbot’s ability to integrate seamlessly with existing web resources without requiring dual data maintenance is another significant achievement because it reduces operational overhead while maintaining up-to-date legal content.

Although the tool is successful, Jackson and McGrath faced challenges throughout the developmental journey. One key barrier to overcome was ensuring the chatbot did not give incorrect legal advice from its training data. Another challenge was improving the system’s ability to handle nuanced legal questions. To address these challenges, the team used iterative testing and refinement to achieve a 99% accuracy rate in legal conversations.

Alaska state court develops its first chatbot

The Alaska Court System (ACS) partnered with LawDroid, a legal technology company that has pioneered access-to-justice chatbots since 2016, and used a grant from the National Center for State Courts to develop an AI-powered chatbot called the Alaska Virtual Assistant, or AVA. The tool, which is in the final testing phase before launching, will help self-represented litigants navigate probate estate cases.

AVA uses enhanced retrieval-augmented generation, which combines information retrieval with GenAI for improved accuracy and context in responses based on the court’s existing self-help web content, according to Martin, CEO and Founder of LawDroid. Notably, AVA provides citations to verifiable sources and suggests follow-up questions to aid self-represented litigants in finding information they didn’t even know they needed. ACS and LawDroid have been testing both OpenAI’s GPT-4 and Anthropic’s Claude 3.5 Sonnet, comparing accuracy and tone. A decision has not yet been made on which model will ultimately be used, according to Jeannie Sato, Director of Access to Justice Services of ACS.

Managing the complexities of legal language and ensuring the chatbot’s responses are consistent and reliable were two main challenges experienced during the development of AVA. These were addressed through a combination of meticulous content review, the use of advanced AI models, and continuous collaboration with legal and technical experts. Also, substantial effort was spent to create a comprehensive knowledge base from existing web content to ensure external sources did not leak in and result in erroneous responses to prompts. The production of AVA also required rigorous testing and refinement to address inaccurate inferences and inconsistent responses.

What the courts can learn from AVA and Beagle+

The development of Beagle+ and AVA yielded several key lessons that courts and legal services organizations can benefit from, including:

Focus on user needs during development: When creating public-facing legal tools, the most important requirement during the development and implementation journey is considering the needs of the average self-represented user, who may have limited or no knowledge of the legal system. Beagle+ and AVA balance empathy with clear information to ensure the delivery of user-centric guidance that is both compassionate and practical, containing actionable insights and support. Additionally, both tools prioritize clear, concise language at a reading level understood by the general public.

Collaborate with an interdisciplinary team: Both projects stressed the importance of having a multidisciplinary team that possesses legal and technical expertise along with a commitment to using plain language. This helps ensure that the chatbot is legally accurate, technically sound, and easy to understand.

Use iterative testing and human review: The development teams of both projects used rigorous, recurring testing and regular human review of responses; they also focused on using information solely from trusted sources (the knowledge base) to guarantee that users receive correct legal guidance. Maintaining a system for documenting and preserving all prompts and responses helps track accuracy and allows the team to monitor progress over time. ACS found that instructing the model to include a citation to the source of the information can help confirm accuracy and improve user confidence.

Continuously evaluate and improve the chatbot: Both teams underscored the importance of ongoing refinements to the knowledge base, stemming from iterative testing and user feedback analysis, to maintain accuracy and improve the chatbot’s performance over time.

Dedicate resources well: Cost is often a factor for smaller court systems as well as for nonprofits and legal aid organizations. However, the most important factor in resource planning is dedicating the appropriate amount of internal staff time to the AI project. Project managers should plan to dedicate at least 30% of one staff person’s time to building and reviewing the knowledge base, evaluating and refining output, and other responsibilities, and to allocate 30% of another person’s time for technical development.

Conclusion

As AI-powered legal chatbots continue to evolve, they offer a promising path to bridge the justice gap and empower self-represented litigants. By learning from successful implementations like Beagle+ and AVA, courts and legal services organizations can develop more effective tools to increase access to justice for all.


Join us on February 19 to delve deeper into the technical aspects of building and monitoring these AI tools.

Webinar: The 2024 State of the US Legal Market Report /en-us/posts/events/webinar-the-2024-state-of-the-us-legal-market-report/ Thu, 25 Jan 2024 19:28:06 +0000 Thomson Reuters, in partnership with Georgetown University Law’s Center on Ethics and the Legal Profession, is proud to announce the release of the 2024 report on the State of the US Legal Market.

The definitive, data-driven overview of the legal profession, our report offers unparalleled guidance for law firm leaders, practitioners, and affiliated partners around the globe on the state of a market in flux.

As we look to the year ahead for legal services providers, our distinguished faculty will discuss key findings on financial performance, staffing strategies and talent challenges, ongoing market segmentation, buyer preference changes and numerous other pertinent challenges impacting legal services in 2024.

Webinar: The 2023 State of the Legal Market Report /en-us/posts/events/the-2023-state-of-the-legal-market-report/ Fri, 27 Jan 2023 18:33:14 +0000 Thomson Reuters, in partnership with Georgetown University Law’s Center on Ethics and the Legal Profession, is proud to announce the release of the 2023 Report on the State of the Legal Market. The definitive, data-driven overview of the legal profession, our report offers unparalleled guidance for law firm leaders, practitioners, and affiliated partners around the globe on the state of a market in flux. As we embark upon another pivotal year for legal services providers, our distinguished faculty discusses key findings on financial performance in the current economic downturn, talent recruitment and retention challenges, ongoing market segmentation, and numerous other pertinent challenges impacting legal services in 2023.

The on-demand recording is now available!
