AI-Assisted Research on Westlaw Precision Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/innovation-topics/ai-assisted-research-on-westlaw-precision/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Kirsty Roth at TechCrunch Disrupt: Lessons for the GenAI Revolution
/en-us/posts/innovation/kirsty-roth-at-techcrunch-disrupt-lessons-for-the-genai-revolution/
Fri, 08 Nov 2024 10:21:01 +0000

Thomson Reuters Chief Operating and Technology Officer Kirsty Roth was among the speakers at TechCrunch Disrupt 2024, a three-day conference in San Francisco that drew more than 10,000 attendees from 35 countries. The event featured industry leaders addressing critical challenges in the evolving tech landscape as well as opportunities for attendees to network, collaborate, and build partnerships.

Roth's session explored how companies worldwide, from startups to global enterprises, are taking action to move faster and take advantage of the opportunities that generative AI technology offers. She discussed how Thomson Reuters has embraced generative AI tools and transformed from a content company to an engineering-focused technology company.

She highlighted the seven generative AI products Thomson Reuters released in the past year, noting how the company transitioned from releasing product updates quarterly to rolling out 85% of its updates weekly. Roth also shared the lessons Thomson Reuters has learned, from successes and failures, to become a leader in the commercialization of generative AI solutions.

Check out Roth's full TechCrunch Disrupt presentation.

The Progressive Rise of Generative AI: A Conversation With David Wong and Joel Hron
/en-us/posts/innovation/the-progressive-rise-of-generative-ai-a-conversation-with-david-wong-and-joel-hron/
Wed, 30 Oct 2024 09:50:36 +0000

In honor of the one-year anniversary of the first episode of TechConnect, the latest episode highlights the progressive rise of generative AI in the past year.

"As fast as it started, it really feels like in the last year, there's been an even more rapid acceleration, and many companies racing to become leaders in this field, including Thomson Reuters," said Joel Hron, chief technology officer, Thomson Reuters.

Hron and David Wong, chief product officer, Thomson Reuters, shared their takes on the most significant advances in generative AI technology, including improvements in accessibility to the technology, with more developer tools alongside reduced costs and more out-of-the-box capabilities.

Wong said he's most excited about large language models' ability to have longer context windows, enabling them to keep more information in their short-term memory and answer ever-more complex questions.

"That's critical for the way Thomson Reuters uses a lot of these models," Wong said.

"The agentic behaviors of the models have become more robust in their ability to plan and to reason over complex information," Hron added.

They also discussed balancing the need to innovate and go fast with the need for ethical, responsible and high-quality AI development.

Wong noted how Thomson Reuters is best positioned to develop professional-grade AI, grounded in fact and data. He emphasized customers' need for measurable solutions, so they can discern tools' accuracy rates, as well as the need for security and privacy.

Wong said Thomson Reuters has the scale and infrastructure to understand customers' needs and develop solutions to solve their biggest challenges, guided by a philosophy and process that ensures the right balance between moving fast and ensuring quality.

Hron said the company's human-centric approach to AI development is key.

"Our human expertise at Thomson Reuters and the level of rigor and quality we put behind both our content and our products for many years has really been a cornerstone of our brand," Hron said.

Hron said the iterations between technology and domain experts are crucial to how Thomson Reuters helps customers streamline their workflows with AI, such as with AI-Assisted Research on Westlaw Precision and CoCounsel Core.

They also highlighted the Thomson Reuters acquisition of Materia, an AI assistant and platform for accounting and auditing professionals.

"It's a reinforcement of our belief in AI assistants being in the hands of every professional and a reinforcement of our commitment around AI across our entire product portfolio," Hron said.

He added that Materia's strengths have included leaning into the long context and multimodal capabilities of generative AI as well as enabling agentic behavior.

Hear more of Wong and Hron's insights on Materia as well as the evolution of generative AI in the latest episode of the TechConnect series, which brings diverse and dynamic perspectives from all corners of the technology world with thought-provoking questions and conversation.

A Holistic Approach to Advancing Generative AI Solutions
/en-us/posts/innovation/a-holistic-approach-to-advancing-generative-ai-solutions/
Thu, 17 Oct 2024 11:42:10 +0000

At Thomson Reuters, our vision is to deliver an AI assistant for every professional we serve. As part of that, our focus is on delivering benefits for our customers across the breadth of our AI- and non-AI-powered features. We know that our solutions deliver benefits to customers in many ways, including AI-powered automation.

In April of this year, we shared our vision to provide a GenAI assistant for each professional we serve. CoCounsel embodies our ongoing efforts to augment professionals' work with GenAI skills, enabling professionals across industries to accelerate and streamline entire workflows through a single GenAI assistant so they can increase efficiency, produce better work, and deliver more value for their clients.

We believe our investment in GenAI, along with our integration with customer data and third-party integrations, extends the value customers derive from CoCounsel beyond our connected experience and our verified and trusted content. Our work with Microsoft, for example, includes CoCounsel integrations across Word, Outlook and Teams, meeting professionals where they're already working.

AI and large language models are proving to be powerful tools that deliver efficiency gains and strengthen research practices for our customers. Yet our efforts to redefine work with GenAI are rooted in our strong foundation of editorial enhancements, authoritative content and technological expertise, alongside our long history of working closely with customers. That's why we continue to build out AI- and non-AI-powered solutions to help with the entire workflow for legal, tax, and risk and compliance professionals. While AI may not be perfect, it can significantly reduce routine work and help professionals manage more complex, substantive work more efficiently. We collaborate with our customers to help them understand that AI is an accelerant rather than a replacement for their own research.

Benchmarking expectations

As a leader in innovation and AI research, we recognize the role that independent benchmarking plays in ensuring the accuracy, transparency, and accountability of evolving GenAI solutions. We believe that benchmarking can improve both the development and the adoption of AI. We also see it as one component in a broad range of ways we consider and understand the benefits AI delivers for our customers. We work with our customers as their trusted partners for change, helping them confidently understand and adopt new technologies, looking at both their immediate value and their role in long-term transformation, and leveraging our deep understanding of their businesses.

At Thomson Reuters, our understanding of the holistic value of our products is based on customers' usage and the benefits they derive. Our customers have run more than 2.5M searches through AI-Assisted Research on Westlaw Precision since its launch late last year, and they tell us it's saving time and improving productivity. Similarly, internal testing of CoCounsel's skills has yielded impressive results, particularly with regard to CoCounsel's document review capabilities.

Our benchmarking support is reflected in our participation in studies including Vals.ai, as well as two consortium efforts, from Stanford and Litig, exploring how best to evaluate legal AI. We are submitting CoCounsel AI skills to the Vals.ai benchmarking study in five areas of evaluation: Doc Q&A, Data Extraction, Document Summarization, Chronology Generation, and E-Discovery.

The Vals.ai study is a first attempt at establishing a standard, so we should view this work as a first iteration and an opportunity to learn, rather than treating it as a gold standard. For example, one limitation of the benchmarking methodology is that each vendor's results are evaluated on the text output alone, removed from the interface and experience of the individual products. This discounts the work each vendor has done to design interfaces and safety features that minimize the harm of errors, and it reinforces the need for a holistic evaluation of each product being tested, ideally as designed for the user.

Looking ahead, my expectation is that, while accuracy will continue to improve, no product will produce answers entirely free of errors. And as we've shared with our customers, every AI product requires human expertise for verification and review, regardless of the accuracy rate. Because the current approach to benchmarking reports accuracy as a percentage, we need to be very clear on this point: whether a product scores in the low or high 90s, all answers still must be checked 100% of the time.

I look forward to our ongoing collaboration with customers and industry partners as we continue our work towards minimizing inaccuracies and increasing the usefulness of the research outcomes for GenAI tools and all our solutions.

The Transformative Role of AI in Professional Tools: A Conversation With David Wong and Leann Blanchfield
/en-us/posts/innovation/the-transformative-role-of-ai-in-professional-tools-a-conversation-with-david-wong-and-leann-blanchfield/
Wed, 02 Oct 2024 13:33:39 +0000

Leann Blanchfield, head of Editorial, Thomson Reuters, said now is the most exciting time in her 30+ years with the company.

In the latest episode, Blanchfield shared how the power of generative AI, and the dramatic leap it's making in how professionals across industries can access large quantities of data, is transforming the legal industry and beyond. Blanchfield credits the more than 1,500 attorney editors on her team, who create and enhance content, with harnessing the power of generative AI for legal research.

Human expertise is just one component of how Thomson Reuters is capitalizing on the potential of generative AI. Three elements are critical, as David Wong, chief product officer, Thomson Reuters, noted in his comments about the launch of CoCounsel 2.0 at ILTACON: "We have the data, the expertise, and the tech. Few have all three in such quantity and depth."

In the new episode, Wong focused on the role of human domain experts, noting they're key to the process of creating and validating data used by AI models for professional research.

鈥淭here’s a lot of both prompt engineering, fine tuning, and system refinement that’s necessary to get quality to a usable spot,鈥 Wong said. 鈥淓xperts, experienced researchers and experienced lawyers can help to gauge whether or not the systems are correct. We couldn’t have an objective, quantified measure of quality on these systems without the editors, without those experts.鈥

Wong and Blanchfield discussed the importance of human experts in ensuring the accuracy and reliability of AI.

"Maintaining accuracy is at the heart of what the editorial team does," Blanchfield said. "It's the number-one priority across every editorial team. We maintain our content to be accurate and trusted."

Wong acknowledged it's challenging for the team to process and update unstructured, constantly changing data in real time. He said that Thomson Reuters ensures its AI models are customized to meet the needs of different jurisdictions through a combination of software and algorithms that take advantage of the LLMs.

"So when you ask a question of AI-Assisted Research, for example, we are running an end-to-end algorithm which runs search, retrieves data, re-ranks, interprets and then ultimately passes that information to a large language model to synthesize and produce the answer," Wong said. "It's a very complicated system which involves multiple types of technology, multiple types of information retrieval. Processing unstructured, dynamic data and customizing AI models requires integrating multiple technologies and algorithms to optimize performance."
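The pipeline Wong describes (search, retrieve, re-rank, then hand the passages to a model for synthesis) can be sketched in miniature. This is an illustrative toy, not Thomson Reuters code: the term-overlap scoring and the sort-based re-ranker are naive stand-ins for the proprietary retrieval stack, and the passage ids and corpus are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source_id: str  # e.g., a case citation or statute section (hypothetical ids here)
    text: str
    score: float = 0.0

def search(question: str, corpus: dict[str, str]) -> list[Passage]:
    # Toy retrieval: score each document by term overlap with the question.
    terms = set(question.lower().split())
    hits = []
    for source_id, text in corpus.items():
        overlap = len(terms & set(text.lower().split()))
        if overlap:
            hits.append(Passage(source_id, text, float(overlap)))
    return hits

def rerank(passages: list[Passage], top_k: int = 2) -> list[Passage]:
    # Stand-in for a learned re-ranker: keep only the highest-scoring passages.
    return sorted(passages, key=lambda p: p.score, reverse=True)[:top_k]

def build_prompt(question: str, passages: list[Passage]) -> str:
    # Grounding step: the model is instructed to answer only from retrieved text.
    context = "\n".join(f"[{p.source_id}] {p.text}" for p in passages)
    return ("Answer using only the sources below, citing each by its id.\n"
            f"{context}\nQuestion: {question}")

corpus = {
    "case-1": "adverse possession requires open and notorious use",
    "case-2": "a contract requires offer acceptance and consideration",
}
question = "what does adverse possession require"
prompt = build_prompt(question, rerank(search(question, corpus)))
```

The final prompt contains only the passages that survived retrieval and re-ranking, which is what constrains the language model to the source text at the synthesis step.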

Hear more of Wong and Blanchfield's insights on integrating AI into professional tools and ensuring that information is trustworthy in the latest episode of the TechConnect series, which brings diverse and dynamic perspectives from all corners of the technology world with thought-provoking questions and conversation.

How Harmful Are Errors in AI Research Results?
/en-us/posts/innovation/how-harmful-are-errors-in-ai-research-results/
Fri, 02 Aug 2024 14:19:28 +0000

AI and large language models have proven to be powerful tools for legal professionals. Our customers are seeing the gains in efficiency and tell us it's greatly beneficial. However, while there has been a lot of discussion lately about errors and hallucinations, what hasn't been discussed is the extent of harm that comes from errors, or the benefit an answer can still provide despite containing one.

First, let's settle on terminology. We should use terms like "errors" or "inaccuracies" instead of "hallucinations." "Hallucination" sounds smart, like we're AI insiders and know the lingo, but the term is often defined narrowly as a fabrication, which is just one type of error. Customers will be as concerned, if not more concerned, about non-fabricated statements from non-fabricated cases that, despite being real, are still incorrect for the question. "Errors" or "inaccuracies" are much better and more encompassing ways to describe the full range of problems we care about.

Next, let's consider the types of errors and the risk of harm from each. Error rates are often reported as a single percentage, which is a binary view: either an answer has an error or it does not. But that's overly simplistic. It conflates the big differences in risk of harm from different types of errors and ignores the potential benefit of lengthy and nuanced answers that contain a minor error.

There are dozens of ways to categorize errors in LLM-generated answers, but we've found three to be most helpful:

  1. Incorrect references in otherwise correct answers
  2. Incorrect statements in otherwise correct answers
  3. Answers that are entirely incorrect

A fourth category of error that sometimes comes up in discussions with customers is inconsistency, where the system provides a correct answer one time, then later, when the exact same question is submitted, the answer is different and sometimes less complete or incorrect. Minor differences in wording are very common when submitting the same question. Substantial differences are uncommon, but when they do result in an error, the error simply falls into one of the three categories above.

Incorrect references refer to situations where an answer is correct, but the footnote references provided for a statement of law do not stand for the precise proposition of the statement. Fortunately, the risk of harm with these types of errors appears to be low, since they're easy to detect when researchers review the primary law cited. Answers with these types of errors still offer substantial benefit to researchers because they get them to the right answer quickly, often with a lot of nuance about the issues, but the researcher still has to use additional searches or other research techniques to find the best source material.

Incorrect statements in otherwise correct answers are often obvious in the answer. An answer might say the law is X in paragraphs 1 through 4 and then, inexplicably, declare the law is Y in paragraph 5, then go back to stating the law is X in paragraph 6. The risk of harm with these errors also appears to be low, since the inconsistency is obvious and prompts the researcher to dig into the primary law to figure it out. Answers with these types of errors still offer some benefit, since they point the user to highly relevant primary law, explain the issues, and help the researcher know what to look for when reviewing primary law.

Answers that are entirely wrong are more problematic. These are quite rare in our testing, but they do occur. Often a simple check of the primary sources cited will resolve the error quickly, but sometimes additional research is needed beyond that. These answers still offer some benefit to researchers, since they often point to relevant primary law in a way that is more effective and useful than traditional searching, but they also come with greater risk of harm, since the incorrectness of the answer is not obvious, and simply reviewing cited sources does not always resolve the issue.
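As a sketch only, the three categories and the relative risk of harm described above can be tabulated. The labels and reasons are paraphrases of this post's characterization, not a formal Thomson Reuters taxonomy:

```python
from enum import Enum

class ErrorType(Enum):
    INCORRECT_REFERENCE = "incorrect reference in an otherwise correct answer"
    INCORRECT_STATEMENT = "incorrect statement in an otherwise correct answer"
    ENTIRELY_INCORRECT = "answer that is entirely incorrect"

# Risk of harm and why, per the discussion above (illustrative paraphrase).
RISK = {
    ErrorType.INCORRECT_REFERENCE: ("low", "easy to detect when reviewing cited primary law"),
    ErrorType.INCORRECT_STATEMENT: ("low", "internal inconsistency prompts the researcher to dig in"),
    ErrorType.ENTIRELY_INCORRECT: ("higher", "not obvious, and cited sources may not resolve it"),
}
```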

These sound scary, but researchers have been dealing with this type of issue for ages. For instance, secondary sources can be incredibly helpful for summarizing complex areas of law and offering insights, but they sometimes fail to discuss important nuance, and sometimes the law has changed since they were written. If researchers relied on them alone, without doing further research, they would be at risk of harm, even if they consulted cited primary sources.

Yet we would never tell researchers to avoid using secondary sources because they can sometimes be beautifully written, very convincing, and utterly wrong. What we tell researchers is that they can be enormously helpful for research but must be used as part of a sound research process where primary law is reviewed, and tools like KeyCite, Key Numbers, and statute annotations are used to make sure the researcher has a complete understanding of the law.

Individual research tools have rarely been perfect. Their value has been in improving sound research practices. Stephen Embry captured this idea well in a recent blog post:

"The point is not whether Gen AI can provide perfect answers. It's whether, given the speed and efficiency of using the tools and their error rates compared to those of humans, we can develop mitigation strategies that reduce errors. That's what we do with humans. (I.e., read the cases before you cite them, please.)"

But if you must check primary resources and engage in sound research practices when using a research tool, is there really any benefit to using it? If it improves overall research times or helps surface important nuance that might otherwise be missed, the answer is yes.

Prior to launching AI-Assisted Research, we knew large language models would not produce answers free of errors 100% of the time, so we asked attorneys whether the tool would be valuable even with an occasional error, and whether we should release it now or wait until it was perfect.

Most of the attorneys said, "I want this now." They saw clear benefits and thought an occasional error was worth it for the extraordinary benefits of the new tool, since they would easily uncover an error when reading through primary law. They said that if they knew the answers were generated by AI, they would never trust them and would verify by checking primary sources. If there was an error, those primary sources (and further standard research checks, like looking at KeyCite flags, statute annotations, etc.) would reveal it. That's why we put AI in the name of this CoCounsel skill, so researchers would be encouraged to check primary sources.

Our customers have submitted over 1.5 million questions to AI-Assisted Research in Westlaw Precision. Generally, three big research benefits come up in discussions:

  1. It gives them a helpful overview before diving into primary sources.
  2. It uncovers sub-issues, related issues, or other nuances they might not have found as quickly with traditional approaches.
  3. It points them to the best primary sources for the question more quickly and efficiently than traditional methods of research.

Customers have described these benefits with great enthusiasm, telling us AI-Assisted Research "saves hours" and is a "game changer."

Lawyers know they need to rely on the law when writing a brief or advising a client, and the law lies in primary law documents (cases, statutes, regulations, etc.). Researchers have always known that when they're looking at something that is not a primary law document, such as a treatise section, a bar journal article, or an answer from AI, they must check the primary law before relying on it to advise a client or write a brief. That's why we cite to primary law in the answers and why we provide an even greater selection of relevant primary and secondary sources under the answers, to make this checking easy.

But what about the widely publicized case of the lawyer sanctioned over AI-fabricated citations? That lawyer submitted his brief without ever reading any of the cases he was citing.

That can't be the standard for considering the value of products like Westlaw that provide a rich set of research tools making it easy to check primary sources, understand their validity, and find related material. If the standard were that a user might not read any of the primary law, many high-value research capabilities today would be deemed useless.

The way to dramatically reduce the risk of harm from LLM-based results or any other individual research tool, like secondary sources, is what it has always been: sound research practices.

Jean O'Grady conveyed this beautifully in a recent post:

"Does generative AI pose truly unique risks for legal research? In my opinion, there is no risk that could not be completely mitigated by the use of traditional legal research skills. The only real risk is lawyers losing the ability to read, comprehend and synthesize information from primary sources."

At Thomson Reuters, we're continuing to work on ways to reduce all types of errors in generative AI results, and we expect rapid improvement in the coming months. Because of the way large language models work, even with retrieval augmented generation, eliminating errors is difficult, and it's going to be quite some time before answers are completely free of errors. That's the bad news.

The good news is that harm from these types of errors can be reduced dramatically with common research practices. It's why we're not only investing in generative AI projects. We're also continuing to build out a full suite of research tools that help with the entire research process, because that process will continue to be important.

Even when errors get reduced to just 1%, that will still mean that 100% of answers need to be checked, and thorough research practices employed.
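The arithmetic behind that point is worth making concrete. Taking the roughly 1.5 million questions cited earlier in this post and a hypothetical 1% answer-level error rate:

```python
questions = 1_500_000   # questions submitted to AI-Assisted Research (figure from this post)
error_rate = 0.01       # hypothetical: 1% of answers contain some error

flawed_answers = int(questions * error_rate)
print(flawed_answers)   # 15000 flawed answers, with no way to know in
                        # advance which ones, so all answers get checked
```

Even a very low rate, applied at this volume, leaves thousands of flawed answers scattered unpredictably through the results, which is exactly why 100% of answers must still be verified.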

We're currently involved in two consortium efforts to provide benchmarking for generative AI products. When generative AI products for legal research are tested against these benchmarks, I expect we'll see the following:

  • None of the products will produce answers entirely free of errors.
  • All the products will require sound research practices, including checking primary law documents, to reduce risk of harm.
  • When sound research practices are employed, the risk of harm from errors in the answers is small and no different in magnitude from the risks we see with traditional research tools like secondary sources or Boolean search.

Even in the age of generative AI, sound research practices remain important and are here to stay. As Aravind Srinivas, CEO and cofounder of Perplexity, said:

"The journey doesn't end once you get an answer; the journey begins after you get an answer."

I think Aravind's statement applies perfectly to legal research and to the art of crafting legal arguments. Even as our teams strive to reduce errors further, we should keep in mind the benefits of generative AI and weigh them against the new and traditional risks of harm in tools that are less than perfect. When used as part of a thorough research process, these new tools offer tremendous benefits with very little risk of harm.

This is a guest post from Mike Dahn, head of Westlaw Product Management, Thomson Reuters.

Thomson Reuters Launches AI-Assisted Research on Westlaw and Additional Generative AI-Powered Solutions
/en-us/posts/innovation/thomson-reuters-launches-ai-assisted-research-on-westlaw-and-additional-generative-ai-powered-solutions/
Mon, 13 Nov 2023 18:55:04 +0000

Thomson Reuters today announced a series of generative AI initiatives designed to transform the legal profession. Headlining these initiatives is AI-Assisted Research on Westlaw Precision.

Available now to customers in the United States, this skill helps legal professionals quickly get to answers for complex research questions. This generative AI skill leverages innovation from Casetext, created by taking a "best of" approach using the Thomson Reuters Generative AI Platform.

With AI-Assisted Research and CoCounsel Core, attorneys are empowered with eight generative AI-powered core skills: AI-Assisted Research on Westlaw Precision, Prepare for a Deposition, Draft Correspondence, Search a Database, Review Documents, Summarize a Document, Extract Contract Data, and Contract Policy Compliance. The company also laid out high-level product roadmaps to develop numerous additional generative AI skills to address customer-specific needs. Each additional skill will be built on a common software framework within the Thomson Reuters Generative AI Platform.

AI-Assisted Research allows customers to ask complex legal research questions in natural language and quickly receive synthesized answers, with links to supporting authority from Westlaw content and links to further examine that authority. AI-Assisted Research streamlines the initial phase of legal research with sophisticated answers to questions and the authority those answers are based on, saving hours of work. These responses are founded on more than 150 years of Thomson Reuters classification, analysis, and editorial expertise contributed by subject matter experts and attorney editors.

AI-Assisted Research employs Retrieval Augmented Generation (RAG) to prevent the large language models (LLMs) from making up things like case names and citations by focusing the LLMs on the actual language of Westlaw content. Future plans include expanding generative AI throughout the research process in Westlaw and bringing these capabilities to versions of Westlaw outside the United States.
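One way to picture the kind of guard that retrieval grounding enables is a post-generation check that flags any citation not present in the retrieved passages. This is an illustrative sketch, not the actual product mechanism: it assumes the generator was instructed to cite retrieved sources as bracketed ids, and the ids and answer text are invented for the example.

```python
import re

def split_citations(answer: str, retrieved_ids: set[str]) -> tuple[set[str], set[str]]:
    """Split the citations in a model answer into (grounded, ungrounded).

    Assumes the model cites retrieved sources as [source_id]; any bracketed
    id that was never retrieved is a candidate fabrication to flag or reject.
    """
    cited = set(re.findall(r"\[([^\]]+)\]", answer))
    return cited & retrieved_ids, cited - retrieved_ids

# Hypothetical answer and retrieval set: case-9 was never retrieved,
# so it gets flagged rather than passed through to the user.
answer = "The element is satisfied [case-1], but see [case-9]."
grounded, ungrounded = split_citations(answer, {"case-1", "case-2"})
```

Constraining generation to retrieved text, and then checking that every citation actually resolves to a retrieved source, is the basic idea behind RAG's defense against fabricated case names.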

Also, the company announced that it will be building on the AI assistant experience Casetext created with CoCounsel, the world's first AI legal assistant. Later in 2024, Thomson Reuters will launch an AI assistant that will be the interface across Thomson Reuters products with generative AI capabilities.

The AI assistant, called CoCounsel, will be fully integrated with multiple Thomson Reuters legal products, including Westlaw Precision, Practical Law Dynamic Tool Set, Document Intelligence, and HighQ, and will continue to be available on the CoCounsel application as a destination site.

In addition, Thomson Reuters will introduce generative AI within Practical Law Dynamic Tool Set in January 2024. Customers will benefit from generative AI within Practical Law through a new interface with an AI legal assistant, which will quickly provide answers using conversational language, all validated by trusted Practical Law content created and maintained by a team of more than 650 legal experts.

Read the press release for more on AI-Assisted Research on Westlaw Precision, the new generative AI assistant connecting all Thomson Reuters generative AI products, the Thomson Reuters Generative AI Platform, generative AI capabilities in Practical Law, and CoCounsel Core. For more on how Thomson Reuters is ensuring that its AI products and skills are built responsibly, check out the company's Data and AI Ethics Principles.
