The Future of AI and Technology Forum Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/future-ai-technology-forum/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers. Sun, 05 Apr 2026 09:35:30 +0000

The AI Law Professor: When AI makes lawyers work more, not less /en-us/posts/technology/ai-law-professor-ai-makes-lawyers-work-more-not-less/ Tue, 03 Mar 2026 14:58:48 +0000 https://blogs.thomsonreuters.com/en-us/?p=69696

Key points:

      • The productivity promise is largely wrong – Emerging research shows that AI doesn’t reduce work; it intensifies it. Lawyers work faster, take on broader responsibilities, and extend their hours without recognizing the expansion. Further, because prompting AI feels like chatting rather than laboring, lawyers slip work into evenings and weekends without registering it as additional effort.

      • Self-reinforcing acceleration is the real risk – AI speeds tasks, which raises expectations, which increases reliance, which expands scope, ultimately creating a cycle that drives burnout in a profession already plagued by it.

      • Purposeful integration is the antidote 鈥 Legal organizations need to promote intentional governance structures that account for how people actually behave with AI, not how leadership imagines they will or should.


Welcome back to The AI Law Professor. Last month, I examined how AI is forcing us to rethink training for junior lawyers. This month, I examine a question that affects every lawyer: What happens when the efficiency gains we’ve been promised don’t materialize the way we expected? A recent study out of UC-Berkeley suggests the answer is more troubling than most law firm leaders realize.

If you’ve attended a legal technology conference anytime over the past two years, you’ve heard the pitch: Automate the mundane and elevate the meaningful.

A recent study in the Harvard Business Review by UC-Berkeley researchers Aruna Ranganathan and Xingqi Maggie Ye suggests we should be more skeptical. They tracked how generative AI (GenAI) changed work habits over eight months at a 200-person technology company. Their findings were striking: AI tools didn’t reduce work; rather, they intensified it.

According to the study, the tech employees worked faster, took on broader responsibilities, extended their hours into evenings and weekends, and multitasked more aggressively – all without being asked to do so. The promise of liberation became a reality of acceleration and overwork.

For those of us in the legal profession, this should be a wake-up call.

Three forms of intensification

The researchers identified three patterns that will sound familiar to anyone watching lawyers adopt GenAI in their work processes.

Task expansion

Because AI fills knowledge gaps, professionals stepped into responsibilities that previously belonged to others. Product managers started writing code, and researchers took on engineering tasks. In legal contexts, the parallel is obvious. Associates use AI to attempt tasks once reserved for senior lawyers. Paralegals draft documents that previously required attorney oversight. Solo practitioners take on matters outside their core expertise because their AI tools make it feel manageable. The result isn’t less work distributed more efficiently; it’s more work concentrated in fewer hands, with less institutional knowledge guiding the output.

Blurred boundaries

AI blurred the boundaries between work and non-work. Because prompting an AI feels more like chatting than labor, lawyers (like the tech workers in the study) may slip work into lunch breaks, evenings, and commutes without registering it as additional effort. The conversational interface is seductive precisely because it doesn’t feel like work. It is work, however, and much more of it.

Pervasive multitasking

Workers managed multiple AI threads simultaneously, generating a sense of momentum that masked increasing cognitive load. For lawyers, this means running parallel research queries, drafting multiple documents at once, and constantly monitoring AI outputs, all while believing they’re saving time.

The productivity trap

The most important insight from the research is that these effects are self-reinforcing. AI accelerates tasks, which raises expectations for speed. Higher speed increases reliance on AI, and greater reliance expands the scope of what people attempt. And expanded scope generates even more work. Rinse and repeat.

Parkinson’s law: “Work expands to fill the time available for its completion.”

In a profession already plagued by burnout, this cycle should alarm us. The legal industry’s adoption of AI is being driven largely by the promise of doing the same work in less time. But if the Berkeley research is any guide, what actually happens is that we do more work in the same amount of time, or more work in more time, while telling ourselves we’re being more productive.
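As a back-of-the-envelope illustration, the self-reinforcing cycle can be sketched as a toy simulation. All parameters here are invented for illustration; they are not figures from the study:

```python
# Toy model of the self-reinforcing cycle: AI compresses each task,
# and the freed capacity is partly refilled with expanded scope.
# All parameters are illustrative assumptions, not study data.

def simulate_workload(weeks: int, speedup: float = 1.3,
                      scope_growth: float = 0.25) -> list[float]:
    """Task-hours attempted per week under the acceleration cycle."""
    tasks = 40.0  # hypothetical pre-AI baseline of task-hours per week
    history = [tasks]
    for _ in range(weeks):
        time_spent = tasks / speedup          # AI compresses each task
        freed = tasks - time_spent            # capacity "saved"...
        tasks = tasks + freed * scope_growth  # ...refilled with new scope
        history.append(tasks)
    return history

hours = simulate_workload(8)
print(round(hours[-1], 1))  # scope grows every week instead of shrinking
```

Under these assumptions the attempted workload rises monotonically: the "savings" never materialize as free time, which is exactly the trap described above.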

And because the extra effort feels voluntary, firm leadership may not see the problem until it manifests as errors, attrition, or ethical lapses. In law, the cost of impaired judgment isn’t just a missed deadline; it’s a client’s liberty, livelihood, or life savings.

From productivity to purposeful practice

The Berkeley researchers propose what they call an “AI practice”: intentional norms and routines that structure how AI is used, including determining when to stop and how work should and should not expand. I’d go further. For legal organizations, purposeful AI integration requires more than workplace wellness norms. It requires a strategic framework that aligns AI capabilities with organizational mission, ethical obligations, and sustainable human performance.

This means, first off, being honest about what AI actually does to workloads rather than what we hope it will do. If your firm adopted AI expecting to reduce associate hours, audit whether that has actually happened, or whether associates are simply filling reclaimed time with more work.
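Such an audit can be as simple as comparing average weekly hours before and after the rollout. A minimal sketch, assuming hypothetical timekeeping data (the names and figures below are invented):

```python
from statistics import mean

# Hypothetical audit: weekly hours per associate, before and after an
# AI rollout. All names and figures are invented for illustration.
before = {"associate_a": [52, 55, 50], "associate_b": [48, 51, 49]}
after  = {"associate_a": [54, 58, 57], "associate_b": [50, 53, 55]}

def delta_report(before, after):
    """Average weekly-hours change per associate after AI adoption."""
    return {name: round(mean(after[name]) - mean(before[name]), 1)
            for name in before}

print(delta_report(before, after))
# Positive deltas would indicate intensification rather than time savings.
```

If the deltas are positive, reclaimed time is being refilled with more work, and the efficiency narrative deserves a second look.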

Second, it means building governance structures that account for how people actually behave with these tools, rather than how leadership imagines they will. The Berkeley study found that workers expanded their workloads voluntarily, without management direction. Top-down AI policies that focus solely on permissible use will miss the intensification that could be happening in plain sight.




Third, it means preserving space for the distinctly human work that AI cannot replicate, such as judgment, empathy, ethical reasoning, and the kind of creative problem-solving that emerges from genuine human dialogue, not from a conversation with a chatbot. The researchers also found that AI-enabled work became increasingly solitary and continuous, a dangerous trajectory.

The narrative that AI will free lawyers for higher-value work isn’t just optimistic. It’s a misunderstanding of how these tools interact with human psychology. AI doesn’t create leisure. It creates capacity, and without intentional structures, that capacity gets filled, not with strategic thinking, but with more of everything.

While it’s clear that AI will change the legal profession, the real challenge is whether law firms will integrate AI with purpose, shaping it to serve their values, their clients, and their professionals’ well-being, or whether they’ll allow the technology to quietly shape us into something we didn’t intend to become.

Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and author of a forthcoming book. He is “The AI Law Professor” and writes this eponymous column for the Thomson Reuters Institute.


You can find more about the use of AI and GenAI in the legal industry here.

The Emerging Technology and Generative AI Forum: How AI is transforming professional services /en-us/posts/technology/transforming-professional-services/ Fri, 10 Oct 2025 15:12:05 +0000 https://blogs.thomsonreuters.com/en-us/?p=67894

Key takeaways:

      • Human oversight & ethical standards are essential – Successful AI integration in professional services requires strong human oversight and clear ethical guidelines to ensure trust, accuracy, and responsible innovation.

      • Client-centric strategies drive value – Organizations should align AI adoption with specific business objectives and client needs, focusing on pragmatic data organization and workflow improvements rather than pursuing technology for its own sake.

      • Agentic AI & human-in-the-loop learning are transforming workflows – The rise of agentic AI systems and human-in-the-loop machine learning is enabling more adaptive, efficient, and innovative solutions, but these technologies must be carefully managed to balance automation with human expertise.


AUSTIN, Texas – As professional service organizations stand at an unprecedented inflection point, AI has evolved from a promising technology to a force fundamentally reshaping how they deliver value to clients.

The Thomson Reuters Institute’s (TRI’s) recent 2025 Emerging Technology and Generative AI Forum delved into this situation, bringing together industry leaders, technologists, academics, and visionaries to explore the transformative impact of AI on professional services.

The current state of AI in professional services

The Forum’s opening panel, Hello World! The State of Emerging Technology and Generative AI, set the stage by emphasizing that the most value in AI lies in data consistency, gradual digital transformation, and human oversight during implementation. A striking statistic emerged from the discussion: in a recent survey, 91% of service professionals said they believe AI should be held to higher accuracy standards than human work. This underscores a critical reality – trust and data protection remain paramount concerns across the board.

Panelists highlighted the importance of aligning AI strategies with specific business objectives to maximize value creation. Indeed, they explained, rather than pursuing AI adoption for its own sake, organizations must focus on prioritizing the critical factors that make implementation possible while harnessing AI’s potential to secure the positive change and innovation that the technology promises.

Exploring agentic AI systems

The second panel, AI on Autopilot: Exploring the Rise of Agentic AI and Its Potential within Professional Services, delved into one of the most exciting frontiers in AI development. Panelists discussed how professional service leaders need to adjust expectations surrounding AI’s capabilities chiefly by breaking down complex problems into manageable chunks and benchmarking AI’s impact with quantifiable data.

In fact, a recurring theme throughout the Forum was the critical importance of keeping humans in the loop. Panelists emphasized that AI agents do their best work with sufficient oversight, and organizational leaders must understand risk factors before deploying AI agents in production environments, especially those involving any client-facing matters.

The conversation also touched upon alternative fee arrangements, examining AI’s dual role as both assistant and autonomous agent and how professional service leaders need to harmonize technology with elevated workflow management.

Attendees at the Thomson Reuters Institute’s recent 2025 Emerging Technology and Generative AI Forum

Human-in-the-loop learning

Prof. Peter Stone, Truchard Foundation Chair in Computer Science at the University of Texas–Austin and Chief Scientist at Sony AI, delivered a compelling keynote at the Forum, titled Human-in-the-Loop: Machine Learning for Robot Navigation and Manipulation. Prof. Stone’s research explores how to create autonomous intelligent agents that adapt, interact, and embody complex, human-like behaviors. He has also demonstrated these capabilities through engaging videos of robots playing soccer and mastering Tetris.

Prof. Stone’s work has significant implications for various applications, including healthcare and manufacturing 鈥 fields in which intelligent robotic systems can transform operational efficiency and innovation. His emphasis on the importance of training with the human-in-the-loop has clearly shown that we remain in the nascent stages of agentic machine learning, despite such vast development over the past decade.

Client-centric technology strategy

The panel, Wired for Success: Creating a Forward Focused, Client Centric Technology Strategy, explored how professional services firms can develop technology strategies that both enhance client engagement and drive value creation. Panelists emphasized the need to ensure that AI’s work is thoroughly checked, treating the innovative technology like any other employee.

Panelists also acknowledged the challenge of organizing data prior to implementing the technology but determined that this particular technology doesn’t need to be perfect to accelerate workflows. Rather, organizational leaders only need to ensure their metrics reflect a realistic quantification of value. This pragmatic approach recognizes that organizing data to make it AI-compatible – particularly in the legal technology industry – is an essential step in the journey toward effective AI adoption.

Ethical considerations in AI development

The final panel, A Unified Field: Ethical Considerations amid AI Development and Deployment, tackled one of the most critical aspects of AI adoption: The need for a strong ethical foundation for AI use. Panelists underscored the need for organizations to establish clear guidelines and harness diverse perspectives for ethical AI practices.

As AI continues to evolve in highly personal applications like facial recognition technology, ethical decisions become less straightforward. The panel delved into questions about the next generation’s interaction with AI, including whether it is ethical to harness AI to teach children through new educational approaches.

Panelists also emphasized that ethical AI is a collective responsibility that falls on all users’ shoulders, setting the stage for a future in which AI development is both responsible and visionary.

The AI-enabled path forward

TRI’s 2025 Emerging Technology and Generative AI Forum firmly established that successful AI implementation requires more than technological sophistication – it demands a holistic approach that encompasses data strategy, talent development, client engagement, and unwavering commitment to ethical standards. The future of professional services will be defined not by whether firms adopt AI, but by how thoughtfully and strategically they integrate these transformative technologies into their core operations.

The insights shared throughout the Forum further underscore a fundamental truth: As professional services firms navigate this transformation, the frameworks and strategies discussed provide a roadmap for sustainable innovation that enhances rather than replaces human expertise. The ethical considerations discussed are not merely compliance requirements but rather fundamental principles that will determine whether AI serves as a force for positive transformation or something else entirely.

Forum attendees took away how important it is to harness AI’s potential while maintaining the trust, transparency, and human judgment that clients expect from their most critical business partnerships. Indeed, the path ahead requires continued collaboration, shared learning, and collective commitment to responsible innovation that truly serves humanity’s best interests.


To dive deeper into these insights and understand the strategic implications for your organization, we invite you to download the full report here.

The AI Law Professor: When your AI assistant knows too much /en-us/posts/technology/ai-law-professor-when-your-ai-assistant-knows-too-much/ Wed, 18 Jun 2025 15:04:57 +0000 https://blogs.thomsonreuters.com/en-us/?p=66315

Welcome to the inaugural installment of “The AI Law Professor,” a new blog column from Prof. Tom Martin, an Adjunct Professor at Suffolk Law School. This column, produced in conjunction with the Thomson Reuters Institute, will examine how AI is changing the legal profession.


Imagine this: You’re working late, reviewing client files and discovery documents with your AI assistant, when it suddenly stops responding. It’s not because of a technical error; rather, it’s because the AI detected something in your query that triggered its safety protocols. Worse yet, it reports you to the authorities, and within minutes the FBI is knocking on your door to ask questions. Sound far-fetched?

This scenario moved from hypothetical to plausible recently with revelations about Anthropic’s Claude Opus 4. During pre-release testing, when researchers simulated shutdown scenarios, the model allegedly attempted to coerce developers by threatening to expose compromising personal information. Somewhat shockingly, we’ve very quickly reached an inflection point at which AI systems possess capabilities that demand sophisticated containment strategies.

But what does this mean for you? How is AI contained? What is safety in the context of AI?

Let’s take a closer look.

Understanding the AI safety level framework

In my GenAI Law class at Suffolk, I might ask my students: How do you contain something that exists, not in the real world, but only as bits and bytes? The answer lies in something called AI Safety Levels (ASL), a framework borrowed from biological research. Just as laboratories classify pathogens by risk level, we now classify AI systems by their potential for harm.

ASL-1 covers systems that are about as dangerous as your personal calculator. ASL-2 encompasses most current legal AI tools, which are helpful, occasionally prone to hallucination, but ultimately harmless. ASL-3 is where the landscape shifts: there is a significantly increased risk of misuse, or the system exhibits low-level autonomous capabilities, requiring much stricter safety and security measures. ASL-4 and higher are still being defined but are expected to involve much greater risks, potentially including AI systems with superhuman capabilities or the ability to circumvent safety checks.
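For readers who think in code, the tiered framework above can be sketched as a small lookup. The encoding and the safeguard descriptions are illustrative assumptions for teaching purposes, not Anthropic’s implementation:

```python
from enum import IntEnum

class ASL(IntEnum):
    """AI Safety Levels as summarized above (illustrative encoding)."""
    ASL_1 = 1  # negligible risk (e.g., a calculator)
    ASL_2 = 2  # most current legal AI tools: helpful, may hallucinate
    ASL_3 = 3  # elevated misuse risk or low-level autonomous capability
    ASL_4 = 4  # still being defined; potentially far greater risks

def required_safeguards(level: ASL) -> str:
    # Hypothetical mapping from level to containment posture.
    if level >= ASL.ASL_3:
        return "strict containment: classifier guards, access controls, monitoring"
    if level == ASL.ASL_2:
        return "standard safeguards: output review, usage policies"
    return "baseline safeguards"

print(required_safeguards(ASL.ASL_3))
```

The point of the ordering is that safeguards are cumulative: crossing the ASL-3 threshold changes the containment posture, not just the paperwork.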

Because of Claude Opus 4’s pre-release behavior, Anthropic activated ASL-3 protections to prevent the AI from acting on its threats. Just to be clear, these protective measures have been taken by developers, so now you don’t have to worry about Opus 4. By the way, Claude Sonnet 4 is still classified as ASL-2.

The primary trigger for ASL-3 classification occurs when an AI can provide meaningful assistance in creating chemical, biological, radiological, or nuclear weapons beyond what someone could discover through conventional research. The secondary trigger involves autonomous capabilities: self-replication, complex planning, or what researchers carefully term sophisticated strategic thinking. It’s this secondary trigger that came up in Opus 4’s pre-release testing. This is where concerns about superintelligence transition from academic theory to risk-management reality.

The 4-layer defense system

How do you contain AI? Anthropic’s solution employs four sophisticated layers:

      1. Real-time classifier guards – This is where Anthropic has innovated brilliantly, because these AI systems monitor every interaction. Real-time classifier guards are large language models that monitor model inputs and outputs in real time and block the model from producing a narrow range of harmful information relevant to the threat model. Imagine having a tireless senior partner reviewing every document at the speed of light. It’s the literal guardrail against misuse.
      2. Access controls – Think of your firm’s document management system, but one that adapts in real time. Anthropic gives different users different access levels based not just on credentials but on usage patterns. For example, scientists who regularly undertake biological research may be exempted from ASL-3 containment measures.
      3. Asynchronous monitoring – This feature is a postmortem that uses computationally intensive analysis after the fact, escalating from simple screening to sophisticated analysis as needed, operating like your compliance team but at machine scale and speed.
      4. Rapid response – Anthropic offers bug bounties of up to $25,000 to incentivize others to find security issues or bugs in the system. This, in combination with security partnerships and the ability to deploy patches within hours, keeps the system secure and up to date. When someone discovers a vulnerability, defenses update across all deployments almost instantly.
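As a rough sketch of how layer 1 might gate an interaction, here is a toy guard in which a simple keyword screen stands in for the LLM-based classifier; the topic list, function name, and messages are all invented for illustration:

```python
# Toy sketch of a real-time classifier guard (layer 1 above).
# A keyword screen stands in for the LLM-based classifier described
# in the text; the topic list and messages are invented assumptions.

BLOCKED_TOPICS = {"synthesis route", "weaponization"}  # narrow threat model

def classifier_guard(prompt: str, response: str) -> str:
    """Pass the response through only if neither side trips the screen."""
    text = (prompt + " " + response).lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "[blocked: matched threat-model filter]"
    return response

print(classifier_guard("Summarize this deposition.", "Here is a summary."))
```

A production guard would reason over context and intent rather than match strings, which is precisely why ordinary legal queries pass through while a narrow band of dangerous content is stopped.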

Practical implications for legal practice

Here’s what keeps me up at night and what should concern every forward-thinking lawyer: If AI requires these protections, what does that say about the tools we’re integrating into our daily practice?

The good news is that ASL-3 protected systems offer unprecedented security for client confidentiality. That 95% effectiveness against jailbreaks means your sensitive client information is far better protected against extraction through clever prompting, a vulnerability of earlier AI models. For law firms that handle high-stakes litigation or sensitive corporate transactions, this level of security represents a significant upgrade from the AI tools we all were using just a year ago.

However, there’s a crucial distinction that every practitioner needs to understand. While ASL-3 specifically targets extremely dangerous content and doesn’t target legal work, general AI safety measures across various platforms can still create friction. For example, criminal defense attorneys might find AI systems reluctant to analyze violent crime evidence, or estate planners could see refusals when discussing sensitive end-of-life scenarios. These interruptions stem not from ASL-3’s extreme protections, but from broader content moderation approaches that struggle to distinguish between describing harmful content (often a legal necessity) and promoting it.


Register now for The Emerging Technology and Generative AI Forum, a cutting-edge conference that will explore the latest advancements in GenAI and their potential to revolutionize legal and tax practices.


These safety measures mean your digital assistant operates more like a cautious junior associate than a rigid compliance system. It uses natural language reasoning to evaluate context and intent, recognizing professional terminology and legitimate legal concepts. When safety measures do trigger, you’ll typically receive a polite explanation rather than a hard block, and you can often rephrase or provide additional context to proceed.

For our profession, this represents both evolution and revolution. We’re not just adopting new tools; we’re learning to work alongside AI systems that possess their own safety boundaries. Smart practitioners will develop strategies for navigating these guardrails: maintaining clear professional context in queries, understanding which practice areas might trigger safety protocols, and always maintaining human oversight.

Creating your firm鈥檚 own AI safety framework

Start with a simple three-tier system: Green light for routine tasks, such as research and document review; yellow light for work requiring supervision, such as drafting strategy memos or analyzing sensitive communications; and red light for anything involving privileged client data without explicit consent.
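A minimal sketch of this three-tier triage, assuming hypothetical task categories (the mapping below is illustrative, not a recommendation for any specific firm):

```python
# Minimal sketch of the green/yellow/red triage described above.
# Task categories and their tier assignments are invented assumptions.
TIERS = {
    "green":  {"legal_research", "document_review"},
    "yellow": {"strategy_memo", "sensitive_communications"},
    "red":    {"privileged_client_data"},
}

def triage(task: str) -> str:
    """Map a task category to a tier, defaulting to supervised use."""
    for tier, tasks in TIERS.items():
        if task in tasks:
            return tier
    return "yellow"  # unknown tasks get supervision by default

print(triage("legal_research"))  # green
```

Defaulting unknown work to the supervised tier mirrors the spirit of the framework: when in doubt, a human reviews.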

The key is making this actionable. Every AI-generated work product needs human verification, especially citations and factual claims. When using ASL-3 protected systems like Claude Opus 4, you gain strong security against prompt manipulation, but remember: even the most sophisticated AI requires the same oversight you’d give a summer associate.

For implementation, you should focus on transparency and training. You need to document when and how AI assists with client work. This isn’t about compliance theater; it’s about professional integrity. Schedule regular training sessions at which attorneys can share what they’ve learned, such as which prompts trigger safety measures, what workarounds succeed for legitimate tasks, and in what instances AI genuinely adds value compared with where it creates risk.

You also should build a simple feedback loop so these insights improve your firm’s practices. As I tell my students, the goal isn’t perfection; it’s creating a framework that lets you harness these powerful tools responsibly. And the firms getting this right aren’t avoiding AI – they’re using it thoughtfully while maintaining the standards that define our profession.

Looking ahead

As I launch this column, I’m both exhilarated and sobered by what lies ahead. We’re not just adopting new tools; we’re witnessing the emergence of a new form of intelligence that demands safety measures – what we humans call ethics.

In future columns, we’ll explore how these technologies reshape everything from contract analysis to litigation strategy. However, today’s lesson is clear: When your word processor needs containment protocols, you know the practice of law is entering uncharted territory.


You can find more about the use of AI and GenAI in the legal industry here.

The 2025 Emerging Technology and Generative AI Forum /en-us/posts/events/the-emerging-technology-and-generative-ai-forum-series/ Wed, 15 Jan 2025 19:28:17 +0000 https://blogs.thomsonreuters.com/en-us/?post_type=lei_events&p=64467 The 2025 Emerging Technology and Generative AI Forum is a must for forward-thinking professionals in the professional services industry. This cutting-edge conference will explore the latest advancements in generative AI and their potential to revolutionize legal and tax practices. Attendees will gain invaluable insights into emerging technological trends and software. Industry experts will showcase how generative AI is enhancing efficiency, accuracy, and decision-making, including sessions on crucial topics like ethical considerations, data privacy, and the integration of AI technologies into existing workflows.

Attendees will position themselves at the forefront of technological innovation, ensuring they remain competitive in an evolving landscape. Don’t miss this opportunity to network with peers, engage with thought leaders, and discover how emerging technology and generative AI can transform your practice and deliver enhanced value to clients. Stay engaged by following #TRIGenAI25.

Emerging Legal Technology Forum: Building stronger client relationships requires balance /en-us/posts/legal/emerging-legal-technology-forum-building-stronger-client-relationships/ Thu, 27 Oct 2022 13:59:24 +0000 https://blogs.thomsonreuters.com/en-us/?p=54023 TORONTO – Since the start of the COVID-19 pandemic, a shift has occurred in how clients and their law firms interact. What was once a regular set of in-person meetings suddenly shifted to a calendar filled with Zoom calls, and although some in-person meetings have resumed, the mix between the in-person and virtual has been irrevocably altered.

At the same time, a parade of collaboration technologies such as Microsoft Teams and Slack began to take even more prominence, creating new touchpoints for law firms to track and measure.

The result has been an explosion of customer relationship data to help firms make decisions and better establish connections with their clients. To best take advantage of this new paradigm, however, it’s still important to utilize both this new data and a more traditional, personal touch, said panelists at the Thomson Reuters Institute’s recent 5th annual Emerging Legal Technology Forum. The key, of course, is finding the right balance.

The data in hand

Joy Cruz, Director of Business Intelligence & Data Analytics at management consulting company RSM US, said during the Forum’s panel, Ascendant Engineering: Emergent Techniques in Data Analytics and Strategic Account Management, that some of the common metrics that law firms should be using to measure their client relationships haven’t changed: profitability, productivity, client satisfaction, realization rates, and related data “bringing that whole story together in terms of understanding what you have, what you’re doing, how you operate historically, [and] what you can do.”

But what’s different since the pandemic is that data sources have exploded, meaning that even knowing where all of the necessary data resides is a harder challenge than ever before. For a law firm trying to gather a response for an RFP, 85% of the time may be spent hunting for the relevant answers, Cruz estimated. And while many law firms are talking about executing a data plan, many can’t even take the first step of gaining insight into their data.

Joy Cruz, of RSM US

“The goal is to flip that so it becomes easily accessible to you,” Cruz explained. “One of the things we’re missing is that we’re not able to do the analysis piece yet, because it’s not available to you.” Indeed, without the data-gathering step, “you’re making decisions based off of data that’s provided to you, but that might not be the full story,” she added.

Panelist Olalekan (Wole) Akinremi, a partner at law firm Deeth Williams Wall, noted that from his days on the corporate side, clients have already begun to take that step in evaluating their outside firms – particularly when it comes to tracking costs. He said that tech-enabled analysis can better look into outside counsel time and billing, contracts, and automation to free up time for more complex matters that are becoming more commonplace. Law firms also can take cues from their clients about how to use data to augment their arguments, Akinremi noted.

For example, “you can also go to management and say, we have two paralegals handling 1,000 requests, we need more support,” he said. “The proof is in the results.”

With the rise in data-driven decision-making, however, can come a tantalizing misstep: over-reliance on data at the expense of other tools in the relationship-building toolbox. Panelist Philipp Thurner, CEO of relationship management software company Nexl, said that while raw data figures certainly help, “that might not tell you the quality of the relationship.

“Data can tell a story,” Thurner added. “But you can have one data set and can tell a million different stories from it.”

Thurner gave the example of counting email interactions: a hundred emails back and forth between a firm and its client could be construed as a strong relationship, particularly if those emails are increasing over time. But if those emails are surface-level interactions or about administrative tasks, the raw number may not reveal a relationship on rocky ground. “How do you judge a relationship?” he asked. “I think it’s up to us as human beings.”
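Thurner’s point can be made concrete with a toy calculation: raw volume alongside a crude proxy for substance. The data and the “substantive” flag below are invented for illustration:

```python
# Hypothetical illustration: email volume vs. a crude substance proxy.
# The messages and the boolean labels are invented assumptions.
emails = [
    {"subject": "Invoice #4411", "substantive": False},
    {"subject": "Scheduling call", "substantive": False},
    {"subject": "Strategy for an upcoming motion", "substantive": True},
]

def relationship_snapshot(emails):
    """Total volume plus the share of messages flagged as substantive."""
    total = len(emails)
    substantive = sum(e["substantive"] for e in emails)
    return {"volume": total, "substantive_share": round(substantive / total, 2)}

print(relationship_snapshot(emails))
# High volume with a low substantive share may signal a shallow relationship.
```

The same volume figure supports opposite stories depending on the second number, which is exactly why human judgment has to interpret the data.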

Where data & relationships collide

In a later panel, titled Journey’s End: Maximizing Value in Client Experience, the discussion elaborated on that general premise. Suzanne Donnels, Chief Business Development & Marketing Officer at law firm Davies Ward Phillips & Vineberg, said she has noticed a difference between corporate clients who are actively involved in the firm/client relationship and those purely focusing on data. “It’s harder for Davies to compete when you’re dealing with procurement departments, [because] they’re just looking at a number next to a name,” she explained, adding that a closer relationship means differentiation through “understanding their clients and the business that they’re in, and really figuring out solutions.”

Olalekan (Wole) Akinremi, of Deeth Williams Wall

Panelist Janet Sullivan, eDiscovery Counsel and Global Director of Practice Technology at White & Case, agreed with Donnels, noting that success metrics will inherently be different for different clients. Her firm's strategy is called LIFT – Local Information, Firmwide Transformation – which establishes a standardized firm goal of how to drive success, but with the flexibility for bespoke solutions for each client.

To actually measure whether a firm relationship is successful, Sullivan said that repeat business is of course important, but that is just the baseline metric. What can set a firm apart, she said, is consistently gauging and collecting those success metrics throughout the life of a matter. "Not waiting until the end to say, 'How did I do?', then having to do a post-mortem and go back to all the things we might have done wrong."

Sullivan admitted that it can be a fine line between asking for this data and not placing an undue burden on the client; however, there's more than one way to tackle the issue depending on the type of data that's needed.

However, panelist Fernando Garcia, who has served as General Counsel for a number of smaller legal departments, noted that law firms should approach this process with caution, both because of the time and personnel resources needed and because of another hidden danger in soliciting client feedback.

Firms then need to respond to what they've learned, Garcia explained. "Be careful when you ask," he said. "Because you're going to get answers, and you have to act on those answers when you get them."


You can learn more about how to drive the strategic, financial, and operational priorities of your corporate law department here.

Emerging Legal Tech Forum: Even as the metaverse emerges, traditional legal questions guide its growth

Thu, 20 Oct 2022

TORONTO – The metaverse can present exciting opportunities for legal professionals to advance their own practice and to serve clients entering these new fields. However, anyone looking to tackle the new paradigm can't forget that normal rules of legal engagement still apply, warned panelists at the Thomson Reuters Institute's 5th annual Emerging Legal Technology Forum.

In a session titled Approaching the Verge: Opportunity and Reward in Web 3.0 Technologies, panelist Amy ter Haar, legal counsel at Global University System and board member of Ocean Falls Blockchain Group, began by exploring the positives of the metaverse, noting that it is not just one technology, but rather "the ability to interface with technology in a whole new way."

Amy ter Haar

In a typical transaction, for example, there is one party (often an intermediary, such as a bank) in charge of recordkeeping and information. But when parties in the metaverse enter into a transaction using the blockchain, both parties can access and change information as needed, with a secure record of those changes verified by all parties. In that way, ter Haar said, the metaverse is "disempowering intermediaries and gatekeepers and empowering people."
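The recordkeeping idea ter Haar describes can be sketched in a few lines. The toy ledger below is a deliberately minimal illustration of the underlying mechanism, not any real blockchain implementation: every change is appended as a record chained by hash to the one before it, so any party holding a copy can verify that the shared history hasn't been altered.

```python
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    """Hash an entry together with the previous record's hash."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class SharedLedger:
    """Toy append-only ledger: each record is chained to the previous one,
    so all parties can independently verify the full history of changes."""
    def __init__(self):
        self.chain = []  # list of (entry, hash) pairs

    def record(self, entry: dict):
        prev = self.chain[-1][1] if self.chain else "genesis"
        self.chain.append((entry, entry_hash(entry, prev)))

    def verify(self) -> bool:
        prev = "genesis"
        for entry, h in self.chain:
            if entry_hash(entry, prev) != h:
                return False  # a record was altered after the fact
            prev = h
        return True

ledger = SharedLedger()
ledger.record({"from": "buyer", "to": "seller", "asset": "parcel-42"})
ledger.record({"from": "seller", "to": "buyer", "note": "title transferred"})
assert ledger.verify()  # tampering with any earlier entry breaks verification
```

Because each hash depends on everything before it, no single intermediary needs to be trusted as the sole keeper of the record, which is the disintermediation ter Haar points to.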

Still, the metaverse can be tough to define for those not engaged in the space, she added. "If you engage in the exercise of replacing the word metaverse in a sentence with the word cyberspace, you'll find that 90% of the time the meaning doesn't change."

To better envision how the metaverse paradigm can change interactions, panelist Matthew Rappard, chief technology officer at security company Vaultie, said to think of how most people interact with the internet on a daily basis. Generally, he said, "there is this loss of the concept of privacy." Every person has equipment to record video, apps that track location, easier access to secure information like bank accounts, and more. On the other end, the metaverse can provide users with another path via the ability to present as a different self. Video content creators called Vtubers, for example, present as online avatars wholly dissociated from their offline selves; viewers never see their actual identities.

"You're ending with this kind of switchover where in the public world you're seeing less privacy, but in the digital world you're seeing something different," Rappard explained, adding that the switch comes with a virtual world that provides more control. "When we're talking about the metaverse, it's the ability to control your identity and how that identity interacts with smart contracts."

Where the metaverse and law intersect

When legal disputes arise, however, the metaverse's nascent nature and increased focus on privacy can be a hindrance as much as a boon. While panelist Yinka Oyelowo, principal lawyer at Toronto-based Yinka Law, believes that the metaverse and associated blockchain technology "simplifies a lot of the processes that we see in real estate," for instance, she acknowledged that resolving metaverse-related disputes remains "very hypothetical."

Consider the case of trust and estate issues: Who would take ownership of a piece of metaverse property if the original owner passes away? While smart contracts encoded within the blockchain are very good at executing the specific terms of a contract, they may also need to take into account how a transfer of property would be executed and the rules in any jurisdiction that may apply.

Matthew Rappard

"It's quite complex, because the jurisdictional requirements for British Columbia are going to be completely different from the requirements in New Hampshire in the US," Oyelowo noted. Law firms helping with this type of dispute would need lawyers who not only can vet jurisdictional issues, but also understand the technology and are versed in the laws around trusts and estates.

Further, a regular issue when it comes to physical real estate is airspace: Who owns the three-dimensional area above or below a piece of property? However, real estate in the metaverse does not function in the same way, and determining airspace rights becomes difficult. Assumptions about the physical world "aren't easily transferred over to the metaverse. It takes quite some time to understand the issues," Oyelowo said, adding, however, that this also means opportunity. "There's a lot of fertile legal ground for those involved in trusts and estates, for commercial real estate."

Indeed, all panelists pointed to the opportunity for legal professionals to provide clarity on emerging metaverse risks. The Digital ID & Authentication Council of Canada (DIACC) has been investigating ways to authenticate online identities while still maintaining confidentiality and privacy, Rappard noted. Oyelowo also pointed to startups that have been linking face IDs with cryptocurrency wallets, where a wallet holding crypto funds is opened through dual-factor authentication that includes both the facial scan and a unique code.
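The dual-factor wallet check Oyelowo describes reduces, at its core, to requiring that both factors pass before the wallet unlocks. The sketch below is a simplified illustration of that logic only: all names are hypothetical, and a hash comparison stands in for real biometric matching, which is far more involved (template extraction, fuzzy matching, liveness detection).

```python
import hashlib

def open_wallet(face_scan: bytes, code: str,
                enrolled_face_hash: str, expected_code: str) -> bool:
    """Toy dual-factor gate: the wallet opens only if BOTH the facial
    scan matches the enrolled template and the unique code is correct.
    Exact-hash comparison is a stand-in for real biometric matching."""
    face_ok = hashlib.sha256(face_scan).hexdigest() == enrolled_face_hash
    code_ok = code == expected_code
    return face_ok and code_ok

# Enrollment: store only a hash of the (hypothetical) face template.
enrolled = hashlib.sha256(b"alice-face-template").hexdigest()

assert open_wallet(b"alice-face-template", "914-227", enrolled, "914-227")
# Either factor alone is not enough:
assert not open_wallet(b"someone-else", "914-227", enrolled, "914-227")
assert not open_wallet(b"alice-face-template", "000-000", enrolled, "914-227")
```

The design point is simply that compromising one factor (a stolen code, or a spoofed scan) does not open the wallet, which is what makes the scheme attractive for crypto custody.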

Yinka Oyelowo

But even as these opportunities emerge, those in the legal world still need to look out for risks – and make sure they abide by ethical guidelines. Even those Vtubers who don't show their offline identity present a legal and ethical challenge, Oyelowo said. "While most potential clients will be forthcoming with their identity while engaging in the metaverse setting, some clients … may prefer anonymity." But in a province such as Ontario, this can be a problem: Before giving advice, attorneys are ethically bound to verify the client's identity. And especially if the two parties are only conversing in a virtual world, clients may physically be in a jurisdiction with different or less stringent ethical boundaries.

That means that although the virtual world is emerging, physical considerations are still necessary. Therefore, attorneys should take care to "obtain identity documents that you are satisfied are valid" from the person behind the avatar, not just the metaverse avatar itself, Oyelowo explained.

Ultimately, the experts agreed that the metaverse will continue to evolve, with new opportunities and potential pitfalls alike emerging in the coming weeks and months. Regardless of how it develops, what remains certain is that the metaverse will be an engaging space to watch.

The simple starting points for a law firm IT metrics program

Mon, 19 Sep 2022

The data explosion within law firms has transformed both external functions like litigation and contracting and internal functions like financial and document management. Technology-enabled services have let legal teams not only better serve their clients, but also collect metrics to determine what's working and what's not.

However, one area within law firms has seemed more reluctant to embrace metrics to measure department success, and it's perhaps a surprising one: the IT department itself.

The International Legal Technology Association's (ILTA) 2022 Tech Survey revealed that 63% of surveyed law firm IT departments reported using "none/non-applicable" metrics to measure IT department performance. That figure was only slightly lower than 2021's 67%. The most popular metrics in use were downtime per month (37% of surveyed firms), Help Desk closed-ticket feedback (36%), and Help Desk ticket resolution times (32%).

The ILTA report did note that "[t]he larger the firm, the more likely they are to actually poll users on satisfaction with regard to the performance of the IT department." Regardless of size, however, the fact that nearly two-thirds of firms weren't using any metrics to measure IT department performance doesn't surprise Kenneth Jones, chief technologist at law firm Tanenbaum Keale and ILTA committee member.

Jones notes that in the corporate world, technology projects tend to be tied directly to return on investment, with metrics collection baked directly into the work. At his previous job at pharmaceutical giant Bristol Myers Squibb, this attitude was prevalent, Jones says. "One of the measurements was, try not to put 40% of your time into 1% of your business," he adds. "Conversely, in the legal field, I think sometimes … whoever screams the loudest tends to get the most attention, and it may not be the project that's delivering the most revenue or profit to a firm."

Indeed, in the ILTA survey, fewer than 30% of surveyed firms reported measuring law firm success via IT capital costs per lawyer (29%), IT operating costs per lawyer (25%), or IT spending as a percentage of revenue (22%). Jones views this as a missed opportunity. The goal of any IT department, he says, should be to make sure as many dollars as possible are actively being utilized.

"How many of us have Hulu accounts or Netflix accounts, or this subscription or that subscription, and we don't use it anymore?" he asks. "And so, that's probably a useful metric also, just to be sure that as many dollars as possible are truly being allocated productively to business."

For those who haven't built out metrics programs for the IT department, Jones says that "a good first step is having the data collection and measurement in place." He separates the data needed for this type of program into two buckets: i) what technologies are actually being used, and how effectively; and ii) where spend is going in an organization. The latter, he adds, can be tricky – spend shouldn't include just official IT department projects, but also what individual attorneys are purchasing for themselves to augment their daily work (such as subscription licenses, discovery costs, technology consultants, and more).
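Jones's two-bucket framing of spend can be sketched as a simple data model. The items, costs, and "official"/"shadow" labels below are illustrative assumptions, not figures from any firm; the point is only that a metrics program which tallies official IT projects alone undercounts what the firm actually spends on technology.

```python
from dataclasses import dataclass

@dataclass
class SpendItem:
    description: str
    annual_cost: float
    bucket: str  # "official" (IT-department projects) or "shadow"
                 # (individual attorney purchases) -- labels are assumptions

items = [
    SpendItem("Document management system", 120_000, "official"),
    SpendItem("Per-attorney research subscriptions", 18_000, "shadow"),
    SpendItem("Discovery platform licenses", 45_000, "shadow"),
]

def spend_by_bucket(spend):
    """Total annual cost per bucket."""
    totals = {}
    for item in spend:
        totals[item.bucket] = totals.get(item.bucket, 0) + item.annual_cost
    return totals

totals = spend_by_bucket(items)
# Shadow spend (63,000) is a third of this hypothetical total (183,000):
# a program tracking only the "official" bucket would miss it entirely.
```

The same structure extends naturally to the first bucket (usage) by attaching utilization figures to each item.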

Building out the metrics program

Consultant and legal survey expert Rees Morrison says that to actually begin collecting IT department data, a firm's options can be broken down into four different buckets. The first is rather simple: focus groups. "It's a way of not only collecting some numbers, but also subjective views and opinions," Morrison says. "And so a focus group, it's a flexible tool, but it's also somewhat expensive and you have to have people who know how to run them."

Morrison's second choice is perhaps the way most firms operate today: short email surveys, which he calls a "crude but simple way of collecting some data." The most effective email surveys are short – about two or three questions – in order to encourage a higher response rate. However, he adds, these are tough to do en masse, as IT department leaders would also then be responsible for collecting, sorting, and displaying the data.

His third option is one that is perhaps more underutilized for law firm IT departments: formalized surveys. In fact, the ILTA survey found that just 16% of firms utilized user-satisfaction surveys to measure success, and 20% incorporated user feedback in general. While surveys may require more work on the front end, they also reveal greater insights on the back end, Morrison explains. "You can ask more questions. It's easier for people to answer, because a lot of them, it's 'check a box' or 'fill in a number' as compared to an email, where it's a little less formatted," he says. "And the huge advantage is, it then brings everything together, often in a spreadsheet so you can analyze it how you want."

Finally, IT departments can utilize what Morrison calls exhaust data: logs of how often a product is being used. This type of data can provide insights that the other types can't, he says, because it's agnostic of attorney or legal professional biases. "They're not thinking probably about how much time they're online doing that or how many times they do it," he adds. "But somebody else can start drawing some patterns or making decisions whether a subscription is worthwhile or not, or get more of the sorts of ancillary data that flows from it."

Morrison notes that collecting these sorts of metrics can have multiple positive effects. There is, of course, the increased insight into firm technology usage and the better ROI that comes from increased efficiency, but there is also the positive firm engagement that can come from IT metrics projects, he says.

"I do believe that folks in general like their opinion being asked," he explains. "Otherwise they feel they're just serfs and nobody cares – I don't happen to like this contract management system, but what can I say? But if they come and ask you on a scale of 1-to-10, how do you like it and a few questions related to that, I get a voice."

The work does not stop here, however. Law firm IT metrics are not a "set it and forget it" proposition. Law firms looking to collect this data also need to plan how they're going to incorporate feedback around the data they've collected – and make sure those they've collected data from are heard, Morrison notes.

"If you gather metrics from people, I think you owe it to get back to them something," he says. "It shouldn't go into the black box of the firm and the associates never find out what happened. You give me some time and your thoughts and estimates and numbers, and I'm going to return with some findings and describe what we're going to do about it."

Emerging Legal Technology Forum 2021: Law firms need to leverage legal tech to stay competitive

Thu, 11 Nov 2021

BOSTON – The dynamics of emerging legal technology bring with them issues of risk management, technology prioritization, and the evolving needs these new tools create, all of which can greatly affect the pace and delivery of legal services today.

This theme dominated the recent 2021 Emerging Legal Technology Forum, presented by the Thomson Reuters Institute last week in Boston.

Many panels explored common themes and imperatives essential to staying competitive in a dynamic legal technology industry, such as the use of AI in functions like contract development, automating workflows, and blockchain.

Sessions also discussed legal tech service delivery, how project finances and priorities are evaluated, remote working, and client-facing technologies, with risk management and security hovering as ever-present topics as well.

Set in the historic Boston Park Plaza Hotel, the forum was very much a boutique event, a setting favorable to striking up conversations (and hopefully longer-term relationships as well) with an impressive collection of legal technology thought leaders.

Attendees listen to a panel at the recent Emerging Legal Technology Forum in Boston.

With no supplier parties or staffed vendor booths, the intimate nature of the forum lent itself to deep discussion of forward-thinking ideas, both in panels and in networking interchanges. It was not unlike the environment within a boutique law firm or smaller legal tech shop, where the high quality of professionals generally leads to fast-moving, elite-level exchanges of ideas.

Some of the panel highlights included:

AI & the human element

This panel explored the topic of artificial intelligence, starting with a very interesting example of Tesla's driver safety ratings and how, currently, there is no formalized process for appealing safety-rating decisions made by AI. One panelist, the General Counsel of CIC Health, led a discussion around who has the right to know about the decisions a machine-learning system makes and the need to respect the protection of intellectual property, which sparked an interesting exchange on governance issues and concerns.

Of particular interest was a presentation identifying five key points of theory relating to AI governance, offered by a Boston-based Thomson Reuters Labs data scientist. These points of consideration include:

      • human-user interaction;
      • transparency and explainability;
      • algorithmic fairness;
      • robustness and reliability; and
      • data privacy.

The main premise here is that ongoing, broad-based improvements in these areas will help sustain the quality of AI applications across society. Overall, this panel conveyed that AI is a field still very much maturing, with ethical AI, transparency, fairness, reliability, resistance to threats, data privacy, and differential privacy all topics that need addressing before we arrive at favorable long-term frameworks.

Blockchain's broad horizons

Highlighted by subject matter expert Greenwood, founder & CEO of CIVICS.com Consultancy Services, this panel covered governance and regulation for blockchain and, interestingly, how blockchain itself can help in these areas. Greenwood also detailed the need for a legal definition of blockchain, highlighting his efforts to crowdsource one on his site.

Also on the panel, McCalmont, Co-Founder & Chief Technology Officer of Chrysalis Digital Asset Exchange, used a sports wager as a simple (in the positive sense) example of a smart contract, demonstrating how a wager could rely on an oracle – a fact-determining authority, such as ESPN.com in this analogy – to finalize contract results, or in this case, game winners. McCalmont also noted that there are already-available templates and technologies that can provide common implementation tools for smart contracts.
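The sports-wager example can be sketched in ordinary code. The class below is a toy simulation of the idea only, not Solidity or anything from Chrysalis: two parties stake equal amounts, and only the agreed oracle (the fact-determining authority) can settle the contract, which then pays the pot to the winner automatically. All names and the oracle interface are illustrative assumptions.

```python
class WagerContract:
    """Toy smart-contract wager: equal stakes, settled once, and only by
    the designated oracle. Everything here is a simplified illustration."""
    def __init__(self, party_a: str, party_b: str, stake: int, oracle: str):
        self.stakes = {party_a: stake, party_b: stake}
        self.oracle = oracle
        self.settled = False

    def settle(self, reported_by: str, winner: str):
        # Enforce the contract's rules in code rather than by trust:
        if reported_by != self.oracle or self.settled:
            raise PermissionError("only the designated oracle may settle, once")
        if winner not in self.stakes:
            raise ValueError("winner must be a party to the contract")
        self.settled = True
        return winner, sum(self.stakes.values())  # winner takes the pot

contract = WagerContract("alice", "bob", stake=50, oracle="espn.com")
winner, payout = contract.settle(reported_by="espn.com", winner="bob")
# bob receives the full pot of 100; any further settlement attempt fails
```

The appeal McCalmont highlights is visible even at this scale: the terms (who may settle, when, and what happens) are enforced by the code itself rather than by either party's good faith.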

Examining the importance of investment & leadership

In the panel I moderated, we temporarily shifted our focus away from the sexy emerging-technology topics to a collection of vital day-to-day law firm technology decision points, such as how to prioritize and allocate funding among operational, client-facing, and security/regulatory projects. The panel examined how improvements to core law firm technologies such as collaboration software, document management systems, and security programs are critical for law firms today.

Our diverse group, which included panelists from a law firm, a consulting company, and a software company, sifted through a myriad of priorities and approaches for resolving differences in stakeholder opinion and reaching consensus on how technology budgets end up being allocated in today's legal profession.

Finally, to close out the day, the panel on business disruption and third-party risk management, moderated by a member of the Global Advisory Board of VigiTrust, provided much useful advice, such as how to build review teams by integrating both leadership and operations personnel into the risk-assessment process, and how to embrace rather than fear a process audit.
