Thomson Reuters presents a session on ethics in AI, as part of the TR Takeover of Legal Geek event, that will highlight why ethical work in AI is critical.
As the legal industry, like many similarly situated industries across world markets, increasingly embraces artificial intelligence (AI) to jump-start the automation, efficiency, and interconnectivity of its operations, it may be wise to pause before throwing that switch.
Even while full adoption of AI is still in its infancy in many areas of the legal profession, stories of ethical problems with its use have already bubbled to the surface. These problems, including biases embedded in AI-driven algorithms, questions over security and privacy, and uncertainty over the role of human judgment, have made the full deployment of AI and its far-reaching consequences an area of concern. It has even created a new field of "robo-ethics" and piqued the interest of the World Economic Forum.
Lane, Partner Director of Ethics & Society for Microsoft, has explained how important it is to consider the ethical aspects of expanding AI. "We don't always see what it's doing to us," Lane explains. "AI tech can be a power multiplier, and it can help people scale very quickly." However, she adds, algorithms trained on large amounts of data can also absorb biases within that data, which are then reflected in the resulting algorithmic models.
"Who will be impacted? What are the unintended consequences?" Lane asks. "Ultimately, it means thinking about responsibility and accountability."
To further this critical discussion, Thomson Reuters is presenting a session on ethics in AI as part of the TR Takeover of Legal Geek on March 10, a half-day event that will feature the latest insights on the future of the legal profession and the impact of the newest legal technology.
The session will highlight why it's important to care about AI ethics, noting that collected data always reflects the social, historical, and political conditions in which it was created. "Artificial intelligence systems 'learn' based on the data they are given," says Milda Norkute, Senior Designer at Thomson Reuters Labs, a team focused on AI innovation that will be presenting tangible examples of how it's applying these principles and processes in practice.
You can register here to attend on March 10.
These pre-existing conditions in which the data is collected, along with many other factors, can lead to biased, inaccurate, and unfair outcomes, Norkute explains, adding that this problem only grows as artificial intelligence and related technologies are used to make decisions and predictions in such high-stakes domains as criminal justice, law enforcement, housing, hiring, and education. "These biased outcomes have the potential to impact basic rights and liberties in profound ways," she says.
Nadja Herger, a Data Scientist at Thomson Reuters Labs, will walk attendees through how the idea of ethics in AI is considered throughout the design, development, and deployment process, showing step-by-step how that high-level process unfolds.
"For AI ethics to be appropriately taken into account, it is essential to reflect on its implications at every step of the lifecycle," Herger says, adding that this means asking questions such as: What is the impact of an imperfect AI system? Is there bias in our training data? How are users expected to interact with the AI system? How can we show how the AI system came to a certain decision to strengthen a user's trust?
"It is essential for corporations to take a proactive approach with these issues, to ensure sustainable, ethical, and responsible use of AI," she says.
Eric Wood, a law firm partner, will also discuss the specific impact of AI on companies and law firms in a discussion alongside Norkute and Herger. This session will examine with attendees such topics as creating AI guidelines for your company, how ethical considerations around AI can arise at work, and whether there needs to be stricter regulation of AI.
Overall, the session will address the most crucial issues organizations face when dealing with AI: How do you ensure that it is used ethically, and that it does not accelerate the biases and problems already inherent in society at large?
Dr. Paola Cecchi-Dimeglio, a behavioral scientist and senior research fellow at Harvard Law School's Center on the Legal Profession and the Harvard Kennedy School, says that it's very important for legal organizations, and companies in general, to determine why they are using AI in the first place. "You have to remember that with many legal organizations, the data they are looking at is either what is publicly available or data they have gathered from working with their clients. And when artificial intelligence starts working with this data, it can be a very positive thing for a law firm," Cecchi-Dimeglio says, noting that this process allows firms to make better decisions about jurisdictions, judges, and client matters in comparable situations.
"But problems arise, especially problems with biases, when the organization isn't careful about where it's taking its data from or about what portion of data it's using and not using," she adds. "Because if you start out with a biased history, you're going to have biased results."