Deepfakes Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/deepfakes/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Deepfakes on trial: How judges are navigating AI evidence authentication
/en-us/posts/ai-in-courts/deepfakes-evidence-authentication/ – Thu, 08 May 2025

AI-generated evidence presents significant challenges for courts today, as judges and attorneys grapple with determining the authenticity, validity, and reliability of digital content that may have been artificially created or manipulated. The rapid advancement of generative AI (GenAI) technology has outpaced the development of reliable detection tools, and GenAI is now testing traditional evidentiary frameworks with sophisticated deepfakes and AI-altered materials that are increasingly difficult to distinguish from genuine evidence.

There are significant challenges involved in relying on automated tools to detect and authenticate evidence, says Maura Grossman, Research Professor at the University of Waterloo in Ontario, Canada. "We aren't at the place right now where we can count on the reliability of the automated tools," she explains, adding that most computer scientists consider this a tricky problem.

Defining types of AI evidence

AI-generated evidence falls into two distinct categories. First, acknowledged AI-generated evidence is openly disclosed as created or modified by AI, such as accident reconstruction videos or expert analysis tools. These applications are transparent about their AI origins and their creation or modification methods, which allows courts to evaluate them as such.

Second, unacknowledged AI-generated evidence is presented as authentic and unconnected to any AI creation or manipulation when it is, in fact, AI-generated or manipulated. Examples include deepfake videos, fabricated receipts, and manipulated photographs. This type of manipulated evidence poses significant challenges for detection and authentication in court proceedings.

A joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI) recently published two bench cards as practical resources for judges who may face the evidentiary challenges presented by deepfakes and AI-generated evidence. These tools help judges make real-time decisions when confronted with potentially AI-generated materials. The bench cards provide structured questions about evidence sources, chain of custody, and potential alterations to help guide judicial evaluation.

Authenticating unacknowledged AI-generated evidence

The current legal framework for authentication of AI-generated evidence sets a fairly low bar for admissibility, according to a Senior Specialist Legal Editor at Thomson Reuters Practical Law. Generally, evidence is admissible when a party provides enough information that a reasonable jury could find the evidence is more likely than not authentic. This is often done by offering extrinsic evidence. For example, a party seeking to authenticate a voice recording may offer the testimony of a witness who is familiar with the speaker's voice. Federal Rule of Evidence (FRE) 901(b) offers several more examples of authentication methods.

Judges usually make the authentication determination under FRE 104(a), deeming evidence authenticated if a reasonable jury could find it more likely than not genuine. Indeed, judges have the authority and responsibility to act as gatekeepers in court trials to make preliminary decisions about evidence admissibility before it goes to a jury and to assess witness credibility, which remains crucial in evaluating evidence.

However, in some circumstances, the jury must determine whether the evidence is authentic. Specifically, when a party disputes the authenticity of evidence and there is sufficient evidence for a reasonable jury to find in favor of either party, a question of fact exists. In this instance, FRE 104(b) requires the court to leave the authentication determination to the jury.

Right now, says Judge Yew of the Santa Clara County (Calif.) Superior Court, the tools that judges already possess to determine authenticity are useful, but the landscape is evolving. In particular, the liar's dividend – the phenomenon in which authentic evidence is falsely claimed to be AI-generated – is a current challenge for which the existing rules may not be sufficient.

Dr. Grossman agrees, noting that courts will need to develop strategies to address this issue, including requiring parties to provide evidence to support their claims that evidence is fake. "I think the courts will see [the liar's dividend] sooner than the deepfakes."

Recent cases addressing AI and evidence

Several significant court decisions have shaped the treatment of AI-generated evidence, including State of Washington v. Puloka, in which the court excluded AI-enhanced video evidence due to lack of reliability. In contrast, a California state court rejected a challenge to video evidence in Huang v. Tesla that centered on the vague possibility that the video could have been a deepfake.

The increasing sophistication of deepfakes, both audio and video, poses significant challenges for judges and attorneys in detecting and authenticating evidence. "GenAI generates content using two algorithms, one that creates content and one that distinguishes it from reality, creating a constant feedback loop that improves the AI's ability to generate realistic fakes," Dr. Grossman explains.

Key questions judges should ask

To address the challenges of AI-generated evidence, Dr. Grossman suggests that judges ask themselves three key questions:

      • Is the evidence too good to be true?
      • Is the original copy or device missing?
      • Is there a complicated or implausible explanation for its unavailability or disappearance?

In addition, Judge Yew advises judges to consider the credibility of the witness and order in-person appearances when necessary. And Dr. Grossman says that substantial scientific work is necessary before courts should trust AI tools.

Megan Carpenter, Dean and Professor of Law at the University of New Hampshire's Franklin Pierce School of Law, is pushing for the creation of a comprehensive framework for the evaluation and ongoing development of AI-powered legal tools. "Legal AI tools should undergo the same sort of rigorous training and testing that humans undergo to do the same kind of work," Carpenter says, adding that this approach would ensure reliability while adapting to evolving technology.


Deepfakes: Federal and state regulation aims to curb a growing threat
/en-us/posts/government/deepfakes-federal-state-regulation/ – Wed, 26 Jun 2024

Is it real, or is it a deepfake? Deepfakes are simulated images, audio recordings, or videos that have been convincingly altered or manipulated to misrepresent someone as saying or doing something that the person did not actually say or do.

In March 2023, for example, a deepfake image of Pope Francis wearing a white puffer coat went viral on social media, confusing millions of viewers. In January 2024, sexually explicit deepfake images of Taylor Swift circulated on social media, causing an uproar among her millions of fans and in the news media.

Those images and the artificial intelligence (AI) tools that create deepfakes raised public awareness about the significant risks posed by the unauthorized creation, disclosure, and dissemination of these digital forgeries, which can result in defamation, intellectual property (IP) infringement, breach of publicity rights, harassment, fraud, blackmail, election interference, and incitement to violence and social and civil unrest.

Two generative AI (GenAI) components work together to create a deepfake image or recording. One creates the image or recording, and the other tries to detect whether the output is fake. These are the competing generator and discriminator artificial neural networks within a generative adversarial network (GAN), a deep learning architecture. The generator analyzes input data and extracts key features to produce synthetic outputs, which are sent to the discriminator, whose job is to flag artificial outputs such as a manipulated voice recording.

The generator and discriminator form a feedback loop: the generator produces increasingly convincing artificial outputs, and the discriminator steadily improves at detecting them. The loop repeats until the deepfake reaches the desired quality.
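To make the adversarial loop concrete, here is a minimal sketch in PyTorch. It trains a toy generator and discriminator on one-dimensional Gaussian data rather than images or audio; real deepfake systems apply the same alternating optimization to far larger networks and datasets, so treat this as an illustration of the mechanism, not a working deepfake tool.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely a sample is real (logit > 0) vs. fake.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "authentic" data: N(3.0, 0.5)
    fake = G(torch.randn(64, latent_dim))   # generator's current forgeries

    # Discriminator step: learn to label real samples 1 and fakes 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust the forgeries so the discriminator labels them 1.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generated samples should cluster near the real mean of 3.0.
print(G(torch.randn(1000, latent_dim)).mean().item())
```

Each pass through the loop is exactly the feedback cycle described above: the discriminator gets better at spotting fakes, which in turn forces the generator to produce more convincing ones.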

Federal legislation to combat deepfakes

Currently, there is no comprehensive enacted federal legislation in the United States that bans or even regulates deepfakes. However, the Identifying Outputs of Generative Adversarial Networks (IOGAN) Act requires the director of the National Science Foundation to support research for the development and measurement of standards needed to examine GAN outputs and any other comparable techniques developed in the future.

Congress is considering additional legislation that, if passed, would regulate the creation, disclosure, and dissemination of deepfakes. Some of this legislation includes the Deepfake Report Act, which requires the Science and Technology Directorate in the U.S. Department of Homeland Security to report at specified intervals on the state of digital content forgery technology; the DEEPFAKES Accountability Act, which aims to protect national security against the threats posed by deepfake technology and to provide legal recourse to victims of harmful deepfakes; the DEFIANCE Act, which would improve rights to relief for individuals affected by non-consensual activities involving intimate digital forgeries and for other purposes; and a bill that would require the National Institute of Standards and Technology to establish task forces to facilitate and inform the development of technical standards and guidelines relating to the identification of content created by GenAI and to ensure that audio or visual content created or substantially modified by GenAI includes a disclosure acknowledging its GenAI origin.

States pursue deepfake legislation

In addition, several states have enacted legislation to regulate deepfakes, including:

    • Texas – makes it a criminal offense to fabricate a deceptive video with intent to injure a candidate or influence the outcome of an election.
    • Florida – criminalizes images created, altered, adapted, or modified by electronic, mechanical, or other means to portray an identifiable minor engaged in sexual conduct.
    • Louisiana – criminalizes deepfakes involving minors engaging in sexual conduct.
    • South Dakota – revises laws related to the possession, distribution, and manufacture of child pornography to include computer-generated child pornography, defined as any visual depiction of an actual minor that has been created, adapted, or modified to depict that minor engaged in a prohibited sexual act; an actual adult that has been created, adapted, or modified to depict that adult as a minor engaged in a prohibited sexual act; or an individual indistinguishable from an actual minor created using AI or other computer technology capable of processing and interpreting specific data inputs to create a visual depiction.
    • New Mexico – amends and enacts sections of New Mexico's Campaign Reporting Act by adding disclaimer requirements for advertisements containing materially deceptive media and creates the crime of distributing or entering into an agreement with another person to distribute materially deceptive media.
    • Indiana – requires certain election campaign communications that contain fabricated media to include a disclaimer. The legislation also permits a candidate depicted in fabricated media that does not include a required disclaimer to bring a civil action against specified persons.
    • Washington – relates to fabricated intimate or sexually explicit images and depictions. The law creates civil and criminal legal remedies for victims of sexually explicit deepfakes.
    • Tennessee – updates and replaces the state's Personal Rights Protection Act of 1984 to protect an individual's name, photograph, voice, or likeness; provide for liability in a civil action for activities related to the unauthorized creation and distribution of a person's photograph, voice, or likeness; and includes liability for persons who distribute, transmit, or otherwise make available technology with the primary purpose of unauthorized use of a person's photograph, voice, or likeness.
    • Oregon – requires a disclosure of the use of synthetic media in election campaign communications.
    • Mississippi (effective July 1) – creates criminal penalties for the wrongful dissemination of digitizations, which are defined as the alteration of an image or audio in a realistic manner utilizing an image or audio of a person, other than the person depicted, or computer-generated images or audio, commonly called deepfakes; or the creation of an image or audio through the use of software, machine-learning AI, or any other computer-generated or technological means.

Additional state bills regulating deepfakes are pending in Florida, Virginia, California, and Ohio, and are being considered in other states.

Additional steps to mitigate deepfake risks

In addition to relying on the government to enact comprehensive legislation to regulate deepfakes, and on law enforcement and the courts to enforce that legislation, businesses can take several additional steps to reduce their exposure to the risks posed by deepfakes.

These steps include knowing how to defend against the increasingly sophisticated use of AI-enabled phishing and social engineering attacks; preventing AI-enabled harassment and impersonation by using social media responsibly; ensuring that the company has comprehensive employee and vendor policies in place to guard against AI and social media risks; and educating employees about how to properly use social media and AI tools.

Practice Innovations: Seeing is no longer believing – the rise of deepfakes
/en-us/posts/technology/practice-innovations-deepfakes/ – Tue, 18 Jul 2023

The most recent Indiana Jones movie features a digitally de-aged Harrison Ford. The moviemakers used artificial intelligence to comb through all of the decades-old footage of the actor and create a younger Ford.

This technology is called deepfake, and it is not just catching on in Hollywood; it is also a growing criminal threat. Recently, more than $240,000 was stolen by someone pretending to be an executive from a British energy company. This event does not seem all that out of the ordinary, except that the "executive" was not even a real person. The thieves used AI-generated audio to imitate the real executive's voice – and they got away with it.

Using artificial intelligence (AI), deepfake technologies can generate or manipulate digital media, particularly video and audio content, in a way that is difficult for viewers to distinguish from authentic, original material. The technique uses machine learning algorithms to synthesize new content based on existing data, such as images or videos of real people.

However, deepfake technology has the potential to be used for both positive and negative purposes. On the positive side, it could be used to create more realistic visual effects in movies or to generate realistic simulations for training purposes. On the negative side, it could be used to spread false information or to manipulate public opinion by creating fake videos of people saying or doing things that they never actually said or did. There are also concerns about the potential for deepfake technology to be used for malicious purposes, such as creating fake videos of politicians or other public figures in order to discredit them.

What are some other benefits of deepfake technology?

There are many potential benefits of deepfake technology, including:

      • Educational applications – Deepfake technology could be used to create educational videos or simulations that are more engaging and interactive for students.
      • Improved visual effects – Deepfake technology could be used to create more realistic visual effects in movies, television shows, and other forms of media. This could lead to a more immersive and engaging viewing experience for audiences.
      • Enhanced simulations – Deepfake technology could be used to create realistic simulations for training purposes in a variety of industries, such as aviation, military, and healthcare. This could help to prepare professionals for real-life scenarios and improve their decision-making skills.
      • Increased accessibility – Deepfake technology could be used to create subtitles or translations for audio and video content, making it more accessible to people who are hearing impaired or who speak different languages.

What are some downsides of deepfake technology?

Not surprisingly, there are several potential downsides to deepfake technology, including:

      • Misinformation and propaganda – Deepfake technology could be used to spread false information or propaganda by creating fake videos or audio recordings of people saying or doing things that they never actually said or did. This could have serious consequences, such as undermining public trust in institutions, sowing political discord, or even inciting violence.
      • Privacy violations – Deepfake technology could be used to create fake videos or audio recordings of people without their consent, potentially violating their privacy.
      • Personal harm – Deepfake technology could be used to create fake videos or audio recordings of people that are embarrassing, offensive, or damaging to their reputation. This could lead to personal harm or distress for the individuals depicted in the fake content.
      • Legal issues – Deepfake technology could create legal issues related to intellectual property, copyright, and defamation. For example, if a deepfake video is used to defame someone or to falsely attribute a statement to them, it could lead to legal action.
      • Ethical concerns – There are also ethical concerns about the use of deepfake technology, particularly with respect to consent and transparency. It is important to ensure that people are aware when they are interacting with deepfake content and that they have given their consent for their images or voices to be used in this way.

How to guard against deepfake technology

Even though someone can fabricate a persuasive fake video, software engineers, governments, and journalists can oftentimes still determine whether it's real or fake, says one disinformation expert at the Stanford Internet Observatory. Usually there are tells – clues to a careful observer that a deepfake is at work, such as something that doesn't look quite right.

For example, in a deepfake video of Ukrainian President Volodymyr Zelensky, he appears to surrender to Russia in the current conflict. However, his oversize head and peculiar accent identified the video as a deepfake, and it eventually was removed from social media. Unfortunately, as deepfake technology improves, these tells will become harder to spot. Yet, as this technology evolves, detection tools will also evolve.

Despite the lack of mature detection tools, here are some suggestions that may help people and institutions guard against deepfake technology:

Be skeptical of media – It is important to be critical of the media that you consume and be certain to verify the authenticity of any video or audio content that you come across. Look for signs that the content may be a deepfake, such as unnatural movements or distortions in the video or audio.
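One concrete, if limited, verification step is inspecting a file's container metadata. The Python sketch below assumes the FFmpeg suite's ffprobe tool is installed and uses a hypothetical file named clip.mp4. Metadata is easy to strip or forge, so this is only one weak signal in a broader verification process, not a deepfake test.

```python
import json
import subprocess

def probe_metadata(path: str) -> dict:
    """Return the container and stream metadata ffprobe reports for a file."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    info = probe_metadata("clip.mp4")  # hypothetical file name
    tags = info.get("format", {}).get("tags", {})
    # Fields like encoder and creation_time prove nothing on their own,
    # but their absence on a file that claims a specific provenance
    # (e.g., "straight off a phone camera") is worth a closer look.
    print("encoder:      ", tags.get("encoder", "<missing>"))
    print("creation_time:", tags.get("creation_time", "<missing>"))
```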

Get serious about identity verification – Users need to exercise due diligence in verifying that someone is who they claim to be.

If available, use deepfake detection tools – Detection tools are slowly emerging, with multiple companies working on them. For instance, Intel has introduced a real-time deepfake detector that is able to determine whether the subject in a video is real by looking for signs of "blood flow" in their face.
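For a sense of how such tools are typically wired up, here is a minimal sketch of frame-level scoring in PyTorch. The two-class ResNet head and the weights file deepfake_classifier.pt are assumptions for illustration only; this is not Intel's detector or any vendor's actual product, and a production tool would aggregate many frames along with audio and other signals.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for a single video frame.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # [real, fake] head
model.load_state_dict(torch.load("deepfake_classifier.pt"))  # assumed fine-tuned weights
model.eval()

def fake_probability(frame_path: str) -> float:
    """Score a single extracted frame; higher means more likely synthetic."""
    x = preprocess(Image.open(frame_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    return probs[0, 1].item()

print(f"P(fake) = {fake_probability('frame_0001.png'):.2f}")  # hypothetical frame
```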

Educate users – Familiarize your users with the types of deepfake content that are out there, and teach them how to be skeptical about media.

Government regulation – Regulatory or legal measures should be put in place to address the negative impacts of deepfake technology. Unfortunately, given the complex and evolving nature of this technology, many governments around the world are still struggling to define how best to protect their citizens. Some governments, however, have already started to consider legislation that would prohibit the use of deepfake technology for malicious purposes.

Adopt a zero-trust security model – Zero trust is a new way to look at computer security. It works on the assumption that your networks are already breached, your computers are already compromised, and all users are potential risks – trust no one and nothing, and always verify.

Confirm and deploy basic security measures – Basic cybersecurity best practices will play a vital role when it comes to minimizing the risk of any deepfake cybersecurity attack. Some critical actions you can take include: i) making regular backups to protect your data; ii) using stronger passwords and changing them frequently; and iii) continuing to secure your systems and educate your users.

What is the future of deepfake technology?

Right now, deepfake technology is in its infancy, and it can be easily recognized as fake. However, deepfake technology is quickly maturing and increasingly becoming more difficult to detect.

While there are many initiatives from technology companies trying to combat deepfakes, it will be a long time until we finally outpace deepfake creators who, more often than not, can quickly find new ways to stay ahead of detection methods and continue to cover their tracks.

Culture wars, misinformation & Ukraine further complicate firms' social media policies and presence
/en-us/posts/news-and-media/culture-wars-misinformation-social-media/ – Tue, 10 May 2022

Firms should reassess policies guiding employees' social media posts on accounts linked to the company and weigh whether they are going too far in monitoring employees' opinions online. Intrusive monitoring risks breaking privacy laws as well as being unethical, say compliance and ethics experts.

"Companies can't effectively manage social media risk," says Alison Taylor, an executive director at Ethical Systems in New York and an adjunct professor at NYU Stern School of Business. "They can't do that anymore, and it's hard to let go of the idea that they can. Companies have lost control of culture and what employees are saying and doing."

Social media policies are no longer solely a question of controlling employee speech, although that remains a big part of it. Firms also attempt to control their image online, with some resorting to unethical tactics including paying employees to post positive content about the firm using their personal accounts.

"Lots of US companies are trying to reward staff for supportive posts and otherwise manipulate what they say online," Taylor explains. "They are also reduced to begging them not to leak – control is not a smart approach."

Culture wars in the C-suite

Firms now must address what happens when social media brings culture-war issues – for example, anti-vaccine sentiment, women's health, LGBTQ+ rights – into the workplace.

Taylor, who is researching companies’ approach to social media, says the culture war side of social media is a big problem in the United States. Companies are under endless pressure to take positions on social issues, and it is becoming a human resources problem, which has prompted some companies to boost monitoring. “What I’m hearing is a lot of employee complaints: ‘I saw him wearing a MAGA t-shirt on Facebook and I don’t want to work with him anymore’,” Taylor adds.

Taylor pointed to the case of Levi's Jeans' brand president, Jennifer Sey, who quit the company in February, as an example of what happens when social media, culture wars, and misinformation collide in the C-suite. Sey says she left after Levi's chief executive asked her to curtail tweeting about COVID-19-related school closures, mask mandates, and Dr. Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases and the chief medical advisor to the president.

Levi's said it disputed Sey's claims that she was punished "because her views veered from 'left-leaning orthodoxy.'" Levi's social media policy says employees are free to discuss their views but that it expects employees to protect the company's "reputation and image."

A new front in Ukraine

The war in Ukraine has opened a front on social media, with both sides attempting to influence hearts and minds online. In March, the White House briefed TikTok stars about the war in an effort to shape online content. Meanwhile, the Russian government is paying TikTokers to produce pro-Kremlin content; and the Ukrainians, too, have a TikTok presence linked to the war effort.

Firms should consider issuing additional guidance about posting on emotionally divisive issues and sharing posts that could be misinformation. Firms should be aware social media is monitored by government entities, which could invite future problems, says Frank Brown, a senior director and head of regulatory consulting at Hogan Lovells in London, adding that this is particularly an issue for employees active on LinkedIn where their views appear alongside their employer’s name.

"LinkedIn has changed massively over the last two years in terms of what people are sharing," explains Brown. "I generally take the view that if you're posting something on LinkedIn, you're representing your organization to a much larger degree than you would do on Facebook or Twitter."

Therefore, employees should pause before posting or reposting content as a rule, but that pause becomes critical when misinformation and propaganda are in abundance.

"There's an awful lot of false information out there. I think the Ukraine situation – as with any conflict – has brought some allegations of some horrible things that the Russians are supposed to be doing. It may be true, but equally it is commonplace to see misinformation," Brown says.

Policy and presence

Clearly, firms' approach to social media has evolved over the past 15 years. Now, firms need to articulate social media policies more clearly and add context, Brown observes. Most of the time, however, these policies simply remind employees not to bring the firm into disrepute and to be careful about what they say about the company.

Firms’ use of social media to manage their brand and market products using employee accounts has muddied the water somewhat. Firms create their own content and encourage employees to share it 鈥 sometimes paying them to do so. That makes it harder for firms to justify banning employees from social media and raises ethical difficulties about using employee accounts to promote official content.

"People should have a social life, should have a personal life, and should be able to articulate personal views up to a degree. Obviously, that degree is subject to interpretation, and it differs from person to person," Brown says. "Do we perhaps self-censor a little bit more than we used to because of what's happening? I think that's probably true."
