Social media Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/social-media/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

The child exploitation crisis online: Gaps in digital privacy protection
/en-us/posts/human-rights-crimes/children-digital-privacy-gaps/
Wed, 04 Feb 2026 18:39:04 +0000

Key highlights:

      • Fragmented protection creates vulnerability — Current US privacy laws operate as a patchwork system without comprehensive national standards, leaving children and other users exposed to data exploitation across state lines and international borders.

      • Body data collection opens future manipulation potential — Virtual reality platforms collect granular biometric data through sensors that can reveal deeply sensitive information about users.

      • Use-based regulations outlast technology changes — Restricting harmful applications of data provides more durable protection than the current regulatory approach, which relies on categorizing rapidly evolving data types.


Virtual reality (VR), social media, and gaming companies have long avoided robust content moderation, largely out of concern over implementation costs and the risk of alienating users. This reluctance stems from platforms wanting as wide a pool of users as possible. Yet the shortsightedness of this decision has consequences, including insufficient protection of children and long-term costs to companies' bottom lines.

The child exploitation crisis in digital spaces requires better laws and a reimagining of how VR, gaming, and social media companies balance privacy, safety, and accountability across diverse platform architectures, according to Mariana Olaizola Rosenblat, an expert in child exploitation methods in digital spaces and Policy Advisor at the NYU Stern Center for Business and Human Rights.

Limitations of existing regulatory frameworks

The current regulatory landscape is insufficient to protect children online. The lack of a comprehensive national privacy law in the United States, the use of consent mechanisms, and the haphazard rollout of age verification all expose protection gaps and come with economic and psychological costs, according to Olaizola Rosenblat. For example, some of the dangers include:

Gaps in patchwork of regulations leave children vulnerable — Regulatory demands for child safety often collide with privacy protections, creating contradictory obligations that platforms cannot realistically satisfy. In the absence of unified standards, companies operate in a jurisdictional maze that leaves most users, including children, exposed to data exploitation across borders.

America's regulatory landscape remains especially fragmented, with no comprehensive national privacy law to provide consistent protection. One state law comes close to establishing meaningful safeguards, according to Olaizola Rosenblat, yet it still permits companies to collect data even after users opt out of the sale or sharing of their data.

Mariana Olaizola Rosenblat, of the NYU Stern Center for Business and Human Rights

Federal reform attempts, including the , collapsed amid conflicts between states demanding stronger protections and tech lobbyists aligned with conservative representatives seeking weaker standards. In addition, child-specific laws, such as the , provide protection only for those under 13, which leaves older minors and adults vulnerable.

“Once users turn 13, they fall off a regulatory cliff,” says Olaizola Rosenblat. “There is no federal child-specific data protection regime, and existing state-level safeguards are patchy and largely ineffective for teens.”

Internationally, the European Union's General Data Protection Regulation (GDPR), although considered the gold standard for regulation, suffers from a persistent gap between its ambitious text and its uneven enforcement.

Age verification tensions — These regulatory shortcomings also are evident in debates over age verification. Protecting children requires collecting data to determine user age, yet privacy advocates frequently oppose such measures. Without pragmatic guidance acknowledging these inherent trade-offs, platforms often face contradictory obligations they cannot simultaneously fulfill.

Current consent frameworks offer little protection — Current consent mechanisms offer users an illusory choice that fails to protect children from data exploitation. Even relatively robust frameworks like the GDPR rely on consent models in which refusal means exclusion from digital spaces essential to modern life. This approach proves particularly inadequate for younger users. Indeed, one survey found that about one-third of Gen Z respondents expressed indifference to online tracking.

VR data collection may allow future exploitation

VR platforms differ fundamentally from traditional gaming spaces and social media platforms. Users with VR headsets embody avatars that move through thousands of interconnected experiences. While no actual touching occurs, the experiences feel visceral. Indeed, the psychological and physiological responses can mirror aspects of real-world experiences, including sexual exploitation, even though no physical contact occurs.

Olaizola Rosenblat explains that the data collected from the sensors can open up the potential for future exploitation. “The inferences that can be drawn from your body-based data collected by these sensors are granular and often intimate,” she explains. “The power that gives to companies is pretty remarkable in terms of knowing things about you that you might not even know yourself.”

Recommended actions to address challenges

Addressing the child exploitation crisis in digital spaces requires coordinated action, according to Olaizola Rosenblat, and that needs to include:

Universal protection standards — Corporate action in partnership with legislators is necessary for effective reforms that protect all users rather than fragmenting safeguards by age or vulnerability status. Current approaches that shield only younger children create dangerous gaps, leaving adolescents and adults exposed once they age out of protected categories.

Enforce existing regulations — Even well-crafted legislation proves meaningless without robust enforcement mechanisms. Commitment by government agencies, along with appropriate levels of funding, is the most meaningful way to achieve desired outcomes.

Technology-agnostic use regulation — Rather than attempting to categorize rapidly evolving data types, companies in the VR, gaming, and social media sectors must work with legislators to restrict harmful uses of data, such as manipulation, exploitation, and unauthorized surveillance, regardless of technical collection methods. Regulating data use, rather than regulating categories of data such as personally identifiable information, is the right approach.

Public mobilization is essential — Citizens must understand that the stakes of data exploitation extend beyond corporate collection to include hacking vulnerabilities and manipulative deployment. Without consumer demand for better protection and the willingness of legislators to pass the laws, regulation will not happen.

The path forward

The digital exploitation of children demands immediate action that transcends partisan divides and corporate interests. Only through coordinated regulatory reform, meaningful enforcement, and sustained public pressure can we create digital spaces in which innovation thrives without sacrificing our privacy and safety. The cost of continued inaction grows steeper each day we delay.


You can find out more on how organizations and agencies are fighting child exploitation here

New US law emerges to fight “revenge porn” amid surge in reported cases
/en-us/posts/human-rights-crimes/revenge-porn-law/
Thu, 05 Jun 2025 13:40:56 +0000

Over the past 15 years, the convergence of increasing use of social media and ubiquitous use of mobile devices has sparked a new area of crime known as image-based sexual abuse (IBSA), or non-consensual intimate imagery — also known as revenge porn and digital sextortion. Indeed, recent studies suggest this area of illegal activity has jumped by double digits year-on-year.

In the United Kingdom, for example, the Revenge Porn Helpline saw an increase in reports in 2023 compared to the previous year. Sextortion remained the predominant concern and made up more than one-third (34%) of all cases. The data reveals that overall, there was a 54% increase in sextortion cases compared to 2022, highlighting a concerning pattern that hasn't been seen since 2021.

In the United States, the data is a little dated, but double-digit increases are suggested there too. In 2016, , reported being victims of nonconsensual pornography. A larger , showing — all in the space of just three years.

Given the rise in the use of generative AI to create deepfakes, it is easy to surmise that cases of IBSA will continue to rise for some time.

Improvements in legal landscape on the horizon in the US

Obtaining justice for victims of IBSA around the world is no easy task. Only a fraction of countries had laws concerning IBSA as of 2018. In addition, prosecuting offenders in the US up until now has relied on a patchwork of local and state laws. Indeed, 48 states plus the District of Columbia and Guam have enacted laws addressing nonconsensual pornography.

The good news is that change is on the horizon with the bill known as the TAKE IT DOWN Act, which was passed on April 28 and signed into law in late May.

The new law prohibits the intentional disclosure of nonconsensual intimate visual depictions, including digital forgeries. It amends Section 223 of the Communications Act of 1934 to criminalize the publication of such depictions without consent, especially if the images are intended to harm or result in harm, including psychological, financial, or reputational damage.


Join us for a free online Webinar: World Day Against Trafficking in Persons to learn more about the complexities of human trafficking, the impact on victims, and effective strategies for prevention and intervention


The Act also defines key terms, such as identifiable individual and intimate visual depiction, which are important concepts for enforcing the law. For example, an identifiable individual is someone who appears, in whole or in part, in an intimate visual depiction and whose face, likeness, or other distinguishing characteristic (such as a unique birthmark or recognizable feature) is displayed in connection with such depiction. And an intimate visual depiction carries the meaning given in section 1309 of the Consolidated Appropriations Act, 2022, which includes visual representations involving nudity or sexually explicit conduct.

In addition, the bill establishes penalties for offenses involving both adults and minors. For adults, violations can result in fines or imprisonment of up to two years, while for minors, the penalties increase to fines or imprisonment of up to three years. Exceptions to the prohibition include lawful activities by law enforcement and disclosures made in good faith for legitimate purposes such as legal, medical, or educational needs.

The Act also mandates covered platforms, including websites, services, or applications serving the public with user-generated content, to establish a notice and removal process for nonconsensual depictions. Platforms must remove such content within 48 hours of a valid request and are protected from liability for good faith removal actions. The U.S. Federal Trade Commission is empowered to enforce compliance, treating violations as unfair or deceptive acts.

More action to protect potential victims needed

Research into the negative impacts on victims is significant. Among IBSA victims, 93% experienced considerable emotional distress, and 82% faced substantial challenges in social, work, or other vital aspects of their lives, according to one study. Meanwhile, just over half (51%) of victims revealed they had contemplated suicide. Additionally, 55% were concerned about their professional reputation being damaged due to IBSA, and 39% reported that it had negatively impacted their career.

These negative implications point to the need for additional measures beyond legal avenues to safeguard potential victims. One area of particular need is building awareness among adults under 40, who account for a large share of IBSA cases.

Likewise, increasing education for caregivers and parents in order to better prevent their children from becoming victims is also critical. Resources such as Thomson Reuters' Safe Settings campaign and the U.S. Department of Homeland Security's own campaign are great examples of awareness-building efforts.

In particular, caregivers should protect their children by teaching them about the risks of sharing personal or inappropriate content online and the lasting nature of digital information. They should also guide them on how to set up privacy controls on mobile apps, recognize online predators, and find trusted adults for support. Parents should also stress the consequences of sexting and cyberbullying and emphasize that sharing sexual abuse material is illegal.

The TAKE IT DOWN Act represents a significant step forward in the fight against IBSA and digital exploitation, offering new legal avenues to protect victims and hold offenders accountable. However, legal measures alone are not enough, and continued efforts in education and awareness are essential to prevent future occurrences, empower potential victims, and foster a safer online environment for everyone.


You can find out more about the ways to strengthen online privacy rights here

Study highlights need for new tools, tactics & frameworks to combat child exploitation & trafficking
/en-us/posts/human-rights-crimes/new-tools-tactics/
Thu, 17 Apr 2025 02:11:28 +0000

Thanks to the proliferation of online access around the world, the exploitation of children, especially on the internet, is getting worse. In fact, an analysis of a database of unidentified children revealed that more than 60% of victims were prepubescent, including infants and toddlers, and that 84% of images in the database depicted explicit sexual activity, according to a joint report published by INTERPOL and ECPAT International in February 2018.

Since then, children have accounted for 38% of victims detected globally, and since 2019 there has been an increase of approximately 38% in recorded child victims, according to a more recent report.




To combat this growing issue, the HEROES Project, which is part of the Horizon 2020 Research and Innovation program funded by the European Commission, collaborated on a cross-jurisdictional initiative aimed at addressing human trafficking and child sexual abuse, including that which occurs online.

According to Hainzl, an anti-trafficking expert at the International Centre for Migration Policy Development (ICMPD), the project brings together a consortium of 24 partners in 17 countries, including international organizations, civil society groups, research institutes, and educational institutions. ICMPD's work is focused on research and prevention strategies developed with partner organizations in Spain, the United Kingdom, Bangladesh, and Colombia. These four countries were selected to provide varied perspectives and approaches to tackling these issues.

Study reveals common factors

The HEROES Project identified several common factors across the countries studied that place children at risk of online sexual exploitation and abuse. Some of these factors include:

Environments with emotional neglect — Children from dysfunctional family backgrounds who lack emotional support consistently appear most susceptible to exploitation, according to Hainzl. Adolescents and teenagers represent a particularly vulnerable demographic due to their developmental stage and increasing online presence. While girls are targeted more frequently, boys are not exempt from these dangers. Similarly, children with psychological or emotional challenges, including those experiencing isolation or loneliness, also face heightened risk as they may seek connection online, according to the project's findings.

Perpetrators using similar tactics to exploit victims online — Research on child sexual abuse reveals important distinctions between online and offline exploitation patterns. Offline child abuse and child sexual abuse is often perpetrated or initiated by individuals close to the child, including family members or acquaintances, and may correlate with socioeconomic factors. By contrast, online harassment often involves strangers contacting numerous potential victims simultaneously and pursuing those who respond. This distinction highlights the different mechanisms of exploitation that protection frameworks must address.

Economic vulnerability — Financially disadvantaged households also had higher levels of exploitation across the studied countries. Poverty and economic inequality create conditions in which children and families become vulnerable to various forms of abuse and exploitation.

Gaps in implementation of legal frameworks — While legal frameworks often exist on paper, inadequate implementation remains a critical issue, with authorities frequently lacking resources or training to enforce existing laws. Underdeveloped protection and migration services further compound these problems by failing to identify potential victims or provide adequate support.

Legal and judicial challenges

The rapid evolution of online technologies and reliance on digital tools by a growing number of people across the globe have brought about a new wave of child exploitation challenges for law enforcement agencies and judicial systems worldwide. As the internet knows no borders, the complexities of investigating and prosecuting online crimes that involve multiple jurisdictions, varying legal frameworks, and ever-changing digital landscapes have created significant obstacles for authorities.

Also, one of the most pressing issues is the inconsistency in terminology and legal definitions related to child sexual abuse among countries. “There is a problem that is noticed in the terminology being different, especially when it comes to cross-border cases,” says Hainzl, adding that this creates confusion in prosecution efforts. And while many countries have legislation addressing human trafficking and child sexual abuse, these laws often lack provisions specifically tailored to the online aspects of these crimes.

Further, law enforcement agencies face numerous technical obstacles when investigating online exploitation. Encrypted communications, rapidly changing online behaviors, and difficulties in obtaining and preserving digital evidence all impede successful investigations. “Law enforcement should be able to respond to this very quick and very often, but it is not possible because of a lack of knowledge on current and changing tactics and tools, technology, and equipment,” explains Hainzl.

The global fight continues

The global fight against online child sexual abuse requires coordinated international action across multiple fronts. One fundamental starting point would be the development of a common international definition for online child sexual abuse, similar to how the UN Palermo Protocol established a unified understanding of human trafficking, Hainzl says.

In addition, comprehensive training programs for law enforcement and all responsible professionals are essential along with efforts to raise public awareness about online risks. Educational initiatives must target children directly, teaching them to recognize potential dangers in online interactions.

The corporate sector bears significant responsibility in this fight because their platforms often facilitate exploitation. Already, proactive identification systems using algorithms to detect potential cases represent a shift from purely reactive approaches to prevention-focused strategies. And developing regulatory frameworks similar to existing corporate sustainability due diligence requirements could hold companies accountable for monitoring and preventing exploitation on their platforms, says Hainzl.


You can find more on this topic in the Thomson Reuters Institute's Human Rights Crimes Resource Center

How technology is essential to protecting children from predators
/en-us/posts/human-rights-crimes/technology-protecting-children/
Wed, 18 Dec 2024 13:03:18 +0000

Every child deserves a safe childhood. Yet each year, countless children around the world go missing or are put in dangerous and vulnerable situations. “There is no corner of the world not touched by these issues,” says Michelle DeLaune, President and CEO of the National Center for Missing and Exploited Children (NCMEC). “We remain committed to finding new and better ways to reach the people who need us most.”

Founded in 1984, NCMEC is the nation's largest and most influential child protection organization, dedicated to finding missing kids, stopping child sexual exploitation, and preventing crimes against children. Fulfilling that mission requires a partnership among stakeholders across society. “Protecting children is everyone's responsibility. It requires both the public and private sector working together, bringing their expertise, tools, and resources to make a difference,” DeLaune says.

In collaboration with law enforcement agencies, families, and child welfare organizations, NCMEC has contributed to the recovery of missing children in more than 400,000 cases.

Unfortunately, child exploitation has reached alarming new heights, fueled partly by the widespread use of social media and advanced technology. Paradoxically, these same digital tools are crucial in combating this issue. NCMEC harnesses online data and analytics to provide vital insights, recognize patterns, and develop global initiatives aimed at safeguarding children. This tech-driven approach enables NCMEC to stay at the forefront of child protection efforts worldwide.

Bringing critical information to light

Indeed, AI-driven technology enables data analysis tools that can comb through copious amounts of data to identify patterns and trends in child exploitation and abduction cases. This helps in pinpointing potential risks and focusing resources where they are most needed. “We need technology that enables us to connect the dots, target cases where there is the most urgent risk to children, and act on them quickly,” DeLaune explains. “We have so much data coming in — more than human beings can sift through to surface the right information. It's the proverbial needle in a haystack; but in this haystack, the needles we are searching for are children in need of assistance.”

NCMEC has partnered with Thomson Reuters (TR) for 15 years, and according to DeLaune, partnerships with TR and others allow the organization to have access to tools that “help us do our job faster and get the right information out to law enforcement and the public. That's been a game changer.”

Also, NCMEC analysts use technology on a daily basis to support missing and exploited child investigations, according to Angela Aufmuth, Executive Director of Analytical Programs at NCMEC. “When a report on a missing or exploited child comes into the Center, time is of the essence. We need to analyze data rapidly,” Aufmuth says. “With technology, we're able to access data from multiple sources, quickly putting together pieces of the puzzle. This enables us to identify possible locations and persons of interest and get that information out to law enforcement agencies so they can perform their investigations in the field.”

Technology can focus investigations

Of course, access to public records is vital. Technology allows NCMEC to quickly sift through data, such as records of deaths, to focus law enforcement efforts more effectively. Whether working on a missing child case, a case of suspected sex trafficking, a noncompliant sex offender, or a child abduction, having access to public records information is absolutely critical, explains Aufmuth.

“Technical solutions have given us the ability to access different types of data very quickly,” she says, adding that data on deaths, for example, have been very useful. “When conducting noncompliant sex offender operations, law enforcement will give us a large amount of information,” Aufmuth notes. “We're then able to batch that data against the technical resources, quickly identifying individuals who are deceased. This enables law enforcement to conduct investigations in a more focused and efficient manner.”

At NCMEC, the focus is entirely on supporting children and their families as they navigate through unimaginable hardships. “Having access to these technology tools is absolutely essential to our work. If we did not have access to the data and analysis they provide, we would not be able to support the families we serve and help bring their children home,” Aufmuth explains.

DeLaune agrees, adding: “NCMEC is a lifeline for families searching for a child or helping them rebuild their life after an exploitation issue. And they are grateful for anyone who can enhance our ability to serve them. The families are counting on NCMEC — and in turn, NCMEC counts on technology partners to deliver results.”


You can find more information here.

Awareness of caregivers is key to mitigating online child exploitation via “safe settings”
/en-us/posts/human-rights-crimes/online-child-exploitation-safe-settings/
Mon, 02 Dec 2024 12:40:11 +0000

Tragically, a record number of children today are being exploited online, and no child is immune from falling victim to sexual abuse. In fact, online child exploitation can take various forms, including the distribution of child sexual abuse materials, sextortion schemes, and child sex trafficking.

Sextortion is a form of online blackmail in which children are tricked online into sending intimate pictures of themselves to people who then threaten to distribute the sexual images unless the victim complies with their demands, often money or additional explicit images.

Some of the facts about sexual exploitation among children are stunning, including that:

      • globally, more than 300 million children under the age of 18 have been affected by online child sexual exploitation and abuse in the last 12 months alone, according to Childlight Global Child Safety Institute;
      • there were more than 36 million reports of suspected online child sexual exploitation in the United States alone in 2023, according to the U.S. National Center for Missing & Exploited Children (NCMEC);
      • between 2021 and 2023, there was a more than 300% increase in reports of financial sextortion and online enticement of children in the US, according to the NCMEC.

Moreover, the growing use of advanced content-creation technology like generative AI (GenAI) to produce material depicting child sexual abuse is further fueling the rising threat to children. This includes the use of AI-generated images of such materials to extort real images from children, as well as AI-generated images of real children that are manipulated to make it appear as though the child is nude or engaged in sexual acts.

Raising awareness among caregivers is critical

Education campaigns, such as the U.S. Department of Homeland Security's and Thomson Reuters' Safe Settings campaign, continue to provide parents, caregivers, and others with children in their lives the critical information they need to help keep children safe online. Real online safety starts, and continues, by talking with kids about the dangers of online exploitation. Indeed, it's important to create physical safe settings by sharing age-appropriate resources on how kids can safely navigate online, and it's also important to use the built-in safe settings on those websites and platforms to which kids have access.


You can learn more here.


When it comes to protecting younger children from online risks, it’s essential to teach them basic safety rules, which Homeland Security suggests should include:

      • Protect children with basic steps — Start by instructing them not to click on pop-ups, share personal information online, or trust people they meet online.
      • Report questionable content — Create a plan with them on what to do if they encounter inappropriate content, such as looking away and telling a trusted adult.
      • Teach kindness — Teach children online etiquette and respect for others, and ensure they know to whom to turn for help if they feel disrespected or uncomfortable.

For tweens and teens, the conversation should focus on more advanced online safety topics, such as:

      • No sharing of personal information — Discuss the dangers of posting personal information or inappropriate content, and the permanency of online data.
      • Teach privacy, recognize predators, and find help — Ensure privacy and teach teens how to set up privacy controls on their devices, recognize warning signs of online predators, and identify trusted adults to whom they can turn for help.
      • Discuss sexting, cyberbullying, and other legalities — Share information about sexting, cyberbullying, and the importance of not sharing sexual abuse material, which is illegal.

For caregivers, it is important to recognize that educating kids about online safety is a process. Having an open and honest conversation with them about the potential risks and consequences of their online behavior occurs best by using two-way dialogue to ensure your child feels comfortable sharing their concerns and experiences.

Steps caregivers can take now

In addition to this open conversation, there are several practical steps that caregivers can take to further safeguard their child’s online activities. These include password-protecting their app store and gaming downloads, setting time and area limits for device use, and setting all apps, games, and devices to private. Turning off location data services on social media and non-essential apps in order to prevent unwanted tracking is also recommended.

Further, educating children about the permanence of online data and the potential long-term consequences of their online actions is also key to helping them avoid negative outcomes later. One tool for parents is creating a contract with their child that outlines expectations around online behavior and consequences for misbehavior. Also, create a safety plan in case the child encounters a potentially dangerous situation. Monitoring their friend lists to remove any strangers and warning them about the dangers of chatting with unknown individuals on different platforms are additional best practices.

If a child does fall victim to a predator, avoid forwarding any explicit content or deleting any messages, images, or videos. Instead, save evidence — including usernames, screenshots, and images or videos — for law enforcement to collect directly from the device.

Raising awareness among caregivers is paramount in the fight against online child exploitation. And arming caregivers with the knowledge and tools to create a safe online environment can mitigate the devastating effects of child exploitation and ensure that every child has the opportunity to thrive in a safe and healthy digital world.


You can learn more here.

New laws and regulations around child safety and privacy raise significant questions (October 1, 2024)

Over the last five years, we have seen a major shift toward the regulation of children's safety and privacy in the digital environment. The shift has been driven by increasing public concern about the risks children face online and the growing realization that the internet was not designed with them in mind.

International governments and non-governmental bodies also have responded to this challenge. In 2021, the United Nations issued new guidance on children's rights in the digital environment, the adoption of which made explicit that children's rights apply in the digital world as well as the real world. The rights covered by this action included the right to privacy, freedom of expression, and protection from commercial exploitation.

Also in 2021, the Organisation for Economic Co-operation and Development (OECD) produced its Recommendation on Children in the Digital Environment, which included principles for a safe and beneficial digital environment for children, with the child's best interests as a primary consideration. The OECD's action also highlighted the need for risk-based and proportionate regulation, supported by measures that provide age-appropriate child safety by design.

As a range of legislation has been announced by the European Union, the United Kingdom, and the United States, it is clear that policymakers are moving away from industry self-regulation as a solution.

Europe and UK pass groundbreaking laws

The most significant new legislation comes from the EU and UK. In 2022, the EU passed the Digital Services Act (DSA), which includes a requirement that risk assessments be undertaken of the impacts on children's rights online. The DSA also includes a prohibition on targeted advertising to children, and it imposes stronger obligations on the largest platforms and search services, including transparency requirements to publish risk assessments.

In the UK, the Online Safety Act (OSA) was passed in 2023, imposing duties of care on platforms that provide user-to-user services and search services. This includes risk assessment duties for services that are likely to be accessed by children. The OSA also goes further than the DSA in setting out the types of content that platforms must prevent children from encountering online.


In 2021, the OECD produced its Recommendation on Children in the Digital Environment, which included principles for a safe and beneficial digital environment for children, with the child's best interests as a primary consideration.


Also in the UK, the government's data protection regulator – the Information Commissioner's Office (ICO) – has introduced the Age Appropriate Design Code (AADC), which sets out 15 standards that online services likely to be accessed by children must follow to mitigate risks of harm related to data and privacy. This includes a requirement to conduct data protection impact assessments and to implement protections by default. The ICO will apply the AADC when enforcing the UK General Data Protection Regulation.

A recent report tracked changes made by online platforms between 2017 and 2024, highlighting 128 changes made by Meta, Google, TikTok, and Snap. The report also discusses how these platforms have increasingly focused on safety-by-design changes, with clear links to the influence of the AADC, DSA, and OSA, despite the relatively early stage of implementation.

US moves towards new legislation and regulation

Meanwhile, the US has had a longstanding law at the federal level in the form of the Children's Online Privacy Protection Act (COPPA), which has been in place since 1998. COPPA imposes certain requirements on operators of online services directed at children under 13 years of age, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under 13.

There is now a debate in the US about the limitations of the safeguards provided in COPPA, and the U.S. Congress is weighing the introduction of the Kids Online Safety Act (KOSA), which would create a duty of care for companies operating a platform. Social media platforms would also have to provide children with tools to protect their personal data, disable addictive product features, and opt out of personalized algorithmic recommendations.


There is now a debate in the US about the limitations of the safeguards provided [in previous laws], and the U.S. Congress is now debating the introduction of the Kids Online Safety Act, which would create a "duty of care" for companies operating a platform.


The U.S. Senate passed its bill on July 30 by a vote of 91–3, but it is still unclear whether it will pass the U.S. House of Representatives and reach the statute book before the coming election in November. The law is somewhat controversial, as some free speech advocates are concerned that the definition of harm spelled out in KOSA is too vague and could lead to censorship of information.

A number of US states have also passed their own legislation. In fact, a recent study from the University of North Carolina at Chapel Hill found that 13 states had passed a total of 23 laws regarding children's online safety.

Not all of these laws, however, have passed judicial muster. Most famously, the California Age-Appropriate Design Code Act (AADC), which was modeled on the UK approach and passed in 2022, was blocked by a federal court a year later on the grounds that its requirements for companies could violate First Amendment free speech rights. California Attorney General Rob Bonta filed an appeal of the preliminary injunction with the U.S. Court of Appeals and awaits the final judgment. (Maryland also passed its own version of the law in May 2024.)

Conclusion

With other countries, such as Canada and Brazil, also considering similar laws and regulations, there is a clear direction toward regulating children's safety and privacy online. This creates responsibilities for companies that provide online services to assess the harms and risks to children and to develop solutions, by design, that mitigate these risks. Further, these new duties will require companies to develop child rights impact assessments for their online services and for new features and design changes. Companies will also have to demonstrate how their design solutions effectively protect children in practice.

The new laws and regulations are also moving companies to use age-assurance technologies, to test and understand their effectiveness, and to consider other consequences for privacy and the overall user experience.

In response to these new regulations, many companies are developing new governance methods, policies, and work processes to ensure the requirements of these new laws are embedded into the companies' design and engineering practices. Providing evidence and documentation will also be crucial to demonstrate companies' compliance and accountability to regulators.

Indeed, many of these regulations are still new, and companies are seeking further guidance about what the requirements mean in practice. We also await the first case decisions from regulators, which will illustrate how they set the bar in terms of unacceptable risks and practices among online platforms.

Alongside these questions, the new focus on content risks – not just data protection and privacy risks – will also create significant challenges for platform operators in balancing new regulatory requirements with their users' expected freedom of expression.

Practice Innovations: Seeing is no longer believing – the rise of deepfakes (July 18, 2023)

The most recent Indiana Jones movie features a digitally de-aged Harrison Ford. The moviemakers used artificial intelligence to comb through decades-old footage of the actor and create a younger Ford.

This technology is called deepfake, and it is not just catching on in Hollywood; it is also a growing threat in the hands of criminals. Recently, more than $240,000 was stolen by someone pretending to be an executive from a British energy company. This event does not seem all that out of the ordinary, except that the executive on the phone was not even a real person. The thieves used AI-generated audio to imitate the real executive's voice – and they got away with it.

Using artificial intelligence (AI), deepfake technologies can generate or manipulate digital media, particularly video and audio content, in a way that is difficult for viewers to distinguish from authentic, original material. It involves using machine learning algorithms to synthesize new content that is based on existing data, such as images or videos of real people.

Deepfake technology has the potential to be used for both positive and negative purposes. On the positive side, it could be used to create more realistic visual effects in movies or to generate realistic simulations for training purposes. On the negative side, it could be used to spread false information or to manipulate public opinion by creating fake videos of people saying or doing things that they never actually said or did. There are also concerns about the potential for deepfake technology to be used for malicious purposes, such as creating fake videos of politicians or other public figures in order to discredit them.

What are some other benefits of deepfake technology?

There are many potential benefits of deepfake technology, including:

      • Educational applications 鈥 Deepfake technology could be used to create educational videos or simulations that are more engaging and interactive for students.
      • Improved visual effects 鈥 Deepfake technology could be used to create more realistic visual effects in movies, television shows, and other forms of media. This could lead to a more immersive and engaging viewing experience for audiences.
      • Enhanced simulations 鈥 Deepfake technology could be used to create realistic simulations for training purposes in a variety of industries, such as aviation, military, and healthcare. This could help to prepare professionals for real-life scenarios and improve their decision-making skills.
      • Increased accessibility 鈥 Deepfake technology could be used to create subtitles or translations for audio and video content, making it more accessible to people who are hearing impaired or who speak different languages.

What are some downsides of deepfake technology?

Not surprisingly, there are several potential downsides to deepfake technology, including:

      • Misinformation and propaganda 鈥 Deepfake technology could be used to spread false information or propaganda by creating fake videos or audio recordings of people saying or doing things that they never actually said or did. This could have serious consequences, such as undermining public trust in institutions, sowing political discord, or even inciting violence.
      • Privacy violations 鈥 Deepfake technology could be used to create fake videos or audio recordings of people without their consent, potentially violating their privacy.
      • Personal harm 鈥 Deepfake technology could be used to create fake videos or audio recordings of people that are embarrassing, offensive, or damaging to their reputation. This could lead to personal harm or distress for the individuals depicted in the fake content.
      • Legal issues 鈥 Deepfake technology could create legal issues related to intellectual property, copyright, and defamation. For example, if a deepfake video is used to defame someone or to falsely attribute a statement to them, it could lead to legal action.
      • Ethical concerns 鈥 There are also ethical concerns about the use of deepfake technology, particularly with respect to consent and transparency. It is important to ensure that people are aware when they are interacting with deepfake content and that they have given their consent for their images or voices to be used in this way.

How to guard against deepfake technology

Even though someone can fabricate a persuasive fake video, software engineers, governments, and journalists can often still determine whether it's real or fake, says a disinformation expert at the Stanford Internet Observatory. Usually there are tells – clues to a careful observer that a deepfake is at work, such as something that doesn't look quite right.

For example, in a widely circulated deepfake video of Ukrainian President Volodymyr Zelensky, he appears to surrender to Russia in the current conflict. However, his oversized head and peculiar accent identified the video as a deepfake, and it eventually was removed from social media. Unfortunately, as deepfake technology improves, these tells will become harder to spot. Yet, as the technology evolves, detection tools will also evolve.

Despite the lack of mature detection tools, here are some suggestions that may help people and institutions guard against deepfake technology:

Be skeptical of media – It is important to be critical of the media you consume and to verify the authenticity of any video or audio content you come across. Look for signs that the content may be a deepfake, such as unnatural movements or distortions in the video or audio.

Get serious about identity verification – Users need to exercise due diligence in verifying that someone is who they claim to be.

If available, use deepfake detection tools – Detection tools are slowly maturing, with multiple companies working on them. For instance, Intel has introduced a real-time deepfake detection tool that assesses whether the subject in a video is real by detecting "blood flow" in their face.

Educate users – Familiarize your users with the types of deepfake content that are out there, and teach them how to be skeptical about media.

Government regulation – Regulatory or legal measures should be put in place to address the negative impacts of deepfake technology. Unfortunately, given the complex and evolving nature of this technology, many governments around the world are still working out how best to protect their citizens. Some governments, however, have already started to consider legislation that would prohibit the use of deepfake technology for malicious purposes.

Adopt a zero-trust security model – Zero trust is a new way to look at computer security. It works on the assumption that your networks are already breached, your computers are already compromised, and all users are potential risks: trust no one or anything, and always verify.

Confirm and deploy basic security measures – Basic cybersecurity best practices will play a vital role in minimizing the risk of any deepfake-enabled cybersecurity attack. Some critical actions you can take include: i) making regular backups to protect your data; ii) using stronger passwords and changing them frequently; and iii) continuing to secure your systems and educate your users.
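One of these habits, verifying that a downloaded file is actually the one its source released, can be sketched in a few lines of Python. This is a minimal illustration only (the function names are ours, and it assumes the publisher lists a SHA-256 checksum alongside the file):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large media files do not need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_checksum(path: str, published_hex: str) -> bool:
    """True if the local file's digest equals the checksum the publisher lists."""
    return sha256_of_file(path) == published_hex.strip().lower()
```

A checksum mismatch does not prove a file is a deepfake; it only shows the file is not the one the publisher released, which is reason enough not to trust or share it.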

What is the future of deepfake technology?

Right now, deepfake technology is in its infancy, and it can be easily recognized as fake. However, deepfake technology is quickly maturing and increasingly becoming more difficult to detect.

While there are many initiatives from technology companies trying to combat deepfakes, it will be some time before we finally outpace deepfake creators, who more often than not can quickly find new ways to stay ahead of detection methods and continue to cover their tracks.

Fraudsters targeting senior citizens with multiple financial scams (July 10, 2023)

Senior citizens – any individual 60 years old or older – are frequent targets of financial scams because fraudsters perceive them to be less tech-savvy and more financially stable. It's important to understand the nature of these financial schemes in order to minimize the risks of fraud.

Different types of scams

Government imposter scams

Government imposter scams cost seniors about $122 million in 2021. In these schemes, criminals pretend to be from the Internal Revenue Service (IRS), Social Security Administration (SSA), or Medicare.

These scammers may tell seniors that they owe a debt that must be paid immediately or face arrest, asset seizure, or termination of benefits. They also create a false sense of urgency in order to get their victims to act immediately and avoid talking with anyone who might detect the scam.

Seniors can avoid these schemes by following these guidelines from the FTC:

      • Don’t send cash, gift cards, or cryptocurrency to pay someone who claims they are with the government
      • Don’t give financial or other personal information to someone who calls, text messages, or emails claiming to be from the government
      • Don’t trust your caller ID
      • Don’t click on links in unexpected emails or text messages

Be aware that no government agency will contact a senior citizen or anyone else by phone, email, or text message to demand payments or personal information. This alone is an important safeguard to avoid becoming the victim of a government impersonator scam.

Sweepstakes scams

Sweepstakes or lottery scammers contact seniors by phone or mail to say that the senior has won the sweepstakes or lottery. Sometimes the prize is cash and sometimes it is something else of value such as a new car.


You can read the full white paper here.


The scammer will tell the senior that, in order to claim the prize, the senior has to, for example, wire a few hundred to a few thousand dollars to cover processing fees and taxes via gift cards, electronic wire transfers, money orders, or cash. Using these payment methods makes the transactions nearly untraceable.

Often the scammer may also suggest that the senior keep the prize a secret from their family so that it can be a surprise. This keeps the senior from discussing the scheme with others who might intervene to prevent the scam.

Illegal robocalls & phone scams

Many schemes rely on high-volume illegal robocalls that often originate overseas but spoof phone numbers with local area codes. These robocalls are a low-cost way for scammers to identify potential victims for other schemes, and people who were scammed through spam calls lost an average of $431 in 2022.

Robocall scams may do nothing more than target seniors to social-engineer them into answering "Yes" to a simple question such as, "Can you hear me?" When the senior answers, their response is recorded, and that "Yes" answer can be edited to make it sound as though the senior agreed to something. In fact, scams relying on phone calls resulted in $280 million in losses to people 60 and older in 2021, according to the FTC.

Computer tech support scams

Tech support scams can take different forms. In each case, the tech support person may claim to be from a well-known company such as Microsoft or Apple and request "remote access" to the senior's computer to fix the issue.

Once the scammers have access, the senior may be locked out of their computer until they pay a fee to the scammer. Alternatively, the scammer may use that computer access to steal financial account information, including passwords, that are stored on the computer. These tech support scams resulted in $73 million in losses to seniors in 2021.

Seniors should hang up on any unsolicited call that claims there is a problem with their computer. If there is an issue with a computer, seniors should get help only from a trusted source.

Grandparent scams

Grandparent scams occur when someone calls a senior claiming to be the senior’s grandchild or a law enforcement officer who has detained the senior’s grandchild. The scammer tells the senior that the grandchild is in trouble and needs money to help with an emergency, such as getting out of jail, paying a hospital bill, or leaving a foreign country.

The scammer will play on the senior’s emotions in an attempt to get the senior to wire money to the caller. The fraudster will create a sense of urgency and will pressure the senior to send money in the fastest way possible. If the senior does send money, the scammer will call back to ask for additional money for fees.

To reduce the risk of falling for this scheme, the Senate Special Committee on Aging recommends that seniors:

      • resist the urge to act immediately;
      • ask the person questions only the relative would know to verify their identity;
      • call a phone number that the senior knows belongs to their family member; and
      • check out the story with other members of the family even if asked to keep it a secret.

Romance scams

Romance scams that target seniors are increasing, resulting in significant financial losses.

Romance scams happen when a senior meets someone online who lavishes them with attention and affection. Romance scammers often use dating apps or social media to identify their victims. Some romance scammers target victims in order to use them as money mules. The scammers convince victims to receive the illegal proceeds of crime and then forward that money to the fraudsters, which could result in criminal charges against the unwitting senior.

To avoid becoming the victim of a romance scam, seniors should:

      • never send money or gifts to someone they haven’t met in person;
      • take it slowly and take steps to verify the identity of the person;
      • talk to someone they trust about their new love interest; and
      • cut off contact right away if they suspect a romance scam.

Conclusion

The costs of fraud against seniors have significantly increased across all of the top forms of fraud. Awareness of the schemes that scammers use to target older individuals can help reduce the risk that senior citizens will become victims of these crimes in the future.

Under the influence: Regulatory responses to financial promotions by social media influencers (June 6, 2023)

International and national regulatory bodies are raising a hue and cry over the use of social media celebrities who can influence consumers' financial decisions – so-called finfluencers. A number of high-profile cases has increased visibility in this sector, leading to a corresponding increase in regulatory focus on these finfluencers.

The European Securities and Markets Authority (ESMA) is conducting a joint exercise with national regulators throughout 2023, checking whether marketing communications, including collaborations with influencers, follow disclosure rules.

The European Commission has proposed updating the Distance Marketing of Consumer Financial Services Directive with new requirements regarding financial services contracts concluded at a distance, which will be added to the European Union's 2011 Consumer Rights Directive to reflect the digitization of financial services.

The European Parliament’s internal market and consumer protection committee recently voted for these new provisions to include rules requiring finfluencers to declare whether they are competent to promote a product and whether they have received any remuneration. Finance Watch, a European campaign group, wants the revised the Directive to ban influencer marketing of risky investment products.

Stricter rules

At the member-state level, stricter rules on the mass marketing of virtual currencies brought in by Belgium's Financial Services and Markets Authority (FSMA) have recently taken effect. Spain also introduced new advertising requirements for cryptocurrencies, and French politicians have proposed legislation that would ban influencers from promoting investments or digital assets.

In the United Kingdom, the Financial Conduct Authority (FCA) barred one firm from using paid-for social media promotions in February 2022 over concerns about its partnership with a finfluencer, and then separately warned the public about crypto-products promoted on social media. The FCA also stepped up interventions over financial promotions, with 8,582 amended or withdrawn in 2022 against 573 in 2021.

"Last year, we saw an increase in the use of bloggers and influencers on social media, such as Instagram, Facebook, and YouTube, promoting financial products, particularly investment products, to younger age groups," the FCA stated in its report for 2022. "We also saw an ongoing trend in the number of bloggers promoting credit on behalf of unauthorized third parties, with a particular growth in financial promotions targeting students."

There are two points to note, however. First, it is not automatically wrong for firms to use a social media celebrity or a finfluencer to promote a product or service, provided the rules are followed. And many are doing just that. A report from the International Organization of Securities Commissions (IOSCO) found regulators had seen more use of influencer marketing by firms, and 43% of European firms planned to increase their use.

“Many influencers have a follower base which is very attractive to financial services firms due to it matching their target customer demographic,” says Ian Taylor, head of crypto and digital assets at KPMG UK. “Using an influencer may also be cheaper than using traditional marketing channels. Until recently, the rules and repercussions surrounding the accuracy of advertising through social media channels were unclear. This meant that using influencers could allow firms to avoid the scrutiny that comes with more traditional marketing campaigns.”

Lack of trust

Second, the rise of finfluencers is symptomatic of other problems facing the retail investment market. Trust in traditional financial and investment firms is not great, and the expense deters people from getting formal financial advice. The FCA's 2020 Financial Lives survey found that 26% of consumers distrusted the industry, just 35% of 18–24-year-olds trusted it, and only 17% of those with £10,000 (US $12,527) or more in investable assets had sought advice. As a result, people are becoming self-directed investors, with finfluencers filling an information and confidence void.

“Finfluencers can help to democratize and demystify financial services,” explains Scott Guthrie, director-general of the Influencer Marketing Trade Body in London. “Financial education was once the preserve of the already-rich and middle class. Today finfluencers provide relatable, lived experiences. Through engaging storytelling, finfluencers connect with their communities on important topics that are often not being talked about by their friends and family.”

Regulators’ main concern seems to be what finfluencers promote and how. Many advertise that familiar boogeyman of digital finance: high-risk, sometimes fraudulent, crypto-schemes. Some finfluencers frequently fail to disclose that they are being paid, and the informality that makes them engaging can often mean their promotions break regulations.

An IOSCO report on retail market misconduct published in March said finfluencers make investment more accessible but cause problems with transparency and with whether advice is being given by authorized persons. The report recommended that, where applicable, regulators remind firms that they are responsible for the online communications of affiliates such as finfluencers. IOSCO also said regulators should provide guidance on finfluencers' obligations and be ready to take enforcement or other disruptive action against those who promote misleading and deceptive products or information. The organization further recommended that regulators go to where the fight is and use social media platforms to target fraudsters and alert investors.

To that end, Belgium's FSMA is backing its new rules on advertising virtual currencies with financial education about the assets, including videos and a game aimed at young people.

In early April, the FCA teamed up with the Advertising Standards Authority (ASA) and social media celebrity Sharon Gaffka to educate finfluencers about the obligations and risks they face. The FCA stated it wanted to work with finfluencers to keep them on the right side of the law, because they often tout products without knowing the rules or understanding the harm they could cause their followers.

Criminal offense

Furthermore, the rules finfluencers should follow when promoting crypto-assets to U.K. consumers are to become more onerous. The government plans to introduce legislation classifying crypto-assets as restricted mass market investments, subject to similar additional warnings and obligations.

"The FCA and ASA reminded finfluencers that making an unlawful financial promotion is a criminal offense that carries a maximum sentence of two years imprisonment and an unlimited fine," says Kate Dawson, sector lead of capital markets at KPMG's regulatory insight center. "Due to this increased scrutiny, the use of influencers by crypto-businesses is likely to decrease and, in any case, [should be] approached with more caution."

How organizations can best navigate the polarization & politicization of ESG (February 6, 2023)

One of the top three biggest challenges for organizations in 2023 is how to de-politicize environmental, social & governance (ESG) issues, says the Managing Director of ESG & Sustainability at Grant Thornton, adding that the challenge is being felt across industries and sectors, and the legal industry is not immune.

In fact, law firm leaders recently have described the delicate dance required when the firm is representing clients that could be on opposing sides of an issue in matters that have nothing to do with each other. The new term for these situations is corporate political responsibility, which describes an organization taking a stance on current, contested issues.

The former executive director of the Law Firm Sustainability Network and Partner at Vorgate Legal ESG Impact says she sees the pressure that law firms and other legal organizations are facing around corporate political responsibility increasing in 2023, citing the responses by many law firms in early 2022 to the Russian invasion of Ukraine as evidence that this pressure is already building.


One head of sustainability at a law firm commented on how some of its largest clients are asking the firm to agree with their public statements on an issue.


Another aspect of this is the pressure that some law firms are getting from their own clients, which may prefer that their suppliers be on the same side of an issue. One head of sustainability at a law firm commented on how some of its largest clients are asking the firm to agree with their public statements on an issue.

Still another difficult element is how to navigate potentially opposing views between employee groups and clients. Activism among stakeholder groups is increasing; for example, a law firm might have a group of employees passionately advocating for the firm to reduce its carbon footprint while it's also representing a company in the fossil fuel industry in a transactional matter.

Whatever the specific circumstances, pressure from stakeholders on alignment in values is increasing and unlikely to change direction any time soon. Organizations need to be smart in how they respond, and many organizations are using these best-practice tactics to do so.

Best ways to navigate politicization of ESG issues

Reframe ESG as part of business efficiency – Make the case that ESG boils down to smart business. In many respects, depoliticizing ESG is an expanded approach that seeks to widen the lens of enterprise risk and opportunities.

In 2019, the , an association of chief executive officers (CEOs) of America's leading companies, re-defined the purpose of a corporation in a public statement signed by 181 CEOs, saying that corporate leaders should be committed to operating their companies in a way that benefits all stakeholders, including customers, employees, suppliers, local communities, and shareholders.

This shift from shareholder value to stakeholder capitalism was important because the statement expanded a corporation's purpose beyond shareholder primacy, a directive that had been around since the 1970s.

Today, stakeholder capitalism is analogous to ESG in many ways. Indeed, ESG or sustainability might be the name du jour, but essentially, business measures of performance, success, efficiency, and effectiveness across stakeholder groups are well established and pretty consistent whether a company has a formal ESG strategy or not.

Employ holistic stakeholder listening and align responses to corporate values – Understand that thorny issues will continue to arise and may become even more precarious to navigate. The best way forward is through authentic listening with individual stakeholders.

"The guidance I give is to relate these issues on how it is going to affect someone and what the corresponding action plan is," Joshi says. "If your client happens to be on the opposite side of an issue, they can respect that you were following up directly to the stakeholder group that communicated that the issue was important."




For example, one law firm leader recently shared how their firm used this tactic in the aftermath of the Dobbs decision, which eliminated the federal constitutional right to abortion. The firm was immediately pushed by its younger employees to take a public stance. And while the firm did not take a public stand on Dobbs, it did respond with authenticity to feedback from one of its key stakeholder groups, its younger talent, in myriad ways.

Had a client expressed dissatisfaction with the firm's actions, the firm could then point to how it is aligning with its values as an employer that relies heavily on high-quality talent. In that role, the firm seeks to provide a culture of care that actively listens to its employees. While a particular client may not agree with the firm's actions, it can respect the fact that the law firm listened and responded as part of its commitment to supporting employees and continuing to attract and retain key talent.

Drill down into the details – Polarization around ESG is real, and in many cases it is blowback against progress. Examining a specific issue within ESG, such as greenhouse gas reduction or corporate governance, can be an effective way through the murkiness. Indeed, there is general acceptance of climate risk and of the need for diversity to produce better business performance.

"Demagoguing is happening," says , CEO of Benchmark ESG. "But when you look at the individual elements in [ESG], it's very hard to actually find reasons why you should not support it."
