Human Capital Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/human-capital/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers. Mon, 13 Apr 2026 20:45:52 +0000

Human layer of AI: How to build human-centered AI safety to mitigate harm and misuse /en-us/posts/human-rights-crimes/human-layer-of-ai-building-safety/ Mon, 09 Mar 2026 17:33:34 +0000 https://blogs.thomsonreuters.com/en-us/?p=69789

Key highlights:

      • Map risks before building — Distinguish between foreseeable harms that may be embedded in your product's design and potential misuse by bad actors.

      • Safety processes need real authority — An AI safety framework is only credible if it has the power to delay launches, halt deployments, or mandate redesigns when human rights risks outweigh business incentives.

      • Triggers enable proactive intervention — Define clear, automatic review triggers such as product updates, geographic expansion, or emerging patterns in user reports to ensure your safety processes adapt as risks evolve rather than reacting after harm occurs.


In recent months, the human cost of AI has become impossible to ignore. Users have suffered serious harm after interacting with AI chatbots, while generative AI (GenAI) tools have been weaponized to create images that digitally undress women and children. These tragedies underscore that the gap between stated values around AI and actual safeguards remains wide, despite major tech companies publishing responsible AI principles.

Richard-Carvajal, a senior associate who works at the intersection of technology and human rights, argues that closing this gap requires companies to: i) systematically assess both foreseeable harms from intended AI use and plausible misuse by bad actors; and ii) build safety processes powerful enough to actually stop launches when risks to people outweigh commercial incentives.

Detailing the two-step framework for anticipating and addressing AI risks

To build effective AI safety processes, companies must first understand what they’re protecting against, then establish credible mechanisms to act on that knowledge.

Step 1: Mapping foreseeable harms and intentional misuse

When mapping AI risks during "responsible foresight workshops" with clients, Richard-Carvajal says she takes them through a process that identifies:

    • foreseeable harms that emerge from a product's design itself. For example, algorithm-driven recommender systems — which often are used by social media platforms to keep users on the site — are designed to drive engagement through personalized content, and are well-documented in amplifying sensationalist, polarizing, and emotionally harmful content, according to Richard-Carvajal.
    • intentional misuse that involves bad actors who may weaponize technology beyond its purpose. Richard-Carvajal points to the example of Bluetooth tracking devices, which initially were designed to help people find lost items, but were quickly exploited by stalkers, who placed them in victims’ handbags in order to track their movements and in some cases, to follow them home.

Tactically, Richard-Carvajal and her colleagues role-play "bad actor personas" to help clients imagine misuse scenarios, ensuring companies anticipate harm before it occurs rather than responding after people have been hurt.

Step 2: Building a credible AI safety process

Once risks are identified, Richard-Carvajal says she advises that companies identify mechanisms to address them. The components of a legitimate AI safety framework mirror the structure of robust human rights due diligence by centering on the risks to people.

Indeed, Richard-Carvajal identifies core components of this framework, which include: i) hazard analysis to anticipate both foreseeable harms and potential misuse; ii) incident response mechanisms that allow users to report problems; and iii) ongoing review protocols that adapt as risks evolve.
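For illustration only, the automatic review triggers described here (product updates, geographic expansion, spikes in user reports) could be encoded as simple, checkable rules. Every name and threshold in this sketch, including the 2x-baseline spike rule, is a hypothetical assumption and not part of any framework Richard-Carvajal describes:

```python
from dataclasses import dataclass

# Illustrative only: express review triggers as simple, checkable conditions.
# All field names and thresholds here are hypothetical.

@dataclass
class DeploymentChange:
    product_updated: bool        # e.g., a new model version shipped
    new_regions: list            # geographic expansion since the last review
    user_reports_last_30d: int   # incident reports received this month
    report_baseline: int         # typical monthly report volume

def review_triggers(change: DeploymentChange) -> list:
    """Return the reasons a safety review should be (re)opened."""
    reasons = []
    if change.product_updated:
        reasons.append("product update")
    if change.new_regions:
        reasons.append("geographic expansion: " + ", ".join(change.new_regions))
    # A spike in user reports (here: more than 2x baseline) suggests an
    # emerging harm pattern worth proactive review.
    if change.user_reports_last_30d > 2 * change.report_baseline:
        reasons.append("spike in user reports")
    return reasons

triggers = review_triggers(DeploymentChange(
    product_updated=False,
    new_regions=["BR"],
    user_reports_last_30d=45,
    report_baseline=20,
))
print(triggers)  # the expansion and the report spike both fire
```

The point of a rule set like this is that reviews open automatically when conditions change, rather than depending on someone remembering to ask.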

Continual evaluation of new emerging risks is needed

As AI capabilities advance and deployment contexts expand, companies must continuously reassess whether their existing safeguards remain adequate against evolving threats to privacy, vulnerable populations, human autonomy, and explainability. Richard-Carvajal discusses each one of these factors in depth.

Privacy — Traditional privacy mitigations, such as removing information that leads to identifying specific individuals, are no longer sufficient as AI systems can now re-identify individuals by linking supposedly anonymized data back to specific people or using synthetic training data that still enables re-identification. The rise of personalized AI — in which sensitive information from emails, calendars, and health data aggregates into comprehensive profiles shared across third-party providers — can create new privacy vulnerabilities.
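The re-identification risk can be made concrete with a toy linkage example: when quasi-identifiers such as ZIP code and birth year survive "anonymization," a simple join against named auxiliary data recovers identities. All records below are fabricated purely for illustration:

```python
# Toy illustration (fabricated records): dropping names does not anonymize
# data when quasi-identifiers survive. Linking on (zip, birth_year) suffices.

anonymized_health = [
    {"zip": "60601", "birth_year": 1984, "diagnosis": "asthma"},
    {"zip": "94107", "birth_year": 1991, "diagnosis": "diabetes"},
]

public_voter_roll = [
    {"name": "A. Rivera", "zip": "60601", "birth_year": 1984},
    {"name": "B. Chen", "zip": "94107", "birth_year": 1991},
]

def reidentify(health_rows, aux_rows):
    """Join 'anonymized' rows to named rows on shared quasi-identifiers."""
    by_quasi = {(r["zip"], r["birth_year"]): r["name"] for r in aux_rows}
    return {
        by_quasi[(h["zip"], h["birth_year"])]: h["diagnosis"]
        for h in health_rows
        if (h["zip"], h["birth_year"]) in by_quasi
    }

linked = reidentify(anonymized_health, public_voter_roll)
print(linked)  # {'A. Rivera': 'asthma', 'B. Chen': 'diabetes'}
```

With only two quasi-identifier columns the join is trivial; real linkage attacks work the same way at scale, which is why removing direct identifiers alone is no longer considered sufficient.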

Children — Companies must apply a heightened risk lens for vulnerable populations, such as children, because young users lack the same capacity as adults to critically assess AI outputs. Indeed, the growing concerns around AI usage and children are warranted because AI-generated deepfakes involving real children are being created without their consent. In fact, Richard-Carvajal says that current guidance calls for specific child rights impact assessments and emphasizes the need to engage children, caregivers, educators, and communities.

Cognitive decay — A growing concern is that too much AI usage can harm human autonomy and contribute to a decline in critical thinking. This occurs when people defer to AI outputs instead of exercising their own judgment, and it has the potential to undermine their human rights in regard to work, education, and informed civic participation.

Meaningful explainability — Companies' commitment to explainability as a core tenet of their responsible AI programs has always been a challenge. As synthetic AI-generated data increasingly trains new models, explainability becomes even more critical because engineers may struggle to trace decision-making through these layered systems. To make explainability meaningful in these contexts, companies must disclose AI limitations and appropriate use contexts, while maintaining human-in-the-loop oversight for consequential decisions. Likewise, testing explanations should require engagement with actual rights holders instead of just relying on internal reviews.

Moving forward safely

While no universal checklist exists for AI safety, the systematic approach itself is non-negotiable. Success means empowering engineers to identify and address human-centered risks early, maintaining ongoing stakeholder engagement, and building safety processes that have genuine authority to delay launches, halt deployments, or mandate redesigns when human rights risks outweigh commercial pressures to ship products.

If your company builds or deploys AI, take action now: Give your engineers and risk teams the authority and resources to identify harms early, keep continuous engagement with affected people and independent stakeholders, and create governance structures that have the power to keep harm from happening.

Indeed, companies need to make sure these steps go beyond simple best practices on paper and make these protective processes operational, measurable, and enforceable before their next product release.


You can find more about human rights considerations around AI in our ongoing Human Layer of AI series here

Human Layer of AI: How to hardwire human rights into the AI product lifecycle /en-us/posts/human-rights-crimes/human-layer-of-ai-hardwire-human-rights/ Tue, 27 Jan 2026 16:50:00 +0000 https://blogs.thomsonreuters.com/en-us/?p=69143

Key highlights:

      • Principles need a repeatable process — Responsible AI commitments become real only when companies systematize human rights due diligence to guide decisions from concept through deployment.

      • Policy and engineering teams should co-own safeguards — Ongoing collaboration between policy and technical teams can help translate ideals like fairness into concrete requirements, risk-based approaches, and other critical decisions.

      • Engage, anticipate, document, and improve continuously — Involving impacted communities, running regular foresight exercises (such as scenario workshops), and building strong documentation and feedback loops make human rights accountability durable, instead of a one-time check-the-box exercise.


More and more companies are adopting responsible AI principles that promise fairness, transparency, and respect for human rights, but these commitments are difficult to put into practice when it comes to writing code and making product decisions.

Faris Natour, a human rights and responsible AI advisor at Article One Advisors, works with companies to help turn human rights commitments into concrete steps that are followed across the AI product lifecycle. He says that the key to bridging the gap between principles and practice is embedding human rights due diligence into the framework that guides product development from concept to deployment.

Operationalizing human rights

Human rights due diligence involves a structured process that begins with immersion in the process of building the product and identifying its potential use cases, whether it is an early concept, prototype, or an existing product. This is followed by an exercise to map the stakeholders who could be impacted by the product, along with the salient human rights risks associated with its use.

From there, the internal teams collectively create a human rights impact assessment, which examines any unintended consequences and potential misuse. They then test existing safeguards in design, development, and how and to whom the product is sold. "Typically, a new product will have many positive use cases," explains Natour. "The purpose of a human rights impact assessment is to find the ways in which the product can be used or misused to cause harm." In Natour's experience, the outcome is rarely a simple go or no-go decision. Instead, the range of decisions often includes options such as go with safeguards or go but be prepared to pull back.

Faris Natour, of Article One Advisors

The use of human rights due diligence in the AI product lifecycle is relatively new (less than a decade old) and as Natour explains, there are five essential actions that can work together as a system:

1. Encourage collaboration between policy and engineering teams

Inside most companies, responsible AI is split between policy teams, which may own the principles, and the engineering teams, which own the systems that bring those principles to life. Working with companies, Natour brings these two functions together through a series of workshops to create structured, ongoing collaboration between human rights and responsible AI experts and the technical teams to better co-develop responsible AI requirements.

In the early stages of the collective teams' work, the challenges of turning principles into practice emerge quickly. For example, the scale of applications and use cases for an AI product can make it difficult to zero in on those uses that pose the greatest risk. Not all products or use cases need to be treated equally, says Natour, and companies should identify those that could potentially cause the most harm. Indeed, these most-harmful uses may involve a "consequential decision" such as in the legal, employment, or criminal justice fields, he says, adding that those products should be selected for deeper due diligence.

2. Consider the principles at each stage of the development process

Broad principles and values, such as fairness and human rights, should be considered at each stage of the lifecycle. For the principle of fairness, for example, teams may assess which communities will use this product and who will be impacted by those use cases. Then, teams should consider whether these communities are represented on the design and development teams working on the product, and if not, they need to develop a plan for ensuring their input.

3. Engage with impacted communities and rightsholders

Natour advocates for companies to actively engage with impacted communities and stakeholders, including those who are potential users or who may be affected by the product's use. This could be the company's own employees, for example, especially if the company is developing productivity tools to use internally in their workplace. Special consideration should be given to vulnerable and marginalized groups whose human rights might be at greatest risk.

External experts, such as Natour and his colleagues, hold focus groups with such stakeholders. The feedback from focus groups can then be used to influence model design, product development, as well as risk mitigation and remediation measures. "In the end, knowing how users and others are impacted by your products usually helps you make a better product," he states.

4. Establish responsible foresight mechanisms

To prevent responsible AI from becoming a one-time check-the-box exercise, Natour says he uses responsible foresight workshops and other mechanisms as a "way to create space for developers to pause, identify, and consider potential risks, and collaborate on risk mitigations."

The workshops use personas and hypothetical scenarios to help teams identify and prioritize risks, then design concrete mitigations with follow-on sessions to review progress. Another approach includes developing simple, structured question sets that push product teams to pause and think about harm. For example, Natour explains how one of his clients includes the question "What would a super villain do with this product?" in order to help product teams identify and safeguard against potential misuse.

5. Create documentation and feedback loops for accountability

As expectations around assurance rise from regulators, customers, and civil society, strong documentation and meaningful, accessible transparency are essential, says Natour. Clear, succinct, and accessible user-facing information about what a model does and does not do, about data privacy, and other key aspects can help users understand "what happens with their data, as well as the capabilities and the limitations of the tool they are using," he adds.

Further, transparency should enable two-way communication, and companies should set up feedback loops to enable continuous improvement in the ways they seek to mitigate potential human rights risks.

The hardwired future

Effectively embedding human rights into the AI product lifecycle starts with a shared governance model between a company's policy and engineering teams. Together they can collectively hardwire human rights into the way AI systems are imagined, built, and brought to market.


You can find more about human rights considerations around AI in our ongoing Human Layer of AI series here

Human rights due diligence and mega sporting events /en-us/posts/human-rights-crimes/mega-sporting-events/ Thu, 22 Jan 2026 11:42:50 +0000 https://blogs.thomsonreuters.com/en-us/?p=69091

Key insights:

      • Effective human rights due diligence — Human rights can be hardwired into procurement by setting standards that include clear documentation thresholds, a code of conduct that bans forced labor and trafficking, a supplier assessment questionnaire, a locally informed worker safeguards addendum, and a risk-based vendor-grading rubric.

      • Procurement should feature enforceable human rights obligations — Further, human rights can be hardwired into commitments, such as requests for proposals, vendor evaluation, and contract clauses.

      • Engaging unions and community groups early can lead to strong execution — Effective implementation relies on early stakeholder structures (unions, community groups, etc.), robust worker grievance mechanisms, and independent interviewers, complemented by AI-driven monitoring and continuous, rapid risk response.


Mega sporting events can have a significant impact on local economies, but they also pose substantial human rights risks, including labor exploitation, forced displacement, and sex trafficking. With the Super Bowl and Winter Olympics coming up next month, and the World Cup in summer, it鈥檚 crucial that organizations, communities, and governments prepare now to mitigate any human rights problems with these events.

As an advisor to host cities on human rights with more than a decade of experience, I have seen firsthand how the right commitments and responsible contracting practices can help mitigate these risks. By prioritizing human rights and adopting robust contracting practices, the cities that host these mega sporting events can ensure a positive legacy that extends beyond the event itself.

This was a recent topic at an event hosted by Thomson Reuters and the International Labour Organization, in which representatives from host cities, civil society organizations, and governments came together to discuss best practices to turn human rights commitments into action during the FIFA World Cup games later this year. As a participant in this event, Henekom shared our approach to translating high-level human rights commitments into context-specific safeguards in order to create the social architecture that aligns organizational practice with community needs.


January is National Human Trafficking Prevention Month in the United States. Check out our Human Rights Crimes resource center to learn how to stop and prevent human trafficking


Centering human rights by using rigorous contracting standards starts with local jurisdictions working with multidisciplinary stakeholders to embed strong and comprehensive policies and protocols at all stages of event planning. In my experience, an all-inclusive approach typically shares five elements:

      1. Clear thresholds in human rights documentation that are designed for speed of business.
      2. Code of conduct with essential ingredients, which include explicit bans on forced labor, trafficking, and other exploitation.
      3. Supplier assessment questionnaire (SAQ) that flags geographic and sector risk, such as temporary labor in food service.
      4. Worker safeguards addendum (WSA) that is built with input from local labor stakeholders whose lived concerns help translate the United Nations Guiding Principles on Business and Human Rights (UNGPs) into local realities.
      5. Risk-based grading rubric for vendors that weights SAQ and WSA responses and turns them into a contracting risk rating.
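At its simplest, the grading rubric in item 5 is a weighted scoring exercise: SAQ and WSA answers are weighted, summed, and bucketed into a contracting risk rating. The sketch below is purely illustrative; the question keys, weights, and thresholds are hypothetical assumptions, not a published rubric:

```python
# Minimal sketch of a risk-based vendor grading rubric: weight SAQ and WSA
# flags into a score, then map the score to a contracting risk rating.
# All question keys, weights, and thresholds are hypothetical.

SAQ_WEIGHTS = {"high_risk_geography": 3, "temp_labor_heavy": 2, "prior_violations": 4}
WSA_WEIGHTS = {"no_grievance_channel": 3, "recruitment_fees_charged": 4}

def vendor_risk_rating(saq: dict, wsa: dict):
    """Sum the weights of flagged answers and bucket them into a rating."""
    score = sum(w for key, w in SAQ_WEIGHTS.items() if saq.get(key))
    score += sum(w for key, w in WSA_WEIGHTS.items() if wsa.get(key))
    if score >= 7:
        rating = "high"    # intensive monitoring, frequent site visits
    elif score >= 3:
        rating = "medium"  # periodic review
    else:
        rating = "low"     # standard reporting cadence
    return score, rating

score, rating = vendor_risk_rating(
    saq={"high_risk_geography": True, "temp_labor_heavy": True},
    wsa={"recruitment_fees_charged": True},
)
print(score, rating)  # 9 high
```

As described later in this piece, the resulting rating can then drive monitoring intensity, frequency of site visits, and reporting cadence.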

In my experience, implementing these policies and tools deeply within the organization means embedding requirements at three critical junctures: i) request for proposals (RFPs); ii) vendor evaluation as part of the selection process; and iii) contract clauses. First, when subject-matter experts draft RFPs, the workflow should force-check human rights and sustainability language (or auto-insert standard clauses). Second, during vendor evaluation, the human rights team grades each SAQ/WSA and assigns a risk-based score. Third, contracts must lock in enforceability with particular emphasis on audit rights, corrective action plans, termination for cause, access to remedy, and accountability mechanisms, such as payment withholding.
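The RFP "force-check" step could be sketched as a simple lint over draft text: verify that required human rights language is present and auto-append a standard clause when it is missing. The required phrases and clause text below are placeholders, not real contract language:

```python
# Sketch of an RFP force-check: confirm required human rights language is in a
# draft, and auto-insert a standard clause when it is missing.
# Phrases and clause text are illustrative placeholders only.

REQUIRED_PHRASES = ["forced labor", "human trafficking", "grievance mechanism"]
STANDARD_CLAUSE = (
    "Vendor shall prohibit forced labor and human trafficking in its operations "
    "and supply chain, and shall maintain an accessible grievance mechanism."
)

def check_rfp(draft: str) -> str:
    """Append the standard clause if any required phrase is absent."""
    missing = [p for p in REQUIRED_PHRASES if p not in draft.lower()]
    if missing:
        return draft.rstrip() + "\n\n" + STANDARD_CLAUSE
    return draft

draft = "Vendor will provide food service staffing for the event."
checked = check_rfp(draft)
print(STANDARD_CLAUSE in checked)  # True
```

In practice this kind of check would live in the procurement workflow tool, so that no RFP leaves the drafting stage without the required language.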

Vendor contract agreements between the host cities and primary contractors are the best vehicle to incorporate enforcement of these rights. Likewise, provisions for these rights should also be incorporated into contracts between primary contractors and any subcontractors.


Centering human rights by using rigorous contracting standards starts with local jurisdictions working with multidisciplinary stakeholders to embed strong and comprehensive policies and protocols at all stages of event planning.


Temporary labor at mega sporting events — which includes individuals working in private security, souvenir sales, construction, janitorial, and food service roles — adds complexity but does not have to stifle efforts to honor decent work and other human rights. With a solid sourcing policy, vendors get practical tools and technical assistance to implement requirements quickly.

Common examples include building a checks-and-balances loop with worker centers to receive complaints, and data reporting to track hours, wages, recruitment fees, and grievance outcomes. The risk-based grading rubric for vendors ideally determines the monitoring intensity, frequency of site visits, and reporting cadence.

Effective approaches for implementation

Beyond contract language, I recommend the following three actions and tools to help instill accountability in human rights commitments:

Working with stakeholders from day one — To effectively safeguard human rights, it's crucial to establish standing stakeholder structures, such as advisory councils and labor roundtables, in order to co-create standards and monitor progress with unions and community groups. By doing so, organizations can ensure workers' voices are heard, issues are escalated, and commitments are translated into tangible results through collective action and remediation advice.

Centering workers and ensuring access to grievance mechanisms — Establishing on-site, back-of-house centers for workers with confidential and multilingual intake processes, along with clear resolution pathways, is an effective way to drive accountability and reinforce human rights commitments. Using trained, independent worker interviewers with unannounced access to ensure compliance across venues, shifts, and subcontractor tiers further adds to this accountability.

Together, these approaches provide a means for workers to report concerns, verify compliance with policy requirements, and ensure that human rights are respected throughout the supply chain.

Using AI to fortify accountability — AI offers powerful tools for detecting and preventing labor exploitation in supply chains through automated monitoring and pattern recognition. Likewise, natural language processing may be able to analyze hotline transcripts and grievance logs to identify trends.
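A real deployment of the trend-spotting idea would use proper natural language processing, but even simple keyword counting over grievance logs can surface recurring issues worth investigating. The issue categories, keywords, and example log entries below are all fabricated for illustration:

```python
from collections import Counter

# Illustrative sketch only: count keyword hits per issue category across
# grievance log entries. Categories, keywords, and logs are fabricated.

ISSUE_KEYWORDS = {
    "wages": ["unpaid", "wage", "overtime pay"],
    "fees": ["recruitment fee", "deduction"],
    "safety": ["injury", "heat", "unsafe"],
}

def issue_trends(grievances):
    """Count how many grievances touch each issue category."""
    counts = Counter()
    for text in grievances:
        lowered = text.lower()
        for issue, keywords in ISSUE_KEYWORDS.items():
            if any(kw in lowered for kw in keywords):
                counts[issue] += 1
    return counts

logs = [
    "Overtime pay missing for last two weeks",
    "Charged a recruitment fee before my first shift",
    "Wage deduction I never agreed to",
]
trends = issue_trends(logs)
print(trends)
```

Tracked over time and across venues, counts like these can flag when one issue category (say, recruitment fees at a particular subcontractor) is spiking and needs a site visit.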

Even with the best policies and accountability tools, however, risks still persist because operating and business conditions are dynamic. New suppliers are added late, or a hot day turns into potentially harmful working conditions. This makes human rights due diligence a continuous requirement with ongoing risk monitoring, fast incident response, and a humble posture to make it right quickly, transparently, and fairly.

If host cities want a legacy that lasts beyond the mega sporting events' closing ceremony, it is critical to ensure that the people who made the spectacle possible were seen, protected, paid, and heard. Doing the right thing is strategy — contracts and worker-centered approaches are how it shows up on the ground.


You can find out more about how organizations are trying to fight against human rights crimes here

Human Layer of AI: The crosswinds of AI, sustainability, and human rights enter the mainstream in 2026 /en-us/posts/sustainability/human-rights-enter-the-mainstream/ Thu, 08 Jan 2026 16:40:46 +0000 https://blogs.thomsonreuters.com/en-us/?p=68962

Key takeaways:

      • Clean energy takes center stage in corporate AI initiatives — Access to cheap, low-carbon power will become a core driver of AI competitiveness, especially in the US, where electricity costs are on the rise.

      • Corporate buyers of AI will exert new leverage over suppliers — Corporate buyers will increasingly use their purchasing power to push data center operators to align AI build-outs with local climate, water, and community expectations — not just to supply more metrics.

      • AI's human labor layer enters mainstream due diligence — AI labor supply chains will be brought into the mainstream supply chain and require human rights due diligence.


As we enter 2026, there are three main themes that many corporations will need to manage around issues of renewable energy, AI supplier behavior, and labor.

Theme 1: Renewables move to the center of corporate AI strategies

In 2026, AI competitiveness and energy policy will be tightly fused. With AI workloads driving up electricity demand amid datacenter buildouts, particularly in the United States, access to renewable energy sources in the form of abundant, cheap, low-carbon power becomes a decisive factor in AI pricing and availability. Countries and companies that lock in this advantage early will shape AI deployment patterns for the rest of the decade.

"The economics of renewable energy are what is causing it to accelerate, even in the US," says Friedman, an expert in sustainability and business. "Despite the political winds, the fact is that wind and solar are growing faster because it is cheaper, better energy."

In addition, countries and firms with large, subsidized renewable energy capabilities and flexible grids, such as China's massive solar, wind, and hydro infrastructure, will have a low-cost advantage. (However, countries' push for AI may counteract this by prompting governments to prioritize domestic AI stacks over purely cost-optimized ones.) Yet, combining this energy asset with China's fast-improving homegrown models, such as Kimi K2 and DeepSeek, it is not outside the realm of possibility that the country could emerge in the top spot in AI development and innovation.

Corporate pressure to increase AI adoption for efficiency, combined with stakeholder expectations of investing in a low-carbon future, will make renewables the center of corporate AI strategies. Increasingly, companies will be asked where their compute runs, what energy mix powers it, how cost-effective that energy mix is, and whether companies are effectively endorsing environmentally and socially harmful projects in host communities.

Theme 2: Local backlash forces suppliers and companies to confront AI’s impact

Over the last few years, big names among AI infrastructure providers have tried to take advantage of the AI revolution, investing heavily in AI-related data centers, cloud systems, and other infrastructure with no end in sight over the next few years.

Despite the demand, local communities in which large data center construction projects are planned are pushing back. By one estimate, $64 billion of data center projects in the US have been blocked or delayed amid local opposition since 2025. This opposition comes in part because of concerns regarding rising electricity costs, strains on local water and natural resources, and the reduction of working farmland from data center rezoning attempts in rural communities.

In fact, AI data centers are pushing up electricity demand and fueling higher electricity prices for many US households. And, as retail electricity price increases over the next couple of years are likely to continue, it will be in part because of data centers consuming more electricity.

As a result, the demand from stakeholders — in particular, those from local communities, including local and state politicians — for increased transparency on the environmental and social impacts of corporate AI services is likely to surge. In turn, corporate buyers of AI services will put pressure on the big AI service suppliers to provide more precision in the locations of such data systems as well as disclose more associated sustainability data, such as energy sources, grid impacts, and their level of community engagement where large AI infrastructure is based.

To deal with these competing priorities, boards of companies using AI services will need to reconcile AI cost鈥慶utting with their transition commitments by ensuring that cost advantages are not built on externalizing environmental and social harms.

Not surprisingly, in 2026, more boards will be drawn into explicit debates about whether AI鈥慸riven cost savings justify exposure to higher community, political, and regulatory risk. This turns questions about data center locations and power contracts into mainstream agenda items.

Theme 3: The human layer of AI emerges as a centerpiece of the supply chain

The idea that AI is automating everything will sit uncomfortably alongside a growing recognition that large-scale AI depends on a largely invisible workforce. Across the full AI life cycle of products — some of which rely on models that utilize labor in data collection, curation, annotation, labeling, evaluation, and content moderation — there are thousands of workers performing the tasks that make models safe, accurate, and usable.

As AI systems scale across sectors, demand for this human labor increases in volume and complexity, according to a human rights expert at Article One Advisors. Indeed, much of it remains outsourced, precarious, or gig-based (often in the Global South), with low pay, weak protections, and exposure to psychologically harmful content rampant. Civil society, unions, and regulators are beginning to connect AI innovation with labor rights and occupational health; and this reality makes the human layer of AI a frontline human rights issue rather than a technical detail.

Scrutiny of AI-related labor is likely to move from a niche concern to a mainstream pillar of corporate human rights due diligence. Companies will be under pressure to know what subcontractors and suppliers are doing to ensure human rights for individuals doing AI data enrichment and moderation work, under what conditions, and through which intermediaries.

Following the evolution of how conflict minerals or modern slavery have been integrated into supplier management, a shared view of AI labor supply chains by corporate procurement, legal, product management, and sustainability teams will materialize.

Forward into 2026

As AI becomes embedded in the infrastructure of daily life, companies will face mounting pressure to demonstrate that their AI strategies align with human rights and environmental commitments, not just efficiency gains. The convergence of these three themes signals that transparency in AI governance in 2026 will be inseparable from broader corporate governance and responsibility. And those organizations that treat these themes as compliance checkboxes rather than fundamental design principles will risk both reputational damage and operational disruption in an increasingly scrutinized landscape.

"Companies that fear the exaggerated risk of attracting the ire of activists are underestimating the greater risk of losing the goodwill of customers, investors, and employees that they need," Friedman adds.


You can find out more about how companies are managing issues of sustainability here

Human Layer of AI: Protecting human rights in AI data enrichment work /en-us/posts/human-rights-crimes/ai-protecting-human-rights/ Fri, 19 Dec 2025 15:43:10 +0000 https://blogs.thomsonreuters.com/en-us/?p=68877

Key highlights:

      • Human rights risks are elevated for data enrichment workers — Data enrichment workers can face low and unstable pay, overtime pressure driven by buyer timelines, harmful content exposure with weak safeguards, limited grievance access, and uneven legal protections that hinder workers' collective voice.

      • Human rights due diligence is essential for companies — Companies as buyers of these services must map subcontracting tiers, assess risk by employment model, document worker protections down to Tier-2 and Tier-3 suppliers, and audit and monitor their own rates, timelines, and payment terms to avoid reinforcing harm to workers.

      • Responsible contracting and remedy are a necessity — Contracts should embed shared responsibility and include fair rates, predictable volumes, realistic deadlines, funded health and safety and mental-health supports, effective grievance channels, and remediation.


Demand for data enrichment work has surged dramatically with the rapid development and expansion of AI technology. This work encompasses collecting, curating, annotating, and labeling data, as well as providing model training and evaluation 鈥 all of which are critical activities that improve how data functions in technological systems.

However, the workers performing these tasks currently operate under different employment models, according to Lloyd of Article One Advisors, a corporate human rights advisory firm. Some workers are in-house employees at major AI developers, others work for business process outsourcing (BPO) companies, and many are independent contractors on gig platforms on which they bid for tasks and get paid per piece.

Human rights issues in data enrichment work

Data enrichment workers sit at the sharp end of the AI economy, yet many struggle to earn a stable, decent income. In particular, pay for gig workers often falls short of a living wage because tasks are sporadic, payments can be delayed, and compensation is frequently piece-rate. Because work flows through layers of subcontractors, fees and margins get skimmed at each layer and shrink take-home pay – another area of exploitation for today's digital labor workforce.

In addition, another human rights issue concerns workers' right to rest, leisure, and family life: overtime pressure can, in some places, even breach guidance from the International Labour Organization (ILO) or local labor laws. Buyer purchasing practices with aggressive deadlines are a significant upstream driver of this overtime pressure.


National labor protections vary widely, and platform workers in particular often fall through regulatory gaps.


For many, the work itself carries health risks. Labeling and moderation can require repeated exposure to violent or graphic content, with well-documented mental-health impacts. Yet safeguards are uneven. Indeed, workers may lack protected breaks, task rotation, mental-health support, adequate insurance, or the option to switch assignments. Even when content is not graphic, strain shows up as ergonomic problems, stress, and disrupted sleep.

When harm occurs, remedy can be hard to access. Platform-based work setups often provide no clear, trusted point of contact, and reports of retaliation deter complaints. Effective operational grievance mechanisms can be missing, and this leaves workers without credible paths to redress.

Finally, national labor protections vary widely, and platform workers in particular often fall through regulatory gaps. Because work is individualized and online, forming unions or works councils is harder. This weakens workers' collective voice just where and when it is most needed to identify risks, negotiate improvements, and secure remedies.

Due diligence for companies buying data enrichment services is essential

When companies procure data enrichment services, they must recognize that respecting human rights extends throughout the entire value chain, not just to themselves and their direct suppliers. By creating trusted partnerships with their suppliers, companies can identify issues before they become harmful and build mutual accountability for the humans behind the algorithms.

Article One Advisors' Lloyd explains that the mandatory baseline starts with human rights due diligence, which spans areas such as:

      • Risk identification and assessment – The first step for companies is to identify and assess risks by understanding their suppliers' employment models. This means knowing which groups of workers are full-time employees, contracted workers, or platform-based gig workers. Each model carries a different risk profile.
      • Subcontractor ecosystem mapping – Tracing the subcontracting chain to see how many layers exist between the supplier and the workers is essential. Fees and pressures compound at each tier of the value chain, says Lloyd.
      • Documentation of worker protections in Tier 2 and Tier 3 suppliers – Assessing and promoting worker protections at every layer of the value chain is a baseline element of human rights due diligence. This includes making sure that wage structures are clearly defined and equitable, that health and safety measures are adequate, and that protections against harmful content exposure and effective grievance mechanisms exist.
      • Examination of a company's own practices – Finally, it is necessary for companies to ensure that their own procurement standards and contracts are not reinforcing human rights harms. This includes companies confirming that their contract terms, timelines, and payment schedules are not inadvertently forcing suppliers to cut corners.

Responsible contracting and remedy mechanisms

Companies, as buyers of data enrichment services, also must instill shared responsibility for worker outcomes among themselves, BPOs, platforms, and model developers. Comprehensive, clear human rights standards, living-income benchmarks, and shared responsibility are essential elements of good purchasing practices. More specifically, these require fair rates for work, predictable volume expectations, and realistic timelines to ensure suppliers do not push excessive hours. In addition, budgets should include cost-sharing for audits, key risk-management measures (such as mental health support), and occupational health and safety controls.

Smart remediation turns harmful situations into improved conditions by providing back-pay for underpayment, medical and psychosocial care after exposure to harmful content, contract adjustments to remove perverse incentives, and time-bound corrective action plans co-designed with worker input. As a last resort when buyer and supplier need to part ways, a responsible exit is planned with notice, transition support, and no sudden contract termination that strands workers.

Similarly, grievance mechanisms for platform workers – who are often dispersed across geographies, classified as independent contractors, and lacking line managers or union channels – need to be contractually documented. Effective grievance redressal needs to include confidential mechanisms and remediation processes, in-platform dispute tools, independent individuals to investigate complaints, multilingual facilitation, and joint buyer-supplier escalation paths to bridge gaps in labor-law protection and deliver credible remedies at scale, Lloyd notes.

Promoting quality through worker well-being

Protecting data enrichment workers is not only an ethical imperative but also essential for AI quality itself. When workers face excessive hours, inadequate pay, or harmful content exposure without proper support, the resulting stress and burnout directly impact data quality outcomes. Companies must recognize that responsibility for worker well-being and quality data outcomes extends throughout the entire value chain and does not rest with BPO providers alone.


You can find more about the challenges companies and their workers face from forced labor in their supply chain here

The Human Layer of AI: How to build human rights into the AI lifecycle /en-us/posts/sustainability/ai-human-layer-building-rights/ Mon, 24 Nov 2025 16:33:36 +0000 https://blogs.thomsonreuters.com/en-us/?p=68546

Key takeaways:

      • Build due diligence into the process – Make human rights due diligence routine, from the decision to build or buy through deployment, by mapping uses to standards, assessing severity and likelihood, and closing control gaps to prevent costly pullbacks and reputational damage.

      • Identify risks early on – Use practical methods to identify risks early by engaging end users and running responsible foresight workshops and bad-headlines exercises.

      • Use due diligence to build trust – Treat due diligence as an asset, not a compliance box to tick, by using it to de-risk launches, uncover user needs, and build durable trust that accelerates growth and differentiates the product with safety-by-design features that matter to buyers, regulators, and end users.


AI is reshaping how we work, govern, and care for one another. Indeed, individuals are turning to cutting-edge large language models (LLMs) to ask for emotional help and support in grieving and coping during difficult times. “Users are turning to chatbots for therapy, crisis support, and reassurance, and this exposes design choices that now touch the right to information, privacy, and life itself,” says Poynton, co-founder and principal at Article One, a management consulting firm that specializes in human rights and responsible technology use.

These unexpected uses of AI are reframing risk because in these instances, safeguards cannot be an afterthought. Analyzing who might misuse AI alongside determining who will benefit from its use must be built into the design process.

To put this requirement into practice, a human rights lens must be applied across the entire AI lifecycle from the decision to build or buy to deployment and use, to help companies anticipate harms, prioritize safeguards, and earn durable trust without hampering innovation.

Understanding human rights risks in the AI lifecycle

Human rights risks can surface at every phase of the AI lifecycle. In fact, they first emerged in efforts to train frontier LLMs for content moderation functions and are now showing up elsewhere. For example, data enrichment workers who refine training data and data center staff who power these systems are most likely to face labor risks. Often located in lower-income markets with weaker protections, they face low wages, unsafe conditions, and limits on other freedoms.

During the development phase, biased training sets and the probabilistic nature of models can generate misinformation or hallucinations, and these can further undermine rights to health and political participation. Likewise, design choices often can translate into discriminatory outcomes.

Unfortunately, the use of AI-enabled tools also can compound these harms. Powerful models can be misused for fraud or human trafficking. In addition, deeper integration with sensitive data can heighten privacy and security risks.

A surprising field pattern exacerbates the risk: people increasingly use AI for therapy-like support and disclose issues related to emotional crises and self-harm. In particular, this intimacy widens product and policy obligations, which include age-aware safeguards and clear limits on overriding protections.

Why human rights due diligence is urgent

That's why human rights due diligence must start with people, not the enterprise. By embedding human rights due diligence into the lifecycle of AI, development teams can begin to understand the technology and its intended uses, then map those uses to international standards. Next, a cross-functional team gathers to weigh benefits alongside harms and to consider unintended uses. Above all, they need to answer the question: “What happens if this technology gets into the hands of a bad actor?”

From there, the process demands an analysis of severity – assessing the scale, scope, and remediability of each potential harm – along with the likelihood of each use. The final step involves evaluating current controls across supply chains, model design, deployment, and use phases to identify gaps.

The biggest barrier to layering a human rights lens into AI is the need for speed to market. The race to put out minimally viable products, accompanied by competitive pressure, can eclipse robust governance, yet early due diligence may prevent costly pullbacks and bad headlines. Article One's Poynton notes that no one wants to see their product on the front page for enabling stalking or spreading disinformation. Building safeguards early “ensures that when it does launch, it has the trust of its users,” she adds.

How to embed safeguards without slowing teams

The most efficient path to translating human rights into the AI product lifecycle is to turn policy principles, goals, and ambitions into actionable steps for the engineers and the product teams. This requires “engineers to analyze how they do their work differently to ensure these principles live and breathe in AI-enabled products,” Poynton explains. More specifically, this includes:

Identifying unexpected harms – One of the most critical yet difficult components of the human rights impact assessment is brainstorming potential harms. Poynton recommends two ways to make this happen: First, engage with end users to help identify potential harms by asking, “What are some issues that we may not be considering from the perspectives of accessibility, trust, safety, and privacy?” Second, run responsible foresight workshops at which individuals play the parts of bad actors to better identify harms and uncover mitigation strategies quickly. Pair that with a bad-headlines exercise to anticipate front-page failures. Then, ship with these protections in place, pre-launch.

Implementing concrete controls – Embedding safety-by-design should cover both content and contact, a lesson from gaming, in which grooming risks require more than just filters. Build age-aware and self-harm protocols, including parental controls and principled policies on overrides. Govern sales and access with customer vetting, usage restrictions, and clear abuse-response pathways. In the supply chain, set supplier standards for enrichment and data center work that include fair wages, safe conditions, freedom of association, and grievance channels.

Treating due diligence as value-creating, not box-checking – Crucially, frame due diligence as an asset rather than a liability. “Make your product better and ensure that when it does launch, it has the trust of its users,” Poynton adds.

Additional considerations

Addressing equity must be front and center. Responsible strategies include diversifying training sets without exploiting communities and giving buyers clear provenance statements on data scope and limits.

Bridging the digital divide is equally urgent. Bandwidth and device gaps risk amplifying inequality if design and deployment assume privileged contexts. In the workplace, Poynton stresses that these impacts will be compounded, from entry-level to expert roles.

Finally, remember that AI鈥檚 environmental footprint is a human rights issue. “There is a human right to a clean and healthy environment,” Poynton notes, adding that energy and water demands must be measured, reduced, and sited with respect for local communities, even as AI helps accelerate the clean energy transition. This is a proactive mandate.


You can find out more about the ethical issues facing AI use and adoption here

Supply chain risk: New developments in corporate human rights responsibility /en-us/posts/human-rights-crimes/supply-chain-risk/ Thu, 04 Sep 2025 17:03:05 +0000 https://blogs.thomsonreuters.com/en-us/?p=67477

Key insights:

      • An uneven regulatory environment – While some nations (such as Chile, Thailand, and South Korea) are implementing ambitious new laws for corporate human rights and environmental due diligence, the EU's CSDDD is facing significant delays and potential weakening, creating a fragmented regulatory environment.

      • Litigation risk is increasing – Companies are facing growing litigation risk from both greenwashing claims and climate-related human rights cases, which highlights the legal and reputational dangers of failing to meet stakeholder demands.

      • Disclosure risk is an issue – Companies face a dilemma: while proactive compliance and transparent reporting can build trust and avoid disputes, voluntary disclosures can also become legally binding and expose them to liability.


The year 2025 has brought changes to the global landscape of supply chain risk management and corporate responsibility for human rights. Countries such as Chile, South Korea, and Thailand are actively considering, drafting, or introducing ambitious new rules that raise the bar for corporate accountability. At the same time, the European Union's Corporate Sustainability Due Diligence Directive (CSDDD), once seen as a benchmark for responsible supply chain management, has faced delays and significant pushback.

In the meantime, companies face challenges in how they should move forward without harmonization and specificity in the regulatory and legal landscape.

Positive and negative legislative steps

So far this year, several countries have advanced strong new laws to hold corporations accountable for human rights and environmental impacts, says Berry, Senior Policy Associate at a global non-governmental organization that helps communities defend their environmental and human rights.

For example, the National Congress of Chile is considering bills that would require corporations of a certain size to implement and report on due diligence efforts with respect to human rights, the environment, and climate change; and Thailand's Ministry of Justice is drafting a mandatory due diligence law to ensure products are free from exploitative labor and environmental harm, Berry explains.


Advocates in human rights climate cases contend that the adverse effects of climate change undermine fundamental human rights, including the rights to life, health, food, water, and liberty.


South Korea's new legislation mandates comprehensive due diligence, a Victim Support Fund, and robust grievance procedures. “The legislation is progressive because of its broad applicability, even to financial sector actors,” Berry says. “It requires accountability for human rights due diligence at every stage of a supply chain, and it mandates that business enterprises of a certain size are equipped to proactively respond to grievances and facilitate remedy where harm is found.”

This legislation is significant because it would require actors across global supply chains to engage with human rights and environmental abuses regardless of whether impacts are deemed financially material.

Other regions may be moving the other way on corporate accountability, however. Recent developments may have weakened the CSDDD's impact: the first, in early 2025, was the so-called “stop the clock” directive, which delayed implementation by a year. In addition, a proposal was made to raise company size thresholds, meaning that fewer companies would be required to report under the CSDDD if this change takes effect.

The final requirements are unlikely to be defined until early 2026 “because of the lengthy legislative process in the European Union,” says Fromholzer, Partner at Gibson Dunn. “The directive must be negotiated and agreed upon by three EU bodies, which are the European Commission, the European Parliament, and the Council (representing Member States). Each body needs to develop and present its own proposal, and only after all proposals are on the table, which is expected by the end of October 2025, will the trilateral negotiations begin.”

Companies in a tough spot

Litigation risk is growing for corporations as civil society organizations become more active in bringing claims over misleading environmental statements – referred to in sustainability circles as greenwashing – and human rights abuses. Consumers, especially younger generations like Gen Z, increasingly expect higher standards and greater transparency from businesses.

Community participants are also active in bringing climate-related litigation. Advocates in human rights climate cases contend that the adverse effects of climate change undermine fundamental human rights, including the rights to life, health, food, water, and liberty. Indeed, high-profile lawsuits illustrate the expanding global threat of legal action for companies that fail to meet stakeholder expectations.

“There was recently a case in Germany where a Peruvian farmer was trying to get damages from a German utility provider,鈥 Fromholzer explains. The farmer argued that the utility provider鈥檚 greenhouse gas emissions contributed to the melting of glaciers in Peru and that this threatened the farmer鈥檚 hometown with flooding. While the claim was not successful, climate change groups hailed it as a win because the judges stated that energy companies could be held responsible for the costs caused by their carbon emissions.

Today, many corporations find themselves in a difficult position as they navigate mounting risks from both proactive and reactive approaches to sustainability and human rights reporting and compliance. On one hand, adopting proactive compliance strategies and robust grievance mechanisms can help companies avoid costly disputes and build stakeholder trust, but it is not without danger. Public disclosures of this information 鈥 even voluntarily 鈥 can later become legally binding and expose companies to liability.


Companies face challenges in how they should move forward without harmonization and specificity in the regulatory and legal landscape.


Yet, adopting proactive compliance strategies could offer advantages to those companies facing evolving regulatory requirements and social expectations. By implementing robust grievance mechanisms and addressing risks early, businesses can avoid costly litigation, reputational damage, and regulatory penalties.

“Accountability mechanisms and grievance mechanisms aren't scary,” Berry says. “They help to harmonize relationships with these communities… rather than approach these issues defensively [and] litigiously, why not approach them proactively?” Indeed, early action can often future-proof operations and build trust with stakeholders.

At the same time, publishing corporate statements on a voluntary basis without the specifics of final legal requirements in legislation holds risk as well. Fromholzer cautions that in the case of CSDDD, voluntary disclosures made today may become legally binding statements required by the EU's Corporate Sustainability Reporting Directive (CSRD) that could be used in future litigation.

A published statement based on the requirements of the CSRD “is now a legally binding statement which you really must be able to defend at the risk of liability,” Fromholzer says. “It is no longer marketing but now is part of the annual accounts with all the liability attached to it… [companies] are cornered from both sides. Again, one is the greenwashing approach, and the other one is the legally binding nature of the statements they are now forced to make.”

Recommended steps for companies

Either way, implementing robust grievance mechanisms and publishing accurate statements backed by auditable, assured data offers companies a pathway through this complex terrain of risk. To effectively address human rights and environmental risks, Berry suggests considering the expectations outlined for lawmakers advancing human rights and environmental due diligence laws.

To begin, companies should conduct a comprehensive mapping of their entire supply chain, including subsidiaries and business partners, to identify and evaluate potential risks. Then, companies must create and publicly release comprehensive due diligence policies that align with recognized international standards. Finally, companies must implement effective grievance systems that provide accessible, safe, and responsive channels for stakeholders to raise concerns and seek redress. Maintaining ongoing dialogue with affected communities and rights-holders cultivates trust and guarantees their substantive involvement in business decision-making.

Once this implementation phase is complete, companies should regularly monitor their operations and publicly report on both adverse impacts and the effectiveness of remediation efforts. They should also consider assurance by a third party for risk mitigation.

Despite ongoing changes and uncertainties in global legislation, the movement toward greater corporate accountability continues to gain momentum. By aligning their practices and obtaining assurance for corporate reporting, companies can stay ahead of regulatory developments, build trust with stakeholders, and reduce their risk exposure.


You can find out more about how companies are navigating disclosure and reporting rules here

New study reveals Gen Z purchasing power could be a force for ethical labor /en-us/posts/human-rights-crimes/gen-z-purchasing-power/ Fri, 11 Jul 2025 13:53:25 +0000 https://blogs.thomsonreuters.com/en-us/?p=66538

Key insights:

      • Gen Z’s purchasing power 鈥 By 2030, Gen Z will represent 17% of retail spending in the US, significantly influencing industries such as apparel, tea, and coffee to adopt ethical labor practices.

      • Ethical consumerism 鈥 Fully 81% of Gen Z consumers have changed their purchasing decisions based on brand actions or reputation, with 53% participating in economic boycotts.

      • Willingness to pay more 鈥 More than half of Gen Z consumers are willing to pay more for products made without forced labor, despite financial and accessibility constraints.


The United States accounted for more than one-fifth of the world's imports of goods that were at risk of being made with forced labor, according to research published earlier this year. In addition, the U.S. Department of Labor recognizes 478 instances of forced and child labor across different goods and nations, including makers and purveyors of coffee, tea, footwear, and some components of apparel.

With the apparel and footwear industry and the coffee and tea industry both commanding substantial global market values in 2024, any change in demand because of fluctuating economic factors or product attributes, including concerns over the use of modern slavery in companies' supply lines, could impact these industries.

And one important economic factor that could influence more ethical practices is the growing purchasing power of Gen Z individuals (those born between 1995 and 2012). Indeed, by 2030, Gen Zers will represent 17% of retail spending in the US.

Now, research produced by the Dynamic Sustainability Lab in collaboration with the Thomson Reuters Institute indicates that this shift is already underway. A large majority (81%) of Gen Z individuals, who currently comprise about one-quarter of the US population, have changed their decision to buy a product based on brand actions or overall reputation, according to the research. Likewise, 53% stated in March that they have participated, will participate, or are participating in a current economic boycott – the most of any generation in the US.

More evidence suggests this trend is not going away any time soon. In fact, Gen Z is leading the way, showing a stronger preference for sustainable brands (63%) and a higher willingness to pay more (73%) when compared to other generations. The numbers for the apparel industry demonstrate this as well: Gen Z consumers report that more than one-quarter of their wardrobe is second-hand, which is more than double the rate of the general consumer population.

Gen Zers will change their habits to protect workers

In addition, another study from the Dynamic Sustainability Lab that examined Gen Z's purchasing habits related to products made with ethical labor reveals several key insights highlighting the growing power of Gen Z buyers.

For example, Gen Z consumers value purchasing apparel, tea, and coffee produced without forced labor, yet consumers in this group face financial and accessibility constraints when purchasing. Indeed, they rank cost, affordability, and product quality as the top factors influencing their purchasing decisions.

Further, 80% of participants who ranked cost and affordability as the top factor influencing their purchasing decisions also said they are willing to pay more for products with ethical considerations. And when it came to the awareness of modern slavery as a problem in the production of apparel, tea, and coffee, 91% said they were at least somewhat aware.

Specifically, more than 6 out of 10 survey respondents indicated that forced labor was a problem in tea and coffee production, and 8 out of 10 Gen Z consumers indicated that forced labor was an issue for apparel production. In addition, 81% have changed their purchasing decision because of a brand action or decision, while almost 70% said the purchase decision change was entirely or partly because of ethical labor considerations.

Have you changed a purchasing decision because of brand action or reputation?


At the same time, only 43% of Gen Z respondents can name a brand that's using forced labor. This suggests the need for greater transparency of supply chain operations on the part of makers and suppliers of consumer goods.

Recommended actions for companies

Almost all (96%) of Gen Z survey respondents said they believe their generation can drive corporate change through consumer power. Companies can leverage this knowledge by doubling down on increasing transparency and building awareness of their efforts. Some steps companies can take toward that include:

Make detailed policies on ethical sourcing available publicly – Companies should begin by conducting a comprehensive review of their current sourcing practices to identify areas for improvement. Once a thorough understanding is established, they can draft clear and detailed ethical sourcing policies that reflect their commitment to eliminating forced labor and promoting fair practices throughout their supply chain. These policies should then be translated into accessible language and made available on the company website.

Publish an independent audit or conduct a human rights impact assessment – To demonstrate accountability and transparency, companies can commission an independent third-party audit of their supply chain operations. This audit should assess the company's compliance with ethical labor standards and identify any instances of forced labor. The results of the audit should be made publicly available, accompanied by an action plan outlining steps the company will take to address any problems uncovered.

Additionally, conducting a human rights impact assessment will help companies understand the broader social implications of their business practices and identify areas for improvement. This process involves engaging stakeholders, including workers, in order to gather insights and ensure stakeholders鈥 rights are prioritized.

Obtain a forced labor-free certification – Success in pursuing and achieving certification will require companies to undergo rigorous evaluations and demonstrate their commitment to maintaining forced labor-free operations. Companies should initiate the process by aligning their practices with the standards set by recognized certifying bodies. This may involve revising supplier contracts, implementing robust monitoring systems, and providing training for staff and suppliers on ethical labor practices.

Gen Z is emerging as a strong force in driving ethical consumerism, with their increasing purchasing power influencing industries to adopt more transparent and fair labor practices. As they prioritize ethical considerations, Gen Z’s demand for transparency and accountability from brands offers a significant opportunity for companies to align with these values and foster consumer trust.

This will be of increasing importance as current US tariff policies may potentially result in institutional buyers such as retailers and brands having to source from global producers and manufacturers in new and emerging geographies.


About the Dynamic Sustainability Lab

The Dynamic Sustainability Lab is a non-partisan think tank and research organization that examines the opportunities, as well as the risks and unintended consequences, resulting from the adoption of new technologies, strategies, or policies and from our growing dependence on foreign-sourced resources and supply chains used in energy, climate, and sustainability transitions.

Directed by the Pontarelli Professor in the Maxwell School of Citizenship and Public Affairs at Syracuse University, the DSL focuses on providing interdisciplinary scientific approaches that support both governments and businesses through the lens of markets, policies, and national security – what they call Dynamic Sustainability.


You can find out more about the challenges of fighting against forced labor in global supply chains here

AI & human rights: The importance of explainability by design for digital agency /en-us/posts/sustainability/explainability-by-design-digital-agency/ Thu, 15 May 2025 14:56:23 +0000 https://blogs.thomsonreuters.com/en-us/?p=65829

AI systems increasingly shape access to rights, services, and opportunities, which makes the ability to understand, evaluate, and respond to AI-driven decisions a structural requirement for exercising human rights. This condition, called digital agency, ensures that individuals retain autonomy and accountability in environments governed by automated systems.

, a recognized AI governance and data protection expert and Co-Founder of Women in AI Governance, calls for the formal recognition of digital agency as a fundamental human right. Securing digital agency requires embedding explainability into AI systems at the design level, making system outputs understandable, accessible, and actionable. Without digital agency, individuals are exposed to systems that decide without visibility, affect without consent, and deny the possibility of meaningful redress.


Join us for a free online Webinar: World Day Against Trafficking in Persons to learn more about the complexities of human trafficking, the impact on victims, and effective strategies for prevention and intervention.


Today, many AI systems operate without meaningful explanation, creating an explainability gap that prevents individuals from recognizing or responding to the impact AI-driven decisions may have on their lives. This unchecked deployment of opaque AI can systematically displace individual agency, creating environments in which decisions are made without visibility or contest, Rosenberg warns.

Current legal frameworks, including the European Union's AI Act, attempt to mitigate systemic risks through classification and documentation requirements. However, they do not secure operational explainability for individuals affected by AI-driven decisions. Rosenberg argues that recognizing digital agency as a human right is essential to correcting this failure. She advocates embedding explainability into AI systems as a condition for preserving autonomy within increasingly automated governance structures.

Preserving digital agency through explainability

AI governance frameworks often conflate transparency with explainability, although the two concepts serve different functions. Transparency provides limited information about a system's existence or purpose, while explainability ensures that individuals can understand how decisions are made, what influences them, and how they can respond. Most legal frameworks mandate transparency but do not compel explainability, leaving individuals without the means to navigate or challenge AI-driven outcomes.

Embedding explainability by design requires systems to support functional understanding from the outset. Rosenberg defines this threshold as minimum viable explainability: ensuring that AI systems make influencing factors and decision outcomes intelligible enough for individuals to assess, understand, and act upon meaningfully, if necessary. Systems designed without explainability embed opacity as a structural feature, cutting individuals off from seeing how decisions affect them, questioning outcomes, and seeking correction when needed.

Mandating minimum viable explainability ensures that individuals retain agency within AI-mediated environments. Digital agency must serve as the foundation of regulatory frameworks because, without such agency, legal protections remain abstract and unenforceable, Rosenberg explains.

Learning from the history of privacy

The human right to privacy was recognized internationally in 1948, but it did not meaningfully shape digital regulation before systemic harms emerged. AI systems now operate in a similarly underregulated space. Rosenberg argues that AI regulation must be anchored to digital agency, warning that without this foundation, systemic harms will again outpace the regulatory response.

In the area of privacy, for example, the United Nations' Special Rapporteur role helped consolidate regulatory momentum already underway. A Special Rapporteur for AI & Human Rights would be tasked with accelerating global recognition and protections that have yet to fully emerge. Establishing this role requires a UN Human Rights Council resolution that has not been formally proposed, reflecting the delayed global response to technologies already impacting individual rights.

Privacy protections emerged reactively, and digital agency protections must be built proactively to prevent further erosion of autonomy. Recognizing digital agency as a human right is a crucial step to ensure that digital agency protections are established before dependencies erode autonomy beyond repair.

Enshrining digital agency

As AI evolves, protecting human agency becomes imperative. However, recognition must come first: enshrining digital agency as a human right will create the foundation for systemic accountability.

To get there, we need to pursue a three-part strategy that includes:

      1. Recognizing the right to digital agency – Concerned individuals and organizations need to advocate for the establishment of a UN Special Rapporteur for AI Governance and the formal recognition of digital agency as a protected human right. Advocates should also mobilize support from human rights organizations, policymakers, and legal experts to initiate and advance a UN Human Rights Council resolution affirming digital agency as fundamental to autonomy and dignity.
      2. Establishing minimum viable explainability standards – Next, supporters should define standards for AI systems that set clear guidelines for what individuals need to preserve agency. International collaboration is essential to develop these standards and integrate them into certification and compliance processes.
      3. Mandating explainability by design – Requiring that new AI systems embed explainability from the outset, ensuring usability and intelligibility, is a critical step. Regulatory frameworks must ensure that explainability becomes a baseline condition for AI deployment, with voluntary leadership strengthening early adoption.

Today, AI is reshaping the systems that govern individuals, determine rights, and affect autonomy. Protecting digital agency ensures that individuals can understand, navigate, and challenge the decisions that shape their lives. Securing digital agency now is essential to ensuring that technology strengthens human dignity rather than eroding it.


You can find more information here about where current regulations on AI and its impact are headed.

Preserving ethical business: What should corporations do during this period of perceived human rights de-prioritization? /en-us/posts/human-rights-crimes/preserving-ethical-business-human-rights-de-prioritization/ Tue, 29 Apr 2025 14:48:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=65722 In the first quarter of 2025, the administration of new President Donald J. Trump cut US foreign aid by ; and in late February, the Trump administration paused enforcement of the Foreign Corrupt Practices Act (FCPA) for 180 days while the new US Attorney General reviews existing FCPA actions and issues new guidance for .

Both of these moves reinforce the perception of a global rollback in human rights, underscored by the European Union's move to reduce corporate accountability in human rights due diligence.

“Corruption is an enabler of human rights violations, [and] the rollbacks reduce accountability for bribery,” according to human rights experts Wong and Cobb of FTI Consulting. Indeed, a reduction in accountability could embolden companies and potentially increase human rights abuses, they explain.

Risks of relaxing FCPA compliance

Over the years, many multinational companies have invested significantly in developing robust internal compliance programs to adhere to FCPA requirements. Weakening these frameworks could lead companies to divert resources away from maintaining compliance, which could allow bad actors to exploit the reduced scrutiny and result in increased fraud, misconduct, and human rights abuses.


Join us for a free online Webinar: World Day Against Trafficking in Persons to learn more about the complexities of human trafficking, the impact on victims, and effective strategies for prevention and intervention


“While these rollbacks in the US may indicate a temporary decrease in regulatory pressure, it is essential for companies to recognize that global regulatory trends are moving towards greater corporate accountability,” not less, say Wong and Cobb. US companies operating internationally must adhere to these emerging standards, and the pause on domestic FCPA enforcement does not eliminate companies' legal and reputational risks.

Wong and Cobb point out that FCPA enforcement has historically been cyclical, and companies reducing compliance efforts now might find themselves unprepared when enforcement resumes. Indeed, the statute of limitations for FCPA violations is five years for anti-bribery offenses and six years for accounting violations.

Recommendations for companies to navigate uncertainty

As businesses face a shifting regulatory landscape, navigating the path forward requires both immediate action and strategic foresight. The following guidance from Wong and Cobb offers a framework for maintaining ethical business practices and stakeholder confidence while adapting to evolving global standards.

In the short term, for instance, companies must adopt proactive strategies to prepare for the shifting landscape created by these rollbacks, including:

      • Monitoring global regulatory trends – Companies should actively track global regulatory developments to stay ahead of compliance requirements, even if these do not originate from the United States.
      • Engaging with stakeholders – It is crucial to maintain open communication with investors and stakeholders regarding ongoing anti-corruption and human rights commitments. This engagement ensures transparency and reinforces the company's dedication to ethical practices.

In addition, companies should reaffirm that they maintain a zero-tolerance policy for bribery and corruption. Companies should also keep open anonymous hotlines for reporting potential ethics violations, in order to prevent the erosion of a culture of ethics that often takes years of effort to build. Likewise, companies need to continue monitoring their third-party vendors, consultants, and suppliers: over the past decade, about 90% of FCPA enforcement resolutions have involved third-party representatives or consultants engaged in corruption.

Meanwhile, Cobb and Wong also suggest that companies focus on aligning with international standards and best practices; adhering to well-recognized international frameworks is crucial to remaining competitive. For example, the UN's Guiding Principles on Business and Human Rights offer a flexible approach to maintaining ethical practices around human rights, according to Wong. Likewise, Cobb suggests that companies voluntarily embrace the EU's Corporate Sustainability Due Diligence Directive and its Corporate Sustainability Reporting Directive, once the amendments are finalized, as robust options for compliance reporting.

Regardless of these rollbacks, the overarching recommendation is for companies to maintain robust corporate compliance and human rights risk management programs. This proactive approach not only prepares companies for potential regulatory changes but also positions them as leaders in ethical business practices on the global stage.

By continuing to prioritize compliance and human rights, companies can navigate the evolving regulatory landscape effectively, ensuring long-term business success and sustainability.


You can find more information on how organizations are managing their regulatory obligations here.
