Human layer of AI: How to build human-centered AI safety to mitigate harm and misuse

Key highlights:

      • Map risks before building – Distinguish between foreseeable harms that may be embedded in your product’s design and potential misuse by bad actors.

      • Safety processes need real authority – An AI safety framework is only credible if it has the power to delay launches, halt deployments, or mandate redesigns when human rights risks outweigh business incentives.

      • Triggers enable proactive intervention – Define clear, automatic review triggers such as product updates, geographic expansion, or emerging patterns in user reports to ensure your safety processes adapt as risks evolve rather than reacting after harm occurs.


In recent months, the human cost of AI has become impossible to ignore. People have been harmed after interacting with AI chatbots, while generative AI (GenAI) tools have been weaponized to create deepfake images that digitally undress women and children. These tragedies underscore that the gap between stated values around AI and actual safeguards remains wide, despite major tech companies publishing responsible AI principles.

Richard-Carvajal, a senior associate who works at the intersection of technology and human rights, argues that closing this gap requires companies to: i) systematically assess both foreseeable harms from intended AI use and plausible misuse by bad actors; and ii) build safety processes powerful enough to actually stop launches when risks to people outweigh commercial incentives.

Detailing the two-step framework for anticipating and addressing AI risks

To build effective AI safety processes, companies must first understand what they’re protecting against, then establish credible mechanisms to act on that knowledge.

Step 1: Mapping foreseeable harms and intentional misuse

When mapping AI risks during “responsible foresight workshops” with clients, Richard-Carvajal says she takes them through a process that identifies:

    • foreseeable harms that emerge from a product’s design itself. For example, algorithm-driven recommender systems, which often are used by social media platforms to keep users on the site, are designed to drive engagement through personalized content and are well-documented in amplifying sensationalist, polarizing, and emotionally harmful content, according to Richard-Carvajal.
    • intentional misuse that involves bad actors who may weaponize technology beyond its purpose. Richard-Carvajal points to the example of Bluetooth tracking devices, which initially were designed to help people find lost items but were quickly exploited by stalkers who placed them in victims’ handbags to track their movements and, in some cases, to follow them home.

Tactically, Richard-Carvajal and her colleagues role-play “bad actor personas” to help clients imagine misuse scenarios and to ensure companies anticipate harm before it occurs rather than responding after people have been hurt.

Step 2: Building a credible AI safety process

Once risks are identified, Richard-Carvajal says she advises companies to establish mechanisms to address them. The components of a legitimate AI safety framework mirror the structure of robust human rights due diligence by centering on the risks to people.

Indeed, Richard-Carvajal identifies core components of this framework, which include: i) hazard analysis to anticipate both foreseeable harms and potential misuse; ii) incident response mechanisms that allow users to report problems; and iii) ongoing review protocols that adapt as risks evolve.
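To make the shape of such a framework concrete, here is a minimal, purely illustrative sketch in Python; the class names, trigger list, and review logic are assumptions made for illustration rather than anything Richard-Carvajal prescribes.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ReviewTrigger(Enum):
    """Events that automatically reopen a safety review (illustrative list)."""
    PRODUCT_UPDATE = auto()
    GEOGRAPHIC_EXPANSION = auto()
    USER_REPORT_PATTERN = auto()


@dataclass
class SafetyCase:
    """Tracks the three framework components for one AI feature."""
    feature: str
    hazards: list = field(default_factory=list)           # i) hazard analysis output
    incident_reports: list = field(default_factory=list)  # ii) incident response intake
    open_triggers: set = field(default_factory=set)       # iii) ongoing review protocol

    def record_trigger(self, trigger: ReviewTrigger) -> None:
        self.open_triggers.add(trigger)

    def needs_review(self) -> bool:
        # Any open trigger means existing safeguards must be reassessed before work proceeds
        return bool(self.open_triggers)


case = SafetyCase(feature="recommender-update", hazards=["amplification of harmful content"])
case.record_trigger(ReviewTrigger.GEOGRAPHIC_EXPANSION)
print(case.needs_review())  # True -> schedule a human rights reassessment
```

In practice, the point is less the data structure than the authority behind it: a trigger firing should be able to pause a launch, as described above.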

Continual evaluation of new emerging risks is needed

As AI capabilities advance and deployment contexts expand, companies must continuously reassess whether their existing safeguards remain adequate against evolving risks involving privacy, vulnerable populations, human autonomy, and explainability. Richard-Carvajal discusses each of these factors in depth.

Privacy – Traditional privacy mitigations, such as removing information that identifies specific individuals, are no longer sufficient, as AI systems can now re-identify individuals by linking supposedly anonymized data back to specific people or by using synthetic training data that still enables re-identification. The rise of personalized AI, in which sensitive information from emails, calendars, and health data aggregates into comprehensive profiles shared across third-party providers, can create new privacy vulnerabilities.

Children – Companies must apply a heightened risk lens for vulnerable populations, such as children, because young users lack the same capacity as adults to critically assess AI outputs. Indeed, the growing concerns around AI usage and children are warranted because AI-generated deepfakes involving real children are being created without their consent. In fact, Richard-Carvajal says that current guidance calls for specific child rights impact assessments and emphasizes the need to engage children, caregivers, educators, and communities.

Cognitive decay – A growing concern is that excessive AI usage can harm human autonomy and contribute to a decline in critical thinking. This occurs when people over-rely on AI outputs rather than exercising their own judgment, and it has the potential to undermine their human rights in regard to work, education, and informed civic participation.

Meaningful explainability – Companies’ commitment to explainability as a core tenet of their responsible AI programs has always been a challenge. As synthetic AI-generated data increasingly trains new models, explainability becomes even more critical because engineers may struggle to trace decision-making through these layered systems. To make explainability meaningful in these contexts, companies must disclose AI limitations and appropriate use contexts, while maintaining human-in-the-loop oversight for consequential decisions. Likewise, testing explanations should require engagement with actual rights holders instead of relying solely on internal reviews.

Moving forward safely

While no universal checklist exists for AI safety, the systematic approach itself is non-negotiable. Success means empowering engineers to identify and address human-centered risks early, maintaining ongoing stakeholder engagement, and building safety processes that have genuine authority to delay launches, halt deployments, or mandate redesigns when human rights risks outweigh commercial pressures to ship products.

If your company builds or deploys AI, take action now: Give your engineers and risk teams the authority and resources to identify harms early, maintain continuous engagement with affected people and independent stakeholders, and create governance structures that have the power to keep harm from happening.

Indeed, companies need to make sure these steps go beyond best practices on paper by making these protective processes operational, measurable, and enforceable before their next product release.


You can find more about human rights considerations around AI in our ongoing Human Layer of AI series here

Human Layer of AI: How to hardwire human rights into the AI product lifecycle

Key highlights:

      • Principles need a repeatable process – Responsible AI commitments become real only when companies systematize human rights due diligence to guide decisions from concept through deployment.

      • Policy and engineering teams should co-own safeguards – Ongoing collaboration between policy and technical teams can help translate ideals like fairness into concrete requirements, risk-based approaches, and other critical decisions.

      • Engage, anticipate, document, and improve continuously – Involving impacted communities, running regular foresight exercises (such as scenario workshops), and building strong documentation and feedback loops make human rights accountability durable, instead of a one-time check-the-box exercise.


More and more companies are adopting responsible AI principles that promise fairness, transparency, and respect for human rights, but these commitments are difficult to put into practice when it comes to writing code and making product decisions.

Faris Natour, a human rights and responsible AI advisor at Article One Advisors, works with companies to help turn human rights commitments into concrete steps that are followed across the AI product lifecycle. He says that the key to bridging the gap between principles and practice is embedding human rights due diligence into the framework that guides product development from concept to deployment.

Operationalizing human rights

Human rights due diligence is a structured process that begins with immersion in how the product is being built and identification of its potential use cases, whether it is an early concept, a prototype, or an existing product. This is followed by an exercise to map the stakeholders who could be impacted by the product, along with the salient human rights risks associated with its use.

From there, the internal teams collectively create a human rights impact assessment, which examines any unintended consequences and potential misuse. They then test existing safeguards in design, development, and how and to whom the product is sold. “Typically, a new product will have many positive use cases,” explains Natour. “The purpose of a human rights impact assessment is to find the ways in which the product can be used or misused to cause harm.” In Natour’s experience, the outcome is rarely a simple go or no-go decision. Instead, the range of decisions often includes options such as go with safeguards or go but be prepared to pull back.

Faris Natour, of Article One Advisors

The use of human rights due diligence in the AI product lifecycle is relatively new (less than a decade old), and, as Natour explains, there are five essential actions that can work together as a system:

1. Encourage collaboration between policy and engineering teams

Inside most companies, responsible AI is split between policy teams, which may own the principles, and the engineering teams, which own the systems that bring those principles to life. Working with companies, Natour brings these two functions together through a series of workshops to create structured, ongoing collaboration between human rights and responsible AI experts and the technical teams to better co-develop responsible AI requirements.

In the early stages of the collective teams’ work, the challenges of turning principles into practice emerge quickly. For example, the scale of applications and use cases for an AI product can make it difficult to zero in on those uses that pose the greatest risk of harm. Not all products or use cases need to be treated equally, says Natour, and companies should identify those that could potentially cause the most harm. Indeed, these most-harmful uses may involve a “consequential decision” such as in the legal, employment, or criminal justice fields, he says, adding that those products should be selected for deeper due diligence.
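As a thought experiment only, that risk-based triage could be sketched in a few lines of Python; the domain list, parameter names, and routing labels below are hypothetical illustrations, not Natour’s actual criteria.

```python
# Hypothetical triage sketch: route use cases that could drive a "consequential
# decision" toward deeper human rights due diligence (categories are assumptions).
CONSEQUENTIAL_DOMAINS = {"legal", "employment", "criminal_justice"}


def due_diligence_tier(domain: str, affects_individual_outcomes: bool) -> str:
    """Return the level of review a proposed AI use case should receive."""
    if domain in CONSEQUENTIAL_DOMAINS and affects_individual_outcomes:
        return "deep human rights impact assessment"
    return "standard responsible-AI review"


print(due_diligence_tier("employment", affects_individual_outcomes=True))   # deeper review
print(due_diligence_tier("marketing", affects_individual_outcomes=False))   # standard review
```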

2. Consider the principles at each stage of the development process

Broad principles and values, such as fairness and human rights, should be considered at each stage of the lifecycle. For the principle of fairness, for example, teams may assess which communities will use this product and who will be impacted by those use cases. Then, teams should consider whether these communities are represented on the design and development teams working on the product, and if not, they need to develop a plan for ensuring their input.

3. Engage with impacted communities and rightsholders

Natour advocates for companies to actively engage with impacted communities and stakeholders, including those who are potential users or who may be affected by the product’s use. This could be the company’s own employees, for example, especially if the company is developing productivity tools to use internally in their workplace. Special consideration should be given to vulnerable and marginalized groups whose human rights might be at greatest risk.

External experts, such as Natour and his colleagues, hold focus groups with these stakeholders. The feedback from focus groups can then be used to influence model design and product development, as well as risk mitigation and remediation measures. “In the end, knowing how users and others are impacted by your products usually helps you make a better product,” he states.

4. Establish responsible foresight mechanisms

To prevent responsible AI from becoming a one-time check-the-box exercise, Natour says he uses responsible foresight workshops and other mechanisms as a “way to create space for developers to pause, identify, and consider potential risks, and collaborate on risk mitigations.”

The workshops use personas and hypothetical scenarios to help teams identify and prioritize risks, then design concrete mitigations with follow-on sessions to review progress. Another approach includes developing simple, structured question sets that push product teams to pause and think about harm. For example, Natour explains how one of his clients includes the question, “What would a supervillain do with this product?” to help product teams identify and safeguard against potential misuse.

5. Create documentation and feedback loops for accountability

As expectations around assurance rise from regulators, customers, and civil society, strong documentation and meaningful, accessible transparency are essential, says Natour. Clear, succinct, and accessible user-facing information about what a model does and does not do, about data privacy, and other key aspects can help users understand “what happens with their data, as well as the capabilities and the limitations of the tool they are using,” he adds.

Further, transparency should enable two-way communication, and companies should set up feedback loops to enable continuous improvement in the ways they seek to mitigate potential human rights risks.

The hardwired future

Effectively embedding human rights into the AI product lifecycle starts with a shared governance model between a company鈥檚 policy and engineering teams. Together they can collectively hardwire human rights into the way AI systems are imagined, built, and brought to market.


You can find more about human rights considerations around AI in our ongoing Human Layer of AI series here

Human Layer of AI: The crosswinds of AI, sustainability, and human rights enter the mainstream in 2026

Key takeaways:

      • Clean energy takes center stage in corporate AI initiatives – Access to cheap, low-carbon power will become a core driver of AI competitiveness, especially in the US, where electricity costs are on the rise.

      • Corporate buyers of AI will exert new leverage over suppliers – Corporate buyers will increasingly use their purchasing power to push data center operators to align AI build-outs with local climate, water, and community expectations, not just to supply more metrics.

      • AI’s human labor layer enters mainstream due diligence 鈥 AI labor supply chains will be brought into the mainstream supply chain and require human rights due diligence.


As we enter 2026, there are three main themes that corporations will need to manage: renewable energy, AI supplier behavior, and labor.

Theme 1: Renewables move to the center of corporate AI strategies

In 2026, AI competitiveness and energy policy will be tightly fused. With AI workloads driving up electricity demand amid data center buildouts, particularly in the United States, access to abundant, cheap, low-carbon renewable power becomes a decisive factor in AI pricing and availability. Countries and companies that lock in this advantage early will shape AI deployment patterns for the rest of the decade.

“The economics of renewable energy are what is causing it to accelerate, even in the US,” says Friedman, an expert in sustainability and business. “Despite the political winds, the fact is that wind and solar are growing faster because it is cheaper, better energy.”

In addition, countries and firms with large, subsidized renewable energy capabilities and flexible grids, such as China’s massive solar, wind, and hydro infrastructure, will have a low-cost advantage. (However, countries’ push for AI may counteract this by prompting governments to prioritize domestic AI stacks over purely cost-optimized ones.) Yet, combining this energy asset with China’s rapidly improving AI models, such as Kimi K2 and DeepSeek, it is not outside the realm of possibility that the country could emerge in the top spot in AI development and innovation.

Corporate pressure to increase AI adoption for efficiency, combined with stakeholder expectations of investing in a low-carbon future, will make renewables the center of corporate AI strategies. Increasingly, companies will be asked where their compute runs, what energy mix powers it, how cost effective that energy mix is, and whether they are effectively endorsing environmentally and socially harmful projects in host communities.

Theme 2: Local backlash forces suppliers and companies to confront AI’s impact

Over the last few years, big names among AI infrastructure providers have raced to take advantage of the AI revolution, investing heavily in AI-related data centers, cloud systems, and other infrastructure, with no end in sight over the next few years.

Despite the demand, local communities in which large data center construction projects are planned are pushing back. According to recent reporting, $64 billion of data center projects in the US have been blocked or delayed amid local opposition since 2025. This opposition stems in part from concerns over rising electricity costs, strains on local water and natural resources, and the loss of working farmland to data center rezoning attempts in rural communities.

In fact, AI data centers are pushing up electricity demand and fueling higher electricity prices for many US households. And, as retail electricity price increases continue over the next couple of years, it will be in part because data centers are consuming more electricity.

As a result, the demand from stakeholders, in particular those from local communities, including local and state politicians, for increased transparency on the environmental and social impacts of corporate AI services is likely to surge. In turn, corporate buyers of AI services will put pressure on the big AI service suppliers to be more precise about the locations of such data systems, as well as to disclose more of the associated sustainability data, such as energy sources, grid impacts, and their level of community engagement where large AI infrastructure is based.

To deal with these competing priorities, boards of companies using AI services will need to reconcile AI cost-cutting with their transition commitments by ensuring that cost advantages are not built on externalizing environmental and social harms.

Not surprisingly, in 2026, more boards will be drawn into explicit debates about whether AI-driven cost savings justify exposure to higher community, political, and regulatory risk. This turns questions about data center locations and power contracts into mainstream agenda items.

Theme 3: The human layer of AI emerges as a centerpiece of the supply chain

The idea that AI is automating everything will sit uncomfortably alongside a growing recognition that large-scale AI depends on a largely invisible workforce. Across the full AI lifecycle of products, some of which rely on models that utilize labor in data collection, curation, annotation, labeling, evaluation, and content moderation, there are thousands of workers performing the tasks that make models safe, accurate, and usable.

As AI systems scale across sectors, demand for this human labor increases in volume and complexity, according to a human rights expert at Article One Advisors. Indeed, much of this work remains outsourced, precarious, or gig-based (often in the Global South), with low pay, weak protections, and rampant exposure to psychologically harmful content. Civil society, unions, and regulators are beginning to connect AI innovation with labor rights and occupational health, and this reality makes the human layer of AI a frontline human rights issue rather than a technical detail.

Scrutiny of AI-related labor is likely to move from a niche concern to a mainstream pillar of corporate human rights due diligence. Companies will be under pressure to know what subcontractors and suppliers are doing to ensure human rights for individuals doing AI data enrichment and moderation work, under what conditions, and through which intermediaries.

Following the evolution of how conflict minerals or modern slavery have been integrated into supplier management, a shared view of AI labor supply chains among corporate procurement, legal, product management, and sustainability teams will materialize.

Forward into 2026

As AI becomes embedded in the infrastructure of daily life, companies will face mounting pressure to demonstrate that their AI strategies align with human rights and environmental commitments, not just efficiency gains. The convergence of these three themes signals that transparency in AI governance in 2026 will be inseparable from broader corporate governance and responsibility. And those organizations that treat these themes as compliance checkboxes rather than fundamental design principles will risk both reputational damage and operational disruption in an increasingly scrutinized landscape.

“Companies that fear the exaggerated risk of attracting the ire of activists are underestimating the greater risk of losing the goodwill of customers, investors, and employees that they need,” Friedman adds.


You can find out more about how companies are managing issues of sustainability here

Human Layer of AI: Protecting human rights in AI data enrichment work

Key highlights:

      • Human rights risks are elevated for data enrichment workers – Data enrichment workers can face low and unstable pay, overtime pressure driven by buyer timelines, harmful content exposure with weak safeguards, limited grievance access, and uneven legal protections that hinder workers’ collective voice.

      • Human rights due diligence is essential for companies – Companies as buyers of these services must map subcontracting tiers, assess risk by employment model, document worker protections down to Tier-2 and Tier-3 suppliers, and audit and monitor their own rates, timelines, and payment terms to avoid reinforcing harm to workers.

      • Responsible contracting and remedy are a necessity – Contracts should embed shared responsibility and include fair rates, predictable volumes, realistic deadlines, funded health and safety and mental-health supports, effective grievance channels, and remediation.


Demand for data enrichment work has surged dramatically with the rapid development and expansion of AI technology. This work encompasses collecting, curating, annotating, and labeling data, as well as providing model training and evaluation 鈥 all of which are critical activities that improve how data functions in technological systems.

However, the workers performing these tasks currently operate under different employment models, according to Lloyd from Article One Advisors, a corporate human rights advisory firm. Some workers are in-house employees at major AI developers, others work for business process outsourcing (BPO) companies, and many are independent contractors on gig platforms, where they bid for tasks and get paid per piece.

Human rights issues in data enrichment work

Data enrichment workers sit at the sharp end of the AI economy, yet many struggle to earn a stable, decent income. In particular, pay for gig workers often falls short of a living wage because tasks are sporadic, payments can be delayed, and compensation is frequently piece-rate. Because work flows through layers of platforms and subcontractors, fees and margins get skimmed at each layer and shrink take-home pay, another area of exploitation for today’s digital labor workforce.

In addition, excessive working hours threaten workers’ right to rest, leisure, and family life and, in some places, even breach guidance from the International Labour Organization (ILO) or local labor laws. Buyer purchasing practices with aggressive deadlines are a significant upstream driver of this overtime pressure.




For many, the work itself carries health risks. Labeling and moderation can require repeated exposure to violent or graphic content, with well-documented mental-health impacts. Yet safeguards are uneven. Indeed, workers may lack protected breaks, task rotation, mental-health support, adequate insurance, or the option to switch assignments. Even when content is not graphic, strain shows up as ergonomic problems, stress, and disrupted sleep.

When harm occurs, remedy can be hard to access. Platform-based work setups often provide no clear, trusted point of contact, and reports of retaliation deter complaints. Effective operational grievance mechanisms can be missing, and this leaves workers without credible paths to redress.

Finally, national labor protections vary widely, and platform workers in particular often fall through regulatory gaps. Because work is individualized and online, forming unions or works councils is harder. This weakens workers’ collective voice just where and when it is most needed to identify risks, negotiate improvements, and secure remedies.

Due diligence for companies buying data enrichment services is essential

When companies procure data enrichment services, they must recognize that respecting human rights extends throughout the entire value chain, not just to themselves and their direct suppliers. Companies that create trusted partnerships with their suppliers can identify issues before they become harmful and create mutual accountability for the humans behind the algorithms.

Article One Advisors’ Lloyd explains that the mandatory baseline starts with human rights due diligence in areas such as:

      • Risk identification and assessment – The first step for companies is to identify and assess risks by understanding their suppliers’ model. This means knowing which groups of workers are full-time employees, contracted workers, or platform-based gig workers. Each model carries different risk profiles.
      • Subcontractor ecosystem mapping – Tracing the subcontracting chain to see how many layers exist between the supplier and the workers is essential. Fees and pressures compound at each tier of the value chain, says Lloyd.
      • Documentation of worker protections in Tier 2 and Tier 3 suppliers – Assessing and promoting worker protections for every layer of the value chain, which includes making sure that wage structures are clearly defined and equitable, that health and safety measures are adequate, and that protections against exposure to harmful content and effective grievance mechanisms exist, are baseline elements of human rights due diligence.
      • Examination of the company’s own practices – Finally, companies must ensure that their own procurement standards and contracts are not reinforcing human rights harms. This includes confirming that contract terms, timelines, and payment schedules are not inadvertently forcing suppliers to cut corners.

Responsible contracting and remedy mechanisms

Companies, as buyers of data enrichment services, also must instill shared responsibility for worker outcomes among themselves, BPOs, platforms, and model developers. Comprehensive, clear human-rights standards, living-income benchmarks, and shared responsibility are essential elements of good purchasing practices. More specifically, these require fair rates for work, predictable volume expectations, and realistic timelines to make sure suppliers do not push excessive hours. In addition, budgets should include cost-sharing for audits, key risk management measures (such as mental health support), and occupational health and safety controls.

Smart remediation turns harmful situations into improved conditions by providing back-pay for underpayment, medical and psychosocial care after exposure to harmful content, contract adjustments to remove perverse incentives, and time-bound corrective action plans co-designed with worker input. As a last resort when buyer and supplier need to part ways, a responsible exit is planned with notice, transition support, and no sudden contract termination that strands workers.

Similarly, grievance mechanisms for platform workers, who are often dispersed across geographies, classified as independent contractors, and lacking line managers or union channels, need to be contractually documented. Effective grievance redressal needs to include confidential mechanisms and remediation processes, in-platform dispute tools, independent individuals to investigate complaints, multilingual facilitation, and joint buyer-supplier escalation paths to bridge gaps in labor-law protection and deliver credible remedies at scale, Lloyd notes.

Promoting quality through worker well-being

Protecting data enrichment workers is not only an ethical imperative but also essential for AI quality itself. When workers face excessive hours, inadequate pay, or harmful content exposure without proper support, the resulting stress and burnout directly impact data quality outcomes. Companies must recognize that responsibility for worker well-being and quality data outcomes extends throughout the entire value chain and does not rest with BPO providers alone.


You can find more about the challenges companies and their workers face from forced labor in their supply chain here

The Human Layer of AI: How to build human rights into the AI lifecycle

Key takeaways:

      • Build due diligence into the process – Make human rights due diligence routine from the decision to build or buy through deployment by mapping uses to standards, assessing severity and likelihood, and closing control gaps to prevent costly pullbacks and reputational damage.

      • Identify risks early on – Use practical methods to identify risks early by engaging end users and running responsible foresight workshops and bad-headlines exercises.

      • Use due diligence to build trust – Treat due diligence as an asset and not a compliance box to tick by using it to de-risk launches, uncover user needs, and build durable trust that accelerates growth and differentiates the product with safety-by-design features that matter to buyers, regulators, and end users.


AI is reshaping how we work, govern, and care for one another. Indeed, individuals are turning to cutting-edge large language models (LLMs) to ask for emotional help and support in grieving and coping during difficult times. “Users are turning to chatbots for therapy, crisis support, and reassurance, and this exposes design choices that now touch the right to information, privacy, and life itself,” says Poynton, co-founder and principal at Article One, a management consulting firm that specializes in human rights and responsible technology use.

These unexpected uses of AI are reframing risk because in these instances, safeguards cannot be an afterthought. Analyzing who might misuse AI alongside determining who will benefit from its use must be built into the design process.

To put this requirement into practice, a human rights lens must be applied across the entire AI lifecycle from the decision to build or buy to deployment and use, to help companies anticipate harms, prioritize safeguards, and earn durable trust without hampering innovation.

Understanding human rights risks in the AI lifecycle

Human rights risks can surface at every phase of the AI lifecycle. In fact, they first emerged in efforts to train frontier LLMs for content moderation and are now showing up elsewhere. For example, data enrichment workers who refine training data and data center staff who keep these systems running are the most likely to face labor risks. Often located in lower-income markets with weaker protections, they face low wages, unsafe conditions, and limits on other freedoms.

During the development phase, biased training sets and the probabilistic nature of models can generate misinformation or hallucinations, and these can further undermine rights to health and political participation. Likewise, design choices often can translate into discriminatory outcomes.

Unfortunately, the use of AI-enabled tools also can compound these harms. Powerful models can be misused for fraud or human trafficking. In addition, deeper integration with sensitive data can heighten privacy and security risks.

A surprising field pattern exacerbates the risk: people increasingly use AI for therapy-like support and disclose issues related to emotional crises and self-harm. In particular, this intimacy widens product and policy obligations, which include age-aware safeguards and clear limits on overriding protections.

Why human rights due diligence is urgent

That’s why human rights due diligence must start with people, not the enterprise. By embedding human rights due diligence into the AI lifecycle, development teams can begin to understand the technology and its intended uses, then map those uses to international standards. Next, a cross-functional team gathers to weigh benefits alongside harms and to consider unintended uses. Primarily, they need to answer the question, “What happens if this technology gets in the hands of a bad actor?”

From there, the process demands an analysis of severity, which assesses the scale, scope, and remediability of each potential harm, along with the likelihood of each use. The final step involves evaluating current controls across supply chains, model design, deployment, and use phases to identify gaps.
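To illustrate how a team might rank scenarios once severity and likelihood are scored, here is a small Python sketch; the 1-to-5 scales, the averaging, and the severity-times-likelihood formula are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass


@dataclass
class HarmScenario:
    """One potential harm, scored on rough 1-5 scales (the scales are an assumption)."""
    name: str
    scale: int          # how grave the harm would be for affected people
    scope: int          # how many people could be affected
    remediability: int  # how hard the harm would be to put right (5 = irreversible)
    likelihood: int     # how plausible the scenario is

    @property
    def severity(self) -> float:
        # Average the three severity dimensions discussed above
        return (self.scale + self.scope + self.remediability) / 3

    @property
    def priority(self) -> float:
        # Higher severity and higher likelihood push a scenario up the mitigation queue
        return self.severity * self.likelihood


scenarios = [
    HarmScenario("re-identification of anonymized users", 4, 3, 4, 3),
    HarmScenario("model misused to generate harassment content", 5, 4, 3, 4),
]
for s in sorted(scenarios, key=lambda s: s.priority, reverse=True):
    print(f"{s.name}: priority {s.priority:.1f}")
```

However a team chooses to score, the ranking only matters if the final step, evaluating existing controls, acts on the highest-priority gaps first.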

The biggest barrier to layering a human rights lens into AI is the need for speed to market. The race to put out minimum viable products, accompanied by competitive pressure, can eclipse robust governance, yet early due diligence may prevent costly pullbacks and bad headlines. Article One’s Poynton notes that no one wants to see their product on the front page for enabling stalking or spreading disinformation. Building safeguards early “ensures that when it does launch, it has the trust of its users,” she adds.

How to embed safeguards without slowing teams

The most efficient path to translating human rights into the AI product lifecycle is to turn policy principles, goals, and ambitions into actionable steps for the engineers and the product teams. This requires the “engineers to analyze how they do their work differently to ensure these principles live and breathe in AI-enabled products,” Poynton explains. More specifically, this includes:

Identifying unexpected harms – One of the most critical, yet difficult components of the human rights impact assessment is brainstorming potential harms. Poynton recommends two ways to make this happen: First, engage with end users to help identify potential harms by asking, “What are some issues that we may not be considering from the perspectives of accessibility, trust, safety, and privacy?” Second, run responsible foresight workshops at which individuals play the parts of bad actors to better identify harms and uncover mitigation strategies quickly. Pair that with a bad-headlines exercise that can be used to anticipate front-page failures. Then, ship with these protections in place, pre-launch.

Implementing concrete controls – Embedding safety-by-design should cover both content and contact, a lesson from gaming, in which grooming risks require more than just filters. Build age-aware and self-harm protocols, including parental controls and principled policies on overrides. Govern sales and access with customer vetting, usage restrictions, and clear abuse-response pathways. In the supply chain, set supplier standards for enrichment and data center work that include fair wages, safe conditions, freedom of association, and grievance channels.

Treating due diligence as value-creating, not box-checking – Crucially, frame due diligence as an asset rather than a liability. “Make your product better and ensure that when it does launch, it has the trust of its users,” Poynton adds.

Additional considerations

Addressing equity must be front and center. Responsible strategies include diversifying training sets without exploiting communities and giving buyers clear provenance statements on data scope and limits.

Bridging the digital divide is equally urgent. Bandwidth and device gaps risk amplifying inequality if design and deployment assume privileged contexts. In the workplace, Poynton stresses that these impacts will be compounded, from entry-level to expert roles.

Finally, remember that AI’s environmental footprint is a human rights issue. "There is a human right to a clean and healthy environment," Poynton notes, adding that energy and water demands must be measured, reduced, and sited with respect for local communities, even as AI helps accelerate the clean energy transition. This is a proactive mandate.


You can find out more about the ethical issues facing AI use and adoption here
