Identity verification Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/identity-verification/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Is Mexico ready for the biometric CURP?
Tue, 07 Oct 2025

Key takeaways:

      • Legal promise, operational gaps – The biometric CURP could streamline identity verification in legal and notary contexts but lacks training programs and designated data capture sites.

      • Balancing security and civil liberties – While intended to help locate missing persons, the CURP raises concerns over government surveillance due to broad access by security agencies.

      • Digital readiness under scrutiny – Mexico currently lacks the systems and regulatory framework to securely manage biometric data, risking identity fraud and misuse if not properly addressed.


Last July, key reforms to Mexico's General Population Law and the General Law on Forced Disappearance were approved, marking the beginning of a transformation in the way people are officially identified in the country.

With these reforms, the biometric Unique Population Registry Code (CURP) becomes an official identification document. It is now mandatory, is available in both physical and digital formats, and will integrate biometric information such as fingerprints, iris scans, and photographs.

The biometric CURP will be used mainly for identity validation on digital platforms, immigration procedures, access to health services, legal processes, and to support the search for missing persons. With little time left before the rollout, doubts linger over how the new system will be implemented and the impact it may have, especially in the judicial and notarial areas.
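The pairing of a fixed-format code with captured biometrics can be pictured with a minimal record check. This is purely an illustrative sketch: the field names, the completeness rules, and the sample values are assumptions, not the official CURP specification (only the 18-character length of the CURP code itself is an established fact).

```python
from dataclasses import dataclass, field

# Illustrative only: field names and validation rules are assumed,
# not drawn from the official CURP specification.
@dataclass
class BiometricCurpRecord:
    curp: str                     # 18-character alphanumeric CURP code
    photograph: bytes = b""
    fingerprints: list = field(default_factory=list)  # one entry per captured finger
    iris_scans: list = field(default_factory=list)    # left/right iris templates

    def missing_elements(self) -> list:
        """Names of any elements still required before the record
        could plausibly be used for identity validation."""
        missing = []
        if len(self.curp) != 18:
            missing.append("curp")
        if not self.photograph:
            missing.append("photograph")
        if not self.fingerprints:
            missing.append("fingerprints")
        if not self.iris_scans:
            missing.append("iris_scans")
        return missing

# Hypothetical record: the code string is format-shaped sample data only.
record = BiometricCurpRecord(curp="GORJ800101HMSNML09", photograph=b"\x89PNG")
print(record.missing_elements())  # ['fingerprints', 'iris_scans']
```

A record like this would pass only once every biometric element has been captured, which is exactly the step for which no official capture sites have yet been designated.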

To provide a professional perspective on the possible impacts, José Raúl González Ramírez, a Master in Notarial Law and notary candidate assigned to Notary Office 1 in Cuernavaca, Morelos, shared his view of what this mandate will mean for Mexico and its citizens.

Challenges and benefits in the legal and notary fields

In Mexico, there is currently no single official identification document. In the legal and notary field, the passport and voter ID card are mainly used, as they are documents issued by federal institutions that generally use greater security measures. The biometric CURP could represent a solution to this lack of a single document, offering a more reliable tool to validate people’s identity.

However, key parts of the system are still missing, and the gaps could have significant consequences. For example, there is no implementation program to train notaries on the document; and while the College of Mexican Notaries hopes to roll out training mechanisms in the coming months, so many areas of the system remain undefined that training at this stage could prove difficult.

Further, no designated sites have been reported for the population to go for the official capture of the biometric data that is at the heart of the system's methodology. Finally, no official date has been defined for the mandatory use of this new CURP.

Hope or surveillance?

The biometric CURP was approved with the main objective of strengthening the search, location, and identification of missing persons in Mexico. Not surprisingly, this has raised significant government surveillance issues. And while access to the CURP database is stipulated to be exclusively for search purposes, consultation access will be allowed to prosecutors, investigative bodies, and the National Intelligence Center.

This measure has generated divided opinions among Mexicans. On one hand, there is fear that it could become a tool for government surveillance, as the National Guard (GN) and the Secretariat of Security and Citizen Protection (SSPC) will be able to access individuals' sensitive information, including bank and telecommunications data. On the other, it represents hope for thousands of families who have been searching for their loved ones in a country in which, on average, 42 people disappear daily, according to the National Registry of Missing and Unlocated Persons (RNPDNO).

José Raúl considers the implementation positive, since its initial purpose is the search for missing persons. The rest of the population's concerns, in that sense, would be "collateral damage," he adds.

"It is going to be an identification that, if done correctly and if the registration is adequate, will strengthen the notary's ability to identify the person in front of them and avoid, as much as possible, a false declaration or impersonation at the moment of identification," José Raúl explains.

Is Mexico ready?

One of the greatest challenges will be Mexico's ability to securely store and manage the vast amount of confidential data required for the biometric CURP. According to José Raúl, the country currently lacks the necessary systems, infrastructure, and regulatory framework to handle this information effectively.

"If implemented correctly, this system could provide stronger safeguards against identity fraud," he explains. "However, without a reliable database and proper data management, it could become a serious problem."

In addition, there is uncertainty about how the data will be captured, with which population the process will begin (those over 18 years of age, or also minors), and how often the database should be cleaned. The population aged 0 to 18 poses a particularly complex challenge due to its size, which current resources and infrastructure are not equipped to handle effectively.

In the coming months, it will be crucial for the Mexican government to define the implementation mechanisms, the initial target population, and the data cleansing processes, as this will be one of the most important aspects for the success or failure of the biometric CURP.
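One way to picture the "database cleaning" question raised above is a periodic pass that flags duplicate and stale entries before they can be abused for identity fraud. The record shape, the sample CURP strings, and the five-year staleness threshold below are all illustrative assumptions, not anything defined by the Mexican government:

```python
from collections import Counter
from datetime import date

# Hypothetical registry rows: (curp, last_verified) pairs.
# The shape and values are assumptions for illustration only.
records = [
    ("AAAA000101HDFRRL01", date(2024, 1, 10)),
    ("AAAA000101HDFRRL01", date(2025, 3, 2)),   # same CURP registered twice
    ("BBBB950505MDFRRL02", date(2018, 6, 1)),   # not re-verified in years
]

def cleansing_report(records, today=date(2025, 10, 7), max_age_years=5):
    """Flag duplicate CURP codes and entries whose last verification
    is older than the (assumed) maximum age."""
    counts = Counter(curp for curp, _ in records)
    duplicates = sorted(c for c, n in counts.items() if n > 1)
    stale = sorted(curp for curp, seen in records
                   if (today - seen).days > max_age_years * 365)
    return {"duplicates": duplicates, "stale": stale}

print(cleansing_report(records))
# flags the repeated CURP and the entry last verified in 2018
```

Even a toy pass like this makes the open questions concrete: who runs it, how often, and what happens to the flagged records.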

The road ahead

Although a pilot program is currently underway in Mexico City, it is essential to establish a robust action plan for collecting population data. Likewise, a clear framework must be defined for the management, maintenance, and protection of this data, especially considering the sensitive nature of the information and the critical need to prevent misuse. Further, it is crucial to assess whether the government has the technological infrastructure required to securely store this data, or if investment in such storage capabilities will be necessary.


You can learn more about the challenges of identity verification here

Unintended consequences: How stricter rules under the OBBBA could make fraud easier
Wed, 03 Sep 2025

Key insights:

      • The OBBBA has stricter verification requirements – These requirements have inadvertently created an environment that incentivizes fraudulent activities, such as identity theft and document forgery, especially among individuals seeking Medicaid and ACA coverage.

      • The OBBBA enacts tax policy modifications – These modifications, including reduced third-party reporting mechanisms and expanded tax benefits, may create opportunities for underreporting of income among gig workers and small businesses.

      • The OBBBA contains large-scale funding mechanisms – These mechanisms, along with rapid implementation, may put pressure on traditional procurement safeguards, creating potential challenges in oversight and accountability.


The One Big Beautiful Bill Act (OBBBA) aimed to fulfill several policy objectives from the Trump Administration’s campaign platform. However, an examination of the legislation and its implementation reveals mixed results, particularly concerning potential vulnerabilities for fraud, waste, and abuse across various sectors.

Healthcare eligibility fraud: A slippery slope

The OBBBA introduced stricter verification requirements for lawful status, residency, and work eligibility within healthcare benefit programs. While intended to bolster program integrity, these tightened standards have, in some cases, inadvertently incentivized fraudulent activities.

For example, individuals seeking Medicaid and Affordable Care Act (ACA) coverage have reportedly resorted to identity theft, document forgery, and falsified employment or training records to meet the new criteria. This environment has attracted organized fraud networks and unscrupulous enrollment brokers who exploit system vulnerabilities, ultimately impacting legitimate beneficiaries through compromised identities and bilking taxpayers through misallocated resources.

Historically, enhanced verification measures have sometimes led to similar unintended consequences. The period after passage of the Immigration Reform and Control Act of 1986 (IRCA), for instance, saw a rise in widespread counterfeiting operations due to document requirements. Similarly, Medicaid programs have consistently battled identity fraud, and related work requirement pilot programs have shown patterns of misreporting. The rapid, large-scale eligibility transitions during the pandemic also created opportunities for fraudulent activity. These historical examples demonstrate an unfortunate recurring pattern: administrative safeguards, while designed for integrity, can sometimes create incentives for sophisticated circumvention strategies.

Tax policy changes: Opening doors to evasion

The OBBBA’s tax policy modifications have introduced new dynamics in reporting and compliance, potentially affecting revenue collection. The legislation reversed the $600 1099-K reporting threshold and raised 1099-MISC/NEC thresholds to $2,000. This reduction in third-party reporting mechanisms makes it harder to ensure accurate income declaration. Simultaneously, the law expanded certain tax benefits, including a 100% federal credit for private-school scholarship donations and a doubled, qualified small business stock exclusion. These changes impact various stakeholders, including gig workers, self-employed individuals, operators of small businesses, high-income investors, charitable organizations, and tax planning professionals.

In addition, state-level scholarship programs with similar 100% credit structures have experienced various forms of abuse. These precedents indicate that the current changes in the OBBBA may create opportunities for underreporting of income among gig workers and small businesses. There is also potential for misuse of the new tax benefits through self-dealing arrangements or sophisticated strategies designed to minimize tax obligations.

History suggests that changes to reporting requirements and tax incentives can create compliance challenges. Past instances in which reporting mechanisms were weakened or enforcement reduced have correlated with an increase in the tax gap – the difference between taxes owed and taxes collected. The Tax Cuts and Jobs Act (TCJA) era, for example, demonstrated how new provisions could be exploited in unforeseen ways.

Unaccountable spending and contracting fraud: A risky proposition

The OBBBA also established significant funding mechanisms, including a $100 million Office of Management and Budget fund and $30 billion allocated for immigration enforcement activities, that granted relatively broad administrative discretion in their deployment. These substantial appropriations, intended for rapid implementation, may put pressure on traditional procurement safeguards.

Indeed, the sheer scale and urgency of these funding streams have attracted various participants, including agency officials with expanded discretionary authority, established government contractors, and new market entrants.

Historical experience with large-scale, rapidly deployed government funding suggests potential challenges in oversight and accountability. The Department of Homeland Security, for example, has been designated as High Risk by the Government Accountability Office partly due to procurement management concerns. In the past, post-9/11 security initiatives and Iraq reconstruction efforts also revealed vulnerabilities in expedited contracting processes; and more recently, COVID-19 relief programs like the Paycheck Protection Program demonstrated how substantial funding, compressed timelines, and reduced oversight can create conditions conducive to fraud and waste. These precedents suggest that the current funding structure within the OBBBA may face similar risks, including potential misallocation of resources, irregular contracting practices, and exploitation by opportunistic actors seeking to benefit from loosely constrained procurement processes.

Cross-cutting vulnerabilities and systemic impact

The OBBBA’s implementation has introduced significant operational changes across multiple government programs, leading to rapid policy transitions and large-scale re-verification processes. These administrative shifts have generated confusion among beneficiaries and stakeholders, which opportunistic actors have exploited.

This exploitation includes phishing operations and fraudulent benefit fixer services that prey on individuals struggling to navigate the new requirements. The pace and complexity of these changes have challenged traditional oversight mechanisms, as the government鈥檚 capacity for auditing, data analytics, and procurement controls has struggled to keep pace with the scale and speed of implementation demands.

These gaps in oversight and enforcement are likely to create systemic vulnerabilities beyond immediate program integrity concerns. When fraudulent activities succeed, legitimate program beneficiaries face reduced access to services as resources are diverted from their intended purposes. Simultaneously, compliant taxpayers bear increased burdens as fraudulent claims and inefficient spending patterns require additional revenue or reduce the effectiveness of public investments.

This dynamic illustrates how implementation challenges, like those in the OBBBA, can create cascading effects, ultimately undermining both program effectiveness and public trust in government operations, regardless of the underlying policy objectives.


You can find more of our coverage of the impact of the One Big Beautiful Bill Act here

A deep dive into the growing threat of SIM swap fraud
Mon, 18 Aug 2025

Key insights:

      • SIM swap fraud is a growing concern – The scale of this trend is alarming, with 1,075 SIM swap attacks investigated by the FBI in 2023, resulting in losses approaching $50 million.

      • Weak authentication processes in telecoms enable SIM swap fraud – This allows fraudsters to easily hijack phone numbers, highlighting the need for stronger authentication protocols in the telecommunications industry.

      • Regulatory intervention is necessary to protect customers – The FCC has introduced rules, such as FCC 23-95, to require telecoms to implement secure methods of authenticating customers before approving SIM changes or port-outs.


Even as cybercrime and fraud have escalated into a global crisis, one type of scheme is seeing stunning levels of growth: SIM swap fraud. This trend is specifically plaguing the telecom industry and its customers, and it was second only to synthetic identity fraud among the most damaging fraud schemes in the telecom industry.

SIM swap fraud

Through SIM swap fraud, fraudsters hijack victims' phone numbers, gaining unauthorized access to sensitive accounts such as banking and cryptocurrency platforms. For example, one bank customer's account was drained after a fraudster deceived Xfinity Mobile into transferring the customer's phone number and then intercepted authentication codes. Likewise, in March, T-Mobile had to settle a case involving a cryptocurrency-related SIM swap attack in 2020.

The scale of this trend is alarming. In 2023, the FBI investigated 1,075 SIM swap attacks, with losses approaching $50 million. In 2024, reports indicated a 240% surge in SIM swap cases, 90% of which occurred without victim interaction. These incidents highlight the need for stronger authentication protocols in the telecommunications industry to combat this growing trend.

What is SIM swap fraud?

SIM swap fraud occurs when a fraudster convinces a mobile carrier to transfer a victim鈥檚 phone number to a SIM card they control, exploiting the legitimate feature of mobile number portability. Once the swap is complete, the victim鈥檚 phone loses network connectivity, and the fraudster receives all calls and texts, including one-time passwords for account access.

SIM swap fraud is appealing to fraudsters due to its scalability, the lack of need for technical expertise, and the potential for massive payouts from a single attack, particularly when targeting high-net-worth individuals or cryptocurrency investors. Fraudsters also can increase their impact by exploiting data from large scale breaches purchased on the dark web to target multiple victims simultaneously using automated call lists or scripts and hitting numerous carrier accounts with minimal effort.

Additionally, SIM swapping requires no technical expertise, relying instead on the fraudster鈥檚 ability to manipulate carrier employees into transferring a victim鈥檚 phone number to a fraudster鈥檚 new SIM card. Using basic personal information that is often retrieved from public sources or data leaks, fraudsters can execute attacks with just a phone call or store visit, no coding or hacking skills needed. The only tools required are inexpensive prepaid SIM cards or burner phones.

Indeed, the scheme itself is relatively simple. Once fraudsters collect personal information about their target – such as name, address, phone number, date of birth, or financial details – they contact the victim's mobile carrier, posing as the account holder.




Because mobile carriers typically verify identity using security questions, PINs, or personal details, fraudsters armed with leaked personal data can often pass these checks. Once the request is approved, the mobile carrier deactivates the victim's SIM card and ports the phone number to the fraudster's SIM. The victim's phone displays "No Signal," while the fraudster gains control of all communications.

With the phone number under their control, fraudsters can then intercept codes to reset passwords for email, banking, or cryptocurrency accounts, use password reset features that rely on phone number verification, and, most damagingly, transfer funds, make unauthorized purchases, or sell account access on the Dark Web.

Telecoms may be enabling SIM swap fraud

A 2020 Princeton University study examined five major prepaid wireless carriers in the United States – AT&T, T-Mobile, TracFone, US Mobile, and Verizon – testing how they handled SIM swap requests and the downstream effects on account protection. The study revealed that 80% of first attempts at SIM swap fraud were successful. The report noted that a main reason for this success rate was that the carriers relied on weak authentication methods that fraudsters could easily bypass. While each carrier revealed distinct vulnerabilities in its SIM swap processes, none of them required in-person verification or strong multi-factor authentication, allowing fraudsters to execute remote attacks with relative ease.

The study also found that carriers prioritized usability over security, opting for simple authentication methods to streamline customer service, making it easier for fraudsters to exploit vulnerabilities. By undermining security to reduce customer friction, carriers inadvertently weakened their own defenses against SIM swap fraud.

By exposing telecom vulnerabilities and highlighting that telecoms were a weak link in the fraud ecosystem, the Princeton study led to a pivotal change within the telecom industry, sparking widespread attention and leading to lawsuits and consumer complaints. In fact, the study was cited by the U.S. Federal Communications Commission (FCC) in its efforts to create regulations to protect consumers.

Government intervention needed

On November 15, 2023, the FCC introduced a rule, FCC 23-95, to specifically address SIM swap fraud. Prior to the new rule, the FCC had general regulations to protect customer data, but these did not specifically target SIM swap fraud or mandate authentication and notification protocols for carriers. The lack of targeted regulations allowed inconsistent security practices across the telecom industry, which increased fraud vulnerabilities. FCC 23-95 requires telecoms to implement secure methods of authenticating customers before approving SIM changes, including stronger authentication methods such as account-specific PINs, passwords, or multi-factor authentication, and a prohibition on the use of "predictable or easily obtainable information" such as Social Security numbers or birthdates.

Customers must be immediately notified via text or email whenever a SIM change request is made. Notifications must be sent to the customer鈥檚 existing device (if available) or a secondary contact method. Also, carriers must immediately notify customers of failed authentication attempts related to SIM change requests. Further, carriers must train employees to identify fraudulent requests and implement secure processes to prevent social engineering, which is a common tactic in SIM swap fraud.
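The rule's core requirements – authenticate with something stronger than predictable personal data, and notify the customer on both successful and failed SIM change attempts – can be sketched as a gate in a carrier's workflow. This is a hypothetical sketch: the function names, factor names, and request fields are assumptions, not drawn from FCC 23-95's text or any carrier's system.

```python
def notify_customer(account_id: str, message: str) -> None:
    """Stand-in for the SMS/email alert the rule requires."""
    print(f"[notify {account_id}] {message}")

def approve_sim_change(request: dict) -> bool:
    """Illustrative FCC 23-95-style gate: require at least one strong
    factor; SSN, birthdate, and similar data never suffice alone."""
    provided = set(request.get("factors", []))
    strong = provided & {"account_pin", "app_mfa_code", "hardware_token"}
    if not strong:
        # Failed authentication attempts must also trigger a notification.
        notify_customer(request["account_id"], "Failed SIM change authentication")
        return False
    notify_customer(request["account_id"], "SIM change requested on your account")
    return True

denied = approve_sim_change({"account_id": "A1", "factors": ["ssn", "birthdate"]})
granted = approve_sim_change({"account_id": "A1", "factors": ["account_pin", "ssn"]})
print(denied, granted)  # False True
```

The design point is that weak identifiers may accompany a request but can never substitute for the strong factor, closing the social engineering path described above.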

The new FCC rule established baseline requirements for fraud protection, ensuring consistency across the telecom industry while allowing telecoms the flexibility to adopt advanced tools like biometric authentication or behavioral analytics.

The new rule initially came with a compliance deadline; however, telecommunication companies sought more time to upgrade technology and implement new employee training. Thus, the FCC has waived the deadline for all rules adopted in FCC 23-95, and there has been no further update.

Combating the threat

To combat the rising threat of SIM swap fraud, telecom providers need to adopt strong fraud prevention procedures and tools, similar to those used by financial institutions. Implementing strict authentication protocols, such as multi-factor authentication with biometrics or app-based codes, can significantly reduce unauthorized SIM swaps. Real-time monitoring using AI-driven systems can detect anomalies such as unusual SIM swap requests or account changes from unfamiliar locations. Telecom carriers should also enhance customer verification processes, requiring strong identifiers beyond easily compromised data.
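The "real-time monitoring" idea above can be sketched as a simple risk score over an incoming SIM change request. The signals, weights, and review threshold are illustrative assumptions, not a production fraud model:

```python
def sim_swap_risk_score(request: dict) -> int:
    """Toy risk score for a SIM change request; higher means riskier.
    Signals and weights are illustrative assumptions."""
    score = 0
    if request.get("channel") == "phone":            # remote social engineering path
        score += 2
    if request.get("new_device_location") != request.get("home_region"):
        score += 2                                   # request from an unfamiliar area
    if request.get("recent_password_reset"):         # classic takeover precursor
        score += 3
    if request.get("account_age_days", 0) < 30:      # brand-new accounts are riskier
        score += 1
    return score

request = {"channel": "phone", "new_device_location": "TX",
           "home_region": "NY", "recent_password_reset": True,
           "account_age_days": 400}
score = sim_swap_risk_score(request)
print(score, "-> manual review" if score >= 5 else "-> normal processing")
# 7 -> manual review
```

A real deployment would feed far richer signals into a trained model, but the routing decision – escalate anomalous swap requests rather than auto-approve them – is the same.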

It’s also important to educate consumers about fraud risks and promptly notify them of suspicious activities, as mandated by FCC 23-95, which would allow victims to protect their devices immediately. By encrypting sensitive data, collaborating across industries to share threat intelligence, and leveraging advanced fraud detection tools, carriers can strengthen their defenses while maintaining consumer convenience.


You can find more information on the challenges organizations face in fighting financial fraud here

Protecting children’s privacy online: How to harmonize federal & state laws to ensure internet safety /en-us/posts/human-rights-crimes/harmonizing-laws/ Wed, 21 May 2025 17:03:57 +0000 https://blogs.thomsonreuters.com/en-us/?p=65907 In 2022, about 1.7 million children were victims of a data breach, which means that they had personal information exposed or compromised. In addition, 90% of parents told Pew Research Center that they were having access to their personal information.

Safeguarding children’s data and online privacy is challenging due to the existing fragmented legal framework, which consists of various federal and state laws with differing methods and restrictions. Even so, there are ways to address these gaps, says , Partner in the cybersecurity and data privacy litigation practice at Mayer Brown.

Understanding the current federal and state legal landscape

"The current legal landscape aiming to protect children's data and online privacy is a complex patchwork of federal and state laws, each with distinct approaches and limitations," says Thomson. At the federal level, the Children's Online Privacy Protection Act (COPPA) is the cornerstone legislation and is enforced by the U.S. Federal Trade Commission (FTC). COPPA primarily targets websites and online services directed at children under 13 years of age and mandates parental consent for the collection, use, and disclosure of personal information. Despite its foundational role, COPPA has faced criticism for its limited age scope and challenges in enforcement.

On the state level, Thomson notes that there has been a notable surge in initiatives to enhance children’s privacy protections. California, for example, leads with the California Consumer Privacy Act and its successor, the California Privacy Rights Act (CPRA), which extend privacy safeguards to minors under 18. This trend has inspired other states to enact similar laws, focusing on regulating children’s data, particularly in connection with social media.




These state laws often include provisions for age-appropriate design codes and “harmful content age verification” laws, which aim to shield children from potentially damaging online content. However, these efforts sometimes face opposition on grounds of infringing on free speech rights, highlighting the ongoing tension between privacy protection and other legal considerations.

At the federal legislative level, efforts to strengthen children’s online safety have seen mixed success. Initiatives like the Kids Online Safety Act have been proposed to address broader online safety issues, although many such efforts have not yet been passed into law. Recent U.S. Senate hearings have continued to highlight the need for comprehensive federal action.

The challenge remains to harmonize these federal and state efforts to ensure consistent and effective protections for children’s data privacy across the United States. Such enhancements to current protections could include standardizing age definitions, increasing parental control, imposing stricter penalties for non-compliance, and improving education and awareness about online privacy risks. These measures, combined with potential international collaboration, could help close existing gaps and create a more cohesive legal framework to protect children online.

Areas of commonality and divergences

The current legal landscape protecting children’s data and online privacy reveals several important commonalities across jurisdictions. Most prominently, there is widespread recognition that children deserve special privacy protections beyond those afforded to adults, according to Thomson. For example, laws at both federal and state levels requiring parental consent mechanisms for data collection from younger users demonstrate this special protection.

Another common thread is the growing emphasis on privacy by design principles, which requires online services to build child safety and privacy considerations into their products from inception rather than as an afterthought. Additionally, there is increasing consensus that certain exploitative design features which may target children should be restricted, with many laws limiting data retention periods and collection practices.

Despite these commonalities, Thomson points out that significant divergences create a fragmented regulatory environment. Perhaps most problematic is the inconsistent definition of "child" across jurisdictions. Indeed, COPPA applies only to children under 13, while state laws like California's CPRA extend protections to minors under 18. This creates compliance challenges for companies operating across multiple states.
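The compliance burden of these mismatched age cutoffs can be made concrete with a small lookup. The thresholds are the ones named above (COPPA's under-13 scope, CPRA's under-18 scope); the mapping and function are an illustrative sketch, not a legal determination:

```python
# Age cutoffs as described above; the lookup itself is illustrative only.
AGE_CUTOFFS = {
    "COPPA (federal)": 13,    # protections apply to children under 13
    "CPRA (California)": 18,  # extends protections to minors under 18
}

def applicable_protections(user_age: int) -> list:
    """Which regimes treat a user of this age as a protected minor."""
    return [law for law, cutoff in AGE_CUTOFFS.items() if user_age < cutoff]

# A 15-year-old is a protected minor under CPRA but outside COPPA's scope,
# so the same service owes that user different treatment in different places:
print(applicable_protections(15))  # ['CPRA (California)']
print(applicable_protections(10))  # ['COPPA (federal)', 'CPRA (California)']
```

Every additional state law with its own cutoff adds a row to that table, which is exactly the fragmentation a uniform federal age definition would eliminate.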




Another key divergence lies in the scope of covered entities. While some laws apply only to child-directed services, others extend to general audience websites that are likely to be accessed by children. Enforcement mechanisms also vary, with some laws relying primarily on regulatory action while others provide private rights of action.

These inconsistencies create regulatory gaps that sophisticated companies and bad actors can exploit, which clearly underscores the need for more harmonized approaches to children’s data protection that can keep pace with rapidly evolving technologies and business models that target young users.

How to close the fragmented legal landscape

To strengthen protections for children's data privacy and close existing gaps, Thomson explains that a comprehensive approach at both federal and state levels is necessary, with specific steps including:

Establish a consistent age definition – A uniform age definition should be established across all jurisdictions to ensure consistent application of privacy protections. This would address the current discrepancies under which federal and state laws operate.

Improve monitoring tools for parents – Enhancing parental control mechanisms, such as developing more user-friendly tools, would allow parents to monitor and manage their children's online activities effectively.

Expand the scope of protections of personal information – Specific efforts to reduce the exploitation of children online should include expanding the definition of personal information to encompass biometric data, reflecting the growing use of such data in digital services.

Improve transparency to parents – Require companies to provide clear, detailed disclosures about their data collection practices and any third-party sharing. This would help parents and guardians make informed decisions about their children's digital interactions.

Strengthen consistent protection across geographies – Establishing global standards for children's data privacy through international collaboration can also play a significant role in providing consistent protection across borders.

The splintered legal landscape protecting children's data privacy creates regulatory gaps that sophisticated companies and illicit actors can exploit. As the digital world continues to evolve, it is imperative that lawmakers and regulatory bodies work together to establish a more cohesive and comprehensive framework for protecting children's online privacy – one that will prioritize their safety, well-being, and rights in the face of increasingly complex technological advancements.


You can find out more about how organizations and individuals can fight against child exploitation both online and in the real world here

Customer ID programs: How best to conduct on-boarding & compliance
Thu, 01 May 2025

Among today's financial services institutions, there is a strong preference for conducting business in real time, or as close to it as possible. This means everything from opening accounts to wiring funds needs to be done faster and more efficiently.

Among traditional financial institutions, the on-boarding process, which includes customer screening, can consume valuable time and can sometimes extend to several days. Enhancing efficiency necessitates accelerating the screening and compliance procedures to ensure protection for both the customer and the institution involved. This need for efficiency aligns closely with the objectives of the customer identification program (CIP), which plays a vital role within financial institutions by helping to prevent financial crimes.

Compliance with CIP regulations performs several essential functions in addition to hindering financial crimes such as money laundering, terrorist financing, and identity theft. CIP compliance, which verifies customer identities to deter such illicit activities, is legally required under regulations like the USA PATRIOT Act. Non-compliance can result in substantial fines and reputational harm.

Need for ID verification is critical

Efficient identity verification is critical for the risk management function of a financial institution's compliance program, as delays may lead to the inadvertent on-boarding of high-risk clients who may pose financial and legal risks. Adherence to CIP requirements also supports institutional integrity, fostering trust among regulators, customers, and the public. Additionally, accurate and timely identification underpins ongoing anti-money laundering (AML) and counter-financing of terrorism monitoring, ensuring the continued effectiveness of these efforts.

In essence, prompt compliance is not merely about fulfilling a requirement – it entails actively safeguarding the financial system and the institution from imminent threats while meeting legal obligations in a timely manner.

As financial crimes evolve, regulatory bodies update CIP rules to address new threats and ensure robust defenses. Technological advancements, such as the development of AI, also play a role, as new tools and methods for verifying customer identities become available, enhancing security and efficiency. Additionally, changes in laws and regulations, such as amendments to the PATRIOT Act, necessitate updates to ensure continued compliance. And global standards – like those set by the intergovernmental Financial Action Task Force – may influence CIP rule changes to align with international best practices.

Further, feedback and experience from implementing existing rules can lead to refinement that improves effectiveness and reduces compliance burdens. These changes aim to enhance the ability of financial institutions to prevent financial crimes and maintain compliance while lowering regulatory costs and improving operational efficiency.

Changes to CIP requirements coming in 2025

Much like every other year, compliance professionals in 2025 face potential changes to CIP rules. The most significant changes include:

      • Partial SSN collection – Banks may be permitted to collect only the last four digits of a new customer's Social Security number (SSN).
      • Third-party verification – The full SSN would be obtained from a reputable third-party source before the account is opened.
      • Modernization of on-boarding – This approach is intended to align regulatory requirements with modern on-boarding processes currently used by many non-bank financial technology firms.
      • Enhanced customer experience – The proposed change aims to reduce friction between customers and the bank by simplifying the account-opening process.
      • Potential for increased automation – The use of third-party verification tools could lead to more automated on-boarding processes.

A joint proposal from the U.S. Securities and Exchange Commission and the U.S. Treasury Department's Financial Crimes Enforcement Network means the CIP rule is likely to change within the next year, probably as an update within the AML rule, which already includes CIP requirements for some investment advisers.

These potential changes to CIP rules reflect the dynamic nature of the financial industry and its regulatory environment and seek to modernize the on-boarding process, aligning it more closely with common practices used by non-bank financial technology firms. In short, these changes are designed to enhance customer experience by reducing friction and simplifying account opening procedures while leveraging automation for greater efficiency.

As financial institutions implement these updates, they will be better positioned to address new challenges, optimize compliance, and continue providing secure and seamless services in an increasingly fast-paced business environment. As a best practice, however, customer-facing institutions should pay close attention to the imminent regulatory changes as well as the timing for compliance. It is likely that compliance effective dates will fall in 2026, but it is important not to rest on that assumption.

The future in real-time

As financial institutions navigate the evolving landscape of real-time transactions, the necessity for efficient customer on-boarding processes becomes increasingly critical – and CIP plays a pivotal role in that. Indeed, CIP not only helps meet the demand for speed but also supports financial institutions in their fight against financial crime.

As such, compliance with CIP regulations is essential for managing risks, fostering trust among stakeholders, and supporting ongoing monitoring efforts. And as regulations and technologies advance, financial institutions need to continuously adapt to best maintain robust defenses and operational efficiency, protecting themselves and their customers from illicit activity.


You can find more information on the challenges financial institutions face in fighting money laundering and other financial fraud here.

]]>
Policing in the AI era: Balancing security, privacy & the public trust /en-us/posts/government/policing-ai-security/ Fri, 28 Feb 2025 13:37:58 +0000 https://blogs.thomsonreuters.com/en-us/?p=65096 Law enforcement is relying more heavily on video evidence, submitted both from community members and by analyzing data from public and privately owned security cameras. As police departments nationwide wade through thousands of hours of video, they are increasingly relying on AI-trained video analytics solutions to decipher data more effectively and more quickly.

The technological evolution of data-driven policing

Aggregating data from multiple sources to more effectively allocate police resources is not a novel concept. Indeed, more than 30 years ago, the New York City Police Department launched CompStat, its constantly updated database of daily crimes.

The core practices of aggregating crime data and generating visualizations to inform predictive policing have been duplicated across the United States and are practiced in the real-time crime centers of contemporary policing. Data-driven policing can help short-staffed agencies and foster greater community trust and engagement through informed outreach programs and data dashboards.

Historically, CompStat and other such programs relied on data from previous crimes to allocate resources and predict future crimes. New York City led American cities in investing aggressively in closed-circuit camera surveillance more than a decade ago. Video analytics were a key factor in helping to quickly identify two suspects in the aftermath of the Boston Marathon bombing in 2013.

Other contemporary technology such as body-worn cameras, license plate readers, gunshot detectors, internet-connected private security cameras (shared voluntarily with law enforcement departments), and facial recognition technology allows departments to track criminal activity in near-real time. A surge of post-pandemic funding has equipped law enforcement departments with license plate readers, video analytics tools, and security camera investments that help short-staffed agencies monitor their communities.

Most recently, in the aftermath of the fatal shooting of UnitedHealthcare's CEO in New York City in December 2024, video analytics and AI facial recognition technology were used to help track the suspect's path within New York City and then widely distribute security camera footage of the suspect's face. The suspect fled the crime scene but could not escape detection by the network of security cameras monitoring New York City.

How AI tools make hours of video content functional

The volume of video captured across these interconnected tools would never be manageable for real-time human review. AI incorporates behavior analysis in complex environments to flag anomalies for human review – finding the needle in the digital haystack. Algorithms are either rule-based (such as identifying whether a person enters a secure area, or whether a suspicious item is left behind) or built on a learning algorithm that can adjust based on past behaviors. A learning algorithm would, for example, be able to identify the difference between a bag blowing across a parking lot and a human being moving through the same parking lot.
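The rule-based variant of this flagging can be sketched in a few lines. The event schema, zone names, and thresholds below are hypothetical illustrations, not drawn from any specific vendor's product:

```python
# Hypothetical rule-based event flagging, as used conceptually in video
# analytics: fixed rules route a small subset of events to human review.

SECURE_ZONES = {"server_room", "evidence_locker"}  # illustrative zone names

def flag_event(event: dict) -> bool:
    """Return True if an event should be escalated for human review."""
    # Rule 1: any entry into a designated secure area is flagged.
    if event.get("type") == "entry" and event.get("zone") in SECURE_ZONES:
        return True
    # Rule 2: an unattended item left longer than 5 minutes is flagged.
    if event.get("type") == "item_left" and event.get("dwell_seconds", 0) > 300:
        return True
    return False

print(flag_event({"type": "entry", "zone": "server_room"}))     # True
print(flag_event({"type": "item_left", "dwell_seconds": 600}))  # True
print(flag_event({"type": "entry", "zone": "lobby"}))           # False
```

A learning-based system replaces these hand-written rules with a model trained on past labeled events, which is what lets it separate a blowing bag from a person.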

Video analytics companies can review body-worn camera footage and recognize instances of police professionalism, thus enhancing many citizens' interactions with police. Law enforcement agencies generally review an inordinately small amount of body-worn camera footage due to resource constraints, but analytics tools that automatically detect critical events (such as use of force, apprehensions, and de-escalation attempts) can help identify areas that call for enhanced training or could improve citizens' experience with law enforcement.

Of course, facial recognition technology and its use by law enforcement agencies has been a hotly debated topic since the 2020 aftermath of the death of George Floyd. The technology has drawn criticism for alleged inappropriate use by law enforcement agencies and over concerns that it has poor accuracy ratings in recognizing black and brown faces. In fact, Maryland enacted a law regulating law enforcement's use of facial recognition technology this past year, and the Maryland State Police adopted a new policy in accordance with the state law.

Further, new policies and legislation commonly ensure that facial recognition technology is not used alone to establish probable cause or to surveil Constitutionally protected activities, and there are additional concerns as well.

Addressing staffing shortages through more effective policing methods

Video analytics and other data-driven technologies can aid departments in addressing criminal activity more efficiently, especially if an agency is short-staffed. For example, Seattle deployed security cameras in three high-activity corridors last year and installed license plate readers on all police department vehicles. This is concurrent with the police department indicating that it will not respond to burglar alarm calls without additional verification (such as video, audio, or an eyewitness, for example).

The Seattle Police Department has struggled to address a nearly 30% vacancy rate since the recent global pandemic and must prioritize its responses to calls, as responding to all of them in a timely fashion is not possible. The intersection of technology in policing and staffing shortages has cracked open the door on discussing when and if alternative responses might work better than traditional law enforcement responses.

Law enforcement agencies can use video analytics and other emerging technologies to respond to crime in a more resource-conscious and efficient manner. These efficiencies must balance public desire for transparency and collaboration in how these technologies are deployed, how privacy is protected, and how data is secured. Also, policy measures should be implemented to ensure the utilization of the most current technologies, and efforts also should be made to train AI algorithms on unbiased data sets to prevent any perpetuation of harm against marginalized communities.


You can find more information here.

]]>
Identity theft is being fueled by AI & cyber-attacks /en-us/posts/government/identity-theft-drivers/ https://blogs.thomsonreuters.com/en-us/government/identity-theft-drivers/#respond Fri, 03 May 2024 14:33:41 +0000 https://blogs.thomsonreuters.com/en-us/?p=61215 The shift towards digital platforms has revolutionized financial transactions, but it has also fueled a surge in fraudulent activities, particularly identity theft cases that are driven by cyber-attacks. Cybercriminals, leveraging stolen identity information, have devised sophisticated schemes, complicating fraud mitigation efforts. And with the frequency of cybersecurity incidents on the rise each year, organizations face a mass of threats like ransomware and data theft, posing significant challenges across industries.

The average cost of a data breach has reached an all-time high, and now artificial intelligence (AI) has led to a significant increase in the sophistication of cybercrime. From deepfake technology to AI-powered hacking, cybercriminals are exploiting these advancements to orchestrate unique attacks.

How criminals are leveraging AI

Deepfake technology – One of the most concerning developments is the use of deepfake technology, a blend of machine learning and media manipulation that allows cybercriminals to create convincingly realistic synthetic media content. Criminals then use deepfakes to spread misinformation, perpetrate financial fraud, and tarnish reputations, exploiting the trust we place in digital media.

In a recent case, a company suffered a loss of $25 million due to the deception of an employee who fell victim to deepfake impersonations of his colleagues. The individual participated in a video call in which deepfake versions of the company's United Kingdom-based CFO and other team members were present. According to authorities, scammers engineered these deepfakes using publicly accessible video content.

AI-powered password cracking – AI algorithms, including machine learning and deep learning, enable systems to identify patterns and make predictions based on vast datasets. For example, the password-cracking tool PassGAN harnesses machine learning algorithms that operate within a neural network framework. And the tool seems to work: a study showcasing the effectiveness of PassGAN in password cracking found that 51% of passwords were cracked in less than a minute, 65% in less than an hour, 71% within a day, and 81% within a month.
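Simple keyspace arithmetic helps explain why so many passwords fall within minutes: short, low-variety passwords simply leave too few combinations to try. The sketch below uses a hypothetical guessing rate for illustration; it is not a model of PassGAN itself, which guesses likely passwords rather than enumerating the full keyspace:

```python
# Back-of-the-envelope keyspace arithmetic for password strength.
# The guess rate is a hypothetical offline-cracking figure, for illustration only.

GUESSES_PER_SECOND = 10_000_000_000  # assumed rate: 10 billion guesses/sec

def keyspace(alphabet_size: int, length: int) -> int:
    """Total number of possible passwords of a given length."""
    return alphabet_size ** length

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    """Time to exhaust the entire keyspace at the assumed rate."""
    return keyspace(alphabet_size, length) / GUESSES_PER_SECOND

# 8 lowercase letters: 26^8 ≈ 2.1e11 guesses -> under a minute at this rate.
print(worst_case_seconds(26, 8))
# 12 mixed-case letters and digits: 62^12 ≈ 3.2e21 -> roughly 10,000 years.
print(worst_case_seconds(62, 12) / (3600 * 24 * 365))
```

The lesson matches the study's pattern: length and character variety, not cleverness, dominate resistance to automated guessing.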

The impact of identity theft fueled by cyber-crimes

There has been a 15% increase in the number of data breaches in the United States between 2022 and 2023, underscoring the escalating threat posed by cybercriminals. Concurrently, breach severity surged by 11%.

Further, digital account openings emerged as the highest-risk channel, with 13.5% of all global digital account openings suspected of fraudulent activity. And 54% of consumers across 18 countries and regions reported being targeted by various forms of fraud attempts between September and December 2023, according to the TransUnion report.

Cybercriminals persist in breaching organizations' systems to steal consumer identity credentials, which often contain critical information such as an individual's date of birth, full Social Security number, and residential address. With a wealth of stolen identity credentials readily available, criminals have become increasingly adept at fabricating identities.

Consequently, this has led to an increase in the use of illicit synthetic identities among accounts opened at US lenders for products such as auto loans, bank credit cards, retail credit cards, and unsecured personal loans. The surge in this synthetic identity fraud has exposed lenders to potential losses totaling $3.1 billion, representing an 11% increase compared to the end of 2022. Cybercrimes, including identity fraud, are projected to cost the world about $9.5 trillion annually by the end of 2024, according to AuthenticID.

JPMorgan Chase: Battling cyber-threats

JPMorgan Chase's CEO Jamie Dimon has identified cybersecurity as one of the biggest threats facing the financial services industry. This recognition comes in the wake of legal action taken against the bank in January 2023, when a subsidiary of EssilorLuxottica filed a lawsuit alleging negligence in addressing signs of fraud. The lawsuit claimed the bank's inaction played a role in 243 fraudulent transactions, resulting in the siphoning off of $272 million from Essilor's manufacturing division.

Since then, JPMorgan has intensified its focus on strengthening cybersecurity measures. A recent disclosure by a JPMorgan executive revealed that the bank repels an astounding volume of attempted cyber-attacks. In response to the escalating efforts of hackers, JPMorgan allocates a substantial portion of its $15 billion budget toward cybersecurity initiatives, backed by a dedicated workforce of 62,000 individuals committed to defending against cyber-threats.

Key strategies for cyber-defense

There are several tactics that organizations can take to help mitigate cyber-crime, including:

      • Prioritize investments in comprehensive cybersecurity infrastructure, equipped with advanced threat detection and response capabilities, to effectively safeguard against cyber-attacks.
      • Collaborate closely with regulatory authorities to establish and adhere to rigorous compliance measures, ensuring adherence to industry regulations and standards for data protection and financial security.
      • Embrace cutting-edge technologies such as AI to better develop sophisticated fraud detection systems capable of identifying and mitigating evolving threats in real-time.
      • Establish multidisciplinary teams including experts from fraud, cybersecurity, risk management, and data analytics departments to leverage diverse skill sets and perspectives in developing comprehensive security strategies. Encourage regular knowledge-sharing sessions, joint brainstorming, and collaborative projects to foster a culture of teamwork and innovation. By breaking down silos and promoting collaboration across departments, financial institutions and organizations can enhance their ability to detect, prevent, and respond to emerging threats effectively.
      • Launch targeted educational campaigns to inform customers about common fraud tactics and cybersecurity measures. Offer easily accessible resources such as online tutorials and workshops to empower customers to protect themselves from cyber-threats.

In conclusion, ensuring financial integrity demands every organization's constant attention, especially considering the rapid growth of cyber-threats. By fostering a culture of strength, innovation, and collaboration, leaders can effectively address the challenges posed by data breaches and fraud.


You can find more information here.

]]>
https://blogs.thomsonreuters.com/en-us/government/identity-theft-drivers/feed/ 0
The rising tide of bot attacks: Exploiting identity vulnerabilities /en-us/posts/investigation-fraud-and-risk/bot-exploit-identity-vulnerabilities/ https://blogs.thomsonreuters.com/en-us/investigation-fraud-and-risk/bot-exploit-identity-vulnerabilities/#respond Wed, 15 Nov 2023 18:57:33 +0000 https://blogs.thomsonreuters.com/en-us/?p=59554 In the rapidly evolving landscape of cybersecurity, the prevalence of bot attacks has become a cause for widespread concern. Regardless of an organization's size or industry, the escalating volume of bots across the internet poses a significant threat. A recent report sheds light on this growing menace, identifying three common types of bot attacks – carding, account takeover (ATO), and scraping. The statistics are alarming, with all three categories showing substantial year-over-year increases.

ATO attacks saw a staggering 123% rise in the second half of 2022, marking a 108% YoY increase from 2021. Carding attacks, in which bots use multiple simultaneous attempts to authorize stolen credit card credentials, increased by 161%; and scraping attacks, in which bots search websites for data that could be used in fraud schemes, saw a rise of 112% during the same period.

Understanding bot attacks

Bot attacks, fundamentally malicious activities executed by automated programs or bots on digital platforms, exploit vulnerabilities with speed and scale. These attacks can manifest in various forms, such as:

      • New account fraud – Bots create fraudulent accounts using stolen or synthetic identities to exploit incentives, promotions, or credit offers.
      • Account takeovers – Bots attempt to gain control over user accounts by exploiting vulnerabilities in authentication processes or using stolen credentials.
      • Scraping – Bots scrape websites for data, often for purposes such as competitive intelligence, spamming, or selling data on the dark web.
      • Distributed Denial-of-Service (DDoS) attacks – Overwhelming a network, system, or website with a flood of traffic from multiple sources, rendering it inaccessible to legitimate users.
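A common first line of defense against attacks like these is velocity checking: counting how often a single source acts within a sliding time window. The sketch below is a minimal illustration with hypothetical thresholds; production bot mitigation layers in device fingerprinting, behavioral signals, and IP reputation data:

```python
# Minimal velocity-based bot screen: flag a source IP whose request rate
# within a sliding window exceeds a ceiling. Thresholds are hypothetical.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 20  # assumed per-IP ceiling for, e.g., account-opening attempts

class VelocityMonitor:
    def __init__(self):
        self.history = defaultdict(deque)  # ip -> timestamps of recent requests

    def is_suspicious(self, ip: str, now: float) -> bool:
        q = self.history[ip]
        q.append(now)
        # Drop timestamps that have aged out of the sliding window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_REQUESTS

monitor = VelocityMonitor()
# 25 attempts within one second from the same IP trips the threshold.
flags = [monitor.is_suspicious("203.0.113.7", t * 0.04) for t in range(25)]
print(flags[-1])  # True
```

Human users rarely trip such a ceiling, which is why velocity checks catch naive automation cheaply; the hybrid bots discussed below are precisely those designed to stay under them.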

Bot attacks against financial institutions

Financial institutions, in particular, have become prime targets for bot attacks, exposing vulnerabilities in the account opening process. Criminals are now leveraging hybrid bots – combining human and automated inputs – to open money mule accounts at an unprecedented scale. These hybrid bots can elude most banks' detection capabilities, allowing criminals to open numerous accounts rapidly.

Research indicates that one in every 100 mule accounts is opened by a bot. Criminals exploit stolen or synthetic identities to establish untraceable accounts, often letting them lie dormant to avoid detection. Startlingly, 62% of all new accounts created by criminals in 2022 were financial accounts, making new accounts 9.5-times riskier than mature accounts.

The issue of mule accounts is not confined to a specific region; rather, it's a global problem. In 2022, in the United Kingdom alone, 39,578 cases involving bank accounts were indicative of money mule behavior. While this is a reduction from 2021, these cases still account for 68% of reported misuse of bank accounts.

Simultaneously, bots are escalating ATO rates, with fraudsters employing them to gain unauthorized access to victims' banking, e-commerce, or other accounts. By one industry estimate, ATO attacks spiked by a staggering 427% in Q1 2023, compared to all of 2022. As more commerce and financial services move online, ATO attacks become not only more accessible but also more profitable, and global ATO fraud losses are predicted to keep climbing.

AI and biometric authentication in combatting bot-linked fraud

In the face of these escalating threats, organizations are turning to advanced technologies to bolster their defenses. Artificial intelligence (AI) and biometric authentication have emerged as powerful tools in the fight against new account fraud and account takeover linked to bots. Some of the ways these advanced technologies are being employed include:

AI-powered detection systems

AI-driven solutions can analyze vast amounts of data in real-time, identifying patterns and anomalies indicative of bot activity. Machine learning algorithms can adapt and learn from evolving attack patterns, enabling organizations to stay ahead of sophisticated bot attacks. By employing AI-powered detection systems, financial institutions can enhance their ability to identify and mitigate threats with unprecedented speed and accuracy.

Biometric authentication

Traditional authentication methods are often vulnerable to bots that exploit stolen credentials. Biometric authentication – leveraging a customer's unique physical or behavioral characteristics such as fingerprints, facial recognition, or voice patterns – provides an additional layer of security. Bots struggle to mimic the intricate and individualistic nature of biometric identifiers, making it significantly more challenging for them to succeed in ATO attempts or new account fraud.

Multi-factor authentication

Combining AI with biometric authentication in a multi-factor authentication (MFA) approach creates a robust defense mechanism. MFA requires users to provide multiple forms of identification, such as a password, a biometric scan, and a device confirmation. This multi-layered approach adds complexity for bots attempting to breach accounts, significantly reducing the likelihood of successful attacks.
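The defining property of MFA is that the factors come from different categories (knowledge, possession, inherence), so compromising one category is not enough. A minimal sketch of that check, with illustrative factor names:

```python
# Hypothetical MFA policy check: access requires factors from at least two
# distinct categories, not two checks of the same kind. Names are illustrative.

FACTOR_CATEGORIES = {
    "password": "knowledge",       # something you know
    "fingerprint": "inherence",    # something you are
    "device_token": "possession",  # something you have
}

def mfa_satisfied(verified_factors: list[str], required: int = 2) -> bool:
    """True if the verified factors span at least `required` categories."""
    categories = {FACTOR_CATEGORIES[f] for f in verified_factors
                  if f in FACTOR_CATEGORIES}
    return len(categories) >= required

print(mfa_satisfied(["password", "fingerprint"]))  # True: two categories
print(mfa_satisfied(["password", "password"]))     # False: one category
```

This is why a bot holding stolen credentials (knowledge) still fails: it cannot supply the inherence or possession factor.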

Promoting the enhancement of Know Your Customer rules

Beyond fortifying against bot attacks, the integration of AI and biometric authentication positively impacts the implementation of Know Your Customer (KYC) rules. By implementing these advanced technologies, financial institutions can gain a deeper and more accurate understanding of their customers. Biometric authentication, in particular, provides a unique and irrefutable link between users and their accounts, enhancing the reliability of identity verification.

This heightened KYC strength not only safeguards against fraudulent activities but also ensures that financial institutions can truly know their customers. The combination of AI and biometric authentication establishes a secure and transparent relationship between users and financial institutions, fostering trust and integrity in the digital realm.

Conclusion

As the threat landscape evolves, the integration of AI-powered detection systems and biometric authentication emerges as a formidable defense mechanism, providing real-time analysis and robust identity verification. The significance of these advanced technologies extends beyond defense, positively influencing KYC practices and fostering secure and transparent digital relationships.

The collaboration between industry players and advocates underscores the necessity for heightened cybersecurity measures, awareness, and the continuous advancement of authentication mechanisms. By embracing these innovations, organizations can stay one step ahead in the relentless battle against evolving cyber-threats such as bots, ensuring the trust and integrity of digital interactions in an ever-changing landscape.


For more on this subject, you can access the report here.

]]>
https://blogs.thomsonreuters.com/en-us/investigation-fraud-and-risk/bot-exploit-identity-vulnerabilities/feed/ 0
Using ID verification to prevent fraud, waste & abuse in government unemployment agencies /en-us/posts/investigation-fraud-and-risk/id-verification-preventing-fraud/ https://blogs.thomsonreuters.com/en-us/investigation-fraud-and-risk/id-verification-preventing-fraud/#respond Mon, 17 Jul 2023 17:20:58 +0000 https://blogs.thomsonreuters.com/en-us/?p=57911 State and local government agencies are under immense pressure to deliver critical public benefits, often with constrained resources; and preventing, detecting, and ultimately (and where necessary) investigating potential fraud can be a challenging process.

It's critical for government agencies to get the process right, however, especially in an environment in which the efficient and effective use of public funds is scrutinized by a variety of parties, including the general public.

Government unemployment agencies, for example, continue to deal with what is seemingly a trifecta of reinforcing complications, including that i) many agencies lost critical talent during the pandemic; ii) recruiting new talent remains a top concern; and iii) modernizing systems is replete with challenges. Sprinkle in the fact that bad actors are employing ever more sophisticated methods to commit fraud and you have a perfect storm.

All of this makes the stakes of getting identity (ID) verification, and, more broadly, fraud prevention, correct very high. The recent volume of unemployment insurance (UI) investigations has surged, which has stressed many agencies in a variety of ways. Further, the U.S. Government Accountability Office (GAO) estimated $19 billion in incorrect unemployment insurance payments in fiscal year 2022, excluding assessments for specific programs with higher vulnerability to risks, such as the Department of Labor's Pandemic Unemployment Assistance benefits.

Critical government programs – which will see $2 billion in funds allocated and include the recent creation of the Office of Unemployment Insurance Modernization (OUIM) within the Department of Labor – offer some relief, but procuring and leveraging the funds is a heavy lift unto itself for any agency.

Moreover, Thomson Reuters Institute research shows that many government agencies want to spend more time in the fraud prevention stage of the process – which includes ID verification – but the actual time they are spending in this phase is less than desired. Suffice it to say, there is both a will and a way.

Increasing sophistication of bad actors

Bad actors are increasingly drawn to the UI system and are only adding to the challenges agencies currently face, leveraging sophisticated attacks through technology. Bot attacks, for example, are moving from generally easily detectable standard bots to periodic bots that very closely mimic the typing patterns of humans.

By harnessing the power of artificial intelligence (AI), bad actors are able to more easily scale their efforts while simultaneously creating more convincing attacks. Synthetic identities are one example of this. Creating synthetic IDs involves the use of a single element of legitimate personally identifiable information (most commonly a Social Security number) layered with fictitious elements. AI-fabricated details have the potential to add further legitimacy to the synthetic elements thereby complicating the overall detection of the identity in question.

Moving beyond individual synthetic IDs is the increasing trend toward fictitious businesses (also known as fictitious employers). Fraudsters can create fictitious companies, complete with fabricated records, websites, and even employees, to facilitate various types of fraud. These fictitious employers can be used to generate false employment histories, income verifications, and employment references.

This trend is on the rise and is made more difficult to detect given the structures of some agencies. With UI and tax commonly under separate leadership within government agencies, identifying fictitious employers is a significant challenge due to disparate data.

Finding the right balance

Combatting fraud, waste, and abuse must be considered in the context of ensuring good actors encounter a frictionless and simple path to receiving proper benefits. This is where the concept of step-up authentication can be leveraged to ensure that the amount of friction applied is commensurate with the level of threat perceived.

When data elements are verified, aligned, and consistent, then of course less friction (if any) is the preferred path. Yet, in cases in which anomalies are detected or data is insufficiently verifiable, a step-up path to further authenticate an identity may be needed. This process should introduce only the least amount of friction required to satisfy internal controls.
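Step-up logic of this kind is often expressed as a simple risk score mapped to friction tiers. The signals, weights, and tier names below are hypothetical, meant only to illustrate the proportional-friction idea:

```python
# Hypothetical step-up authentication: apply only as much friction as the
# observed risk warrants. Signals, weights, and tiers are illustrative.

def authentication_path(signals: dict) -> str:
    risk = 0
    if not signals.get("data_verified", False):
        risk += 2  # identity data could not be verified against records
    if signals.get("anomaly_detected", False):
        risk += 2  # e.g., inconsistent address history
    if signals.get("new_device", False):
        risk += 1

    if risk == 0:
        return "frictionless"         # verified, consistent data: no step-up
    if risk <= 2:
        return "knowledge_challenge"  # light step-up, e.g. one-time passcode
    return "document_and_selfie"      # heavy step-up: ID document plus liveness

print(authentication_path({"data_verified": True}))
print(authentication_path({"data_verified": True, "new_device": True}))
print(authentication_path({"data_verified": False, "anomaly_detected": True}))
```

Good actors with clean, verifiable data stay on the frictionless path, while the heavy checks are reserved for the small fraction of claims that actually look risky.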

A variety of technology solutions are at the disposal of state agencies, each with a slightly different angle on the ID verification process.

Data analytics programs, for example, can detect patterns and anomalies in employment data, such as a high number of employees associated with a particular employer or unusual salary patterns. These advanced analytical techniques can be deployed into existing systems, thereby maintaining a frictionless customer experience while at the same time detecting fraudulent claims and facilitating smooth identity verification.
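One such pattern check can be sketched as a basic z-score screen over per-employer claim counts; the data and the 3-sigma cutoff are illustrative, and real programs combine many more features:

```python
# Hypothetical statistical screen for fictitious-employer patterns: flag
# employers whose claim counts sit far above the population norm.
import statistics

def flag_outlier_employers(claims_per_employer: dict, z_cutoff: float = 3.0):
    """Return employers whose claim count exceeds mean + z_cutoff * stdev."""
    counts = list(claims_per_employer.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [emp for emp, n in claims_per_employer.items()
            if (n - mean) / stdev > z_cutoff]

# Fifty ordinary employers plus one generating an implausible claim volume.
claims = {f"employer_{i}": 5 for i in range(50)}
claims["employer_x"] = 400
print(flag_outlier_employers(claims))  # ['employer_x']
```

Because the screen runs on data the agency already holds, it adds no friction for legitimate claimants while surfacing employers that merit investigation.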

Applying deep behavioral and biometric analytics is another level of step-up authentication that can be particularly effective against synthetic IDs at the individual level. Indeed, biometric technologies such as facial recognition and behavioral analytics (for example, how someone holds a device) can be powerful antidotes to fraudulent activity; and matching real-time pictures (such as selfies) with government-issued identification documents can significantly deter fraudsters.

And when it comes to fictitious employers, coordinating across departments within an agency and comparing tax data with UI data is key. Identifying fictitious employers is greatly aided by visualizing and comparing trends across these two departments.

Conclusion

Although the U.S. Pandemic Unemployment Assistance benefits ceased on September 4, 2021, individuals are still recovering from job losses due to the global pandemic, and economic concerns persist. Consequently, a substantial number of unemployment insurance benefits are expected to be necessary to support those in need. Not to mention that 68% of government agencies have expressed concerns that they will continue to see more fraud than expected, according to the Thomson Reuters Institute's 2023 Government Fraud, Waste & Abuse Report.

These sentiments reflect their distress regarding the ongoing prevalence of fraudulent UI claims, which makes addressing the challenges faced by unemployment departments in verifying identities, detecting fraud, and preventing barriers to entry or workflow disruption so critical.

Successfully navigating these challenges will require government agencies to employ a balanced approach that combines industry expertise, technology, and collaboration.

]]>
https://blogs.thomsonreuters.com/en-us/investigation-fraud-and-risk/id-verification-preventing-fraud/feed/ 0