AI-enabled Regtech Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/ai-enabled-regtech/

More SARs, not better ones: Why AI is about to flood the system
/en-us/posts/corporates/ai-driven-sars/ (Mon, 13 Apr 2026)

Key insights:

      • SAR volume is significantly underreported – Continuing and amended filings add approximately 20% to the official count yet remain invisible in trend analyses.

      • Filing activity is highly concentrated – A few large financial institutions dominate SAR volume, meaning trends reflect their practices more than systemic changes.

      • Agentic AI will drive a surge in SARs – Agentic AI risks adding noise rather than actionable intelligence, while leaving unresolved the question of whether current filings yield meaningful law enforcement outcomes.


The Suspicious Activity Reports (SARs) that financial institutions file with the U.S. Treasury Department's Financial Crimes Enforcement Network (FinCEN) provide valuable insight, although they may not offer a comprehensive picture.

Before any meaningful discussion of the future of SARs, the financial crime community needs to clarify what is being measured. In 2025, for example, more than 4.1 million SARs were filed, an increase of almost 8% over the 2024 total.

Every figure FinCEN has published reflects original SARs only. Continuing activity SARs, which represent roughly 15% of all filings, are submitted under the original Bank Secrecy Act (BSA) identification number and never appear as new filings. Corrected and amended SARs add another 5% on top of that. This makes the real volume of SAR activity approximately 20% higher than what is reported.
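The arithmetic above can be sketched in a few lines. The 15% continuing-activity and 5% corrected/amended shares are the rough estimates cited in this article, not exact FinCEN figures:

```python
# Estimate true SAR activity from FinCEN's published originals-only count.
# Shares are the article's rough estimates: continuing-activity SARs add ~15%
# and corrected/amended SARs add ~5% on top of original filings.
def adjusted_sar_volume(original_filings: int,
                        continuing_share: float = 0.15,
                        amended_share: float = 0.05) -> int:
    """Return estimated total SAR activity, including uncounted filings."""
    return round(original_filings * (1 + continuing_share + amended_share))

reported_2025 = 4_100_000                 # original SARs per the article
print(adjusted_sar_volume(reported_2025))  # 4920000 -- about 20% higher
```

Any trend line built from the published originals-only series silently omits that adjustment.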




Recent FinCEN guidance giving financial institutions more flexibility around continuing activity SARs sounds significant on paper, but as former Wells Fargo BSA/AML chief Jim Richards points out: "It won't change the reported numbers, because those filings were never counted to begin with." Financial crime professionals need to keep that gap in mind every time a trend line gets cited.

2025 was steady, not spectacular

There were roughly 300,000 SARs filed every single month of 2025, and the most notable thing is that nothing notable happened. That is likely a first on the volume side and worth acknowledging, but beyond that milestone the year did not hand financial crime professionals anything noteworthy. In a space that has dealt with pandemic distortions, crypto chaos, and fraud spikes that seemed to come out of nowhere, steady volume and predictable patterns are a little surprising. A quiet data set, however, is not the same as a quiet landscape, and financial crime professionals who are reading stability as stagnation may find themselves flat-footed when the numbers start moving again.

For example, one of the most underleveraged insights in the SARs space is just how concentrated filing activity really is. The numbers are stark: The top four banks file more SARs in a single day than 80% of the rest of the banks file in 10 years, according to 2019 data.

The average community bank files fewer than one SAR a week, while the largest institutions file more than 500 a day. "50 a year versus 500 a day," notes Wells Fargo's Richards, adding that such asymmetry has real implications for how the financial industry interprets trends. Meaningful movement in SARs data, up or down, is almost entirely dependent on what a handful of mega-institutions decide to do.

Not surprisingly, money services businesses (MSBs) are the second largest filing category, and virtual currency exchanges are almost certainly driving recent growth there, even if outdated category definitions make that difficult to confirm directly. Credit unions round out the top three.

The filing philosophy hasn’t changed and shouldn’t

Regulatory noise occasionally suggests that institutions should be more selective about what they file. However, compliance and legal reality have not shifted. No institution has ever faced serious consequences for filing too many SARs, and the cases that result in enforcement actions, reputational damage, and regulatory scrutiny are consistently about missed filings or late ones.

鈥淵ou’re not going to get in trouble from filing too much,鈥 Richards says. 鈥淣obody ever has, and I doubt if anyone ever will.” For financial crime professionals, the calculus remains exactly what it has always been 鈥 when in doubt, file. That posture isn’t going to change, and frankly it shouldn’t.

Yet, here is where the SARs space gets genuinely interesting. Agentic AI use in SAR filings – systems in which multiple AI agents work through a case from screening to decision to documentation – is beginning to move from concept to deployment. The impact on filing volume likely will be significant.




Whereas a small team today might work through a handful of cases a week, AI-assisted workflows could push that into the dozens. Multiply that across institutions already inclined to file rather than miss something, and the result is a coming surge in SARs volume that could play out over the next two to four years.
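The multiplication at work is simple to sketch. The team count and per-team throughput below are purely hypothetical stand-ins for "a handful" versus "dozens" of cases a week, chosen only to show why volume, not quality, is what surges:

```python
# Illustrative projection of SAR throughput if AI-assisted workflows raise
# per-team case output. All numbers are hypothetical assumptions for
# demonstration, not data from the article.
def weekly_filings(teams: int, cases_per_team: int) -> int:
    """Total cases worked per week across all filing teams."""
    return teams * cases_per_team

manual = weekly_filings(teams=100, cases_per_team=5)      # "a handful" per team
assisted = weekly_filings(teams=100, cases_per_team=36)   # "into the dozens"
print(assisted / manual)  # 7.2 -- a sevenfold jump in raw filing capacity
```

Even modest per-team gains, compounded across institutions already inclined to file, produce the surge the article anticipates.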

"Agentic AI has the potential to be a game changer on how we do our work," Richards explains. "But I believe it'll guarantee that there will be more SARs filed and not necessarily better and fewer SARs filed." Indeed, the critical point for the financial crime community to internalize is exactly that.

The risk is a system flooded with AI-generated SARs of variable quality, creating more noise for law enforcement to sort through rather than sharper intelligence to act upon. Once the largest institutions adopt agentic AI as a best practice, others will follow quickly, and regulators will likely be several steps behind.

The value question can’t wait

The current SAR filing framework has been in place since 2014. Yet after 12 years of filings, the financial crime community still lacks a clear public accounting of whether that data has produced actionable law enforcement outcomes.

So, the question Richards is asking is one the entire industry should be asking: “Has anybody asked law enforcement?”

This question reflects a larger challenge that the industry needs to confront more aggressively, especially as AI technology is set to dramatically increase filing volume across the board. Increasing the volume without improving how the information is used does not represent progress. If SARs are not generating real investigative value, the solution is not to file more of them faster; instead, the pipeline should be fixed before it grows any bigger.



Submit once, use everywhere: The FDTA & structured business reporting are redefining compliance
/en-us/posts/technology/structured-business-reporting/ (Mon, 15 Sep 2025)

Key takeaways:

      • FDTA drives a data revolution – The FDTA and structured business reporting are driving a shift to machine-readable data formats, which could fundamentally transform companies' compliance processes.

      • Organizational change required – Successful adoption of the FDTA and structured business reporting requires not only technological upgrades but also significant cultural and skill changes within organizations.

      • Compliance in action – To successfully transition to data-driven compliance under the FDTA, organizations should conduct readiness assessments, appoint cross-functional leaders, adopt scalable data architectures, and actively engage with policymakers and vendors early in the process.


Financial reporting is at a turning point as regulatory consistency supersedes a check-the-box mentality. For decades, compliance in the United States has centered on form-based filings: paper and e-form documents were submitted to regulators, parsed by analysts, and stored in disparate systems within industry and oversight agencies. This approach is quickly losing relevance, however, due to an explosion of data, new intelligent solutions, and a need for one version of the truth.

The vision of financial data modernization is not new, but its transformative speed and benefits are accelerating both internationally and domestically. For the US, the Financial Data Transparency Act (FDTA), signed into law in 2022, represents the first step to achieving structured business reporting, which was originally proposed in 2017. Each of these public-private initiatives represents sweeping mandates to transition from static reports to machine-readable, standardized data that in turn reduces compliance burdens, improves data quality, and in the long-term, reduces costs.

International data standardization and regulatory consistency directives began decades ago, and they represent lessons learned for domestic policy makers. FDTA legislation and structured business reporting designs represent the end of static reporting and the beginning of data-driven regulatory oversight. Both public agency and private industry leadership can treat the FDTA and structured business reporting as another regulatory mandate, or they can embrace it as the building blocks for financial data modernization in which compliance consistency can be a driver of efficiency, transparency, and competitive advantage.

From documents to data

The FDTA requires US regulators to adopt common, machine-readable formats for the data collected. By 2027, regulatory filings will no longer be about submitting PDFs or e-forms; rather, submitted filings will require structured, standardized, and interoperable datasets.
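The shift from documents to data can be made concrete with a minimal sketch: a filing becomes a structured record validated against a shared schema before submission, rather than a PDF parsed by analysts afterward. The field names and rules below are hypothetical illustrations, not the FDTA's actual taxonomy:

```python
# Minimal "documents to data" sketch: a filing is a structured record checked
# against a common schema. Fields and rules here are illustrative assumptions,
# not any regulator's real data standard.
REQUIRED_FIELDS = {
    "entity_id": str,       # machine-readable entity identifier
    "period_end": str,      # ISO-8601 reporting period end date
    "total_assets": float,  # standardized, unit-consistent figure
}

def validate_filing(filing: dict) -> list:
    """Return a list of schema violations; an empty list means the filing
    is structurally valid and interoperable across agencies."""
    errors = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in filing:
            errors.append(f"missing field: {field}")
        elif not isinstance(filing[field], ftype):
            errors.append(f"bad type for {field}: expected {ftype.__name__}")
    return errors

filing = {"entity_id": "EXAMPLE-0001",
          "period_end": "2027-12-31",
          "total_assets": 1.2e9}
print(validate_filing(filing))  # [] -- structurally valid, submit once, use everywhere
```

Because validation happens against one shared schema, the same record can be ingested by every agency without re-keying or re-parsing.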

This is more than an administrative shift. Indeed, it will force agencies and industry to overhaul data ingestion, governance, taxonomies, and internal systems to ensure that filings are accurate, traceable, and consistent across individual data stores. The chart below showcases the regulatory agencies impacted by the FDTA and the oversight influences from non-domestic partners.

[Chart: regulatory agencies impacted by the FDTA and oversight influences from non-domestic partners]

While the FDTA will require organizations to invest in supporting data technologies, the longer-term efficiencies, elimination of duplication, and improved transparency must be balanced against data design and governance changes and staff retraining. In this way, the FDTA represents the initial starting point of rethinking regulatory compliance with an eye toward financial data modernization.

Due to its data-driven fundamentals, structured business reporting is designed for regulatory consistency across many agencies, not regulatory compliance and complexity. Taken holistically, and using lessons learned from the foreign initiatives that preceded US adoption, structured business reporting is as much a cultural shift as a technological one.

Indeed, structured business reporting will require strong collaborations between public agencies, auditors, software vendors, and financial institutions. And its adoption will continue to be debated until such time that efficiency gains outweigh the pains of operational and data transformations.

Practical impact on people

Data modernization is more than the schemas, taxonomies, and APIs often presented as solutions. Success for the FDTA and structured business reporting is rooted in people and skills. While each envisions native machine-readable data and robust cross-functional data governance, the required investment in training, coupled with continuous change management, cannot be overstated.

The chart below provides a snapshot of the differences using FDTA and structured business reporting as the catalyst for change.

[Chart: differences in people and skills, using the FDTA and structured business reporting as the catalyst for change]

What is also implied from this comparison is that the legal and audit implications are non-trivial. Once filings are machine readable, discrepancies, errors, or omissions are visible using advanced analytics and growing AI capabilities. Litigation, enforcement, and reputational risks will reside with the quality, variation, and integrity of the data submitted. The skills needed for these eventualities and the demands to get it right the first time will create unfamiliar priorities against traditional business-as-usual practices.

The calls to action

The FDTA is a legislative demand which will accelerate its reach over the next two years. By not actively engaging with agencies to shape the final implementations of the FDTA, industry leaders risk more of the same inefficiencies when it comes to systems, complexities, data fragmentations, and rising compliance costs.

To prepare for the FDTA and structured business reporting, leaders are encouraged to: i) conduct readiness assessments; ii) appoint cross-functional leaders; iii) adopt scalable, compartmentalized data architectures; iv) design success using crawl, walk, run approaches; and most importantly, v) engage with policy makers and vendors early.

The real benefits beyond traditional compliance cultures and systems, once data is standardized, are many-fold, including: i) AI-driven regulatory consistency, such as leveraging RAG and agentic AI; ii) predictive regulation that can anticipate potential exposures; iii) market innovations using transparent data; and iv) safeguards that consistently protect data regardless of the platform or application.

The market demands are clear: Consistent compliance data is an economic imperative. Data modernization will not only meet public regulator demands but also will provide scalable data architectural building blocks that will improve organizations鈥 transparency, auditability, efficiency, and trust.

Institutions that thrive using the FDTA and structured business reporting will treat regulatory data not as a burden of regulatory compliance, but rather as a data-driven regulatory consistency asset that spans the organization. Indeed, US markets cannot scale the regulatory burdens any further; the FDTA and structured business reporting may represent an opportunity to permanently shift to data-driven, submit-once-use-everywhere designs.

Of course, this all hinges on something unprecedented 鈥 industry taking the data-driven lead when it comes to financial data modernization and consistency around regulatory compliance.


You can find more blog posts by this author here.

The rise of autonomous AI: How intelligent agents are redefining strategy, risk & compliance
/en-us/posts/technology/autonomous-agentic-ai/ (Mon, 03 Mar 2025)

The rise of generative AI (GenAI) has been the momentum behind corporate and research expectations, investments, and innovation regardless of industry or discipline. However, in just two short years, and even without extensive scale and maturity in GenAI production systems, new variants and challenges are already altering deployment designs and operating strategies.

In fact, 92% of companies say they will invest more in GenAI over the next three years, yet only 1% state that their investments have reached maturity, according to McKinsey & Co.

For leaders currently struggling with the terminology and designs of GenAI (protocols, messaging, large and small language models, vector databases, algorithms, and more), a next generational shift is already underway. And it's already building on top of early directions, while introducing a new set of requirements and governance demands. Will the AI momentum slow down? Will new AI innovations become mere extrapolations of early-stage data and intelligence advances? Or will something more profound happen?

Indeed, this pace of AI change is dwarfing anything previously experienced. However, what struck me is the question: How do you instill robust oversight for solutions that are temporal, self-learning, and adaptive based on the data they ingest? Everyone has their own understanding of what AI is from the daily blast of media articles, so baselining is necessary.

In 2023, the term pre-training took on new importance as ChatGPT permanently changed the discussion of systems and data, in addition to costs, cloud architectures, and skills needed. By mid-2024, enterprises were witnessing the rise of retrieval augmented generation (RAG), using external data to improve the accuracy of GenAI and their industry's large language models. Now as 2025 emerges, corporate leaders are being blanketed with yet another evolution: agentic AI.

To understand the progression of the question, What is AI?, you need to compare the ideas of accountability and design under legacy priorities with the emerging questions surrounding accountability of data. It is this data that will simultaneously feed hundreds of layered AI components, not just the one or two that are simplistically anticipated today.

Legacy data brings next-gen complexity

Underneath these marvels of AI algorithms and chip technologies, the demand for usable data to improve the accuracy, longevity, and auditability of capabilities continues to strain internal departments and compliance personnel. However, as AI systems explode in their usage and deployment, the vast questions surrounding data complexity 鈥 its lineage, ingestion, storage, manipulation, and cross-domain usage 鈥 are often a black box.

As 2025 unfolds with macroeconomic and political uncertainties, what is certain is that given AI鈥檚 expansive trajectory, data can no longer be isolated or reviewed at a system level. When AI systems are pre-trained on separate data ecosystems, when AI systems begin to feed their outputs to downstream systems, and when AI results are materially different from common control criteria over time, then how will these systems be re-trained on event-driven data and at what cost?

AI discussions today are energetic and promising, especially when solving business demands for efficiency, customer service, profitability, and competitive distinction. Yet there are tradeoffs that must be made when it comes to scope, costs, and time (often referred to as the triple-constraint of program management and budgeting).

After three years, the development and adoption of GenAI solutions is now becoming more common. The legal, compliance, and audit considerations are clearer, and investors from individuals to private equity now conduct due diligence of these solutions to ensure business rule conformity and valuation. Nonetheless, the legacy-guided methods and techniques that steered early AI solutions show steadily decreasing efficacy and relevance for next-gen AI solutions that may possess greater intelligence and shared data.

These shifts, as represented in Figure 2 below, when mapped against an organization鈥檚 triple constraints, illustrate distinctive requirements that are not currently accounted for within the enterprise and its cohesive governance designs. In short, the data controls, compliance, and auditability for a small number of emerging AI solutions will not provide the robustness and scalability demanded when agentic AI begins to migrate or replace early-stage AI capabilities.

The diagram elicits further questions: Who has the roadmaps to migrate AI solutions to next-gen AI solutions? And when an AI system is re-trained or retired, what happens to all that data?

By establishing a baseline for the information already provided, we can see from the details in the diagrams several factors, including:

      • The controls and accountability for large numbers of AI systems changes the discussion of data, its architecture, its reuse, and most importantly, its event-driven ingestion which in turn alters AI outputs (model efficacy).
      • The mechanisms and oversight employed for traditional passive, sample-driven conformity will fail consistently due to interconnectivity, real-time adaptations, and speed of change.
      • The challenges of security, privacy, and ethical data take on new dimensions when factoring in agentic AI and its (likely) creation of synthetic data, which in turn is fed back into the system as part of event-driven feedback and continuous improvement.
      • Skills and transformative process guardrails will lag agentic AI capabilities. For example, the newest AI chipset performs in one second the total calculations that would have taken a human 125 million years.

Finding proper agentic AI governance

However, beyond the evolution of GenAI, beyond its expansion into RAG, agentic AI can leverage the positive designs underway to aid with its continual refinement and self-learning decision-making. In Figure 3 below, the comparison of these is presented against the new demands being placed on data.

[Figure 3: GenAI and RAG compared with agentic AI against the new demands being placed on data]

It is in this final illustration that we see a fundamental and permanent shift of priority 鈥 data over system ideation. Legacy methods started with the process, and many AI controls today start with algorithms. For agentic AI, there is a phase shift that must start with the data because these thinking, adjusting systems are built not on rules but on goals. Indeed, agentic AI requires accurate, reusable, and auditable data sources.

Corporations and their innovation leaders are experiencing a technological and generational shift. The traditional legacy control playbooks and prescriptive development approaches are poorly equipped to address next-gen requirements. Data is the key to explosive algorithmic intelligence that will increasingly be segmented into reusable modular components stacked one upon the other.

Finally, the fundamental challenge for every organization, and for those overseeing AI automation, lies in these questions: Can we adapt to meet the technological realities? Can we shift prioritization and governance to data before siloed, cascading AI risks result in unintended havoc? And will we, as humans in the AI loop, chase the AI algorithms and repeat the same mistakes we made with the rapid adoption of financial and regulatory technologies a decade prior?

For agentic AI in 2025 and its impacts on business models and operational performance, oversight will represent a continual journey, not a destination.


You can find more blog posts here

How AI will disrupt fraud prevention & detection technologies
/en-us/posts/corporates/technological-considerations-fraud-prevention/ (Mon, 23 Dec 2024)

Digital channels are widely used today to create efficiency in the on-boarding process for various financial products, including commercial and retail checking accounts, credit cards, automotive loans, and commercial loans. However, these channels also present opportunities for fraudsters, particularly in committing new account fraud, often due to the remoteness and anonymity they offer.

With the rapid acceleration of AI in various business processes and workflows, AI-generated fraud is changing how banks and insurance companies must approach fraud prevention and detection.

After describing the technological considerations these institutions must manage and how they can identify the types of fraud they are up against, this final article in our series explores the implications of AI-generated fraud and how financial institutions and insurance companies are responding to these new challenges.

The impact of AI-generated fraud

One of the most notable examples of deceptive use of artificial intelligence involves a tech portal author who downloaded an AI voice-cloning tool and used it with great success.

This easily accessible technology highlights the potential risks for banks that rely on biometric identifiers for security and verification, particularly voice recognition. If a bank uses the prompt, "My voice is my password; please verify me," it can become vulnerable to voice-cloning attacks. To commit this fraud, illicit actors need only an audio file of the victim's voice, which can be obtained from a phone's automatic answer service or from social media content available online.

Bypassing voice recognition is just the beginning. Visual identifiers, such as facial verification and visual liveness checks, are also at risk due to the explosion of deepfake technology. The innovation in creating realistic-looking deepfakes is astonishing, with some being so authentic that they deceive even the most discerning viewers. For instance, a website featuring a deepfake of a well-known actor was so convincing that many fans believed it was the real actor.

In the value chain of a fraud operation, all other components needed for verification 鈥 such as the victim鈥檚 name, personal information, email account access, and bank details 鈥 must also be in place, especially if a bank relies on two-factor authentication.

With the continued acceleration of data breaches, however, these components can be at risk as well, and one can assume that personally identifiable information for all American citizens is available on the dark web and ready to be purchased. On platforms such as Telegram, fraud service providers create the necessary components to help fraudsters bypass know your customer (KYC) identity controls. For example, to open an account, a fraudster might use forged state-issued documents, fake identification, and even a cloned voice to impersonate a real person or existing client. One service, called Docs 4 You, enables the creation of a completely new identity, complete with a driver's license, selfie videos, and a passport. The goal is to cultivate an identity for the long term and then establish a credit history that can later be maxed out. In one such advertisement, a seller claims the ability to bypass the controls of at least five of the largest institutions.

The insurance industry is also affected by AI-generated images, which are used to simulate car accidents, for example. If it is easy to clone voices and faces, it is even easier to create fake accident images, leading to fraudulent claims that are difficult to detect without thorough investigation and personal inspection of the affected property or vehicle.

How financial institutions and insurance companies can respond

While machine learning, predictive analytics, and behavioral biometrics are effective for detecting ongoing account fraud, illicit actors seek to use AI-driven fraud to bypass security protocols such as liveness checks and voice verification during customer verification processes in both new and existing account fraud cases.

AI-generated fraud largely falls into three categories, involving the use of:

      • AI-generated videos and images to bypass liveness detection;
      • AI-generated voices to bypass voice verification; and
      • AI-generated documents and pictures to be used as supporting documentation (such as IDs, financial records, and insurance claims).

To combat these threats, financial institutions, insurance companies, and corporations must upgrade their detection and prevention capabilities. This includes implementing the latest technologies and introducing new measures during their customer on-boarding and claims management processes to counter AI-generated fraud.

Despite being a target of AI-generated fraud, biometric information remains a crucial component of any on-boarding or verification solution. However, its limitations as a standalone verifier mean it must be combined with existing customer data from robust public sources. For example, if the identity of a customer cannot be verified using public records, a visit to a branch or in-person verification process may be necessary, even if a liveness check is confirmed by a biometric provider.

If an organization relies solely on digital channels, remote verification may be the only option. In such cases, the location of the individual can offer additional insight. For instance, a US-based institution might block account openings or credit card limit expansion requests if the online session or call originates from outside the United States or a specific region within the country.

Combining data, technology & personal interactions

The field of AI detection and prevention technology is rapidly evolving, offering innovative capabilities. Advanced liveness detection now utilizes 3D depth sensing and multi-angle face scans with anti-spoofing algorithms. Deepfake detection AI analyzes frame-level inconsistencies and employs neural networks trained on datasets of authentic versus deepfake videos.

As voice verification becomes more common in the financial industry, anti-spoofing systems can detect audio spectrum inconsistencies and synthetic overtones, which are typical of AI-generated voices. These technologies are particularly effective in call center operations.

For document validation, authentication solutions using optical character recognition and image forensics are essential for detecting fraud. Digital watermarking, for instance, adds invisible pixels or audio patterns to documents or files that computers can detect but humans cannot. Continued innovation in these forensic techniques can further help uncover document and image alterations.

Document verification systems and deepfake detection tools are poised to become essential components of the anti-fraud arsenal in financial institutions and insurance companies. Combining the capacity and power of these tools is critical and is achieved through multimodal verification methods. Given the rapid pace of innovation in AI, it is essential to calculate returns on investment over shorter time spans.
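The multimodal idea above can be sketched as a weighted combination of per-modality scores: no single check (liveness, voice, document) is trusted alone, and a low combined score escalates to human review. The weights, threshold, and signal names below are illustrative assumptions, not any vendor's actual scoring model:

```python
# Hedged sketch of multimodal verification. Each modality produces an
# authenticity score in [0, 1]; the weighted combination decides the outcome.
# Weights and threshold are illustrative assumptions only.
WEIGHTS = {"liveness": 0.4, "voice": 0.3, "document": 0.3}

def combined_score(signals: dict) -> float:
    """Weighted sum of per-modality authenticity scores."""
    return sum(WEIGHTS[name] * score for name, score in signals.items())

def decide(signals: dict, threshold: float = 0.8) -> str:
    """Accept only when the combined evidence clears the bar."""
    return "accept" if combined_score(signals) >= threshold else "escalate"

# A convincing deepfake may beat one check but rarely all three at once:
print(decide({"liveness": 0.95, "voice": 0.40, "document": 0.90}))  # escalate
```

The design point is that a strong liveness score alone cannot carry a weak voice score past the threshold, which is exactly the property multimodal verification is meant to provide.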

Conclusion

Obviously, financial institutions and insurance companies should not rely solely on technology in their fight against AI-driven fraud.

The financial implications of this innovative type of fraud may necessitate additional steps in the account-opening process. For instance, live verification steps 鈥 such as face-to-face verification conducted by local branches or notaries 鈥 could serve as a deterrent to fraudsters.

By combining advanced technology with personal interactions and robust data analysis, financial institutions and insurance companies can better protect themselves against the evolving threat of AI-generated fraud. This multi-faceted approach ensures that while technology plays a crucial role, human oversight and interaction remain integral to the fraud prevention and detection process.


You can read our three-part blog series on the technological considerations financial institutions and insurance companies must manage in fraud detection and prevention here.

Mapping different types of fraud to improve detection & prevention
/en-us/posts/corporates/technological-considerations-mapping-fraud/ (Fri, 06 Dec 2024)

The response of financial institutions to prevent and detect fraud typically begins with conducting a fraud risk assessment, which divides fraud into two areas: external and internal fraud. For our purposes, we will focus on external, non-loan-based fraud, which usually involves systematic and replicable approaches to defraud banks and their customers.

Fraud typologies can be segmented based on the two main entities affected: i) new customer fraud or new account fraud; and ii) existing customer fraud or existing account fraud. Each typology requires distinctive technologies for detection and prevention.

New account fraud

New account fraud primarily affects banks and occurs when a customer opens a bank account with fraudulent intent. For simplification, we will focus on remote and digital account opening workflows, as these are the areas where fraud occurs most frequently.

The main considerations in prevention and detection should focus on three aspects: i) personally identifiable information; ii) biometric identification; and iii) technology- and IT-based insights. Let's look at each of these in turn:

Personally identifiable information – This includes all available public records that help identify the customer. Secondary identifiers, such as previous addresses, relatives, telephone numbers, asset registrations, places lived, and other associated records, are essential components. In this instance, cross-referencing the data submitted by the customer with publicly available records can strengthen the customer verification processes.

Biometric information – This provides an additional layer of verification, whether the biometrics are based on fingers, eyes, or hands. Fingerprint scanning captures unique patterns and is used in mobile devices and security systems. Facial biometrics analyze facial structures and features, while highly accurate iris and retina scanning can also be employed. Voice recognition, traditionally used for verification, is becoming less secure as artificial intelligence can now allow illicit users to bypass it. Hand geometry, though less common, analyzes the shape and size of a customer's hands for identification purposes.

Technology- and IT-based insights – These are critical in assessing customer legitimacy, and in a digital workflow they can reveal important information about the validity and risk profile of a customer. For example, geolocation data can reveal where the account is being opened, and an application originating in a foreign country, such as Nigeria, could be a red flag for the bank. In fact, a case can be made to reject all digital applications made from high-risk jurisdictions or from foreign countries in general.

Other tech-based insights – such as network data and IP addresses, Wi-Fi information, ISPs, and domain analysis – can further determine the risk profile of the customer. Threat intelligence tools, such as proxy and botnet detection, can block suspicious applications, while virtual private networks (VPNs) can also be assessed to identify fraudulent attempts.
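As a rough illustration of how these network signals might be combined, the following sketch scores an application additively; the field names, weights, country codes, and thresholds are all hypothetical and would be set by institutional policy:

```python
# Illustrative risk scoring over the network signals described above.
# All field names, country codes, weights, and thresholds are hypothetical.
HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder ISO codes set by policy

def score_application(signals: dict) -> int:
    """Return an additive risk score for a digital account application."""
    score = 0
    if signals.get("country") in HIGH_RISK_COUNTRIES:
        score += 50   # geolocation in a high-risk jurisdiction
    if signals.get("is_proxy") or signals.get("is_botnet"):
        score += 40   # threat-intel hit: proxy or botnet traffic
    if signals.get("is_vpn"):
        score += 20   # VPN use obscures the true origin
    if signals.get("isp_reputation", 1.0) < 0.5:
        score += 15   # low-reputation ISP or domain
    return score

application = {"country": "XX", "is_vpn": True, "is_proxy": False}
print(score_application(application))  # 70
```

A score above a policy threshold could route the application to manual review rather than outright rejection.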

By integrating all three of these methods – robust data capabilities, biometric verification, and IT insights – banks can better address new account fraud typologies. These include mule accounts, accounts opened with stolen IDs, and synthetic IDs. (Indeed, synthetic ID fraud, the fastest-growing type of fraud in the United States, requires special mention.)

Prevention in these instances involves a systematic approach encompassing three components: First, IT systems block account openings originating from high-risk IP addresses or flagged geolocations. Second, customer data verification against public records filters out applications using previously issued Social Security Numbers or invalid personally identifiable information. (It is possible to check if each customer's SSN has been used by other people, likely indicating synthetic ID fraud.) And third, biometric verification can include liveness checks during account opening or credit limit expansions to confirm authenticity.
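The three-component approach described above might be sketched as a sequential screen; the function names, return strings, and data structures here are illustrative, not a real vendor API:

```python
def screen_account_opening(applicant, blocked_ips, ssn_registry, liveness_check):
    """Apply the three prevention components in sequence.
    All data structures and outcomes are illustrative placeholders."""
    # 1) IT layer: block openings from flagged IPs / geolocations
    if applicant["ip"] in blocked_ips:
        return "rejected: high-risk network origin"
    # 2) Data layer: an SSN already tied to a different identity
    #    is a classic synthetic-ID signal
    owners = ssn_registry.get(applicant["ssn"], set())
    if owners and applicant["name"] not in owners:
        return "flagged: possible synthetic ID (SSN reuse)"
    # 3) Biometric layer: liveness check confirms a real, present person
    if not liveness_check(applicant):
        return "rejected: liveness check failed"
    ssn_registry.setdefault(applicant["ssn"], set()).add(applicant["name"])
    return "approved"
```

In practice each step would call out to separate systems (threat intelligence, public-records verification, a biometric vendor), but the ordering mirrors the pipeline described above.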

Fraud against customers

Existing account fraud primarily targets customers, as fraudsters aim to take over accounts or execute unauthorized transactions. This often involves rapid money transfers or payments from the victim鈥檚 account into the fraudster鈥檚 account. Social engineering scams play a central role, ranging from credential and personal information harvesting to real-time scams that exploit authorized push payments and remote access tools.

Multi-factor authentication is a key strategy to mitigate existing account fraud and is widely deployed at financial institutions. This type of authentication enhances login security by requiring that users provide multiple forms of identification: something they know (such as a password), something they have (a security token or phone), and something they are (biometric data). The underlying IT layer outlined previously can prevent these attacks before any customer interaction: once the fraudster's network and device information is obtained, access is blocked.
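The three factor categories can be expressed as a minimal policy check; this is a hypothetical sketch of the decision logic, not a real authentication library:

```python
def mfa_satisfied(factors: dict) -> bool:
    """Require verified factors from at least two distinct categories:
    knowledge (password), possession (token/phone), inherence (biometric).
    Hypothetical policy sketch; a real system verifies each factor itself."""
    categories = ("knowledge", "possession", "inherence")
    verified = sum(1 for c in categories if factors.get(c))
    return verified >= 2

print(mfa_satisfied({"knowledge": True, "possession": True}))  # True
print(mfa_satisfied({"knowledge": True}))                      # False
```

The point of counting categories rather than factors is that two passwords are still single-factor; the strength comes from combining independent categories.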

Detection technologies now focus on interactions between fraudsters and victims, and real-time monitoring is critical for detecting account takeover scenarios. Behavioral analytics, augmented by machine learning, can build detailed customer profiles based on transaction history, spending patterns, login times, and other behaviors. This data helps financial institutions detect anomalies, such as unusual transaction amounts or login behaviors, which can trigger alerts.

Session metrics, such as the duration of activity and velocity of transactions, can indicate abnormal behavior. Behavioral biometrics, including typing speed, keystrokes, mouse movements, and navigation patterns, also can establish baselines for customer activity. Significant deviations, such as rapid clicks or erratic navigation, can raise red flags.
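As a toy illustration of the baseline-deviation flagging described above, a simple z-score check over a single metric might look like the following; production systems use trained models over many features, but the idea is the same:

```python
from statistics import mean, stdev

def is_anomalous(baseline: list, value: float, threshold: float = 3.0) -> bool:
    """Flag a metric (transaction amount, session duration, typing speed)
    that deviates sharply from the customer's baseline. Toy z-score check;
    the threshold of 3 standard deviations is an arbitrary example."""
    if len(baseline) < 2:
        return False  # not enough history to form a baseline
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

amounts = [42.0, 55.0, 38.0, 60.0, 47.0]   # customer's typical transfers
print(is_anomalous(amounts, 50.0))    # False: within the normal range
print(is_anomalous(amounts, 5000.0))  # True: would trigger an alert
```

A flagged value would feed the response options discussed next: alerting the customer, freezing the account, or requesting additional verification.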

When suspicious activity is detected, financial institutions can act immediately by alerting the customer, temporarily freezing the account, or requesting additional verification. Authorized push payment scams, which rely on rapid execution, can be disrupted by introducing delays, adding additional verification steps, or by sending tailored messages to the customer.

Behavioral biometrics further enhance security in these cases as well. A customer's sudden changes in typing speed, unusual mouse movements, or deviations in other baseline behaviors may be indicators of potential fraud. These tools allow banks to preemptively block fraudulent transactions or slow down execution, giving the customer more time to realize that a fraud attempt is happening and halt the scam.

Conclusion

Fraud prevention and detection in financial institutions require a multi-layered approach. Integrating analysis of multiple methods of prevention and detection – personally identifiable information, biometric verification, IT insights, and behavioral analytics – can provide financial institutions with a comprehensive framework to address evolving fraud typologies.

By adopting these measures, financial institutions can protect both their customers and their own operations while building trust and resilience against financial crime.


In the final part of our 3-part blog series, we will see how financial institutions can find the right tech solutions to detect and prevent AI-based fraud.

]]>
Current technological considerations in fraud detection & prevention /en-us/posts/corporates/technological-considerations-fraud-detection/ https://blogs.thomsonreuters.com/en-us/corporates/technological-considerations-fraud-detection/#respond Fri, 22 Nov 2024 10:23:58 +0000 https://blogs.thomsonreuters.com/en-us/?p=63950 Fraud is massively on the rise, and this represents a challenge to those who work to safeguard the financial system and ensure that bad actors are not finding their way into a relationship with a financial institution or defrauding existing consumers.

But how bad is fraud really? One anecdotal demonstration that fraud is a real problem can be seen in the fact that the fraud hotlines of many large financial services companies give routing options based on fraud type: "for check fraud, press 1; for ID theft, press 2…". If an organizational structure is needed to categorize a financial institution's response to fraud, that alone shows the severity of the situation.

There are three main sources to categorize and quantify fraud: data from industry observers, like the ID Theft Resource Center; data and reports from affected consumers, like the Consumer Sentinel Network maintained by the U.S. Federal Trade Commission; and data from the banking industry, which files suspicious activity reports (SARs). By triangulating data from these three sources, it is possible to quantify fraud trends and assess their severity to both consumers and the financial institutions who serve them.

Numbers from the ID Theft Resource Center, which reports on data breaches, show an alarming trend: attacks against financial services companies jumped by more than two-thirds over the last year, making that sector the most compromised industry for the first time ever, a position traditionally held by healthcare companies. Overall, in the first six months of this year, more than 1 billion people were impacted by breaches, many of them multiple times.

A similar picture arises from data coming from the FTC's Consumer Sentinel Network, which took in more than 5 million reports, with almost half (48%) of them being fraud related. For the first time, reported fraud topped $10 billion in losses, and it is estimated that the total losses from unreported fraud might be equally large. Imposter scams were another major source of fraud reported by Consumer Sentinel, with more than 800,000 reports and a median loss of $900 per incident. However, it is the investment scam category that has the highest average loss, at $7,760 per incident.


The third data point to consider is SARs, which are filed by financial institutions and corporations alike and surpassed 4 million reports for the first time last year. Check fraud tops the list here, followed by the financial exploitation of elders through a variety of scams, including social engineering.

Not every fraud type has the same impact within an organization. At the same time, not every technology can detect and prevent all the multiple fraud types that legions of fraudsters unleash on organizations and customers. The challenge is to obtain a comprehensive picture of the respective technological fundamentals of each fraud type and then create detection and prevention strategies that address the fraud and prioritize the response based on the highest negative organizational impact. Conducting a Fraud Risk Assessment (FRA) is the typical strategy that financial institutions undertake, and FRAs normally include both internal and external fraud. However, given the explosion of external fraud, it may be more important today for institutions to focus their efforts on detecting and deterring external fraud to a greater extent.

A good way to develop an external fraud detection and prevention strategy is by segmenting fraud between primary attacks against the financial institution and primary attacks against customers of the financial institution.

Attacks against consumers: The sophistication of social fraud engineers

One of the main types of attacks against consumers is the social engineering scam, which comes in a variety of forms and facets. It is important to understand that social engineering scams exist in such variety that a one-size-fits-all approach to prevention will not be effective. Among the main social engineering types of fraud affecting consumers are:

Credential and personal information harvesting – Phishing is a cyberattack in which attackers send fraudulent emails that appear to come from reputable sources in order to steal sensitive information like login credentials and credit card numbers. Spearphishing is a more targeted form of phishing in which attackers customize their messages to a specific individual or individuals within an organization, making it more convincing. Vishing (voice phishing) involves phone calls in which attackers impersonate legitimate entities to extract personal information. Smishing (SMS phishing) uses text messages to trick individuals into providing personal information or clicking on malicious links.

Social engineering scams – Real-time social engineering scams involve attackers manipulating victims into performing actions or releasing confidential information. These scams often involve impersonating trusted entities, such as bank officials or tech support, and creating a sense of urgency to prompt immediate action.

Remote access tools (RATs) – RAT attacks involve fraudsters using software to gain unauthorized access to a victim's computer, sort of the digital equivalent of a home invasion. In these attacks, fraudsters often pose as IT support personnel and convince the victim to install a RAT, which then allows the attacker to control the victim's computer remotely. The purpose is to steal sensitive information, monitor user activity, and even manipulate files and settings on the victim's computer.

Authorized push payment (APP) scams – APP scams involve fraudsters tricking victims into authorizing payments to accounts controlled by the fraudsters. These scams are often carried out in real time, making it difficult to detect and reverse the fraudulent transactions. APP scams can take various forms, such as impersonation scams in which the attacker pretends to be a trusted entity, or romance scams in which the attacker builds a relationship with the victim and then requests money. The real-time nature of these transactions also can leave little time for victims or financial institutions to intervene.

Attacks against the financial institution: Rise of the BOTs

Synthetic IDs, BOT attacks, and stolen identities are frauds that primarily affect financial institutions. While synthetic ID fraud uses parts of an individual’s personally identifiable information, defrauding that individual and potentially affecting their credit score, it is primarily an attack against a financial institution.

Malware and BOT attacks use malicious software to infect devices and turn them into bots that can be remotely controlled. This infection typically comes in the form of a virus, adware, or a computer worm. These attacks are designed to access customer accounts and infiltrate the systems of financial institutions.

A typical computer virus can replicate itself, spread to other computers, and is programmed to damage a computer by deleting files, reformatting the hard disk, or using up computer memory. Worms are malware that execute independently and can spread to other systems, often via email. Adware, on the other hand, may include malicious code that displays ads when a customer is connected to the internet.


In the next part in our 3-part blog series, we will evaluate current and emerging technological capabilities to detect and prevent these frauds from occurring.

]]>
The AI regulatory challenge: Balancing precision against abstraction /en-us/posts/technology/ai-regulatory-challenge/ https://blogs.thomsonreuters.com/en-us/technology/ai-regulatory-challenge/#respond Thu, 19 Sep 2024 15:52:30 +0000 https://blogs.thomsonreuters.com/en-us/?p=63104 Regulatory compliance requires precision and granularity, and increasingly it is fulfilled by robust, data-driven software tightly coupled with predictive analytics. As industry monitoring and reporting move from passive to active, adaptive software solutions require artificial intelligence (AI) algorithms and continuous feedback to ensure adherence to guidelines and legal requirements.

Moreover, regulatory specificity – often driven by consumer advocates, privacy laws, cybersecurity, and criminal activity – combined with the explosion of AI solutions and technologies, has pushed enterprise demands and capabilities beyond the benchmarks from just 12 months prior. The advancement of AI integration in regulatory compliance has been nothing short of a series of interconnected black swan events.

As 2025 comes into focus with increasingly granular compliance solutions (such as commercial, off-the-shelf products), compliance software continues to deliver governance oversight that is both accurate and efficient. However, in the zeal to embrace intelligent, adaptable regulatory compliance automation, organizational leaders are unknowingly taking on hidden risks and long-term operational implications that could materially impact auditability, skills, and event-driven anomalies.

Critical competencies

While leaders believe in quantum compliance efficiency gains exceeding 35% from the simple, non-systemic application of generative AI (GenAI), the realities rarely exceed 20%, with trends already heading toward 10% improvements. That gap between promise and reality is the opportunity.

[Chart: the 2025-2027 regulatory compliance software opportunity]

As seen above, the opportunity in 2025-2027 for regulatory compliance software is based on three critical competencies: i) productized data; ii) continuous enterprise integrations; and iii) the skill sets necessary to blend the first two competencies together using deep domain knowledge.

Data is the fuel for active regulatory compliance software effectiveness and operational efficiency. And continuous integrations are demanded by the granularity of software use cases and the requirement to adapt to changes in the data – what we call event-driven realities.

Skill sets are the third leg of how AI-enabled regulatory compliance software comes to market, requiring not just implementation prowess but also ongoing solution efficacy and feasibility, which represent 85% of lifecycle costs. This last item is the greatest unknown for these rapid-decay regulatory compliance technologies, and it exposes inherent post-implementation weaknesses as leaders rush to move software from passive reporting to active, AI-enabled anticipatory innovation.

To mitigate the risks, leaders need to recognize that with the brilliant specificity of compliance software incorporating AI algorithms comes a set of implied requirements for delivering system-to-system interoperability, implemented using application programming interfaces (APIs), data isolation modules, and active governance. Without an architectural design proactively linking together the precision of unique solutions (such as those from multiple vendors), the burden of integration and adaptability falls on the skill sets of employees who may lack the methods, techniques, and modular mindsets to ensure innovative regulatory compliance relevancy.

Improving employee skill sets

The chart below represents decomposition and compartmentalization from the prior illustration, while showing the realities facing regulatory compliance software consumers across increasingly specialized solutions. Even with advanced solutions from leading vendors, the organizational skills demanded to ensure continuous improvement and adjustment reside within the organization and its data-driven designs.

[Chart: decomposition of regulatory compliance competencies across specialized solutions]

Moreover, and beyond the point-based functionality delivered by each software application, the AI management lifecycle – maintenance, upgrades, and retirements – will vary depending on scale, complexity, and integration. The average costs beyond the initial implementation of licensing, configuration, and consulting for a five-year period with adaptive AI regulatory compliance software can add millions of dollars to budgets, especially when accounting for upgrades, regulatory changes, personnel, interoperability, data management, and short-cycle decommissioning (created by the intelligent software itself and its rapid-cycling of improvements).

Thus, when examining the practicality of intelligent software against the abstraction of data and regulatory compliance AI architectures, organizational compliance leaders may conclude that money is better spent on solutions, not designs. While that is accurate when faced with short-term regulatory burdens, the longer-term features, costs, and reusability (if passively managed) will add significantly to tech budgets. The bottom line: if regulatory compliance solutions are not managed holistically using an architectural framework, expect the euphoria of implementation to become a hangover.

And if an organization factors in more than just the software cycles, then AI regulatory compliance costs also will come to include the transparency of the data-driven systems. Additionally, there are process costs and regulator discussions that impact auditability, legal and due diligence, and tax consequences beyond the traditional software capitalization.

Clearly, without a redefined strategy and architecture to integrate these important and complex regulatory technologies, their results will mirror legacy applications and their process-driven mindsets.

Fixing disjointed compliance capabilities

So, what can be done to avoid the chaos of disjointed regulatory compliance software capabilities? How will employees' flexible skills be continually aligned when the organization lacks the mechanisms to deal with the ambiguity created by emerging technologies and continually changing regulatory demands across multiple jurisdictions?

To address the designs above while mitigating the risks of rapid-cycle intelligent regulatory compliance software capabilities, leaders need to adopt a set of interconnected, comprehensive actions that stay aligned with engineering principles.

      • Software segmentation – Organizations should compartmentalize regulatory compliance software by functional capabilities, including their data demands and outputs. This application rationalization represents the first step in creating a regulatory compliance architectural blueprint.
      • Regulatory landscape alignment – Leaders should enlist tools and partnerships that provide insights into future demands. Using agile frameworks underpinned by policy decisions, they need to identify the critical areas and exposures that must be met with future software.
      • Data and process transparency – Utilizing common data stacks, leaders should develop active and robust governance automation that proactively delivers against requirements, while ensuring end-to-end auditability and legal due diligence.
      • Privacy, security & ethics – Leaders need to create isolation designs that deliver zero-trust solutions across all regulatory compliance software components. Identify and continually implement changes that guarantee the integrity of capabilities, while minimizing the re-working that's common within traditional regulatory compliance software components.
      • Continuous evaluation and enhancement – Leaders need to ensure that robust recovery processes and technologies are designed not only to reduce failure points and outages, but also to support cross-system adaptability driven by industry and technological advancements.

These steps make tangible the abstract task of assembling intelligent regulatory compliance solutions that deal with high specialization.

In conclusion, the embrace of AI does represent a series of black swan events. When AI is applied to regulatory compliance, especially across existing siloed regulatory technology capabilities, the real efficiencies, risks, and demands are only visible when holistically assembled using robust analysis and design methods. The illustrations showcase what is yet to be – as intelligent regulatory compliance solutions increasingly disrupt business operations and workflows.

And counter to implementing regulations and compliance demands, the abstraction of intelligent ideas is not a limiting factor – instead, it represents the blueprint to make rapid-cycle improvements continuously fit.



]]>
Déjà vu: AI is amplifying the mistakes of the past /en-us/posts/technology/ai-amplifying-past-mistakes/ https://blogs.thomsonreuters.com/en-us/technology/ai-amplifying-past-mistakes/#respond Wed, 13 Mar 2024 11:19:45 +0000 https://blogs.thomsonreuters.com/en-us/?p=60715 The acronym AI (artificial intelligence) has become like the air – it is all around us and touches everything we do. Indeed, the advancements of AI are highly efficient, increase revenues, and leverage humans-in-the-loop. However, when it comes to AI in all its ever-changing kaleidoscope of forms, its growing functionalities, its demands for data, and its advancing intelligence, who is responsible for creating, managing, and retiring the roadmaps of integration?

Simply put, how do all these AI solution pieces fit together or even talk to each other? What happens when there is a need to audit the cascading inputs and outputs or implement error corrections? Is there any way to distinguish AI-created data from that of traditional systems? While uniquely different, AI's rapid growth is exposing the fractures and fallacies of cascading upstream and downstream integrations – and our ability to assess quality, accuracy, and even systems-of-record. Indeed, history is repeating itself.

The future demands a proactive integration of innovative research tempered by domain market forces, consumer behaviors, AI technology (such as chips and software), and digital data explosions, all glued together by security, legal, and regulatory requirements. It is a future that demands layers of integrated solutions, all requiring transparency, heterogeneity, and risk attributions.

At its core, AI is a data-driven solution. At its edges, AI represents an ability to extend data ideation using building blocks of functionality uniquely assembled – but how? Let's discuss an illustrative representation of delivering AI governance by design rather than the traditional siloed product mindsets of one-and-done.

[Graphic: macro-segmentation of AI governance by design]

The above graphic illustrates the macro-segmentation that is required in an Age of AI often delivered using Agile methods underpinned by industry-defined, universal data models. Yet, even while learning from legacy mistakes, the introduction of AI solution sets still creates both opportunities and challenges.

      • AI-impacted legacy tech – The burden of repairing the existing impacts of fragmented, siloed legacy systems is estimated to cost up to $2 trillion, with profit and operational losses fast approaching $3 trillion per year.
      • Data "bar codes" – This represents an easy-to-understand solution that is complex in implementation: From where does the data we use to make decisions, impact operations, or report to investors originate? If regulators, auditors, or legal personnel asked for the traceability of the inputs, could current or future AI solutions meet the due diligence requirements?
      • Interoperability and trust – A core tenet of vendor packages, the idea of (open-source) APIs and data virtualization dominated the last 25 years of systems, even down to mobile applications. Yet AI introduces production-ready unknowns for scale, validity, performance, and unintended consequences beyond its 2023-'24 pilots.
      • Skills and simplicity – While researchers seek to expand the options of AI and its capabilities, the challenge is where the skills will come from to operate, deliver, and integrate data that is doubling in volume every year – not to mention opaque AI systems provided by vendors, skunkworks, startup efforts, and small-scale prototypes.
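The "data bar code" idea in the list above could be sketched as a hash-chained provenance record, where each derived dataset carries a hash of its content plus the bar code of its upstream source; the field names and record shapes are illustrative only:

```python
import hashlib
import json

def stamp(record: dict, parent_barcode: str = "") -> dict:
    """Attach a 'bar code': a hash over the record's content plus the
    bar code of its upstream source, so lineage can be recomputed.
    Illustrative sketch, not a real lineage or governance product."""
    payload = json.dumps(record, sort_keys=True) + parent_barcode
    barcode = hashlib.sha256(payload.encode()).hexdigest()
    return {**record, "parent": parent_barcode, "barcode": barcode}

raw = stamp({"source": "core-banking", "value": 100})
derived = stamp({"source": "ai-model-v2", "score": 0.97}, raw["barcode"])
# An auditor can recompute each hash and walk the 'parent' links upstream.
```

Because each bar code is deterministic over content plus parentage, a regulator's traceability question reduces to recomputing hashes along the chain.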

Making architecture even more challenging, over the last 16 months we've seen the traditional segmentation between industry results and research – internal or academic – blur. Today, hundreds of AI solutions are being developed every week, with investment values and M&A actions dominating strategies that represent the front-office fear, uncertainty, and doubt of being left behind.

The complexities of AI, its vast ability to disintermediate processes, and its data foundation require strategies and architectures that weave together rapidly evolving technologies, all impacting organizational change and the skills within. What is the approach, design, or outcome? For business leaders, what will it cost compared to the risks of non-compliance?

Around these two questions comes a set of emerging designs for actively managed, multi-modality products and capabilities linked across a fabric of solutions, analogous to building with Legos. The graphic below represents a comprehensive blueprint to actively address the demands of AI governance encased within result-oriented, business-demanded delivery. It is indeed a picture worth 10,000 words, and it implies numerous upskilling priorities for industry and research.

[Graphic: blueprint for AI governance across a fabric of solutions]

When studying the shock of this graphic, business leaders see complexity and divisional chaos. For larger firms across highly regulated industries such as banking and financial services, these designs are already being discussed across their multibillion-dollar budgets and among their thousands of IT staff members. For midsize and smaller firms, designing a multimodal architecture against a blueprint of design seems impossible at first glance.

Yet, development of these blueprints or fabrics has already been done. Looking across industries and disciplines we can see examples from privacy-by-design solutions, outsourcers seeking to modernize their products for consumers, or brand-name consulting leaders who are proactively not just engaging their customers but assembling solutions for them that meet their future needs.

For researchers in industry and academia, their deep understanding of each unique area provides the roadmaps for adaptation in the face of hyperscale and rapid-cycle technologies. This is where corporate leaders who are driven by results must balance what is available with what is possible when the innovation cycles for AI advancements are now measured in weeks and months rather than years. These AI realities, coupled with regulators and their exploding oversight, demand more than the traditional adoption of siloed regulatory technology to create governance solutions.

Only with a holistic approach to AI architecture will enterprises and their researchers arrive at a workable and efficient solution to regulation. Regulation is the glue that demands integration, and integration itself is demanded by fragmented solutions. Indeed, solutions ensure that the enterprise can be profitable in the face of opaque and new market forces. And linking them all together will be unfamiliar, but it is the solution that cannot be left to chance.

While industry and research personnel want AI to be simple, that represents an assumption that there is total transparency and recourse even if we use natural language interfaces. AI is not a magic button that operates in a vacuum 鈥 industry and researchers have already tried this repeatedly and it is a recipe for future chaos.

To ignore rapid-cycle AI progressions as a business leader is problematic, and failing to integrate disparate technological solutions is déjà vu. The complexity and confusion of AI is just beginning, yet the rush to deep research and fast results is bringing back the ghosts of prior step-functional shifts of innovation and computer advancements.

In the end, AI will be about architectural adaptability – not just products, technology, data, or even regulators. AI is a blending of the next generation of demands for strategy and architecture. Those industries and firms that embrace this transformative paradigm will likely come to represent the future of leadership.

]]>
AI, other technology the "only answer" to AML challenges in evolving threat landscape, says ACAMS report /en-us/posts/investigation-fraud-and-risk/ai-aml-challenges-acams/ https://blogs.thomsonreuters.com/en-us/investigation-fraud-and-risk/ai-aml-challenges-acams/#respond Tue, 20 Feb 2024 19:01:29 +0000 https://blogs.thomsonreuters.com/en-us/?p=60421 With financial firms forced to cut anti-financial-crime budgets, artificial intelligence (AI) must clear regulatory hurdles before it can backstop the function, the Association of Certified Anti-Money Laundering Specialists (ACAMS) said.

Despite AI and machine learning “getting better by the day” and nearing readiness for deployment, ACAMS had heard from many institutions that “maybe [the institutions’] own data is not quite ready,” said Craig Timm, senior director of anti-money laundering (AML) with ACAMS, a trade association for anti-financial-crime professionals.

With AML budgets shrinking, AI and related technology "is the only answer" to adequate risk management, Timm explained. "It's the way they're going to get more efficient while maintaining effectiveness," he said. "It's the way they're going to fight back against the criminal use of this technology; there are just steps to get there." In addition to financial institutions needing to clean up their data, regulators must also create structures and guidance to allow firms to implement AI, he added.

Joby Carpenter, global subject-matter expert on technology and illicit finance at ACAMS, agreed that regulators must also step up. "[Regulators] have not yet got to the point where they've said, 'you can turn off [your] legacy systems and just rely on AI in order to do your due diligence, or your screening, or whatever it may be'," Carpenter said. "So, that's a big issue."

Carpenter explained that "regulators accept that they are in that position, that they haven't given permission to turn off those legacy systems, and they are trying to get around that with initiatives like sandboxes and tech sprints, in order that AI can then be demonstrated" as being effective and in line with regulatory requirements, "but it does seem to be a fairly slow process to get to that point."

Justine Walker, head of sanctions, compliance & risk at ACAMS, agreed, adding: "This is a radical moment in time in terms of technology, both in terms of its benefits to the anti-financial crime function, but also the challenges it brings." Walker added that she thinks that "in five years' time we're going to be discussing this in a very, very different way. In what way? I don't think any of us quite know, but it is changing by the day."

Top 10 financial crime threats

The AI challenge was only one element of a broader 2024 Global Anti-Financial Crime Threats Report that ACAMS released. The report also outlined the top 10 financial crime threats that ACAMS saw as "high on the radar" in 2024, based on discussions at events held around the world, plus a global survey conducted between Sept. 18 and Oct. 22, 2023.

The top ten threats assessed by ACAMS, beginning with the number-one threat, include:

1. Anti-financial crime team budget cuts

The top threat stems from budget cuts and declining anti-financial crime staff amid an evolving and heightened risk environment, the report said. “If institutions cannot manage this threat, it will negatively impact their ability to manage all the other threats in this report and the overall effectiveness of the anti-financial crime function,” ACAMS wrote.

2. Geopolitical tensions and fragmentation

These "dominating concerns" present "fundamental challenges with conflict, cybersecurity, energy security, and strategic competition, which fuel growing risk dilemmas," the report stated, adding that navigating this fragmented environment "requires adept handling of emerging scenarios involving conflicts of law and regulations, personnel risks, and evolving market volatility."

3. Cyber-enabled fraud

Recent technological advancements have prompted a rapid surge in cybercrime, including fraud that's enabled by digital media platforms and the darknet, the report noted.

Additionally, pig butchering, a type of cryptocurrency scam that targets wealthy individuals online through romantic deception in order to gain their trust and steal their assets, has led to losses of billions of dollars since 2021.

4. Sanctions and evasion

While it ranked fourth in the ACAMS threat hierarchy, sanctions and related evasion of sanctions are “paramount in the minds of executive leadership, causing them sleepless nights,” the report stated, adding that those concerns “stem from the use of sanctions for foreign policy objectives and the persistent complexity of maintaining ‘sanctions compliance.'”

Two primary drivers are anticipated to shape sanctions activity in 2024. First, stronger alliances will likely lead to the convergence of sanctions and export controls, strategically aimed at restricting Russia's access to sensitive technology and degrading its war capabilities. Second, US enforcement against Russia's sanctions-evasion tactics is expected to intensify.

5. Scale and pace of change

The scale and pace of anticipated regulatory change influences the full spectrum of anti-financial crime programs, with emphasis on expected changes affecting AML, sanctions, cybersecurity, crypto-assets, data privacy/data protection, and fraud, the report stated.

6. Abuse of legal entities and arrangements

Against the backdrop of global AML standard-setting and renewed demands for corporate transparency from the Financial Action Task Force, the global money laundering and terrorist financing watchdog, the misuse of legal entities and arrangements has also emerged as a critical threat.

“Anonymous legal entities persist at the epicenter of significant cases involving major corruption, money laundering, tax evasion, and sanctions evasion,” the report noted. “Authorities are anticipated to intensify efforts by enforcing corporate registry structures and imposing more stringent requirements for beneficial ownership due diligence.”

7. Balancing counter-terrorist financing with financial access and humanitarian aid

Maintaining balance between counter-terrorist financing efforts and the facilitation of financial access to humanitarian aid remains a core priority for the international community, humanitarian actors, and compliance functions, according to the report.

8. Lack of risk-based approach

The lack of an effective risk-based approach to regulation and supervision is consistently viewed as a hindrance to the ability of AML regimes to fight financial crime.

9. Weaponized technology

ACAMS also flagged the hostile use of commercial spyware, ransomware, and offensive cyber-capabilities as a growing concern.

10. Internal threats

Concerns about this multifaceted threat were “spearheaded by senior executives and law enforcement figures,” the ACAMS report stated.

Value of threat report insights

AML and sanctions compliance professionals may wish to convey some of the insights gleaned from the ACAMS report to their senior executives, and perhaps ultimately to the board of directors, as evidence of the need for adequate AML compliance resources, and to raise awareness of the growing role of new forms of technology that could be of use to organizations' anti-financial crime functions.

Know your AI: Compliance and regulatory considerations for financial services

Tue, 19 Dec 2023

Despite the widespread enthusiasm surrounding artificial intelligence (AI), its generative AI (Gen AI) component, and the enormous potential benefits of both, the laws and regulations governing the new technology remain sparse. With nearly a dozen US states enacting AI-related legislation, international bodies developing practice standards, and the recent White House Executive Order on AI and the European Union's agreement on its EU AI Act, a roadmap to regulation is taking shape.

Pieces of the global regulatory puzzle are now on the table. All signs point to a complex patchwork of laws and regulations, much like cybersecurity and data privacy rules. Unsurprisingly, developing a comprehensive and cohesive AI regulatory framework will be a lengthy process. Below is a review of potential AI benefits for financial services firms, a description of the likely regulatory path ahead, and some preliminary, high-level compliance suggestions for firms building AI into their operations.

Use cases for AI

AI is already in use at most firms in its various forms. Algorithmic trading, risk modeling, and surveillance programs are obvious basic examples. Many firms have been using chatbots to assist customers with routine questions and account requests. Such customer engagement tools are a valuable time-saver and productivity-enhancer for customer-facing personnel. The complexity and capabilities of these tools will spread to other areas and will only improve as AI advances.

The use of AI will surely increase operational capacity and productivity within firms. AI models, sometimes called digital workers, are productive from day one, never get sick or take time off, and are often faster and more accurate than their human counterparts. These 24/7 workers will help financial services firms gain efficiency and reduce manual reviews of automated events, as AI-augmented tools have been shown to significantly reduce false-positive alerts requiring such reviews.
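To make the false-positive point concrete, here is a minimal sketch of AI-assisted alert triage: alerts are scored on a few risk features, low-scoring alerts are auto-closed, and analysts review the riskiest first. The feature names, weights, and threshold are all hypothetical, invented for illustration; they do not come from any real monitoring system.

```python
# Hypothetical sketch of transaction-monitoring alert triage.
# Feature names, weights, and the auto-close threshold are illustrative only.

def score_alert(alert: dict) -> float:
    """Combine a few illustrative risk features into a 0-1 score."""
    weights = {
        "structuring_pattern": 0.4,   # repeated just-under-threshold deposits
        "high_risk_geography": 0.3,
        "rapid_movement": 0.2,        # funds in and out within days
        "new_account": 0.1,
    }
    return sum(w for key, w in weights.items() if alert.get(key))

def triage(alerts, auto_close_below=0.2):
    """Split alerts into a review queue (highest score first) and auto-closed."""
    review, closed = [], []
    for alert in alerts:
        (review if score_alert(alert) >= auto_close_below else closed).append(alert)
    review.sort(key=score_alert, reverse=True)
    return review, closed
```

In a production deployment the scoring function would be a trained model rather than fixed weights, and auto-closed alerts would still be sampled for quality assurance.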

AI may also help unify distinct data silos to draw new information and correlations that were previously impossible or unseen. Intelligent document-processing helps to uncover relevant adverse media on subjects of interest in anti-money laundering (AML) and know-your-customer (KYC) investigations. Automated adverse media and sanctions reviews can improve firms’ ability to discover hidden risks among current and prospective customers, vendors and third parties.
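One simplified way such automated screening can work is fuzzy name matching against a watchlist, so that near-misses in spelling still surface. The sketch below uses Python's standard-library difflib; the list entries and similarity threshold are invented for illustration, and real screening systems add far more (aliases, transliteration, dates of birth, secondary identifiers).

```python
import difflib

# Illustrative watchlist; real screening uses official sanctions data feeds.
SANCTIONS_LIST = ["Ivan Petrov", "Acme Trading FZE", "Global Imports LLC"]

def screen(name: str, threshold: float = 0.85):
    """Return (entry, similarity) pairs whose similarity meets the threshold."""
    hits = []
    for entry in SANCTIONS_LIST:
        ratio = difflib.SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if ratio >= threshold:
            hits.append((entry, round(ratio, 2)))
    return hits
```

For example, screen("Iwan Petrov") would still flag "Ivan Petrov" despite the spelling variant, while a clearly unrelated name returns no hits.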

AI regulatory roadmap

Like cybersecurity and data privacy laws and regulations, AI is rapidly becoming the next critical obligation for firms. It will become a permanent pillar within all financial services firms' risk, legal and compliance frameworks. Also, like cybersecurity and data privacy measures, AI regulations will require simultaneous focus on global, federal, state and industry-specific levels, as there will likely be multiple layers, a proverbial patchwork.

Europe has taken the lead globally on AI governance, with the EU's Dec. 8 release of the EU AI Act. The European Commission also recently announced an agreement by G7 leaders on a set of international guiding principles and a voluntary code of conduct for AI developers under the Hiroshima AI process. The Principles and the Code of Conduct will complement, at the international level, the legally binding rules that the EU intends to codify in the Act.

Further, nearly a dozen US states have enacted legislation on AI, and legislation is pending in almost a dozen more. Many of the measures fall within consumer privacy laws or industry-specific areas, such as healthcare, government, or insurance.

At the federal level in the US, proposed legislation outlines rules for AI, including risk assessment obligations that would directly impact companies developing and utilizing AI technologies; the bill, proposed more than a year ago, remains stalled in Congress. The US government also has issued guidance through the National Institute of Standards and Technology (NIST).

Several courts have also spoken on the use of AI; the U.S. Fifth Circuit Court of Appeals, for example, proposed requiring lawyers to certify their use of generative AI in filings, and other courts have issued similar warnings to lawyers.

The White House laid out some principles and priorities in the Blueprint for an AI Bill of Rights, published in October 2022. On Oct. 30, 2023, the White House published an Executive Order directing US government departments and agencies to evaluate AI technology and implement processes and procedures to govern its adoption and use. The Executive Order was accompanied by a fact sheet that summarized the 20,000-word order in a more manageable and reader-friendly 1,900 words.

Compliance suggestions

As financial services firms create, establish and adopt new compliance, risk and legal policies and procedures surrounding AI, they must view AI as any other compliance obligation. Although the regulatory picture is uncertain, essential compliance obligations can and should be applied. Core compliance principles such as training, testing, monitoring and auditing are all essential in developing AI policies.

Firms should also be sure to include legal counsel, either in-house or external, with expertise in the relevant areas, because certain existing contracts with data sources and vendors may prohibit the use of some information by AI models. Copyrighted material is also a concern, so financial services firms should carefully review all existing contracts with their customers and vendors.

Firms must also perform a cost-benefit analysis for their AI projects because, as eager as they are to innovate with AI, they may find that some legacy solutions are cheaper and more effective. Firms must also prioritize data quality and security, because the data they use to train AI models will determine the accuracy and fairness of those models.

All AI endeavors should run in parallel with existing programs and be thoroughly checked for accuracy, and the processes must be documented and audited. The final output of AI models must include a report that can be saved for audit purposes.
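The parallel-run and audit-report idea can be sketched as follows. This is a minimal illustration under assumed conventions: the case IDs, decision labels, and function names are all hypothetical, and a real validation framework would record far more detail per case.

```python
import datetime
import json

def compare_runs(legacy_results: dict, ai_results: dict) -> dict:
    """Compare legacy and AI decisions case-by-case and build an audit record.

    Inputs map case IDs to decision labels; all names here are illustrative.
    """
    mismatches = [case_id for case_id, decision in legacy_results.items()
                  if ai_results.get(case_id) != decision]
    return {
        "run_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "cases_compared": len(legacy_results),
        "mismatches": mismatches,
        "agreement_rate": 1 - len(mismatches) / len(legacy_results),
    }

def save_audit_report(report: dict, path: str) -> None:
    """Persist the comparison as JSON so it can be retained for auditors."""
    with open(path, "w") as f:
        json.dump(report, f, indent=2)
```

Mismatched cases would then be investigated before the AI system is trusted to replace the legacy process, and each saved report becomes part of the documented audit trail.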
