Business fraud Archives | Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/business-fraud/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Your best employee might be your biggest conflict of interest
/en-us/posts/corporates/employee-conflict-of-interest/ | Mon, 27 Apr 2026

Key insights:

      • Conflict of interest doesn’t start with bad intent – Often, conflict of interest starts with tenure, trust, and relationships that slowly blur the line between good judgment and personal interest.

      • The real exposure isn’t the fraud itself – The real damage from conflict of interest can be years of skewed vendor decisions, above-market pricing, and lost competitive ground.

      • Companies shouldn’t treat conflict of interest as a disclosure problem – Companies would do well to remember that often conflict of interest is really a data and systems problem.


His access logs were clean, so it took weeks to find out what actually happened. He had been borrowing IT logins from colleagues, who had handed them over without much thought even though they knew it broke policy. They just didn’t think it mattered. He used those logins to steer million-dollar contracts to selected vendors who were paying him kickbacks.

The company’s conflict of interest policy existed, and people had signed it. Yet, nobody checked whether anyone followed it. And this scheme wasn’t even caught internally. Fortunately, someone outside found it.

This gap between knowing something is wrong and believing it matters – that’s where conflict of interest lives.

The financial exposure goes well beyond the kickback itself

The kickback that was paid to an insider is not the real cost to the company. The real cost is what happens while nobody is looking. Because of this fraud, the company unknowingly endured years of sourcing decisions shaped by hidden interests, vendors who never got a fair shot, and pricing that stayed above market because the person managing the relationship had a reason to keep it there.

Across many industries, the numbers back this up. Research from the Association of Certified Fraud Examiners (ACFE) found corruption in almost half (48%) of all fraud cases. The median loss for corruption schemes was around $200,000, and the average scheme ran for about 12 months before anyone caught on. Not surprisingly, 87% of conflict-of-interest fraud perpetrators had no prior criminal record. Indeed, they were trusted employees, not career criminals.

What makes this worse is that most organizations have no reliable way to catch it. Across industry guidance, compliance publications, and professional forums, a consistent picture emerges: The majority of organizations rely entirely on disclosure forms and self-reporting to manage conflicts of interest. Leading compliance expert Rebecca Walker has acknowledged this publicly – even though the tools exist, almost nobody is using them.

The statistics, however, only capture what gets caught. The psychology of how it starts is harder to measure – and more important to understand. Conflict of interest rarely begins with a plan to steal. Rather, it starts with tenure, trust, and relationships that make someone hard to replace. Over time, the line between good judgment and personal interest doesn’t get crossed, it just disappears.

Taking a more structured approach

Most companies rely on disclosure forms, ethics training, and a code of conduct. They want to tell people what a conflict looks like, ask them to report it, and assume they will. Too often, they won’t.

Disclosure forms ask employees to self-report behavior they often don’t recognize as problematic, and those who do recognize it worry they’ll be investigated or treated unfairly themselves. They’ve watched junior staff held to strict standards while senior leaders get a pass. Unfortunately, that teaches everyone the same lesson: Stay quiet. When 85% of companies with a code of conduct still have fraud at this scale, the problem is not what people know, rather it’s how the program is built.

These failures point to three specific gaps in how most organizations approach conflict of interest: i) how they gather information; ii) how they monitor risk; and iii) how they receive reports. A structured framework – one based on concepts of design, detect, and deploy – can address each one of these gaps directly, with each component being measurable in financial terms.

Design: Are you collecting facts or asking people to confess?

Take a look at how you approach employees around conflict-of-interest issues. Are you seeking information, or just hoping the employee admits wrongdoing, even inadvertently? A better approach is to ask specific questions: How long has the employee worked with this vendor? Can the employee award contracts to them? Does the employee have any ownership stake in a company on the approved vendor list?

Let the employee give the facts and then let the system make the call. When you separate sharing information from being judged for it, people actually share and you get better data. And better data means better procurement decisions. That is not a compliance win – that’s a business win.
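The fact-first approach described above can be sketched as a small rules engine: the employee supplies facts, and the system, not the employee, decides whether a conflict review is triggered. This is a hypothetical illustration; the field names and thresholds are invented assumptions, not any real vendor's schema.

```python
# Hypothetical sketch: employees report facts; a rules engine, not the
# employee, decides whether a conflict-of-interest review is needed.
# All field names and thresholds below are illustrative assumptions.

def needs_review(disclosure: dict) -> list[str]:
    """Return the list of conflict-of-interest flags triggered by the facts."""
    flags = []
    if disclosure.get("years_worked_with_vendor", 0) >= 5:
        flags.append("long-tenure vendor relationship")
    if disclosure.get("can_award_contracts") and disclosure.get("years_worked_with_vendor", 0) > 0:
        flags.append("awards contracts to a known vendor contact")
    if disclosure.get("ownership_stake_in_vendor", 0.0) > 0.0:
        flags.append("ownership stake in an approved vendor")
    return flags

# The employee answers factual questions; no admission of wrongdoing required.
disclosure = {
    "employee_id": "E-1042",
    "years_worked_with_vendor": 7,
    "can_award_contracts": True,
    "ownership_stake_in_vendor": 0.0,
}
print(needs_review(disclosure))
```

The design point is the separation of concerns: the employee is never asked "do you have a conflict?", only for facts that the system can evaluate consistently across everyone.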

Detect: Are you looking for conflicts or hoping someone speaks up?

Run your vendor list against your employee records and flag matching addresses, phone numbers, and bank accounts. Check public registries for shared directors between your staff and your suppliers. Look at who has been awarding contracts in the same role for years without rotating, and managers who keep hiring from former employers.

Any company with an ERP system and an HR database can run these checks quarterly. And ACFE data underscores the value in taking the proactive approach: On average, companies using automated transaction monitoring catch fraud within six months and lose about $83,000; and companies that wait for law enforcement to alert them to the fraud take 24 months and lose $675,000.
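The quarterly cross-check described above can be sketched in a few lines. The record layouts here are illustrative assumptions; in practice the inputs would come from the ERP vendor master and the HR system.

```python
# Hypothetical sketch of the vendor/employee cross-check described above:
# flag any vendor that shares an address, phone number, or bank account
# with an employee. Record layouts are illustrative assumptions.

def shared_attribute_hits(employees, vendors):
    """Return (employee_id, vendor_id, field) tuples for matching attributes."""
    hits = []
    for field in ("address", "phone", "bank_account"):
        # Index employees by the attribute value for O(1) lookups.
        emp_index = {e[field]: e["id"] for e in employees if e.get(field)}
        for v in vendors:
            if v.get(field) in emp_index:
                hits.append((emp_index[v[field]], v["id"], field))
    return hits

employees = [{"id": "E-17", "address": "12 Oak St", "phone": "555-0100", "bank_account": "111222"}]
vendors = [
    {"id": "V-901", "address": "40 Elm Ave", "phone": "555-0199", "bank_account": "111222"},
    {"id": "V-902", "address": "12 Oak St", "phone": "555-0150", "bank_account": "999888"},
]
print(shared_attribute_hits(employees, vendors))
```

In this toy data, V-902 shares an address with employee E-17 and V-901 shares a bank account – exactly the kind of overlap a disclosure form alone would never surface.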

Deploy: Is your hotline a business tool or a poster on a wall?

Tips catch 43% of all fraud – more than audits, management reviews, and law enforcement combined. Companies with hotlines suffer a median fraud loss of $100,000; companies without them lose $200,000. A working tips hotline can cut your losses in half.

However, most hotlines are not functioning as intended. They exist on paper without the visibility, trust, or independence required to generate reliable reports. For example, a senior executive was steering contracts to his own associates. And even though a company hotline existed, the executive actually sat on the committee that received the reports. The tool was built to catch misconduct and was working properly, yet it was controlled by the person committing the fraud. The matter had to be escalated outside normal channels, and the senior executive was eventually fired for cause.

Almost half (46%) of employees who report misconduct face retaliation, according to research from the nonprofit Ethics and Compliance Initiative. When that is the outcome, silence becomes the rational choice. If you want your hotline to work, promote it every quarter. Show people what was reported and what happened because of it. Make sure no single person can block or read a report before it reaches the right people. Being that proactive around your hotline gives employees proof that the system protects them.

Is it worth the investment?

Of course, the question is not whether your company has a conflict-of-interest policy – it most likely does. Rather, the question is whether you would know if someone were breaking it right now.

Companies that design better fact-gathering, detect through monitoring, and deploy trusted reporting can do more than catch fraud early. They can buy from better vendors, compete on fairer pricing, protect their board from liability, and build a culture in which raising a red flag is seen as protecting the business.

If the honest answer is that you would not know if someone was violating your company’s conflict of interest policy, then the business case for being more proactive has already been made.


You can find more about how companies can best manage business fraud here

Why the Supreme Court is weighing in on disgorgement, the SEC’s favorite payback tool
/en-us/posts/government/sec-disgorgement-supreme-court/ | Fri, 24 Apr 2026

Key insights:

      • Getting at the core legal question – In a case brought by defendant Ongkaruck Sripetch, the Supreme Court is deciding whether the SEC must prove investors suffered measurable financial loss before courts can order disgorgement, which requires fraudsters to give up illegal profits.

      • Why it’s high-stakes – Disgorgement is a major SEC enforcement tool – representing billions of dollars annually – so a new requirement to prove investor losses could sharply limit when and how much the SEC can recover.

      • How the justices seemed to lean (so far) – Questions at the argument before the Court suggested skepticism toward Sripetch’s position, with several justices asking why it would be an unfair penalty to take back ill-gotten gains and noting the practical difficulty of proving each investor’s exact loss.


If you’ve ever wondered how the U.S. Securities and Exchange Commission (SEC) actually gets money back after it catches a fraudster, one of its biggest tools, disgorgement, is now under the microscope. This week, the U.S. Supreme Court heard arguments in a case, Sripetch v. SEC, that sounds technical on paper but has at its core a simple question: When the SEC makes a fraudster give up illegal profits, does it have to prove that investors suffered measurable, out-of-pocket losses first?

The case centers on Ongkaruck Sripetch, who the SEC says pocketed illicit proceeds through a classic pump-and-dump scheme from 2013 to 2017. Pump-and-dumps often involve thinly traded penny stocks: a person hypes up the price, sells into the spike they caused, and walks away richer. The other traders who bought into the hype are left holding the bag.

Sripetch admitted violating securities law and, in his subsequent criminal case, was sentenced to 21 months in prison. Separately, in the SEC’s civil action, a federal court in California ordered Sripetch to repay more than $3 million in ill-gotten gains plus interest.

The Supreme Court case isn’t a serious argument against the SEC’s ability to seek disgorgement – numerous courts have recognized the remedy for years, and Congress has since written the SEC’s ability to pursue it into federal law. The core question in the case is narrower, yet crucial for the SEC’s mission. It asks whether the SEC must show that victims suffered pecuniary or economic harm before a court can order disgorgement. Federal appeals courts have split on that point, which is why the Supreme Court agreed to take the case.

What is disgorgement, exactly?

Think of disgorgement as a legal “give it back” order. If a person or company makes money by breaking the securities laws – say by manipulating prices, lying to investors, or running a Ponzi-style scheme – disgorgement is designed to strip the profits away from that wrongdoing and the wrongdoers. In theory, it’s not about punishing someone for being bad; rather, it’s about making sure crime doesn’t pay.


Indeed, that “not a punishment” framing is important because the SEC has other ways to punish those who violate securities law – such as civil penalties, bars on serving as an officer or director, industry suspensions, and more. Disgorgement is supposed to be different – an action that aims at profits, not pain. The government’s position in the Sripetch case puts it bluntly: Disgorgement is meant to strip ill-gotten gains from wrongdoers, not to compensate victims for their losses.

And disgorgement is not a niche tool; the SEC regularly recovers large sums through it. According to recent figures, the SEC obtained about $1.4 billion through disgorgement in fiscal 2025 (excluding certain amounts), and $6.1 billion the year before, which represented nearly three-quarters of its total financial penalties for that year.

Those numbers may help explain why this Supreme Court fight is being watched so closely: The outcome could either keep the SEC’s playbook intact or force it to do a lot more legwork before it can ask courts to order payback.

The arguments before the Court

Earlier this week, both sides argued before the Supreme Court over the future of disgorgement and what requirements the SEC might have to meet when asking a court to order it.

Sripetch’s argument – Lawyers for Sripetch told the Court that the SEC shouldn’t be able to get disgorgement unless it can show that investors actually suffered financial harm, such as a price drop caused by the fraud or some other measurable loss. If the SEC can’t prove that kind of harm, they argued, then making Sripetch pay money looks less like giving it back and more like an impermissible penalty that the SEC is not allowed to levy.

The government’s argument – Lawyers for the U.S. Justice Department, defending the SEC, said the proof-of-loss requirement makes no sense. Disgorgement, in their view, is about the defendant’s gains, not the victim’s losses. One government lawyer summed it up as a straightforward principle: Disgorgement is intended to ensure a defendant does not profit from their own wrongdoing.

At this week’s argument, the justices sounded (at least generally) more sympathetic to the government than to Sripetch. Justice Amy Coney Barrett pressed the defense on its basic logic: If the court is only taking away ill-gotten gains – money the wrongdoer was never entitled to – why is that a penalty at all? Justice Ketanji Brown Jackson made a similar point, suggesting disgorgement would only feel like punishment when someone is forced to pay money that was rightfully theirs.

When Sripetch’s lawyer suggested the SEC should have to identify and prove each victim’s dollar loss, Justice Sonia Sotomayor’s response was basically, “Why would anyone bother?” If the SEC has to run a mini-trial on every investor’s exact harm just to reclaim the fraudster’s profits, disgorgement would be unworkable in many cases.

The practicality of that point is a big deal in securities fraud. In real markets, harm can be scattered across thousands of trades, mixed up with normal price swings, and hard to trace to one bad actor. Disgorgement, on the other hand, gives securities regulators a way to focus on the part that’s often the clearest: How much ill-gotten profit the fraudster made. The idea is deterrence-by-math – if you can’t keep the profits, the incentive to run the scheme shrinks.
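The "deterrence-by-math" asymmetry can be shown with a toy calculation. All figures here are invented for illustration; they are not the numbers from the Sripetch case.

```python
# Illustrative arithmetic for the gain-based vs. loss-based contrast above.
# The fraudster's gain is a single number computed from their own trades;
# investor losses are scattered across many trades and tangled with normal
# market moves. All figures below are invented.

shares = 1_000_000
buy_price_cents = 2    # accumulate a thinly traded penny stock at $0.02
sell_price_cents = 9   # sell into the hyped spike at $0.09

# Gain-based disgorgement: one number, from the wrongdoer's trading records.
ill_gotten_gain = shares * (sell_price_cents - buy_price_cents) / 100
print(f"Disgorgement target (gain-based): ${ill_gotten_gain:,.0f}")
# prints: Disgorgement target (gain-based): $70,000

# A loss-based standard would instead require proving harm buyer by buyer,
# net of ordinary market movement -- e.g., thousands of entries like
# (investor, shares_bought, price_paid, price_but_for_fraud) litigated
# in a mini-trial each.
```

The asymmetry is the point: the gain side is one subtraction over the defendant's own records, while the loss side multiplies into per-investor causation fights.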


Still, some justices raised broader concerns about how disgorgement gets used in the real world, such as whether certain applications start to look punitive, or whether they raise questions about a defendant’s right to a trial by jury. However, the Court also seemed interested in deciding only the question of the requirement to prove victims’ losses and leaving those bigger constitutional debates for another day.

Why this matters (even if you aren’t the SEC)

If the Supreme Court agrees with Sripetch and requires proof of investor pecuniary harm, the SEC could face a higher hurdle in cases in which misconduct is real, but losses are tough to quantify on a trade-by-trade basis. That could mean fewer disgorgement awards, smaller ones, or more pressure to rely on classic penalties instead.

If the Court backs the government, however, disgorgement stays what it has largely been – a fast, flexible way to reclaim profits from securities fraud and a core part of how the SEC tries to keep the securities markets honest.

Either way, the ruling will shape how the SEC negotiates settlements, litigates fraud cases, and talks about remedies and punishments going forward. With the Court expected to issue its decision by the end of June, securities lawyers and stock market mavens will be keeping an eye on this case.


You can find more about the challenges facing the SEC here

More SARs, not better ones: Why AI is about to flood the system
/en-us/posts/corporates/ai-driven-sars/ | Mon, 13 Apr 2026

Key insights:

      • SAR volume is significantly underreported – Continuing and amended filings add approximately 20% to the official count yet remain invisible in trend analyses.

      • Filing activity is highly concentrated – A few large financial institutions dominate SARs volume, meaning trends reflect their practices more than systemic changes.

      • Agentic AI will drive a surge in SARs – Agentic AI risks increased noise over actionable intelligence, without addressing the unresolved question of whether current filings yield meaningful law enforcement outcomes.


The Suspicious Activity Reports (SAR) that financial institutions file with the U.S. Treasury Department鈥檚 Financial Crimes Enforcement Network (FinCEN) provide valuable insight, although they may not offer a comprehensive picture.

Before any meaningful discussion of the future of SARs, the financial crime community needs to be clear about what is being measured. In 2025, for example, SAR filings totaled more than 4.1 million, an almost 8% increase over the total number of SARs filed in 2024.

Every figure FinCEN has published reflects original SARs only. Continuing activity SARs, which represent roughly 15% of all filings, are submitted under the original Bank Secrecy Act (BSA) identification number and never appear as new filings. Corrected and amended SARs add another 5% on top of that. This makes the real volume of SARs activity approximately 20% higher than what is reported.
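Applied to the 2025 figure, the adjustment described above looks like this (the 15% and 5% rates are the approximations stated in the text):

```python
# The ~20% undercount described above, applied to the 2025 figure.
# Continuing activity SARs (~15%) are filed under the original BSA ID and
# corrected/amended SARs (~5%) never appear as new filings, so neither
# shows up in FinCEN's published counts of original SARs.

reported_original_sars = 4_100_000   # approximate 2025 published figure
continuing_rate = 0.15               # filed under the original BSA ID
amended_rate = 0.05                  # corrections and amendments

actual_volume = reported_original_sars * (1 + continuing_rate + amended_rate)
print(f"Reported original SARs:  {reported_original_sars:,}")
print(f"Approximate real volume: {actual_volume:,.0f}")
```

In other words, roughly 4.9 million filing events sit behind the 4.1 million headline number, which is why trend lines built on the published counts alone understate the workload.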


Recent FinCEN guidance giving financial institutions more flexibility around continuing activity SARs sounds significant on paper, but as former Wells Fargo BSA/AML chief Jim Richards points out: “It won’t change the reported numbers – because those filings were never counted to begin with.” Financial crime professionals need to keep that gap in mind every time a trend line gets cited.

2025 was steady, not spectacular

There were roughly 300,000 SARs filed every single month of 2025, and the most notable thing is that nothing notable happened. That is likely a first on the volume side and worth acknowledging, but beyond that milestone the year did not hand financial crime professionals anything noteworthy. In a space that has dealt with pandemic distortions, crypto chaos, and fraud spikes that seemed to come out of nowhere, steady volume and predictable patterns are a little surprising. A quiet data set, however, is not the same as a quiet landscape, and financial crime professionals who are reading stability as stagnation may find themselves flat-footed when the numbers start moving again.

For example, one of the most underleveraged insights in the SARs space is just how concentrated filing activity really is. The numbers are stark: The top four banks file more SARs in a single day than 80% of the rest of the banks file in 10 years, according to 2019 data.

The average community bank files fewer than one SAR a week, while the largest institutions file more than 500 a day. “50 a year versus 500 a day,” notes Richards, adding that such asymmetry has real implications for how the financial industry interprets trends. Meaningful movement in SARs data, up or down, is almost entirely dependent on what a handful of mega-institutions decide to do.

Not surprisingly, money services businesses (MSBs) are the second largest filing category, and virtual currency exchanges are almost certainly driving recent growth there, even if outdated category definitions make that difficult to confirm directly. Credit unions round out the top three.

The filing philosophy hasn’t changed and shouldn’t

Regulatory noise occasionally suggests that institutions should be more selective about what they file. However, compliance and legal reality have not shifted. No institution has ever faced serious consequences for filing too many SARs, and the cases that result in enforcement actions, reputational damage, and regulatory scrutiny are consistently about missed filings or late ones.

鈥淵ou’re not going to get in trouble from filing too much,鈥 Richards says. 鈥淣obody ever has, and I doubt if anyone ever will.” For financial crime professionals, the calculus remains exactly what it has always been 鈥 when in doubt, file. That posture isn’t going to change, and frankly it shouldn’t.

Yet, here is where the SARs space gets genuinely interesting. Agentic AI use in SARs filings – systems in which multiple AI agents work through a case from screening to decision to documentation – is beginning to move from concept to deployment. The impact on filing volume likely will be significant.


Whereas a small team today might work through a handful of cases a week, AI-assisted workflows could push that into the dozens. Multiply that across institutions already inclined to file rather than miss something, and the result is a coming surge in SARs volume that could play out over the next two to four years.
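A back-of-the-envelope projection makes the scale of that surge concrete. Every number below is an invented assumption for illustration, not a forecast or an industry statistic.

```python
# Back-of-the-envelope projection of the volume surge described above.
# Every figure here is an invented assumption, not real industry data.

cases_per_analyst_week_manual = 5   # "a handful of cases a week"
cases_per_analyst_week_ai = 30      # "into the dozens" with agentic AI
filing_rate = 0.6                   # assumed share of worked cases ending in a SAR
analysts = 10_000                   # hypothetical industry-wide analyst headcount

manual_sars = analysts * cases_per_analyst_week_manual * filing_rate * 52
ai_sars = analysts * cases_per_analyst_week_ai * filing_rate * 52

print(f"Annual SARs, manual workflow: {manual_sars:,.0f}")
print(f"Annual SARs, AI-assisted:     {ai_sars:,.0f}")
print(f"Multiplier: {ai_sars / manual_sars:.0f}x")
```

Even if the real multiplier is half this sketch's 6x, an "inclined to file" industry adopting agentic AI would swamp the current baseline within a few filing cycles.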

“Agentic AI has the potential to be a game changer on how we do our work,” Richards explains. “But I believe it’ll guarantee that there will be more SARs filed and not necessarily better and fewer SARs filed.” Indeed, the critical point for the financial crime community to internalize is exactly that.

The risk is a system flooded with AI-generated SARs of variable quality, creating more noise for law enforcement to sort through rather than sharper intelligence to act upon. Once the largest institutions adopt agentic AI as a best practice, others will follow quickly, and regulators will likely be several steps behind.

The value question can’t wait

The current SAR framework has been in place since 2014. Yet after 12 years of filings, the financial crime community still lacks a clear public accounting of whether that data has produced actionable law enforcement outcomes.

So, the question Richards is asking is one the entire industry should be asking: “Has anybody asked law enforcement?”

This question reflects a larger challenge that the industry needs to confront more aggressively, especially as AI technology is set to dramatically increase filing volume across the board. Increasing the volume without improving how the information is used does not represent progress. If SARs are not generating real investigative value, the solution is not to file more of them faster – instead, the pipeline should be fixed before it grows any bigger.


You can find more about the challenges that financial institutions face in managing SARs here

The banks you don’t know you’re using: Risks of unregulated banking /en-us/posts/government/unregulated-banking-risk/ Wed, 01 Apr 2026 17:10:50 +0000 https://blogs.thomsonreuters.com/en-us/?p=70163

Key insights:

      • Convenience has outpaced consumer understanding – Many users treat apps, prepaid accounts, and rewards programs as simple payment tools, remaining unaware they are entrusting their money to entities with few safeguards.

      • Risk is no longer confined to traditional banks – Some of the most significant financial activities now occur within platforms and brands that do not resemble banks at all.

      • Opacity enables systemic vulnerability – The less transparent an institution’s obligations, leverage, and oversight, the easier it is for financial fragility, misconduct, and systemic risk to grow unchecked.


When you think of where money is held, you generally think of a bank. However, as we look at the financial landscape today, money is being held at a wide range of institutions that often have varying levels of safety and oversight. Entities from Starbucks to Visa to Coinbase hold money for individuals, effectively serving as a bank, but often without the regulatory framework that comes with it.

Behind the scenes, Starbucks can operate much like a bank. In its daily operation, it collects prepaid funds that resemble deposits, holds them as liabilities, and uses them internally – all without offering interest, cash withdrawals, or FDIC insurance. Starbucks’ rewards program holds $1.8 billion in customer cash; if it were a bank, that balance would make it bigger than 85% of chartered banks.

This dynamic extends well beyond coffee shops. “Popular digital payment apps are increasingly used as substitutes for a traditional bank or credit union account but lack the same protections to ensure that funds are safe,” warns the Consumer Financial Protection Bureau. If a nonbank payment app’s business fails, your money is likely lost or tied up in a long bankruptcy process.

Shadow banking

Think of a Starbucks gift card as a financial instrument. Technically it is one, but no one seriously worries about it being weaponized for large-scale financial crime. Most people’s concern about a gift card is simply losing it. The real issue is the broader trend: Nonbank institutions managing vast sums without commensurate oversight – and scale matters. A lost gift card is a personal inconvenience; an unregulated institution managing billions of leveraged consumer dollars is a systemic one.

Shadow banking encompasses credit and lending activities by institutions that are not traditional banks and that, crucially, do not have access to central bank funding or public sector credit guarantees. And because they are not subject to the same prudential regulations as depository banks, they do not need to hold reserves as high relative to their market exposure, allowing very high levels of leverage, which in turn can magnify profits during boom periods and compound losses during downturns.
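The leverage mechanics can be shown in two lines of arithmetic. The leverage ratios below are illustrative assumptions (ignoring funding costs), not regulatory figures for any institution.

```python
# Why thin reserves matter: with higher leverage, the same move in asset
# values produces a much larger swing in equity. Ratios are illustrative
# assumptions only, and funding costs are ignored for simplicity.

def equity_return(leverage: float, asset_return: float) -> float:
    """Return on equity for a given leverage ratio and asset return."""
    return leverage * asset_return

for asset_move in (0.04, -0.04):
    bank = equity_return(10, asset_move)     # bank-like leverage
    shadow = equity_return(25, asset_move)   # lightly regulated nonbank
    print(f"assets {asset_move:+.0%}: bank equity {bank:+.0%}, "
          f"shadow equity {shadow:+.0%}")
```

At the hypothetical 25x leverage, a 4% gain in asset values doubles equity, and a 4% decline wipes it out entirely – which is exactly the boom-time amplification and downturn fragility described above.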

The shadow banking ecosystem is diverse, and each segment of it presents distinct risks:

    • Hedge funds and private equity firms – Firms like Blackstone, KKR, and Apollo manage vast capital pools using leveraged strategies under limited oversight. Their size and borrowing levels may mean that market reversals can trigger rapid deleveraging, spilling risk into broader markets.
    • Family offices – A private company or advisory firm that manages the wealth of high-net-worth families, these can operate with even less transparency and often outside direct regulatory scrutiny, enabling them to engage in extreme leveraging and posing risks of sudden collapse.
    • Nonbank mortgage lenders and FinTechs – This group faces lower capital requirements than traditional banks, leaving thinner buffers to absorb losses during downturns, which can be especially concerning considering this sector’s rapid growth.
    • Crypto exchanges – Like much of the cryptocurrency ecosystem, these exchanges operate in jurisdictional gray zones, complicating enforcement and enabling illicit financial flows.
    • Money market funds – While these are generally perceived as safe, they can suffer runs if confidence in underlying assets erodes, which can force fire sales that destabilize related markets.
    • Special Purpose Vehicles (SPVs) and Structured Investment Vehicles (SIVs) – These investment instruments allow large institutions to move risk off their balance sheets, rendering such activity invisible to regulators.

Shadow banking may be the single greatest challenge facing financial regulation. These non-traditional institutions act like banks, but without the safeguards that make banks accountable. And where accountability is absent, opportunity often fills the void.

The same opacity that makes shadow banking difficult to regulate also makes it attractive to those with less legitimate intentions. Without mandatory reporting requirements, standardized oversight, or the threat of deposit insurance revocation, these institutions can become conduits for money laundering, fraud, terrorist financing, and sanctions evasion in ways that traditional banks simply cannot. The question is no longer whether these vulnerabilities exist, but how they continue to be exploited.

The challenge of regulation

The global financial system has always evolved faster than the rules designed to govern it. What began as a coffee loyalty program and a few alternative lending platforms has quietly morphed into a parallel financial universe, one that moves trillions of dollars with a fraction of the transparency that traditional banking requires. That gap between innovation and oversight is not just a regulatory inconvenience, it’s an open door for illicit actors.

Closing that door will require more than periodic enforcement actions or piecemeal legislation. It will require regulators, lawmakers, and institutions to reckon honestly with how broadly the definition of a financial institution has expanded, and who bears the risk when things go wrong. Because historically, it has not been the institutions themselves; rather it has been the customers, the investors, and ultimately the public.

The first step, of course, is awareness. Recognizing that your money does not need to be in a bank to be at risk and that the custodians of that money need not be offshore shell companies to operate in shadows, can transform how we think about financial safety.

The line between a convenient app and an unaccountable financial intermediary is thinner than most realize. And in the world of financial crime, thin lines have a way of vanishing entirely.


You can learn more about the many challenges facing financial institutions today here

Financial crime implications of a US-Iran war: The emotional drivers of instability & illicit flows
/en-us/posts/corporates/us-iran-war-financial-crime-implications/ | Tue, 10 Mar 2026

Key insights:

      • Geopolitical crises fuel financial volatility and illicit activity – Conflicts have traditionally accelerated capital shifts and flows, creating cover for bad actors.

      • Predictable patterns emerge – Financial institutions should watch for sudden cross-border activity, unusual cash deposits, and transactions from border areas.

      • Conflict zones enable black market expansion – Financial institutions should also adapt their compliance systems to detect the more sophisticated methods used by criminals, tightening screening and enhancing staff training.


While business and international politics may appear cold and calculating, these things are often driven by emotion, especially fear – and fear of instability often drives market volatility.

So it goes as the United States attacks one of the world’s largest militaries and supporters of regional terror groups, causing deepening instability in a Middle East already beset by violence. It is certain that there is already a surge of money flowing in and out of the region for different reasons. Legitimate and illegitimate actors alike will seek to both run away from the crisis and profit from it. However, there are some anti-money laundering specific thoughts that financial institutions need to consider during a time of global uncertainty.

The bottom line: lots of money is on the move. Funding will send aid groups toward the crisis; it will also send logistical supplies, war materiel, and other necessities. All of these cost money, and defense sectors in multiple countries will be pumping out munitions to refill stockpiles in any country that is related to or in the neighborhood of the conflict.

Not every large transaction is an unusual, reportable event, but financial institutions now need to look one or two layers below the surface. What does not seem related on the surface may well be connected underneath. Look at beneficial ownership of companies and vessels, and look at the relations of the owners, not just the OFAC screening results of those people themselves. The financial system will, and should, allow legitimate funds to flow. However, financial investigators must remain diligent to catch bad actors who take advantage of the surge in non-profit activity or the urgency with which legitimate businesses operate in a conflict zone.

Risk Factor 1: Capital flight from regime change

Just as the fall of the Al-Assad regime in Syria caused family funds to flow out of the country as regime members fled, you will see the same with politically exposed persons (PEPs) fleeing regime change in Iran. A political crackdown will come. Whether the victors are on the side of the West or not remains to be seen, but some factions are going to flee the country and take family wealth with them.

Banks and other financial services firms should watch for anyone connected to people moving money through neighboring countries, places they may have literally hiked or driven through before depositing cash into a financial institution. There are stories of refugees leaving with gold bands on their arms, cash in false-bottomed purses, and diamonds in the lining of sweaters. These valuables will be converted to cash in neighboring countries and put into financial systems less affected by the conflict. An influx of cash throughout the region, therefore, could indicate this type of capital flight.

Risk Factor 2: Illicit finance and black markets

Since the fall of the Syrian regime, we have also become aware of the captagon trade that helps fuel addiction and armed conflict. There are certainly other substances and drug trafficking networks about which we know very little on this side of the secrecy veil.

Therefore, this instability will be seen as a time of opportunity for criminal groups. Indeed, with Assad's security forces no longer controlling the Middle Eastern captagon and other narcotics trade, and with various armed groups looking for funding sources, this is an illicit business opportunity.

Financial institutions can expect rapid movement of money between unrelated shell corporations, new corporations, and shadow vessels. They also should expect the black market to boom with drugs, contraband Iranian oil, and funds tied to narcotics that they have yet to discover. Illegal arms will also generate funding, so all of the methods, both formal and informal, used to transfer value will become active.

In fact, large portions of such funding will flow through financial institutions, and peer-to-peer payment providers, FinTechs, and money transmitters should be especially wary of funds moving rapidly through their platforms. A burst in conflict means a burst in activity from illicit sources; therefore, enhanced, targeted monitoring is a must.

How financial institutions鈥 risk & compliance teams should respond

First, all financial institutions' risk & compliance departments need to assess their institutions' OFAC and sanctions screening search parameters. This is a good time to dial up fuzzy logic capability and reduce match percentage thresholds. In other words, risk tolerance should go down while the metaphorical dragnet gets wider. Surge the department's personnel capability to compensate if you have to, because that is better than a strict-liability OFAC fine. Remember, OFAC sanctions are closely tied to national security, especially when it comes to Iran. This is not an arena in which leniency can be expected. Compliance teams should look at monitoring systems and thresholds immediately, create geographical targeting models to cover the conflict zone, and consider a command center approach to deal with the fluidity of the situation until things settle.
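To make the "wider dragnet" idea concrete, here is a minimal fuzzy name-screening sketch. The watchlist entries, the `screen` helper, and both threshold values are hypothetical, invented for illustration; real screening engines use far more sophisticated matching than simple string similarity.

```python
# Illustrative sketch only: a toy fuzzy watchlist check.
# Names and thresholds are hypothetical, not real sanctions data.
from difflib import SequenceMatcher

SANCTIONED_NAMES = ["Example Trading Co", "Ali Reza Example"]  # hypothetical entries

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity to `name` meets the threshold."""
    hits = []
    for entry in SANCTIONED_NAMES:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# Lowering the threshold widens the dragnet: more potential matches to review.
strict = screen("Ali Reza Exampel", threshold=0.95)   # a typo slips past the strict cutoff
relaxed = screen("Ali Reza Exampel", threshold=0.85)  # caught when tolerance is dialed up
```

The trade-off is exactly the one described above: a lower threshold surfaces more candidates for human review, which is why surging analyst capacity goes hand in hand with widening the net.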

If your institution has not already taken the hint from regulators, this also is an opportunity to double down on Customer Due Diligence and identity verification. Front line staff and embedded business compliance personnel should receive updated training and job aids to increase awareness and hone internal reporting. Indeed, it is an advanced business skill to understand complex corporate beneficial ownership, much less to detect when it may be tied to illicit activity or corrupt regimes. Now is the time to increase that level of knowledge and thereby make the culture of compliance more robust.

In every crisis there is opportunity as well as risk: Managing the risk allows every company to take advantage of the opportunity, shore up its mission, and strengthen the institution.


You can find out more about the geopolitical and economic outlook for 2026 here

AI-powered fraud: 5 trends financial institutions need to understand in 2026 /en-us/posts/corporates/ai-powered-fraud-5-trends/ Tue, 17 Feb 2026 15:19:11 +0000 https://blogs.thomsonreuters.com/en-us/?p=69411

Key insights:

      • AI scales deception — Fraudsters automate convincing scams, create synthetic identities, and overwhelm legacy controls, making AI an essential part of financial institutions' anti-fraud solution.

      • "All-green" fraud is rising — The biggest losses often happen in correctly authenticated sessions, making them much harder to detect.

      • Behavior plus collaboration wins — Financial institutions need to shift from point-in-time checks to real-time, cross-channel behavioral signals and tighter inter-institution cooperation to spot coordinated campaigns and reduce friction without stalling growth.


The fraud financial institutions face in 2026 isn't what it was even two years ago. AI has industrialized deception, synthetic identities bypass traditional checks, and scams manipulate legitimate customers into moving their own money even as every security control shows green.

Today, financial institutions face a perfect storm, according to Michal Tresner, CEO of ThreatMark, and Sara Seguin, Director of Enterprise Banking at Alloy. Indeed, they're trying to manage attacks that scale automatically, identities that look real but aren't, and victims who authenticate correctly before being convinced to hand over funds.

5 trends financial institutions need to understand in 2026

Looking at each of these five key challenges individually can offer both perspective and possible solutions.

1. The AI threat multiplier

Generative AI (GenAI) and large language models (LLMs) have fundamentally changed the fraud landscape. "AI is now the biggest threat facing financial institutions in 2026," Tresner notes, adding that fraudsters are leveraging these technologies to create highly convincing content while automating attacks at unprecedented scale — a combination that overwhelms traditional security systems.

Seguin agrees, confirming the trend. "Financial institutions are seeing a measurable increase in AI-enabled financial crimes, while consumers increasingly expect banks to deploy AI-based security in response," she explains. The reality is stark: AI has become an essential tool for both fraudsters and those fighting against them.

2. The onboarding dilemma

In another area, the account opening process represents a critical vulnerability. Seguin points to rising first-party fraud and scams as particularly challenging because perpetrators often appear indistinguishable from legitimate customers going through the onboarding process. “A person may open an account with seemingly normal intentions 鈥 direct deposit or everyday banking 鈥 only to later engage in fraudulent activity,” she explains.


Onboarding is where institutions have the least certainty about either the authenticity of the identity or the legitimacy of the intent.


Tresner identifies a related threat: Synthetic identities. “Rather than stealing real identities, fraudsters now generate convincing fake ones, complete with realistic identity documents and even AI-generated images or video,” he says, noting that these synthetic identity accounts are exploding and frequently serve as infrastructure for moving stolen funds.

The common thread is that onboarding is where institutions have the least certainty about either the authenticity of the identity or the legitimacy of the intent.

3. Authentication under siege

Similarly, and even as financial institutions work to strengthen onboarding controls, account takeover remains a persistent threat. Fraudsters are now using AI to bypass authentication mechanisms at scale, making previously reliable security gates less trustworthy, Tresner explains. "Successful authentication can no longer serve as a definitive indicator of safety."

Indeed, a properly authenticated session may still be the entry point for fraud, whether committed by an intruder or through a legitimate customer who is being manipulated.

4. The “all green” problem

Which brings us to another fraud scenario faced increasingly by financial institutions, and one that Tresner says may be 2026's most operationally challenging issue — the fact that many scams don't trigger traditional fraud controls. When the legitimate account holder initiates a transaction from their usual device and location using correct credentials, every standard check appears normal. The difference is the persuasion happening on the other side as fraudsters convince victims they're interacting with trusted entities like banks, law enforcement, or romantic partners, and then direct them to transfer money.

Seguin notes that detecting these scenarios requires new approaches, such as identifying subtle behavioral signals like hesitation immediately before a money transfer. “Traditional device and credential checks won’t help when the customer is genuinely authenticated but acting under manipulation,” she explains.
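A minimal sketch of the kind of hesitation signal described above, under stated assumptions: the `hesitation_flag` helper, the 3x multiplier, and all timings are invented for illustration, not a real detection rule used by any vendor.

```python
# Illustrative sketch: flag unusual pre-transfer hesitation relative to a
# user's own baseline. Timings and the 3x multiplier are hypothetical.
from statistics import median

def hesitation_flag(baseline_secs: list[float], current_secs: float,
                    multiplier: float = 3.0) -> bool:
    """True if the pause before confirming a payment is far outside this
    user's typical behavior, a possible coaching/manipulation signal."""
    typical = median(baseline_secs)
    return current_secs > multiplier * typical

# A user who normally confirms in about 4 seconds but pauses 40 seconds,
# perhaps while a scammer talks them through the screen, gets flagged.
usual = [3.2, 4.1, 3.8, 4.5, 3.9]
print(hesitation_flag(usual, 40.0))  # True
print(hesitation_flag(usual, 5.0))   # False
```

The point is the comparison against the customer's own history: device and credential checks pass either way, so only a behavioral baseline distinguishes the two sessions.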

5. Fraud as an industrial operation

Tresner emphasizes that modern fraud is not a series of isolated events but a coordinated, multi-step operation. Campaigns typically begin with establishing or compromising mule accounts, then deploying automated phishing kits to harvest personal data.


Younger users represent a growing target due to their online activity and platform usage, and the emergence of human trafficking-linked fraud operations has worsened this problem.


Not surprisingly, younger users represent a growing target due to their online activity and platform usage, Seguin says, adding that the emergence of human trafficking-linked fraud operations, including sextortion and overseas scam compounds, has worsened this problem.

What works in 2026

Tresner's core recommendation for fraud investigators in financial institutions is to shift their focus from static, point-in-time checks to behavior-based detection. "Behavior profiling and analytics across channels can identify sophisticated actors and manipulation patterns invisible in single transactions or logins," he explains, stressing that real-time cooperation among financial institutions is critical because fraudsters collaborate, and isolated defenses are insufficient.

Further, Seguin reframes fraud prevention as a growth enabler. “Effective risk controls allow institutions to launch products faster, set higher transaction limits with confidence, and avoid overly restrictive policies driven by fraud concerns,” she notes. Indeed, modern fraud defense isn’t just about reducing losses but about enabling safe expansion.

The 2026 fraud landscape presents compounding challenges: AI-driven scale and realism, onboarding uncertainty from synthetic identities and hidden intent, weakening authentication boundaries, scams that produce legitimate-looking transactions, and industrialized fraud operations that can span channels and institutions. Success in this area requires financial institutions to treat fraud as a behavioral, multi-channel, collaborative challenge because that’s exactly how their adversaries are operating.


You can learn more about the many challenges facing financial institutions today here

Strange intersections: The state of 21st century financial crime /en-us/posts/corporates/state-of-financial-crime/ Tue, 06 Jan 2026 16:01:04 +0000 https://blogs.thomsonreuters.com/en-us/?p=68951

Key insights:

      • Old laundering patterns have modern wrappers — Nefarious actors now cooperate to move value through mirror-trade commodity flows and sometimes crypto, blending legal transactions with illicit proceeds.

      • FinTech expands laundering options — Peer-to-peer apps, reloadable cards, kiosks, and virtual assets allow for the execution of many small conversion transactions that break up funds and blur clean-to-dirty movement.

      • Fraud scales cheaply in an AI era — As cash use drops, scams and extortion become lower-risk and easier to industrialize — sometimes through forced-labor scam operations — making verification and policy adaptation urgent.


When incentives align, strangers can become business partners. In the 21st century, traditional finance, banking, and cash payments have been disrupted by a wave of technological advances for which we are all unprepared. This time of crisis and opportunity has created an unexpected alliance between FinTech firms and traditional banking institutions.

To fight financial crime, however, it is important to deal with the ever-evolving ways for currency to change forms and change hands across vast distances. This new way of moving money mirrors ancient systems of debt ledgers & interpersonal trust, often known as Hawala or Fei Chien. Criminals continue to innovate with both methods, creating unsettling partnerships.

The cartel-business partnership

Cartels, underground banking networks, and legitimate businesses now collaborate — sometimes unwittingly — to launder money by moving value through mirror-trade commodity flows and cryptocurrency, merging legal trade with illegal profits. Near-cash-style FinTech methods — such as peer-to-peer apps, reloadable cards, kiosks, and virtual assets — can expand laundering opportunities by enabling numerous small conversion transactions that fragment funds and obscure the movement of illicit money. As cash use declines, fraud, including scams and extortion (sometimes executed through forced-labor scam operations), becomes less risky and easier to scale in the AI era, underscoring the urgent need for verification and policy adaptation.

The flow of illicit cash also extends to digital assets. Some of the cash money that gets stuffed into bitcoin ATM-style kiosks is from the drug trade. Indeed, the U.S. Treasury Department's Financial Crimes Enforcement Network (FinCEN) issued an alert on this topic as well and, while the two schemes seem distinct, we can speculate that some of the resulting Bitcoin, crypto, or other virtual assets went to underground bankers facilitating a mirror trade for a countryman.

What is old is new again

In the world of finance, the dawning of a new era of digital, on-demand, borderless transactions provides access to an exciting frontier of possibility. New coins, new blockchain tokenization uses, and new FinTech tools with cool names are all rising and falling faster than the price of bitcoin.

The players in this intersection have figured out that trade is profitable, and legal trade leading to illicit substance trade is even more profitable. Underground shipping, sanctions evasion, and dark web services for money laundering are all profitable by themselves, and when combined, they represent an illicit economic blitzkrieg.


Cartels, underground banking networks, and legitimate businesses now collaborate — sometimes unwittingly — to launder money by moving value through mirror-trade commodity flows and cryptocurrency.


Crypto is the new Hawala or Fei Chien because, with no bank or government involved, people can keep common copies of a ledger instead of relying on a hawaladar or Chinese underground banker to keep records. Virtual assets could facilitate the currency side of mirror trades, refilling a person's coffers via digital transfer which can then be moved to an exchange and on to a local bank.

Commodities are the new cash because mirror trades are physically settled in commodities. For example, investment in source chemicals for drugs, negotiated at a discount, helps expand the illicit cartel business. Similarly, one-off items can be used for large-cash replacement transactions.

FinTech is the new money service business (MSB). FinTechs and MSBs are regulated in much the same way but often serve different market segments, and many FinTechs now exchange government fiat currency for one or more forms of cryptocurrency. Money laundering thrives on breaking up funds into smaller amounts to avoid reporting; therefore, a multitude of near-cash options like peer-to-peer payment apps, reloadable cards, and virtual assets helps the launderer with this problem.

One might imagine that lower-tier street dealers could have several peer-to-peer payment app accounts for ease of use, because although the criminal is running an illicit business, it's a business, nonetheless. Industry experts call these small payments conversion transactions because they usually come from a clean, legitimate payroll source but are converted to dirty funds when spent on an illicit substance or activity.
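The splitting pattern described above can be sketched as a simple aggregation check. Everything here is hypothetical — the account names, amounts, `small_max`, and `window_total` are invented for illustration and are not regulatory limits; real monitoring systems also use rolling time windows and many more signals.

```python
# Illustrative sketch: surface accounts whose many small peer-to-peer
# payments aggregate past a review threshold. All amounts and thresholds
# are hypothetical, not regulatory limits.
from collections import defaultdict

def flag_structuring(payments, small_max=500.0, window_total=3000.0):
    """payments: iterable of (account_id, amount) pairs.
    Flags accounts whose sub-`small_max` payments sum past `window_total`."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for account, amount in payments:
        if amount < small_max:          # only aggregate the "small" payments
            totals[account] += amount
            counts[account] += 1
    return {a: (counts[a], round(totals[a], 2))
            for a in totals if totals[a] >= window_total}

batch = [("acct_1", 450.0)] * 8 + [("acct_2", 450.0), ("acct_2", 9000.0)]
print(flag_structuring(batch))  # {'acct_1': (8, 3600.0)}
```

Note that the single large payment on `acct_2` is ignored by design: it would trip ordinary reporting thresholds anyway, while the eight small transfers on `acct_1` are exactly the conversion-transaction pattern that slips under them individually.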

Fraud is low risk and AI fuels the fire

In this rapid-fire digital transaction world, fraud is the new mugging, complete with racketeering and slave labor farms. The profit margin on physical intimidation has gone down because people use cash less often, and many seldom carry it at all.

Due to digital innovation, communication technology, and AI, however, the barrier to entry for fraudulent theft, extortion, or scamming has gone down dramatically as well. Presumably, the margins are high because the ability to communicate fraudulently has been exponentially amplified by these tech advances. Fraud and scams are ubiquitous to the point of impeding legitimate businesses from communicating with customers effectively.


The players in this intersection have figured out that trade is profitable, and legal trade leading to illicit substance trade is even more profitable.


Further, slave labor has reared its ugly head in yet another strange intersection among these many things. Fraudsters in Southeast Asia build warehouses filled with tech and then force local people to operate scams and fraud schemes at scale. Aggregated funds from these efforts are sometimes moved via commodity or artifact, but often these funds are gathered from kiosks or peer-to-peer apps and then moved through cryptocurrency transactions until they become increasingly arduous to track.

Looking to the new dawn

It seems every few minutes brings us a new tool, a new opportunity, a new way to move money, and a new way to get scammed out of it all. This expanding capability is fueled by GenAI and even more advanced forms of AI. Business expands, productivity expands, and resources are consumed faster. Fraud is enabled, scaled, and seems to hang in the very air.

With the proliferation of digital, borderless, and AI-enabled everything, the human touch is more important than ever. Business owners note that requests for memorabilia and other tokens of physical value continue to rise. Cash will not go away, but its share of transactions is already diminished with the advent of crypto, new intersections in commodity exchange, and other person-to-person ways to settle accounts.

For the financial institutions, government agencies, and fintech firms that populate this world, creating informed best practices and sensible policy documents is critical at this phase of innovation. Without a proactive approach, we cannot hope to stay ahead of criminals and keep legitimate markets secure.


You can find out more about how organizations are using new methods to detect and prevent financial fraud here

Blockchain: Built to catch criminals /en-us/posts/corporates/blockchain-catch-criminals/ Fri, 05 Dec 2025 17:01:33 +0000 https://blogs.thomsonreuters.com/en-us/?p=68673

Key insights:

      • Blockchain's transparency is a double-edged sword — While criminals use crypto for illicit activities, the permanent and public nature of the blockchain ledger creates an undeniable trail, making it a powerful tool for law enforcement to track and seize illicit funds.

      • The rise of crypto forensics — A growing industry of specialized firms and investigators is leveraging blockchain's inherent design to unravel complex financial crimes, demonstrating that "lost" crypto funds can often be recovered.

      • An evolving battlefield — Despite the ongoing challenges posed by tools like mixers and privacy coins, blockchain technology is fundamentally shifting how financial crime is fought, turning the very system criminals exploit into the means of their capture.


Cryptocurrencies and other digital assets are used by criminals, which is great for catching them. Indeed, the biggest criticism of crypto since its inception has been its criminal use, which was estimated to be almost half of all activity by the end of 2017. In the past three months alone, asset seizures and forfeitures of more than $22 billion in crypto have been made by authorities in the United Kingdom, the United States, and their international partners.

These historic interceptions of illicit funds prove that the fundamental architecture of blockchain — the digital ledger that underpins most virtual transactions — makes it the perfect tool for catching criminals. It validates the hypothesis of Satoshi Nakamoto, the presumed pseudonym of the person or persons who developed bitcoin, that fraud could be prevented through intentional system design.

While criminals assumed they could optimize their illegal activities using crypto to obfuscate fund flows, the blockchain ledger's immutability has created a niche for financial crime investigators seeking to unravel these cases. Companies like Chainalysis, Elliptic, and TRM Labs have become synonymous with these investigations, joined by a growing network of smaller firms that are democratizing crypto investigations, combating terrorist financing and online child abuse, and ultimately working to secure seized assets and prevent further harm. By all measures, the ecosystem is expanding rapidly.

Every crypto transaction creates a permanent trail that allows investigators to catch criminals even years after their crimes. This is how a digital exchange hack in 2016 that resulted in the theft of 120,000 Bitcoin worth $72 million (at the time), later chronicled in a Netflix documentary, was wrapped up years later with the seizure of $4.5 billion in crypto and the arrest of the two alleged perpetrators in 2022. Law enforcement may not move as fast as crypto, but if the whale is big enough, they will catch it.
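The "permanent trail" idea can be illustrated with a toy graph walk. The addresses and transfers below are invented for illustration; real tracing tools work over full blockchain data and layer clustering heuristics on top of this basic traversal.

```python
# Illustrative sketch: follow funds outward from a flagged address over a
# toy ledger. Addresses and edges are hypothetical, not real chain data.
from collections import deque

LEDGER = {  # sender -> list of receivers (hypothetical transfers)
    "hack_addr": ["mixer_1", "wallet_a"],
    "mixer_1": ["wallet_b"],
    "wallet_a": ["exchange_1"],
    "wallet_b": ["exchange_1"],
}

def trace(start: str) -> set[str]:
    """Breadth-first walk of every address reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        addr = queue.popleft()
        for nxt in LEDGER.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(trace("hack_addr")))
# ['exchange_1', 'hack_addr', 'mixer_1', 'wallet_a', 'wallet_b']
```

Because the ledger is public and immutable, this walk can be run years after the theft, which is why funds that pass through intermediary wallets still surface at an exchange where identity checks apply.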

Indeed, the scale of cryptocurrency-enabled crime threatens Western economic stability. The FBI received 149,686 crypto-fraud complaints in 2024, totaling $9.3 billion in losses, a figure likely significantly lower than the true total. More than 100,000 people are trafficked and forced to operate scams from compounds in Cambodia and Myanmar. The Prince Holding Group, a transnational criminal organization headed by Chen Zhi, generated approximately $10.95 billion annually.

Financial crime as economic warfare

These are just headlines. Further research in the Netherlands shows that only 11.8% of fraud victims actually report being victimized. While many dismiss fraud and blame victims, crypto-related fraud is becoming economic warfare, systematically draining wealth from Western economies while enslaving hundreds of thousands in forced labor camps across the Global South. With potentially $80 billion lost annually to crypto fraud, the impact extends beyond the roughly 1.14% of the US federal budget that figure represents. This illicit outflow causes loss of productive capital, tax base erosion, and reduced economic activity.

Yet the technology accused of enabling this new generation of fraud simultaneously provides the tools to detect and combat these criminal organizations more successfully than any financial crime fighting technology in history. The Chen Zhi case, easily the largest asset forfeiture in US history at around $15 billion, demonstrates this perfectly.


Every crypto transaction creates a permanent trail that allows investigators to catch criminals even years after their crimes.


This is why I’ve spent the last four years studying the crypto ATM industry. While most financial crime professionals saw a problematic service in a problematic industry, I saw a massive dataset of criminal activity that could predict other illicit activity beyond crypto ATMs. This dataset helped identify terrorist financiers, vendors of child sexual abuse material (CSAM), and countless scams and frauds. Layer data-rich sources like crypto ATMs with blockchain data, and a good investigator can achieve remarkable results.

Modern blockchain analytics leverage the features Nakamoto designed for trust and verification. Immutability makes evidence tampering impossible and investigations public, and verifiability allows investigators to validate every step of a criminal's crypto trail. Consensus mechanisms create a distributed jury of millions, validating the evidence chain further. These features enabled authorities to map the network, revealing 76,000 fake social media accounts operated from facilities using 1,250 phones across 10 Cambodian compounds, and tie it to $15 billion in bitcoin.

The same technology facilitating billions of dollars in pig butchering scams annually enables law enforcement to catch the transnational criminals and recover funds. Traditional financial crimes disappear into offshore accounts and shell companies, often leaving investigators blind. However, as anyone in blockchain forensics knows, Locard’s Exchange Principle remains true: Every contact leaves a trace. Blockchain’s public ledger means every suspicious transaction leaves a permanent clue.

Nakamoto's vision of "electronic transactions without relying on trust" inadvertently created a system for establishing criminal culpability. The blockchain's public nature convinced criminals they could hide in plain sight, but Nakamoto saw that participants would be deterred from fraud by this transparency. The naive assumption that users had nothing to hide if doing nothing wrong quickly revealed plenty were doing wrong. Still, the system proved fit for purpose once tools were built to catch bad actors. The white paper's emphasis on preventing double-spending through public verification created a framework in which crime-spending leaves permanent evidence. All a good investigator needs is time.

The rise of crypto forensics

As crypto advances, tools like bridges, mixers, and privacy coins pose constant challenges for investigators, but claiming the money is gone when crypto is involved is simply false. As blockchain forensics advances, criminals face an uncomfortable truth: They’ve been conducting operations on a permanent, public, immutable ledger. Their only protection is time and cryptographic puzzles that an entire industry is working to unravel.

While some commentary has been diligent in pointing out challenges in the industry and cases that have been missed, many more fraud schemes never see the light of day because blockchain forensics prevented them. And while the industry may not be perfect, the fact that it is working to build a safer financial system than what came before is commendable, and the accountability that public ledgers enable is energizing for those who must police them.

Unfortunately, the $15 billion Chen Zhi seizure isn’t the end but the beginning. With at least $64 billion stolen annually, these criminals have little incentive to stop. While some scam compounds have been dismantled, reports indicate they’re simply being relocated.

Nevertheless, blockchain is setting a new paradigm in financial crime, one in which the technology enabling crime will eventually become the weapon that defeats it.


You can learn more about financial crimes and other regulatory issues involving cryptocurrencies here

Scams aren’t just fraud 鈥 they’re engineered to exploit human nature /en-us/posts/corporates/scams-fraud-exploiting-human-nature/ Thu, 20 Nov 2025 19:02:27 +0000 https://blogs.thomsonreuters.com/en-us/?p=68515

Key insights:

      • Traditional fraud breaks systems; scams break people — Scams directed against individuals weaponize trust, urgency, and emotion and hit victims when they're stressed or distracted.

      • Nearly 1-in-4 adults have lost money to scams, and that number is climbing — Criminals now wield deepfakes, voice cloning, and AI to make their pitches eerily convincing — and the curve is still bending in their favor.

      • By the time someone reaches the payment screen, manipulation has already won — Real protection means flagging suspicious outreach early, verifying identities in real-time, and building friction into high-risk transactions, all before emotions override logic.


One of the most fundamental distinctions in financial security is this: Every scam is a fraud, but not all fraud is a scam. This week, it's worth pausing to note what makes scams different — and why that difference matters more than ever in 2025.

Traditional fraud typically exploits weak systems, such as stolen credentials, manipulated data, or technical vulnerabilities. Scams, on the other hand, exploit something far more powerful and harder to patch — human nature itself. Scams can weaponize trust, urgency, and emotion; and when those psychological levers are pulled at just the right moment, even savvy people can find themselves wiring money to someone they'll never see again.

The threat is only growing

The numbers tell a sobering story: More than 1-in-5 adults (22%) report losing money to scams. And Ayelet Biger-Levin, creator of ScamRanger, a technology designed to stop scams before they happen, doesn't mince words about the growing threat: "From a numbers perspective, scams are on the rise," she says. "They're going to continue to rise because criminals are becoming more sophisticated, leveraging the latest technology advancements including large language models (LLMs) and AI agents to scale operations."

Indeed, her definition cuts straight to what makes scams unique. "A scam is social engineering to convince an individual to either disclose personal information or transfer money directly to a criminal," she explains, adding that it's not a system breach; rather, it's a conversation that goes wrong — often in ways the victim doesn't realize until it's too late.

And the trajectory isn't encouraging. Biger-Levin says she expects the number of adults being victimized over the next 12 to 18 months will only increase. "In the US, I expect it to rise," she notes. "Criminals are rapidly leveraging tools that make scams more believable, such as deepfakes and voice cloning, which are used for impersonation to increase both scale and success."

And while we haven’t reached the tipping point yet, the curve isn’t bending in our favor.

Scams adapt to every new channel we create

Here's the uncomfortable truth: Scams aren't a glitch in the system; rather, they're a feature of human society that adapts with every new communication channel we build. Romance scams, investment lures, fake shopping sites, cryptocurrency schemes — these aren't amateur operations anymore. They're often run by organized networks, sometimes operating out of compounds in Southeast Asia, and they're supercharged by technology that makes deception easier and more convincing than ever.

Deepfakes can put your CEO’s face on a video call. Voice cloning can mimic a family member in distress. Increasingly, agentic AI can personalize phishing at scale, crafting messages that feel eerily tailored to your life. Educating people about ways to keep from becoming victims helps, absolutely. However, when a persuasive story lands at exactly the wrong moment – when you’re stressed, distracted, or emotionally vulnerable – logic often takes a back seat.

And if those fighting fraud are waiting until a victim reaches the payment screen to intervene, they’re already too late.

Meeting manipulation where it starts

To make real progress, we need to meet manipulation at first contact – the moment persuasion begins. That means pairing human-centered design with protective technology across the entire scam lifecycle.

What does that look like in practice? It means flagging risky outreach before it reaches an inbox, verifying websites and identities in real time and in context, and slowing down high-risk payments while prompting users with friction that feels helpful, not punitive. And critically, it means sharing signals and liability across the ecosystem – among banks, telcos, social platforms, and regulators – so they can all work from the same playbook.
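As an illustration of what that payment-side friction could look like – the signal names, weights, and thresholds below are invented assumptions for this sketch, not any real product’s logic – a simple risk gate might be:

```python
# Hypothetical sketch of "helpful friction" on a high-risk payment.
# Signal names, weights, and thresholds are illustrative assumptions,
# not a real fraud engine's logic.

def payment_risk_score(payment: dict) -> int:
    """Sum a few behavioral risk signals for a pending payment."""
    score = 0
    if payment.get("payee_first_seen_days", 999) < 7:
        score += 2  # brand-new payee: common in scam payments
    if payment.get("amount", 0) > 10 * payment.get("typical_amount", 1):
        score += 2  # far above this customer's usual payment size
    if payment.get("initiated_during_call", False):
        score += 3  # coached payments often happen mid-phone-call
    return score

def friction_for(payment: dict) -> str:
    """Map the score to an intervention that slows, not blocks."""
    score = payment_risk_score(payment)
    if score >= 5:
        return "hold_and_verify"   # pause and verify out of band
    if score >= 2:
        return "warn_and_confirm"  # tailored warning plus confirmation
    return "allow"
```

Under these assumptions, a payment to a payee added yesterday, initiated while the customer is on a phone call, scores 5 and is held for out-of-band verification, while routine payments pass through untouched.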

The constant in all of this is human psychology. The variable is how well our systems anticipate it.

Biger-Levin says she is optimistic about enforcement improving over time. “I do predict that long-term, these scam compounds are going to be taken down,” she says, adding that she’s also realistic about what comes next. “Criminals are not going to stop there and will continue to attack individuals using advanced technology. The one common denominator, though, is human psychology, and that is something we can tackle and protect with the right consumer empowerment in place.”

That’s the core challenge. Regulators or financial services compliance agents can shut down a scam operation, but they can’t patch human emotion. Technology solutions must be designed around how people actually think and behave under pressure – not how we wish they would. That means building systems that recognize when someone is being groomed, when urgency is being manufactured, and when trust is being weaponized.

The old advice still holds – because it reflects how we think

There’s a reason the classic warnings never go out of style. The old saying, “If something seems too good to be true, it probably is,” is not outdated wisdom – it’s a reflection of how scams work: by promising outsized returns, instant solutions, or emotional rewards that bypass our rational filters.

Gut checks still matter, Biger-Levin reminds us, adding that this doesn’t mean we can rely on individuals to shoulder the entire burden of vigilance, especially when criminals are using industrial-grade tools to manipulate them.

Scams will always evolve. So, the question isn’t whether they’ll disappear – they won’t. The question is whether we’re willing to build systems smart enough to protect the humans inside them.

That means reducing exposure at the source, disrupting grooming tactics before they gain momentum, and making the “this doesn’t feel right” moment easier to spot – and safer to act on. It also means treating scam prevention not as a user education problem, but as a systems design problem.

We can bend the curve, but only if we stop treating scams as individual failures and start treating them as the systemic, technology-enabled threats they’ve become. The tools already exist; however, the challenge is coordination, accountability, and a willingness to bake protection into every layer of the digital experience.

The denominator isn’t changing: human psychology remains constant. What we can change is how well our systems anticipate it – and how much harder we make it for criminals to exploit it.

Staying ahead of the scammers

To stay ahead of these scammers, organizations and consumers should take practical steps to prevent and minimize risks. For example, they should stay up to date on the latest scam tactics by keeping an eye on consumer protection updates. These can help you spot red flags, such as urgent demands or unusual payment requests, that may signal a scam.

Also, when you receive unsolicited calls or emails, take a moment to verify their authenticity. Instead of responding right away, contact the organization directly using official contact information. Legitimate companies typically won’t ask for sensitive information like passwords or account details out of the blue.

Finally, boost your digital security by using strong, unique passwords and enabling two-factor authentication. Be cautious when clicking links and avoid those that seem suspicious. Scammers often rely on high-pressure tactics to prompt rushed decisions, so by taking a step back and evaluating the situation carefully, you can often avoid falling prey to their schemes.


You can find out more about how businesses and individuals are navigating fraud schemes here

Blockchain companies and the Wolfsberg framework: Built to exceed the standard /en-us/posts/government/blockchain-wolfsberg-framework/ Fri, 31 Oct 2025 13:39:56 +0000 https://blogs.thomsonreuters.com/en-us/?p=68267

Key insights:

      • Blockchain data exceeds Wolfsberg expectations – Public, attribution-rich ledgers give crypto firms immediate access to behavioral, network, and cross-chain signals that traditional banks must retrofit or request from third parties.

      • Crypto companies can leverage this data – With abundant labeled history and real-time on-chain context, crypto companies can combine rules, supervised machine learning, and unsupervised discovery to identify emerging typologies faster and with clearer explainability.

      • SARs become actionable intelligence, not just checked boxes – By including wallets, transaction hashes, and traceable flows, this data can turn SAR filings into ready-to-investigate leads for law enforcement, thereby converting compliance from a cost center into a competitive advantage.


The Wolfsberg Group’s guidance on modernizing suspicious activity monitoring comes at a crucial time for cryptocurrency companies. Traditional financial institutions are being encouraged to go beyond basic transaction monitoring by including behavioral analysis, network effects, and various risk indicators in their anti-money laundering (AML) programs. For cryptocurrency companies, the framework describes capabilities that blockchain data infrastructure was essentially built to support.

Wolfsberg’s recommendations map almost perfectly to what blockchain businesses already are able to do. While traditional banks work to update legacy transaction monitoring systems with new capabilities, crypto companies operate in an environment in which the data for complex monitoring already exists. For crypto companies, this shouldn’t be seen as simply having to adapt to a new standard, but rather as a unique opportunity to set a new standard.

Investigation advantages built into the technology

Traditional financial investigations operate within closed systems. At the start of an investigation, investigators primarily have access to data points from their own institution and whatever is publicly available online. They may then need to gather additional records from other institutions, each with its own legal requirements and timelines. The financial trail crosses multiple organizations, jurisdictions, and record-keeping systems that do not communicate with each other. When filing Suspicious Activity Reports (SARs), investigators are often forced to close an investigation with gaps in the full picture.

Cryptocurrency investigations begin with transparency. Blockchain attribution tools offer visibility into fund flows throughout the entire ecosystem. The financial trail is recorded on a public ledger, in which tracking money doesn’t require negotiating with counterparts or waiting for legal approvals. This fundamentally changes what’s possible during an investigation. Questions that would take traditional investigators weeks to answer through formal channels or go unanswered by the time the SAR is due can be resolved in hours using attribution data and on-chain analysis.


The data available to cryptocurrency companies means they can move past compliance as a check-the-box exercise and start getting creative when thinking about what’s actually possible.


The Wolfsberg framework emphasizes “expanded risk indicator coverage” by analyzing data points beyond transaction amounts, dates, and counterparties. Blockchain companies have easy access to this data – wallet age, complete transaction history, interaction patterns with decentralized finance protocols, network connections to known bad actors, mixing service usage, cross-chain behavior, and anomalies that would be invisible in traditional banking. The data exists and is readily available for use in innovative and unique ways.

Detection models that can do more than react

Wolfsberg recommends combining three approaches: i) rules-based monitoring for known risks; ii) supervised machine learning for identifiable patterns; and iii) unsupervised methods for detecting emerging threats. Cryptocurrency companies can implement all three at the same time because the underlying data supports each approach.

Rules-based monitoring handles obvious cases such as sanctioned wallet addresses, direct transfers from darknet marketplaces, and transactions routed through high-risk jurisdictions. This represents baseline coverage that almost every crypto company will already have implemented. By adding the ability to look up scam wallets self-reported by victims online, along with community reporting capabilities in blockchain forensic tools, companies can easily establish the foundation for much more effective risk mitigation.
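As a minimal sketch of that baseline layer – the wallet addresses and list contents here are placeholders, not real sanctions or victim-report data – a rules-based screen might look like:

```python
# Illustrative rules-based screen: flag transfers that touch known-bad
# wallets. The address sets are placeholders; a real program would load
# sanctions lists and victim/community-reported wallets from attribution
# providers.

SANCTIONED = {"0xSANCTIONED1"}
VICTIM_REPORTED = {"0xSCAM1"}

def screen_transfer(sender: str, receiver: str) -> list:
    """Return an alert label for each known-bad wallet the transfer touches."""
    alerts = []
    for role, addr in (("sender", sender), ("receiver", receiver)):
        if addr in SANCTIONED:
            alerts.append(f"{role}:sanctioned")
        if addr in VICTIM_REPORTED:
            alerts.append(f"{role}:victim_reported")
    return alerts
```

The design point is that the lists, not the logic, carry the intelligence: wiring community- and victim-reported wallets into the same lookup immediately widens coverage without touching the rule itself.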

For supervised learning, blockchain’s historical data means models can be trained on years of confirmed criminal activity that law enforcement or blockchain tools have already identified. Because traditional banks can’t access validated historical data across the entire payment ecosystem at this scale, they typically must rely on internal data and industry guidance to develop their models. Cryptocurrency companies, however, can utilize blockchain history and attribution databases that document known illicit activity. This means models can be trained on nearly unlimited applicable data from the past and can even be trained on near-real-time data as it gets added to databases.

Yet it is with unsupervised learning that crypto companies can genuinely innovate beyond what traditional finance does, by feeding attributed wallets, self-reported fraud wallets, and public blockchain data directly into machine learning or AI models. With this, companies can analyze complex, interconnected patterns of activity, allowing models to continuously identify emerging typologies in near real time and expose gaps in a scenario’s current coverage as soon as they appear.


It is with unsupervised learning that crypto companies can genuinely innovate beyond what traditional finance does by feeding attributed wallets, self-reported fraud wallets, and public blockchains directly into machine learning or AI models.


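One minimal way to picture that unsupervised layer – assuming hypothetical per-wallet features such as transaction count or mixer interactions, and using a simple z-score in place of a production anomaly model – is flagging wallets whose behavior sits far from the population norm:

```python
# Toy unsupervised outlier check over per-wallet feature vectors using
# z-scores. The features and threshold are invented for illustration; a
# production system would use richer on-chain features and a proper
# anomaly-detection model.
from statistics import mean, stdev

def outlier_wallets(features: dict, threshold: float = 2.0) -> set:
    """Flag wallets with any feature more than `threshold` standard
    deviations from the population mean."""
    dims = list(zip(*features.values()))          # one tuple per feature
    stats = [(mean(d), stdev(d)) for d in dims]   # per-feature mean/stdev
    flagged = set()
    for wallet, vector in features.items():
        for value, (mu, sigma) in zip(vector, stats):
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                flagged.add(wallet)
    return flagged
```

With nine wallets showing ordinary activity and one showing a hundredfold spike in, say, mixer interactions, only the spiking wallet would be flagged – no labeled training data required, which is the point of the unsupervised layer.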

SAR quality as intelligence product

The Wolfsberg framework addresses SAR quality directly, highlighting the problem of financial institutions filing too many low-value reports because their systems generate alerts that they cannot fully resolve. Indeed, institutions file thousands of SARs because they have unanswered questions or are unsure of exactly what is going on due to a lack of available data, not because they’ve identified actual money laundering.

Blockchain data changes what SAR filings can look like in ways that matter for law enforcement. When attribution tools indicate that funds originated from a wallet cluster associated with ransomware, were transferred through a mixing service, appeared in a customer’s deposit address, and were immediately withdrawn to a known cash-out service, the SAR can describe the exact pattern of suspicious activity with on-chain evidence for each step.

Including wallet addresses and transaction hashes in SAR narratives provides investigators with something traditional bank SARs rarely offer: immediate starting points they can follow without additional legal process, immediately making the SAR actionable intelligence.
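As a sketch of that idea – the structure, field names, and sample values below are invented for illustration, not an actual SAR format – an evidence-backed narrative could be assembled like this:

```python
# Illustrative structure for attaching on-chain evidence to a SAR
# narrative. Field names and sample values are invented; real filings
# follow the regulator's required format.
from dataclasses import dataclass

@dataclass
class OnChainEvidence:
    step: str      # what happened, e.g. "funds passed through a mixer"
    wallet: str    # wallet address involved in this step
    tx_hash: str   # transaction hash an investigator can look up directly

def sar_narrative(summary: str, evidence: list) -> str:
    """Render a narrative in which every claim cites its on-chain evidence."""
    lines = [summary]
    for i, item in enumerate(evidence, 1):
        lines.append(f"{i}. {item.step} (wallet {item.wallet}, tx {item.tx_hash})")
    return "\n".join(lines)
```

Each numbered step carries the wallet and transaction hash a receiving investigator can verify on-chain immediately, which is what turns the filing into a starting point rather than a dead end.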

Law enforcement agencies are overwhelmed with SARs, and it often feels like an investigator’s SAR filings don’t lead anywhere. However, when investigators include information that helps law enforcement investigate and prosecute cases quickly and effectively, those investigators may also start seeing activity on the blockchain change, such as illicit actors’ wallets slowing down or funds being seized from a scammer’s wallet. This not only helps with the feedback loop but also confirms to an investigator that their work is making a real difference.

Building programs that lead instead of follow

The Wolfsberg framework also makes clear that innovation in AML isn’t optional. Criminal networks evolve too quickly for static rule sets and outdated monitoring systems. Advanced approaches need to be explainable, properly validated, and integrated into broader risk management frameworks.

Financial institutions need to build models that fully use available blockchain data, then validate them against on-chain patterns that can be directly observed. They should also train their investigators to understand blockchain attribution and network analysis 鈥 not just how to read a blockchain explorer, but how to interpret what attribution tools reveal about fund flows and network connections. When filing SARs, institutions need to include the on-chain evidence that makes their filings immediately actionable for law enforcement.

Traditional financial institutions are modernizing systems designed for the pre-internet era, while cryptocurrency companies are building compliance programs in a data-rich environment that makes certain investigations more effective than they’ve ever been. The opportunity here isn’t just about meeting the Wolfsberg recommendations; rather, it is about showing what becomes possible when compliance programs are built with these capabilities from the ground up and when the data advantages inherent to blockchain technology are used to their full potential.

That will be what changes how regulators think about the industry – and what turns compliance from a cost center into a competitive advantage.


You can find more of our coverage of SARs and related efforts to combat financial crimes here
