Cybersecurity Archives - 成人VR视频 Institute https://blogs.thomsonreuters.com/en-us/topic/cybersecurity/ 成人VR视频 Institute is a blog from 成人VR视频, the intelligence, technology and human expertise you need to find trusted answers. Mon, 29 Sep 2025 19:44:23 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.3 The implications of 鈥渟maller鈥 AI solutions /en-us/posts/technology/smaller-ai-solutions/ https://blogs.thomsonreuters.com/en-us/technology/smaller-ai-solutions/#respond Thu, 01 Aug 2024 13:30:30 +0000 https://blogs.thomsonreuters.com/en-us/?p=62237 Across industries and their shared functions, the quest for everything AI continues to expand, fueled by firms with trillion-dollar market valuations and startups that showcase seemingly improbable use cases. The chaos and uncertainty of artificial intelligence (AI) is altering the corporate 2025-鈥26 planning and budgeting cycles as 2023-鈥24 solutions are already undergoing material alterations and sunsetting discussions.

As performance measures 鈥 usually around customer service, efficiency, and differentiation 鈥 generally show positive results, the AI pace of change and technological advancements threaten early-solution obsolescence. Whereas AI increasingly is the answer for corporate challenges and impacts, business leaders are now understanding that AI technology is not the end-state 鈥 it鈥檚 just the enabler.

Moreover, while generative AI (GenAI) dominates the discussions and solutions, enterprise implementations also understand that large solutions, such as large language models (LLMs), are expensive to develop and maintain, often requiring vast skills, data sources, and organizational change, which in some cases aren鈥檛 analogous to prior application frameworks. Finally, data relevancy and resiliency are foundational building blocks for intelligent decision-making that should not be sourced from outside the enterprise due to privacy, security, ethics, and accuracy concerns.

So, what should business leaders be considering when reviewing the growing, complex capabilities and transitory forms which comprise today鈥檚 AI? As AI commercial solutions are measured in weeks and months, what steady-state, foundational functions provide sustainable returns during AI鈥檚 many iterations? What is missing? And what will aid leaders with their AI implementation requirements?

To understand and answer these questions, let鈥檚 review the common failure points surrounding current AI implementations, their challenges, and impacts.

When analyzed, this chart highlights two fundamental enterprise core competencies that need to be implemented: Master data management (MDM) and AI governance. To date, these two functions have been done piecemeal as a by-product as firms rushed into AI deployment. However, as AI rapidly evolves, data-centric applications like AI require robust governance and oversight, which will be achieved using MDM and AI governance.

MDM represents a robust software solution that is critical for managing vast data sources across multi-modal AI applications, while ensuring data consistency, accuracy, and accountability across business data sources. For enterprises now thinking about smaller and more diverse AI technologies (or compartmentalized solutions that interact with other AI solutions), MDM represents a critical building block, which addresses the common failure points. Additionally, MDM provides automation for data oversight, ingestion, and governance that is critical to every AI onboarding, learning, and expansion effort.

The second core competency of AI governance represents the application of a data-centric approach for AI solutions. AI governance uses a common source of data that facilitates rapid-cycle technology iteration ensuring accuracy, consistency, and adaptation regardless of industry or common organizational functions. The implications of AI governance as an AI solution become increasingly specialized and iterative, and it resides in the consistent application of cross-department, divisional, and functional data.

To illustrate these increasingly common competencies of MDM and AI governance, Figure 1 conceptually shows that regardless of industry or function (such as audit, legal, or compliance) at the core of future, smaller, and targeted AI solutions resides these two competencies.

Ai solutions

With business leaders and technology providers demanding and delivering granular AI use cases, the infrastructure and architecture that represents AI designs must be proactively constructed. The previous applications that processed surrounding data in isolation is too fragmented, complex, and of inferior quality for current AI solutions.

To showcase why MDM and AI governance are truly foundational for the smaller, iterative AI solutions that coming in 2025-鈥26, Figure 2 further breaks down these evolving competencies and their capabilities. MDM and AI governance represent for AI a common architectural approach that offers data-centric deployment, data relevance, and resiliency that are lacking within current intelligent solutions. As the AI solutions grow more robust and industry-specific 鈥 much like traditional application designs before the explosion of AI capabilities 鈥 the building blocks for cost-effective and linked intelligent software requires data-centric infrastructures.

AI solutions

The graphic above represents the roadmap for the incorporation of business models and requirements beyond the current trends of chasing the technology. Many leaders fear they are being left behind by their competitors if they are not talking and implementing AI (such as FOMO, or fear of missing out). Yet without the design and implementation of MDM and AI governance as part of a robust, future-proof AI portfolio, enterprises will continue to experience the common failure points cited above.

For business leaders, MDM and AI governance provide the framework for common AI solutions regardless of technological specifics. As AI moves to smaller, more cost-effective designs, the data foundations and their active management will be critical to advancing AI technologies. A simplified sequence of high-level steps to implement MDM and AI might include:

      • Phase 1: Assessing and planning of the data environment against the AI requirements, including measurements and priorities.
      • Phase 2: Implementing data governance frameworks, including policies, procedures, standards, and more.
      • Phase 3: Automating data ingestion, integration, and quality across systems and sources that incorporates catalogs, dashboards, and platforms.
      • Phase 4: Building common use cases and reusable isolation modules that ensure quality, consistency, and context.
      • Phase 5: Training, assessing, and gathering feedback while including future designs and requirements as AI and business needs expand.

In summary, MDM and AI governance have often been viewed as isolated or esoteric architectural elements that only large enterprises required. However, with advancements in hardware and software, MDM and AI governance have become more intuitive and cost effective as building blocks for rapid-cycle AI solutions.

For business leaders seeking to get ahead of the AI adoption curves, MDM and AI governance represent a foundation that can level the competitive fields without requiring traditionally significant investments in capabilities that often took years to realize. As the strategy, planning and budget cycles come into focus for the next two years, AI experience from recent pilots and prototypes are driving MDM and AI governance adoption to eliminate fragmentation and inflated costs of operation and maintenance.

Using the post-implementation assessments and opportunities surrounding current AI, leadership can understand that their AI needs are common across industry and functions. However, the implication of smaller AI is that MDM and AI governance are core competencies necessary to continually adapt to the technological progressions that surround expanding intelligent solutions.


You can access more by this author about how , here

]]>
https://blogs.thomsonreuters.com/en-us/technology/smaller-ai-solutions/feed/ 0
Data privacy and biometric technology use /en-us/posts/corporates/biometric-tech-use/ https://blogs.thomsonreuters.com/en-us/corporates/biometric-tech-use/#respond Mon, 22 Jul 2024 14:57:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=62285 Biometrics usually refers to either measurable human biological and behavioral characteristics that can be used to identify an individual or automated methods that can recognize individuals based on certain biological and behavioral characteristics.

Biometric technology has evolved significantly in recent years, and some of the most common uses include identification, health & fitness tracking, authentication, corporate security, and timekeeping.

Currently, no federal law directly addresses the collection, use, storage, and disclosure of biometric data; however,听听of the Federal Trade Commission (FTC) Act gives the FTC broad authority to protect consumers from unfair and deceptive trade practices in or affecting commerce. Under that authority, the FTC may take enforcement action against commercial organizations that engage in unfair and deceptive practices involving biometric data. If an organization that collects and uses biometric data fails to keep its promises to consumers regarding its handling of that data, it risks an FTC enforcement action.

Biometric data collection, use, disclosure, and storage present challenging privacy and security concerns because individuals cannot change their biometric data. In response to the risks presented by this data, Illinois, Texas, and Washington have adopted the following laws focused specifically on biometric data handling:

  • (BIPA)
  • (CUBI)
  • (Washington Biometric Law).

Although only three states thus far have enacted comprehensive statutes addressing biometric data handling, many other states regulate some aspect of biometric data in other ways. Several states, including California, Colorado, Connecticut, Texas, Oregon, and Virginia have enacted general privacy laws that include biometric information in the definition of personal information. In addition, several US cities have adopted ordinances governing biometric data, including New York City and Portland, Oregon. Many other cities have enacted laws that regulate law enforcement’s use of facial recognition technology.

In addition, BIPA, CUBI, and the Washington Biometric Law all impose distinct obligations on persons or entities that collect biometric data compared to those that simply possess biometric data.

The laws鈥 scope of coverage

The scope of coverage under BIPA, CUBI, and the Washington Biometric Law are similar, but the laws differ in the following respects:

BIPA scope of coverage

BIPA applies to private entities that include individuals, partnerships, corporations, or听limited liability companies, and associations or other groups, however organized. Specifically, BIPA broadly applies to those entities collecting or possessing biometric identifiers or biometric information, while CUBI and the Washington Biometric Law only apply to biometric identifiers collected or possessed for commercial purposes.

CUBI scope of coverage

CUBI applies to the collection and possession of biometric identifiers for a commercial purpose. However, CUBI does not define commercial purpose or specify the persons and entities the law covers. CUBI excludes from its scope voiceprint data retained by financial institutions or their affiliates, as defined under the 听(骋尝叠础).

Unlike BIPA and the Washington Biometric Law, CUBI does not specify the persons and entities subject to the law. CUBI also provides fewer exceptions from the law than BIPA and the Washington Biometric Law.

Washington Biometric Law scope of coverage

The Washington Biometric Law covers all individuals and听听except government agencies, activities subject to HIPAA, law enforcement activity, and financial institutions and affiliates subject to the GLBA.

The Washington Biometric Law specifically applies to biometric identifiers collected, maintained, and used for a commercial purpose.

Notice and consent

BIPA, CUBI, and the Washington Biometric Law all include notice and consent requirements before an organization may collect or obtain biometric identifiers or biometric information. Organizations should ensure they implement a system for providing and tracking notice and obtaining consent. This can be done electronically, for example, before collecting a fingerprint scan by providing an electronic notice and consent in which the individual clicks a box to consent. Organizations also should implement a system for storing notices and consents obtained under any applicable statute of limitations.

Sale, use, and disclosure restrictions

BIPA, CUBI, and the Washington Biometric Law all restrict the sale, use, and disclosure of biometric data, yet there are key differences among the laws.

For example, unlike CUBI and the Washington Biometric Law, BIPA prohibits the sale, lease, trade, or profiting from biometric identifiers or biometric information under any circumstance, including with the individual’s consent. Organizations may disclose, redisclose, or disseminate biometric identifiers or biometric information under BIPA for other purposes if they meet an exception.

Also, CUBI and the Washington Biometric Law allow organizations to sell, lease, or disclose biometric data if they meet an exception such as consent. The Washington Biometric Law allows for these disclosures with an individual’s general consent; however, CUBI only allows individuals to consent for certain defined purposes, including where disclosure is required by state or federal law.

Organizations subject to BIPA, CUBI, and the Washington Biometric Law must implement a system to ensure they do not disclose biometric data unless a statutory exception applies, and sell or otherwise profit from biometric data in the organization’s possession unless a statutory exception applies. The exceptions under each law differ so organizations must understand in which cases the laws permit or restrict disclosures.

Security & storage requirements

BIPA, CUBI, and the Washington Biometric Law all require persons and entities to protect biometric data using a reasonable standard of care. However, the laws do not define or provide guidance on what constitutes reasonable data security. Therefore, organizations should conduct due diligence to ensure that they comply with any data security standards applicable to their industry and other generally recognized data security standards to protect biometric data.

BIPA, CUBI, and the Washington Biometric Law all require the destruction of biometric data after a certain time, but no later than when the initial collection purpose ends. To that end, organizations should decide on a retention schedule and implement a system to ensure it destroys biometric data in the required timeframe. For example, if an organization requires customers to scan fingerprints for entry to an amusement park, it can arguably retain that data for the duration of the amusement park season. Or an employer that captures an employee’s biometric data for security purposes, should understand that the purpose typically expires on termination of employment.

To ensure they comply with retention obligations, organizations should retain an outside vendor or work closely with their information technology department.

Determining whether collector or possessor obligations apply

BIPA, CUBI, and the Washington Biometric Law also impose specific obligations on persons or entities in possession of biometric data compared to those that simply collect biometric data. Private entities may have both collector and possessor obligations; however, it should be understood that collector obligations exceed possessor obligations.

Organizations must analyze whether they collect or possess biometric data or both. In Illinois, case law can help organizations assess what constitutes possession of biometric identifiers or biometric information. However, there are no reported cases or guidance on the meaning of possession under CUBI or the Washington Biometric Law. Organizations may therefore decide that compliance with both collector and possessor obligations under these laws is the best way to protect against regulatory action.

If a third-party vendor offers biometric technology or services to customers that collect biometric data in Illinois, Texas, or Washington, it may be a possessor under the laws. These third parties and their customers should address any potential obligations through contractual clauses by examining several factors, including: i) whether the customer collects biometric data in Illinois, Texas, or Washington; ii) which party must comply with all obligations under applicable laws; iii) whether the contract should include indemnification clauses, such as requiring reimbursement for lawsuits, regulatory inquiries, and any other costs associated with biometric data law violations; and iv) whether the clients or third-party vendors have adequate insurance to cover biometric data claims and violations.

As the use of biometrics technology becomes increasingly commonplace among organizations, any companies involved will need to become acquainted with applicable state laws and take special care to preserve individuals鈥 privacy.


This article was written by 成人VR视频

]]>
https://blogs.thomsonreuters.com/en-us/corporates/biometric-tech-use/feed/ 0
Zeroing in on elevating trust in generative AI /en-us/posts/technology/elevating-trust-gen-ai/ https://blogs.thomsonreuters.com/en-us/technology/elevating-trust-gen-ai/#respond Wed, 17 Jul 2024 23:46:36 +0000 https://blogs.thomsonreuters.com/en-us/?p=62172 The , released by the White House in May, further emphasized the vigilance by state and non-state actors that may be mitigated by actions in our control through proactive cyber-incident management, remediation of vulnerabilities, and enhancing resilience.

Indeed, the proliferation of generative artificial intelligence (GenAI) and large-language models (LLMs) pose another tangled web of liabilities 鈥 such as jailbreaking and prompt injection attacks 鈥 that jeopardize the sanctum of privacy, opening the door for bad actors to wreak havoc, exploit weaknesses, and reveal personal data.

Principles of zero trust

While various regulatory frameworks for governing GenAI are works-in-progress 鈥 such as those being created by China, Japan, the United States, and others 鈥 they may provide limited assistance in combating bad actors. Encapsulating GenAI models with a zero-trust architecture provides various security perimeters and outer layers complemented with inner layers, which exhibits challenges for bad actors at every step, although still isn鈥檛 a panacea. Zero-trust architecture should include governance risk & compliance and data loss prevention control measures in contributing to the overarching war against cyber-criminals, fostering the culture of Never trust, always verify.

Contextualizing the primary principles of zero trust as defined by for GenAI by security perimeter, may consist of: i) policy, ii) identity, iii) network, iv) infrastructure, v) system, vi) data, and vii) monitoring. The principles are intertwined during implementation of zero trust.

elevate trust

 

Enterprise policies should be dynamically defined, modulating at the frequency of regulatory requirements, inclusive of privacy, system protection, data, and user communities. Identities should be centrally managed in accordance with identity access management, defined by role-based access control and attribute-based access control, as well further enforced using multifactor control.

The network should employ micro-segmentation, isolating every system, such as an application containing GenAI models, to limit the blast radius and lateral movements of any attacker. The underlying infrastructure of systems supporting GenAI models should be deployed using containers hosted on virtual machines which further natively remove certain types of attacks.

Systems developed using serverless technologies deployed using the techniques mentioned above, inherently prevent other attack types in cloud-native ecosystems. Discovery of all data throughout an enterprise should be categorized, labeled, and protected by policies governed by data loss prevention, limiting access to sensitive data to only those who absolutely need it. Monitoring utilizing the SOAR (security orchestration, automation, and response) and SIEM (security information and event management) tools are critical components for the overall GenAI system, alerting based on attack vectors and further supplemented with a mature incident management response.

GenAI鈥檚 zero-trust playbook

elevating trust

Secure GenAI investments transitioning from Steps 1 to 4, improving digital safety through an organization while elevating trust with its clients.

1. Data

Data governance begins with capturing the organizational goals, stakeholders, key performance indicators, and the definition of success. The organizational privacy policies should be aligned with data provisioned for usage by AI models, while the stakeholders should confirm agreements for usage by AI models and the output from the models. Stakeholders should confirm that respective processes for safe AI are executed, such as responsible and ethical methodologies, that should result in debiased insights or recommendations from the models.

After capturing the governance structure, data discovery should be performed across the digital estate, identifying sensitive data such as Social Security numbers, account numbers, payment card information, address, and government issued identification. Classifiers may consist of automatic pattern recognition, via machine learning, and they may be pre-trained, out-of-the-box, or custom trainable. Upon classifying the data, identify what鈥檚 relevant for ingestion by the model as well as output from the model, and define data loss prevention policies to limit access to the respective data sources. Any data used by the AI models should be isolated within secure containers and accessible via micro-segmented networks, to limit the blast radius.

2. Identity

Define security policies via identity access management and data loss prevention, prohibiting access to GenAI models as well as their inputs and outputs, by role-based access control or attribute-based access control, while restricting data formats. Use entitlement management to define specific roles or assign users per role, granting permissions to applications, data, and models.

Federated identification may also be used, but regardless of the identity access management approach, identities should be validated via identity providers or brokers, who confirm their authenticity. Also, incorporating device fingerprinting may be further beneficial in analyzing user behavior, such as browser type, device type, and IP address, thus reinforcing identify verification. In addition, secrets, key management, or certificates may also be employed to strengthen identity verification.

3. DevSecOps

DevSecOps 鈥 the integration of security practices into every phase of the software development lifecycle 鈥 ensures appropriate governance and robust security testing. Automated testing should include scenarios synthesized for privacy, code of conduct, and safety, while reinforcing regulatory and compliance requirements. The SecOps process should be orchestrated through formal continuous integration and continuous deployment (CI/CD) automated tools, ensuring consistency of packaging within containers, vulnerability assessment testing, and testing threat attack vectors, while maintaining proper version control the container images. The threat attack vectors should be defined for the respective AI models, simulating different scenarios such as prompt injection or jailbreaks.

The containers should have task segregation while the respective virtual machines hosting them provide full security isolation, all of which are then deployed within virtual networks that are also isolated and segmented. Identity access management should be enforced for the respective containers, such as by using key vaults, secrets, or certificates.

4. Monitoring

Refreshing your cybersecurity investments for GenAI should include provisions for data loss prevention, incident management, and other security factors. The inputs and outputs of your AI models should be regulated by your provisions, alerting due to anomalies, while positioning your organization for a rapid recovery as result of any incidents. Cloud access security brokers can provide granular monitoring of the cloud usage services for AI models and their cloud native architecture. The security broker facilitates managing entitlements, accessing controls, detecting unusual activity, as well as preventing data loss via its data loss prevention capabilities, based upon policies defined to protect the organization from usage of sensitive data.

Subsequently, virtual machines and containers hosting AI models should also be monitored, using predefined dashboards which may further aid the collecting and analysis of metrics for performance, resiliency, and networking. Selectively, profilers may be used intermittently to assess performance of containers. AI models such as LLMs should be evaluated for hallucinations and content safety, ensuring outputs are responsible and ethical, as well as trustworthy.

In accordance with data consumption by AI models, data loss prevention and governance risk and compliance best practices should be employed. And where abnormal behavior is detected the respective data security and loss prevention policies should reinforce least-privileged access with alerts or notifications. In addition, active monitoring of changes to legal or regulatory requirements per geographic region may impact compliance standards, resulting in potential privacy violations by AI models and respective data dependencies.

Where do we go from here?

As organizations iteratively improve security posture, cyber-criminals also continue to advance their efforts, modulating at a faster frequency, while uncovering vulnerabilities with GenAI models and their bedrock infrastructure. Emancipating enterprises from the fallacy of security necessitates a continuing calibration of an organization鈥檚 zero-trust architecture. As cybersecurity leaders welcome zero trust as a journey, not a destination, and empower their teams with GenAI tools, they鈥檒l be better equipped to outfox the fox watching the hen house.

]]>
https://blogs.thomsonreuters.com/en-us/technology/elevating-trust-gen-ai/feed/ 0
Medicare and Medicaid fraudsters continue to steal taxpayer money /en-us/posts/investigation-fraud-and-risk/medicare-medicaid-fraud-2024/ https://blogs.thomsonreuters.com/en-us/investigation-fraud-and-risk/medicare-medicaid-fraud-2024/#respond Mon, 13 May 2024 17:23:12 +0000 https://blogs.thomsonreuters.com/en-us/?p=61251 The U.S. Department of Justice (DOJ) reported civil settlements and judgments under the False Claims Act related to healthcare fraud that exceeded $1.8 billion in the fiscal year ending Sept. 30, 2023. Healthcare fraud was the leading source of False Claims Act settlements and judgments in fiscal year 2023.

In addition to recovering taxpayer funds and deterring future fraud, False Claims Act enforcement “also protects patients from medically unnecessary or ,” the DOJ said in a statement.

DOJ enforcement highlights

The agreed to pay $172 million to resolve allegations that it used “inaccurate and untruthful diagnosis codes鈥 for its Medicare Advantage plan enrollees to improperly increase its payments from Medicare. The government alleged that Cigna also relied on improperly reported diagnosis codes reported by vendors without performing or ordering testing to confirm those diagnoses. The Medicare program reimburses Medicare Advantage plans at a capitated rate based on the health of each member. A member with more diagnoses or more complex medical conditions nets the plans a higher reimbursement from the government.

agreed to pay $22.5 million to resolve similar allegations that it had submitted inaccurate diagnosis codes for its Medicare Advantage plan enrollees in order to increase reimbursements. The diagnoses codes were not supported by member medical records.

The DOJ also litigated other cases involving the government鈥檚 , including cases against UnitedHealth Group, Independent Health Corporation, Elevance Health (formerly Anthem), and Kaiser Permanente.

In another case involving false claims, the government alleged that former long-term care facility and related entities submitted claims for “services performed by unlicensed and unauthorized students” and that the services were either not provider or were “effectively worthless.” Cornerstone and the related entities agreed to pay $21.6 million to resolve these allegations.

The DOJ also announced two resolutions involving electronic health records. In the first, (ModMed) agreed to pay $45.4 million to resolve allegations that it solicited and received kickbacks from a lab company in exchange for recommending that its customers use the lab’s pathology services, conspired with the lab company to donate ModMed’s electronic health records technology to healthcare providers, and paid kickbacks to its customers and other influential entities to recommend its technology and refer potential customers. The government also alleged that ModMed’s electronic health record technology did not always use “required standard vocabularies” that caused providers to improperly submit claims for electronic health record incentive payments.

In the second case, agreed to pay $31.2 million to resolve allegations that it misrepresented the capabilities of some versions its electronic health records software that were 鈥渓acking in critical functionality.鈥 The government also alleged the NextGen offered credits worth as much as $10,000 and tickets to sporting and entertainment events to customers whose recommendation of its software led to a new sale.

In another resolution, and its president and CEO agreed to pay $22.9 million to resolve allegations the company had improperly paid physicians “under the guise of medical directorships to induce referrals of home health patients.”

State Medicaid recoveries

Although the federal government often recovers Medicaid funds when pursuing Medicare fraud, states also have a separate responsibility to prosecute Medicaid fraud. Because Medicaid is a federal/state partnership, these recoveries benefit both state and federal taxpayers.

All 50 state, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands have Medicaid Fraud Control Units (MFCUs) to Medicaid provider fraud and patient abuse or neglect.

For fiscal year 2023, in:

      • $1.2 billion recovered;
      • 1,143 convictions (814 for provider fraud and 329 for patient abuse or neglect);
      • 850 exclusions of individuals or entities from federally funded programs; and
      • 436 civil settlements and judgments.

MFCU enforcement highlights

In , the MFCU partnered with other state agencies in a civil investigation of allegations that managed care company Centene overcharged the California Medicaid program by “falsely reporting higher prescription drug costs” for two of its managed care plans. Centene agreed to pay more than $215 million to resolve the allegations.

In , the MFCU investigated the owner of a laboratory for allegedly billing Medicaid for medically unnecessary testing services and providing illegal kickbacks in exchange for the testing. The owner was convicted of conspiracy to commit healthcare fraud, violations of the anti-kickback statute, conspiracy to commit money laundering, and money laundering. The scheme defrauded the North Carolina Medicaid program of more than $11 million.

In another case involving a medical device manufacturer, the National Association of MFCUs partnered with federal agencies to investigate allegations that misled federal healthcare programs regarding the radio-frequency emission generated by some of its devices that could potentially interfere with other devices that use the same radio-frequency spectrum. The company agreed to pay more than $12 million to settle the allegations.

Although healthcare fraudsters continue to steal, scheme, and conspire to steal federal and state healthcare program funds, these enforcement result show that the government is also having a certain amount of success in recovering taxpayer dollars and punishing fraudsters.


For more on , click here.

]]>
https://blogs.thomsonreuters.com/en-us/investigation-fraud-and-risk/medicare-medicaid-fraud-2024/feed/ 0
Identity theft is being fueled by AI & cyber-attacks /en-us/posts/government/identity-theft-drivers/ https://blogs.thomsonreuters.com/en-us/government/identity-theft-drivers/#respond Fri, 03 May 2024 14:33:41 +0000 https://blogs.thomsonreuters.com/en-us/?p=61215 The shift towards digital platforms has revolutionized financial transactions, but it has also fueled a surge in fraudulent activities, particularly identity theft cases that are driven by cyber-attacks. Cybercriminals, leveraging stolen identity information, have devised sophisticated schemes, complicating fraud mitigation efforts. And with the frequency of cybersecurity incidents on the rise each year, organizations face a mass of threats like ransomware and data theft, posing significant challenges across industries.

The average cost of a data breach reached an all-time high of , and now, artificial intelligence (AI) has led to a significant increase in the sophistication of cybercrime. From deepfake technology to AI-powered hacking, cybercriminals are exploiting these advancements to orchestrate unique attacks.

How criminals are leveraging AI

Deepfake technology 鈥 One of the most concerning developments is the use of deepfake technology, a blend of machine learning and media manipulation that allows cybercriminals to create convincingly realistic synthetic media content. Criminals then use deepfakes to spread misinformation, perpetrate financial fraud, and tarnish reputations, exploiting the trust we place in digital media.

In a recent , a company suffered a loss of $25 million due to the deception of an employee who fell victim to deepfake impersonations of his colleagues. The individual participated in a video call in which deepfake versions of the company’s United Kingdom-based CFO and other team members were present. According to authorities, scammers engineered these deepfakes using publicly accessible video content.

AI-powered password cracking AI algorithms, including machine learning and deep learning, enable systems to identify patterns and make predictions based on vast datasets. For example, , harnesses machine learning algorithms that operate within a neural network framework. And the tool seems to work, as a study showcasing the effectiveness of PassGAN in password cracking, published by , found that 51% of passwords were cracked in less than a minute, 65% in less than an hour, 71% within a day, and 81% within a month.

The impact of identity theft fueled by cyber-crimes

Further, there has been a 15% increase in the number of data breaches in the United States between 2022 and 2023, which underscores the escalating threat posed by cybercriminals, according to the . Concurrently, breach severity surged by 11%.

Further, digital account openings emerged as the top highest risk with 13.5% of all global digital account openings suspected of fraudulent activity. And 54% of consumers across 18 countries and regions reported being targeted by various forms of fraud attempts between September and December 2023, according to the TransUnion report.

Cybercriminals persist in breaching organizations’ systems to steal consumer identity credentials, which often contain critical information such as an individual鈥檚 date of birth, full Social Security number, and residential address. With a wealth of stolen identity credentials readily available, criminals have become increasingly adept at fabricating identities.

Consequently, this has led to an increase in the use of illicit synthetic identities among accounts opened at US lenders such as auto loans, bank credit cards, retail credit cards, and unsecured personal loans. The surge in this synthetic identity fraud has exposed lenders to potential losses totaling $3.1 billion, representing an 11% increase compared to the end of 2022. Cybercrimes, including identity fraud, are projected to cost the world about $9.5 trillion annually by the end of 2024, according to AuthenticID鈥檚 .

JPMorgan Chase: Battling cyber-threats

JPMorgan Chase鈥檚 CEO Jamie Dimon has identified cybersecurity as the “” facing the financial services industry. This recognition comes in the wake of legal action taken against the bank in January 2023, when a subsidiary of EssilorLuxottica filed a lawsuit alleging negligence in addressing signs of fraud. The lawsuit claimed in orchestrating 243 fraudulent transactions, resulting in the siphoning off of $272 million from Essilor’s manufacturing division.

Since then, JPMorgan has intensified its focus on strengthening cybersecurity measures. A recent disclosure by a JPMorgan executive revealed that the bank repels an astounding . In response to the escalating efforts of hackers, JPMorgan allocates a substantial portion of its $15 billion budget towards cybersecurity initiatives, backed by a dedicated workforce of 62,000 individuals committed to defending against cyber-threats.

Key strategies for cyber-defense

There are several tactics that organizations can take to help mitigate cyber-crime, including:

      • Prioritize investments in comprehensive cybersecurity infrastructure, equipped with advanced threat detection and response capabilities, to effectively safeguard against cyber-attacks.
      • Collaborate closely with regulatory authorities to establish and adhere to rigorous compliance measures, ensuring adherence to industry regulations and standards for data protection and financial security.
      • Embrace cutting-edge technologies such as AI to better develop sophisticated fraud detection systems capable of identifying and mitigating evolving threats in real-time.
      • Establish multidisciplinary teams including experts from fraud, cybersecurity, risk management, and data analytics departments to leverage diverse skill sets and perspectives in developing comprehensive security strategies. Encourage regular knowledge-sharing sessions, joint brainstorming, and collaborative projects to foster a culture of teamwork and innovation. By breaking down silos and promoting collaboration across departments, financial institutions and organizations can enhance their ability to detect, prevent, and respond to emerging threats effectively.
      • Launch targeted educational campaigns to inform customers about common fraud tactics and cybersecurity measures. Offer easily accessible resources such as online tutorials and workshops to empower customers to protect themselves from cyber-threats.

In conclusion, ensuring financial integrity demands every organization鈥檚 constant attention, especially considering the rapid growth of cyber-threats. By fostering a culture of strength, innovation, and collaboration, leaders can effectively address the challenges posed by data breaches and fraud.


You can , here.

]]>
https://blogs.thomsonreuters.com/en-us/government/identity-theft-drivers/feed/ 0
How businesses should respond to the SEC鈥檚 cybersecurity disclosure rules /en-us/posts/investigation-fraud-and-risk/cybersecurity-disclosure-rules/ https://blogs.thomsonreuters.com/en-us/investigation-fraud-and-risk/cybersecurity-disclosure-rules/#respond Tue, 16 Apr 2024 12:42:11 +0000 https://blogs.thomsonreuters.com/en-us/?p=61040 Cybersecurity operations and reporting are undergoing a heightened level of scrutiny due to contentious cybersecurity issued by the U.S. Securities and Exchange Commission (SEC).

These regulations mandate publicly traded companies to promptly disclose cybersecurity incidents within four business days of identifying their materiality, alongside reporting on their cybersecurity risk management and governance procedures. This move by the SEC underscores the imperative for businesses to actively manage and report cybersecurity incidents, despite the intricate and firm nature of the .

However, businesses also must address a major issue that the SEC did not discuss in its ruling: the impact of generative artificial intelligence (GenAI) on their cybersecurity functions.

A notable step forward

These regulations mark a notable stride towards enhanced accountability and transparency in addressing cybersecurity risks and incidents. Companies are urged to revisit and enhance their disclosure protocols, conduct thorough cybersecurity risk evaluations, establish comprehensive incident-response strategies, invest in cybersecurity infrastructure and training, and institute clear communication channels to ensure compliance with the new mandates. Although these requirements may seem substantial, businesses should already be prioritizing safeguarding their operations, regardless of regulatory directives from the SEC.

The prevalence of data breaches has been on an upward trajectory for several years, with no sign of abating. For example, , in which tens of thousands of customers had their information compromised in a ransomware attack targeting Infosys McCamish Systems, one of the bank’s service providers, in November 2023. While notifications to customers began in February, potentially exceeding state-mandated notification deadlines, reports indicate that more than 57,000 customers were affected, with exposed data including addresses, names, Social Security numbers, dates of birth, and some banking details.

The pervasiveness of data breaches transcends industries and organizational sizes, inflicting millions of dollars in damages on US businesses. A single data breach’s average cost is $4.45 million, underscoring the pressing need for robust cybersecurity measures across all sectors.

New rules and new risks

The SEC鈥檚 cybersecurity disclosure rules, introduced in July 2023, have transformed how public companies must handle and disclose cybersecurity incidents. While the regulations are multifaceted, here鈥檚 what businesses must understand:

Swift, comprehensive incident reporting 鈥 Companies must now disclose 鈥渕aterial cybersecurity incidents鈥 within a strict four-business-day window after gauging the severity of the incident. This replaces the less specific 鈥減rompt鈥 reporting standard that often caused delays. Companies must provide in-depth descriptions of the incident, including the attack鈥檚 nature, the systems compromised, the potential effects on business functions and finances, and the company鈥檚 response strategy.

Yearly disclosure of cybersecurity frameworks 鈥 Alongside incident reporting, companies are now obligated to reveal their cybersecurity risk management policies, governance structures, and incident response protocols in their annual reports. This mandate outlines how they evaluate and control material risks from cyber-threats, how their board and management oversee cybersecurity, and how these safeguards fit into the company鈥檚 broader risk management strategy.

Prioritizing investor protection 鈥 These regulations are designed to furnish investors with reliable, up-to-date insights into how companies tackle cyber-risks, fostering increased transparency and responsibility within the corporate world.

The cost of non-compliance 鈥 Although the SEC hasn鈥檛 yet outlined precise penalties for violating the new rules, their enforcement powers are far-reaching. Fines could reach up to $25 million alongside other disruptive actions like cease-and-desist orders or suspension-of-trading privileges. Even more concerning is the increased likelihood of lawsuits from investors or stakeholders if companies neglect to disclose material cybersecurity events. The SEC鈥檚 rules provide a strong basis for activist investors to challenge companies that fail to meet their obligations.

But what about GenAI?

The report is also notable for what it doesn鈥檛 address: the impact of GenAI. Businesses are increasingly adopting GenAI to do everything from customer service to website search. Yet, GenAI is vulnerable to more subtle forms of manipulation from bad actors, such as their ability to corrupt chatbots and AI-powered search to divulge private customer data or provide inaccurate information. The breaches can act like a slow leak in a tire; a business might not become aware of them for quite some time. And yet, the SEC cybersecurity disclosure rules do not address the potentially devastating impact of GenAI breaches.

GenAI cuts both ways, of course. On the plus side, GenAI offers potent tools to combat cybersecurity attacks and sharpen companies鈥 training abilities and even its SEC reporting. However, GenAI has to be actively managed, and companies should remember that human oversight remains vital throughout the process. This includes training the models to generate valid scenarios or report formats and continually verifying the outputs for quality. GenAI can even help with this, flagging potential oversharing in disclosures based on preset guidelines.

Beyond its failure to mention GenAI, the SEC鈥檚 new cybersecurity disclosure rules have had their fair share of critics. One major sticking point is the whole 鈥渕ateriality鈥 issue and the tight reporting deadlines. Companies are expected to figure out if an incident is significant enough to report 鈥渨ithout unreasonable delay鈥 鈥 then tell the SEC about it within four business days. That鈥檚 a tall order, considering it takes an average of 277 days to even spot and contain most breaches. How are companies supposed to accurately assess the scope of an attack that quickly, without potentially misreporting key details?

Then there鈥檚 the disclosure headache. Companies must walk a tightrope, providing enough information to satisfy the SEC while avoiding revealing so much that they put their security at further risk. It鈥檚 a delicate balance that leaves room for misinterpretation.

Even more concerning are the implications for public and national security. Some experts worry that rushing to disclose incidents could hinder investigations. The SEC鈥檚 rules do offer a loophole 鈥 the U.S. Attorney General can delay disclosure for national security or safety reasons 鈥 but this solution is considered cumbersome and limited.

Despite these criticisms, the rules are law. Companies now face the unenviable task of navigating these complexities as best they can. Indeed, the SEC鈥檚 disclosure rules should be seen not as a burden, but a catalyst for proactive cybersecurity improvement. Businesses that wait until mandatory reporting deadlines to address security are already operating from a position of risk 鈥 and waiting for the SEC to force your hand is a recipe for a future breach.

Company cybersecurity leaders should embrace the opportunity to improve now and stay ahead of the curve.

]]>
https://blogs.thomsonreuters.com/en-us/investigation-fraud-and-risk/cybersecurity-disclosure-rules/feed/ 0
Legalweek 2024: How to traverse the treacherous cyber terrain? Start by keeping it simple /en-us/posts/legal/legalweek-2024-cybersecurity/ https://blogs.thomsonreuters.com/en-us/legal/legalweek-2024-cybersecurity/#respond Thu, 01 Feb 2024 17:56:24 +0000 https://blogs.thomsonreuters.com/en-us/?p=60326 NEW YORK 鈥 The cybersecurity landscape is seemingly changing by the day. There are new regulations to follow, everywhere from the United States and the European Union to Chile and Australia. New cyber-threats and increasingly sophisticated attacks put pressure on businesses and firms to beef up their cyber capabilities, and all of this occurs against the backdrop of a global business landscape that promises both economic and political challenges.

How can lawyers and IT personnel keep up with the cyber-threat onslaught? It starts with a simple mantra: Nail the basics.

At the Navigating the Cyber Threat Terrain: Cybersecurity, Privacy and Legal Sector Focus panel during this week in New York City, cyber-attorneys and experts from companies and law firms assembled to give their advice and experience on how to keep up with emerging threats.

Always aware of everything

One of the biggest challenges, the panel noted, is simply staying aware of the mass of cybersecurity and privacy rules and regulations, particularly for organizations that operate on a global scale. Panel moderator Manny Sahota, Director for Global Cloud Privacy, Regulatory Risk & Compliance at Microsoft, noted that while everyone may have focused on rules coming out of the EU and US recently, simultaneously, Chile updated its security regulations for the first time since 1999.


Even once the legal and IT teams are able to understand the situation, however, there remains the issue of getting others in the organization to care.


It’s a lot to follow but also next to impossible to predict, agreed Daniel Ostrach, Senior Corporate Counsel at Microsoft. 鈥淥ne of the hardest things for us to do is anticipate the way that regulators are thinking 鈥 but we can鈥檛 run our business based on yesterday鈥檚 regulation,鈥 he explained. However, in today鈥檚 climate, just following the regulation 鈥渋s the bare minimum, that鈥檚 table stakes.鈥

Sabrina Ceccarelli, Global Vice President and Assistant General Counsel of Commercial at Lightspeed Commerce, gave the example of one recent privacy regulation: Quebec鈥檚 Law25, which is more similar to the EU鈥檚 General Data Protection Regulation (GDPR) than other Canadian privacy laws. Without enough privacy staff to keep up, her team turned to the privacy resources they did have: 鈥淲e do as much rinse and repeat as we can.鈥 They looked at areas such as training in which they already had pre-established guidance, then updated rather than reinventing the wheel.

Even once the legal and IT teams are able to understand the situation, however, there remains the issue of getting others in the organization to care. Joseph Lee, Director for Information Security & Compliance at law firm Arnold & Porter, said that his most effective method is simple: 鈥淏ombard people over and over and over.鈥 Constant reminders and messaging from multiple sources such as town halls helps people realize that cybersecurity is not a set-it-and-forget-it proposition, Lee said. 鈥淚f you just do an annual training, it鈥檚 not bad, you check a box, but that doesn鈥檛 keep it top of mind.鈥

From the technology standpoint, Rachi Messing, Co-Founder of startup Altorney, also noted that legal has an opportunity to work with engineering to make sure privacy and security is evident in everything they do. For instance, Messing noted that every development ticket or feature request at the company has a mandatory security and privacy analysis. That analysis is 鈥渘ot just a check box,鈥 he said, but forces tech teams to think through potential impacts and why they occur. 鈥淭hat really does force a focus in the culture of, How are we focusing on security? How are we focusing on privacy in everything that we do? Otherwise, that鈥檚 how you find yourself on the front page of The New York Times.鈥

Cyber Dungeons & Dragons

Once the awareness has been achieved, then it falls on the legal, IT, and other security and privacy-related teams to execute. Once upon a time, those teams might have all been separate entities, the panel noted, but Messing added: 鈥淭he truth is, in today鈥檚 world, there really can鈥檛 be a gap.鈥

At his startup, Messing said he and his co-founders did not have the ability for a formal chief information security officer (CISO) or privacy team. However, they picked outside counsel based explicitly on the firm鈥檚 ability to support the company around security, advise on privacy, and then work with the company鈥檚 engineers. 鈥淲orking together there is the only way that a company is going to be able to succeed,鈥 Messing explained. 鈥淚f the two sides are feuding with one another鈥 you鈥檙e never going to be able to survive in today鈥檚 world.鈥

Lightspeed鈥檚 Ceccarelli agreed, noting that the role of the corporate lawyer has changed. She says her legal team鈥檚 mantra last year was 鈥We鈥檙e building GCs,鈥 noting that for many corporate attorneys, the GC chair is their ultimate goal. However, implicit in that is that 鈥渘one of us can call ourselves an excellent tech lawyer if we don鈥檛 understand privacy.鈥 As a result, her team created knowledge-sharing exercises with continuous updates, which created some ownership and accountability for the legal department to work with the whole enterprise. 鈥淟egal counsel can鈥檛 just be doing contracts anymore,鈥 she said. 鈥淲e need to be more than that.鈥


The panel cautioned to make sure that not only is everybody speaking to one another 鈥 especially the lawyers 鈥 but they are speaking the same language when making these plans.


One way to make sure the organization comes together is through tabletop exercise, the panel suggested. Lee admitted that 鈥渢he tabletop exercise may seem like a corporate Dungeons & Dragons sort of thing,鈥 but added that it鈥檚 really important to go through potential risky scenarios. 鈥淚f you don鈥檛 have a plan of action, I make an analogy like it鈥檚 a kids鈥 soccer game, everybody is just going towards the ball,鈥 he explained. Tabletop exercise helps answer some basic questions: Who鈥檚 doing negotiations? Who鈥檚 going to the insurance carrier? Who鈥檚 doing communications, and how much?

From there, Ceccarelli suggested making a formal playbook, to make the process memorable and repeatable. The playbook should include engineering and IT, certainly, but it also gives the legal team a seat at the table to help guard against risk and potential worst-case scenarios. 鈥淏y doing that, you can proceed rather quickly but also mitigating any possible damages from the incident that has occurred,鈥 she added.

Finally, the panel cautioned to make sure that not only is everybody speaking to one another 鈥 especially the lawyers 鈥 but they are speaking the same language when making these plans. Microsoft鈥檚 Ostrach gave the example of a three-page legal memo that might give all of the relevant information on a new regulation but would never be read by engineers 鈥渟o it鈥檚 worthless.鈥 In addition to being a lawyer, today鈥檚 counsel need to be 鈥渁n old-timey phone connector,鈥 making sure that everybody is communicating with one another.

And that goes both ways, Lee of Arnold & Porter added. 鈥淚f you鈥檙e in IT and you鈥檙e not regularly talking to your general counsel, you should.鈥 Perhaps the best thing that all parties can do when it comes to privacy and security is a simple trick, he added: 鈥淏e proactive in terms of having those conversations.鈥

]]>
https://blogs.thomsonreuters.com/en-us/legal/legalweek-2024-cybersecurity/feed/ 0
Tax season is on its way and so is cybercrime: Cybersecurity considerations for tax firms /en-us/posts/tax-and-accounting/cybersecurity-considerations/ https://blogs.thomsonreuters.com/en-us/tax-and-accounting/cybersecurity-considerations/#respond Thu, 11 Jan 2024 17:50:30 +0000 https://blogs.thomsonreuters.com/en-us/?p=60073 During the 2022 tax season, roughly 94% of all taxes filed were 鈥 no doubt taxes, like most of our lives鈥 transactions, now take place in the digital world. As individuals and businesses have increased their online presences, it is expected that by 2025 there will be more than 41.6 billion should devices. And with this increased presence, cyber threats have also been on the rise, with around 800,000 reported cyber incidents that resulted in financial losses of between $7 billion to $10 billion in 2022, according to .

Cybersecurity in tax & accounting firms isn’t just a technical issue 鈥 it’s a critical business priority. With the increasing sophistication of cyber threats, the importance of robust cybersecurity measures within firms has never been more paramount.

Understanding the threats

It can be said, the complete business of tax & accounting firms is based on handling confidential information, making them attractive targets for cybercriminals. As tax firms embark upon the coming tax season, it is imperative that all employees are hypervigilant in their treatment of clients鈥 information. A between Stanford University Professor Jeff Hancock and the security firm found that most cyber incidents begin with employees. Not by malicious activities or being done on purpose, but rather through poor data security hygiene. The way in which criminals attempt to access firms including their clients鈥 information is through phishing attacks, which are fraudulent emails and other communications 鈥 such as text messages, phone calls, and voicemails 鈥 sent to employees that are designed to get them to reveal sensitive information. Over the years phishing has become and continues to be more sophisticated. The object of phishing is simple 鈥 create a data breach or other unauthorized access to clients鈥 sensitive information, which can include their PIN.

If the phishing attack is successful, ransomware may take place. is a malicious software that is loaded into the firm鈥檚 computer systems and ultimately blocks the firm from accessing any information. It can also threaten to make sensitive client information public unless a ransom is paid to the hackers.

The cost of a ransomware attack can be pricey and, in some cases, devastating. The national average cost of cyberattack is almost $1 million, and this cost can include the ransom payment and data recovery efforts. Situations in which a cyberattack could be devastating is when the attack results in hackers selling clients鈥 information on the dark web or elsewhere. For the tax firm that falls victim to this, this damage goes far beyond simple reputation loss, because clients no longer have confidence in the firm鈥檚 ability to keep client information safe. In addition, tax firms are required to immediately .

Tax & accounting firm leaders have a responsibility to their clients and employees to have a robust cybersecurity strategy, which should be a key part of every firm鈥檚 business strategy. Indeed, cybersecurity should be treated with the same thorough and thoughtful ways that the business is thinking about growth, tech investment, or any other significant strategy of the firm. Whether firm leaders assign one person or pull together a team to lead the firm鈥檚 cyber initiatives, it has to be done, including such steps as assessing the firm鈥檚 current vulnerabilities, thinking through if or whether additional technologies may be necessary, and most importantly, cultivating a culture that鈥檚 based on security awareness.

Tax firm cybersecurity best practices

      1. Any cybersecurity plan has to start with the employees, and more importantly, with a focus on employee training and awareness. All employees must be made aware of the potential threats out there 鈥 how to spot them and what to do should they encounter a threat. This can be achieved through regular training, but its most important to foster a culture in which employees are encouraged to be hypervigilant and speak up if they are suspicious or if an incident does occur.
      2. Instituting strong authentication protocols, which requires several steps to prove that someone who is seeking to access information has the right to access it, also is critical. This could mean having , an electronic authentication method in which a user is granted access only after successfully presenting two or more pieces of evidence as to their identity.
      3. Updating software regularly is not only important in order to enhance existing features but is needed to patch security flaws and add new security features.
      4. 鈥 which can protect data from being stolen, changed, or compromised 鈥 and works by scrambling data into a secret code that can only be unlocked with a unique digital key. Encrypting sensitive data provides a strong defense against unauthorized access.
      5. It cannot be overstated but having an (IRP) is just as critical as the precautions mentioned above. An IRP is a written document, formally approved by the senior leadership team, that helps an organization before, during, and after a confirmed or suspected security incident. Because even with all the precautions, a cyber incident can still take place, and having a plan in place for that eventuality can go a long way to minimizing damage.
      6. Leveraging technology to enhance security such as the use of artificial intelligence and machine learning can be pivotal in detecting and responding to cyber threats. These technologies can identify patterns that are indicative of malicious activity more quickly and accurately than can human analysts.

It is critical for tax & accounting firms to be cautious about cybersecurity, not only during tax season but all year-round. Such vigilance is ongoing and evolving, and firms must be in tuned into how to navigate this constantly changing landscape. By staying informed, investing in the right technologies and practices, and fostering a culture of security, tax & accounting firms can better protect themselves and their clients.

]]>
https://blogs.thomsonreuters.com/en-us/tax-and-accounting/cybersecurity-considerations/feed/ 0
Unifying forces: The synergy of cybersecurity and fraud prevention teams /en-us/posts/investigation-fraud-and-risk/cybersecurity-fraud-prevention-teams/ https://blogs.thomsonreuters.com/en-us/investigation-fraud-and-risk/cybersecurity-fraud-prevention-teams/#respond Thu, 04 Jan 2024 13:06:37 +0000 https://blogs.thomsonreuters.com/en-us/?p=59977 For too long, organizations treated cybersecurity and fraud prevention as separate entities, each dealing with its own set of challenges. However, the rising tide of cyber threats has shown that a united front is necessary.

Indeed, 2022 saw the second-highest number of data compromises in the United States in a single year, impacting at least 422 million individuals across various industries, according to the Annual Data Breach Report from the . This startling statistic highlights the urgency for organizations to fortify their defense mechanisms.

The also paints another part of the picture. While reported cybercrime complaints decreased by 5% compared to 2021, the potential total loss increased to $10.2 billion in 2022, up from $6.9 billion in 2021. California, Florida, and Texas led the charts with the highest number of cybercrime victims.

Further, received more than 5.1 million reports in 2022 鈥 among these, 46% were for fraud, and 21% were for identity theft. Credit card fraud accounted for 43.7% of identity thefts, followed by miscellaneous identity theft at 28.1%. This category includes online shopping and payment account fraud, email and social media fraud, and other forms of identity theft. Notably, Georgia, Louisiana, and Florida reported the highest number of identity theft cases.

The divide between fraud & cybersecurity teams

In many organizations, it’s common practice for organizations to maintain distinct cybersecurity and fraud prevention teams, each operating in their own silos. These teams function independently, addressing unique challenges and threats within their respective domains. However, as cyber threats evolve into more sophisticated forms, opportunistic criminals are increasingly discerning ways to exploit the existing divisions.

In the contemporary landscape, criminals have exhibited a high level of ingenuity, leveraging artificial intelligence to perfect fraudulent activities. Simultaneously, they have adopted advanced cybersecurity tactics to navigate and overcome many organizations鈥 defense mechanisms. The separation between cybersecurity and fraud prevention, initially established for organizational efficiency, has inadvertently become a vulnerability.

This disconnect now facilitates multifaceted attacks that transcend traditional boundaries. Criminals adeptly maneuver through the gaps between these specialized teams, executing complex strategies that blend cyber threats and fraudulent activities seamlessly. The intricate interplay between evolving criminal tactics and the segregated nature of cybersecurity and fraud prevention teams highlights the need for a more integrated and collaborative approach in the modern security landscape.

Fraud teams across industries traditionally focus on analyzing patterns of behavior to identify anomalies that may indicate fraudulent activity. They use sophisticated algorithms and machine learning models to detect suspicious transactions, account activities, or identity-related issues. Fraud teams often rely on historical data and trend analysis to develop strategies for preventing and mitigating fraud, placing a significant emphasis on post-authentication monitoring.

Regardless of the industry, cybersecurity teams are primarily concerned with safeguarding the organization鈥檚 entire IT infrastructure, networks, and data from unauthorized access, breaches, and cyber threats. Cybersecurity teams use various tools and technologies, such as firewalls, intrusion detection systems, and encryption protocols, to protect against cyber-attacks. Cybersecurity teams focus on identifying vulnerabilities, implementing patches, and ensuring the organization’s overall security posture.

Creating a holistic security strategy across industries requires a shift in mindset. Cybersecurity professionals understand that the strength of a security system is only as robust as its weakest link. Simultaneously, fraud prevention teams have come to realize the inadequacy of relying solely on customer authentication. Research indicates that 80% of fraud prevention professionals believe in the necessity of continuous monitoring beyond the authentication stage to combat evolving fraud tactics.

Preventing cyber-attacks before fraud occurs

The true power of collaboration lies in preventing cyber-attacks before they escalate into fraud. Organizations can create a proactive defense by integrating cybersecurity measures that focus on access points with fraud prevention strategies that monitor activities beyond authentication.

Let’s consider a scenario in which a cyber-criminal gains unauthorized access to an organization’s database through a phishing attack, obtaining sensitive user information. In a traditional, siloed approach, the cybersecurity team might detect the breach but may not immediately share insights with the fraud prevention team. This delay could provide the attacker with a window of opportunity to exploit compromised information for fraudulent activities.

Now, envision a collaborative environment in which both teams work seamlessly. The cybersecurity team, upon detecting the breach, rapidly shares information about the compromised accounts with the fraud prevention team. Together, they implement real-time monitoring, identifying anomalous activities beyond the authentication stage. This proactive stance allows them to prevent fraudulent activities before they occur, safeguarding user accounts, and mitigating the impact of the initial cyber-attack.

A collaborative cybersecurity and fraud prevention strategy enhances trust by demonstrating a proactive commitment to safeguarding stakeholder assets and data. When stakeholders witness a seamless, integrated defense against cyber threats, it instills confidence and reinforces the notion that their organization is at the forefront of security measures.

Strategic imperatives for unification

To achieve a seamless collaboration between cybersecurity and fraud prevention teams across industries, several strategic imperatives must be considered, including:

      • Integrated training programsDevelop training programs that expose both cybersecurity and fraud prevention teams to each other’s methodologies and tools. Cross-training enhances understanding and fosters a shared language between the two functions.
      • Unified communication channelsEstablish unified communication channels that facilitate real-time information-sharing. Implementing collaborative platforms ensures that insights from cybersecurity incidents are swiftly communicated to the fraud prevention team and vice versa.
      • Shared analytics platformsIntegrate analytics platforms that allow both teams to analyze data collaboratively. Shared dashboards and analytics tools enable a comprehensive view of threats and vulnerabilities across the entire organization.
      • Common metrics and KPIsDevelop common metrics and key performance indicators (KPIs) that align with the overarching goal of a unified defense. This ensures that both cybersecurity and fraud prevention teams are working towards shared objectives and evaluating success based on a common framework.
      • Continuous threat intelligence sharingEstablish a robust framework for continuous threat intelligence sharing. Regular updates on emerging threats and attack vectors empower both teams to adapt their strategies in real-time, staying ahead of evolving cybercriminal tactics.

Looking ahead, the collaboration between cybersecurity and fraud prevention is not just a harmonious alliance 鈥 it’s an essential strategy for safeguarding the digital realm against the evolving tactics of modern criminals. The historical divide between cybersecurity and fraud prevention is giving way to a unified and fortified defense, ensuring organizations across various sectors are equipped to face the challenges of the digital age.

Organizations across industries must embrace the paradigm shift towards collaboration between cybersecurity and fraud prevention teams. This integration is not merely a response to current threats but a forward-looking strategy to fortify industries against the continually evolving tactics of cybercriminals. By breaking down the historical silos and fostering a shared mindset, organizations can build a resilient defense that maximizes resources, prevents attacks, and enhances stakeholder trust in the digital era.

The strategic imperatives outlined provide a roadmap for organizations to navigate this transformative journey toward a unified and comprehensive security posture.

]]>
https://blogs.thomsonreuters.com/en-us/investigation-fraud-and-risk/cybersecurity-fraud-prevention-teams/feed/ 0
What corporate tax departments need to know about the SEC鈥檚 required reporting of cyber incidents /en-us/posts/tax-and-accounting/corporate-tax-departments-reporting-cyber-incidents/ https://blogs.thomsonreuters.com/en-us/tax-and-accounting/corporate-tax-departments-reporting-cyber-incidents/#respond Tue, 02 Jan 2024 14:20:09 +0000 https://blogs.thomsonreuters.com/en-us/?p=59987 There is no facet of life that doesn鈥檛 have a digital presence both personal and professional 鈥 from banking to medical care to simple sharing of photos, jokes, and recipes via social media, our information always is moving across the internet. And that makes the need for most critical, as we seek to protect the online identity, data, and virtual assets of businesses and individuals.

To that end, the U.S. Securities and Exchange Commission (SEC) has introduced new reporting requirements for companies to disclose any cyber incidents that may occur. These requirements have significant implications for corporate tax departments, which handle sensitive financial data and are crucial to maintaining the fiscal integrity of any organization. Tax function leaders need to be aware of what these new requirements entail and how their departments can effectively prepare and comply.

Earlier this year, the SEC US issuers to disclose cyber incidents they have experienced along with annually disclosing material information on their cybersecurity risk management, strategy, and governance. (Foreign private issuers under SEC oversight will need to make similar disclosures.) The SEC noted that this move underscores its commitment to transparency and investor protection in the digital age. For corporate tax departments, these requirements mean a heightened responsibility to safeguard financial data and disclose any breaches that may have material implications.

Not surprisingly, cyber-attacks continue to rise. More than 80% of organizations experienced more than one data breach in 2022, according to the . Indeed, the impact of cyber incidents can be costly 鈥 of a data breach in 2023 pegged at $4.45 million, and the total number of ransomware attacks over the last five years. And the cost of cyber incidents can go beyond financial 鈥 businesses can face their customers鈥 loss of confidence in the company, which could result in lost business and a compromised reputation.

The role of tax departments

Corporate tax departments are one of the few departments in a company that touches every part of the business, utilizing data from all aspects of the company and basing their financial reporting on this data. Now, tax departments must factor in the risk of cyber incidents into their financial reporting processes. Incidents, which by the way, can compromise the accuracy and integrity of their financial data and directly impact tax reporting and disclosures.

In the 成人VR视频 Institute鈥檚 recent , 80% of corporate tax survey respondents said their departments have half or less of their work automated. Also, those respondents that said their departments felt under-resourced received more frequent and higher tax penalties compared to those respondents who felt their departments were sufficiently resourced. Given the sensitive nature of the data they handle, tax departments must be vigilant about data privacy and security.

Understanding the new SEC regulations and what it means for their company is crucial for tax departments to avoid these risks. As departments do their work, they must understand in advance what the potential risks are for their department. That means that risk assessment and management are key.


Corporate tax survey respondents said their departments have half or less of their work automated. Yet, given the sensitive nature of the data they handle, tax departments must be vigilant about data privacy and security.


By reviewing and understanding the SEC regulations and making sure their own procedures adhere to the SEC reporting guideline is essential, including establishing clear procedures for detecting, reporting, and responding to cyber incidents. Effective communication and collaboration with their companies鈥 IT and cybersecurity teams also are critical. This partnership ensures that tax-related data is adequately protected against an increasing number of cyber-threats.

And by working closely with IT, tax departments can beef-up their own security measures and ensuring that the department has and is using protective steps like encryption when dealing with sensitive documents and data. Indeed, leaders need to consider establishing multi-factor authentication and more importantly, making sure staff is adhering to regularly updating the system鈥檚 software.

Whenever possible, such security measures should be incorporated into the workflow or done through formally scheduled security audits. These are critical steps for department leaders to begin being able to predict where their vulnerabilities may lie in how the team works, especially as it handles a tremendous amount of critical data. In this case, prevention is better than a cure 鈥 a conducted by Stanford University and a security firm found that more than 80% of data breaches are caused by employee mistakes. Continuously educating staff about cybersecurity best practices and the importance of reporting irregularities can significantly reduce the risk of such breaches.

The new SEC reporting requirements on cyber incidents underscore the increasing intersection between cybersecurity and financial reporting. Corporate tax departments, as custodians of critical financial data, must take proactive steps to align their operations with these requirements. By enhancing cybersecurity measures, revising policies, and fostering a culture of compliance and awareness, tax departments can not only comply with these new regulations but also fortify their defenses against the ever-evolving landscape of cyber-threats.

]]>
https://blogs.thomsonreuters.com/en-us/tax-and-accounting/corporate-tax-departments-reporting-cyber-incidents/feed/ 0