AI-driven fraud versus the defenders: can fraud-fighting agencies shift the balance of power against AI-assisted fraudsters?
In a rapidly evolving digital landscape, government agencies are grappling with increasingly sophisticated attacks. According to recent research, agencies that delay adopting AI in fraud prevention risk being overwhelmed. Only one in ten government agencies has all the tools and resources necessary to combat fraud, and nearly a third face significant resource limitations.
The SAS and Coleman Parkes global research report sheds light on key AI-powered tools and strategies being adopted by government agencies to prevent and mitigate fraud, waste, and abuse.
One such strategy is the use of AI analytics for fraud detection. By analysing large datasets, AI can identify suspicious patterns or anomalies indicative of fraud, such as false claims or billing irregularities. This accelerates investigations and enables proactive enforcement. For instance, the U.S. federal government's use of AI to target health care fraud schemes has resulted in charges against hundreds of defendants in large-scale fraud cases totalling billions of dollars.
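To make the idea of anomaly detection concrete, here is a minimal sketch of one classic technique: flagging billing amounts whose modified z-score (based on the median absolute deviation, which is robust to the very outliers being hunted) is extreme. The data and threshold are hypothetical, and production systems described in the report would use far richer models than this.

```python
import statistics

def flag_anomalous_claims(amounts, threshold=3.5):
    """Return amounts whose modified z-score exceeds `threshold`.

    Uses the median absolute deviation (MAD) rather than the standard
    deviation, so a single huge claim cannot mask itself by inflating
    the spread of the data.
    """
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:  # all claims identical apart from possible ties
        return []
    return [a for a in amounts if 0.6745 * abs(a - med) / mad > threshold]

# Hypothetical billing amounts: one claim is wildly out of line.
claims = [120, 135, 128, 110, 142, 125, 9800]
print(flag_anomalous_claims(claims))  # flags the 9800 claim
```

A rule this simple only catches gross outliers; its value here is illustrating the pattern-versus-baseline reasoning that large-scale AI analytics apply across millions of records.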
AI is also being used for risk management and impact assessments. Before deployment, agencies are required to conduct AI impact assessments that examine data fitness and potential cost savings, and to maintain ongoing risk monitoring that mitigates issues in AI systems that may cause waste or abuse.
Data security best practices for AI systems are another crucial component. Agencies follow cybersecurity guidance focused on protecting AI systems throughout their lifecycle, addressing risks such as data supply chain vulnerabilities, malicious data modifications, and data drift.
Legal and regulatory measures against AI-enabled fraud are also being enacted. Some states are criminalising the use of AI-generated forged digital likenesses (e.g., deepfakes) for fraud or financial crimes, enabling prosecutors to charge offenders with serious felonies and further discouraging abuse of AI technologies in scams.
Together, these tools and strategies form a comprehensive framework where AI is leveraged both as a detection mechanism and a subject of governance, security, and legal control to reduce fraud, waste, and abuse effectively in government operations.
Governments are urged to invest in AI-driven defences now to avoid higher costs in lost funds, eroded trust, and unmanageable risks. In law enforcement, analytics tools can connect disparate pieces of data, uncovering networks of fraudulent activities.
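The "connecting disparate pieces of data" step can be sketched as a graph problem: claims that share any identifier (a phone number, bank account, or address) are linked into connected components, and a large component is a candidate fraud ring. The claim records and identifiers below are hypothetical, and this breadth-first grouping is a simplified stand-in for the entity-resolution tooling the article refers to.

```python
from collections import defaultdict

def link_claims(claims):
    """Group claims that share any identifier into connected components.

    `claims` maps a claim ID to a set of identifier strings. Two claims
    end up in the same component if a chain of shared identifiers
    connects them.
    """
    by_attr = defaultdict(set)
    for cid, attrs in claims.items():
        for attr in attrs:
            by_attr[attr].add(cid)

    seen, components = set(), []
    for cid in claims:
        if cid in seen:
            continue
        stack, component = [cid], set()
        while stack:
            current = stack.pop()
            if current in component:
                continue
            component.add(current)
            for attr in claims[current]:
                stack.extend(by_attr[attr] - component)
        seen |= component
        components.append(component)
    return components

# Hypothetical claims: C1-C3 are chained by shared identifiers; C4 is isolated.
claims = {
    "C1": {"phone:555-0100", "acct:111"},
    "C2": {"phone:555-0100", "acct:222"},
    "C3": {"acct:222", "addr:12 Oak St"},
    "C4": {"acct:999"},
}
rings = link_claims(claims)  # [{C1, C2, C3}, {C4}]
```

Note how C1 and C3 never share an identifier directly, yet land in the same component through C2; that transitive linking is what exposes rings that look innocuous claim by claim.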
Looking ahead, over 90% of respondents expect to use Generative AI in the next two years. The market for Generative AI is projected to reach $1.3 trillion by 2032. However, the use of AI also presents new challenges. AI-driven malware adapts to evade outdated security systems, surging ahead of human-led defences. Synthetic identities, built from a mix of real and fabricated data, bypass traditional checks, enabling fraud rings to claim benefits or open fraudulent accounts across jurisdictions.
Carl Hammersburg, who manages the government and healthcare risk and fraud team at SAS, emphasises the importance of AI in combating fraud. With a background in anti-fraud activities for Washington State's exclusive workers' comp insurer, Hammersburg formed that agency's comprehensive fraud program in 2004, covering tax and premium audit, claim investigation, provider fraud, and collections.
Despite these efforts, 95% of global government agencies have experienced AI-powered fraud schemes. Fraudsters use AI-powered platforms to generate synthetic identities, craft personalised phishing campaigns, and design malware that evades detection. However, AI can also automate detection, uncover hidden connections, and act in real time.
AI analysis of whistleblower tips can prioritise high-impact investigations, while machine learning models in payment fraud detection combine behaviour profiling with rules-based detection in a layered fraud prevention approach. These models use advanced methodologies and statistical techniques to identify risky transactions and flag them for further review while reducing friction for legitimate users.
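A layered approach like the one described can be sketched as a pipeline where hard business rules and a model score each get a veto. Everything below is illustrative: the transaction fields, thresholds, and the stubbed `model_score` are hypothetical, not details of any SAS product.

```python
def triage_transaction(txn, model_score):
    """Layered fraud check: deterministic rules first, then a model score.

    `txn` is a dict with hypothetical fields; `model_score` stands in for
    the output of a trained behaviour-profiling model (0.0 = benign,
    1.0 = high risk). Any tripped layer escalates the transaction for
    human review instead of blocking it outright, reducing friction for
    legitimate users.
    """
    reasons = []
    if txn["amount"] > 10_000:
        reasons.append("rule:large-amount")
    if txn["country"] not in txn["account_countries"]:
        reasons.append("rule:unusual-country")
    if model_score > 0.8:
        reasons.append("model:high-risk")
    return ("review" if reasons else "approve", reasons)

# A routine domestic payment sails through; an oversized foreign one is flagged.
ok = triage_transaction(
    {"amount": 50, "country": "US", "account_countries": {"US"}}, 0.10)
flagged = triage_transaction(
    {"amount": 25_000, "country": "RO", "account_countries": {"US"}}, 0.92)
```

The design choice worth noting is that the rules and the model are independent layers: rules catch known schemes with explainable reasons, while the model generalises to novel behaviour, which is the layering the report describes.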
A staggering 96% of public sector employees responsible for monitoring fraud within their organizations say fraud has negatively impacted citizen trust in their agency and its programs. As such, the effective use of AI in fraud prevention is not just about protecting government funds, but also about maintaining public trust and confidence.
Technology plays a vital role in the implementation of AI-powered strategies for fraud prevention in government agencies. This includes utilizing AI analytics for fraud detection, risk management, and impact assessments, as well as data security best practices and legal measures against AI-enabled fraud. At the same time, AI-driven malware poses new challenges, necessitating continuous advancement of AI-driven defenses to combat these evolving threats and maintain public trust.