AI Security Mechanisms in Transaction Processing
In the rapidly evolving world of finance, artificial intelligence (AI) is no longer just a buzzword but a strategic necessity. AI security, likewise, is no longer confined to IT: it spans processes, customer interfaces, and corporate culture.
The traditional IT principle that "security is inconvenient" is losing ground in a world where speed and convenience are paramount. As AI advances, two camps are emerging: those using AI for defense (for example, machine-learning-based detection) and those using it for attacks, with tactics such as AI-generated deepfakes and phishing emails.
The financial sector is a primary target of these AI-driven attacks. In 2024, AI-powered financial fraud featured sophisticated tactics, including deepfake voice and video scams and AI-based chatbots used for phishing. In one major incident, the engineering company Arup lost $25.5 million after fraudsters staged an AI-generated deepfake video call impersonating the company's UK-based CFO and colleagues, convincing a finance worker to authorize large transfers.
Synthetic identities created by AI—combining fabricated photos, false addresses, and stolen credentials—have been used to bypass KYC checks and open fraudulent bank accounts or crypto exchange profiles, resulting in significant losses. Additionally, AI-mutated malware adapts in real time to evade detection before launching targeted ransomware or phishing attacks.
These developments indicate a shift towards more automated, realistic, and scalable fraud methods powered by AI technologies. To combat this, financial institutions must enhance their detection and defense strategies.
One response is to deploy AI directly in transaction screening. Visa and Pay.UK tested a system that improved A2A payment fraud detection rates by 40%, with a false-positive ratio of 5:1. Deutsche Bank has used AI-based risk models for fraud detection since 2022, while the Sparkassen group uses KIWI, a hybrid learning system that adapts to threats in real time.
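Under the hood, screening systems of this kind combine many weak risk signals into a single score per transaction. The sketch below is purely illustrative: the feature names, weights, and threshold are invented for this example and are not taken from the Visa/Pay.UK, Deutsche Bank, or KIWI systems, which use far richer models.

```python
# Minimal sketch of score-based transaction screening.
# All features, weights, and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount_eur: float
    hour: int               # 0-23, local time of initiation
    new_payee: bool         # first transfer to this recipient?
    country_mismatch: bool  # device location vs. account country

def risk_score(tx: Transaction) -> float:
    """Combine weak fraud signals into a single score in [0, 1]."""
    score = 0.0
    if tx.amount_eur > 10_000:          # unusually large amount
        score += 0.4
    if tx.hour < 6 or tx.hour > 22:     # unusual time of day
        score += 0.2
    if tx.new_payee:                    # no payment history
        score += 0.2
    if tx.country_mismatch:             # geographic inconsistency
        score += 0.2
    return min(score, 1.0)

REVIEW_THRESHOLD = 0.7  # above this, hold the payment for review

# A large 3 a.m. transfer to a brand-new payee scores ~0.8:
tx = Transaction(amount_eur=50_000, hour=3, new_payee=True,
                 country_mismatch=False)
print(risk_score(tx) >= REVIEW_THRESHOLD)  # held for review
```

Production systems replace the hand-set weights with learned models, but the shape of the decision (score, then threshold) is the same.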
However, the human factor remains a central weak point in the face of deepfakes, social engineering, and personalized phishing emails. Transparency, education, and systems that deliver on their promises are crucial for maintaining customer trust in the financial industry, as customer expectations for comfort and security continue to rise.
The rise of AI in finance has triggered an arms race between financial institutions and cybercriminals. Instant payments sharply reduce the time window for security checks, making AI essential for rapid decision-making. Financial fraud itself has become professionalized, with criminal networks operating like start-ups: fast, data-driven, and iterative.
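Because an instant payment settles in seconds, the fraud check must answer within a hard time budget or fall back to a conservative default. The sketch below illustrates that pattern; the 50 ms budget, the stand-in model, and the 0.7 threshold are assumptions for the example, not figures from any real scheme.

```python
# Sketch: time-budgeted fraud scoring for instant payments.
# If the model cannot answer within the budget, hold the payment
# rather than releasing it unscored. All values are hypothetical.
import concurrent.futures

TIME_BUDGET_S = 0.05  # e.g. 50 ms of the instant-payment SLA

def model_score(tx_amount: float) -> float:
    # Stand-in for a real model call (feature lookup + inference).
    return 0.9 if tx_amount > 10_000 else 0.1

def decide(tx_amount: float) -> str:
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(model_score, tx_amount)
        try:
            score = future.result(timeout=TIME_BUDGET_S)
        except concurrent.futures.TimeoutError:
            return "hold"  # conservative fallback on timeout
    return "hold" if score >= 0.7 else "release"

print(decide(50_000.0))  # hold
print(decide(40.0))      # release
```

The key design choice is the fail-safe direction: on timeout the system holds the payment, trading a little customer friction for protection when the model is slow or unavailable.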
Reported examples of AI-enabled cybercrime include the compromise of over 200 customer accounts via AI-based chatbots in October 2024, causing $50 million in damages; the injection of fake transaction data into the SWIFT network in November 2024, resulting in a loss of €120 million; a $25 million transfer authorized by a finance director in March 2024 after fraudsters used AI-generated deepfake voices; and the transfer of €35 million from a major German bank in September 2024 using AI-generated voice clones.
The increasing use of AI in finance also opens up additional attack surfaces, particularly with the rise of account-to-account (A2A) payments due to open banking and instant payments. Lack of agility in the industry can become a risk factor due to the rapid evolution of fraud tools.
Despite these challenges, the benefits of AI in finance are undeniable. Modern AI systems can detect suspicious transactions with an accuracy of over 95%. As the battle against AI-driven financial fraud continues, it's clear that a proactive, adaptive approach is essential for maintaining security in the digital age.
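The headline accuracy figure deserves context: because fraud is rare, even a highly accurate classifier produces far more false alarms than true detections, which is why false-positive ratios matter as much as accuracy. The back-of-the-envelope calculation below uses illustrative assumptions (prevalence, sensitivity, specificity), not measured figures from any system named in this article.

```python
# Why raw "accuracy" is misleading for fraud detection: the base rate.
# All rates below are illustrative assumptions.
n = 1_000_000        # transactions screened
fraud_rate = 0.001   # assume 0.1% are fraudulent
sensitivity = 0.95   # fraction of fraud caught
specificity = 0.95   # fraction of legitimate traffic passed

true_pos = n * fraud_rate * sensitivity                # ~950 frauds caught
false_pos = n * (1 - fraud_rate) * (1 - specificity)   # ~49,950 false alarms

# Roughly 53 false alarms per caught fraud under these assumptions:
print(f"false alarms per caught fraud: {false_pos / true_pos:.1f}")
```

Under these assumptions, a "95% accurate" screen raises dozens of alerts per real fraud, so reducing false positives is often the harder and more valuable engineering problem.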
- Financial institutions are turning AI into a defensive asset, fielding detection systems that flag suspicious transactions with over 95% accuracy and have improved A2A payment fraud detection rates by 40%.
- In parallel, cybercriminals are weaponizing the same technology, using deepfake voice scams, AI-generated video impersonations, and AI-based phishing chatbots to inflict heavy losses and erode customer trust.