AI security oversights have cyber specialists urgently seeking solutions
In the rapidly evolving world of technology, the use of Generative AI (GenAI) applications has seen a significant surge. However, a recent report by Palo Alto Networks, State of Generative AI 2025, has highlighted several concerns regarding data security and data loss prevention (DLP) in these applications.
One of the key findings is a substantial increase in GenAI-related DLP incidents. Over the past year, these incidents have more than doubled, accounting for 14% of all data security events. The report points to the increased use of GenAI tools as a primary cause, leading to risks of sensitive data exposure and intellectual property leaks, particularly in the technology and manufacturing sectors.
Another concern is the rise of "Shadow AI": unsanctioned and unmonitored GenAI tool usage that complicates data governance and monitoring for IT security teams. Without visibility into AI usage, security teams struggle to control how these tools are used across the organisation.
To address these challenges, the report offers several recommendations. Firstly, organisations should establish visibility and control over all GenAI application usage. This includes implementing conditional access policies, managing permissions at user and group levels, and limiting access to AI tools to authorised individuals.
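As an illustration of this first recommendation, a conditional access check can be reduced to a mapping from user groups to permitted applications. The sketch below is a minimal Python illustration; the app names, group names, and policy structure are invented for the example and are not taken from the report or any vendor's API.

```python
# Sanctioned GenAI applications; anything else is blocked by default.
# (App and group names here are hypothetical examples.)
ALLOWED_GENAI_APPS = {"chat-assistant", "code-helper"}

# Per-group policy: which sanctioned GenAI apps each group may use.
ACCESS_POLICY = {
    "engineering": {"chat-assistant", "code-helper"},
    "marketing": {"chat-assistant"},
}

def is_access_allowed(user_groups, app_name):
    """Return True if any of the user's groups permits the requested app."""
    if app_name not in ALLOWED_GENAI_APPS:
        return False  # unsanctioned (Shadow AI) app: deny by default
    return any(app_name in ACCESS_POLICY.get(g, set()) for g in user_groups)

print(is_access_allowed(["engineering"], "code-helper"))  # True
print(is_access_allowed(["marketing"], "code-helper"))    # False
```

Denying by default is the key design choice: an app absent from the sanctioned list is blocked even before group permissions are consulted, which is what keeps unsanctioned tools from slipping through.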
Secondly, organisations should safeguard sensitive data. Real-time content inspection, combined with centralised policy enforcement, should be used to detect and prevent unauthorised data exfiltration in GenAI workflows.
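A minimal sketch of what such real-time content inspection might look like, assuming a simple regex-based scanner applied to outbound GenAI prompts. Production DLP engines use far richer detection (checksums, classifiers, contextual analysis); the patterns and policy below are illustrative assumptions only.

```python
import re

# Illustrative patterns for a few common sensitive-data types.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def inspect_prompt(text):
    """Return the names of sensitive-data types detected in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def enforce_policy(text):
    """Central policy point: block the prompt if anything sensitive matches."""
    findings = inspect_prompt(text)
    return ("blocked", findings) if findings else ("allowed", [])
```

In practice this check would sit inline, between the user and the GenAI service, so that a blocked prompt never leaves the organisation's boundary.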
Thirdly, zero trust security architectures should be adopted to counter sophisticated, AI-enhanced cyberattacks. This means continuous verification, least-privilege access, and segmentation to mitigate advanced threats.
Fourthly, strong governance measures should be implemented to address insider threats and Shadow AI. This includes data-sharing restrictions, strict user access controls, detailed audit trails, and behavioural analytics to detect anomalies and mitigate insider risks amplified by GenAI capabilities.
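To illustrate the behavioural-analytics idea, a toy baseline check might flag a user whose GenAI upload volume jumps well beyond their own historical norm. The z-score threshold and data shape below are assumptions made for the sketch, not details from the report.

```python
from statistics import mean, stdev

def is_anomalous(history_kb, today_kb, z_threshold=3.0):
    """Flag today's GenAI upload volume (in KB) if it deviates from the
    user's historical baseline by more than z_threshold standard deviations."""
    if len(history_kb) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history_kb), stdev(history_kb)
    if sigma == 0:
        return today_kb > mu  # flat baseline: any increase stands out
    return (today_kb - mu) / sigma > z_threshold
```

A real user and entity behaviour analytics (UEBA) system would model many signals at once (timing, destinations, file types), but the principle is the same: compare current activity against an established per-user baseline and surface outliers to analysts.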
Lastly, regular training and incident response processes should be established to educate employees on AI risks and respond swiftly to breaches involving generative AI.
These recommendations aim to balance innovation with security, enabling organisations to safely harness GenAI while protecting sensitive data and intellectual property as usage grows rapidly. The report is based on analysis of GenAI traffic from more than 7,000 organisations worldwide throughout 2024.
It's worth noting that the majority of generative AI usage falls into writing assistance (34%), conversational agents (29%), and enterprise search (11%). Even so, 10% of the GenAI applications in use are classified as high risk.
Moreover, non-compliance with emerging AI and data laws can result in severe penalties for organisations. Therefore, it's crucial for organisations to implement conditional access management to limit access to Generative AI platforms, apps, and plugins.
In conclusion, while Generative AI offers numerous benefits, it also presents new security challenges. By implementing the recommendations outlined in the Palo Alto Networks report, organisations can effectively manage these challenges and harness the power of Generative AI while ensuring data security and compliance.
- As generative AI applications continue to proliferate, ensuring compliance with emerging AI and data laws is crucial, since non-compliance can expose organisations to severe penalties; conditional access management helps limit that exposure across Generative AI platforms, apps, and plugins.
- To mitigate the risks of sensitive data exposure and intellectual property leaks associated with Shadow AI and the rising number of GenAI-related data security events, organisations should establish visibility and control over all GenAI application usage: implement conditional access policies, manage permissions at user and group levels, and restrict AI tools to authorised individuals.