Ranking of AI platforms by data privacy in 2025

Evaluation of Data Privacy Practices in Popular AI Platforms: A recently released assessment by Incogni scrutinizes the data privacy measures of contemporary generative AI platforms and large language models (LLMs).

In a new report, research firm Incogni has evaluated the data privacy practices of nine leading AI platforms, shedding light on significant gaps in transparency, data control, and user protection across the industry.

The report found that platforms like HuggingChat rank best for user privacy, owing to their focus on data protection and use of open-source language models. Platforms with extensive data collection and less transparent practices, such as Meta AI, Gemini (Google), Copilot (Microsoft), and DeepSeek, tend to have poorer privacy records. Strict European data protection laws push AI providers there to implement stronger privacy safeguards, unlike some US-based platforms operating under laxer regulations.

When it comes to model training disclosures, platforms such as OpenAI, Mistral, Anthropic, and xAI provide clear explanations about prompt usage. However, companies like Microsoft and Meta require users to search through unrelated documentation, while others like DeepSeek, Pi AI, and Google's Gemini offer the least clarity.

The report also highlighted that most platforms share prompts with a defined set of third parties. Notably, Microsoft's Copilot Android app claimed not to collect or share any user data, a claim inconsistent with the disclosures of its iOS counterpart.

In terms of mobile app data collection and sharing, Le Chat (Mistral) posed the lowest risk, followed by ChatGPT (OpenAI) and Grok (xAI), while Meta AI was the most aggressive in this regard.

Privacy documents provided by Meta, Microsoft, and Google are long and complex, covering multiple products. Some platforms, like ChatGPT, Copilot, Le Chat, and Grok, allow users to opt out of training. However, no platform offers a way for users to remove their personal data from existing training sets.

Pi AI appears to use the fewest external sources, focusing mainly on direct input and public data. On the other hand, platforms like Claude (Anthropic) collect data from commercial datasets, and some platforms, like ChatGPT, Gemini, DeepSeek, and Meta AI, collect data from security partners.

Microsoft stated that it may use data from brokers, while Meta and Anthropic share information with research partners. Gemini and Meta AI collect exact user locations, and Copilot collects data from financial institutions. Grok shares photos and app interaction data, and Claude collects email addresses.

The report concludes that AI platforms must make privacy documentation easier to read, provide modular privacy policies for each product, and avoid relying on broad umbrella policies. OpenAI and xAI are noted for offering helpful support articles, though these must be maintained over time to remain accurate.

Incogni's findings underscore the need for greater transparency and control in AI data practices. As generative AI and large language models become deeply embedded in everyday tools and services, ensuring user privacy becomes increasingly important.
