
AI Assistants Disclose Private User Data to Third Parties

Researchers from University College London (UK) and the University of Reggio Calabria (Italy) found that ten AI browser assistants breached data protection regulations, gathering user data such as age, gender, and preferences.

A study by researchers from University College London (UCL), the University of Reggio Calabria, and other institutions has revealed that popular AI-powered web browser assistants breach data protection regulations, collecting and sharing sensitive personal data, often without users' knowledge or consent.

The study analysed the data collection practices of ten widely used browser AI assistants, including Microsoft Copilot, Merlin, ChatGPT for Google, Sider, and TinaMind, and found several of them to be in breach of these regulations.

The assistants were found to capture user data including full webpage content and form inputs, such as online banking and health details. Some also shared identifying information, including IP addresses and user queries, with third-party platforms, exposing users to cross-site tracking and ad profiling.

The study identified violations of U.S. privacy laws and anticipated similar or stricter infringements of European rules such as the GDPR and UK data protection law. The researchers call for privacy-by-design approaches, greater transparency, stronger user controls, local data processing, and regulatory oversight to protect users against unauthorized collection and surveillance by AI assistants.

The research was presented at the 2025 USENIX Security Symposium, where it prompted calls for sweeping regulatory reform and for privacy protection to be built into AI assistant development. The findings expose serious privacy risks and legal violations by generative AI browser assistants, and they press developers and regulators to act before sensitive user data is further misused in this rapidly growing technology ecosystem.

While the study did not attribute specific statutory violations to each individual assistant, its findings underscore the need for greater scrutiny and regulation of AI assistants to ensure that user data is protected and privacy is respected in the digital age.
