
Data Breaches Strike One-Fourth of Companies in the UK and US Through Toxic Digital Infiltrations

Artificial intelligence (AI) is facing an escalation in malicious attempts to distort or manipulate training data, according to new research from security and compliance firm IO.

Approximately one-fourth of businesses in the UK and US experience data contamination via cyberattacks


Security and compliance specialist IO has published its third annual State of Information Security Report, which sheds light on growing concerns around the use of Artificial Intelligence (AI) and its potential risks.

According to the report, just over a quarter (26%) of the IT security leaders polled have suffered a data poisoning attack. In this type of attack, threat actors interfere with a model's training data to alter the model's behaviour, a concern that was previously thought to be more theoretical than widespread.
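The mechanism described above can be illustrated with a deliberately simplified sketch (not drawn from the report; all data, labels, and function names here are hypothetical). An attacker who can inject mislabeled points into a training set can flip the predictions of a toy nearest-neighbour classifier:

```python
# Toy illustration of label-flipping data poisoning against a
# 1-nearest-neighbour classifier. Purely hypothetical example data.

def predict(training, point):
    """Return the label of the training sample closest to `point`."""
    def dist2(sample):
        (x, y), _label = sample
        return (point[0] - x) ** 2 + (point[1] - y) ** 2
    return min(training, key=dist2)[1]

# Clean training data: two well-separated clusters
clean = [((0, 0), "benign"), ((1, 0), "benign"),
         ((9, 9), "malicious"), ((10, 9), "malicious")]

# Attacker injects a single mislabeled point inside the benign cluster
poisoned = clean + [((0.4, 0.4), "malicious")]

print(predict(clean, (0.5, 0.5)))     # benign
print(predict(poisoned, (0.5, 0.5)))  # malicious
```

Real-world attacks are far subtler, but the principle is the same: corrupting even a small fraction of training data can change what a model learns.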

The unauthorized use of Generative AI (GenAI) tools can introduce major risks associated with data leakage and compliance infringements. Earlier this year, DeepSeek's flagship LLM R1 was found to contain multiple vulnerabilities, highlighting the potential dangers of such unsanctioned use.

Chris Newton-Smith, CEO of IO, described AI as a 'double-edged sword': while it offers numerous benefits, it also poses significant risks. The report does not name companies using GenAI tools in an unsanctioned way, but the general risks of doing so include data security breaches, intellectual property violations, compliance failures, and inaccurate or biased outputs that distort decision-making. These risks underscore the importance of governance and risk mitigation when deploying generative AI in companies.

The report also revealed that 37% of enterprises are seeing employees use GenAI tools without permission. Such unauthorized use can introduce vulnerabilities if the tool in question is insecure.

Respondents feel prepared to defend against various AI-related threats. They reported high levels of preparedness for AI-generated phishing (89%), deepfake impersonation (84%), AI-driven malware (87%), misinformation (88%), shadow AI (86%), and data poisoning (86%).

However, respondents seem conflicted in their attitudes to AI. While three-quarters (75%) are putting acceptable-use policies for AI in place, they still cite AI-generated phishing, misinformation, shadow AI, and deepfake impersonation in virtual meetings as the biggest emerging cybersecurity threats for the coming year.

On a positive note, incidents of deepfake-related attacks fell from 33% last year to 20% this year, according to IO. This decrease suggests that efforts to combat deepfakes are bearing fruit.

In conclusion, the IO report serves as a wake-up call for businesses to address the security and compliance challenges associated with AI. As AI continues to permeate various aspects of our lives, it is crucial for organisations to implement robust governance and risk mitigation strategies to ensure the safe and ethical use of AI.
