AI Practitioners Breaching Regulations: Findings from CalypsoAI Report
Unauthorized AI Use in the Workplace: A Growing Security Concern
A new report by CalypsoAI has highlighted the increasing problem of "shadow AI" in the workplace, as employees widely adopt AI tools without official approval or oversight from IT or AI governance teams.
Donnchadh Casey, CEO of CalypsoAI, has stated that the trend should serve as a wake-up call for the need for visibility, enforcement, and cultural change in security programs. The report finds that the rapid adoption of generative AI tools is outpacing the ability of many organizations to manage the risks.
The ease of access to AI tools, such as ChatGPT, their flexibility to improve productivity, and employee preference for faster, less restrictive solutions compared to official corporate tools are key drivers of this trend. In fact, around 29% of employees even pay out of pocket for AI tools they use at work without management knowledge, and most receive little or no training on safe AI use.
This unsanctioned use is pervasive, with surveys showing that up to 93% of employees input company data into unauthorized AI platforms, exposing sensitive client and internal information. As a result, security leaders and professionals face major challenges because traditional IT governance and security strategies are not designed to handle AI-specific risks like prompt injections or data leakage.
The implications for security leadership and professionals are significant. Unauthorized AI use results in frequent exposure of confidential client information (32%) and private internal data (37%), creating vulnerabilities often invisible to IT teams. AI introduces unique threats, such as prompt injection attacks and AI-driven data leakage, which traditional cybersecurity tools and protocols cannot adequately mitigate.
The rapid, decentralized adoption of AI without oversight also undermines organizational control and compliance with data privacy and intellectual property protections. Poor governance of AI usage can erode customer trust, while organizations demonstrating responsible AI practices may gain a competitive advantage.
To address these issues, security professionals must implement AI-centric solutions including AI data gateways, prompt shielding, content filtering, and audit trails to control risk. The lack of formal training on AI risks and governance among employees adds to the complexity of managing shadow AI safely.
The report warns that inappropriate use of AI "isn't a future threat - it's already happening inside organizations today." Nearly half (46%) of the security leaders and professionals surveyed have already submitted proprietary company information to AI systems. One-third of the security leaders surveyed report feeling no guilt about breaking AI rules.
In summary, security leaders must urgently address shadow AI by establishing robust AI governance frameworks and deploying security controls specifically designed for AI threats. Without such measures, organizations face increasing risks to sensitive data, compliance failures, and reputational damage driven by the widespread unauthorized use of AI tools among employees.
The survey findings add weight to concerns raised in SecurityInfoWatch.com's recent article, Survey: Widespread AI Use in the Workplace Creating New Security Risks. The CalypsoAI Insider AI Threat Report reveals that 42% of security leaders and professionals are willing to use AI in violation of company policy. DMP introduces JamAlert, a detection device to combat the growing threat of illegal cell jammers.
References:
[1] CalypsoAI Insider AI Threat Report [2] SecurityInfoWatch.com, Survey: Widespread AI Use in the Workplace Creating New Security Risks [3] Censuswide survey of 1,002 full-time U.S. office workers conducted in June 2023.