The rapid expansion of artificial intelligence in workplaces is outpacing many companies’ ability to manage and secure its use, raising concerns about data exposure and governance gaps.
A new study from Cyberhaven reveals that a significant share of employee interactions with AI platforms now involves confidential business information. The findings suggest that workers are increasingly turning to external AI services, often without formal approval, creating blind spots for corporate security teams.
The research was conducted by Cyberhaven Labs using real-time data tracking methods that monitor how information moves across devices, cloud applications, and AI systems. According to the report, many organizations focus heavily on innovation and productivity gains, while risk management and compliance measures lag behind.
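To make that approach concrete, the Python sketch below shows the general pattern behind this kind of data-flow monitoring: classify a piece of content, record where it came from and where it is headed, and flag movements into unapproved AI services. It is an illustrative simplification, not Cyberhaven's actual tooling; the event fields, the allowlist, and the keyword-based classifier are all hypothetical.

```python
# Illustrative sketch only -- not Cyberhaven's implementation.
from dataclasses import dataclass
from datetime import datetime, timezone

APPROVED_AI_DOMAINS = {"ai.internal.example.com"}  # hypothetical allowlist

@dataclass
class FlowEvent:
    timestamp: datetime
    source: str        # e.g. "crm_export.csv" or "clipboard"
    destination: str   # domain the data is being sent to
    label: str         # classification assigned to the content

def classify(text: str) -> str:
    """Toy classifier: flags content containing obviously sensitive keywords."""
    keywords = ("confidential", "ssn", "api_key", "customer list")
    return "sensitive" if any(k in text.lower() for k in keywords) else "routine"

def record_flow(content: str, source: str, destination: str) -> FlowEvent:
    """Log a data movement and alert when sensitive content leaves approved services."""
    event = FlowEvent(datetime.now(timezone.utc), source, destination, classify(content))
    if event.label == "sensitive" and event.destination not in APPROVED_AI_DOMAINS:
        print(f"ALERT: sensitive data from {source} sent to unapproved service {destination}")
    return event

# Example: an employee pastes CRM data into a public chatbot.
record_flow("Confidential customer list: ...", "crm_export.csv", "chat.example-ai.com")
```

Production tools rely on endpoint instrumentation and content lineage rather than keyword matching, but the shape of the event record, source, destination, and classification, is broadly similar.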
The study highlights the growing popularity of generative AI tools, coding assistants, and custom-built automated agents. In many cases, employees use these systems to improve efficiency, but the lack of centralized oversight increases the likelihood that sensitive data will be shared unintentionally.
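One common safeguard against that kind of unintentional sharing is to scrub recognizable sensitive strings from a prompt before it leaves the corporate network. The short sketch below illustrates the idea; the regular expressions and the redact_prompt helper are hypothetical examples, not a production data-loss-prevention rule set.

```python
# Minimal sketch of prompt scrubbing; patterns are hypothetical simplifications.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace matches with typed placeholders so the prompt remains usable."""
    for name, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{name.upper()}]", prompt)
    return prompt

# Example: strip an email address and an API key before the prompt is sent out.
print(redact_prompt("Summarize this: contact jane@corp.com, key sk-abc123def456ghi789"))
```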
Researchers describe the current environment as fast-moving and difficult to regulate, with new tools emerging more quickly than companies can develop policies to manage them. This gap between adoption and control leaves enterprises struggling to balance innovation with security.
Speaking about the findings, Nishant Yoshi noted that organizations must adapt their cybersecurity strategies to account for widespread AI usage. He emphasized that data protection frameworks should evolve to address the realities of modern AI-driven workflows.