In the era of generative AI, shadow AI is proliferating and poses a serious risk to enterprises. The term "shadow AI" refers to the unauthorized use of AI tools in professional contexts. According to recent studies, 30% of employees were affected in 2023; today, 68% of French businesses are impacted. This practice creates significant security risks, as employees inadvertently transmit sensitive data to unauthorized tools.
Shadow AI: When Artificial Intelligence Escapes Corporate Control
Summary: The rise of Shadow AI, where employees use artificial intelligence tools without authorization, poses significant risks to corporate data security and compliance.
Key facts
- 30% of employees in 2023 affected by Shadow AI
- 68% of French businesses impacted today
- 71% of French people are aware of generative AI tools like ChatGPT and Bing
- 50% of U.S. companies are updating internal policies to regulate ChatGPT usage
- 72% of French people do not know how to use generative AI tools
Why it matters
Shadow AI can lead to data breaches, non-compliance with GDPR and other regulations, and exposure to cyber threats, posing a substantial risk to corporate data security and legal standing.
Key metrics
- Employees using Shadow AI in France: 68% (according to recent studies)
- U.S. companies updating internal policies for ChatGPT usage: 50% (as of the latest reports)