Summary
- Surveys reveal that while 76% of IT leaders predict a transformative role for Generative AI in their companies, with 65% expecting benefits within a year, there’s a growing trend of employees using GenAI tools without official authorization.
- Samsung banned generative AI tools like ChatGPT due to leaks of confidential information, highlighting ongoing concerns about AI security and privacy risks despite the evolving landscape.
- Shadow AI poses greater risks than Shadow IT by exposing every level of an organization to potential data breaches, AI-generated inaccuracies, unauthorized access issues, and regulatory non-compliance.
- Banning generative AI in the workplace can inadvertently increase risks by driving its use underground, bypassing security measures and highlighting the need for more nuanced management strategies.
What does research from Dell, Salesforce, and Forbes reveal about Shadow AI?
- According to a recent Dell survey, 76% of IT leaders believe GenAI will play a crucial and potentially transformative role in their companies.
- The same survey reveals 65% of IT leaders expect to see tangible benefits from GenAI within the next year.
- A global Salesforce survey of over 14,000 employees in 14 nations found a significant number of generative AI (GenAI) users in the workplace are using these tools without official training, guidance, or authorization from their employers.
- Forbes reports an increasing trend in the unauthorized use of generative AI within companies.
The rapid adoption of GenAI poses challenges, especially when employees use GenAI tools not officially sanctioned by the company. The trend of “shadow AI” usage heightens organizational risks, raising concerns around data security, regulatory compliance, and privacy.
Samsung’s Data Alarm: Shadow AI Emerges from the Shadows
- Samsung banned the use of generative AI tools like ChatGPT after discovering that employees had leaked confidential information to the chatbot.
- The first incident involved an engineer who pasted buggy source code from a semiconductor database into ChatGPT and asked the chatbot to fix the errors.
- In the second, an employee wanting to optimize code for identifying defects in certain Samsung equipment pasted that code into ChatGPT.
- The third leak occurred when an employee asked ChatGPT to generate the minutes of an internal Samsung meeting.
Gartner, as early as 2019, pinpointed security as a critical strategic trend in AI. Although the AI landscape has evolved since then, privacy risks remain a paramount concern, especially given the fast-paced changes in the AI field.
What is the impact and challenge of Shadow AI on your organization?
The impact of Shadow AI is expected to be greater than Shadow IT, as highlighted by cio.com. Unlike Shadow IT, where risks were mostly limited to developers, generative AI exposes every user in an organization, from admins to executives, to potential errors.
From discussions with enterprise clients, several emerging challenges associated with shadow AI have been identified:
- Data Protection: This includes the possibility of users inadvertently sharing confidential data with GenAI, leading to unintentional disclosure of sensitive information.
- AI “Hallucinations”: This refers to instances where the AI chatbot generates inaccurate or misleading information, which can lead to incorrect decisions or misinterpretations.
- Access Management: There’s a risk that the GenAI service provider gains unintended access to data employees submit, for example through human review of customer inputs to the AI system.
- Non-compliance with Regulations: The use of unapproved GenAI tools can lead to breaches in cybersecurity and data privacy standards, failing to meet legal and regulatory compliance.
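The data-protection risk above can be reduced with a lightweight guardrail that screens prompts for sensitive patterns before they leave the organization. The sketch below is purely illustrative and not any vendor's implementation; the `redact_prompt` helper and the example patterns are assumptions, and a real deployment would rely on a proper DLP engine with organization-specific rules.

```python
import re

# Hypothetical patterns for sensitive content; a production guardrail
# would use a dedicated DLP engine and rules tuned to the organization.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.internal\.example\.com\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders before the prompt
    is forwarded to an external GenAI service; return the redacted
    text plus the labels of everything that was caught."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings
```

A guardrail like this could log the `findings` for the security team, giving visibility into Shadow AI usage instead of simply blocking it.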
Will banning GenAI help organizations?
- Banning GenAI in workplaces might reduce visible risks but can drive its use underground, making it harder to control and potentially increasing overall risk.
- Covert use of GenAI bypasses security controls and oversight, escalating risks rather than mitigating them.
- Outright bans can therefore be counterproductive; more nuanced strategies are needed to manage GenAI use effectively.
Lumeus.ai offers Zero Trust Security for AI, enabling IT Security to efficiently manage Shadow AI, control AI access, and enforce AI guardrails. It integrates seamlessly with existing security infrastructure, supporting identity platforms like Okta, Google, and Active Directory, and network security platforms from Palo Alto, ZScaler, and Fortinet, enabling a smooth deployment.
If you’re interested in a deeper discussion or even in contributing to refining this perspective, feel free to reach out to us.