Combating ‘Shadow AI’ – How to approach unauthorized AI use in the Enterprise
Posted April 3, 2025 by Jamy Sneeden

Shadow IT has been a known problem for decades: the use of technology and IT assets for business purposes without the knowledge or approval of the business's internal IT department. With the rise of AI, a new subset of Shadow IT, called Shadow AI, is proving problematic. This is the unauthorized use of AI tools for business use cases.
Despite good intentions, Shadow AI presents serious security and compliance concerns. Oftentimes, the free versions of these AI tools state that they may use any input data to train future models, effectively making company data public. There are also data leakage concerns with unvetted AI platforms, such as the recently released and massively popular DeepSeek web and mobile apps.
Malware is also already masquerading as AI tools. In one recent example, an information-stealing program posed as a local DeepSeek application but had malware similar to Atomic macOS Stealer (AMOS) embedded in it.[1]
According to Microsoft and LinkedIn's 2024 annual report, 75% of global knowledge workers claimed to use AI at work.[2] How can an enterprise ensure that this AI usage is not leaking company data or returning otherwise harmful information? Cybersecurity professionals must now address this problem. In this post, we will cover the most effective ways to combat Shadow AI.
High-Level Strategy and Policy
The early stages of any new venture should include high-level strategic planning, and AI is no exception. Understanding how the business wants to approach AI lays the foundation for solving related problems like Shadow AI.
Even if there are no immediate plans to utilize a generative AI tool, aligning stakeholders on AI strategy and philosophy creates a clear understanding of how to approach AI-related problems.
Questions to consider for your business include: Where would AI provide value in our employees' workflows? What is our stance on using AI? When our employees ask about AI, what is our response? If we do not approve of AI usage but employees perceive a benefit, how can we effectively enforce no-AI policies? Do we have tools in place to handle this, or would it require additional tools or a new approach? Are there data governance concerns?
Once there is a strategy for approaching AI and its usage, policies can be created. Policies should outline what responsible and ethical use of AI looks like for employees, if any use is permitted at all.
After the policies are created, the business can assess how they will be enforced. Technical controls (such as firewalls blocking unauthorized or untrusted AI websites) can help accomplish this automatically while maintaining observability into what people are accessing.
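As a minimal sketch of what such a control might look like, the Python snippet below mimics the deny-list check a web proxy or DNS filter could apply to outbound requests. The domain list and function names are illustrative assumptions for this post, not a vetted blocklist or any specific product's API.

```python
# Minimal sketch of a deny-list check that a web proxy or DNS filter
# might apply to outbound requests. The domains below are illustrative
# examples only, not a vetted blocklist.
BLOCKED_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "chat.deepseek.com",
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is on the deny-list."""
    labels = hostname.lower().rstrip(".").split(".")
    # Walk up the domain: "api.chat.deepseek.com" also matches "chat.deepseek.com".
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKED_AI_DOMAINS:
            return True
    return False

if __name__ == "__main__":
    for host in ("chat.deepseek.com", "api.chat.deepseek.com", "example.com"):
        action = "DENY" if is_blocked(host) else "ALLOW"
        print(f"{action:5} {host}")  # this log line is the observability piece
```

In practice, most businesses would implement this with an existing secure web gateway or DNS filtering product rather than custom code; the point is that the same control that blocks untrusted AI sites also produces logs showing who is trying to reach them.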
User Awareness and Training
Oftentimes, Shadow IT does not happen with malicious intent. It can be as simple as an employee trying to take initiative without understanding the associated risks. Shadow AI is no different.
We can take steps to alleviate unintentionally risky actions by simply increasing awareness. This can include short memos, an AI clause in employee handbooks, policies that are public and easy to access, and formal training.
Educating users on the dangers of using unauthorized AI and of downloading applications from untrusted sources greatly reduces the chances of honest mistakes.
Providing an In-House Solution
As AI becomes better and more ubiquitous, it only becomes harder to prevent employees from using it. It is easily accessible through web browsers and mobile phones, and it is even embedded in other applications. These tempting productivity boosters can be the start of Shadow AI.
Sometimes the best solution to this problem is simply providing an in-house AI solution for employees to use. This has the benefit of allowing employees to safely and comfortably use company data in their prompts, and even to ask the AI about company data the individual employee already has access to.
This also removes the need for employees to seek out external AI tools, decreasing the company's AI attack surface and making Shadow AI less likely.
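To make the in-house approach concrete, here is a minimal sketch of how an internal tool might send employee prompts to a company-hosted model rather than a public service. It assumes a self-hosted, OpenAI-compatible chat endpoint (a common pattern with model servers such as vLLM or Ollama); the hostname, model name, and example prompt below are placeholders, not real infrastructure.

```python
# Minimal sketch: routing prompts to a company-hosted model instead of a
# public AI service. Assumes an internal, OpenAI-compatible chat endpoint;
# the URL and model name are placeholders. A real deployment would also
# add authentication and logging.
import requests

INTERNAL_ENDPOINT = "https://ai.internal.example.com/v1/chat/completions"

def ask_internal_ai(prompt: str, model: str = "company-llm") -> str:
    """Send a prompt to the in-house model; the data never leaves the network."""
    response = requests.post(
        INTERNAL_ENDPOINT,
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_internal_ai("Summarize our acceptable AI use policy."))
```

Because the endpoint lives inside the corporate network, prompts containing company data never transit a third-party service, and access can be governed with the same identity, logging, and data governance controls as any other internal application.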
Conclusion
Before introducing any revolutionary new technology, it is important to lay the foundation by starting with high-level strategy and policy. Once proper policies are in place, awareness is the next step in empowering employees. And, if the goal is to eliminate outside AI usage, implementing an in-house alternative may be the best option.
Each of these steps has varying degrees of maturity. For high-level strategy, it can be as little as stakeholders agreeing that external AI use for work is prohibited, or as detailed as dozens of pages of precisely written policy.
Analyzing your business's specific pain points and near-term goals should guide which areas you focus on. Gaps in policy, user awareness, the technical controls that enforce them, or the availability of appropriate AI tooling are all signals to drill down in a given domain.
This is an area where Sayers can help. We can provide gap analyses, maturity assessments, and other services to prepare your business for an AI journey. We can also provide an AI readiness checklist, ensure data security when AI is deployed, and provide resources to go from crawling, to walking, to running with AI.
Questions? Contact us at Sayers today to discover extensive technology solutions, services, and expertise to cover all areas of your business.