Since the mid‑20th century, with the arrival of the first commercial computers—and even more rapidly throughout the 1980s, 1990s, and 2000s—the digital transformation introduced technology at such a fast pace that many organizations were unable to effectively manage the growing number of devices. Assets such as personal computers, office software, email, and mobile devices kept appearing, and IT departments were not always able to maintain full control over all of them.
Amid this chaos, a ghost emerged: Shadow IT, or technology in the shadows. INCIBE [1] defines it as follows:
“Shadow IT refers to the use of systems, devices, software, applications, and IT services that are employed within a company without the knowledge or approval of the IT department.”
Shadow IT can be something as simple as an employee sending a document to their personal email account or storing data in a personal cloud service, or something as complex as entire teams operating analytics tools or business‑critical systems outside the organization’s radar. All of this can pose a serious threat to the organization’s survival. And although Shadow IT often arises from necessity (greater agility, ease of use, lack of alternatives, cost reasons), from lack of awareness (of technologies, of implications), or even from fully intentional actions (for illegitimate reasons or to bypass established controls), it always represents a real and growing risk to information security and governance.
According to a Gartner publication [2], in 2022, 41% of employees had acquired, modified, or created elements of Shadow IT, and projections indicated this figure would rise to 75% by 2027. This trend is pushing CISOs—traditionally responsible for designing controls over assets—to adopt a role increasingly focused on enabling risk‑based decision‑making.
A new ghost has now appeared: Shadow AI. IBM [6] defines it as “the unauthorized use of any artificial intelligence (AI) tool or application by employees or end users without the formal approval or oversight of the information technology (IT) department.” Shadow AI can take many forms: a public chatbot queried with corporate data, AI‑powered browser plugins and extensions, or automations quietly built on unapproved models.
The risks of Shadow AI not only mirror those of Shadow IT, but—as a new subcategory—they also expand them:
◾ Unintentional exposure of sensitive data — When corporate data is entered into cloud‑based language models or AI systems, it may be stored or used to train other models, potentially making it accessible to competitors or even the general public. A sketch of an outbound guard against this kind of leak appears after this list.
◾ Lack of traceability — Many AI systems are not sufficiently transparent and generate content (summaries, decisions, responses) without clearly explaining how or why they arrived at that output.
◾ Dependence on biased or incorrect responses — Without control over the models being used, decisions may be based on questionable information. AI systems may even assume facts that cannot be verified, are inaccurate, or simply do not exist—what is known as hallucinations [7].
◾ Violation of regulations — Especially in regulated countries and sectors, the use of unapproved AI may breach rules such as the GDPR, the EU Artificial Intelligence Act (AI Act, Regulation (EU) 2024/1689) [8], or internal company policies and data governance frameworks.
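To make the exposure risk more tangible, here is a minimal Python sketch of the kind of outbound guard a DLP control might apply before a prompt ever leaves the corporate network. The regex patterns, the placeholder format, and the `redact` helper are illustrative assumptions, not the behavior of any specific product:

```python
import re

# Illustrative detectors only; a real DLP engine uses far richer methods
# (entity recognition, document fingerprinting, exact-data matching).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}(?:\s?[A-Z0-9]{4}){3,7}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders and report what fired."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, findings

prompt = "Summarize the complaint from maria.lopez@example.com about invoice 4411."
clean_prompt, findings = redact(prompt)
if findings:
    print(f"Redacted before leaving the network: {findings}")
print(clean_prompt)
```

The interesting part is not the regular expressions but the control point: inspection happens before the data reaches the external model, which is exactly where Shadow AI usage normally escapes oversight.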
Shadow AI is a formidable threat because it is not visible at first glance. It is not installed like traditional software, nor does it occupy a physical server. Shadow AI lives in browsers, plugins, automations, and extensions, or hides behind business logic, completely invisible to the organization… and that makes it even harder to detect.
The answer usually does not lie in trying to block AI entirely—users will always find ways to use it—but in establishing the right frameworks and fostering a corporate culture capable of governing it responsibly.
Some key measures include:
◾ Establishing an AI usage policy — This should define which tools are approved, in what contexts their use is acceptable, which uses are prohibited, and the consequences of improper use.
◾ Creating a registration procedure for AI models and tools used within the organization — This procedure should include mechanisms for updates, reviews, and periodic audits; a minimal sketch of such a registry appears after this list. It is hard enough to protect what you know you have, and impossible to protect what you don’t even know you have.
◾ Training and involving staff in understanding the risks associated with unsupervised AI use — This often means making each department responsible for its own tools and breaking the assumption that the IT department alone is responsible for all technology‑related risks.
◾ Implementing active discovery tools to detect unauthorized use — Such as secure browsers, DLP solutions with traffic visibility, web proxies, firewalls, and similar technologies; a simple log‑scanning sketch also appears after this list. This must go hand in hand with proper user‑permission configuration to prevent the use of unauthorized software.
◾ Designating AI governance roles — Such as Chief AI Officers, AI committees, or specific roles typically placed within compliance areas or under the CISO.
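To make the registration procedure concrete, the following Python sketch shows what a minimal AI tool inventory might record. The field names, the example entries, and the semi‑annual review interval are assumptions chosen for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AITool:
    name: str
    vendor: str
    owner_department: str     # who answers for the tool's use
    data_classification: str  # e.g. "public", "internal", "confidential"
    approved: bool
    last_review: date

REVIEW_INTERVAL_DAYS = 180  # assumed semi-annual audit cycle

registry = [
    AITool("ChatGPT", "OpenAI", "Marketing", "internal", True, date(2025, 1, 10)),
    AITool("TranslateBot", "Acme AI", "Legal", "confidential", False, date(2024, 6, 2)),
]

def audit(tools: list[AITool], today: date) -> None:
    """Flag tools that are unapproved or overdue for review."""
    for t in tools:
        overdue = (today - t.last_review).days > REVIEW_INTERVAL_DAYS
        if not t.approved or overdue:
            print(f"FLAG: {t.name} (owner: {t.owner_department}, "
                  f"approved={t.approved}, last review {t.last_review})")

audit(registry, date(2025, 7, 1))
```

Even a shared spreadsheet with these columns is a step up from having no inventory; what matters is that ownership, approval status, and review dates are recorded somewhere auditable.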
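And for active discovery, a hedged sketch of the simplest possible approach: scanning an exported proxy or firewall log for connections to known AI service domains. The CSV column names and the domain list are assumptions; a real deployment would use the vendor’s actual log schema and a categorization feed maintained by the security team:

```python
import csv

# Assumed hand-maintained list of AI service hosts; in practice this
# would come from a regularly updated categorization feed.
KNOWN_AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "claude.ai", "gemini.google.com", "huggingface.co",
}

def find_unsanctioned_ai(log_path: str, approved: set[str]) -> dict[str, set[str]]:
    """Map each user to the unapproved AI hosts they contacted."""
    hits: dict[str, set[str]] = {}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes "user" and "host" columns
            host = row["host"].lower()
            if host in KNOWN_AI_DOMAINS and host not in approved:
                hits.setdefault(row["user"], set()).add(host)
    return hits

# Example: only chat.openai.com has been formally approved.
for user, hosts in sorted(find_unsanctioned_ai("proxy.csv", {"chat.openai.com"}).items()):
    print(f"{user}: {sorted(hosts)}")
```

The output of a scan like this feeds straight back into the registry above: every flagged host is either a candidate for approval or a policy conversation waiting to happen.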
The shift from Shadow IT to Shadow AI is not just a technological evolution; it is an exponential leap in complexity and impact. If Shadow IT involved uncontrolled devices or applications, Shadow AI involves algorithms that decide, predict, and generate—often without us fully understanding how they do it.
An invisible threat can become more dangerous than an external one, which is why being able to govern these shadows is not merely a technical task but a strategic necessity for any organization that wants to protect its data, its reputation, and its future.
References:
🔗 [1] https://www.incibe.es/empresas/blog/shadow-it-lo-que-hay-en-la-sombra-de-tu-organizacion
🔗 [2] https://www.gartner.com/en/newsroom/press-releases/2023-03-28-gartner-unveils-top-8-cybersecurity-predictions-for-2023-2024
🔗 [3] https://www.incibe.es/incibe-cert/blog/it-shadow-al-descubierto-riesgos-y-buenas-practicas
🔗 [4] https://www.bbva.com/es/innovacion/shadow-it-un-peligro-en-la-sombra/
🔗 [5] https://www.bbva.com/es/innovacion/shadow-it-el-peligro-de-que-los-empleados-usen-la-tecnologia-por-su-cuenta/
🔗 [6] https://www.ibm.com/think/topics/shadow-ai
🔗 [7] https://www.ibm.com/es-es/think/topics/ai-hallucinations
🔗 [8] https://eur-lex.europa.eu/legal-content/ES/TXT/PDF/?uri=OJ:L_202401689