Security · Feb 6, 2026 · 5 min read

The 'Shadow AI' Crisis: Why Enterprises Are Pivoting to Zero Retention

Employees are pasting sensitive data into ChatGPT. Blocking AI isn't the solution—securing the pipeline is.

The rise of Large Language Models (LLMs) has created a paradox for modern enterprises. On one hand, AI tools deliver extraordinary productivity gains — drafting emails in seconds, summarizing complex documents, translating across languages. On the other, every interaction with a public AI service is a potential data leak. The uncomfortable truth? Your employees are already using AI. The question is whether you know about it.

The Rise of Shadow AI

"Shadow AI" refers to the unsanctioned use of AI tools by employees — without IT approval, without security review, and without any data governance in place. A developer pastes proprietary code into ChatGPT to debug it. A sales manager feeds a client's financial data into an AI assistant to draft a proposal. A lawyer uploads a confidential contract to get a quick summary. None of them intend to cause harm. All of them just created a data exposure.

According to recent industry research, over 70% of knowledge workers have used generative AI tools at work, and roughly half of them have done so without their employer's knowledge or approval. The data they share includes source code, internal strategy documents, customer information, financial projections, and legal correspondence.

The scale of the problem is staggering. In a typical enterprise with 1,000 employees, hundreds of sensitive documents are being processed by third-party AI services every single week — services that may train on user inputs, retain conversation logs, or store data in jurisdictions with different privacy laws.

Why Blocking AI Doesn't Work

Faced with the Shadow AI problem, organizations typically respond in one of two ways:

The Ban Hammer. Block all AI-related URLs at the firewall level. Prohibit the use of ChatGPT, Claude, Gemini, and similar tools. Issue company-wide memos threatening disciplinary action. This approach fails for a predictable reason: employees find workarounds. They use personal devices, mobile data, or VPN services. The productivity gains from AI are too significant for people to simply abandon them because IT said so. Banning AI is like banning the internet in 2005 — it signals that your organization doesn't understand the tools shaping the future of work.

The Policy-Only Approach. Draft an acceptable use policy, run a training session, and hope for the best. This is marginally better than a ban, but it relies entirely on human compliance. Policies don't prevent a distracted employee from accidentally pasting a client's personal data into a prompt. They don't stop someone from uploading a confidential spreadsheet when they're under deadline pressure. Policies describe intent. Architecture enforces it.

The Secure Wrapper: A Third Path

The most effective approach is neither banning AI nor relying on policy alone. It's implementing tools that act as a secure wrapper — a controlled pipeline between your employees and the AI models they need. This means:

Controlled access. AI is available through an approved channel that lives inside the tools employees already use, eliminating the temptation to seek unauthorized alternatives.

Data governance by design. The pipeline enforces what data can and cannot reach the AI model, applying redaction, filtering, or classification rules before any prompt is sent.

Audit trail. Every interaction is logged — not the content itself, but the metadata: who used the tool, when, what type of action was performed. This satisfies compliance requirements without creating a surveillance culture.

Zero retention architecture. The most critical component. No email content, no prompts, no AI responses are stored after the interaction is complete. The data flows through, generates a result, and disappears.
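Taken together, these four properties describe a single request pipeline: redact, call the model, log metadata only, and let the content fall out of scope. The sketch below is illustrative only, not Kerna's implementation; the redaction rules, `process` function, and stubbed model client are all hypothetical stand-ins.

```python
import re
import time
import uuid
from typing import Callable

# Illustrative governance rules. A real deployment would use a proper
# classification engine, not two regexes.
REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def redact(text: str) -> str:
    """Apply governance rules before any prompt leaves the pipeline."""
    for pattern, placeholder in REDACTION_RULES:
        text = pattern.sub(placeholder, text)
    return text

def process(user_id: str, action: str, content: str,
            call_model: Callable[[str], str],
            audit_log: list) -> str:
    """Run one AI action: redact, call the model, log metadata only.

    Neither `content` nor the model's reply is written anywhere durable;
    both exist only for the lifetime of this call.
    """
    prompt = redact(content)
    result = call_model(prompt)
    # Audit trail: who, when, what kind of action -- never the content.
    audit_log.append({
        "event_id": str(uuid.uuid4()),
        "user": user_id,
        "action": action,
        "timestamp": time.time(),
    })
    return result
```

The key design point is what the audit entry omits: it records that a summarization happened, but a breach of the log would reveal no prompts, no email bodies, and no model outputs.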

The Case for Zero Content Retention

This is where Kerna's architecture fundamentally differs from most AI tools on the market. We operate on a Zero Content Retention principle. Here's what that means in practice:

No email content is stored. When Kerna processes an email — translating it, summarizing it, generating a reply — the content passes through our pipeline and is discarded immediately after the response is delivered. We don't log it. We don't cache it. We don't use it for training.

No prompts are retained. The instructions sent to the AI model are constructed in real-time and destroyed after execution. There is no database of "what employees asked the AI."

No AI outputs are stored. The translations, summaries, and draft replies generated by the model are delivered to the user's Gmail and never persisted on our servers.

Stripe handles all financial data. We never see, transmit, or store credit card numbers, billing addresses, or security codes. Stripe processes everything. We retain only a tokenized reference to your subscription status.

What we do store is minimal and operational: your email address (for authentication), your subscription plan, your language preferences, and aggregate usage counters (total tokens consumed). That's it. No content. No conversations. No training data.
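The stored record is small enough to show in full. The sketch below is a hypothetical illustration of such a per-user record, not Kerna's actual schema; every field name is assumed for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of the entire per-user record a zero-retention
# service might persist: identity, plan, preferences, aggregate counters.
# Note what is absent: no email bodies, no prompts, no model outputs.
@dataclass
class UserRecord:
    email: str                      # authentication identity
    plan: str                       # e.g. "free" or "pro"
    target_language: str            # translation preference
    tokens_used: int = 0            # aggregate usage counter
    stripe_customer_ref: str = ""   # tokenized reference, no card data

    def record_usage(self, tokens: int) -> None:
        """Only the count is accumulated; content is never touched."""
        self.tokens_used += tokens
```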

The GDPR Dimension

For European enterprises, Shadow AI isn't just a security concern — it's a regulatory one. Under GDPR, every transfer of personal data to a third-party processor must be documented, justified, and secured. When an employee pastes a customer's email into ChatGPT, the organization has potentially:

  • Transferred personal data to a processor without a Data Processing Agreement
  • Sent data to servers outside the EU without adequate safeguards
  • Failed to maintain records of processing activities
  • Violated the principle of data minimization

The fines for GDPR violations can reach €20 million or 4% of annual global turnover, whichever is higher. Shadow AI turns every employee into an accidental compliance risk.

Kerna addresses this directly. Our infrastructure is hosted entirely in the EU (Google Cloud, europe-west1 region). Data processing occurs within European jurisdiction. And because we retain no content, there is minimal personal data exposure even in a worst-case scenario.

What a Secure AI Email Workflow Looks Like

Consider the difference between an unsecured and a secured workflow:

Without a secure wrapper: An employee receives an email in German. They copy the entire email — including the sender's personal details, any attachments, and confidential business information — and paste it into ChatGPT. They ask for a translation. ChatGPT processes it, potentially retains it, and may use it to improve future models. The employee then copies the translation back into Gmail. At no point was this interaction logged, approved, or secured.

With Kerna: The employee clicks "Analyze" inside Gmail's sidebar. Kerna reads the email content directly through Google's authorized API, sends it to OpenAI for translation with zero retention instructions, receives the result, displays it in the sidebar, and discards all content from memory. The entire process happens inside Gmail. Nothing is copied to an external browser tab. Nothing is stored. The interaction is counted in usage analytics (one translation performed) but the content itself vanishes.
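The secured flow can be sketched end to end. The function below is a simplified stand-in, assuming hypothetical helpers (`fetch_email_via_api`, `translate`) injected as parameters; it is not Kerna's real API.

```python
def analyze_email(message_id: str,
                  fetch_email_via_api,
                  translate,
                  usage_counter: dict) -> str:
    """Secured workflow: fetch via the authorized API, translate,
    count the action, and let all content fall out of scope.

    `content` and `translation` are local variables: once this call
    returns, nothing referencing the email body remains server-side.
    """
    content = fetch_email_via_api(message_id)   # never copied to another app
    translation = translate(content)            # zero-retention model call
    # Usage analytics record *that* a translation happened, not its text.
    usage_counter["translations"] = usage_counter.get("translations", 0) + 1
    return translation                          # displayed in the sidebar
```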

The productivity gain is identical. The security posture is fundamentally different.

Building a Culture of Secure AI Adoption

Technology alone doesn't solve the Shadow AI problem. Organizations need a cultural shift that treats AI as infrastructure rather than a threat. This means:

Make the secure option the easy option. If the approved AI tool requires five clicks and the unauthorized one requires two, employees will choose the unauthorized one every time. Kerna is built directly into Gmail's sidebar — there's nothing to install, no tab to switch to, no credentials to remember.

Be transparent about what's monitored. Employees should know that usage metrics are tracked but content is not. Trust is built through transparency, not surveillance.

Invest in training. Not "don't use AI" training, but "here's how to use AI effectively and safely" training. Show employees the approved tools, demonstrate the workflow, and explain why the security architecture matters.

Lead from the top. When leadership visibly uses the approved AI tools, adoption follows naturally. Shadow AI thrives in organizations where leadership ignores or bans AI while middle management quietly depends on it.

The Bottom Line

The Shadow AI crisis isn't going away. As AI models become more capable, the temptation to use them — regardless of corporate policy — will only grow. Organizations that respond with bans will fall behind. Organizations that respond with policy alone will remain exposed.

The enterprises that will thrive are the ones that embrace AI adoption while engineering security into the pipeline itself. Zero content retention isn't a marketing claim — it's an architectural decision that eliminates an entire category of risk.

Your employees are going to use AI. Give them a way to do it safely.

*Kerna is a Gmail add-on that brings AI-powered email assistance directly into your inbox. Built with Zero Content Retention architecture, EU-hosted infrastructure, and GDPR compliance by design.*
