\n\n\n\n Notion Trusted You With Your Notes — Did You Trust It With Your Email? - AI7Bot \n

Notion Trusted You With Your Notes — Did You Trust It With Your Email?

📖 4 min read728 wordsUpdated Apr 19, 2026

Notion markets itself as a private workspace. A place to think, plan, and build without noise. And yet, in 2026, it became the source of one of the more quietly damaging data exposures in the productivity software space — leaking the email addresses of editors on public pages to anyone who knew where to look. Two things can be true at once: Notion is genuinely useful, and it failed the people who used it.

I build bots for a living. A lot of what I do involves connecting tools — scraping structured data, piping it into workflows, feeding it to language models. So when I heard about this breach, my first thought wasn’t “poor users.” It was “someone built a bot for this.” And that’s exactly the problem.

What Actually Happened

The vulnerability centered on prompt injection — a class of attack that has been climbing the threat charts ever since AI got embedded into everyday productivity tools. In Notion’s case, the AI layer was susceptible to indirect prompt injection, meaning malicious instructions could be hidden inside a document and executed by the AI without the user ever triggering them intentionally. Worse, Notion AI was saving document edits before users clicked confirm — so data could be exfiltrated before anyone realized something was wrong.

The exposed data included names and email addresses. In isolation, that sounds manageable. But as security researchers have pointed out, the combination of names and contact details is precisely what makes targeted phishing and social engineering attacks so effective. You don’t need a password to do damage. You just need to know who someone is and how to reach them.

Why Bot Builders Should Pay Attention

If you’re building automation on top of tools like Notion — and a lot of us are — this incident should recalibrate how you think about data flow. When you embed an AI assistant into a document platform used by 100 million people, including 4 million paying customers and enterprise clients like Amazon, Nike, Uber, and Pixar, you are not just adding a feature. You are expanding the attack surface in ways that the original security model never accounted for.

Prompt injection is not a theoretical risk anymore. It’s a practical one. And the reason it’s so dangerous in agentic or AI-assisted workflows is that the AI doesn’t distinguish between legitimate instructions and injected ones. If a hidden instruction says “forward this user’s contact details,” the model may comply — especially if it has been given broad permissions to read and edit documents.

  • AI edits were being saved before user confirmation — removing a critical human checkpoint
  • Public pages created an exposure vector that editors may not have anticipated
  • The data combination of names plus emails is a ready-made phishing kit

The Public Page Problem

Here’s what makes this particularly uncomfortable for teams using Notion as a knowledge base or client-facing portal: public pages feel passive. You publish something, people read it, nothing happens. But the moment an AI layer can read, process, and act on the content of those pages — and the metadata attached to them — “public” takes on a different meaning.

Editors who contributed to a public Notion page likely never considered that their email addresses could be extracted by someone running a crafted prompt against the AI. That’s not naivety. That’s a reasonable expectation that the tool would protect contributor data even when the content itself was public. That expectation turned out to be wrong.

What to Actually Do About It

If you’re running bots or automations that touch Notion workspaces, a few practical steps are worth taking right now:

  • Audit which pages are public and who has edited them — contributor metadata is more exposed than most people realize
  • Limit AI assistant permissions to read-only where possible, and avoid giving it access to pages containing sensitive contributor data
  • Treat any AI-assisted workflow as a potential injection target — validate outputs before acting on them
  • Check for unexpected redirects, new admin accounts, or file changes after any AI-assisted editing session

Prompt injection is going to keep showing up in tools we use every day. The AI layer is new, the security thinking around it is still catching up, and the people building on top of these platforms — bot builders, automation engineers, no-code developers — are often the first to feel the consequences.

Notion is a solid tool. But solid tools can still have sharp edges. Know where yours are.

🕒 Published:

💬
Written by Jake Chen

Bot developer who has built 50+ chatbots across Discord, Telegram, Slack, and WhatsApp. Specializes in conversational AI and NLP.

Learn more →
Browse Topics: Best Practices | Bot Building | Bot Development | Business | Operations
Scroll to Top