AI Alert

What this site is for

AI Alert tracks AI incidents and vulnerabilities. Each entry is dated, sourced, and verifiable.

By Editorial

AI Alert is a tracker. We catalog AI/ML incidents and vulnerabilities so practitioners and analysts have a single place to check what’s happened, when, and where to read more.

What we track:

  • CVEs in ML libraries and frameworks — PyTorch, TensorFlow, ONNX, vLLM, llama.cpp, Transformers, LangChain, LlamaIndex, and the supply chain around them
  • Model leaks and training-data exposures — accidental and adversarial
  • Jailbreak and prompt-injection disclosures — when a working bypass goes public
  • Vendor breaches affecting AI products — when an AI vendor or AI-adjacent service is compromised
  • Adversarial-use incidents — confirmed real-world exploitation, not hypothetical
  • Regulatory enforcement actions — when a regulator publicly acts against an AI company

Each entry is dated, linked to its primary source (advisory, paper, news report, court filing), and tagged. We don’t speculate. If we can’t link to a verifiable source, it doesn’t go up.

What this site is not: a news aggregator, a take farm, or a vendor advisory. We exist to be the boring, reliable index a security team can actually cite.

The editorial team is pseudonymous. Send tips, with primary sources, to the editor.

The catalog opens shortly.

Subscribe

AI incidents and vulnerabilities — tracked, sourced, and dated — delivered when there's something worth your inbox.

No spam. Unsubscribe anytime.