AI security and a16z: Crawling with badness
Major security issues revealed in audit of a16z reference LLM architecture
You might want to know what 🕷️security pests🕷️ the reference chatbot architecture from a16z is harboring before you use it… and what it means for the state of AI security!
Background and core concepts
Ask Astro is a chatbot that serves as an end-to-end example of a Q&A LLM application, answering questions about Apache Airflow and Astronomer, the company behind the Astro managed Airflow service. To make the responses as factual and accurate as possible, it’s generally best practice to use Retrieval Augmented Generation (RAG), which is exactly what Ask Astro does.
RAG is a handy technique for forcing your AI system to search a knowledge base of your choosing.
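If RAG is new to you, here’s roughly what the pattern looks like in code. This is just a minimal sketch for illustration, not Ask Astro’s actual implementation, and every name in it (`knowledge_base`, `retrieve`, `call_llm`) is hypothetical:

```python
# Minimal sketch of the RAG pattern — not Ask Astro's real code.
# All names and documents here are hypothetical placeholders.

knowledge_base = [
    "Apache Airflow is a platform to programmatically author, schedule and monitor workflows.",
    "Astronomer is the company behind Astro, a managed service for Apache Airflow.",
    "A DAG in Airflow is a collection of tasks with defined dependencies.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question.
    Real systems use embeddings and a vector database instead."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM API call."""
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"

def ask(question: str) -> str:
    # Retrieved documents are pasted straight into the prompt the model sees.
    context = "\n".join(retrieve(question, knowledge_base))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(ask("What is Apache Airflow?"))
```

The point is simply that the model’s answer is grounded in whatever documents the retriever hands back from the knowledge base.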
It’s okay if you’ve never heard of Ask Astro or RAG, since that’s not the point.
The point is that you’ve likely heard of a16z. If you haven’t, you’re probably not in tech, since a16z — abbrev. of Andreessen Horowitz — is a hallowed VC firm with a massive AI portfolio and a sterling reputation for providing some of the most solid startup resources in existence. So if these guys tell startups “here’s how you should build it” then startups are likely to listen.
Ask Astro reveals a lot about best-in-class AI “security” because it’s based on a16z’s blueprints.
Since Ask Astro is an open-source reference implementation of LLM Application Architecture from a16z, you’d think it would be safe and secure, right? Right? Buckle up.

(Don’t) Ask Astro
Since it’s open-source, Ask Astro is fully available on GitHub and provides a lovely opportunity for the hackers among us to kick the stuffing out of it in search of security vulnerabilities. The nasty ones will keep their findings to themselves for later exploitation, while the big-hearted ones will share their findings with the world.
And since Ask Astro is based on a16z’s influential guide that many developers reference to build scalable and robust AI solutions within the tech industry, it’s worth dissecting. If we open…