The Architecture of My AI Lab
This is the technical backbone of my AI Lab: how I set up RAG search, voice assistant, and prompt injection testing in a Next.js environment with Edge Functions and Supabase.
This post walks through how I built architecture, and where it fits in the rest of my work.
This architecture builds on the same patterns I developed for museum kiosks and government platforms, where stability and observability were just as important as features.
My AI Lab contains three major projects:
- RAG Knowledge Search
- Voice Assistant
- Prompt Injection and Safety Lab
They all share a unified architecture.
The foundation
The AI Lab uses:
- Next.js API routes
- Supabase for persistence
- OpenAI for model calls
- pgvector for vector search
- Shared utilities and error handling
Shared design principles
- Streaming responses
- Stateful UI where needed
- Server side input validation
- Structured metadata returned with every model call
RAG architecture
The RAG system uses:
- Document ingestion panel
- Chunking with overlap
- Embeddings stored in pgvector
- SQL similarity search
- Streaming final answers
- Citation panels
Voice assistant architecture
The voice assistant uses:
- Browser SpeechRecognition
- Whisper fallback endpoint
- OpenAI TTS for output
- Tool calling for weather, timezone, search, and math
- Session memory
Prompt injection lab architecture
The security lab uses:
- Level configuration objects
- Backend guards per level
- Output validators
- Custom UI per level
- LocalStorage progress
Why this matters
This system combines design multiple interoperable AI products inside one codebase in a production environment.
Keep exploring
From here you can:
-
See how I applied similar patterns on the American Battlefield Trust map projects
Thanks for reading! If you found this useful, check out my other posts or explore the live demos in my AI Lab.