🤖 AI · By Neel Vora · December 6, 2025 · 2 min read

The Architecture of My AI Lab

AI Engineering · Architecture · RAG · Voice Assistants · Security

This is the technical backbone of my AI Lab: how I set up RAG search, a voice assistant, and prompt injection testing in a Next.js environment with Edge Functions and Supabase.

This post walks through how I built that architecture and where it fits in the rest of my work.

This architecture builds on the same patterns I developed for museum kiosks and government platforms, where stability and observability were just as important as features.

My AI Lab contains three major projects:

  1. RAG Knowledge Search
  2. Voice Assistant
  3. Prompt Injection and Safety Lab

They all share a unified architecture.

The foundation

The AI Lab uses:

  • Next.js API routes
  • Supabase for persistence
  • OpenAI for model calls
  • pgvector for vector search
  • Shared utilities and error handling

Shared design principles

  • Streaming responses
  • Stateful UI where needed
  • Server side input validation
  • Structured metadata returned with every model call
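To make these principles concrete, here is a minimal sketch of a shared model-call wrapper. The names (`callWithMeta`, `ModelResult`) and the metadata fields are assumptions for illustration, not the Lab's actual utilities; the point is that input is validated on the server and every call returns structured metadata the UI can render uniformly.

```typescript
// Sketch of a shared wrapper used by all three projects (names assumed).
// It validates input server-side, times the model call, and attaches
// structured metadata to every response.
interface ModelResult<T> {
  data: T;
  meta: { model: string; latencyMs: number };
}

async function callWithMeta<T>(
  model: string,
  input: string,
  fn: (input: string) => Promise<T>, // the actual model call, injected for testability
): Promise<ModelResult<T>> {
  // Server-side input validation: reject empty or whitespace-only prompts early.
  if (!input.trim()) throw new Error("empty input");
  const start = Date.now();
  const data = await fn(input);
  return { data, meta: { model, latencyMs: Date.now() - start } };
}
```

Because the model call is injected as a function, the same wrapper works for chat completions, embeddings, and TTS requests alike.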

RAG architecture

The RAG system uses:

  • Document ingestion panel
  • Chunking with overlap
  • Embeddings stored in pgvector
  • SQL similarity search
  • Streaming final answers
  • Citation panels

Voice assistant architecture

The voice assistant uses:

  • Browser SpeechRecognition
  • Whisper fallback endpoint
  • OpenAI TTS for output
  • Tool calling for weather, timezone, search, and math
  • Session memory
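The tool-calling layer can be sketched as a registry that maps tool names (as declared to the model) to local handlers. Only a restricted `math` tool is implemented below; the other entries stand in for real API calls, and all names are assumptions rather than the assistant's actual code:

```typescript
type ToolHandler = (args: Record<string, string>) => string;

// Registry of tools. Keys must match the function names declared to the model.
const tools: Record<string, ToolHandler> = {
  math: ({ expression }) => {
    // Restrict input to plain arithmetic before evaluating it.
    if (!/^[\d+\-*/().\s]+$/.test(expression)) throw new Error("unsupported expression");
    return String(Function(`"use strict"; return (${expression});`)());
  },
  timezone: () => new Date().toISOString(), // stub; a real handler would use the user's locale
};

// Dispatch a model-requested tool call to the matching handler.
function runTool(name: string, args: Record<string, string>): string {
  const handler = tools[name];
  if (!handler) throw new Error(`unknown tool: ${name}`);
  return handler(args);
}
```

The assistant loop then feeds `runTool`'s result back to the model as a tool message before streaming the spoken reply.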

Prompt injection lab architecture

The security lab uses:

  • Level configuration objects
  • Backend guards per level
  • Output validators
  • Custom UI per level
  • LocalStorage progress

Why this matters

This system shows how to design multiple interoperable AI products inside one codebase and run them in a production environment.

Keep exploring

Thanks for reading! If you found this useful, check out my other posts or explore the live demos in my AI Lab.
