< Steven Berdak />
© 2026 Steven Berdak • Built with Next.js • Tailwind • shadcn/ui
llm • Live

AvaDocs

AvaDocs is a RAG-powered document analysis assistant. Upload PDFs, and it parses and chunks them hierarchically, embeds the content for semantic search, and answers your questions with responses grounded exclusively in the source material. Every answer includes clickable citations that trace back to the exact passage.

Try It Now • View Source
AvaDocs preview

What you can do

Upload & ingest documents

Upload PDFs that get chunked hierarchically by section and embedded for semantic search.
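The chunking step above can be sketched in a few lines. This is a minimal illustration, not the production parser: it assumes sections are marked by heading lines, and `Chunk`, `chunk_hierarchically`, and `max_chars` are names invented for this example.

```python
# Sketch of hierarchical chunking: group lines under their nearest
# heading, then split long sections into smaller child chunks that
# are individually embedded. Heading detection here is simplified
# to "#"-prefixed lines; a real PDF parser uses document structure.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    section: str                 # heading this chunk belongs to
    children: list = field(default_factory=list)

def chunk_hierarchically(lines, max_chars=200):
    sections = []
    current = Chunk(text="", section="(preamble)")
    for line in lines:
        if line.startswith("#"):
            sections.append(current)
            current = Chunk(text="", section=line.lstrip("# ").strip())
        else:
            current.text += line + "\n"
    sections.append(current)
    # Split oversized sections into child chunks of <= max_chars,
    # keeping the parent section title for citation display.
    for sec in sections:
        text = sec.text.strip()
        sec.children = [
            Chunk(text=text[i:i + max_chars], section=sec.section)
            for i in range(0, len(text), max_chars)
        ]
    return sections
```

Keeping the parent/child relation around is what later lets a citation on a small chunk link back to its full section.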

Ask questions about your documents

Ask nuanced questions and receive answers grounded only in the retrieved source text.
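One common way to keep answers grounded in retrieved text is to hand the model only numbered excerpts and require citations. The sketch below shows that prompt assembly under assumed names (`build_grounded_prompt` and the chunk dict shape are illustrative, not AvaDocs' actual code):

```python
# Build a prompt that restricts the model to the retrieved excerpts
# and asks it to cite them by number, so each claim can be traced
# back to a source chunk.
def build_grounded_prompt(question, chunks):
    excerpts = "\n".join(
        f"[{i + 1}] ({c['section']}) {c['text']}"
        for i, c in enumerate(chunks)
    )
    return (
        "Answer using ONLY the excerpts below. Cite each claim with "
        "its excerpt number, e.g. [2]. If the excerpts do not contain "
        "the answer, say so.\n\n"
        f"Excerpts:\n{excerpts}\n\nQuestion: {question}\nAnswer:"
    )
```

The excerpt numbers in the model's output are what a UI can turn into clickable citations.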

Trace every claim to its source

Click any citation to jump to the exact section in the original document.

How it works

PDF → S3 storage → Celery async ingestion → hierarchical chunking → pgvector embeddings. Retrieval uses hybrid vector + keyword search with reranking and section expansion. A cloud-hosted LLM (AWS) generates streamed answers using a two-step extract-then-reason approach.
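The hybrid retrieval step can be illustrated with reciprocal rank fusion (RRF), a standard way to merge a vector ranking with a keyword ranking. This is a toy in-memory version under assumed names; the real system queries pgvector and a keyword index, and RRF is one plausible fusion choice, not necessarily the one AvaDocs uses.

```python
# Toy hybrid search: rank chunks by vector similarity and by keyword
# overlap, then merge the two rankings with reciprocal rank fusion.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query_vec, query_terms, chunks, k=60):
    # Ranking 1: vector similarity (stands in for a pgvector query).
    by_vec = sorted(chunks, key=lambda c: -cosine(query_vec, c["vec"]))
    # Ranking 2: keyword overlap (stands in for full-text search).
    by_kw = sorted(
        chunks,
        key=lambda c: -len(query_terms & set(c["text"].lower().split())),
    )
    # RRF: each list contributes 1 / (k + rank) to a chunk's score.
    scores = {}
    for ranking in (by_vec, by_kw):
        for rank, c in enumerate(ranking):
            scores[c["id"]] = scores.get(c["id"], 0.0) + 1.0 / (k + rank + 1)
    return sorted(chunks, key=lambda c: -scores[c["id"]])
```

A reranker and section expansion would then run over this fused list before the excerpts reach the LLM.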

Django • pgvector • Celery • RAG • Hierarchical Chunking • SSE Streaming • AWS

Known limitations

  • Demo is limited to 3 uploaded documents and 50 questions per session.
  • Processing large PDFs (100+ pages) may take up to 60 seconds.
  • Answers are generated by a cloud-hosted LLM — quality varies by question complexity.