# llm-rag

Here are 21 public repositories matching this topic...

FileChat-RAG is a simple Retrieval-Augmented Generation (RAG) system that lets users ask questions about the contents of various file formats. It extracts text from PDFs, JSON, text files (.txt, .docx, .odt, .md), and code files, then enables interactive conversations with an LLM powered by Ollama.

  • Updated May 25, 2025
  • Python

Chat With Documents is a Streamlit application designed to facilitate interactive, context-aware conversations with large language models (LLMs) by leveraging Retrieval-Augmented Generation (RAG). Users can upload documents or provide URLs, and the app indexes the content in the Chroma vector store to supply relevant context during chats.

  • Updated Feb 18, 2025
  • Python
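The index-then-retrieve flow these RAG apps describe can be sketched with a toy chunker and a keyword-overlap retriever. The real application uses Chroma embeddings for similarity search; the word-overlap scoring below is only a dependency-free stand-in, and all names are illustrative.

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for indexing."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c.lower().split())),
                  reverse=True)[:k]
```

The top-k chunks returned by `retrieve` are what gets prepended to the user's question as context in the prompt sent to the LLM.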

Self-hosted, AI-powered knowledge base for SMBs: WikiJS + Qdrant vector search, Chrome-extension queries, single Ansible deploy. Unlimited users, no subscriptions: reduce SaaS costs and own your data.

  • Updated Feb 20, 2026
  • Python

🤖 Full-stack conversational AI using a Letta (MemGPT) + RAG hybrid architecture for long-term memory, context persistence, and grounded responses. Built with FastAPI, React, FAISS, and MongoDB, featuring Isabella, a personality-driven assistant with document ingestion, structured memory, logging, and a terminal-style streaming chat UI.

  • Updated Feb 28, 2026
  • Python
