A minimal Agentic RAG built with LangGraph — learn Retrieval-Augmented Generation Agents in minutes.
Client-side retrieval firewall for RAG systems — blocks prompt injection and secret leaks, re-ranks stale or untrusted content, and keeps all data inside your environment.
AI-Rag-ChatBot is a complete example project built with RAGChat and Next.js 14, using the Upstash Vector database, Upstash QStash, Upstash Redis, dynamic webpage folders, middleware, TypeScript, the Vercel AI SDK for client-side hooks, Lucide React for icons, shadcn/ui, and the NextUI plugin to customize Tailwind CSS, deployed on Vercel.
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
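The parent-document pattern described above pairs small chunks for precise embedding matches with larger parent chunks for context. Below is a minimal sketch of that idea using LangChain's ParentDocumentRetriever with Chroma and Ollama embeddings; it is not code from the repository, and the import paths and the nomic-embed-text model name are assumptions that depend on the installed package versions.

```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Small child chunks are embedded for precise matching; the larger parent
# chunks they came from are what the retriever actually returns.
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)

vectorstore = Chroma(
    collection_name="private_docs",
    embedding_function=OllamaEmbeddings(model="nomic-embed-text"),  # assumed model name
)
docstore = InMemoryStore()  # holds the full parent documents

retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=docstore,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)

docs = [Document(page_content="...long report text...", metadata={"source": "report.pdf"})]
retriever.add_documents(docs)

results = retriever.invoke("What does the report conclude about throughput?")
```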
A powerful RAG tool that scrapes YouTube channel videos, extracts transcripts, and enables AI-powered chat interactions using Google's Gemini API.
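A rough sketch of that transcript-to-chat flow is shown below; the youtube-transcript-api package, the gemini-1.5-flash model name, and the placeholder IDs are illustrative assumptions rather than details taken from the project, and the transcript call may differ between library versions.

```python
import google.generativeai as genai
from youtube_transcript_api import YouTubeTranscriptApi

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder key

# Pull the transcript for one video (a channel scraper would loop over video IDs).
video_id = "dQw4w9WgXcQ"  # placeholder video ID
transcript = " ".join(seg["text"] for seg in YouTubeTranscriptApi.get_transcript(video_id))

# Ground the model's answer in the retrieved transcript.
model = genai.GenerativeModel("gemini-1.5-flash")
prompt = (
    "Answer the question using only this transcript.\n\n"
    f"Transcript:\n{transcript[:8000]}\n\nQuestion: What is the video about?"
)
print(model.generate_content(prompt).text)
```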
RAGify is a modern chat application that provides accurate, hallucination-free answers by grounding responses in your documents. No more made-up information - if the answer isn't in your knowledge base, RAGify tells you so.
🩺 RAGnosis — An AI-powered clinical reasoning assistant that retrieves real diagnostic notes (from MIMIC-IV-Ext-DiReCT) and generates explainable medical insights using Mistral-7B & FAISS, wrapped in a clean Gradio UI. ⚡ GPU-ready, explainable, and open-source.
LLMlight is a lightweight Python library for running local language models with built-in memory, retrieval, and prompt optimization, requiring minimal dependencies.
🚀 Build a production-ready Agentic RAG system with LangGraph using minimal code and streamline your AI development process.
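For readers new to LangGraph, the sketch below shows the bare control flow such an agentic RAG system is built around: retrieve, grade the result, then either generate or fall back. It is an illustrative skeleton with placeholder node functions, not this project's actual graph, and it assumes a langgraph release that exposes START and END.

```python
from typing import List, TypedDict

from langgraph.graph import END, START, StateGraph


class RAGState(TypedDict):
    question: str
    documents: List[str]
    answer: str


def retrieve(state: RAGState) -> dict:
    # Placeholder retrieval; a real agent would query a vector store here.
    return {"documents": [f"doc matching: {state['question']}"]}


def grade(state: RAGState) -> str:
    # Route to generation only if something relevant was retrieved.
    return "generate" if state["documents"] else "fallback"


def generate(state: RAGState) -> dict:
    # Placeholder generation; a real agent would call an LLM with the context.
    return {"answer": f"Answer grounded in {len(state['documents'])} document(s)."}


def fallback(state: RAGState) -> dict:
    return {"answer": "I don't know; nothing relevant was retrieved."}


graph = StateGraph(RAGState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.add_node("fallback", fallback)
graph.add_edge(START, "retrieve")
graph.add_conditional_edges("retrieve", grade, {"generate": "generate", "fallback": "fallback"})
graph.add_edge("generate", END)
graph.add_edge("fallback", END)

app = graph.compile()
print(app.invoke({"question": "What is agentic RAG?", "documents": [], "answer": ""}))
```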
RAG Mini Project — Retrieval-Augmented Generation chatbot with FastAPI backend (Docker on Hugging Face Spaces) and Streamlit frontend (Render), featuring document ingestion, vector search, and LLM-powered answers
KardiaFlow is a specialized medical RAG system for analyzing clinical documents. Leveraging Chroma, Sentence-Transformers, and Ollama (gemma2:2b), it transforms PDFs into insights. With FastAPI, Docker, and Azure CI/CD, it offers a robust, secure architecture for professional healthcare AI deployment.
A RAG-based retrieval system for air pollution topics using LangChain and ChromaDB.
A hands-on repository of practical AI apps and LLM-based solutions, including AI agents and RAG systems.
A comprehensive, hands-on tutorial repository for learning and mastering LangChain - the powerful framework for building applications with Large Language Models (LLMs). This codebase provides a structured learning path with practical examples covering everything from basic chat models to advanced AI agents, organized in a progressive curriculum.
Domain Chatbot is a web-based AI chatbot designed for small and medium enterprises (SMEs). It helps users get accurate answers by understanding company policies, documents, and other related content, using retrieval-augmented generation (RAG) with LLMs to provide precise and relevant responses.
Enterprise-grade multilingual RAG knowledge engine implementing FAISS-powered dense vector retrieval, semantic chunking, embedding pipelines, and citation-grounded LLM inference. Supports Arabic & English Q&A via FastAPI microservices, hybrid mock/OpenAI execution, and scalable document intelligence with production-ready retrieval orchestration.
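To make the dense-retrieval part concrete, here is a minimal FAISS plus Sentence-Transformers sketch for bilingual Arabic/English search; the model name and example text are illustrative assumptions, not the project's configuration.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# A multilingual model maps Arabic and English text into one embedding space
# (the specific model is an assumption for this sketch).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

chunks = [
    "The refund policy allows returns within 30 days.",
    "سياسة الاسترجاع تسمح بإرجاع المنتجات خلال ٣٠ يومًا.",
]
embeddings = model.encode(chunks, normalize_embeddings=True)

# Inner product on normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = model.encode(["What is the refund window?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {chunks[i]}")
```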
Hotel IQ is a full-stack platform for analyzing hotel reviews and business data. Built with Angular 19, FastAPI, Power BI, and AI (OpenAI GPT-4o) for sentiment analysis. Visualize KPIs, explore reviews, and generate customer satisfaction insights. 100% French interface.
A production-ready, enterprise-grade Agentic RAG ingestion pipeline built with n8n, Supabase (pgvector), and AI embeddings. Implements event-driven orchestration, hybrid RAG for structured and unstructured data, vector similarity search, and multi-tenant architecture to deliver client-isolated, retrieval-ready knowledge bases.
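The pgvector side of such a pipeline is easy to sketch. The snippet below shows tenant-scoped cosine-distance search against Postgres/Supabase via psycopg2; the table name, columns, tenant value, and 1536-dimension embeddings are hypothetical choices for illustration, not the pipeline's actual schema.

```python
import psycopg2

# Connection details are placeholders; the pgvector extension must be available.
conn = psycopg2.connect("dbname=rag user=rag password=secret host=localhost")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        tenant_id text NOT NULL,
        content text NOT NULL,
        embedding vector(1536)
    );
""")
conn.commit()

# A query embedding would come from an embedding model; zeros keep the sketch self-contained.
query_embedding = [0.0] * 1536
vec_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

# Tenant-scoped nearest-neighbour search: <=> is pgvector's cosine distance operator.
cur.execute(
    """
    SELECT content, embedding <=> %s::vector AS distance
    FROM documents
    WHERE tenant_id = %s
    ORDER BY distance
    LIMIT 5;
    """,
    (vec_literal, "client-42"),
)
for content, distance in cur.fetchall():
    print(f"{distance:.4f}  {content[:80]}")
```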
A doctor-assistive AI system that interprets medical knowledge and patient images simultaneously. It utilizes a Dual-Encoder architecture to cross-reference textbook theory with visual pathology, generating clinically grounded diagnoses.
Production-grade Retrieval-Augmented Generation (RAG) backend in TypeScript with Express.js, PostgreSQL, and Sequelize — featuring OpenAI-powered embeddings, LLM orchestration, and a complete data-to-answer pipeline.