---
license: apache-2.0
title: Self-Reflective CRAG Application "Info Assistant"
sdk: streamlit
emoji: 🌍
colorFrom: blue
short_description: Self-Reflective Multi-Agent LangGraph CRAG Application
sdk_version: 1.38.0
---

# Project presentation

https://gamma.app/docs/Info-Assistant-LangGraph-Approach-to-AI-Assistant-ed9thprs24oyhkj

# Overview

This project demonstrates a self-reflective Corrective Retrieval-Augmented Generation (CRAG) application built with LangGraph. The application uses the Gemma2 9B LLM to provide informative, relevant responses to user queries. It takes a multi-agent approach, combining several components for better performance and user experience.

# Key Features

* Vector Store: Uses a Chroma vector store to efficiently store and retrieve context from scraped webpages on data science and programming (see the setup sketch under Code Sketches below).
* Prompt Guard: Checks each question against predefined safety guidelines before processing.
* LLM Graders: Evaluate question relevance, answer grounding, and answer helpfulness to keep response quality high (see the grader sketch below).
* Retrieval and Generation: Combines context retrieval from the vector store and web search with LLM generation to produce comprehensive answers.
* Iterative Refinement: Rewrites questions and regenerates answers as needed to ensure accuracy and relevance.
* Local Deployment: Can be deployed locally for stronger user data privacy.

## Technical Specifications

* LLM: Gemma2 9B
* Vector Store: Chroma
* Embeddings: Alibaba-NLP/gte-base-en-v1.5
* Workflow: LangGraph
* Model API: ChatGroq
* Web Search: Wikipedia and Google SERP

## Workflow

* User Query: The user inputs a question.
* Prompt Guard: Checks that the question is safe and appropriate.
* Context Retrieval: Searches the vector store for relevant documents.
* Document Relevance: Evaluates document relevance with the LLM graders.
* Web Search: If the retrieved context is insufficient, searches Wikipedia and Google SERP.
* Answer Generation: Generates a response from the retrieved documents with the LLM.
* Answer Evaluation: Grades answer grounding and helpfulness with the LLM graders.
* Refinement: If needed, rewrites the question or regenerates the answer.

The graph wiring behind this loop is sketched at the end of this README.

## Customization Options

* Model Selection: Choose a different LLM to match your needs (e.g., a larger model for more complex tasks).
* Fine-Tuning: Fine-tune the LLM for a specific style or domain.
* Retrieval Methods: Explore alternative vector stores or retrieval techniques.

## Local Deployment

To deploy the application locally, follow these steps:

* Set up the environment: Install the required dependencies (LangGraph, Chroma, the LLM API client, etc.).
* Prepare the data: Scrape the webpages and build the vector store.
* Configure the workflow: Define the LangGraph workflow and LLM graders.
* Run the application: Start the app to begin processing user queries.

## Future Enhancements

* Knowledge Base Expansion: Continuously update the vector store with new data.
* Retrieval Optimization: Explore GraphRAG.
* Integration with Other Applications: Integrate with other tools and platforms for broader use cases.
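
## Code Sketches

The sketches below illustrate the pieces described above. They are minimal examples under stated assumptions, not the app's exact code: the URLs, chunk sizes, and variable names are illustrative.

First, the data-preparation step: scraping pages, splitting them into chunks, embedding with Alibaba-NLP/gte-base-en-v1.5, and persisting a Chroma store. The two URLs stand in for the real scraped page list.

```python
# Minimal sketch of the "Prepare the data" step; URLs and chunk sizes
# are placeholder assumptions, not the app's actual configuration.
from langchain_community.document_loaders import WebBaseLoader
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_chroma import Chroma
from langchain_text_splitters import RecursiveCharacterTextSplitter

urls = [
    "https://en.wikipedia.org/wiki/Data_science",  # placeholder sources
    "https://en.wikipedia.org/wiki/Python_(programming_language)",
]

# Scrape the pages and split them into overlapping chunks for retrieval.
docs = [doc for url in urls for doc in WebBaseLoader(url).load()]
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# gte-base-en-v1.5 ships custom model code, so trust_remote_code is required.
embeddings = HuggingFaceEmbeddings(
    model_name="Alibaba-NLP/gte-base-en-v1.5",
    model_kwargs={"trust_remote_code": True},
)

# Persist the vector store locally so it survives restarts.
vectorstore = Chroma.from_documents(
    documents=chunks,
    embedding=embeddings,
    persist_directory="./chroma_db",
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
```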
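
Next, one of the LLM graders (document relevance) as a structured-output chain. This assumes the Groq-hosted `gemma2-9b-it` model id and a `GROQ_API_KEY` in the environment; the schema and prompt wording are illustrative.

```python
# Minimal sketch of a document-relevance grader; assumes Groq's
# "gemma2-9b-it" model id and GROQ_API_KEY set in the environment.
from langchain_core.prompts import ChatPromptTemplate
from langchain_groq import ChatGroq
from pydantic import BaseModel, Field

class GradeDocuments(BaseModel):
    """Binary relevance verdict for a retrieved document."""
    binary_score: str = Field(
        description="'yes' if the document is relevant to the question, else 'no'"
    )

llm = ChatGroq(model="gemma2-9b-it", temperature=0)
grader_llm = llm.with_structured_output(GradeDocuments)

grade_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a grader. Answer 'yes' if the document is relevant "
               "to the user question, otherwise 'no'."),
    ("human", "Document:\n{document}\n\nQuestion: {question}"),
])
retrieval_grader = grade_prompt | grader_llm

# Usage:
# verdict = retrieval_grader.invoke({"document": "...", "question": "..."})
```

The same pattern (a Pydantic schema plus `with_structured_output`) extends to the answer-grounding and helpfulness graders.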
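
Finally, the graph wiring for the workflow above. The node bodies here are stand-in stubs so the sketch runs on its own; in the real app they call the retriever, graders, web search, and generator, and the refinement loop would add a question-rewrite node feeding back into retrieval.

```python
# Minimal sketch of the LangGraph wiring; node bodies are stubs and the
# node names are illustrative, not necessarily the app's exact ones.
from typing import List, TypedDict
from langgraph.graph import StateGraph, START, END

class GraphState(TypedDict):
    question: str
    documents: List[str]
    generation: str

def retrieve(state: GraphState) -> dict:
    return {"documents": ["<retrieved context>"]}  # stub: vector-store lookup

def grade_documents(state: GraphState) -> dict:
    # Stub: the real app grades each document with an LLM grader chain.
    return {"documents": [d for d in state["documents"] if d]}

def web_search(state: GraphState) -> dict:
    return {"documents": state["documents"] + ["<web result>"]}  # stub

def generate(state: GraphState) -> dict:
    return {"generation": f"Answer based on {len(state['documents'])} docs"}

def decide_to_generate(state: GraphState) -> str:
    # Fall back to web search when nothing relevant was retrieved.
    return "generate" if state["documents"] else "web_search"

workflow = StateGraph(GraphState)
workflow.add_node("retrieve", retrieve)
workflow.add_node("grade_documents", grade_documents)
workflow.add_node("web_search", web_search)
workflow.add_node("generate", generate)

workflow.add_edge(START, "retrieve")
workflow.add_edge("retrieve", "grade_documents")
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {"web_search": "web_search", "generate": "generate"},
)
workflow.add_edge("web_search", "generate")
workflow.add_edge("generate", END)

app = workflow.compile()
result = app.invoke({"question": "What is gradient descent?"})
```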