A modern AWS Bedrock-based Retrieval-Augmented Generation (RAG) web application that provides an intelligent chatbot interface for querying documents and general AI conversations.
Watch the application in action: `architecture/open-source-rag-chatbot-1756584527454.mp4` (demo video).
This application leverages AWS Bedrock services to provide two distinct AI interaction modes:
- Knowledge Base Mode: Uses AWS Bedrock Knowledge Base with retrieval-augmented generation for document-specific queries
- Direct AI Mode: Direct interaction with AWS Bedrock's Llama 3 model for general conversations
- Dual AI Interaction Modes:
  - Knowledge Base RAG: Query specific document collections and knowledge bases
  - Direct AI Chat: General-purpose AI conversations using Llama 3
- Modern Web Interface: Clean, responsive design with dark theme
- Real-time Chat: Interactive chat interface with typing indicators
- Document Intelligence: AI-powered document querying and analysis
- AWS Integration: Full integration with AWS Bedrock services
- FastAPI: High-performance Python web framework
- AWS Bedrock: AI/ML services for model inference and knowledge base
- AWS Bedrock Agent Runtime: For knowledge base retrieval and generation
- Boto3: AWS SDK for Python integration
- Uvicorn: ASGI server for FastAPI
- HTML5/CSS3: Modern responsive web interface
- Vanilla JavaScript: Client-side interactivity
- Jinja2: Server-side templating
- AWS Bedrock Runtime: Direct model invocation
- AWS Bedrock Agent Runtime: Knowledge base operations
- AWS Bedrock Knowledge Base: Document retrieval system
- Python 3.8+
- AWS Account with Bedrock access
- AWS CLI configured with appropriate permissions
- Virtual environment (recommended)
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd nextwork-rag-webapp
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Configure environment variables by creating a `.env` file in the root directory:

   ```
   AWS_REGION=us-east-2
   MODEL_ID=meta.llama3-70b-instruct-v1:0
   KNOWLEDGE_BASE_ID=your_knowledge_base_id
   MODEL_ARN=arn:aws:bedrock:us-east-2::foundation-model/meta.llama3-70b-instruct-v1:0
   ```
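The application reads these values at startup; a minimal sketch of how they could be validated so misconfiguration fails fast (illustrative helper, not the app's actual code):

```python
import os

# The four settings the .env file above defines.
REQUIRED_VARS = ("AWS_REGION", "MODEL_ID", "KNOWLEDGE_BASE_ID", "MODEL_ARN")

def load_config(env=None):
    """Collect the required settings, raising early if any are unset."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return {name: env[name] for name in REQUIRED_VARS}
```

A library such as python-dotenv can populate `os.environ` from the `.env` file before `load_config()` runs.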
Run the simple API-only app (`main.py`):

```bash
python -m uvicorn main:app --reload --host 127.0.0.1 --port 8000
```

Or run the full web application (`web_app.py`):

```bash
python -m uvicorn web_app:app --reload --host 127.0.0.1 --port 8000
# or equivalently
python web_app.py
```

The application will be available at http://127.0.0.1:8000.
`GET /bedrock/query?text=your_question_here`
Queries the AWS Bedrock Knowledge Base for document-specific information.
Example:
http://127.0.0.1:8000/bedrock/query?text=what%20are%20Warren%20Buffett%27s%20main%20investment%20principles?
More Warren Buffett RAG Examples:
http://127.0.0.1:8000/bedrock/query?text=what%20is%20the%20margin%20of%20safety%20concept?
http://127.0.0.1:8000/bedrock/query?text=how%20does%20Buffett%20evaluate%20management%20quality?
http://127.0.0.1:8000/bedrock/query?text=what%20are%20the%20business%20tenets%20for%20investing?
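Under the hood, a knowledge-base query like the ones above maps to a single `RetrieveAndGenerate` call on the `bedrock-agent-runtime` client. A minimal sketch of what such an endpoint might do with boto3 (placeholder IDs and function names are illustrative, not the app's actual code):

```python
# Placeholder values; the real ones come from the .env file.
KNOWLEDGE_BASE_ID = "your_knowledge_base_id"
MODEL_ARN = "arn:aws:bedrock:us-east-2::foundation-model/meta.llama3-70b-instruct-v1:0"

def build_rag_request(question):
    """Build keyword arguments for bedrock-agent-runtime's retrieve_and_generate."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    }

def query_knowledge_base(question):
    """Retrieve relevant document chunks and generate a grounded answer in one call."""
    import boto3  # imported here so the request builder is testable without AWS access
    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(**build_rag_request(question))
    return response["output"]["text"]
```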
`GET /bedrock/invoke?text=your_question_here`
Directly invokes the Llama 3 model for general AI conversations.
Example:
http://127.0.0.1:8000/bedrock/invoke?text=explain%20artificial%20intelligence
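Direct invocation maps to the lower-level `InvokeModel` API on the `bedrock-runtime` client. A hedged sketch of what the endpoint might do (the request/response fields follow Bedrock's documented Llama 3 format; this is not the app's actual code):

```python
import json

MODEL_ID = "meta.llama3-70b-instruct-v1:0"  # from the .env file

def build_llama3_body(prompt, max_gen_len=512, temperature=0.5):
    """Serialize the JSON body that Llama 3 models on Bedrock expect."""
    return json.dumps({
        "prompt": prompt,
        "max_gen_len": max_gen_len,
        "temperature": temperature,
    })

def invoke_llama3(prompt):
    import boto3  # imported here so the body builder is testable without AWS access
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=MODEL_ID, body=build_llama3_body(prompt))
    # Llama 3 responses carry the completion under the "generation" key.
    return json.loads(response["body"].read())["generation"]
```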
```
aws-bedrock-rag-webapp/
├── main.py              # Simple FastAPI app with knowledge base endpoint
├── web_app.py           # Full web application with UI
├── requirements.txt     # Python dependencies
├── architecture/
│   ├── image.png        # Architecture diagram
│   └── open-source-rag-chatbot-1756584527454.mp4  # Demo video
├── templates/
│   └── index.html       # Main web interface
├── static/
│   └── style.css        # Application styles
└── venv/                # Virtual environment
```
- Enable AWS Bedrock: Ensure Bedrock is available in your AWS region
- Model Access: Request access to Llama 3 models in AWS Bedrock console
- Knowledge Base: Create and configure a Bedrock Knowledge Base with your documents
- IAM Permissions: Ensure your AWS credentials have the following permissions:
  - `bedrock:InvokeModel`
  - `bedrock:RetrieveAndGenerate`
  - `bedrock-agent:RetrieveAndGenerate`
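An illustrative IAM policy granting the actions listed above (the wildcard `Resource` is for brevity; scope it to your own model and knowledge base ARNs in practice):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:RetrieveAndGenerate",
        "bedrock-agent:RetrieveAndGenerate"
      ],
      "Resource": "*"
    }
  ]
}
```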
| Variable | Description | Example |
|---|---|---|
| `AWS_REGION` | AWS region for Bedrock services | `us-east-2` |
| `MODEL_ID` | Bedrock model identifier | `meta.llama3-70b-instruct-v1:0` |
| `KNOWLEDGE_BASE_ID` | Your Bedrock Knowledge Base ID | `ABC123DEF456` |
| `MODEL_ARN` | Full ARN of the Bedrock model | `arn:aws:bedrock:...` |
- Document Analysis: Intelligent querying of document collections
- Interactive Documentation: Query documentation through natural language
- Knowledge Management: AI-powered information retrieval from knowledge bases
- Investment Research: Query investment strategies and principles (demo includes Warren Buffett's investment philosophy)
- General AI Chat: Use for broader technical discussions and assistance
This demo includes "The Warren Buffett Way" by Robert G. Hagstrom as the sample document for testing RAG capabilities. The knowledge base contains Warren Buffett's investment strategies, business principles, and financial wisdom.
Source Document: The Warren Buffett Way PDF
- Retrieves relevant information from indexed document collections
- Provides context-aware responses based on actual document content
- Ideal for specific questions about stored information and documentation
- Direct access to Llama 3 model capabilities
- General-purpose conversational AI
- Useful for broader technical discussions and explanations
- Clean Chat Interface: Streamlined chatbot interface with mode toggle
- Responsive Design: Works on desktop and mobile devices
- Real-time Feedback: Typing indicators and smooth interactions
The application includes comprehensive error handling for:
- AWS service errors (ClientError, BotoCoreError)
- Missing environment variables
- Model invocation failures
- Network connectivity issues
The application is fully responsive and optimized for:
- Desktop browsers
- Tablet devices
- Mobile phones
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is open source. Please refer to the license file for usage guidelines.
Built with ❤️ using AWS Bedrock and FastAPI
