MaxKB: Complete Guide to Building Enterprise-Grade AI Agents
⏱️ Estimated Reading Time: 15 minutes
Introduction to MaxKB
MaxKB is a powerful, user-friendly open-source platform designed for building enterprise-grade AI agents. Developed by the 1Panel team, this comprehensive solution has gained significant traction with over 18,000 stars on GitHub, making it one of the most popular enterprise AI agent platforms available today.
What Makes MaxKB Special?
MaxKB stands out in the crowded AI agent landscape by offering:
- Enterprise-Ready Architecture: Built with scalability and security in mind
- Multi-Model Support: Compatible with various LLMs including Llama3, DeepSeek-R1, and Qwen3
- Knowledge Base Integration: Advanced RAG (Retrieval-Augmented Generation) capabilities
- Visual Workflow Builder: User-friendly interface for creating complex agent workflows
- Open Source Flexibility: GPL-3.0 licensed with active community support
Technical Architecture Overview
Core Technology Stack
MaxKB leverages a modern, robust technology stack:
- Frontend: Vue.js with TypeScript for responsive user interfaces
- Backend: Python with Django framework for robust API services
- LLM Framework: LangChain for seamless AI model integration
- Database: PostgreSQL with pgvector extension for vector storage
- Deployment: Docker and Docker Compose for containerized deployment
Key Features
- Agent Management: Create, configure, and manage multiple AI agents
- Knowledge Base: Upload and manage documents for RAG applications
- Model Integration: Support for various LLM providers and models
- Conversation Management: Advanced chat interface with context awareness
- API Access: RESTful APIs for integration with existing systems
- Multi-tenant Architecture: Support for multiple organizations and users
Installation and Setup Guide
Prerequisites
Before installing MaxKB, ensure your system meets the following requirements:
- Operating System: Linux, macOS, or Windows with WSL2
- Docker: Version 20.10 or higher
- Docker Compose: Version 2.0 or higher
- Memory: Minimum 4GB RAM (8GB recommended)
- Storage: At least 10GB free space
Method 1: Docker Compose Deployment (Recommended)
Step 1: Clone the Repository
# Clone MaxKB repository
git clone https://github.com/1Panel-dev/MaxKB.git
cd MaxKB
# Switch to the latest stable version
git checkout v2
Step 2: Configure Environment Variables
# Create environment configuration
cp .env.example .env
# Edit configuration file
nano .env
Essential Environment Variables:
# Database Configuration
POSTGRES_DB=maxkb
POSTGRES_USER=maxkb
POSTGRES_PASSWORD=your_secure_password
# Redis Configuration
REDIS_PASSWORD=your_redis_password
# MaxKB Configuration
SECRET_KEY=your_secret_key_here
DEBUG=false
ALLOWED_HOSTS=localhost,127.0.0.1,your-domain.com
# Storage Configuration
MEDIA_ROOT=/opt/maxkb/media
STATIC_ROOT=/opt/maxkb/static
Step 3: Launch MaxKB
# Start all services
docker-compose up -d
# Check service status
docker-compose ps
# View logs
docker-compose logs -f
Step 4: Initial Setup
# Access the container for initial setup
docker-compose exec maxkb python manage.py migrate
# Create superuser account
docker-compose exec maxkb python manage.py createsuperuser
# Collect static files
docker-compose exec maxkb python manage.py collectstatic --noinput
Method 2: Manual Installation
Step 1: Install Dependencies
# Install Python dependencies
pip install -r requirements.txt
# Install Node.js dependencies for frontend
cd ui
npm install
npm run build
cd ..
Step 2: Database Setup
# Install PostgreSQL and pgvector
# Ubuntu/Debian
sudo apt-get install postgresql postgresql-contrib
# pgvector ships as a separate package (name varies with your PostgreSQL version)
sudo apt-get install postgresql-16-pgvector
# Create the database, user, and vector extension
sudo -u postgres psql
CREATE DATABASE maxkb;
CREATE USER maxkb WITH PASSWORD 'your_password';
GRANT ALL PRIVILEGES ON DATABASE maxkb TO maxkb;
\c maxkb
CREATE EXTENSION vector;
\q
Step 3: Configuration
# Configure Django settings
export DATABASE_URL=postgresql://maxkb:your_password@localhost/maxkb
export SECRET_KEY=your_secret_key
export DEBUG=False
# Run migrations
python manage.py migrate
# Create superuser
python manage.py createsuperuser
# Start development server
python manage.py runserver 0.0.0.0:8000
Configuration and Setup
1. Initial Login and Setup
- Access the Web Interface: Navigate to http://localhost:8000 in your browser
- Login: Use the superuser credentials you created during setup
- System Configuration: Complete the initial system setup wizard
2. Model Configuration
Adding OpenAI Models
{
  "provider": "openai",
  "api_key": "your_openai_api_key",
  "base_url": "https://api.openai.com/v1",
  "models": [
    {
      "name": "gpt-4",
      "max_tokens": 8192,
      "temperature": 0.7
    },
    {
      "name": "gpt-3.5-turbo",
      "max_tokens": 4096,
      "temperature": 0.7
    }
  ]
}
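To make the role of these parameters concrete, here is a small sketch of how one entry from the config above could be turned into a chat-completions request body. The helper name `build_chat_payload` is illustrative, not part of MaxKB; the field names follow the OpenAI API.

```python
# Hypothetical helper: turn one model entry from the config above into a
# chat-completions request payload (field names follow the OpenAI API).
def build_chat_payload(model_entry, messages):
    return {
        "model": model_entry["name"],
        "max_tokens": model_entry["max_tokens"],
        "temperature": model_entry["temperature"],
        "messages": messages,
    }

config = {
    "provider": "openai",
    "models": [
        {"name": "gpt-4", "max_tokens": 8192, "temperature": 0.7},
    ],
}
payload = build_chat_payload(config["models"][0], [{"role": "user", "content": "Hello"}])
```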
Adding Local Models (Ollama)
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull models
ollama pull llama3
ollama pull qwen2
# Configure in MaxKB
{
  "provider": "ollama",
  "base_url": "http://localhost:11434",
  "models": ["llama3", "qwen2"]
}
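Before wiring Ollama into MaxKB, it is worth confirming that the models named in the config are actually pulled locally. Ollama's `GET /api/tags` endpoint lists the available models; the sketch below parses a captured response rather than calling the API, so the helper name `missing_models` and the sample payload are illustrative.

```python
import json

# Sketch: check which models from the MaxKB config are missing locally.
# Ollama's /api/tags returns names like "llama3:latest", so we compare
# on the base name before the tag.
def missing_models(required, tags_response):
    available = {m["name"].split(":")[0] for m in tags_response["models"]}
    return [name for name in required if name not in available]

# A captured /api/tags response (abridged)
tags = json.loads('{"models": [{"name": "llama3:latest"}, {"name": "qwen2:latest"}]}')
print(missing_models(["llama3", "qwen2", "mistral"], tags))  # ['mistral']
```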
3. Knowledge Base Setup
Document Upload and Processing
- Create Knowledge Base: Navigate to Knowledge Base → Create New
- Upload Documents: Support for PDF, DOCX, TXT, MD formats
- Configure Chunking: Set chunk size and overlap parameters
- Embedding Model: Select appropriate embedding model for your language
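The chunking step above can be sketched as a sliding window over the document text: each chunk is `chunk_size` characters long, and consecutive chunks share `overlap` characters so that sentences split at a boundary still appear intact in one chunk. The function below is a minimal illustration, not MaxKB's internal implementation.

```python
# Minimal sketch of chunking with overlap; parameter names are
# illustrative and map onto the knowledge base settings in the UI.
def chunk_text(text, chunk_size=500, overlap=50):
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # advance by this much per chunk
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

text = "".join(str(i % 10) for i in range(1200))
chunks = chunk_text(text, chunk_size=500, overlap=50)
# The tail of each chunk repeats at the head of the next one.
```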
Vector Database Configuration
# Vector database settings
vector_db:
  type: "pgvector"
  connection:
    host: "localhost"
    port: 5432
    database: "maxkb"
    user: "maxkb"
    password: "your_password"
  embedding:
    model: "text-embedding-ada-002"
    dimensions: 1536
Creating Your First AI Agent
Step 1: Agent Creation
- Navigate to Agents: Click on “Agents” in the main navigation
- Create New Agent: Click “Create Agent” button
- Basic Information:
- Agent Name: “Customer Support Bot”
- Description: “AI assistant for customer inquiries”
- Avatar: Upload custom avatar image
Step 2: Model Selection
{
  "model_provider": "openai",
  "model_name": "gpt-4",
  "temperature": 0.7,
  "max_tokens": 1000,
  "top_p": 1.0,
  "frequency_penalty": 0.0,
  "presence_penalty": 0.0
}
Step 3: Knowledge Base Integration
- Select Knowledge Bases: Choose relevant knowledge bases
- Configure RAG Settings:
- Similarity threshold: 0.7
- Max retrieved documents: 5
- Retrieval strategy: “similarity”
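The two RAG settings above work together: the similarity threshold discards weak matches, and the document cap bounds how much context reaches the prompt. A minimal sketch of that selection logic (function and data names are illustrative):

```python
# Sketch: keep only chunks scoring at or above the threshold,
# best matches first, capped at max_documents.
def select_context(scored_chunks, threshold=0.7, max_documents=5):
    kept = [(text, score) for text, score in scored_chunks if score >= threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return kept[:max_documents]

candidates = [("refund policy", 0.91), ("office hours", 0.42), ("shipping FAQ", 0.75)]
print(select_context(candidates))  # [('refund policy', 0.91), ('shipping FAQ', 0.75)]
```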
Step 4: Prompt Engineering
System Prompt:
You are a helpful customer support assistant for [Company Name].
You have access to our knowledge base and should provide accurate,
helpful responses based on the available information.
Guidelines:
- Always be polite and professional
- If you don't know something, admit it and offer to escalate
- Provide specific solutions when possible
- Ask clarifying questions when needed
Available Knowledge:
{context}
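The `{context}` placeholder is filled with the retrieved chunks before the prompt is sent to the model. A minimal sketch of that substitution, using an abridged version of the prompt above (the separator and helper name are illustrative):

```python
# Abridged system prompt with the {context} placeholder from above
SYSTEM_PROMPT = (
    "You are a helpful customer support assistant.\n"
    "Available Knowledge:\n{context}"
)

# Join retrieved chunks and substitute them into the template
def render_prompt(template, retrieved_chunks):
    context = "\n---\n".join(retrieved_chunks)
    return template.format(context=context)

prompt = render_prompt(
    SYSTEM_PROMPT,
    ["Passwords reset via Settings.", "Refunds within 30 days."],
)
```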
Step 5: Testing Your Agent
- Test Interface: Use the built-in chat interface to test responses
- Knowledge Retrieval: Verify that relevant documents are being retrieved
- Response Quality: Evaluate the accuracy and helpfulness of responses
Advanced Features and Customization
1. Workflow Automation
MaxKB supports complex workflow automation through its visual builder:
Creating Multi-Step Workflows
{
  "workflow": {
    "name": "Customer Inquiry Handler",
    "steps": [
      {
        "id": "intent_detection",
        "type": "classification",
        "model": "gpt-4",
        "prompt": "Classify the customer intent: support, sales, billing, technical"
      },
      {
        "id": "route_to_specialist",
        "type": "conditional",
        "conditions": {
          "support": "general_support_agent",
          "technical": "technical_specialist",
          "billing": "billing_agent"
        }
      },
      {
        "id": "knowledge_search",
        "type": "rag",
        "knowledge_base": "customer_support_kb",
        "max_results": 3
      },
      {
        "id": "response_generation",
        "type": "generation",
        "model": "gpt-4",
        "template": "Based on the following context: {context}\n\nProvide a helpful response to: {query}"
      }
    ]
  }
}
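The conditional step in this workflow is just a lookup from detected intent to the next agent. A sketch of how it could be evaluated, with a fallback for intents that have no explicit route (the fallback choice is an assumption, not specified by the JSON above):

```python
# Route table mirroring the "conditions" block in the workflow JSON
ROUTES = {
    "support": "general_support_agent",
    "technical": "technical_specialist",
    "billing": "billing_agent",
}

# Select the next agent; unknown intents fall back to general support
def route_intent(intent, routes=ROUTES, default="general_support_agent"):
    return routes.get(intent, default)

print(route_intent("technical"))  # technical_specialist
```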
2. API Integration
MaxKB provides comprehensive REST APIs for system integration:
Authentication
import requests
# Login and get token
login_response = requests.post('http://localhost:8000/api/auth/login/', {
    'username': 'your_username',
    'password': 'your_password'
})
token = login_response.json()['token']
# Use token for authenticated requests
headers = {'Authorization': f'Bearer {token}'}
Agent Interaction API
# Send message to agent
def chat_with_agent(agent_id, message, conversation_id=None):
    url = f'http://localhost:8000/api/agents/{agent_id}/chat/'
    payload = {
        'message': message,
        'conversation_id': conversation_id
    }
    response = requests.post(url, json=payload, headers=headers)
    return response.json()
# Example usage
result = chat_with_agent(
    agent_id='123',
    message='How do I reset my password?',
    conversation_id='conv_456'
)
print(result['response'])
Knowledge Base Management
# Upload document to knowledge base
def upload_document(kb_id, file_path):
    url = f'http://localhost:8000/api/knowledge-bases/{kb_id}/documents/'
    with open(file_path, 'rb') as file:
        files = {'file': file}
        response = requests.post(url, files=files, headers=headers)
    return response.json()
# Search knowledge base
def search_knowledge(kb_id, query, limit=5):
    url = f'http://localhost:8000/api/knowledge-bases/{kb_id}/search/'
    payload = {
        'query': query,
        'limit': limit,
        'threshold': 0.7
    }
    response = requests.post(url, json=payload, headers=headers)
    return response.json()
3. Custom Model Integration
Adding Custom LLM Providers
# Example: Custom model provider integration
import requests

class CustomModelProvider:
    def __init__(self, api_key, base_url):
        self.api_key = api_key
        self.base_url = base_url

    def generate_response(self, prompt, model_config):
        headers = {
            'Authorization': f'Bearer {self.api_key}',
            'Content-Type': 'application/json'
        }
        payload = {
            'model': model_config['model'],
            'prompt': prompt,
            'max_tokens': model_config.get('max_tokens', 1000),
            'temperature': model_config.get('temperature', 0.7)
        }
        response = requests.post(
            f'{self.base_url}/completions',
            headers=headers,
            json=payload
        )
        return response.json()['choices'][0]['text']
# Register custom provider
custom_provider = CustomModelProvider(
    api_key='your_custom_api_key',
    base_url='https://your-custom-llm-api.com/v1'
)
Production Deployment
1. Environment Preparation
System Requirements for Production
- CPU: 4+ cores recommended
- Memory: 16GB+ RAM for optimal performance
- Storage: SSD with 100GB+ available space
- Network: Stable internet connection for model APIs
Security Configuration
# docker-compose.prod.yml
version: '3.8'
services:
  maxkb:
    image: maxkb:latest
    environment:
      - DEBUG=false
      - ALLOWED_HOSTS=your-domain.com
      - SECURE_SSL_REDIRECT=true
      - SECURE_HSTS_SECONDS=31536000
      - SECURE_CONTENT_TYPE_NOSNIFF=true
      - SECURE_BROWSER_XSS_FILTER=true
    volumes:
      - ./ssl:/etc/ssl/certs
    ports:
      - "443:8000"
2. Load Balancing and Scaling
Nginx Configuration
upstream maxkb_backend {
    server maxkb_1:8000;
    server maxkb_2:8000;
    server maxkb_3:8000;
}

server {
    listen 80;
    server_name your-domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name your-domain.com;

    ssl_certificate /etc/ssl/certs/your-domain.crt;
    ssl_certificate_key /etc/ssl/private/your-domain.key;

    location / {
        proxy_pass http://maxkb_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /static/ {
        alias /opt/maxkb/static/;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
}
3. Monitoring and Maintenance
Health Check Script
#!/bin/bash
# health_check.sh
MAXKB_URL="https://your-domain.com"
HEALTH_ENDPOINT="/api/health/"

# Check service availability
response=$(curl -s -o /dev/null -w "%{http_code}" "${MAXKB_URL}${HEALTH_ENDPOINT}")

if [ "$response" -eq 200 ]; then
    echo "✅ MaxKB is healthy"
    exit 0
else
    echo "❌ MaxKB health check failed (HTTP $response)"
    # Send alert to monitoring system
    # curl -X POST "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK" \
    #   -H 'Content-type: application/json' \
    #   --data '{"text":"MaxKB health check failed"}'
    exit 1
fi
Log Management
# Log rotation configuration (writing to /etc/logrotate.d requires root)
sudo tee /etc/logrotate.d/maxkb > /dev/null <<'EOF'
/opt/maxkb/logs/*.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    create 644 maxkb maxkb
    postrotate
        docker-compose restart maxkb
    endscript
}
EOF
Troubleshooting Common Issues
1. Installation Problems
Docker Permission Issues
# Add user to docker group
sudo usermod -aG docker $USER
newgrp docker
# Verify docker access
docker run hello-world
Memory Issues
# Check system memory
free -h
# Increase swap if needed
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
2. Model Configuration Issues
API Key Validation
# Test OpenAI API connection
import openai

try:
    client = openai.OpenAI(api_key="your_api_key")
    response = client.models.list()
    print("✅ API connection successful")
    print(f"Available models: {[model.id for model in response.data]}")
except Exception as e:
    print(f"❌ API connection failed: {e}")
Local Model Issues
# Check Ollama service
systemctl status ollama
# Restart Ollama if needed
systemctl restart ollama
# Test model availability
ollama list
ollama run llama3 "Hello, how are you?"
3. Performance Optimization
Database Optimization
-- PostgreSQL performance tuning
-- Run as postgres user
-- Create indexes for vector search
CREATE INDEX CONCURRENTLY idx_documents_embedding
ON documents USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);
-- Analyze table statistics
ANALYZE documents;
-- Check index usage
SELECT schemaname, tablename, indexname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE schemaname = 'public';
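The `lists = 100` setting in the index above should scale with table size. The pgvector documentation suggests roughly rows / 1000 lists for tables up to about one million rows, and sqrt(rows) beyond that; a small helper makes the rule of thumb explicit (the function name is ours, not part of pgvector):

```python
import math

# Rule of thumb from the pgvector docs for sizing the ivfflat "lists"
# parameter: rows / 1000 up to ~1M rows, sqrt(rows) above that.
def recommended_lists(row_count):
    if row_count <= 1_000_000:
        return max(1, row_count // 1000)
    return int(math.sqrt(row_count))

print(recommended_lists(100_000))    # 100
print(recommended_lists(4_000_000))  # 2000
```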
Caching Configuration
# Redis caching for improved performance
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://localhost:6379/1',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        },
        'KEY_PREFIX': 'maxkb',
        'TIMEOUT': 3600,  # 1 hour default timeout
    }
}

# Cache frequently accessed data; @cache_page only applies to views,
# so plain functions should use the low-level cache API instead
from django.core.cache import cache

def get_agent_config(agent_id):
    key = f'agent_config:{agent_id}'
    config = cache.get(key)
    if config is None:
        config = Agent.objects.get(id=agent_id).config
        cache.set(key, config, 300)  # cache for 5 minutes
    return config
Best Practices and Tips
1. Agent Design Guidelines
Effective Prompt Engineering
- Be Specific: Clearly define the agent’s role and capabilities
- Provide Context: Include relevant background information
- Set Boundaries: Explicitly state what the agent should not do
- Use Examples: Include few-shot examples for complex tasks
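The last guideline, few-shot examples, can be applied mechanically: prepend worked question/answer pairs so the model sees the expected answer format before the real query. A minimal sketch (the helper name and example content are illustrative):

```python
# Assemble a few-shot prompt: role definition, worked examples,
# then the real query left open for the model to complete.
def few_shot_prompt(role, examples, query):
    lines = [role, ""]
    for question, answer in examples:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    lines.append(f"Q: {query}")
    lines.append("A:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "You answer billing questions concisely.",
    [("How do I get an invoice?", "Open Billing > Invoices and click Download.")],
    "Can I change my payment method?",
)
```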
Knowledge Base Organization
# Recommended folder structure for knowledge bases
Customer Support/
├── FAQ/
│   ├── account_issues.md
│   ├── billing_questions.md
│   └── technical_problems.md
├── Policies/
│   ├── refund_policy.md
│   ├── privacy_policy.md
│   └── terms_of_service.md
└── Procedures/
    ├── password_reset.md
    ├── account_setup.md
    └── troubleshooting_guide.md
2. Security Best Practices
API Security
# Implement rate limiting
from django.core.exceptions import ValidationError
from django_ratelimit.decorators import ratelimit
import bleach

@ratelimit(key='ip', rate='100/h', method='POST')
def chat_endpoint(request):
    # API endpoint implementation
    pass

# Input validation
def validate_message(message):
    if len(message) > 4000:
        raise ValidationError("Message too long")
    # Strip markup and potentially harmful content
    cleaned_message = bleach.clean(message, strip=True)
    return cleaned_message
Data Privacy
# Implement data retention policies
from datetime import timedelta

from celery import shared_task
from django.utils import timezone

@shared_task
def cleanup_old_conversations():
    # Delete archived conversations older than 90 days
    cutoff_date = timezone.now() - timedelta(days=90)
    Conversation.objects.filter(
        created_at__lt=cutoff_date,
        is_archived=True
    ).delete()
3. Performance Optimization
Vector Search Optimization
# Optimize vector similarity search
class OptimizedVectorSearch:
    def __init__(self, embedding_model, vector_db):
        self.embedding_model = embedding_model
        self.vector_db = vector_db
        self.cache = {}

    def search(self, query, limit=5, threshold=0.7):
        # Check cache first
        cache_key = f"{hash(query)}_{limit}_{threshold}"
        if cache_key in self.cache:
            return self.cache[cache_key]

        # Generate query embedding
        query_embedding = self.embedding_model.encode(query)

        # Perform vector search with filters
        results = self.vector_db.similarity_search(
            query_embedding,
            limit=limit,
            threshold=threshold,
            filter={'status': 'active'}
        )

        # Cache results
        self.cache[cache_key] = results
        return results
Conclusion
MaxKB represents a significant advancement in enterprise AI agent development, offering a comprehensive platform that balances power with usability. Its open-source nature, combined with enterprise-grade features, makes it an excellent choice for organizations looking to implement AI agents at scale.
Key Takeaways
- Versatile Platform: MaxKB supports a wide range of use cases from customer support to complex workflow automation
- Scalable Architecture: Built to handle enterprise-level demands with proper deployment and optimization
- Active Community: Strong community support and regular updates ensure continued improvement
- Cost-Effective: Open-source licensing reduces total cost of ownership compared to proprietary solutions
Next Steps
To continue your MaxKB journey:
- Experiment: Start with simple agents and gradually increase complexity
- Community Engagement: Join the MaxKB community on GitHub for support and contributions
- Documentation: Explore the official documentation for advanced features
- Custom Development: Consider contributing to the project or developing custom extensions
Additional Resources
- Official Website: maxkb.cn
- GitHub Repository: github.com/1Panel-dev/MaxKB
- Community Forum: Available through GitHub Discussions
- Documentation: Comprehensive guides available in the repository
MaxKB empowers organizations to harness the full potential of AI agents while maintaining control, security, and flexibility. Whether you’re building customer support bots, internal knowledge assistants, or complex automation workflows, MaxKB provides the tools and framework needed for success.
This tutorial is part of our comprehensive AI and automation series. For more advanced tutorials and enterprise AI solutions, visit Thaki Cloud.