```bash
git clone https://github.com/Tashima-Tarsh/Disha.git
cd Disha
npm install
```
Create a `.env` file in the root directory:

```bash
# .env
ANTHROPIC_API_KEY=your_anthropic_key   # For Claude LLM
OPENAI_API_KEY=your_openai_key         # For GPT-4o (optional)
NEO4J_URI=bolt://localhost:7687        # Graph DB (optional)
NEO4J_PASSWORD=your_password           # Graph DB (optional)
```
Note: Disha works with mock providers and open-source APIs — no paid keys required for core functionality.
The Disha CLI is built with Bun + TypeScript + React/Ink:
```bash
# Start the CLI
bun run src/entrypoints/cli.tsx

# List available tools
bun run src/entrypoints/cli.tsx --tools

# List available commands
bun run src/entrypoints/cli.tsx --commands
```
The AI platform runs 7 specialized agents orchestrated via FastAPI:
```bash
cd ai-platform/backend
pip install -r requirements.txt
uvicorn app.main:app --reload --port 8000
```
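FastAPI generates interactive API documentation automatically, which is the quickest way to verify the server is up. A small sketch deriving the docs URLs from the `uvicorn` command above (host and port are the defaults shown there):

```python
# FastAPI serves interactive docs at /docs and the raw OpenAPI schema at
# /openapi.json by default; host/port match the uvicorn command above.
host, port = "127.0.0.1", 8000
docs_url = f"http://{host}:{port}/docs"
schema_url = f"http://{host}:{port}/openapi.json"
print(docs_url)
```

Opening `docs_url` in a browser lists every endpoint in the table below with a "Try it out" form.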
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/investigate` | POST | Launch multi-agent investigation |
| `/api/v1/agents/osint` | POST | OSINT data collection |
| `/api/v1/agents/crypto` | POST | Blockchain analysis |
| `/api/v1/agents/detection` | POST | Anomaly detection |
| `/api/v1/agents/graph` | POST | Knowledge graph queries |
| `/api/v1/agents/reasoning` | POST | LLM-powered reasoning |
| `/api/v1/multimodal/vision` | POST | Image analysis |
| `/api/v1/multimodal/audio` | POST | Audio transcription |
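With the server running, an investigation is launched with a POST request. A minimal Python sketch; the payload fields (`query`, `agents`) are assumptions, not a documented schema, so check the interactive docs for the real request body:

```python
import json

# Hypothetical request body -- field names are assumptions; consult the
# FastAPI docs at http://localhost:8000/docs for the actual schema.
payload = {
    "query": "Trace fund flows for a flagged wallet",
    "agents": ["osint", "crypto", "graph"],
}
body = json.dumps(payload)
print(body)

# With the server running (and `requests` installed), the call would be:
#   requests.post("http://localhost:8000/api/v1/investigate", json=payload)
```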
Multi-agent reasoning with political, legal, ideological, and security analysis:
```bash
cd decision-engine
pip install -r requirements.txt

# Run with mock LLM (no model download needed)
DISHA_MODEL_PROVIDER=mock python main_decision_engine.py

# Run tests
DISHA_MODEL_PROVIDER=mock python -m pytest tests/ -v
```
Optional extras enable FAISS-based semantic retrieval and a local llama.cpp model in place of the mock provider:

```bash
pip install faiss-cpu sentence-transformers   # semantic retrieval (optional)
pip install llama-cpp-python                  # local GGUF inference (optional)
export DISHA_MODEL_PROVIDER=llamacpp
export DISHA_MODEL_PATH=/path/to/model.gguf
```
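The provider switch above reads roughly like this in Python; the fallback-to-mock behavior is an assumption inferred from the mock runs shown earlier, not a confirmed detail of the engine:

```python
import os

# Read the provider selection; fall back to the mock provider when unset.
# (The fallback is an assumption -- the real engine may require the variable.)
provider = os.environ.get("DISHA_MODEL_PROVIDER", "mock")
model_path = os.environ.get("DISHA_MODEL_PATH")  # only needed for llamacpp

if provider == "llamacpp" and not model_path:
    raise SystemExit("DISHA_MODEL_PATH must point to a .gguf model file")

print(provider)
```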
AI classifier and simulation engine for 32+ historical conflicts:
```bash
cd historical-strategy
pip install -r requirements.txt

# Start the API server
uvicorn api.main:app --reload --port 8001

# Train the classifier
python model/train.py

# Run a simulation
python simulation/engine.py
```
AI-powered honeypot stack with threat classification:
```bash
# Using Docker (recommended)
cd cyber-defense
docker-compose up -d

# Train the threat classifier
cd model
pip install torch --index-url https://download.pytorch.org/whl/cpu
python train.py
```
The Model Context Protocol server exposes Disha’s tools for AI assistants:
```bash
cd mcp-server
npm install
npm run dev     # Development mode
npm run build   # Build for production
npm start       # Production mode
```
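To use the server from an MCP-capable client (e.g. Claude Desktop), a config entry along these lines is typical. The server name, command, and build-output path below are assumptions; adjust them to the actual build layout:

```json
{
  "mcpServers": {
    "disha": {
      "command": "node",
      "args": ["/path/to/Disha/mcp-server/dist/index.js"]
    }
  }
}
```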
Live deployment: disha.vercel.app
Next.js dashboard for threat intelligence visualization:
```bash
cd web
npm install
npm run dev   # http://localhost:3000
```
```bash
# Train all models
python scripts/train_all.py

# Reinforcement Learning (PPO)
cd ai-platform/backend && python -m app.rl.train

# Graph Neural Networks
cd ai-platform/backend && python graph_ai/train.py

# Decision Engine
cd decision-engine && DISHA_MODEL_PROVIDER=mock python train.py
```
Disha supports continuous self-improvement:
```bash
# Offline mode (synthetic data)
python scripts/continuous_train.py --rounds 3 --offline

# Online mode (fetches from abuse.ch, arXiv, OEIS)
python scripts/continuous_train.py --rounds 3

# Single component
python scripts/continuous_train.py --rounds 3 --component rl
```
Real-time threat monitoring and self-healing:
```bash
# Run the full sentinel system
python scripts/sentinel/guardian.py

# Run tests
python -m pytest scripts/sentinel/test_sentinel.py -v
```
```bash
# TypeScript/JavaScript (Biome)
npx biome check src/                    # Check only
npx biome check --write src/            # Apply safe fixes
npx biome check --write --unsafe src/   # Apply all auto-fixes

# Python (flake8)
flake8 ai-platform/backend/ --max-line-length=120 --ignore=E501,W503,W504
flake8 decision-engine/ --max-line-length=120 --ignore=E501,W503,W504
```
| Issue | Solution |
|---|---|
| `bun` command not found | Install Bun: `curl -fsSL https://bun.sh/install \| bash` |
| Python import errors | Ensure you're in the correct subdirectory and have installed requirements |
| PyTorch not found | Install CPU-only: `pip install torch --index-url https://download.pytorch.org/whl/cpu` |
| Neo4j connection refused | Start Neo4j or set the `NEO4J_URI` environment variable |
| FAISS not available | Install: `pip install faiss-cpu` (optional dependency) |
| TypeScript errors | Some SDK types are generated at build time; run `bun run build` first |
Full documentation: WIKI.md · Architecture · LEARNING_LOG.md