
Quick Start Guide

Get up and running with IVExES in under 10 minutes. This guide walks you through the essential steps to install, configure, and run your first vulnerability analysis.

🚀 1-Minute Setup

The fastest way to get IVExES running:

# Clone and setup everything
git clone https://github.com/LetsDrinkSomeTea/ivexes.git
cd ivexes
make setup

# Add your API key
echo "LLM_API_KEY=your_openai_api_key_here" > .secrets.env

# Start analyzing!
python examples/20_mvp_screen.py

📋 Prerequisites

Before starting, ensure you have:

  • Python 3.12+ - Check with python --version
  • Docker - Check with docker --version
  • Git - Check with git --version
  • OpenAI API Key - Get one from OpenAI Platform

Quick Check

Run this command to verify all prerequisites:

python --version && docker --version && git --version
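If you prefer to check from Python, here is a minimal stdlib sketch; the `meets_minimum` helper is illustrative, not part of IVExES:

```python
import sys

def meets_minimum(major: int, minor: int) -> bool:
    """Return True if the running interpreter is at least major.minor."""
    return sys.version_info[:2] >= (major, minor)

# IVExES requires Python 3.12+
print('Python OK' if meets_minimum(3, 12) else 'Python too old')
```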

🛠️ Installation

Option 1: Automated Setup (Recommended)

The make setup command handles everything:

git clone https://github.com/LetsDrinkSomeTea/ivexes.git
cd ivexes
make setup

This will:

  • ✅ Build Docker images (nvim-lsp, kali-ssh)
  • ✅ Install Python dependencies with uv
  • ✅ Start LiteLLM proxy server
  • ✅ Initialize vector databases

Option 2: Manual Setup

For more control over the process:

# 1. Clone repository
git clone https://github.com/LetsDrinkSomeTea/ivexes.git
cd ivexes

# 2. Install dependencies
make sync  # or: uv sync --all-extras --all-packages --group dev

# 3. Build containers
make build-images  # or: docker compose --profile images build

# 4. Start services
make run-litellm  # or: docker compose up -d

🔑 Configuration

Essential Configuration

Create your API key configuration:

# Create secrets file (never commit this!)
cat > .secrets.env << EOF
LLM_API_KEY=your_openai_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
LLM_BASE_URL=http://localhost:4000/v1
EOF
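To sanity-check the file you just wrote, here is a small stdlib sketch that parses simple KEY=VALUE lines and verifies the required keys are present; this is an illustrative helper, not the loader IVExES uses internally:

```python
from pathlib import Path

REQUIRED_KEYS = {'LLM_API_KEY', 'OPENAI_API_KEY', 'LLM_BASE_URL'}

def load_env_file(path: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, _, value = line.partition('=')
        env[key.strip()] = value.strip()
    return env

if Path('.secrets.env').exists():
    missing = REQUIRED_KEYS - load_env_file('.secrets.env').keys()
    print('All required keys present' if not missing else f'Missing: {sorted(missing)}')
```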

Basic Settings (Optional)

Create a .env file for additional settings:

cat > .env << EOF
MODEL=openai/gpt-4o-mini
REASONING_MODEL=openai/o4-mini
TEMPERATURE=0.3
MAX_TURNS=10
LOG_LEVEL=INFO
EOF

✅ Verification

Verify your installation is working:

1. Check Services

# Verify LiteLLM proxy is running
curl http://localhost:4000/health/liveliness

# Check Docker containers
docker compose ps
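The same liveliness check can be scripted from Python; a stdlib sketch (the endpoint path matches the curl command above, the helper itself is illustrative):

```python
import urllib.request
import urllib.error

def proxy_is_up(url: str = 'http://localhost:4000/health/liveliness',
                timeout: float = 3.0) -> bool:
    """Return True if the LiteLLM proxy answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

print('LiteLLM proxy up' if proxy_is_up() else 'LiteLLM proxy unreachable')
```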

2. Test Import

python -c "import ivexes; print('✅ IVExES imported successfully')"

3. Run Test Suite

make tests

🎯 Your First Analysis

Example 1: MVP Agent (Quickest Start)

The MVP Agent provides a minimal viable analysis:

import asyncio
from ivexes.agents import MVPAgent
from ivexes.config import PartialSettings

# Basic configuration
settings = PartialSettings(
    model='openai/gpt-4o-mini',
    max_turns=10
)

# Create and run agent
agent = MVPAgent(settings=settings)
asyncio.run(agent.run_interactive())

Save this as my_first_analysis.py and run:

python my_first_analysis.py

Example 2: Single Agent with Codebase

For analyzing actual code vulnerabilities:

import asyncio
from ivexes.agents import SingleAgent
from ivexes.config import PartialSettings

settings = PartialSettings(
    model='openai/gpt-4o-mini',
    codebase_path='/path/to/your/project',
    vulnerable_folder='vulnerable-version',
    patched_folder='patched-version',
    max_turns=15
)

agent = SingleAgent(settings=settings)
asyncio.run(agent.run_interactive())

Example 3: Using Pre-built Examples

Run one of the included examples:

# MVP analysis example
python examples/20_mvp_screen.py

# Single agent analysis
python examples/60_single_agent_screen.py

# Multi-agent orchestration
python examples/70_multi_agent_screen.py

🗨️ Interaction Modes

IVExES supports three execution modes:

Interactive Mode

await agent.run_interactive()

  • ✅ Best for exploration and learning
  • ✅ Real-time conversation with the agent
  • ✅ Type exit, quit, or q to end

Streaming Mode

async for chunk in agent.run_streamed():
    print(chunk, end='')

  • ✅ Real-time output as analysis progresses
  • ✅ Good for monitoring long-running analyses
  • ✅ Integrates well with web interfaces

Synchronous Mode

result = agent.run()
print(result)

  • ✅ Simple one-shot analysis
  • ✅ Best for scripting and automation
  • ✅ Returns complete analysis result
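How the three modes relate can be sketched with a toy agent; the method names mirror the IVExES API above, but the internals here are purely illustrative:

```python
import asyncio
from typing import AsyncIterator

class ToyAgent:
    """Illustrative stand-in showing how the modes typically relate."""

    async def run_streamed(self) -> AsyncIterator[str]:
        # Streaming mode: yield output piece by piece as it is produced.
        for chunk in ('scanning', ' -> ', 'done'):
            await asyncio.sleep(0)  # pretend work
            yield chunk

    def run(self) -> str:
        # Synchronous mode: collect the whole stream into one result.
        async def collect() -> str:
            return ''.join([chunk async for chunk in self.run_streamed()])
        return asyncio.run(collect())

print(ToyAgent().run())  # → scanning -> done
```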

🎨 Customization

Model Selection

Choose different models for different tasks:

settings = PartialSettings(
    model='openai/gpt-4o',           # More capable but slower
    reasoning_model='openai/o4-mini', # For complex reasoning
    temperature=0.1,                  # More deterministic
)

Analysis Scope

Configure what gets analyzed:

settings = PartialSettings(
    codebase_path='/path/to/project',
    vulnerable_folder='v1.0-vulnerable',
    patched_folder='v1.1-patched',
    setup_archive='/path/to/setup.tgz',  # Optional setup files
)

Vector Database

Enable enhanced knowledge base searching:

settings = PartialSettings(
    embedding_provider='openai',        # or 'builtin', 'local'
    embedding_model='text-embedding-3-large',
    chroma_path='/custom/db/path',      # Optional custom path
)

🔧 Common Workflows

1. CVE Analysis

from ivexes.agents import SingleAgent

# Analyze a specific CVE
agent = SingleAgent()
# In interactive mode, ask:
# "Analyze CVE-2024-12345 and explain the vulnerability"

2. Code Diff Analysis

from ivexes.agents import SingleAgent
from ivexes.config import PartialSettings

settings = PartialSettings(
    codebase_path='/path/to/project',
    vulnerable_folder='before-patch',
    patched_folder='after-patch'
)
agent = SingleAgent(settings=settings)
# Ask: "What vulnerability was fixed in the patch?"

3. CTF Challenge

from ivexes.agents import HTBChallengeAgent
from ivexes.config import PartialSettings

settings = PartialSettings(model='openai/gpt-4o-mini')
agent = HTBChallengeAgent(
    challenge_name='buffer_overflow_basic',
    settings=settings
)

🐛 Troubleshooting

Common Issues

❌ "ModuleNotFoundError: No module named 'ivexes'"

# Reinstall dependencies
make sync
# or
uv sync --all-extras --all-packages --group dev

❌ "Connection refused" when contacting LiteLLM

# Restart LiteLLM service
docker compose restart litellm
# Check if port 4000 is available
lsof -i :4000
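If lsof is unavailable, the same port check can be done from Python with the stdlib (illustrative helper, not part of IVExES):

```python
import socket

def port_in_use(port: int, host: str = '127.0.0.1') -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        return sock.connect_ex((host, port)) == 0

print(f'Port 4000 in use: {port_in_use(4000)}')
```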

❌ "Docker permission denied"

# Add user to docker group
sudo usermod -aG docker $USER
# Log out and log back in for the change to take effect

❌ "API key not found"

# Verify .secrets.env exists and contains your key
cat .secrets.env
# Ensure the file is in the project root directory

Getting Help

  1. Check logs: docker compose logs litellm
  2. Run diagnostics: make tests
  3. Review configuration: See Configuration Guide
  4. Search issues: Check GitHub Issues

🎓 Learning Path

Now that you're up and running, here's your learning path:

Beginner (First Hour)

  1. ✅ Complete this quickstart guide
  2. 📖 Read Usage Guide for basic workflows
  3. 🔍 Explore Examples for practical use cases

Intermediate (First Day)

  1. 🏗️ Understand Architecture
  2. ⚙️ Master Configuration options
  3. 🤖 Learn about different Agent Types

Advanced (First Week)

  1. 🛠️ Study Development Guide
  2. 🔌 Explore API Reference for all components
  3. 🚀 Build custom agents and tools

💡 Pro Tips

Performance Tip

Start with gpt-4o-mini for faster responses, then upgrade to gpt-4o for complex analyses.

Cost Optimization

Set MAX_TURNS=5 for initial testing to limit API usage.

Security Note

Never commit .secrets.env to version control. Add it to .gitignore.
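One way to enforce this (appends the entry only if it is not already listed):

```shell
# Add .secrets.env to .gitignore unless it is already present
grep -qxF '.secrets.env' .gitignore 2>/dev/null || echo '.secrets.env' >> .gitignore
```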

Debugging

Set LOG_LEVEL=DEBUG in your .env file for detailed troubleshooting information.

🚀 Next Steps

Choose your path:

Ready for your first real analysis? Head to the Usage Guide to learn core workflows and best practices.