AI-Powered API Mocking

The API that builds itself

Stop writing JSON mocks. Focus on building features. Helix generates realistic API responses automatically using AI.

Built for modern development workflows

01

Zero Configuration

No setup required. Start Helix and make requests to any endpoint. AI analyzes paths and generates realistic responses instantly.

02

Context Awareness

Helix remembers your actions within a session. Create a user, then list users, and the response includes exactly what you created (see the sketch after this list).

03

Smart Data

Generates realistic data: proper names, valid email addresses, sensible dates, and plausible IDs. Your demos look real.

04

Multiple AI Providers

Choose from DeepSeek, Groq, Ollama, or built-in demo mode. Free tiers available with generous rate limits.

05

Chaos Engineering

Simulate production failures: random errors, latency spikes, and timeouts. Build resilient applications from day one (see the sketch after this list).

06

Redis Caching

Lightning-fast responses with intelligent caching. Session-aware storage keeps your data consistent.
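Here is the context-awareness flow from card 02 as a rough sketch. The endpoint matches the examples elsewhere on this page; the request body fields are illustrative, not a required schema.

# Create a user (the body fields here are illustrative)
$ curl -X POST http://localhost:8080/api/users \
    -H "Content-Type: application/json" \
    -d '{"name": "Ada Park", "email": "ada@company.com"}'

# List users in the same session - the generated list includes the user you just created
$ curl http://localhost:8080/api/users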
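And a sketch of chaos mode from card 05. How chaos is switched on is an assumption here, shown as a hypothetical HELIX_CHAOS variable; check the documentation for the real toggle. The point is what your client sees once failures are being injected.

# Hypothetical toggle - the actual switch may differ, see the docs
$ HELIX_CHAOS=true docker-compose up

# Some requests now fail or stall, so you can exercise retries and timeouts
$ curl -i --max-time 2 http://localhost:8080/api/orders
HTTP/1.1 503 Service Unavailable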

Zero configuration required

Start Helix and make a request to any endpoint. The AI analyzes your request structure and generates a realistic response that follows REST conventions.

Read Documentation →
# Start Helix with Docker
$ docker-compose up

# Make any request - no setup needed
$ curl http://localhost:8080/api/users
{
  "users": [
    {
      "id": "usr_a1b2c3",
      "name": "Sarah Chen",
      "email": "sarah@company.com",
      "username": "schen",
      "role": "developer",
      "status": "active",
      "created_at": "2024-12-10T14:30:00Z"
    }
  ],
  "total": 3,
  "page": 1,
  "per_page": 10
}

# No mock files. No configuration. Pure magic.
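The same holds for writes. POST a body of your own design and Helix shapes the response around it; the endpoint and fields below are made up for illustration, and the response shown is representative rather than exact output.

# POST an arbitrary resource - no schema was defined anywhere
$ curl -X POST http://localhost:8080/api/orders \
    -H "Content-Type: application/json" \
    -d '{"product_id": "prd_42", "quantity": 2}'
{
  "id": "ord_7f3k2m",
  "product_id": "prd_42",
  "quantity": 2,
  "status": "pending",
  "created_at": "2024-12-10T14:32:00Z"
}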

Up and running in 60 seconds

01

Clone & Setup

Clone the repository and run the setup script. It handles everything automatically.

git clone https://github.com/ashfromsky/helix
cd helix
./setup.sh
02

Start Services

Launch Redis and Helix using Docker Compose. Works on any platform.

docker-compose up
03

Make Requests

Hit any endpoint. Helix generates realistic responses automatically.

curl localhost:8080/api/products
# Instant realistic JSON!

Choose your intelligence layer

🎭 Demo Mode: template-based generation using the Faker library (default)

🧠 DeepSeek: via the OpenRouter API, free tier available (500 requests/day)

Groq: ultra-fast LPU inference with generous limits (14,400 requests/day)

🏠 Ollama: run AI models locally on your machine (works offline)
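Choosing a provider presumably comes down to configuration rather than code. The variable names below are assumptions made for illustration, not the documented settings; the pattern is simply to pick a provider and supply its API key or local endpoint.

# Hypothetical settings - consult the Helix docs for the real variable names
$ export HELIX_AI_PROVIDER=groq          # demo | deepseek | groq | ollama
$ export GROQ_API_KEY=your_key_here      # or OPENROUTER_API_KEY / OLLAMA_HOST
$ docker-compose up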
< 80ms response time
100% REST compatible
Any endpoint supported
0 config files needed

Ready to ship faster?