Describe your objective
Write what you want in plain English — a feature, a refactor, a bugfix, or an entire module — and AutoForge breaks it into a parallel task plan, assigning each piece to a specialized AI coding agent.
Tell AutoForge what to build. It splits the work across multiple AI coding agents running in parallel — each one opening a GitHub PR you review and merge.
No prompt engineering degree required. No babysitting agents. No mystery black box.
Multiple AI agents work simultaneously on separate branches, each tackling its assigned task independently — collapsing days of serial work into hours.
By default, every completed unit of work opens a GitHub pull request — you review diffs, test results, and a plain-English summary before anything ships. Or skip GitHub entirely and let AutoForge build fully autonomously.
Every design decision traces back to one constraint: real code, real PRs, real repositories — not sandboxed demos.
Multiple AI coding agents work on your codebase simultaneously, each on its own branch — so implementation speed is no longer your bottleneck.
Every agent writes to a real branch in your real repository — branches, commits, pull requests, and CI checks all live in your existing GitHub workflow, with no proprietary storage or lock-in.
Connect your own OpenAI or Anthropic key and AutoForge calls it directly — no markup, no re-billing, every token visible in your provider dashboard.
Serve your entire team — or your customers — from one AutoForge instance, with workspaces fully isolated at the data and execution layer.
AutoForge has first-class integration with Claude Code and ChatGPT Codex — the two AI agents that consistently produce reviewable, mergeable output on real codebases.
The task plan is computed before any agent runs, stored as a structured plan, and executed in the right order — you can inspect, modify, and re-run any part without starting the whole objective over.
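As a rough sketch of what "a structured plan, executed in the right order" can mean: tasks carry explicit dependencies, and any task whose dependencies are complete is eligible to run in parallel. The field names and helper below are illustrative, not AutoForge's actual schema.

```typescript
// Hypothetical plan structure — field names are illustrative.
type Task = { id: string; goal: string; dependsOn: string[] };

const plan: Task[] = [
  { id: "schema", goal: "Add DB migration", dependsOn: [] },
  { id: "api", goal: "Expose REST endpoint", dependsOn: ["schema"] },
  { id: "ui", goal: "Wire up frontend form", dependsOn: ["api"] },
  { id: "docs", goal: "Update README", dependsOn: [] },
];

// Tasks whose dependencies are all done can be dispatched in parallel.
function runnable(plan: Task[], done: Set<string>): string[] {
  return plan
    .filter((t) => !done.has(t.id) && t.dependsOn.every((d) => done.has(d)))
    .map((t) => t.id);
}

console.log(runnable(plan, new Set())); // ["schema", "docs"]
console.log(runnable(plan, new Set(["schema"]))); // ["api", "docs"]
```

Because the plan is data rather than a one-shot prompt, re-running a single failed task is just re-dispatching its `id` with the same dependency set.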
Connect your own API key — OpenAI for Codex, Anthropic for Claude — and AutoForge uses it directly at your provider's published rates. We charge for orchestration, not compute.
Your API keys are stored encrypted at rest and are never transmitted to AutoForge servers during inference — requests go directly from the agent runtime to the model provider.
No AutoForge intermediary sits between you and the model APIs. If your key works in the provider's playground, it works in AutoForge.
// AutoForge config — your keys, your calls
export default {
  providers: {
    anthropic: {
      apiKey: process.env.ANTHROPIC_API_KEY,
      model: "claude-opus-4-5",
    },
    openai: {
      apiKey: process.env.OPENAI_API_KEY,
      model: "codex-mini",
    },
  },
  // Tokens go directly to the provider.
  // AutoForge never intercepts inference.
}

Works with the tools already in your stack.
Model costs go directly to your provider at their published rates. AutoForge charges for the orchestration layer — the planner, the agent runner, the GitHub integration, and the review workflow.
No credit card required
For solo developers kicking the tires. Full access to core orchestration on public and private repositories.
Billed monthly, cancel anytime
For indie hackers and small teams shipping continuously. Unlimited runs, higher parallelism, priority execution.
For engineering teams that need SSO, audit logs, custom model routing, and SLA-backed uptime.
All plans include BYOK (bring your own API keys). Model inference costs are always billed directly by your provider — AutoForge never adds a margin.
If something's still unclear after this, find us on Discord.
Join the waitlist and be first to run parallel AI agents on your real GitHub repositories.
No credit card required. Free tier available at launch.