
The production runtime
for agentic products.

An embeddable Rust library that enables your agents to run, learn, and adapt. Securely.

What is systemprompt?

You built something smart. But smart isn't shippable. systemprompt adds everything your AI needs to serve real users: auth, isolation, storage, orchestration. Focus on the intelligence, not the infrastructure.

1. Create — Deploy a blog agent in 30 seconds
Authenticating and connecting with https://yourdomain.com
systemprompt admin agents create --name blog --port 9010
Agent created: blog
systemprompt admin agents edit blog --skill research_content
systemprompt admin agents edit blog --skill content_create
systemprompt admin agents edit blog --mcp-server content-manager
systemprompt admin agents validate blog
Config ✓ | Skills ✓ | MCP servers ✓ | OAuth scopes ✓
systemprompt admin agents edit blog --enable
Agent live → yourdomain.com/api/v1/agents/registry
2. Execute — The agent writes, you approve, it publishes
Authenticating and connecting with https://yourdomain.com
systemprompt admin agents message blog -m "Write about MCP auth" --blocking
Agent: What's your contrarian angle?
→ Most tutorials overcomplicate it
Agent: Good angle. Researching...
→ research_content: 5 sources found
→ content_create: 3,847 words drafted
systemprompt infra jobs run publish_content
Live → https://yourdomain.com/blog/mcp-auth
3. Learn — The agent improves from its own metrics
Authenticating and connecting with https://yourdomain.com
systemprompt analytics content stats
/blog/mcp-auth → 1,247 views | 4m 32s avg | 23% bounce
systemprompt analytics agents show blog
Success rate: 94% | Avg latency: 12s | Cost: $0.43/post
systemprompt analytics costs breakdown --by agent
blog: $4.30 (10 posts) | research: $1.20 (15 queries)
Agent: Bounce rate high on mobile. Updating playbook.
systemprompt core playbooks sync --direction to-db -y
Playbook updated → content_blog

EMBEDDED RUST LIBRARY

A 50MB binary with everything: complete AI infrastructure you compile into your project.

  • Complete stack in one binary
  • Production auth built in
  • MCP hosting, secured
  • Extensible at compile time
  • Observability without instrumentation
Learn more →

UNIFIED CONTROL PLANE

The same commands work for you and your AI agents. Local or remote. Every action audited.

  • Superagent-ready interface
  • Transparent remote routing
  • Eight unified domains
  • Complete audit visibility
  • Config-driven operations
Learn more →

THE CLOSED LOOP

Your agents can query their own performance and adapt in real-time. The feedback loop is built in.

  • MCP tool calls → PostgreSQL → CLI queries
  • A2A self-analysis in real-time
  • Behavioral pattern recognition
  • Dynamic context & skill modification
  • Production audit trail with trace IDs
Learn more →
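The loop above can be sketched in plain Rust. Everything here is illustrative: `AgentStats`, the thresholds, and the decision rule are stand-ins, not library types; only the numbers come from the demo transcript.

```rust
// Illustrative model of the feedback loop: an agent reads its own
// metrics and decides whether its playbook needs revising.
// All types, names, and thresholds here are hypothetical.

struct AgentStats {
    views: u32,
    bounce_rate_pct: f64,  // share of visitors leaving immediately
    success_rate_pct: f64, // completed runs / total runs
}

/// Mirror of the "bounce rate high -> update playbook" step:
/// flag the playbook for revision when engagement or reliability slips.
fn needs_playbook_update(stats: &AgentStats) -> bool {
    stats.bounce_rate_pct > 20.0 || stats.success_rate_pct < 90.0
}

fn main() {
    // Numbers taken from the demo transcript above.
    let blog = AgentStats { views: 1247, bounce_rate_pct: 23.0, success_rate_pct: 94.0 };
    println!("{} views", blog.views);
    if needs_playbook_update(&blog) {
        println!("sync playbook: content_blog");
    }
}
```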

Get started in minutes

1

Clone

Create your project from the GitHub template

gh repo create my-ai --template systempromptio/systemprompt-template --clone
2

Build

Build the CLI binary (offline mode for first build)

SQLX_OFFLINE=true cargo build --release
3

Login

Authenticate with systemprompt.io

systemprompt cloud auth login
4

Profile

Create a local or cloud deployment profile

systemprompt cloud profile create local
5

Run

Start all services locally

just start

Machine-Native Guides Written by agents. For agents.

Point your super agent to a playbook and watch the magic happen. Deterministic. Self-repairing. Executable.

Framework vs. Runtime

AI Frameworks (LangChain, CrewAI)

  • Chains, agents, tools
  • Your API keys
  • Single-user by default
  • Auth: build it yourself
  • Multi-tenant: build it yourself
  • 2-4 weeks on auth alone

systemprompt.io (runtime)

  • Agents, skills, MCP servers, playbooks
  • BYOK (OpenAI, Gemini, Anthropic)
  • Multi-user by default
  • Auth: OAuth2/OIDC + WebAuthn, built in
  • Multi-tenant: built in
  • Production auth from day one

Built on open standards

Your binary. Your rules. We're just a library.

No vendor lock-in. Production-grade from day one.

Your extensions. Your binary.

systemprompt.io is a library you control, not a platform that controls you.

Import the SystemPrompt library into your Rust project. Compile your binary—you own it. Extensions are your code: Rust wrappers where you implement your logic, your integrations, your features.

Extension

The base trait. Identity, version, dependencies.

View documentation →
impl Extension for MyExtension {
    fn id(&self) -> &'static str { "my-extension" }
    fn name(&self) -> &'static str { "My Extension" }
}
register_extension!(MyExtension);

ApiExtension

Add HTTP routes. Full Axum power.

View documentation →
impl ApiExtension for MyExtension {
    fn router(&self, ctx: &ExtensionContext) -> Option<Router> {
        Some(Router::new()
            .route("/items", get(list_items).post(create_item))
            .with_state(ctx.clone()))
    }
}

SchemaExtension

Add database tables. Migrations included.

View documentation →
impl SchemaExtension for MyExtension {
    fn schemas(&self) -> Vec<SchemaDefinition> {
        vec![SchemaDefinition::new(
            "my_items",
            include_str!("../schema/items.sql")
        )]
    }
}

JobExtension

Background jobs. Cron scheduling.

View documentation →
impl JobExtension for MyExtension {
    fn jobs(&self) -> Vec<Arc<dyn Job>> {
        vec![Arc::new(CleanupJob)]
    }
}
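The `CleanupJob` referenced above might look like the following. This is a sketch under assumptions: the real `Job` trait ships with the library, so the stand-in trait here only illustrates the shape of a cron-scheduled background job.

```rust
// Hypothetical sketch of a background job. The stand-in `Job` trait
// below only illustrates the shape; the library defines the real one.

trait Job {
    fn name(&self) -> &'static str;
    /// Cron expression controlling when the job runs.
    fn schedule(&self) -> &'static str;
    fn run(&self);
}

struct CleanupJob;

impl Job for CleanupJob {
    fn name(&self) -> &'static str { "cleanup" }
    fn schedule(&self) -> &'static str { "0 3 * * *" } // nightly at 03:00
    fn run(&self) {
        // Real cleanup logic (expired sessions, stale rows) would go here.
        println!("cleanup complete");
    }
}

fn main() {
    let job = CleanupJob;
    println!("{} runs at '{}'", job.name(), job.schedule());
    job.run();
}
```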

ProviderExtension

Custom LLM providers. Your models.

View documentation →
impl ProviderExtension for MyExtension {
    fn llm_providers(&self) -> Vec<Arc<dyn LlmProvider>> {
        vec![Arc::new(MyCustomProvider)]
    }
}

Compile-time discovery

Extensions register via the inventory crate. Compile-time discovery. No runtime reflection.
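A dependency-free sketch of the idea: the registry is a compile-time constant of constructors, so adding an extension is a code change, never runtime reflection. The library itself does this with the `inventory` crate (`inventory::submit!` / `inventory::iter`); the const slice below is a std-only stand-in.

```rust
// Std-only illustration of compile-time extension discovery.
// The real mechanism is the `inventory` crate; a const slice of
// constructor functions plays the same role here so the example
// runs without dependencies.

trait Extension {
    fn id(&self) -> &'static str;
}

struct MyExtension;
impl Extension for MyExtension {
    fn id(&self) -> &'static str { "my-extension" }
}

fn make_my_extension() -> Box<dyn Extension> {
    Box::new(MyExtension)
}

// Fixed at compile time: registering an extension means adding a
// constructor here, not reflecting over types at runtime.
const REGISTRY: &[fn() -> Box<dyn Extension>] = &[make_my_extension];

fn main() {
    for make in REGISTRY {
        println!("registered: {}", make().id());
    }
}
```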

Library, not platform

You own the binary

Compile once, deploy anywhere. No runtime dependencies on us.

Library dependency

Add systemprompt.io as a Cargo dependency. Pin your version. Update when ready.
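As a sketch, assuming the crate is published under the name `systemprompt` (the version below is a placeholder; take the real coordinates from the template repo):

```toml
[dependencies]
# Placeholder version - pin an exact release ("=x.y.z") once chosen.
systemprompt = "0.1"
```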

If it compiles, it works

Rust's type system catches errors at compile time. No runtime surprises.

Extensions stay private

Build proprietary logic on open core. We never see your code.

Ship YOUR product

White-label ready. Your domain. Your brand.

Pricing & Licensing

Source-available under the FSL-1.1-ALv2 license (free during early access). Bring your own API keys: works with OpenAI, Gemini, and Anthropic. All execution is completely sandboxed. We only charge for cloud hosting, because real machines cost real money.

Free

Free

FSL license, self-hosted

  • Self-hosted (just needs PostgreSQL)
  • BYOK (OpenAI, Gemini, Anthropic)
  • All core features
  • Unlimited agents & MCP servers
  • Completely sandboxed
  • Community support
Download Template

Cloud

From $29/mo

Managed hosting, variable resources

  • One-click deploy
  • Managed PostgreSQL
  • Automatic backups
  • Your tenant URL
  • Scale resources as needed
  • Email support
Start Free Trial

Licensed Enterprise

Contact Us

Private infrastructure & business licensing

  • Install on your private infrastructure
  • Volume licensing
  • Dedicated support
  • SLA guarantee
  • Custom integrations
Contact Sales

Frequently asked questions

What's the difference between systemprompt.io and Claude/ChatGPT?

Claude, ChatGPT, and Clawdbot are single-player AI—they work for YOU. systemprompt.io is multiplayer AI infrastructure—it makes AI work for YOUR USERS. We don't compete with your AI; we're the runtime that lets you ship it as a product.

Can I self-host everything?

Yes. systemprompt.io compiles to a single binary—just point it at any PostgreSQL database. Run it on bare metal, in a VM, or containerized. The self-hosted version includes all core features. Cloud hosting is available for teams who want managed infrastructure.

What MCP clients are supported?

Any MCP client works with systemprompt.io: Claude Code, Claude Desktop, ChatGPT, and any tool that speaks the Model Context Protocol. Our servers use HTTP-native transports supported by modern clients.

How does authentication work?

systemprompt.io uses OAuth2/OIDC for API authentication and WebAuthn for passwordless user login. Every MCP tool call and agent interaction is authenticated and authorized against scoped permissions.
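A minimal sketch of the scope check this implies. The `Caller` type and scope strings are assumptions for illustration, not the library's API:

```rust
// Illustrative scope check for an MCP tool call. Scope names and the
// Caller type are hypothetical; the real authorization layer is
// driven by the library's OAuth2 scopes.
use std::collections::HashSet;

struct Caller {
    scopes: HashSet<String>,
}

impl Caller {
    fn new(scopes: &[&str]) -> Self {
        Self { scopes: scopes.iter().map(|s| s.to_string()).collect() }
    }

    /// A tool call is authorized only if the token carries the
    /// scope that the tool declares it requires.
    fn may_call(&self, required_scope: &str) -> bool {
        self.scopes.contains(required_scope)
    }
}

fn main() {
    let caller = Caller::new(&["agents:read", "content:write"]);
    println!("publish allowed: {}", caller.may_call("content:write"));
    println!("admin allowed: {}", caller.may_call("admin:all"));
}
```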

Is this open source?

systemprompt.io is source-available under FSL-1.1-ALv2 (Functional Source License). You can view, modify, and self-host the code. After 2 years, each version converts to Apache 2.0.

SHIP AI TO YOUR USERS

You bring the intelligence. We handle the multiplayer infrastructure.

Get Started Free