Jarvis is a self-hosted AI platform that unifies local model inference, multi-agent orchestration, and workflow automation — all running on your own hardware with zero cloud lock-in.

Quick Setup

Get Jarvis running on your hardware in minutes.

Your First Agent

Spawn your first AI agent and delegate a task.

Brain Mesh

Explore the distributed compute layer powering Jarvis.

API Reference

Integrate Jarvis into your workflows via the REST API.

What Jarvis does

Jarvis connects five components into a unified AI operating environment:
  • Brain Mesh — 5 nodes with dedicated GPU compute, orchestrated through LiteLLM
  • Model Fleet — 10+ local models (Llama, Mistral, DeepSeek, and more) running via Ollama
  • Agent Teams — Paperclip and Hermes multi-agent systems with specialized roles (CEO, CTO, Engineer, Writer)
  • MCP Ecosystem — 526+ tools including ServiceNow integrations and custom Jarvis tools
  • n8n Automation — 57+ workflows for infrastructure management, monitoring, and deployment
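Since the model fleet runs via Ollama, a component can talk to it over Ollama's local HTTP API. The sketch below builds a request body for Ollama's `/api/generate` endpoint; the host, port (11434 is Ollama's default), and model name are assumptions to adapt to your own deployment, and the prompt is purely illustrative.

```python
import json

# Assumed deployment details; adjust to your own node and model fleet.
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks Ollama for a single JSON response instead of
    a stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}

body = build_generate_request("llama3", "Summarize the brain mesh status.")
print(json.dumps(body))
# To send it, POST this body to OLLAMA_URL, e.g. with urllib.request
# or any HTTP client; LiteLLM can route the same call across nodes
# behind an OpenAI-compatible endpoint.
```

In practice the brain mesh fronts these calls with LiteLLM, so clients can target one OpenAI-compatible URL while LiteLLM picks the node and model.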

Get started

1. Set up your environment. Follow the setup guide to configure your nodes and connect to the brain mesh.
2. Spawn your first agent. Use the first agent guide to delegate a task to Paperclip or Hermes.
3. Explore integrations. Connect your workflows using n8n, MCP tools, or the REST API.

Jarvis runs entirely on your own infrastructure. No data leaves your environment unless you explicitly configure external integrations.
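As a sketch of the n8n integration path, the workflow below wires a webhook trigger to an HTTP Request node. The node type names (`n8n-nodes-base.webhook`, `n8n-nodes-base.httpRequest`) are standard n8n built-ins, but the Jarvis API URL and webhook path are hypothetical placeholders, and exact parameter names can vary between n8n versions.

```json
{
  "name": "Notify Jarvis on deploy (sketch)",
  "nodes": [
    {
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "parameters": { "path": "jarvis-deploy", "httpMethod": "POST" },
      "position": [250, 300]
    },
    {
      "name": "Call Jarvis API",
      "type": "n8n-nodes-base.httpRequest",
      "parameters": {
        "url": "http://jarvis.local/api/agents/tasks",
        "method": "POST"
      },
      "position": [500, 300]
    }
  ],
  "connections": {
    "Webhook": {
      "main": [[{ "node": "Call Jarvis API", "type": "main", "index": 0 }]]
    }
  }
}
```

Importing a JSON file like this into n8n yields a two-node workflow: an inbound webhook that forwards its trigger to an HTTP call against your Jarvis API.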