How to Install OpenClaw 2026.3.22 Locally on Windows, macOS, and Linux
OpenClaw is an open-source AI agent platform that runs on your own hardware. The 2026.3.22 release turns it into a full "agent operating system" with plugins, multi-model support, and stronger security.
This guide explains what OpenClaw 2026.3.22 is, how to install it locally, and how to use it in real work. It also covers benchmarks, pricing, and how it compares to other self-hosted AI tools.
What Is OpenClaw 2026.3.22
OpenClaw is a self-hosted AI agent runtime that runs on your machine and connects to chat apps, email, and other services through a gateway. You can think of it as an operating system for AI agents rather than a single chatbot.
Version 2026.3.22 adds a plugin marketplace called ClawHub, support for new models like MiniMax M2.7 and GPT-5.4 mini or nano, and major security and sandbox updates.
OpenClaw uses large language models (LLMs) as the "brain" of each agent. An LLM is a model that reads text and generates new text based on patterns it learned during training.
OpenClaw manages long-running sessions, tools, memory files, and cross-channel routing so agents can work across tasks while you stay in control of data.
Key Features
- ClawHub plugin marketplace: Built-in marketplace to search, install, and update skills and plugins with `openclaw skills` and `openclaw plugins` commands.
- Multi-model support: Built-in support for MiniMax M2.7, GPT-5.4 mini and nano, Anthropic Claude via Vertex, and many open-source models through providers like OpenRouter and Zhipu GLM.
- Per-agent reasoning controls: Different agents can use different reasoning depth and speed settings, instead of one global level for the whole system.
- Search and web tools: Integrations for Exa, Tavily, Firecrawl, and Firecrawl-based web fetch tools to browse and scrape sites from within agents.
- Sandboxed execution: Multiple sandbox backends including Docker-style containers, OpenShell, and SSH sandboxes with hardened exec and network policies.
- File-based memory: Agent memory stored as Markdown files on disk so you can inspect and edit what the agent "knows".
- Multi-channel gateway: One gateway service to connect agents with WhatsApp, Telegram, Slack, Discord, Email, WeChat, and more.
- ClawBox hardware option: Dedicated Jetson Orin Nano box that ships with OpenClaw pre-installed for plug-and-play local hosting.
How to Install or Set Up OpenClaw 2026.3.22 Locally
This section covers three main install paths: official script, npm package, and source from GitHub.
Note: The 2026.3.22 npm package has known issues with the Control UI and some plugins, especially the WeChat integration. Check current issue status before using that specific tag and consider `@latest` if maintainers have patched it.

Prerequisites and Hardware Requirements
Minimum tested specs for a stable OpenClaw node include a 4-core CPU, 8 to 16 GB RAM, and 20 to 40 GB of SSD storage. Linux (Ubuntu 22.04 or 24.04), macOS, and Windows with WSL are all used in community guides.
For the ClawBox device, OpenClaw runs on an NVIDIA Jetson Orin Nano with 8 GB memory and 512 GB NVMe storage.
Basic steps before installation:
- Update your system packages (for example, `sudo apt update && sudo apt upgrade -y` on Ubuntu).
- Install Node.js (LTS), pnpm or npm, Git, and Python if they are not present.
- Ensure you can reach the internet from the host to pull packages and models.
Method 1: Official Install Script (Recommended for Most Users)
- Open a terminal on Linux or macOS.
- Run the official install script:

```bash
curl -fsSL https://openclaw.ai/install.sh | bash
```

This script detects your OS, installs Node.js if needed, installs the `openclaw` CLI, and launches the onboarding wizard.
- Follow the on-screen onboarding steps, choose your default model provider, and accept security prompts.
- At the end, the script offers to install OpenClaw as a background daemon; accept if you want it to start on boot.
Method 2: Install via npm Global Package
- Ensure Node.js and npm are installed on your system.
- Install OpenClaw globally:bash
npm install-g openclaw@latest
The npm registry lists recent versions including 2026.3.13 and 2026.3.22-beta.1. - Run the onboarding and daemon install:bashopenclaw onboard --install-daemon
This command sets up the gateway service and guides you through model keys, channels, and basic security choices. - Confirm that the
openclawservice is active with the status command:bashopenclaw gateway status - If you need the 2026.3.22 tag for testing, pin the version carefully and verify known issues from the issue tracker and Reddit thread.
Method 3: Install from GitHub Source
This method gives full control and is useful for advanced setups or custom builds.
- Clone the repository:

```bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw
```

- Check out the 2026.3.22 tag if available:

```bash
git fetch --tags
git checkout 2026.3.22
```

The SourceForge mirror and release notes confirm this tag and describe its changes.
- Install dependencies using pnpm (preferred in many guides):

```bash
corepack enable
pnpm install
```

- Build the project:

```bash
pnpm build
```

- Run the onboarding flow from the local build:

```bash
pnpm openclaw onboard --install-daemon
```
A YouTube tutorial shows these steps for building from source and then calling the onboard command.
Special Case: ClawBox Appliance
If you use a ClawBox, OpenClaw comes pre-installed on an Orin Nano board. The usual steps are:
- Connect power and Ethernet to the ClawBox.
- Visit `http://clawbox.local` in your browser.
- Complete the setup wizard and log in to the web control UI.
How to Run or Use OpenClaw Locally
Once installed, OpenClaw runs as a background gateway plus one or more agents. You can control it from the CLI, the Control UI dashboard, or from chat channels like WhatsApp and Slack.
Starting and Stopping the Gateway
- To start or ensure the gateway is running:

```bash
openclaw gateway start
```

- To stop the service:

```bash
openclaw gateway stop
```

- To view status and logs:

```bash
openclaw gateway status
openclaw logs gateway
```
The refreshed dashboard in recent versions adds modular views for chat, config, agents, and sessions.
Running Your First Agent Session
- Open a terminal and run:

```bash
openclaw chat
```

This opens a local chat session with your default agent.
- Ask a question like "Summarize my latest email" after you have connected an email skill.
- The message flows through the gateway, which routes it to the correct agent, loads its memory, calls the chosen LLM, and returns a response.
- Use `/btw` for side questions that should not change the future session context.
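The gateway flow described above (route the message to an agent, load its memory, call the model) can be sketched as a toy function. All names here are illustrative, not OpenClaw's actual internals:

```python
from typing import Callable, Dict

def route_message(message: Dict[str, str],
                  agents: Dict[str, str],
                  load_memory: Callable[[str], str],
                  call_llm: Callable[[str, str, str], str]) -> str:
    """Toy gateway: pick the agent bound to the channel (or a default),
    load its memory, and hand both to the model."""
    agent = agents.get(message["channel"], agents["default"])
    memory = load_memory(agent)
    return call_llm(agent, memory, message["text"])

# Stub wiring for illustration only:
reply = route_message(
    {"channel": "email", "text": "Summarize my latest email"},
    {"email": "assistant", "default": "assistant"},
    load_memory=lambda agent: "notes.md contents",
    call_llm=lambda agent, memory, text: f"[{agent}] reply to: {text}",
)
# reply == "[assistant] reply to: Summarize my latest email"
```

The real runtime adds session state, tool calls, and channel adapters around this loop, but the shape of the round trip is the same.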
Using ClawHub Plugins and Skills
ClawHub in 2026.3.22 turns skills into first-class plugins.
Common commands include:
- Search plugins:

```bash
openclaw skills search email
```

- Install a ClawHub package:

```bash
openclaw plugins install clawhub:email-inbox
```

- List installed plugins:

```bash
openclaw plugins list
```

- Update skills and plugins:

```bash
openclaw skills update
openclaw plugins update
```
Skills define tools that agents can call, such as "read inbox", "create Jira issue", or "update calendar".
Connecting to Model Providers and Local Runtimes
During onboarding, you map agents to one or more model providers such as OpenAI GPT-5.4, Anthropic Claude, MiniMax, or open-source models via OpenRouter and Z.AI GLM.
Examples:
- Use GPT-5.4 mini as a fast default reasoning model, which one benchmark shows around 73 tokens per second compared with 46 tokens per second for a larger model on the same stack.
- Route heavy coding tasks to GLM-4.7 Flash running locally through vLLM or llama.cpp; community tests show 60 to 220 tokens per second on consumer GPUs depending on quantization.
For pure local models, many users attach Ollama or similar servers and then point OpenClaw to these endpoints. This keeps prompts and data on your hardware.
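As a sketch of what a local endpoint looks like on the model side, the snippet below builds a request against Ollama's public `/api/generate` endpoint. The default model name and host are assumptions for this sketch; the OpenClaw side of the wiring is set during onboarding.

```python
import json
from urllib import request

def build_ollama_request(prompt: str,
                         model: str = "llama3.1",
                         host: str = "http://localhost:11434") -> request.Request:
    """Build an HTTP request for Ollama's /api/generate endpoint.

    The payload shape follows Ollama's documented API; the default
    model name and host here are assumptions for this sketch.
    """
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return request.Request(f"{host}/api/generate",
                           data=payload.encode("utf-8"),
                           headers={"Content-Type": "application/json"})

req = build_ollama_request("Summarize my latest email")
# Actually sending it requires a running Ollama server:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```

Because the endpoint lives on localhost, prompts and completions never leave the machine, which is the point of the pure-local setup.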
Managing Tokens, Cost, and Session Health
OpenClaw sessions can consume many tokens because they send long histories, tool output, and a large system prompt. A tuning guide shows that context accumulation and tool output can account for over half of total token use.
Practical steps:
- Use cheaper models like GPT-5.4 mini, Claude Haiku, or local open-source models for routine tasks.
- Enable prompt caching for expensive models and keep the system prompt stable across requests.
- Limit Heartbeat intervals so background checks do not wake the agent every few minutes with full context.
- Monitor per-session cost and context usage with commands like `openclaw /status`.
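To make these cost levers concrete, here is a back-of-envelope estimator. The per-million-token prices and the cache discount are placeholders, not any provider's real rates:

```python
def request_cost(context_tokens: int, output_tokens: int,
                 price_in_per_m: float, price_out_per_m: float,
                 cached_fraction: float = 0.0,
                 cache_discount: float = 0.9) -> float:
    """Rough dollar cost of one request.

    Prices are per million tokens. cached_fraction is the share of input
    tokens served from the prompt cache; cache_discount is the assumed
    price reduction on those cached tokens (placeholder values).
    """
    cached = context_tokens * cached_fraction
    fresh = context_tokens - cached
    input_cost = (fresh + cached * (1 - cache_discount)) * price_in_per_m / 1e6
    output_cost = output_tokens * price_out_per_m / 1e6
    return input_cost + output_cost

# 50k-token context, 1k-token reply, illustrative $0.25 / $2.00 per million:
base = request_cost(50_000, 1_000, 0.25, 2.00)         # no cache
with_cache = request_cost(50_000, 1_000, 0.25, 2.00,
                          cached_fraction=0.8)          # 80% cache hits
```

With these placeholder numbers, caching 80 percent of a stable prompt prefix cuts the per-request cost by well over half, which is why keeping the system prompt unchanged matters.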
Benchmark Results
The table below gathers reported metrics from public benchmarks and vendor tests that match typical OpenClaw setups.
These numbers show that OpenClaw overhead is usually small compared with the model speed and network latency. The main performance drivers are model type, hardware, and whether the model runs locally or behind a remote API.
Testing Details
Performance tests for OpenClaw often use synthetic workloads, such as HTTP health checks or echo endpoints, to measure the gateway and sandbox overhead. A Tencent Cloud guide describes testing on a 2-core, 4 GB Lighthouse instance and tracking throughput, median latency, P95 and P99 latencies, and failure rates for different levels of concurrent users.
In those tests:
- Health-check endpoints reached 2,000 to 5,000 requests per second with sub-5 ms median latency and sub-20 ms P99 latency.
- Echo tests with GPT-4-class backends showed median latencies from 2.1 seconds at 10 users to 5.2 seconds at 100 users, with failure rates under 5 percent up to 100 concurrent users.
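The P95 and P99 figures quoted above are simply order statistics over the recorded latency samples. A minimal nearest-rank sketch (real load-testing tools usually interpolate, but the idea is the same):

```python
def percentile(samples: list, p: float):
    """Nearest-rank percentile: sort the samples and index at p percent
    of the way through the sorted list."""
    s = sorted(samples)
    k = round(p / 100 * (len(s) - 1))
    return s[k]

latencies_ms = list(range(1, 101))   # toy sample: 1..100 ms
p99 = percentile(latencies_ms, 99)   # 99
```

A P99 of 20 ms therefore means 99 percent of requests in the sample finished within 20 ms; the remaining 1 percent is where sandbox cold starts and network hiccups show up.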
Model benchmarks come from providers and independent testers:
- Analysts measure MiniMax M2.7 at about 52.8 tokens per second on its own API.
- GLM-4.7 Flash tests on H200 GPUs reach up to 4,398 tokens per second at high concurrency and around 207 tokens per second for a single user.
- OpenAI data and third-party reporting show GPT-5.4 mini at roughly 2 times the speed of GPT-5 mini and about 60 percent faster than one larger GPT-5.4 variant in real use.
Token usage tests for OpenClaw also examine how much of a large context window the session history consumes, and how Heartbeat jobs and system prompts add to cost. One optimization guide breaks token use into context accumulation, tool output, system prompt, multi-round reasoning, model choice, and cache misses.
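Context accumulation dominates because a session replays its whole history on every round. A toy model of that growth, with illustrative token counts:

```python
def tokens_sent(system_prompt: int, per_turn: int, rounds: int) -> int:
    """Total input tokens sent over a session that replays the full
    history plus the system prompt on every round."""
    total, history = 0, 0
    for _ in range(rounds):
        history += per_turn            # one more turn lands in the history
        total += system_prompt + history
    return total

# 2k-token system prompt, 500 tokens added per turn:
ten_rounds = tokens_sent(2_000, 500, 10)    # 47,500 input tokens
forty_rounds = tokens_sent(2_000, 500, 40)  # 490,000 input tokens
```

The history term grows quadratically with round count, which is why trimming session history and compacting old turns is one of the highest-leverage optimizations.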
Comparison Table: OpenClaw vs Alternatives
The tools below all help run AI agents or assistants locally but have different goals and designs.
Pricing Table
OpenClaw itself is open source. Costs come from hosting, model usage, and optional services.
Unique Selling Proposition (USP)
OpenClaw stands out by treating AI agents as long-lived, multi-channel workers that live on your machine, not as single chats inside a browser tab. The 2026.3.22 release deepens this view with ClawHub, bundled web and search tools, pluggable sandboxes, and model-agnostic routing, so you can mix remote APIs with local models while keeping memory, tools, and channels in one coherent runtime.
Pros and Cons
Pros
- Self-hosted architecture that keeps memory and logs on hardware you control.
- Rich multi-channel gateway that connects to chat apps, email, and more from one agent system.
- ClawHub marketplace and plugin SDK for installing and updating skills without manual wiring.
- Strong sandbox and security hardening across exec, hooks, network, and device pairing in 2026.3.22.
- Per-agent reasoning settings and support for modern models like GPT-5.4 mini, MiniMax M2.7, and GLM-4.x series.
Cons
- Install and configuration complexity higher than desktop chat apps; requires comfort with terminals and configs.
- Token usage is high by default; without optimization, API bills can rise fast on remote models.
- Some 2026.3.22 packages have regressions, such as broken Control UI or WeChat plugin builds, which need patches or workarounds.
- Long-running agents and Heartbeat jobs demand stable hardware and uptime, closer to running a small server than a desktop app.
Quick Comparison Chart
Demo or Real-World Example: Email and Calendar Assistant on a Local Node
This example describes a simple workflow for a personal assistant agent on a Linux VPS or home server.
1. Install OpenClaw
- Use the official script:

```bash
curl -fsSL https://openclaw.ai/install.sh | bash
```

- Complete onboarding, choose a default model like GPT-5.4 mini or Claude Haiku, and install the daemon.
2. Connect a Model and Optimize Costs
- Add API keys for OpenAI and Anthropic, or configure a local Ollama server and GLM-4.x model.
- For the main assistant agent, pick GPT-5.4 mini or a similar fast small model to keep latency and price low.
- For heavy research tasks, allow a larger model and enable prompt caching to cut token spend.
3. Install Email and Calendar Skills via ClawHub
- Search for skills:

```bash
openclaw skills search email
openclaw skills search calendar
```

- Install a packaged inbox skill and a calendar integration through ClawHub:

```bash
openclaw plugins install clawhub:email-inbox
openclaw plugins install clawhub:calendar-sync
```

- Configure OAuth or app passwords for your email and calendar services through the Control UI or config commands.
4. Define Agent Behavior and Memory
- Open the agent's configuration and memory Markdown files on disk.
- Write clear instructions for how it should sort email, label priorities, and schedule events.
- Keep the core system prompt stable to benefit from caching and predictable behavior.
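Prompt caches are typically keyed on the exact prompt prefix, so even a one-character edit to the system prompt forces a cache miss. A sketch of that keying idea (not OpenClaw's or any provider's actual scheme):

```python
import hashlib

def prompt_cache_key(model: str, system_prompt: str) -> str:
    """Illustrative cache key: any change to the system prompt yields a
    different key, invalidating the cached prefix computation."""
    return hashlib.sha256(f"{model}\x00{system_prompt}".encode()).hexdigest()

a = prompt_cache_key("gpt-5.4-mini", "You are a helpful email assistant.")
b = prompt_cache_key("gpt-5.4-mini", "You are a helpful email assistant!")
# a != b: one punctuation change misses the cache
```

This is why per-run instructions belong in the user message or memory files, not in edits to the core system prompt.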
5. Run the Workflow in Daily Use
- Keep the OpenClaw gateway running as a daemon on your server or ClawBox.
- Each morning, trigger a command or schedule a Heartbeat that asks the agent to "Review new email from the last 24 hours, mark urgent items, and propose a calendar plan."
- The agent reads your inbox via the email skill, writes notes into Markdown memory files, and proposes calendar entries through the calendar skill.
- You approve or adjust changes in your calendar and email client, while the agent keeps state between runs and refines behavior over time.
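Heartbeat frequency multiplies into token spend quickly, which is why a once-daily trigger is much cheaper than frequent background wake-ups. A rough model with illustrative numbers:

```python
def daily_heartbeat_tokens(interval_minutes: int, tokens_per_wake: int) -> int:
    """Input tokens consumed per day by a heartbeat that replays
    tokens_per_wake tokens of context at each wake-up."""
    wakes_per_day = (24 * 60) // interval_minutes
    return wakes_per_day * tokens_per_wake

# Assuming each wake replays ~3k tokens of system prompt and context:
every_5_min = daily_heartbeat_tokens(5, 3_000)    # 288 wakes -> 864,000 tokens
once_hourly = daily_heartbeat_tokens(60, 3_000)   # 24 wakes  ->  72,000 tokens
```

With these placeholder numbers, stretching the interval from five minutes to an hour cuts heartbeat token use twelvefold before any other optimization.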
This pattern shows how OpenClaw turns a set of tools and models into a stable automation agent that lives beside your regular apps, rather than inside a single chat window.
Conclusion
OpenClaw 2026.3.22 moves from a powerful tool to a full agent operating system, with plugins, sandboxes, and modern model support. Installing it locally requires some command-line work but rewards you with a self-hosted, multi-channel assistant that you control.
FAQ
1. Is OpenClaw 2026.3.22 stable enough for production?
The core gateway and agent runtime are mature, but the 2026.3.22 npm package has known Control UI and channel issues, so many users treat this version as an early upgrade and wait for patched builds for production.
2. Do I need a GPU to run OpenClaw?
OpenClaw runs fine on CPU-only servers, though you may need to rely on remote APIs or smaller local models. A GPU helps if you want fast, large local models like GLM-4.7 Flash or 8B-class Llama models.
3. Can I run OpenClaw only with local models and no external API calls?
Yes. Many users pair OpenClaw with local engines like Ollama or vLLM and attach open-source models, which keeps data and tokens on their own hardware.
4. How does OpenClaw compare with tools like CrewAI or LangChain?
CrewAI and LangChain are Python frameworks for building workflows inside applications, while OpenClaw is an always-on agent OS with gateways, sandboxes, and channels. They can work together, for example by calling CrewAI-based tools from OpenClaw skills.
5. What are the main ways to reduce OpenClaw token costs?
The most effective steps are using small, fast models for routine work, enabling prompt caching, trimming session history, controlling Heartbeat frequency, and routing heavy tasks to cheaper or local models, which one guide reports can reduce costs by around 95 percent in tuned setups.