Goran Stimac

OpenClaw is powerful enough that security cannot be an afterthought.

The project’s own docs make that clear. It is designed around a personal-assistant trust model with one trusted operator boundary per gateway. That is the right baseline for most setups. If you need hostile-user isolation, the recommendation is to split gateways, credentials, and ideally OS users or hosts.

Keep that model in mind throughout this checklist: secure the operator boundary first, then the tools.

1. Keep The Gateway Private By Default

Start with loopback binding unless you have a real reason to expose the service.

If the gateway only needs to serve a local browser and local chat flows, there is no need to turn it into a public network service. The docs recommend loopback-first deployment and warn against unauthenticated exposure on broader interfaces.

If you need remote access, use a deliberate path such as a reverse proxy, Tailscale, or a tightly controlled network route. Do not publish the gateway directly to the public internet.
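As a sketch, a loopback-first setup might look like the fragment below. The key names (`gateway.bind`, `gateway.port`) are illustrative, not taken from the docs; the point is that the listen address stays on 127.0.0.1 unless you have deliberately put a proxy or Tailscale in front of it:

```json
{
  "gateway": {
    "bind": "127.0.0.1",
    "port": 8080
  }
}
```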

2. Use Strong Authentication

Gateway auth should be explicit. A long token or password is better than relying on convenience defaults.

The docs also note that gateway.auth.mode: "trusted-proxy" is an intentional identity-aware setup, not a shortcut to skip auth. If you use it, make sure the proxy is correctly configured and trusted. Otherwise, prefer token or password auth.

Keep credentials out of shared sync folders and back them with proper filesystem permissions.
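One concrete way to generate a strong token and store it with tight permissions. The file path here is illustrative; point `TOKEN_FILE` at wherever your config actually reads the token from:

```shell
# Generate a 64-hex-character random token with openssl (standard on most systems).
TOKEN_FILE="${TOKEN_FILE:-$HOME/.openclaw/gateway-token}"
mkdir -p "$(dirname "$TOKEN_FILE")"
umask 077                        # any file we create is born owner-only (600)
openssl rand -hex 32 > "$TOKEN_FILE"
chmod 600 "$TOKEN_FILE"          # belt and braces: owner read/write only
```

Generating the secret on the machine and never pasting it through chat or a synced folder keeps it inside the trust boundary.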

3. Lock Down Chat Access

OpenClaw gives you multiple layers of control over who can trigger the bot:

  1. DM pairing or allowlists.
  2. Group mention rules.
  3. Per-channel sender restrictions.

For personal use, pairing is the safest default. For shared rooms, require mentions and keep broad group access off unless everyone in the room is fully trusted.

If more than one person can message the bot, isolate DM sessions with per-channel-peer or a similar mode so you do not leak context across users.
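A hedged sketch of what that isolation might look like in config. The `per-channel-peer` value is the mode described above; the `session.dmScope` key path is a hypothetical name for illustration, so check the docs for the real one:

```json
{
  "session": {
    "dmScope": "per-channel-peer"
  }
}
```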

4. Reduce Tool Blast Radius

The biggest risk in any agentic system is not the chat prompt. It is what the agent can do after reading the prompt.

For that reason, start with a narrow tool profile. Keep exec, browser control, filesystem access, and plugin loading restricted until you know they are needed. If the assistant does not need write access, remove it. If it does not need browser access, do not grant it just because it is available.

The docs also recommend sandboxing for workflows that touch untrusted content.
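To make "narrow tool profile" concrete, a deny-by-default fragment might look like the sketch below. Every key name here (`tools.profile`, `exec.enabled`, `browser.enabled`, `filesystem.write`) is hypothetical and for illustration only; map the idea onto the actual schema in the docs:

```json
{
  "tools": {
    "profile": "minimal",
    "exec": { "enabled": false },
    "browser": { "enabled": false },
    "filesystem": { "write": false }
  }
}
```

The shape matters more than the names: start from everything off, then enable one capability at a time as a workflow demonstrates it needs it.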

5. Treat Untrusted Content As Hostile

Prompt injection is still a real problem.

The docs are explicit that even a private assistant can be influenced by untrusted content coming from emails, web pages, documents, attachments, or pasted logs. If the assistant can read that content and then act on it, you need guardrails.

That means:

  1. Use a read-only or narrow reader agent where possible.
  2. Keep browser, web search, and file tools off unless they are truly needed.
  3. Prefer strong models for any workflow that can touch tools.

6. Protect State On Disk

OpenClaw stores session data, config, credentials, and logs on disk. That is practical, but it means local filesystem access is part of the trust boundary.

The docs recommend 700 on directories and 600 on files under ~/.openclaw, and they warn against putting state in synced folders like Dropbox or iCloud when it contains secrets or transcripts.

If multiple users share the same host, use separate OS accounts or separate hosts.
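The 700/600 recommendation from the docs can be applied in one pass. This sketch assumes the default `~/.openclaw` state directory; override `STATE_DIR` if yours lives elsewhere:

```shell
# Tighten OpenClaw state to owner-only access, per the docs' 700/600 guidance.
STATE_DIR="${STATE_DIR:-$HOME/.openclaw}"
mkdir -p "$STATE_DIR"
find "$STATE_DIR" -type d -exec chmod 700 {} +   # directories: rwx for owner only
find "$STATE_DIR" -type f -exec chmod 600 {} +   # files: rw for owner only
```

Re-run it after restoring from backup or moving the directory, since copies often arrive with looser permissions.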

7. Run The Security Audit

The project provides an audit command for a reason.

openclaw security audit
openclaw security audit --deep
openclaw security audit --fix

Use it after changing config, enabling new channels, adding a proxy, or broadening tool access. Pay attention to findings around gateway exposure, browser control, filesystem permissions, group chats with exec access, and any permissive tool profiles.

8. Use A Dedicated Browser Profile

If browser control is enabled, keep the assistant on its own browser profile.

That prevents accidental access to personal logins, password managers, or daily-driver browsing state. It also makes debugging easier because the assistant’s browsing environment stays separate from yours.
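A minimal way to set that up, assuming a Chromium-family browser. The profile directory name is illustrative; `--user-data-dir` is a standard Chromium command-line switch:

```shell
# Create a dedicated, owner-only profile directory for the assistant's browser.
PROFILE_DIR="${PROFILE_DIR:-$HOME/.openclaw/browser-profile}"
mkdir -p "$PROFILE_DIR"
chmod 700 "$PROFILE_DIR"
# Then launch the assistant's browser against it, e.g.:
#   chromium --user-data-dir="$PROFILE_DIR"
```

Because the profile starts empty, the assistant's browser has no saved logins, cookies, or extensions to leak.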

Bottom Line

The safe OpenClaw pattern is simple: private gateway, strong auth, strict allowlists, narrow tools, real sandboxing, and separate trust boundaries when multiple people are involved.

If you want a short rule, use this one: never grant an AI assistant more access than its smallest workflow actually requires.

Official references: Security, Sandboxing, and Getting Started.
