How to Set Up a Local AI Agent on Your Own Server
OpenClaw is an open source AI orchestrator that turns Telegram into a personal AI interface. Here's how I set it up on bare metal and why I haven't touched a web UI since.
Contents
- What OpenClaw Is
- Why I Chose It
- What Claude Did to Install It
- 1. Install OpenClaw globally via npm
- 2. Run onboarding with the daemon flag
- 3. Configure the main settings file
- 4. Create a Telegram bot via BotFather
- 5. Restart the daemon
- 6. Pair your Telegram account
- One Thing to Watch
- Choosing a Model
- How I Use It Day to Day
- What’s Next
I run SEO campaigns across multiple sites, monitor rankings daily, and manage automation scripts on a homeserver. For a long time the AI layer was disconnected — I’d open a browser, paste context, get a response, close the tab. Every session started from zero.
That changed when I set up OpenClaw. Now I get morning briefings on Telegram, ranking alerts the agent can immediately investigate, and a system that knows my projects without me re-explaining them. Clicks and rankings have improved month over month since, not because OpenClaw does SEO, but because I now act on data instead of waiting to find time to go look for it.
This is how I set it up, what Claude did to get it running, and what it looks like in practice.

What OpenClaw Is
OpenClaw is an open source AI orchestrator. The official description is “multi-channel AI gateway with extensible messaging integrations” — which is accurate but doesn’t tell you much. Here’s the practical version.
You install it on your server. It runs as a background daemon, always on. You connect it to Telegram (or WhatsApp, Slack, Discord — your choice). From that point, your AI agent lives in your messaging app. You send it a message, it responds. No browser, no web UI, no monthly subscription to a chat interface.
The gateway listens on your machine's local loopback interface (127.0.0.1:18789), so it isn't exposed to the network. Every message you send through Telegram routes through the gateway to the agent. The agent has a persistent workspace — files that carry context between sessions: who you are, what your projects are, what it's learned about how you work. That workspace is loaded into every conversation.
This is what separates it from a web-based AI tool. Most web interfaces are stateless — each session starts fresh. OpenClaw’s agent accumulates context over time. The longer it runs, the more useful it gets.
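Because the gateway binds to loopback only, you can sanity-check that it's up without touching the OpenClaw CLI at all. A minimal sketch (the port number is the default from the config shown later; run it on the server itself):

```python
import socket

def gateway_listening(host: str = "127.0.0.1", port: int = 18789) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Loopback-only bind means this check only works from the server itself.
    print("gateway up:", gateway_listening())
```

If this prints False while the daemon claims to be running, the gateway likely failed to start or is bound to a different port.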
It supports multiple LLM providers out of the box: Anthropic, Google, OpenRouter, Ollama, and OpenAI. You’re not locked into one model or one bill. I run a free OpenRouter model as my primary and route heavier tasks to Claude or Gemini when needed.
Why I Chose It
Free and open source. That was the starting point. I wasn’t going to pay a monthly subscription for an AI interface when I already have a server running 24/7.
But the Telegram integration is what made it the obvious choice. I already use Telegram for alerts from my SEO monitoring scripts — ranking drops, index status changes, morning summaries. Having the AI agent in the same app meant I could go from receiving an alert to asking the agent to investigate it without switching tools or opening a browser.
I also didn’t want to depend on a web UI. Web interfaces require a browser, go down occasionally, and create a separation between where your alerts live and where your AI lives. With OpenClaw, both are in Telegram. One app, one workflow.
What Claude Did to Install It
I didn’t run through this manually. Claude handled the full install over SSH — I reviewed the steps and approved the writes.
Here’s what the install process looks like. If you’re running a capable model, this is what you’d hand off:
1. Install OpenClaw globally via npm
npm install -g openclaw@latest
Requires Node.js 22.16 or higher; Node 24 is recommended. My machine was already on Node 22, which worked fine.
2. Run onboarding with the daemon flag
openclaw onboard --install-daemon
The --install-daemon flag is what makes it always-on. On Linux it registers a systemd user service. On macOS it uses launchd. After this OpenClaw starts automatically on boot and stays running in the background — you don’t have to think about it again.
3. Configure the main settings file
This is where Claude did the most work. The config file covers the gateway port, channel settings, model providers, and plugin allowlist. The key sections:
{
  "channels": {
    "telegram": {
      "botToken": "YOUR_BOT_TOKEN_HERE",
      "enabled": true,
      "dmPolicy": "pairing",
      "allowFrom": ["YOUR_TELEGRAM_USER_ID"]
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "openrouter/minimax/minimax-m2.5:free"
      }
    }
  },
  "models": {
    "providers": {
      "ollama": { "baseUrl": "http://127.0.0.1:11434" },
      "anthropic": { "baseUrl": "https://api.anthropic.com" },
      "openrouter": { "baseUrl": "https://openrouter.ai/api/v1" }
    }
  },
  "plugins": {
    "allow": ["ollama", "anthropic", "google", "openrouter", "browser", "telegram"]
  }
}
4. Create a Telegram bot via BotFather
This is the one step you do manually. Open Telegram, search @BotFather, send /newbot, follow the prompts. You get a bot token — a long string that looks like 8765838008:AAG1aAvk.... That token goes into openclaw.json under channels.telegram.botToken.
Your Telegram user ID goes in allowFrom. You can find it by messaging @userinfobot on Telegram.
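If you'd rather not message a third-party bot, you can get your numeric ID from the Bot API itself: send your bot any message, then call the getUpdates method and read the sender ID out of the response. A minimal sketch of the parsing step, assuming the standard Bot API getUpdates response shape:

```python
import json

def sender_ids(get_updates_json: str) -> set[int]:
    """Extract the numeric Telegram user IDs from a getUpdates response body."""
    payload = json.loads(get_updates_json)
    ids = set()
    for update in payload.get("result", []):
        msg = update.get("message")
        if msg and "from" in msg:
            ids.add(msg["from"]["id"])
    return ids

# Example response body, as returned by
# https://api.telegram.org/bot<TOKEN>/getUpdates after you message the bot
sample = '{"ok": true, "result": [{"update_id": 1, "message": {"from": {"id": 123456789}, "text": "hi"}}]}'
print(sender_ids(sample))  # → {123456789}
```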
5. Restart the daemon
openclaw daemon restart
6. Pair your Telegram account
Send your bot a message. OpenClaw’s default security model is dmPolicy: "pairing" — unknown senders get a pairing code before the agent responds. You approve it with:
openclaw pairing approve telegram <code>
After that, you’re paired. The bot responds to your messages from that point on.
If you’re running a weaker model that can’t handle multi-file config work, do this in steps — install first, get the daemon running, then tackle the Telegram config separately. The install itself is one command. The config is where you need judgment.
One Thing to Watch
The plugins.allow list in openclaw.json must explicitly include every plugin you want active — including telegram. If telegram is missing, the Telegram channel silently fails to start. The daemon reports it’s running, but your bot won’t respond and there’s no obvious error.
This caught me once. The Ollama TUI inside OpenClaw rewrites openclaw.json when you change models through it, and it drops telegram from the allowlist. Symptom: bot goes quiet after a model change. Fix: add "telegram" back to plugins.allow and restart the daemon.
Always check plugins.allow first if your bot stops responding.
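Since this failure mode is silent, it's worth a preflight check before any daemon restart. A small sketch; the config path is an assumption, so point it at wherever your openclaw.json actually lives:

```python
import json
from pathlib import Path

REQUIRED_PLUGINS = {"telegram"}  # add any other plugins your setup depends on

def missing_plugins(config_text: str, required: set[str] = REQUIRED_PLUGINS) -> set[str]:
    """Return the required plugins absent from plugins.allow."""
    config = json.loads(config_text)
    allowed = set(config.get("plugins", {}).get("allow", []))
    return required - allowed

if __name__ == "__main__":
    # Assumed config location; adjust to match your install.
    path = Path.home() / ".openclaw" / "openclaw.json"
    if path.exists():
        gone = missing_plugins(path.read_text())
        if gone:
            print("WARNING: missing from plugins.allow:", gone)
```

Run it after anything rewrites the config (like the Ollama TUI) and before you restart the daemon.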
Choosing a Model
OpenClaw supports multiple providers so you’re not forced into one. My current setup:
- Primary: openrouter/minimax/minimax-m2.5:free. Handles general queries, completely free, good enough for most daily tasks.
- Heavier reasoning: Claude Haiku 4.5 via the Anthropic API.
- Recurring automation, like trend alerts: kimi-k2.5:cloud via Ollama, running locally on my homeserver (no API cost at all).
The free OpenRouter tier covers the majority of day-to-day use. You only route to paid models when the task demands it. This keeps the running cost close to zero for most months.
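The routing itself is just a decision rule. As an illustration only (the model identifiers for the paid and local tiers are assumptions, and the tiering criteria are simplified for the sketch):

```python
# Hypothetical task-to-model router mirroring the tiered setup described above.
FREE = "openrouter/minimax/minimax-m2.5:free"   # default for everyday queries
PAID = "anthropic/claude-haiku-4.5"             # assumed identifier; check provider docs
LOCAL = "ollama/kimi-k2.5:cloud"                # homeserver model, no API cost

def pick_model(task: str, needs_reasoning: bool = False, recurring: bool = False) -> str:
    """Route recurring automation locally, hard tasks to the paid tier,
    and everything else to the free tier."""
    if recurring:
        return LOCAL
    if needs_reasoning:
        return PAID
    return FREE
```

The point of the shape: the paid model is only ever reached by an explicit opt-in, so the default cost of a message is zero.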
One thing to avoid: Gemini via the OpenAI-compatible endpoint returns 400 errors on this setup. If you’re using Gemini, route it through the native Google provider in openclaw.json, not via the OpenAI compatibility layer.
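In config terms, that means a dedicated google entry under models.providers instead of pointing an OpenAI-style provider at Gemini. A sketch; the key name and base URL here are assumptions, so check the OpenClaw provider docs for the exact shape:

```json
"models": {
  "providers": {
    "google": { "baseUrl": "https://generativelanguage.googleapis.com" }
  }
}
```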
How I Use It Day to Day
Setting up OpenClaw is also what prompted me to build out my SEO monitoring scripts properly. Before this, I wasn’t running automated monitoring at scale. Once I had Telegram as a reliable delivery layer, it made sense to build scripts that push data directly there.
The key design decision: the scripts deliver raw GSC data straight to Telegram via the bot API — not through the OpenClaw agent. That’s intentional. When I want accurate ranking numbers and click metrics, I don’t want an AI in the middle potentially summarizing or rounding figures. The script pulls from the API, formats it, and sends it. What I get is exact data.
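The shape of such a script is small: pull rows, format plain text, POST to the Bot API sendMessage endpoint. A minimal sketch; the row fields assume the Search Console API's query response shape, and the token/chat ID are placeholders:

```python
import urllib.parse
import urllib.request

def format_report(rows: list[dict]) -> str:
    """Render GSC query rows as a plain-text message. No AI in the loop,
    so the numbers arrive exactly as the API returned them."""
    lines = ["GSC daily report"]
    for row in rows:
        lines.append(f"{row['keys'][0]}: pos {row['position']:.1f}, {row['clicks']} clicks")
    return "\n".join(lines)

def send_telegram(token: str, chat_id: str, text: str) -> None:
    """POST the message straight through the Telegram Bot API."""
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    urllib.request.urlopen(f"https://api.telegram.org/bot{token}/sendMessage", data=data)

rows = [{"keys": ["best ssd 2026"], "position": 4.2, "clicks": 31}]
print(format_report(rows))
```

Wire send_telegram to your real bot token and chat ID and the same bot that runs the agent becomes a dumb pipe for exact data.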
OpenClaw’s agent comes in on top of that. Once the raw data lands in Telegram, I can ask the agent to interpret it, spot patterns, or suggest what to act on. That’s where AI adds value — commentary and analysis, not data retrieval. It keeps the cost down too, since the agent only runs when I actually need it, not on every data pull.
Every day at 8:45am a morning summary lands in Telegram: rankings, clicks, index status, anything that moved overnight. I read it the same way I read a message from anyone else. If something needs a closer look, I ask the bot directly. The agent has context on my sites, scripts, and repo — so it’s working from the same data I am, not making things up.
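The schedule itself is a plain cron entry on the server; the interpreter path and script name below are placeholders for whatever your summary script is:

```
# m  h   dom mon dow  command
45   8   *   *   *    /usr/bin/python3 /home/user/scripts/morning_summary.py
```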
That shift — from going to look for information to having it come to you — changes how much you actually act on it. Which is why the SEO numbers have moved consistently month over month. Not because the agent is doing SEO, but because I stopped missing signals.
What’s Next
In part 2 I’ll show the full SEO monitoring workflow built on top of this — the scripts, the cron jobs, the Telegram alerts, and how the agent turns raw GSC data into actual decisions. That’s where the real case study is.
The foundation is what this post covers: a self-hosted agent, always on, connected to your data, living in the app you already use. Everything else builds on top of that.