Run OpenClaw (previously known as Moltbot) locally on a Razer Blade 18 to monitor tasks in real time and automate workflow actions in the background. Install and configure OpenClaw, then use Razer AIKit to enable low-latency, on-device AI with no cloud dependency. The Blade 18 is built for sustained heavy workloads, keeping performance stable while data stays local and setup remains minimal – so the hardware does the work while you build.
Run the code below for Docker installation:
# Add Docker's official GPG key:
sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "$VERSION_CODENAME")
Components: stable
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/docker.asc
EOF
# Install the Docker packages:
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Next, run the code below for Node.js installation:
# Download and install nvm:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
# in lieu of restarting the shell
\. "$HOME/.nvm/nvm.sh"
# Download and install Node.js:
nvm install 24
Run the code below to configure the NVIDIA Container Toolkit for rootless Docker:
nvidia-ctk runtime configure --runtime=docker --config=$HOME/.config/docker/daemon.json
systemctl --user restart docker
sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
Run the code below to verify the Docker and Node.js installations:
docker -v
node -v
npm -v
Run the code below to create a local Hugging Face cache directory and start the Razer AIKit container:
mkdir -p $HOME/.cache/huggingface
docker run -it \
--restart=unless-stopped \
--gpus all \
--ipc host \
--network host \
--mount type=bind,source=$HOME/.cache/huggingface,target=/home/Razer/.cache/huggingface \
razerofficial/aikit:latest
Once inside the container, run:
rzr-aikit model download openai/gpt-oss-20b
Notes:
This guide uses openai/gpt-oss-20b. You can also try other models like DeepSeek-R1-Distill-Qwen-7B or Meta-Llama-3-8B, available on Hugging Face.
If your selected model exceeds the memory capacity of a single Razer Blade 18, use Razer AIKit, our open-source solution, to optimize the load across a cluster of GPUs. Further scale your local AI workstation with our Razer eGPU and Forge AI Dev Workstation to increase GPU resources for higher-level inferencing.
Run the code below to start the model:
rzr-aikit model run openai/gpt-oss-20b
Note: If running on WSL, include the flag --gpu-memory-utilization 0.8
To confirm the model is running correctly, execute:
rzr-aikit model generate "Tell me about Telegram Bot - what is it?"
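The model server exposes an OpenAI-compatible API, which is what OpenClaw connects to in the configuration later in this guide. As an additional sanity check you can query it directly. The snippet below is a minimal sketch, assuming the server listens on the default port 8000 at the standard /v1/chat/completions path with a placeholder local API key; it builds and prints the request payload, with the actual network call left commented out:

```python
import json

# Assumptions: the AIKit model server speaks the standard OpenAI
# chat-completions protocol on http://127.0.0.1:8000/v1 (the same
# baseUrl used in the OpenClaw config later in this guide).
BASE_URL = "http://127.0.0.1:8000/v1"

payload = {
    "model": "openai/gpt-oss-20b",
    "messages": [
        {"role": "user", "content": "Tell me about Telegram Bot - what is it?"}
    ],
    "max_tokens": 256,
}
body = json.dumps(payload)
print(body)

# To actually send the request (requires the model server to be running):
# import urllib.request
# req = urllib.request.Request(
#     f"{BASE_URL}/chat/completions",
#     data=body.encode(),
#     headers={"Content-Type": "application/json",
#              "Authorization": "Bearer sk-local"},
# )
# print(urllib.request.urlopen(req).read().decode())
```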
Run the code below to install OpenClaw globally via npm:
npm install -g openclaw@latest
Verify the installation by running:
openclaw
Run the code below to create the OpenClaw configuration file:
mkdir -p ~/.openclaw
cat > ~/.openclaw/openclaw.json <<'EOF'
{
"models": {
"mode": "merge",
"providers": {
"vllm": {
"baseUrl": "http://127.0.0.1:8000/v1",
"apiKey": "sk-local",
"api": "openai-responses",
"models": [
{
"id": "openai/gpt-oss-20b",
"name": "GPT OSS 20B (Local)",
"contextWindow": 120000,
"maxTokens": 8192
}
]
}
}
},
"agents": {
"defaults": {
"model": { "primary": "vllm/openai/gpt-oss-20b" }
}
},
"gateway": {
"mode": "local",
"auth": {
"mode": "token",
"token": "razerai"
}
}
}
EOF
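A stray comma or unquoted key in this file will stop the gateway from reading it, so it is worth validating the JSON before moving on. The helper below is a hypothetical sketch (not part of OpenClaw) that parses the config text and reads back the default primary model:

```python
import json

def primary_model(config_text: str) -> str:
    """Parse an openclaw.json payload and return the default primary model."""
    try:
        cfg = json.loads(config_text)
        return cfg["agents"]["defaults"]["model"]["primary"]
    except (json.JSONDecodeError, KeyError) as exc:
        return f"invalid config: {exc!r}"

# Usage against the file created above:
#   import os
#   print(primary_model(open(os.path.expanduser("~/.openclaw/openclaw.json")).read()))
sample = '{"agents": {"defaults": {"model": {"primary": "vllm/openai/gpt-oss-20b"}}}}'
print(primary_model(sample))
```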
Run the code below to start the gateway:
openclaw gateway --allow-unconfigured
After the gateway starts, open your browser and go to: http://localhost:18789?token=razerai
Back in the terminal, add the following field to ~/.openclaw/openclaw.json:
{
  ...
  "channels": {
    "telegram": {
      "enabled": true,
      "botToken": "YOUR_TELEGRAM_TOKEN",
      "dmPolicy": "pairing"
    }
  }
}
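If you prefer not to hand-edit the file, the channels block can be merged in programmatically. The sketch below mirrors the field names from the fragment above; the helper name is our own and the bot token is a placeholder:

```python
import json

def add_telegram_channel(cfg: dict, bot_token: str) -> dict:
    """Merge a Telegram channel entry into an OpenClaw config dict."""
    cfg.setdefault("channels", {})["telegram"] = {
        "enabled": True,
        "botToken": bot_token,
        "dmPolicy": "pairing",
    }
    return cfg

cfg = add_telegram_channel({}, "YOUR_TELEGRAM_TOKEN")
print(json.dumps(cfg, indent=2))

# To update the real config in place:
#   import os
#   path = os.path.expanduser("~/.openclaw/openclaw.json")
#   with open(path) as f:
#       existing = json.load(f)
#   with open(path, "w") as f:
#       json.dump(add_telegram_channel(existing, "YOUR_TELEGRAM_TOKEN"), f, indent=2)
```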
Run the following to check gateway status and approve the Telegram pairing:
openclaw status
openclaw pairing approve telegram
Note: OpenClaw is an experimental tool and not directly affiliated with Razer. Please ensure your environment is properly secured. Razer is not liable for data loss or issues arising from its use.