Advanced AI users in China are enthusiastically testing Moltbot, the AI assistant that has gone viral in Silicon Valley, with cloud providers offering special packages for those who want to try it out as soon as possible.
Created by Austrian entrepreneur Peter Steinberger, Moltbot, an open-source AI agent, has captured the attention of the tech community with its ability to learn user habits, take control of devices and proactively complete tasks without requiring intervention at every step.
Originally called Clawdbot, the name was changed at the request of Anthropic, Claude’s owner. On Thursday, the bot was rebranded OpenClaw, according to Steinberger.
What is Moltbot (formerly Clawdbot)
At its core, Moltbot is a framework for local AI agents, not an AI model itself. It contains no intelligence of its own, but coordinates calls to large language models, such as Claude, and turns their results into actions on the user’s device, according to an analysis published on the official website of DoControl, a cybersecurity platform company.
It can be viewed as a command center that turns the responses of language models into actual execution.
It has locally stored long-term memory, can link tasks together, and can run continuously, not just in sessions. This fundamentally differentiates it from most third-party AI tools used today, which operate in the browser and “forget” everything at the closing of the tab.
This feature “always on” it’s part of the broader movement of AI agents — tools designed to operate independently without waiting for human prompting.
How Moltbot works
Moltbot is not a conversational assistant in the usual sense. It works as a self-contained software agent, integrated directly into the computing environment.
Once installed and configured, it can receive access to: its own system, control over the browser and active sessions, permissions to read and write local files, access to external services such as email, calendar or APIs (depending on what it is connected to), long-term memory that preserves the context even after restart.
Instead of answering point-by-point questions, Moltbot can observe, plan, and execute tasks on its own—using the language model as the reasoning layer and the local machine as the execution layer.
Security risks of Moltbot
Moltbot poses some major risks, especially when used on devices that touch corporate systems:
1.Extended access to data and privileges
Being installed directly on the user’s system, Moltbot inherits the same privileges and can reach confidential data, password or active sessions. In the event of an error or attack, the consequences can be extensive, including: data leaks, identity theft, compliance failures, and loss of action tracking.
2. Lack of security policies
There are no built-in protection mechanisms. Responsibility is completely transferred to the user, which can be risky for corporate laptops or regulated environments.
3. Risks of prompt injection
Moltbot can interpret hidden instructions in files or texts as legitimate commands. Thus, an attacker can enter malicious commands and the agent can execute them without anyone noticing, affecting connected devices or sensitive data.
Race for AI
Tech giants are now racing to develop AI assistants. Anthropic recently released Claude Cowork, an agent tool capable of organizing files and creating spreadsheets. Meta acquired Manus, an AI product originating in China that can automate social media posts and analyze resumes. None of these tools are available in China, reports the nonprofit press publication Rest of World.
Cybersecurity experts warn of privacy and security risks when rogue agents are given too much autonomy. Agents like Moltbot can send malicious emails and, worse, accidentally leak sensitive information or user credentials.