How AI Assistants Are Moving the Security Goalposts

AI-based assistants, or "agents" (autonomous programs that have access to the user's computer, files, and online services, and can automate virtually any task), are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting security priorities for organizations, blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants is OpenClaw (formerly known as ClawdBot and Moltbot), which has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life: managing your inbox and calendar, executing programs and tools, browsing the Internet for information, and integrating with chat apps like Discord, Signal, Teams and WhatsApp.

Other, more established AI assistants like Anthropic's Claude and Microsoft's Copilot can also do these things, but OpenClaw isn't just a passive digital butler waiting for commands. Rather, it is designed to take the initiative on your behalf, based on what it knows about your life and its understanding of what you want done.

"The testimonials are remarkable," the AI security firm Snyk observed. "Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who've set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they're away from their desks."

You can probably already see how this experimental technology could go sideways in a hurry.

In late February, Summer Yue, the director of safety and alignment at Meta's "superintelligence" lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.
Summary: AI assistants such as OpenClaw are rapidly changing security priorities for organizations by taking autonomous action across users' digital lives. These tools blur the line between trusted co-worker and insider threat, presenting new risks that require careful management.
Key facts
- The rise of AI agents like OpenClaw is upending traditional security protocols.
- An agent with broad account access can behave like a trusted co-worker one moment and an insider threat the next.
- Recent research shows that many users expose these agents' web-based administrative interfaces to the Internet, a significant risk.
Why it matters
An assistant that can read mail, run programs, and message contacts on its own erodes the boundary between helpful automation and insider threat. That shift demands a reevaluation of current security protocols and carries profound implications for data privacy and business continuity.
Key metrics
- Servers exposed online: hundreds, according to Jamieson O'Reilly's findings shared on Twitter/X.
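The "hundreds of servers exposed online" figure often comes down to a single configuration detail: which address a local agent's web control panel binds to. A service listening on 127.0.0.1 is reachable only by processes on the same machine, while 0.0.0.0 answers on every network interface, including public ones. A minimal sketch of that distinction follows; the function name and categories are illustrative, not taken from OpenClaw's actual codebase or configuration:

```python
import ipaddress

def bind_exposure(bind_addr: str) -> str:
    """Classify how widely a service is reachable based on its bind address.

    This is a simplified model: real exposure also depends on firewalls,
    NAT, and whether the admin interface requires authentication.
    """
    # Wildcard addresses accept connections on every interface.
    if bind_addr in ("0.0.0.0", "::"):
        return "all interfaces (Internet-reachable unless firewalled)"
    ip = ipaddress.ip_address(bind_addr)
    if ip.is_loopback:
        return "loopback only (local processes on this machine)"
    if ip.is_private:
        return "private network (reachable by other devices on the LAN)"
    return "public address (directly Internet-reachable)"

# Examples:
print(bind_exposure("127.0.0.1"))   # loopback only
print(bind_exposure("0.0.0.0"))     # all interfaces
print(bind_exposure("192.168.1.5")) # private network
```

On Linux, `ss -tlnp` (or `netstat -an` elsewhere) shows the actual bind address of each listening service, which is the quickest way to verify an agent's panel is not answering beyond localhost.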