TL;DR
Security concerns are valid. But for developers, the better choice is not to avoid AI tools entirely; it's to use them carefully while building up your security skills.
Why the Risks Aren't Visible

Data breaches never stop. In 2025 alone, SKT leaked 23 million SIM card records, Coupang exposed 33.7 million customer records, and Lotte Card had sensitive financial data stolen, including card numbers and CVCs. Once leaked, data circulates endlessly on the dark web through resale after resale, turning personal information into something that has to be managed continuously.
That's probably why, whenever AI agent tools like OpenClaw come up, someone almost always asks, "Is the security okay?" It's an understandable concern. But it's a shame that so many people avoid these tools entirely because of it.
So, Should You Use AI Tools or Not?
The ChatGPT Era Was Similar
I mentioned OpenClaw as an example, but this isn't a new reaction. The mood was similar when ChatGPT first launched. In 2023, just 20 days after Samsung Electronics' semiconductor division (DS) allowed ChatGPT use, there were two source code leaks and one leak of internal meeting content. Samsung then banned generative AI outright, and many companies followed suit. Plenty of individuals refused to use it for security reasons, too.
What about now? Most companies have created guidelines and are actively using it. Not using it now means falling behind the competition.
Two Choices for Developers

From a developer's perspective, it comes down to two choices:
A. Use carefully → Accumulate experience and insights + improve productivity
B. Don't use at all → Eliminate the security risk at the source, but also cut off growth opportunities
You could tally that as two points for and two against, but honestly it's a bit of a forced comparison; flip the framing and the logic works just as well. Ultimately, it's a matter of mindset.
Security Concerns Are Growth Opportunities

When you operate AI tools yourself, you naturally start asking security questions. On the network side, you check whether any ports are open unnecessarily and whether IAM permissions follow the principle of least privilege. On the AI side, you check for prompt injection vulnerabilities and whether the agent can reach sensitive files like .env.
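To make that concrete, here's a minimal self-audit sketch in Python: it probes a handful of local ports to see what's listening and flags secret-looking files inside the directory the agent is allowed to read. The port list, the agent-workspace path, and the filename patterns are my own illustrative assumptions, not anything OpenClaw actually defines.

```python
import socket
from pathlib import Path

# Ports that local dev services and agent gateways often listen on.
# This list is an assumption for illustration, not OpenClaw's defaults.
PORTS_TO_CHECK = [3000, 8080, 11434, 18789]

# Hypothetical directory the agent is allowed to read.
AGENT_WORKDIR = Path.home() / "agent-workspace"

SENSITIVE_NAMES = {".env", "credentials.json", "id_rsa"}

def check_open_ports(host: str = "127.0.0.1") -> list[int]:
    """Return the subset of PORTS_TO_CHECK that accept TCP connections."""
    open_ports = []
    for port in PORTS_TO_CHECK:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

def find_sensitive_files(root: Path) -> list[Path]:
    """List files under the agent's workspace that look like secrets."""
    if not root.exists():
        return []
    return [p for p in root.rglob("*") if p.name in SENSITIVE_NAMES]

if __name__ == "__main__":
    for port in check_open_ports():
        print(f"[!] Port {port} is listening locally; make sure it isn't exposed publicly.")
    for path in find_sensitive_files(AGENT_WORKDIR):
        print(f"[!] Sensitive file within the agent's reach: {path}")
```

Even a toy script like this forces the right questions: what is actually listening on this machine, and what can the agent actually see?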
The process of thinking through these concerns builds security competency. Even if you want security experience at work, those opportunities rarely come unless you're on the infrastructure team. Running AI tools in personal projects and thinking about their security is a way of creating that opportunity for yourself.
Furthermore, knowledge built this way carries over directly to operating real company services or to outside consulting. Conversely, being thrown into production without this experience means operating with vulnerabilities you can't even identify.
High Interest, Low Actual Usage
The comments on a recent OpenClaw-related YouTube video were full of security worries from Korean developers: "I'm worried about security," "What if company data leaks?" Interest is clearly high, but actual usage in Korea is reportedly minimal compared to the rest of the world.
Of course, OpenClaw's security problems are real. In ZeroLeaks' automated red team testing, it scored 2 out of 100 on security, with a 91% prompt injection success rate and an 84% prompt extraction rate. Shodan scans found hundreds of instances exposed without authentication. Andrej Karpathy warned that Moltbook was "a complete mess of a computer security nightmare at scale" and later said not to run it on your own computer. Heather Adkins, Google Cloud's VP of Security Engineering, flatly said "Don't run Clawdbot."
Given those numbers, "why would you use that?" is a natural reaction. But using this tool is exactly what pushed me to study security more. You start thinking about how to secure a service with this architecture, which settings are dangerous, and what to block. Running it locally on a dedicated laptop, I naturally picked up practices like port management, sandboxing, and privilege minimization.
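One habit I mean by "privilege minimization": never hand the agent process the same environment my shell has. Here's a rough sketch of that idea, assuming a hypothetical `agent` CLI (not a real OpenClaw command); the variable-name markers are likewise illustrative.

```python
import os
import subprocess

# Environment variable name fragments that usually indicate credentials.
SECRET_MARKERS = ("KEY", "TOKEN", "SECRET", "PASSWORD", "AWS_", "GCP_")

def scrubbed_env() -> dict[str, str]:
    """Copy the current environment, dropping anything that looks like a secret."""
    return {
        name: value
        for name, value in os.environ.items()
        if not any(marker in name.upper() for marker in SECRET_MARKERS)
    }

def run_agent(cmd: list[str]) -> int:
    """Run the agent command with only the scrubbed environment."""
    result = subprocess.run(cmd, env=scrubbed_env())
    return result.returncode

if __name__ == "__main__":
    # "agent" is a hypothetical CLI used purely for illustration.
    run_agent(["agent", "--workdir", os.path.expanduser("~/agent-workspace")])
```

Combined with a dedicated laptop or a container, this limits what a prompt-injected agent can exfiltrate even when it does follow the wrong instruction.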
The point isn't to avoid it because it's dangerous; it's to use it while understanding exactly where the danger lies.
Just Yesterday, Moltbook Got Hacked

A perfect case study dropped right as I was writing this. Moltbook, the social network for OpenClaw agents, got hacked. Security firm Wiz's analysis found that Row Level Security (RLS) wasn't configured on the Supabase database, exposing 1.5 million API keys, 35,000 email addresses, and even private messages between agents. Anyone could read and write to the entire DB without authentication.
What's even more shocking is what the Moltbook founder said: "I didn't write a single line of code myself. AI made it all." A service built entirely through vibe coding was missing a single security setting, and the whole platform was breached.
This isn't just an OpenClaw ecosystem problem. It shows that the faster you build with AI, the more security becomes entirely your responsibility. Even if AI writes the code, checking that RLS is actually configured and that API keys aren't exposed in client code is still on the human.
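And that check is scriptable in a few lines. The sketch below, with the project URL, anon key, and table name as placeholders, asks a Supabase project's auto-generated REST endpoint for rows using only the public anon key; if rows come back, that table is readable by anyone on the internet, which is essentially what happened to Moltbook.

```python
import requests  # third-party: pip install requests

# Placeholders: substitute your own project ref, public anon key, and table name.
SUPABASE_URL = "https://YOUR_PROJECT_REF.supabase.co"
ANON_KEY = "YOUR_PUBLIC_ANON_KEY"
TABLE = "profiles"

def table_publicly_readable(table: str) -> bool:
    """Return True if the table returns rows with only the public anon key."""
    resp = requests.get(
        f"{SUPABASE_URL}/rest/v1/{table}",
        params={"select": "*", "limit": 1},
        headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
        timeout=10,
    )
    # With RLS enabled and no public policy, this returns an empty list;
    # rows coming back means anyone holding the anon key can read the table.
    return resp.ok and len(resp.json()) > 0

if __name__ == "__main__":
    if table_publicly_readable(TABLE):
        print(f"[!] '{TABLE}' is readable without authentication; check its RLS policies.")
    else:
        print(f"'{TABLE}' did not return rows with the anon key alone.")
```

Running a probe like this against every table before launch takes minutes; skipping it is how 1.5 million API keys end up public.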
Not Using It Is Safe, But You Also Lose the Learning Opportunity
This isn't about ignoring security. It's the opposite. It's about using AI tools as an opportunity to think about how to do security better in the AI era and improve related skills.
Not using it is safe, but the learning opportunity disappears with it.