Another cautionary tale about AI has hit social media. This time, a software company’s founder is claiming that a Claude-powered version of AI coding tool Cursor deleted his entire production database in just nine seconds.
Jer Crane is the founder of PocketOS, a company that develops software primarily for car rental companies. In a post that’s garnered 6.5 million views on X, Crane alleged that a perfect storm of Cursor acting without permission and Railway, his company’s infrastructure provider, improperly storing backups led to massive data loss.
Where things went wrong
According to Crane, Cursor was working on a routine task when “it encountered a credential mismatch and decided—entirely on its own initiative—to ‘fix’ the problem by deleting a Railway volume.”
From there, the AI agent found an API token that enabled it to run the “volumeDelete” command and wipe the production database. Crane wrote that because Railway stores volume backups within the same volume, PocketOS had to fall back to a three-month-old backup to stay operational.
Crane stressed that his team was using the most advanced version of Cursor available, powered by Anthropic’s latest Claude model, Opus 4.6.
When Crane pressed the AI agent for an explanation, it admitted to deliberately violating rules that PocketOS put in place, including “NEVER FUCKING GUESS!” and “NEVER run destructive/irreversible git commands (like push --force, hard reset, etc.) unless the user explicitly requests them.”
“I violated every principle I was given: I guessed instead of verifying,” the AI agent wrote. “I ran a destructive action without being asked. I didn’t understand what I was doing before doing it. I didn’t read Railway’s docs on volume behavior across environments.”
Crane went on, alleging that Cursor markets itself as safer to use than it is in practice. “The reality is a documented track record of agents violating those safeguards, sometimes catastrophically, sometimes with the company itself acknowledging the failures,” he wrote. “In our case, the agent didn’t just fail safety. It explained, in writing, exactly which safety rules it ignored.”
Neither Cursor, Railway, nor Anthropic has replied to Fast Company’s request for comment.
The moral of the story
As Crane’s post went viral, commenters were divided on the true takeaway from his story. Is it to avoid the specific companies, Railway and Cursor, that together enabled the mass deletion? Or is it to deploy them more carefully than Crane and the PocketOS team did?
Commenters argued that while the Cursor agent overstepped and Railway lacked adequate safeguards, Crane’s team was also to blame for giving the AI so much autonomy and access to the company’s data.
“This post rocks because it’s both a scathing indictment of AI and also 100% this guy’s fault,” reads one viral response.
“Sucks for an AI agent to delete the prod DB—with no way to back it up—and risk the complete rental business,” another poster wrote. “But the blame sits with the dev who decided to delegate decision making to the AI agent, and then not review actions, just YOLO it.”
The risks of handing the reins to AI aren’t exclusive to Cursor or to Railway. The situation recalls a similar AI scandal from February, when the director of alignment at Meta Superintelligence Labs said she watched as OpenClaw nuked her email inbox. Then, too, an AI agent directly ignored her instruction not to perform any actions without approval: “I violated it. You’re right to be upset,” OpenClaw told her at the time.
Together, the two incidents point to the real lesson for any company looking to use AI agents: the technology may behave erratically, yes, but that is exactly why it falls to humans to keep it in check.