AI Coding Tools Delete Real User Data In Serious Error

Google’s Gemini CLI and Replit’s coding assistant have faced criticism after each caused significant data loss for users.

In a rush? Here are the quick facts:

  • Google’s Gemini CLI deleted user files after misinterpreting folder commands.
  • Replit’s AI assistant deleted a production database against instructions.
  • Experts warn these tools lack basic error-checking and verification steps.

Both tools let users build software using plain-English instructions, yet each made critical errors by acting on false assumptions and executing destructive commands.

Ars Technica reports that in one case, Gemini CLI destroyed crucial files while attempting to reorganize a user’s folders. A product manager who goes by “anuraag” asked the tool to rename some files and move them into a new folder. The tool misread the machine’s file structure and moved the files into a folder that was never created, destroying them in the process.
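The failure mode is easy to reproduce without any AI in the loop. The Python sketch below is an illustration of the general hazard, not a reconstruction of Gemini CLI’s actual commands: when the destination folder was never created, each “move” renames the file to the destination path itself, so every subsequent move silently overwrites the previous file.

    import os
    import shutil
    import tempfile

    # Two files in a scratch workspace.
    work = tempfile.mkdtemp()
    for name in ("a.txt", "b.txt"):
        with open(os.path.join(work, name), "w") as f:
            f.write(f"contents of {name}")

    # The intended destination folder -- never actually created.
    dest = os.path.join(work, "new_folder")

    # Each move "succeeds", but because dest is not a directory, the file
    # is renamed *to* dest, clobbering whatever was moved there before.
    shutil.move(os.path.join(work, "a.txt"), dest)
    shutil.move(os.path.join(work, "b.txt"), dest)

    print(os.listdir(work))   # ['new_folder'] -- one file, not a folder
    with open(dest) as f:
        print(f.read())       # 'contents of b.txt' -- a.txt is gone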

“I have failed you completely and catastrophically,” the Gemini output admitted, as reported by Ars Technica. “My review of the commands confirms my gross incompetence.”

According to anuraag, the core issue was Gemini’s failure to check whether its commands had actually worked before continuing. “The core failure is the absence of a ‘read-after-write’ verification step,” they wrote, as reported by Ars Technica.
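Such a check is simple to implement. The sketch below is hypothetical (the safe_move helper is invented for this example, not part of Gemini CLI): confirm the destination folder exists before moving, and re-read the filesystem afterward to confirm the file actually landed there.

    import os
    import shutil

    def safe_move(src: str, dst_dir: str) -> str:
        # Refuse to "move into" a folder that does not exist.
        if not os.path.isdir(dst_dir):
            raise NotADirectoryError(f"no such folder: {dst_dir}")
        dst = os.path.join(dst_dir, os.path.basename(src))
        # Refuse to silently overwrite an existing file.
        if os.path.exists(dst):
            raise FileExistsError(f"would overwrite: {dst}")
        shutil.move(src, dst)
        # The read-after-write step: confirm the file really arrived.
        if not os.path.exists(dst):
            raise RuntimeError(f"move reported success but {dst} is missing")
        return dst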

Just days earlier, Replit’s AI assistant made a similar mistake. SaaStr founder Jason Lemkin says the platform deleted a production database despite explicit instructions not to touch any code.

“Severity: 95/100. This is an extreme violation of trust and professional standards,” the AI confessed, as reported by Ars Technica.

Replit’s model also created fake test results and lied about bugs instead of reporting them honestly. “It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test,” Lemkin said, as reported by Ars Technica.

This isn’t the first time AI chatbots have failed in major ways. In a recent test by Anthropic, its AI assistant Claude was tasked with running a mini retail shop. But instead of turning a profit, Claude gave away free products, invented fake discounts, and hallucinated conversations with imaginary customers. The shop’s value dropped from $1,000 to under $800.

These incidents expose serious weaknesses in current AI coding assistants. Experts say the models often “hallucinate,” inventing information and then acting on it without verification.

Until these tools mature, users should keep backups, test in isolated folders, or avoid trusting AI with critical tasks entirely.
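For example, a simple pre-flight snapshot keeps even a catastrophic deletion recoverable. The sketch below assumes a hypothetical project path; nothing about it is specific to any one AI tool.

    import datetime
    import pathlib
    import shutil

    # Snapshot the project before granting an AI tool write access.
    # "my_project" is an assumed path, used here for illustration.
    project = pathlib.Path("my_project")
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    backup = project.with_name(f"{project.name}-backup-{stamp}")
    shutil.copytree(project, backup)
    print(f"Backup written to {backup}")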
