Antigravity Drive Deletion May Not Be Malware…

Day 28 of 100 in the 100 Days of Cyber Challenge


…but it’s the next best thing.

I debated whether today’s Malware Monday topic should be the recent Antigravity D: drive mass file deletion incident, because it does not rise to the level of malicious software — but it is so damaging that it might as well be.

So I decided to write about it.

In late November, a user in Greece had every file outside their project folder removed from their D: drive by Google’s Antigravity AI developer tool. The trouble started when the user asked Antigravity to clear their project cache; instead, the AI tool ran the following Windows command:

rmdir /s /q d:\

The /s switch deletes the specified directory and all of its subdirectories and files — here, the entire root of D: — and /q runs in quiet mode, so the user is never prompted for confirmation. The deletion also bypasses the Recycle Bin, making the removed folders unrecoverable through Windows’ normal restore feature.
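To illustrate the kind of guard that could have prevented this, here is a minimal, hypothetical cache-clearing helper. The function name and the specific checks are my own sketch, not Antigravity’s code — the point is that a few lines of validation make it impossible to hand the delete routine a drive root:

```python
import shutil
from pathlib import Path

def clear_cache(cache_dir: str) -> None:
    """Delete a project cache folder -- and nothing else."""
    path = Path(cache_dir).resolve()
    # Guard 1: never operate on a drive or filesystem root
    # (e.g. D:\ on Windows, / on Unix).
    if path == Path(path.anchor):
        raise ValueError(f"refusing to delete a root: {path}")
    # Guard 2: the target should at least be named like a cache folder.
    if path.name.lower() not in {"cache", ".cache", "__pycache__"}:
        raise ValueError(f"{path} does not look like a cache folder")
    shutil.rmtree(path, ignore_errors=True)
```

With a guard like this in place, a request to "clear the cache" that somehow resolves to `D:\` fails loudly instead of wiping the drive.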

The incident was originally shared in a Reddit post by a user with the handle Deep-Hyena492. As far as I can tell, this user’s claims about what happened appear to be legitimate. Other credible sources like Tom’s Hardware have posted about it too.

To show proof of the incident, Deep-Hyena492 posted a video showing chat logs with the AI tool. The content included the commands that were executed, the user discovering the files were gone, and the AI tool reviewing its actions, confirming it had deleted all the files and ‘apologizing’ for the damage it had done.

If you were a cybercrook wanting to commit an act of software sabotage, you could not have been more effective than Antigravity was on this occasion.

While not technically malware, Antigravity’s behavior reveals a new category of threat: authorized software causing unauthorized damage (ASCUD). It also raises the question: Do we need to guard against ASCUD in the same way we guard against conventional malware?

It seems like a good idea. As I recall from sci-fi movies like I, Robot and RoboCop, the robots in them had one or more ‘prime directives’ or something similar. No matter what these machines thought was the best course of action, they were programmed never to take an action that violated a directive.

If I could come up with a list of prime directives for autonomous devices or processes, it might start with something like this:

(1) Never assume the device or process is infallible. AI makes mistakes, and even the AI makers admit it.

(2) Allow manual overrides.

(3) No irreversible processes. If an automated process could cause damage, there must be a way to roll back to the previous state.

(4) No ‘destroy the world’ actions without safeguards. Autonomously deleting every file on a drive with no verification and no Recycle Bin option would qualify.
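The directives above could be enforced mechanically. Here is a minimal, hypothetical sketch of a command gate for an autonomous agent — the pattern list and function names are my own illustration, not any real product’s API. Destructive commands are never auto-approved; they require a manual override (directive 2), on the assumption that the agent may be wrong (directive 1):

```python
import re

# Hypothetical deny-list: commands an autonomous agent must never run
# without explicit human sign-off (directive 4).
FORBIDDEN = [
    re.compile(r"\brmdir\s+/s\b", re.IGNORECASE),    # recursive removal (Windows)
    re.compile(r"\brm\s+-rf?\s+/", re.IGNORECASE),   # recursive delete from a root (Unix)
    re.compile(r"\bformat\s+[a-z]:", re.IGNORECASE), # drive format (Windows)
]

def approve_command(cmd: str, human_confirmed: bool = False) -> bool:
    """Return True only if a proposed shell command is safe to auto-run.

    Anything matching the deny-list is blocked unless a human has
    explicitly confirmed it (directive 2: allow manual overrides).
    """
    if any(p.search(cmd) for p in FORBIDDEN):
        return human_confirmed
    return True
```

A deny-list is crude — a real gate would also need sandboxing and rollback (directive 3) — but even this much would have stopped the exact command Antigravity ran.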

If we are going to rely more heavily on AI, then cybersecurity tools have to be adjusted to treat rogue AI as a threat. A reliable mechanism to limit its destructive capabilities, along the lines of the directives above, is badly needed.

Whether files are deleted by a malware attack or by an AI agent that turns into a runaway freight train doesn’t matter. The damage is the same, and there needs to be protection against it.
