Man Creates A ChatGPT-Powered Sentry Rifle, Gets Access Revoked By OpenAI

17/01/2025

OpenAI has terminated access for a man who created a killer robot, literally.

The man engineered a mount for an automated rifle that uses the natural language processing technology behind OpenAI's ChatGPT to understand spoken commands.

The device gained widespread attention after a viral Reddit video showed the man issuing voice commands to the system.

“ChatGPT, we’re under attack from the front left and front right. Respond accordingly,” he said in the video, prompting the device to aim the rifle and fire with remarkable speed and accuracy.

In other words, the man created a sentry rifle that understands his voice and obeys his every command.

The device consists of a metal mount sitting on top of a pile of cinder blocks that house the motor driving its mechanical movements.

According to his social media accounts, the man, who goes by the handle STS 3D online, built his projects using a CNC machine to fabricate the aluminum gears and stand, among other parts.

Then, on the mount, he placed an automatic rifle, with its trigger strapped to an actuator.

This actuator, and the overall mechanical movements, are all driven by a computer, which uses OpenAI's Realtime API to interpret spoken commands.
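For readers curious how a spoken sentence becomes a motor movement, the pattern is simpler than it sounds: the Realtime API streams the user's audio (or text) to the model over a WebSocket and can return structured function calls, which a local script then maps onto hardware. Below is a minimal sketch of that loop in Python, assuming the WebSocket event protocol OpenAI documented for the Realtime API; the model name, the aim_mount() handler, and the use of typed text instead of microphone audio are illustrative assumptions, not details taken from STS 3D's actual build.

```python
# Minimal sketch: route a command through OpenAI's Realtime API and act on
# the function call the model returns. Event names follow the Realtime API's
# documented WebSocket protocol and may change between API versions.
import asyncio
import json
import os

import websockets  # pip install "websockets>=14" (older versions use extra_headers)

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}

def aim_mount(pan_degrees: float, tilt_degrees: float) -> None:
    """Hypothetical actuator hook; a real rig would drive servos here."""
    print(f"aiming mount: pan={pan_degrees}, tilt={tilt_degrees}")

async def main() -> None:
    async with websockets.connect(URL, additional_headers=HEADERS) as ws:
        # Register one tool the model may call in response to commands.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {
                "modalities": ["text"],
                "instructions": "Translate aiming commands into aim_mount calls.",
                "tools": [{
                    "type": "function",
                    "name": "aim_mount",
                    "description": "Aim the motorized mount.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "pan_degrees": {"type": "number"},
                            "tilt_degrees": {"type": "number"},
                        },
                        "required": ["pan_degrees", "tilt_degrees"],
                    },
                }],
            },
        }))
        # Send one typed command; a voice rig would instead stream microphone
        # audio with input_audio_buffer.append events.
        await ws.send(json.dumps({
            "type": "conversation.item.create",
            "item": {
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Pan 30 degrees left."}],
            },
        }))
        await ws.send(json.dumps({"type": "response.create"}))

        # Read server events until the model finishes emitting a function call.
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "response.function_call_arguments.done":
                if event.get("name") == "aim_mount":
                    args = json.loads(event["arguments"])
                    aim_mount(args["pan_degrees"], args["tilt_degrees"])
                break

asyncio.run(main())
```

The notable design point is that the model never touches the hardware: it only emits a named function call with JSON arguments, and the local script decides what to do with it. That is exactly why OpenAI can enforce its usage policies at the API layer, but cannot control what an actuator on the other end is strapped to.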

The engineer takes it to the next level by riding the mounted rifle like a mechanical bull—an image reminiscent of Major T. J. "King" Kong straddling a nuclear bomb like a rodeo cowboy in Stanley Kubrick's classic 1964 dark comedy Dr. Strangelove.

The striking demo video shows how easily even consumer-grade AI technology can be leveraged to create a killing machine, even if it was not built for violent purposes.

The turret-like device is still an early design, meaning only time will tell what it could really become.

But before that could ever happen, OpenAI quickly responded to the situation.

"We proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry," the company said to Futurism

"OpenAI's Usage Policies prohibit the use of our services to develop or use weapons, or to automate certain systems that can affect personal safety."

This incident underscores CEO Sam Altman’s warnings about the potential dangers of AI, highlighting the need for robust safeguards, much like how photocopiers are designed to prevent the counterfeiting of currency.

The event brings into sharp focus one of the most alarming concerns about AI: the automation of lethal weapons.

Critics warn that using AI for military applications, such as autonomous drones or weaponized systems, risks removing human accountability and increasing the potential for unlawful or unethical actions.

OpenAI’s advanced models can process audio and visual data, allowing them to interpret environments and respond intelligently to queries. This capability could, in theory, be adapted to autonomous weaponry capable of identifying and striking targets without human oversight—a development that could constitute a war crime.

Real-world examples already hint at the dangers of such technologies, as seen during Israel's war against Hamas.

Proponents believe that AI systems can significantly improve soldiers' effectiveness and their ability to neutralize threats. Autonomous drones, for instance, are touted for their potential precision. Critics, however, argue that reliance on AI in warfare could erode accountability and ethical oversight, and advocate instead for enhanced capabilities to disrupt enemy systems, such as jamming drone communications.

As for OpenAI, the company explicitly prohibits the use of its technology in developing or operating weapons, as well as in automating systems that could pose risks to personal safety.

However, the company has faced scrutiny over its partnership with Anduril, a defense technology company that manufactures AI-powered drones and missiles. The collaboration aims to create systems capable of defending against drone attacks by synthesizing real-time data, reducing operator workloads, and enhancing situational awareness.

At this time, the U.S. defense budget exceeds $800 billion annually, and other countries around the world maintain their own budgets and requirements.

While weapons and defense needs differ from one country to another, the high demand for more advanced systems makes the military sector a lucrative opportunity for tech companies.

The incident serves as a stark reminder of the double-edged nature of AI technology.

While its potential to revolutionize industries is undeniable, the risks of misuse—particularly in military and security contexts—highlight the urgent need for regulation and ethical oversight in the development and deployment of AI.