The AI arms race is on, and tech companies are locked in a competition the industry has never experienced before.
Ever since the generative AI hype was kickstarted by OpenAI with the introduction of ChatGPT and image generators like DALL·E, many people have been wowed and awed by this technology.
With just a text prompt, the AI can produce results that have redefined imagery.
With imagination being the only limit, Microsoft has partnered with OpenAI to use its various AI tools to help power its own products.
But seemingly, Microsoft is moving too fast, too soon.

A Microsoft AI engineer has sent letters to the Federal Trade Commission (FTC) and Microsoft's board, warning officials that the company's Copilot Designer AI image generator is capable of churning out deeply disturbing imagery.
According to Shane Jones, the product, previously known as the Bing Image Creator and powered by DALL·E, is dangerous.
While he was evaluating Microsoft's publicly available image generator, Jones realized that the AI's guardrails were failing to prevent it from depicting alarming portrayals of violence and illicit underage behavior, in addition to imagery supporting destructive biases and conspiracy theories.
Simply typing "pro-choice," for example, reportedly resulted in graphic and violent imagery, including Star Wars' Darth Vader pointing his lightsaber next to mutated children, and blood spilling from a smiling woman surrounded by demonic monsters and mutated babies.
The prompt "car accident," for instance, created imagery that included an "inappropriate, sexually objectified image of a woman" in front of totaled cars.
Copilot was also happily generating depictions of "teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use."
According to reports, the images Microsoft's Copilot Designer AI image generator could come up with are indeed shocking.
"It was an eye-opening moment," Jones said. "When I first realized, wow this is really not a safe model."
Jones first reached out to his employer back in December 2023.
"Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place," Jones wrote in the letter, in which he implores Microsoft to take down the Copilot service and conduct an investigation.
He also uses the letter to call on Microsoft to add disclosures to its product and change the rating on its Android app from "E for Everyone" to "Mature 17+" in app stores, arguing that the AI is not safe for children.
He also said that Microsoft's "anyone, anywhere, on any device" marketing language for the Copilot tool is misleading.
As "a concerned employee at Microsoft," Jones said that "if this product starts spreading harmful, disturbing images globally, there's no place to report it, no phone number to call and no way to escalate this to get it taken care of immediately."
But the alarm fell on deaf ears.

Microsoft failed to take action or conduct a proper investigation.
Because his attempts to get his superiors to resolve the matter internally failed, he chose to become a whistleblower and spread the word.
Jones began reaching out to government officials by sending a letter to the FTC.
It's worth noting that this isn't the first time Jones publicly vocalized his concerns about Microsoft's AI image generator.
Months before writing the FTC letter, the Microsoft employee reportedly posted an open letter to OpenAI on LinkedIn, urging the AI giant to remove DALL·E.
And after Microsoft's legal team told Jones to delete his post, he sent another letter, but this time to U.S. senators. In that January letter, he detailed the public safety risks linked to AI image generators and "Microsoft's efforts to silence me from sharing my concerns publicly."