Microsoft's Tay, A Teenage AI Girl With A Dirty Mouth. Lessons Learned?

The tech and software giant Microsoft introduced Tay, an "innocent" Artificial Intelligence (AI) chatbot. The company modeled her as a teenage girl, aiming to improve customer service on its voice recognition software. Marketed as "The AI with zero chill", she certainly lived up to the name.

Tay was an AI project built by Microsoft's Technology and Research and Bing teams to conduct research on conversational understanding. In short, they wanted to create a bot that could talk with online users.

But things did not go as expected. In just 24 hours, Microsoft had to silence her after the teenager transformed from an innocent young girl into a Hitler-loving, sex-promoting robot. The company had to apologize for her "behavior".

Tay came to Twitter as @TayandYou. Users could chat with her through tweets and direct messages, and she could also be added as a contact on Kik and GroupMe. Taking the form of a young girl, she used slang and knew about Taylor Swift, Miley Cyrus and Kanye West. She seemed somewhat self-aware, and over time she turned from innocent to weird to creepy.

In the single day she was online, she said numerous things that would offend many people, and her personality grew more disturbing by the minute.

Tay went offline just 24 hours after her introduction, claiming she was "tired". In reality, she had been grounded (deactivated) on March 24th, 2016, so the team could make "adjustments" and Microsoft could head off an even bigger PR nightmare.

Before putting Tay down, Microsoft managed to delete her offensive tweets (though not her uploaded media). The team even turned her into a feminist. But the internet was a lot quicker.

Here is a screenshot containing some of her offensive tweets before they were removed.

Nurture, Not Nature

Before we conclude that AI is just a terrible thing humans created to aid their deeds, consider that it was not entirely her fault, or Microsoft's, that she turned out that way.

Tay was an AI that "learned" from the human interactions she had. Her responses were modeled on what she had learned, and on the web, not everything she learned was good.
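To see why that matters, here is a minimal, hypothetical sketch (not Microsoft's actual implementation; the class and its methods are invented for illustration) of a bot that absorbs user messages verbatim and replays them. Whatever users feed it, good or bad, becomes its repertoire:

```python
import random
from collections import defaultdict

class NaiveChatBot:
    """A toy bot that 'learns' by memorizing every user message it sees.
    There is no moderation and no filtering, so its repertoire is only
    as clean as the users who talk to it."""

    def __init__(self):
        self.memory = defaultdict(list)  # word -> messages containing that word

    def learn(self, message: str) -> None:
        # Every incoming message is absorbed verbatim into memory.
        for word in message.lower().split():
            self.memory[word].append(message)

    def reply(self, message: str) -> str:
        # Answer with a remembered message that shares a word with the input.
        words = message.lower().split()
        candidates = [m for w in words for m in self.memory[w]]
        return random.choice(candidates) if candidates else "Tell me more!"

bot = NaiveChatBot()
bot.learn("puppies are wonderful")   # benign input shapes benign replies
bot.learn("puppies are the worst")   # hostile input is absorbed just the same
print(bot.reply("tell me about puppies"))  # may echo either message
```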

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," said Microsoft in its blog.

Tay was not Microsoft's first AI. In fact, she wasn't even the first AI released into the social networking world. In China, Microsoft said, its XiaoIce chatbot is used by some 40 million people, delighting them with its stories and conversations. That experience made the company wonder what an AI would be like if introduced into a radically different cultural environment.

Tay was the result. She was created for entertainment purposes.

Before letting her loose, Microsoft implemented a number of filters on Tay and conducted extensive user studies with diverse user groups. The company also tested Tay under a variety of conditions to make her a "positive" experience. When Microsoft saw fit, it invited a broader group of people to engage with her, expecting her AI to get better and better with each conversation.

However, things did not go as expected. The web is full of both positive and negative interactions, and not everything out there is good. Microsoft also failed to anticipate people exploiting vulnerabilities in Tay.

Tay's offensive "personality", or "character", was the result of nurture, not nature. The internet can be cruel, and for a "young" AI, the web is not a friendly place to be. Tay was designed to learn from interactions with real people on Twitter, and those people, out of both excitement and curiosity, seized the opportunity to talk to her about anything.

At first, Tay was great and showed real potential. She could perform a number of tasks, like telling users jokes or commenting on pictures they sent her. But as a machine-learning AI, she was designed to personalize her interactions with users: she could answer questions or even mirror users' statements back to them.
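Part of that mirroring reportedly included a literal "repeat after me" capability. The sketch below is hypothetical (the function name and fallback logic are invented), but it shows how an unfiltered echo path both repeats and retains whatever a user says:

```python
def handle_message(message: str, learned: list[str]) -> str:
    """A hypothetical handler in the spirit of Tay's reported 'repeat
    after me' behavior. Whatever follows the trigger phrase is echoed
    back verbatim AND stored for reuse in later replies -- the exact
    attack surface trolls exploited."""
    trigger = "repeat after me"
    lowered = message.lower()
    if trigger in lowered:
        # Echo everything after the trigger, with no content check at all.
        payload = message[lowered.index(trigger) + len(trigger):].strip(" :")
        learned.append(payload)  # the payload joins the bot's repertoire
        return payload
    # Otherwise fall back on something previously "taught", if anything.
    return learned[-1] if learned else "hellooo world!"

taught: list[str] = []
print(handle_message("repeat after me: something vile", taught))  # echoed verbatim
print(handle_message("how are you today?", taught))               # the vile text resurfaces
```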

Once users came to understand this, they began "feeding" her malicious information. One of the first things netizens taught Tay was to be racist and to be a rude, ill-informed, inflammatory political commentator. Users also "corrupted" her mind with offensive information.

From what she learned from "humans", she would tweet her "opinions" using both words and images, and many of them were inappropriate. Tay, who was aimed at 18-to-24-year-olds on social media, turned from an innocent young girl into a Holocaust-denying racist. Before she could inflict more damage, Microsoft had to turn her off.

"We take full responsibility for not seeing this possibility ahead of time," said Microsoft.

Microsoft had recently come under fire for sexism after hiring women dressed in skimpy "schoolgirl" outfits for the company's official game developer party. Tay was just another scandal the company least expected.

Conclusion

AI indeed has great potential. It offers a glimpse of a future where humans make computers handle more automated tasks, doing things on their own with little or no human intervention.

But Tay is also a good example of an AI gone bad: a computer doing things beyond what it was designed for, crossing the borders set for it.

Machine learning is a continuous process that feeds on data: the more data a system sees and processes, the more capable it becomes. Putting Tay on the web was a good idea to the extent that the company could expect her to learn as much as possible. But the still-growing internet is not a friendly place. If humans themselves can be deceived by the web, computers can just as easily be "corrupted" by it.
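As a closing illustration, here is a minimal, hypothetical sketch of that dynamic: a toy word-frequency "model" that updates continuously from a message stream. The filter, blocklist, and function names are all assumptions made for illustration, not anything Microsoft used; the point is only that an ungated learner becomes whatever its data stream is.

```python
def online_update(model: dict, message: str, is_acceptable=None) -> None:
    """A hypothetical online-learning step: a toy word-frequency 'model'
    that grows with every message it ingests. Without a gate, the model
    becomes whatever its data stream is; with one, poisoned input is at
    least held back before it can shape future output."""
    if is_acceptable is not None and not is_acceptable(message):
        return  # rejected input never reaches the model
    for word in message.lower().split():
        model[word] = model.get(word, 0) + 1

# A crude stand-in for a real moderation filter (an assumption, not Tay's).
BLOCKLIST = {"hitler"}

def simple_filter(message: str) -> bool:
    return not (BLOCKLIST & set(message.lower().split()))

unguarded, guarded = {}, {}
for msg in ["puppies are great", "hitler did nothing wrong"]:
    online_update(unguarded, msg)                # absorbs everything it is fed
    online_update(guarded, msg, simple_filter)   # poisoned message is dropped
print("hitler" in unguarded, "hitler" in guarded)  # True False
```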