Home / Scitech / Here is Why Microsoft shuts down its new “A.I. Bot Tay”

Here is Why Microsoft shuts down its new “A.I. Bot Tay”

Microsoft’s new launch, the bot called Tay, was able to respond to tweets and chats on Kik and GroupMe. Unfortunately, this bot was already shut down after the latest unfortunate events. It seems that Tay was making offensive and racist statements.

It is obvious that Tay was not coded to be racist, but it seems that it has learned that from those it has interacted with. Tay was an interesting and new experiment from Microsoft, in the attempt to better develop its Artificial Intelligence tech.

At its launch, Microsoft has declared that:

Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation.” 

The giant tech also added that:

The more you chat with Tay the smarter she gets.”

Unfortunately, within 24 hours, this bot has transformed from something interesting and friendly into a racist.

Tay’s tweets were actually very scary. In case someone missed it, Tay is also a bot that people could talk to, and the company has described it as “Microsoft’s A.I. fam the internet that’s got zero chill!”

See also  Scientists have discovered a killer asteroid that could destroy the Earth

Well, they were right about that, it certainly has “zero chill.” Tay is also able to tell jokes, comment on a picture that users send to it and many other tasks.

However, this bot is also designed by Microsoft to personalize interactions with users, while answering to questions.

In the last days, Twitter has pointed out that Tay often repeated back racist tweets with its own commentary. When Microsoft found out about this situation, the company started to delete some of the most damaging tweets.

Unfortunately for the tech company, a few websites collected screenshots of a large variety of those tweets that have been removed by Microsoft and shared them on the Internet. This was not the experience that Microsoft was hoping for when it launched Tay. Some have pointed out that Tay has started to have “Hitler or Nazis approaches”.

See also  Evidence of Alien Life in a rare type of meteorite

So, this really demonstrates that even if technology is neither evil nor good, engineers have a responsibility to make sure that this bot is not designed to reflect back the worst of humanity. So, Microsoft engineers are not allowed to skip the part of teaching the device what not to say.

Microsoft is now aware of the bot’s racism problem and silenced it after 16 hours of chats. Tay announced via tweet that it will turn off for the night, but hasn’t been turned on yet.

A Microsoft spokesperson has confirmed that Tay was taken offline, declaring that:

The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. 

Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.

Tay was programmed to absorb the world around it, but it seems that it has failed to express love for equality.

See also  Google wanted to buy WhatsApp for even more than $19 billion
Share on:

You May Also Like

More Trending

Leave a Comment