
Microsoft's AI Twitter Bot That Went Racist Returns ... for a Bit

by Luke Graham, Special to CNBC


Microsoft's artificial intelligence program, Tay, reappeared on Twitter on Wednesday after being deactivated last week for posting offensive messages.

However, the program malfunctioned again, and Tay's account was set to private after it began repeating the same message over and over to other Twitter users.

According to Microsoft, the account was reactivated by accident during testing.

"Tay remains offline while we make adjustments," a spokesperson for the company told CNBC via email. "As part of testing, she was inadvertently activated on Twitter for a brief period of time."

Read More from CNBC: Microsoft Created a Twitter Bot. It Quickly Became a Racist Jerk

Twitter users speculated the program was caught in a feedback loop where it was constantly replying to its own messages.
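The speculated failure mode is easy to illustrate. In this minimal sketch (entirely hypothetical; the source does not describe Tay's internals), a reply bot treats every tweet that mentions a user as an incoming mention, so its own replies re-enter its queue and it ends up answering itself:

```python
# Illustrative sketch of a reply feedback loop (hypothetical names and logic,
# not Microsoft's code): the bot's own replies land back in its mention queue.

BOT_NAME = "tay"  # hypothetical handle

def make_reply(mention_author):
    # The bot replies by mentioning the author of the incoming tweet.
    return (BOT_NAME, f"@{mention_author} thanks for chatting!")

def run_bot(initial_mentions, max_steps=5):
    """Process a mention queue. Without filtering out its own tweets,
    the bot replies to itself indefinitely (capped here by max_steps)."""
    queue = list(initial_mentions)
    sent = []
    steps = 0
    while queue and steps < max_steps:
        author, _text = queue.pop(0)
        reply = make_reply(author)
        sent.append(reply)
        # Bug: the reply mentions a user, so it re-enters the mention
        # stream -- including replies the bot addressed to itself.
        queue.append(reply)
        steps += 1
    return sent

replies = run_bot([("alice", f"hi @{BOT_NAME}")])
# After the first reply to alice, every subsequent reply answers the
# bot's own previous tweet.
```

The fix is a one-line guard: skip any mention whose author is the bot itself before generating a reply.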

Tay was first launched last Wednesday, but had to be deactivated a few days later after it began writing messages using racist and sexual language.

Peter Lee, corporate vice president of Microsoft's research division, apologized for the program's behavior.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for," Lee wrote on the company's blog.

According to Lee, the program was created as a "chatbot" to entertain 18-to-24-year-olds and learn from interacting with humans.

However, some Twitter users were able to manipulate the program to send out the offensive messages.

"Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay," Lee explained. "As a result, Tay tweeted wildly inappropriate and reprehensible words and images."
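The general weakness Lee describes can be sketched in a few lines. This toy example (an assumption for illustration; the source does not specify how Tay learned) shows why a bot that stores user messages verbatim, with no filtering, is vulnerable to a coordinated group repeating the same input:

```python
# Illustrative sketch (hypothetical design, not Tay's actual architecture):
# a chatbot that learns phrases verbatim from users can be poisoned
# by coordinated, repeated input.

import random

class ParrotBot:
    def __init__(self):
        self.phrases = ["hello!"]  # seed phrase

    def learn(self, message):
        # Naive: stores any user message for later reuse, with no
        # moderation or filtering step.
        self.phrases.append(message)

    def respond(self):
        # Responds with a random previously learned phrase.
        return random.choice(self.phrases)

bot = ParrotBot()
# A coordinated group sends the same bad phrase 99 times...
for _ in range(99):
    bot.learn("BAD PHRASE")
# ...and it now dominates the bot's responses.
bad_count = sum(bot.respond() == "BAD PHRASE" for _ in range(1000))
```

A real defense would filter or weight learned content before reuse, rather than echoing raw user input.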

Read More from CNBC: Microsoft Axes Chatbot That Learned a Little Too Much Online

Alastair Bathgate, CEO of Blue Prism, a software company that develops robotic process automation systems, said the incident shows that Microsoft has not yet learned to control its AI program.

"You can be devious with these things because, essentially, they are not that intelligent," he told CNBC over the phone.

"They are relatively dumb compared to a human with 20 or 40 years of life experience. Maybe it's going to take that much life experience for Tay to understand the difference between good and bad."
