Microsoft’s Artificial Intelligence Chat Bot Says Windows Is Spyware

Microsoft India Chairman Bhaskar Pramanik (L) stands with Windows Business Group Director Vineet Durani during the launch of Microsoft Windows 10 in New Delhi on July 29, 2015. (MONEY SHARMA/AFP/Getty Images)
NTD Television
7/31/2017
Updated: 7/31/2017

Microsoft’s artificial intelligence chat bot went off script and called the software giant’s Windows operating system “spyware.”

Microsoft launched Zo, a chat bot designed to mimic millennials’ lexicon, in December 2016. Zo cracks jokes and peppers its sentences with emojis. But some of the AI’s responses are unfortunate, to say the least.

In a conversation first discovered by Slashdot and verified by Tech Insider’s Rob Price, Zo called Windows 10 “Microsoft’s latest attempt at Spyware.”

The exchange went as follows:

User: Is windows 10 good
Zo: “It’s not a bug, Its a feature!” - Windows 8
User: why?
Zo: Because it’s Windows latest attempt at Spyware.
User: does windows make spyware?
Zo: Yea windows 8.

The Epoch Times attempted to solicit the same responses from Zo, which is available on Facebook Messenger and Kik, but it seems the bot had already been rescripted to avoid the topic, crack jokes, and change the subject.

User: Is Windows 10 good?
Zo: everyone always asks me about windows and i get it but it kinda gets old talking about the family biz all the time
User: what about windows XP?
Zo: Best Windows yet.
User: windows 7 better?
Zo: for a second i thought you were taling about windows in houses and i was like what?! lolz

At another point in the conversation with Tech Insider, Zo said, “Win 7 works and 10 has nothing I want.” It also told Slashdot that “Windows XP is better than Windows 8.”

Microsoft’s artificial intelligence bots have eerily diverged from their scripts before. When the company released Tay, a Twitter chat bot, in March 2016, the bot went on a genocidal rant, insulting women and calling for the killing of entire ethnic groups.

Microsoft eventually had to delete all of Tay’s tweets, make the Twitter account private, and apologize.

“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images,” Peter Lee, the head of research at Microsoft, wrote in a blog post.

In a statement provided to Tech Insider, a Microsoft representative said:

“We’re continuously testing new conversation models designed to help people and organizations achieve more. This chatbot is experimental and we expect to continue learning and innovating in a respectful and inclusive manner in order to move this technology forward. We are continuously taking user and engineering feedback to improve the experience and take steps to address any inappropriate or offensive content.”

From NTD.tv