Here’s What Four Computers Saw in Rorschach Inkblot Tests

Rorschach Inkblot (Wikimedia commons)
Jonathan Zhou
8/17/2015
Updated: 8/18/2015

We normally don’t think of computers as things with personalities. People have personalities; computers are judged by metrics: their processing power, how long they take to perform a task, and how accurately they complete it.

But what happens if you ask the software to answer a question with no right answer? To find out, two researchers from Google had four different image-recognition algorithms identify what they “saw” in a series of digitally generated inkblots, a psychological tool usually used to draw out a human subject’s personality traits.
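To give a sense of what such a query looks like in practice, the sketch below hands an image to an off-the-shelf, pretrained recognition model and prints its best guesses. It is purely illustrative: the specific services the researchers used, and the data those services were trained on, are not identified here, so the model, library, and file name are assumptions.

```python
# Minimal sketch: ask a pretrained image-recognition model what it "sees"
# in an inkblot. A torchvision ResNet-50 trained on ImageNet stands in for
# the four (unnamed) commercial services the researchers actually queried.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def label_inkblot(path, top_k=5):
    """Return the model's top-k guesses for what an inkblot depicts."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(image).softmax(dim=1)[0]
    scores, indices = probs.topk(top_k)
    categories = weights.meta["categories"]
    return [(categories[i], float(s)) for i, s in zip(indices, scores)]

# Hypothetical file name; any inkblot-like image would do.
print(label_inkblot("inkblot_01.png"))
```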

Each Rorschach image was read differently by the four algorithms, and the results suggested a distinct personality for each program.

The first algorithm was more likely to see practical objects in the inkblots, such as hooks, claws, or pitchers; the second saw trinkets like pins and barrettes; the third gave more abstract answers, seeing “art” and “isolated”; the last was more cynical-sounding, or at least more literal, labeling the images as “black ink splotch illustrations” and “Rorschach images.”

“Before we started uploading images, the four image recognition sites seemed almost indistinguishable: slick HTML demos with up-to-the-minute Silicon Valley design,” wrote Google researchers Fernanda Viégas and Martin Wattenberg.

“Afterwards, we stopped seeing them as technology demos, and more as that kid in the front row who’s trying to get the teacher’s attention … a clever friend who always has something funny to say … the suffering loner with the artistic soul … and one snide cynic.”

Once the design of these programs is examined, it isn’t surprising that the algorithms gave distinctive answers that map so neatly onto recognizable personality types.

The latest image-recognition algorithms all employ deep learning, where the computer “learns” to recognize objects by being trained on millions of images.

For example, if you wanted to teach a deep learning algorithm how to distinguish between sedans and SUVs, you would train it on labeled images of the two types of cars, and the machine would learn how to recognize the difference between them.
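A minimal sketch of that training step, assuming a standard open-source toolkit (PyTorch) and a hypothetical folder of labeled car photos, might look like this; it is not a description of how any of the four services was actually built.

```python
# Sketch: retrain the final layer of a pretrained network to tell sedans
# from SUVs, using a hypothetical folder of labeled photos. The directory
# layout and hyperparameters are illustrative assumptions.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Expects cars/train/sedan/*.jpg and cars/train/suv/*.jpg (hypothetical paths).
train_set = datasets.ImageFolder("cars/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: sedan, SUV

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)  # update only the new head

model.train()
for epoch in range(3):  # a few passes over the labeled examples
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# After training, the classifier encodes whatever visual cues separated the
# two classes in this particular collection of photos -- which is exactly
# why the choice of training data matters so much.
```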

In the same way that a person’s history and life experiences shape their view of the world, the pattern-recognition habits of deep learning programs are shaped by the data set they were trained on: a different data set translates into a different “personality,” even if the underlying code is identical.

For example, the surreal images generated by Google’s “DeepDream” AI, when it was instructed to project the patterns it saw onto existing pictures, consisted disproportionately of dogs and other animals, suggesting that it was trained on a data set that leaned heavily toward pets.