Should Facebook Have Experimented on 689,000 Users and Tried to Make Them Sad?

6/30/2014

In a move that has as many people puzzled as outraged, Facebook has published research that involved a deliberate attempt to manipulate the emotional state of 689,000 of its users. Researchers from Facebook, Cornell University and the University of California, San Francisco conducted experiments over a one-week period in January 2012 in which they manipulated the contents of users’ News Feeds, screening out posts that had emotional content. The results of the study have recently been published in the Proceedings of the National Academy of Sciences of the USA.

In the experiment, users were split into three groups: for one group, posts containing positive words were screened from their News Feeds; for another, posts containing negative words were screened out; and a control group had random posts screened from their feeds. The researchers then counted the percentage of emotion words that the test subjects used in their own posts.

The results showed a very small but statistically significant effect. People who were shown fewer positive posts reduced their own use of positive words by 0.1% and increased their use of negative words by 0.04%. Conversely, people who were shown fewer negative posts increased their use of positive words by 0.06% and decreased their use of negative words by 0.07%.

The emotional responses shown by the unwitting participants in the study are nothing compared to the sense that Facebook, as a private company, has taken another step too far in its use of the network, creating mistrust and resentment in its user community.

Although the experiment may not have breached any of Facebook’s user agreements, it is clear that informed consent was not obtained from the participants in the research. The study itself allegedly received approval from the Institutional Review Boards at the researchers’ universities. According to the paper’s editor, Susan Fiske, this was given on the basis that “Facebook apparently manipulates people’s News Feeds all of the time”.

Professor Fiske, a psychologist at Princeton University who reviewed the paper, said that she was “creeped out” by the nature of the research. Despite this, she believed that the regulations had been followed and there was no reason the paper should not be published.

The Ethics of Good Research

We don’t know the full nature of the ethical clearance the researchers received from their respective universities, so it is hard to comment in detail on the basis on which the research was approved. If the approval was indeed based on Facebook’s agreement with its users, then it would be fair to say that this was a very liberal interpretation of informed consent.

Facebook’s Data Use Policy only says that it has the right to use information it receives for research. It does not make explicit that this could involve actually carrying out experiments designed to manipulate its customers’ emotions, let alone to induce negative ones.

Federal US guidelines on human research, set out in the “Common Rule”, are quite clear about what is and isn’t acceptable in this type of research. They detail how informed consent must be obtained and what information participants must be given, including the risks and benefits of being involved. Participants must also be allowed to opt out of the research. Although Institutional Review Boards are required for organisations conducting research funded by, or on behalf of, the government, private companies are also signatories to the regulations.

The fact that the researchers and Facebook did not ask for consent suggests that they knew that there would be a backlash when it became public and that it would be easier to deal with this after the fact.

Right now, the researchers involved are not allowed to answer questions about the research; questions are being handled by Facebook itself.

What Did the Research Itself Prove?

It is not at all clear that the research actually demonstrated very much about the transfer of emotional states via “emotional contagion”, as it claimed. The frequency of emotion words in very short status updates is clearly not a measure of the overall emotional state of the writer.

Even if it were, the experiment found differences of about 1 word in 1,000 in the number of emotional words used between the experimental and control groups. Remember that this is a difference in the number of positive or negative words used, not in the total number of words written. At the level of the individual, these differences are meaningless and hardly a demonstration of “emotional contagion”.

Big Data brings with it the naive assumption that more data is always better when it comes to statistical analysis. The problem, however, is that it actually introduces all sorts of anomalies, especially when dealing with extremely small differences in a single measure that only become detectable at scale.
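
To see why such tiny differences can still come out as “statistically significant”, consider a minimal sketch. This is my own illustration, not the paper’s actual analysis: the 5% baseline rate of positive words and the word counts are assumptions chosen purely to show the effect of sample size.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    # Pooled two-proportion z-test: returns the z statistic and a
    # two-sided p-value from the normal approximation.
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Assume 5.0% of words are "positive" in the control group and 4.9% in the
# treated group -- the same 1-word-in-1,000 difference discussed above.
for n in (10_000, 10_000_000):  # number of words analysed in each group
    z, p = two_proportion_z(0.050, n, 0.049, n)
    print(f"n = {n:>10,}   z = {z:5.2f}   p = {p:.3g}")
```

The same one-word-in-a-thousand gap is statistically invisible in a modest sample, yet at tens of millions of words it produces a huge z score and a vanishing p-value, which says nothing about whether the effect matters to any individual user.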

There may yet be another twist to this story. Given that it would be particularly strange for a prestigious journal to publish what seems to be quite weak research, perhaps this is all part of a bigger experiment to see how society reacts, especially on Facebook, to the idea that Facebook believes its customers are actually just test subjects to be examined at will.

David Glance does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations. This article was originally published on The Conversation. Read the original article.
