The Ethics Of Facebook’s Emotion-Manipulation Research


I’ve railed against Facebook many times on this blog, and in 2010’s “Facebook: Beyond The Last Straw,” I promised I would stop. I managed to keep that promise for nearly four years, but I’ve been roused to rail once again by the confluence of four different interests I happen to have: emotion research (one of my philosophical activities), ethics (a subject I teach), federal regulations covering university research (which I help administer as a member of my university’s Institutional Review Board), and the internet (which, of course, I constantly use).

In case you haven’t yet heard, what Facebook did was manipulate the “news feeds” users received from their friends, eliminating items containing words associated with either positive or negative emotions, and then observing the degree to which the manipulated users subsequently employed such positive or negative vocabulary in their own posts. Facebook’s main goal was to disconfirm a hypothesis, suggested by previous researchers, that users would be placed in a negative mood by their friends’ positive news items, or in a positive mood by their friends’ negative news items. As I understand it, the results did disconfirm that hypothesis and confirmed the opposite one (namely, that reading friends’ positive or negative news items would place users in congruent rather than incongruent mood states), but just barely.

Although I find this methodology questionable on a number of grounds, the peer reviewers apparently did not: the research was published in a reputable journal. More interesting to me are the ethical implications of Facebook’s having used its users as guinea pigs in this way.

The best article I’ve found on the net about the ethical issues raised by this experiment was written as an opinion piece on Wired by Michelle N. Meyer, Director of Bioethics Policy in the Union Graduate College-Icahn School of Medicine at Mount Sinai Bioethics Program. Meyer writes specifically about the question of whether the research, which involved faculty from several universities whose human-subject research is federally regulated, could have (and should have) been approved under the relevant regulations. Ultimately, she argues that it both could have and should have, assuming that the manipulation posed minimal risk (relative to other manipulations users regularly undergo on Facebook and other sites). Her only caveat is that more specific consent should have been obtained from the subjects (without giving away the manipulation involved), and that some debriefing should have occurred afterward. If you’re interested in her reasoning, which at first glance I find basically sound, I encourage you to read the whole article. Meyer’s bottom line is this:

We can certainly have a conversation about the appropriateness of Facebook-like manipulations, data mining, and other 21st-century practices. But so long as we allow private entities to engage freely in these practices, we ought not unduly restrain academics trying to determine their effects. Recall those fear appeals I mentioned above. As one social psychology doctoral candidate noted on Twitter, IRBs make it impossible to study the effects of appeals that carry the same intensity of fear as real-world appeals to which people are exposed routinely, and on a mass scale, with unknown consequences. That doesn’t make a lot of sense. What corporations can do at will to serve their bottom line, and non-profits can do to serve their cause, we shouldn’t make (even) harder—or impossible—for those seeking to produce generalizable knowledge to do.

My only gripe with this is that it doesn’t push strongly enough for the sort of “conversation” mentioned in the first line. The ways in which social media sites (and other internet sites) can legally manipulate their users without their specific consent are, as far as I can tell, entirely unregulated. Yes, the net should be open and free, but manipulation of the sort Facebook engaged in undermines rather than enhances user freedom. We shouldn’t expect to be able to avoid every attempt to influence our emotions, but there is an important difference between (for instance) being exposed to an ad as the price of admission and having the information your friends intended you to see edited, unbeknownst to you or them, for some third party’s ulterior purpose.