The emotional maze: a week of moods manipulated by Facebook (2014).
A research paper published by Facebook in 2014 caused a worldwide stir. The paper, published in the Proceedings of the National Academy of Sciences (PNAS) under the title "Experimental evidence of massive-scale emotional contagion through social networks," describes the results of an experiment showing how our emotions are affected through social networking sites. The subjects were approximately 700,000 Facebook users, none of whom were informed that they were part of a social experiment.
The experiment lasted one week, from January 11 to 18, 2012. During this time, Facebook manipulated the posts that appeared in the News Feeds of the target users. The algorithm adjusted the feed so that some users saw more positively worded posts while others saw more negatively worded posts; in this way, the emotional "atmosphere" that users received was intentionally altered. The result was that those who saw more positive posts became more upbeat in their own posts, while those exposed to more negative posts were more likely to choose darker tones in their words. This demonstration that emotion spreads through the feed was the core of the experiment.
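To make the mechanism concrete, here is a minimal sketch, in Python, of how a feed might be filtered by emotional polarity. Everything in it is hypothetical: the word lists, the `sentiment` helper, the condition names, and the omission probability are illustrations I have chosen, not Facebook's actual News Feed code or the study's classification method.

```python
# Hypothetical illustration only: a toy sentiment-based feed filter.
# None of these names or numbers come from the Facebook study; they simply
# show the kind of mechanism described in the paragraph above.
import random
from dataclasses import dataclass

POSITIVE_WORDS = {"great", "happy", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

@dataclass
class Post:
    author: str
    text: str

def sentiment(post: Post) -> int:
    """Crude word-count score: +1 per positive word, -1 per negative word."""
    words = post.text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def filtered_feed(posts: list[Post], condition: str, omit_prob: float = 0.5) -> list[Post]:
    """Randomly omit posts of one polarity, mimicking the two conditions the
    article describes: hide some negative posts, or hide some positive ones."""
    feed = []
    for post in posts:
        s = sentiment(post)
        if condition == "reduce_negative" and s < 0 and random.random() < omit_prob:
            continue
        if condition == "reduce_positive" and s > 0 and random.random() < omit_prob:
            continue
        feed.append(post)
    return feed

# Example: the same raw posts yield a more upbeat or a gloomier feed
# depending on which condition a user is assigned to.
posts = [Post("a", "What a wonderful happy day"), Post("b", "I feel sad and terrible")]
print([p.text for p in filtered_feed(posts, "reduce_negative")])
```

The actual study went one step further than this sketch: it then measured the emotional wording of the subjects' own subsequent posts to test whether the altered feed had changed what they expressed.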
But behind the scientific findings lurked a darker ethical side. The biggest problem was the lack of informed consent: participants received no prior explanation and gave no agreement to take part. In psychology and the social sciences, informed consent (a full explanation followed by voluntary agreement) is a pillar of research ethics. Although Facebook's terms of use stated that information "may be used for research," intervening in the moods of hundreds of thousands of people on that basis alone was widely perceived as far too intrusive.
Such manipulation could also harm mentally vulnerable users. For example, a person with depressive tendencies who was shown predominantly negative posts risked a further deterioration of their mental state, yet this risk was largely neglected in the study. In addition, an experiment of this kind would normally require rigorous review by an institutional review board (IRB), but the decision was made internally at Facebook, and according to some critics the university's ethics review was after-the-fact and merely formal.
This incident also brought to light the fact that Facebook, as a huge platform, can control the mood and behavior of its users without their knowledge. The experiment symbolized the transformation of the social networking service from a place of simple social interaction into a venue for psychological manipulation.
Of course, there were many voices defending the experiment, with Facebook explaining that "the impact of the experiment was minimal, with an average change of less than one post," and claiming that "adjustments to the News Feed are always being made, and this manipulation is part of that process." However, the issue here was not the magnitude of the manipulation but its purpose and intent. Unlike algorithms for commercial optimization, this was a "social experiment" targeting human emotions. The ethical weight of conducting it without the users' knowledge is inescapable, no matter how small the change.
This incident triggered a widespread discussion about the nature of experimental research conducted through social networking sites, the relationship between companies and academic institutions, and ethics and privacy in the digital society. Universities are now subject to stricter ethical screening for joint research with companies, and social networking companies are required to be even more accountable and transparent to their users. Emotions do indeed propagate. But to what extent are our "free expression" and "natural feelings" protected when they are guided by someone else's design?