Why Facebook’s Social Experiment Does Not Warrant Criticism

The social ‘experiment’ recently conducted by Facebook has received its share of criticism. Although the full objective and findings of the research are not public, Facebook has accepted that one set of users in the experiment was fed ‘negative’ stories. Without getting into the finer points of the agreement clauses (built into the user’s agreement with Facebook at sign-up, or at any other point of interaction between a user and Facebook) that allow this kind of experiment, such criticism can rest only on ethical grounds, and it should stop there. Even the criticism on ethical grounds is weak and gives no basis for passing harsh judgement.

Those criticising the experiment make the following arguments.

1. Now Facebook can use data-based algorithms to predict what content we consume.
Technology and data-driven media such as Facebook always run the risk of attracting the harshest criticism, because they are feared for controlling our lives in ways that do not come naturally to humans. ‘Data and technology are taking over our lives’ is a loosely made, common grievance. The apt question in this context is: how far can a machine’s algorithm be trusted with a job long assigned to human judgement? Science and psychology may provide some answers. Writing about human intuition versus algorithms, the acclaimed psychologist, researcher and Nobel laureate Daniel Kahneman argued that experts can be inferior to algorithms. Explaining the human fear of algorithms, he writes, “The story of a child dying because an algorithm made a mistake is more poignant than the story of the same tragedy occurring as a result of human error, and the difference in emotional intensity is readily translated into a moral preference.” But as more and more people grow accustomed to algorithms recommending books, music or software tools, the idea of algorithms playing a bigger role in our information intake will become more acceptable.

2. Facebook influenced the emotional response of some users and manipulated their lives.
This is the most discussed and most serious allegation against the experiment. Yes, many studies suggest a correlation between negative news and negative behaviour, but can correlation substitute for causation? Even in the studies that link negative stories to unpleasant emotions, or unpleasant emotions to negative behaviour, the induced emotions or behaviour can differ from person to person. A war-crime story can elicit anger against the perpetrators in one reader and apathy towards the system in another. Moreover, in this day and age, when news is consumed through more than one platform, it is improbable that any specific behaviour can be linked to any single news item. If the traditional media (where the news of Facebook’s experiment has been prominently featured) actually believed that the nature of a story could affect our lives, we should not be seeing stories about death, violence against women and other crimes. In fact, most of the research that established a relation between negative emotion and negative news was done on newspapers; yet newspapers have no unambiguous policy on such news. Even if traditional media acknowledges the effect of ‘negative’ content, such content is a reality of our information consumption.

3. Facebook is trying to understand users’ emotional responses, which it could later use for its own business gain.
This argument does not hold much water. Technology-based media companies often use their customers as the ‘product’, and the practice is becoming an accepted business model. It is, in fact, not much different from what traditional media have been doing for decades. Traditional media have always banked on their audience profile to sell advertising space: they offer the kind of content that best suits their audience, and that helps them gain maximum leverage with advertisers. Traditional media may not have run experiments to work out the relation between content and its acceptability among the audience, but it is no secret that content variation in many newspapers is guided by the editor’s judgement about the consumption habits of a particular region or edition. Would these efforts to design content for a specific audience, with the objective of increasing interaction, be called manipulation?

Fortunately for Facebook, the criticism is directed at this particular experiment, not at the idea of such experiments. Experiments like this should not be feared. But what if Facebook found that people interacted with negative content more than with positive content? Would we ever know? And what would that tell us about Facebook and the world?