It is said that one of the best ways to manipulate social media users is to make them feel scared or empathetic. The alleged role of Cambridge Analytica and Facebook in the election of Donald Trump seems to prove it. However, Camille Alloing, a communications and information science researcher at UQÀM, suggests that the power social media holds over our emotions needs to be taken with a pinch of salt.
Could you explain what “affective capitalism” is?
Simply put, it is the part of capitalism that exploits our ability to be moved (and to move others) to generate value; something we particularly see on social media. However, it’s worth looking a little closer at the word “affect”. It is a term that can be used to refer to any emotion, but the important part is how it “sets us in motion,” meaning what causes us to take action.
When I “like” a post, I am affected. Unlike emotions (which remain difficult to analyse due to their subjective and unconscious nature), affective consequences can be identified (you can know that a video affected me because I pressed the “like” button). So, although we cannot ascertain whether digital platforms actually succeed in provoking emotions in users, we can analyse how users behave.
Given that most social media revenue comes from selling ad space, the goal of these platforms is to increase the time users spend on them, and therefore the number of ads viewed. To that end, affect is undeniably extremely useful – content that stirs empathy prompts more reactions and is shared more widely.
I have found that individuals are now part of structures that can affect them (and therefore make them feel emotions and make them act) without being able to affect those structures in return. If I post something and expect a response from my friends, I am alienated, because I will only get that response if Facebook chooses (for reasons beyond my control) to show my post in my friends’ feeds.
You say that “affect is a powerful tool”. Does that mean the power to manipulate people through their emotions?
If I said that affecting a person meant you could successfully manipulate them, I would be agreeing with the arguments the platforms themselves put out. Facebook, for example, has every reason to let people think that its algorithms can control users, because that helps it sell ad space. In this way, the Cambridge Analytica scandal [in which the company attempted to manipulate Facebook users to influence American swing voters in the 2016 US presidential election in favour of Donald Trump] provided Facebook with incredible publicity among its advertisers, who saw it as an opportunity to drastically increase their sales by manipulating users!
However, the role of social media in Trump’s election must be put in perspective, and we should be careful not to trust oversimplified explanations. Even though Facebook boasted that its targeted advertising was 89% accurate, in 2019 employees revealed that average accuracy in the US was in fact only half that (41%, and as low as 9% in some categories) [1]. Sure, these platforms’ algorithms and functionalities have tangible effects… but those effects are far smaller than you might think.
The research is there to facilitate well-balanced debates, and scientific studies [2, 3] have shown that, contrary to what we might hear, social media platforms cannot actually manipulate us. That doesn’t mean they don’t try, but they cannot control who they affect nor what the consequences of their initiatives are. What’s more, this belief in their power can quickly become dangerous, all the more so given that their concept of human psychology leaves much to be desired. Believing that people are blindly subject to their emotions and cognitive biases is a form of class contempt.
In 2014, Facebook hired researchers to perform psychological tests that aimed to manipulate the emotions of 700,000 users, without their consent [4]. This “scientific” study was meant to demonstrate the platform’s ability to control the mood of its users and involved modifying people’s news feeds to show them more negative (or positive) content. As a result, they claimed that they could cause “emotional contagion,” as people would publish content that was more negative (or positive, depending on what they had been shown). However, on top of the obvious ethical issues, the experiment was statistically flawed, and the conclusions do not hold up. But I think it’s fair to say that scientific rigour was probably not their priority! Above all, the objective was to create good publicity among advertisers – Facebook uses research as a PR tool.
Yet it is important to remember that affecting someone is not necessarily negative – it all depends on our intentions. We are constantly affecting each other, and when we are feeling down, we need to be affected in a positive way. We simply need to carefully consider who we are allowing to affect us. Should private companies have this power? Should the government?
Should we be concerned about biometric emotion detection?
Yes. We are currently seeing the widespread dissemination of biometric tools that measure emotions. In our book [5], we mention a comedy club in Barcelona, the Teatreneu, where the price of your ticket is calculated from the number of times you laugh (30 cents per laugh). This example is pretty anecdotal, but less amusingly, biometric technologies (which until recently were nothing but basic experiments for commercial ends) are now being used to monitor citizens. The NYPD has spent more than $3 billion since 2016 on its algorithms, which use targeted ads to measure the attitudes of 250,000 residents towards the police [6].
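Purely as an illustration, here is a minimal sketch of how the “pay-per-laugh” pricing described above could be computed, assuming a camera counts laughs during the show. The 30 cents per laugh comes from the interview; the function name and the optional price cap are hypothetical details, not taken from the text.

```python
from typing import Optional

# Minimal sketch of Teatreneu-style "pay-per-laugh" pricing.
# 0.30 EUR per detected laugh is the figure cited in the interview;
# the optional cap is a hypothetical parameter, not from the text.

def ticket_price(laughs_detected: int,
                 price_per_laugh: float = 0.30,
                 max_price: Optional[float] = None) -> float:
    """Return the ticket price given the number of laughs detected."""
    price = laughs_detected * price_per_laugh
    if max_price is not None:
        price = min(price, max_price)
    return round(price, 2)

# Example: 25 detected laughs -> 7.50 EUR (uncapped).
print(ticket_price(25))
```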
Another problem is that this biometric emotion detection technology is very bad at its job. It is based on the work of American psychologist Paul Ekman and his Facial Action Coding System [a method of analysing facial expressions that aims to associate certain facial movements with emotions], which does not actually work in practice.
Despite their ineffectiveness, these biometric tools are spreading at a rapid pace – yet technology is much more dangerous when it works badly than when it doesn’t work at all! If it’s 80% reliable, and you are part of the 20% margin of error, it will be up to you to prove it. I find it very concerning that poorly functioning tools are becoming tools of governance and surveillance, implemented without the consent of the main parties involved.