The Medium Isn’t Always the Message

Social media will not set us free — but neither will they enslave us. New means of communication depend on people using them responsibly

Marina Gerner

Tahrir Square, Cairo, in 2011: Protesters organised and documented the revolution using Twitter, Facebook and YouTube (credit: Jonathan Rashad)

“We have just seen the civilised world gathered as one family around a common sick bed, hope and fear alternately fluctuating in unison the world over as hopeful or alarming bulletins passed with electric pulsations over the continents and under the seas.”
“Just as in a theatre you speak directly face to face with five or six hundred persons.”
“All the corners of the earth are joined, kindled, fused.” 

These are three reflections on the telegraph, written in the early 1880s — but they have an uncanny resonance today in discussion of social media. We praise social media for fuelling revolutions and spreading democracy. We blame them for hosting child pornography and allowing terrorists to communicate and recruit. We say they have created new jobs and destroyed old ones. We depict either an Orwellian dystopia or a utopian global village where we are all friendly neighbours. The early media theorist Marshall McLuhan, who coined terms like “the global village” and “the medium is the message” in the 1960s, is back in vogue. But the arguments for and against social media are not black and white; rather, they are of Rubik’s Cube complexity.

Over the last decade waves of civil protest have taken place around the world, both organised and reported through social media. The Arab Spring targeted totalitarian regimes in the region; the Spanish “Los Indignados” and the Occupy movement emerged after the financial crisis demanding social change. Social media were hailed as a powerful weapon, a weapon that did not involve arms. 

But as those movements failed to effect the reforms they wanted, confidence in the power of social media has waned. Still, they have helped to bring about some change. They give visibility and a limited degree of protection to political dissidents, like the Russian journalist Oleg Kashin, who documents political protests, or the Chinese dissident artist Ai Weiwei, who used Twitter to record the names of children who died when badly constructed school buildings collapsed in the 2008 Sichuan earthquake.

Because of the rise of smartphones and tablets, more pictures were taken in the last three years than in all of previous history. Photographs taken on the phones of spectators during the Boston Marathon helped investigators identify the suspects who planted two bombs, killing three people and injuring 264. The Guardian’s investigation into the death of Ian Tomlinson, a passing newspaper vendor, during a demonstration against the G20 meeting in London in 2009 was helped when an American witness provided a video of the incident taken on a phone. 

But social media can also spread pictures that lie. In the aftermath of Hurricane Sandy in 2012, an image of the Statue of Liberty surrounded by a spectacular swirl of clouds was widely shared on Twitter — only to be exposed as a fake. More recently, the BBC revealed that many pictures of destruction and devastation circulated in relation to Israel’s Protective Edge operation in Gaza were actually from the Syrian conflict — nothing to do with Israel. Social media have created new ways of seeing but have also obscured the truth.

Authoritarian leaders loathe social media. In 2013, Turkey’s prime minister Recep Tayyip Erdoğan said: “To me, social media are the worst menace to society.” Social media provide an alternative channel for information and communication, far from old dictatorial tricks like orchestrated rallies televised by a state broadcaster. 

But these alternative channels of communication depend on the people who power them. Social media are, in that sense, not media, but a tool for speech. Twitter is akin to a loudspeaker and Facebook is like a noticeboard. Only if a critical mass of people calls for freedom and knowledge will social media live up to their revolution-starting reputation. We have learnt that the medium is not the message: a message is needed too. Thanks to social media, more messages now exist than ever before. Seventy-four per cent of internet users use social networking sites, according to the Pew Internet and American Life Project. Social media have unleashed a torrent of human thought and emotion. But does this cacophony of messages amount to anything that Pandora hasn’t already released from her box? Social media are anarchic and protean. They have brought forth what Hobbes might have called a state of nature 2.0. 

“Things fall apart; the centre cannot hold; Mere anarchy is loosed upon the world,” wrote Yeats in 1919. Nearly a century later his words ring true as the internet has undermined hierarchical sources of knowledge. The nature of knowledge, education and politics is changing; everyone can now see and contribute different points of view online, argues David Weinberger, a senior researcher at Harvard’s Berkman Center for Internet and Society. 

But the web’s abundance of information is not a new problem. Ecclesiastes 12:12 reads: “Of making many books there is no end; and much study is weariness of the flesh.” The Roman philosopher Seneca wrote: “What is the point of having countless books and libraries whose titles the owner could scarcely read through in his whole lifetime?” Today’s overflow of information means reliable journalists are needed as gatekeepers and filters to determine what is important.

Social media have a ritualised hold over us. Smartphones accompany us to the bathroom, to bed and on public transport. A corresponding etiquette has not fully developed yet. Just how acceptable is it to whip out your phone in the company of other people? Rules of communication develop over time, and so does the rule of law. 

The justice system is catching up with technology. While banning social media is about as useful as banning the postal system, we draw the line at criminal activities such as death threats or child pornography for which people can be prosecuted just as they can be in “real life”. But legal repercussions are not confined to death threats. Defamation online is as punishable as it is offline.

Today, 80 per cent of people in the UK worry about their personal data online. Edward Snowden’s revelations, WikiLeaks and the rise of big data have spurred inquiries, outrage and suspicion. 

In May, the European Court of Justice ruled that Google must uphold the “right to be forgotten” and honour individual requests to remove private data from its search results.

In June, Facebook, which remains the dominant social networking platform, came under fire for experimenting with the data of almost 700,000 of its users in an “emotional contagion” study. In collaboration with American academics, Facebook researchers removed either positive or negative posts in newsfeeds and found that we are more likely to post happy status updates if our friends do. 

This is hardly surprising. But legal experts and privacy campaigners criticised Facebook for conducting a mass experiment without informing users they were guinea pigs. Facebook countered: “It was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.” Later, however, Facebook’s lead researcher Adam Kramer issued a public apology: “I can understand why some people have concerns about it, and my co-authors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.” 

The philosopher Onora O’Neill defines informed consent as “permission granted in full knowledge of the possible consequences”. Just how informed is our informed consent on social media? Studies show that most of us never read the “terms and conditions” before signing up. And those who try are unlikely to make sense of the lengthy legalese they are presented with. 

Even if the terms and conditions were only 500 words long, people would be unlikely to read them. Researchers are trying to come up with graphic solutions instead. They might be icons we are familiar with, or a traffic light system, similar to nutrition labels on food packets. But the internet is more complex than that. Companies need to take responsibility and come up with ideas on how to simplify their terms and conditions. Informed consent should be a reality rather than a formality.

We worry about our online data, and even more so about the online experiences of those considered the most vulnerable — children. Parents worry their children might encounter strangers, violence, pornography, bullying, racism or self-harm websites online. It is easy to understand calls for more internet restrictions for children. 

But we need a more balanced view on the risks and benefits for children online, argues Professor Sonia Livingstone, who was awarded the OBE for “services to children and child internet safety” this year. The large-scale “EU Kids Online” study she coordinated across Europe reveals that most children have positive experiences online, while only one in seven said something they encountered online over the past year upset them.

Livingstone says the internet poses the biggest risks for children who have psychological difficulties at home and outside the web. They need professional and parental support, including guidance on online risks. The challenge is to protect children from rare harmful occurrences, without limiting the opportunities to learn, share and develop resilience online enjoyed by the majority of youngsters. 

The study reveals that only 36 per cent of nine- to 16-year-olds say they know more about the internet than their parents. The myth of the tech-savvy “digital native” has obscured children’s need for support in developing digital skills. The “stranger danger” fear is also something of a myth: most (87 per cent) of 11-16-year-olds are in touch online with people they know face to face. Only 9 per cent of children have met up with someone they first met online. Very few went unaccompanied or met someone older and only 1 per cent had a negative experience. As for the “everyone watches porn online” myth, one in seven children saw sexual images online. The report concludes: “The media hype over pornography is based on unrepresentative samples or just supposition.”

The internet was intended as a 21st-century Agora but algorithms have changed the rules of the game. If you and I Google something at the same time our results will be different, because Google takes our location, our past searches and other factors into account when it shows us our results. 

Since December 2009, Google has customised searches for each user. Instead of showing us what is objectively the most popular result, it shows us the result it deems most relevant to us, so that we are more likely to click on it. Similarly, Facebook shows us updates we are more likely to comment on. Twitter, too, doesn’t show all the tweets of everyone we follow, but applies a kind of quality score system. After all, these are for-profit companies that rely on us spending time on their sites and coming back for more.

Eli Pariser, internet activist and chief executive of the viral content-sharing site Upworthy, sees this as a dangerous development. He argues that the result is a “filter bubble” which is “a unique universe of information” that surrounds each of us. This means that we are less likely to encounter ideas or political views unlike our own. When we flick through a newspaper we see articles on topics we were not specifically looking for. When we wander through a physical library we are likely to discover something unexpected. In many cases it is possible to turn off personalisation settings on websites, or at least be aware of them. Otherwise, the world of algorithms invites us to sit in our own exclusive club, eating the same kind of sandwich day in day out while discussing the same topics with the usual suspects.

In his book Writing on the Wall: Social Media — The First 2,000 Years (Bloomsbury), Tom Standage argues that the world of social media is the norm while the era of mass media was an anomaly. He explains that social media, defined as “media we get from other people, exchanged along social connections”, can be traced back to ancient Rome, where letters were copied and shared on papyrus rolls. This mode of communication reappeared again and again in the centuries before the era of mass communication.

Distrusting the media is not a new phenomenon either. Long before the invention of the printing press, Socrates was suspicious of the written word. He feared that it travelled beyond the possibility of question and revision, and thus beyond trust. But as time passes so will our worries. The Pew Research Center report Digital Life in 2025 argues that the internet will become “like electricity: less visible, yet more deeply embedded in people’s lives for good and ill”. 

Social media won’t make us geniuses or morons. They won’t bring about world peace, nor will they create a fake world of empty experiences. Social media would be without meaning or content if it weren’t for those who use them: us.
