Michael James Walsh, University of Canberra and Stephanie Alice Baker, City, University of London
Twitter has come under increasing public scrutiny for facilitating hostile communication online. While the micro-blogging site promotes itself as providing a “free” and “safe” space to talk, critics have highlighted the company’s inept response to repeated instances of trolling, harassment and abuse.
Our research into how people present themselves and manage their interactions with other users on Twitter suggests that case-by-case responses are inadequate. We found Twitter’s design promotes avoidance as the easiest solution to hostility, without offering space for the kind of restorative activity that could lead to more genuine resolution of conflict.
Hostility on Twitter is disproportionately directed towards women, people of colour and marginalised groups. For example, in 2016 US comedian Leslie Jones was inundated with racist tweets following the release of the film Ghostbusters.
Black and Indigenous sportspeople, such as Adam Goodes, Glen Kamara and Lewis Hamilton, have also been subjected to racist abuse on Twitter and have implored the platform to do more in response.
More recently, racist tweets against Black English footballers proliferated after the national team’s loss to Italy in the UEFA European Football Championship.
In 2018, Amnesty International published a report detailing the extent of abuse directed at female users, describing Twitter as “a toxic place for women”. The report criticised Twitter for failing to respect women’s rights and respond to reports of violence in a transparent manner.
Part of the reason for the degree of hostility on Twitter lies in the way the platform is designed. Sociologist Ian Hutchby called this the “affordances” of a technology: the material possibilities a technology affords its users, the types of actions it enables and constrains.
Twitter’s affordances shape how users interact on the site. These include platform features (such as likes, retweets and mentions), accounts being public by default and the capacity for users to be anonymous. The character limit of tweets also facilitates brief, impulsive and often hostile exchanges.
In 2017, the company introduced changes to reduce hostility on Twitter. Notable changes included doubling the length of tweets from 140 to 280 characters. Twitter also introduced “threads” to connect a series of tweets into a longer commentary and provided the option to hide replies. These changes were an attempt to “help minimise unwanted replies and improve meaningful conversations” on the platform, but hostility on Twitter persists.
One reason for the degree of hostility on Twitter is that the site’s metrics can be gamed to elevate controversial and abusive content. Research also shows false and misleading news is retweeted more than authentic stories, especially among like-minded groups.
In 2018, Twitter launched a “healthy conversation strategy”. This aimed to assess the “health” of interactions on Twitter with a view to improving them.
In 2019 we conducted an online questionnaire to explore how internet users respond to hostility on Twitter. Our study found Twitter users deploy several common strategies to manage hostile interactions on the site.
These include the use of pseudonyms and multiple accounts to achieve a degree of anonymity and privacy, as well as blocking users and self-censoring to pre-emptively limit exposure to harassment and abuse.
Users know they are vulnerable on the platform and artfully manage their social interactions by anticipating hostility, managing the immediate information environment through protecting their tweets, adopting different personas via multiple accounts and limiting how they communicate online.
These observations suggest users are finding ways to “save face” online. The sociologist Erving Goffman called this kind of activity “face-work”.
In Goffman’s model, we employ different “faces” to adapt to specific interactions and environments:
We have party faces, funeral faces, and various kinds of institutional faces.
The aim of face-work is to create a positive impression of ourselves to others. When we “have face”, we succeed in presenting a consistent self that others validate. In contrast, we “lose face” when information arises that undercuts our presentation of self.
Our research extends the idea of face-work to examine the strategies Twitter users employ to interact with others.
We suggest that users adopt a type of “Twitter-face”: a face-work tactic of responding to hostile interactions in a way that will protect the user’s metaphorical face.
Hostile interactions on Twitter often take specific forms, such as doxing, pile-ons and ratioing. In each of these, a user’s face is confronted by co-ordinated attacks that disrupt the positive impression they are trying to present.
Face-work generally occurs in two ways. The first is avoidance, in which people try to avoid face-threatening information or prevent others from seeing it. The second is correction, where people make efforts to apologise for their own face-threatening actions.
Corrective face-work on Twitter follows a recognisable sequence: a person’s face is threatened, they attempt to correct the threatening information, and the conflict is resolved with an apology and acceptance.
Avoidance, on the other hand, often takes the form of blocking other Twitter users.
Our findings show Twitter users overwhelmingly use avoidance practices as a defensive strategy to prevent hostility on the site. Specific techniques include blocking other users, protecting tweets, adopting pseudonyms and multiple accounts, and self-censoring.
Under normal circumstances, both avoidance and correction are vital aspects of face-work, but on Twitter there appears to be an overemphasis on avoidance at the expense of correction.
This places Twitter in a difficult situation. Users desire greater control over how they interact, but new features allowing greater control seem to privilege avoidance and may reduce attempts to engage in restorative interactions.
Beyond introducing isolated features, which place responsibility on the individual user, Twitter needs to reconsider the algorithms and metrics (such as likes and retweets) that enable the company to profit from co-ordinated harassment campaigns, controversy and abuse. This could include hiding like counts, removing retweets, or curbing the algorithmic amplification of hostile content.
Michael James Walsh, Associate Professor, University of Canberra and Stephanie Alice Baker, Senior Lecturer in Sociology, City, University of London
This article is republished from The Conversation under a Creative Commons license. Read the original article.