Nobody should go to work expecting to receive abuse, although in some occupations it may be seen as a side effect that is hard to remove. Journalism can be one of those jobs, and so, it seems, can moderating journalistic output. This research may be viewed as something of an expected eye-opener and yet, at the same time, as not necessarily containing anything that new. Some may also question parts of the supporting data and assumptions, but this need not devalue the core premise about online behaviour and responses to news articles.

Commenting on news articles can be a popular activity; since 2006 The Guardian alone has received over 70 million comments, and a fair number have been blocked and not published because they breached community behaviour standards covering abuse, irrelevance and legal issues. Some observers may also suggest that certain viewpoints get an easier ride at the moderation table than others, but that is not the point here.

The author contends, and offers what is described as evidence to show, that female and BAME (Black, Asian and minority ethnic) journalists’ perceptions that they are subject to more abuse than their male, white counterparts may be valid. The data presented suggests that this situation is even more pronounced in ‘male-dominated’ sections of the newspaper site. Is this revelation generated by accident or by real-world design?

Reader habits vary, and in routine news reporting I suggest it is less common for people to read an article, possibly disagree with it, and then form an opinion based on the stated gender or an evaluation of racial origin (where a photograph is not present or the name does not give a reasonably clear indication). A straw poll of several colleagues from different countries suggests a similar attitude. Columnists, of course, can be a different matter, as their names invariably have greater prominence in the headline (‘News roundup by David Journalist’ or ‘Jane Columnist argues against poverty in China’). It is also true that certain columnists tend to focus on particular issues (knowing which buttons to press) and can thus inflame people holding different views. That does not excuse vulgar, unwarranted comment, but equally, certain people can have ‘thinner skins’ in some areas of discussion, or accept no view contrary to their own.

Unfortunately, I could not see a deeper dive into the data in question, into the type of journalistic product, or even a comparison by subject. As someone who, from time to time, has followed BTL (Below the Line) discussions on the Guardian, I have noticed a perceived disparity in which opinions are permitted, and it cannot be taken as automatic that only those holding an alternative view are disproportionately more likely to use vulgar language and worse.

Despite this, there is validity to the data under analysis and to some of the tentative claims made. It almost begs for a closer, focused investigation. Here the Guardian could have been in a position to help, with a form of A/B testing in many cases. Sure, you cannot suddenly rename ‘Jane Columnist’ to ‘John Columnist’, but equally a report by ‘John Journalist’ could have its byline swapped to ‘John Al-Journalist’ or ‘Janilla Journalist’. Would subtle changes make a difference? If the same article were split for testing between two identities, even for the first few hours of publication, and that data analysed, it could be highly instructive. Of course, the researcher can only work with the available data, but the Guardian, which claims to be active in such data journalism and research, could quickly provide the facilities for this researcher or another to test the apparent evidence further; this is something that I would love to see.

For the avoidance of doubt, I do not doubt that there is unpleasant behaviour out there, such as racism, misogyny and foul comments. Equally, I am sure that women and members of minority groups are capable of slinging unwelcome barbs too. The research makes for a great, alarming headline, but the nuance is missing so far, I contend. It was admitted that the gender and background of individual commenters could not be determined, so is there a risk of assumptions being reached, even if not explicitly stated?

It would equally be valuable if the data could be analysed by moderator as well (the Guardian, it is claimed, employed 13 full-time-equivalent moderators to handle the flood). Is moderator A (a white, middle-aged male) more or less sensitive to perceived rule breaches on subject Y than moderator B (a black, twenty-something transgender person)? We are all human, and our views can pollute our neutrality, irrespective of what we claim and how we seek to avoid it.

The article itself is well composed, with a sound literature evaluation that raises some excellent thinking points. The positioning of comments within a gendered public sphere is valid, and the research methodology and implementation seemed fine, subject to the caveats and desiderata raised previously.

It is clear that correct content moderation has its place; comments such as ‘do you get paid for writing this?’ (dismissive of the author) or ‘the more corpses floating in the sea, the better’ (concerning the mass drowning of migrants) add no value. If you disagree with the author, fine, articulate it. If you are against unplanned migration and want to point out that such migrant transport is risky and only adds to the misery, fine, say so. Moderation should, one hopes, allow such comments through, but I know from personal experience (not at the Guardian) that even well-crafted, non-rule-breaking comments often get ‘blocked’ when they appear to go against the ‘narrative of the story’. Maybe one blocking is an accident, but several…? Nobody said that moderation is an easy job, but with its power comes responsibility.

The rest of the article dealt with perceptions from a staff survey about comments directed at staff, as well as reader feedback on an earlier form of this research. Both felt like standard pieces of research, adding colour to the mix, but the main meat on the bone is the analysis of the 70-million-odd comments (of which about 1.4 million were blocked for various reasons). Otherwise, other features such as the limitations and discussion sections were as to be expected.

I enjoyed the article. It was clearly written, relatively jargon-free and certainly thought-provoking. It left you wanting more: either to expose further the actual problem out there (for if the findings are universal, then society has some more significant issues), or to contextualise it and possibly show that unpleasantness exists but is less of a widespread, pointed problem than the initial analysis suggests. Anything that can cut down on abuse should be applauded, but you need precise tools rather than blunt ones to do it justice.

Even this review has turned into a mini-column, and not so mini at that. The article left such a mark on this reader, and the review should not be taken as entirely negative, although one may certainly question whether the scale of the problem and its focus are as universal as claimed; only further, nuanced research can identify that.

Gardiner, B., 2018. “It’s a terrible way to go to work”: what 70 million readers’ comments on the Guardian revealed about hostility to women and minorities online. Feminist Media Studies, pp. 1–17. doi:10.1080/14680777.2018.1447334

A post-publication review of this article, which appears on Publons.