— Michael Wolfindale (@mwolfindale) March 14, 2020
While browsing through my Facebook and Twitter feeds today, and reflecting on the relation between power and algorithms (Beer 2017), I noticed a friend had posted on Facebook a Guardian article asking ‘Why don’t we treat the climate crisis with the same urgency as coronavirus?’
Immediately below, a post was displayed from another friend (from different circles) linking to an article from Business Insider about the importance of hand washing and the coronavirus.
It’s difficult to tell why the algorithm placed these posts next to each other – whether there was a connection between the two, or whether each was deemed independently likely to elicit some kind of ‘positive reaction’ (or reaction of another sort) from me. However, it did make me reflect on how, initially, these posts felt (at least to me) ‘pitched’ against, or at odds with, one another by being placed in close proximity, provoking reactions I might not otherwise have had if I had seen each post independently.
@mjahr talked about these early efforts on the Twitter blog…
‘…when you open Twitter after being away for a while, the Tweets you’re most likely to care about will appear at the top of your timeline…’ (@mjahr on the Twitter blog, 2016)
…and went on to praise their “success”…
‘We’ve already seen that people who use this new feature tend to Retweet and Tweet more, creating more live commentary and conversations, which is great for everyone.’ (@mjahr on the Twitter blog, 2016)
Furthermore, this HootSuite blog post downplayed criticism of the Twitter algorithm somewhat, stating that ‘the algorithm drove more engagement from users’.
Yet what does ‘engagement’ mean, how do they know what we ‘care about’, and how can they possibly prove that it is ‘great for everyone’? Driving traffic and increasing usage may be ‘great’ for both Twitter and HootSuite – companies that attempt, in one way or another, to derive profit from such usage – but this kind of biased, uncritical language is perhaps all too common in some circles (such as those profiting through advertising or the sale of associated products).
Much has been written about the questions and issues that algorithms such as these raise, not least the relationship between political rhetoric and social media; for example, by Oliver (2020). I continue to read, explore and critically reflect, particularly pondering over what this might mean in an educational context (such as our own use of Twitter during this course)…
This paper explores the relationship between social media and political rhetoric. Social media platforms are frequently discussed in relation to ‘post-truth’ politics, but it is less clear exactly what their role is in these developments. Specifically, this paper focuses on Twitter as a case, exploring the kinds of rhetoric encouraged or discouraged on this platform. To do this, I will draw on work from infrastructure studies, an area of Science and Technology Studies; and in particular, on Ford and Wajcman’s analysis of the relationships between infrastructure, knowledge claims and politics on Wikipedia. This theoretical analysis will be supplemented with evidence from previous studies and in the public domain, to illustrate the points made. This analysis echoes wider doubts about the credibility of technologically deterministic accounts of technology’s relationship with society, but suggests however that while Twitter may not be the cause of shifts in public discourse, it is implicated in them, in that it both creates new norms for discourse and enables new forms of power and inequality to operate.
Here are a few screenshots of Twitter users recommended to me, based on a tweet I liked. I sense that, should I follow all these users, I may be entering an ‘echo chamber’ or getting into a ‘you loop‘…
NB: I did follow a number of these users, and noted that later tweets did tend to broadly have similar viewpoints or at least regularly discuss similar issues.
Below is another, different kind of recommendations feed on Twitter – ‘you may be interested in…’. Many (but not all) of these felt less ‘relevant’ to me, either being based in a different geographic location or not relating to an area I was interested in. I assume these are users that many of those I follow have liked or follow themselves – a slightly more distant connection. If I were to follow these users, I suspect my feed would come to resemble even more closely the views of those I already follow, again with the potential of creating an ‘echo chamber’.
The “echo chamber” is an issue that has been widely written about, and it appears prevalent here. While I am aware of it, I still find myself unwittingly engaging in it, perhaps keen to be “up to date” with the latest from those I agree with. However, I am increasingly finding this limiting and would prefer more engagement with those I do not broadly agree with. I am aware Twitter are experimenting with exposing people to opposing views, although this has been met with skepticism by some.
For me, the timeline is still presented in such a way as to suggest you are receiving a wide range of views, news and updates, despite this not being the case. Furthermore, I cannot see any evidence that Twitter have our “best interests” at heart, since they are ultimately a company seeking profit.
Great point, Monica! I think it’s a contrasting approach and no less valid if acknowledged as such. I had similar issues but concluded that from the moment I announced myself (as a researcher) I (and the study) became entangled in the course/community I was studying… #mscedc
— Michael Wolfindale (@mwolfindale) February 29, 2020