And here is the algorithmic play I wanted to share for this block; you will find other ideas and experiments in my posts below. The video is a bit long because I wanted to show and comment on (parts of) my explorations.
Feel free to jump to the conclusions (the ‘summing (some) thing up’ slide). In a nutshell, I compared two accounts that I have been using for a while: my private account and an account that I have shared with a group of young Congolese people since 2013. Searching for the same key terms and following the YouTube suggestions on each, I found that:
- Initial suggestions were always identical, but…
- Educational videos led to US university-level videos for ‘my’ account and to Chinese, easy-English, high-school-level videos for the Congolese account.
- Topics of key interest in the DRC, such as ex-president Kabila, led to a Kabila loop/echo chamber for the Congolese account and to videos on other African presidents in ‘my’ case.
- The keyword Ebola led to more worrying videos on Ebola in the case of the Congolese account, while the topic quickly switched to coronavirus in ‘my’ case.
Where do we go from there? It is not hard to see how forms of racism and discrimination can be generated by YouTube, and how echo chambers emerge. But the more important point, echoing some of Kitchin’s points (2017), is really about the actual usage of the Internet and YouTube: what about shared accounts, shared identities, and shared practices? A key assumption behind so many algorithms is that one account is one user, and it is hard, if not impossible, to account for users that are multiple people, and perhaps even people with complex and multi-layered identities (with some identities becoming more prominent at specific times, as Amartya Sen would write).
This does matter, enormously, for education in ‘low tech’ and poor environments where devices and accounts are shared between students: as long as the algorithms fail to incorporate divergent usage of technology, they will not be useful, and may even be detrimental, to students.
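To make the one-account-one-user assumption concrete, here is a minimal toy sketch in Python. It is purely illustrative, not YouTube’s actual system: it assumes a hypothetical recommender that keeps a single interest profile per account and suggests whatever that profile has watched most. All names and numbers are invented for the example.

```python
# A toy illustration of the one-account-one-user assumption
# (not YouTube's actual system): one interest profile per account.

from collections import Counter


class ToyRecommender:
    """Keeps a single interest profile per account and recommends
    the topic that profile has watched most often."""

    def __init__(self):
        self.profiles = {}  # account_id -> Counter of watched topics

    def record_watch(self, account_id, topic):
        # Every watch, by whoever is holding the device, is folded
        # into the same per-account profile.
        self.profiles.setdefault(account_id, Counter())[topic] += 1

    def recommend(self, account_id):
        profile = self.profiles.get(account_id)
        if not profile:
            return None
        # One account = one user: the majority interest dominates.
        return profile.most_common(1)[0][0]


rec = ToyRecommender()

# Five students sharing one account; four watch Kabila-related videos,
# one is trying to study calculus.
for topic in ["kabila", "kabila", "kabila", "kabila", "calculus"]:
    rec.record_watch("shared_account", topic)

# The calculus learner now gets Kabila suggestions too: the algorithm
# cannot see that the account is several different people.
print(rec.recommend("shared_account"))  # -> "kabila"
```

The sketch shows the structural problem rather than any real implementation: as long as the unit of personalisation is the account, a shared account collapses several people into one averaged profile, and minority interests within the group simply disappear from the suggestions.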
References
Kitchin, R. 2017. Thinking Critically about Researching Algorithms. Information, Communication & Society, 20(1), 14-29. DOI: 10.1080/1369118X.2016.1154087
Fantastic and insightful artefact, JB, and I really like the way you have published it as a screencast presentation alongside your commentary, allowing us to see your conclusions alongside visuals of your explorations.
As you say, examples of racism and discrimination being generated/reproduced through YouTube and other algorithmic systems are not very difficult to find, but the (perhaps more subtle) assumptions being made about aspects such as identity and shared accounts that you point out are such an important issue. During my own explorations of Coursera and other “apps” during this block, I’ve often felt that the way they have been designed creates so many exclusions through these kinds of assumptions, and that a democratic design process that includes a diverse range of voices is so important yet sadly rare.
I’ve been following the notion of platform cooperativism, which may be one possible approach to address this:
https://blog.p2pfoundation.net/participation-codesign-diversity-trebor-scholz-on-platform-cooperativism/2017/12/08
Thank you for bringing up such important points, and really enjoyed your artefact!
Thanks very much, and thanks so much for the link too!
There is a very important issue behind your algorithmic play, JB, and that is the question of how the human element influences the design of algorithms and what results they are supposed to reproduce. Perhaps it is no wonder that many of these algorithms are black-boxed and kept secret. Even the results generated are skewed towards those countries that have a higher density of users connected to the internet. Should algorithms, then, be considered objective in their results? Probably not.
This was a great presentation, JB, with many ramifications and questions about the real function of algorithms. Thank you for the many insights.
I really enjoyed your artifact, JB!
While doing my own algorithmic play, I also felt that the digital divide is likely affecting search results. People (or countries) with stable access to the internet can participate more in watching and creating videos, which leads to biased search results. Some countries do not allow their citizens to access YouTube, Google, etc., and students in less privileged areas lack digital devices, so their interests may be reflected less than those of their counterparts. That is why we should not just be consumers of the results of algorithms, but participants in the whole process of algorithm design.
Thank you for sharing your artifact!
Excellent point – I think the question is indeed also one of whether people can participate, and likely also the extent to which they participate and whether they see participation in the same way those in power do. In a sense this makes me think even more about some of the issues of our last block!
I really loved your artefact, JB! It demonstrates how biased, discriminating and limiting technology can be. I was particularly impressed by the fact that personalization graded the language. This is where the notorious loop comes into play: your English is not ideal, and it will stay that way, because you are never exposed to authentic speech. And who said it was not ideal, after all?
It is also a deep thought that one account is not necessarily equal to one user or one identity, and that this can be misleading. Inspired by your research, I made a few searches on my own YouTube and saw that 99% of everything on offer involved white people. I’m sure that the same search in Africa would have given the opposite results. So how barrier-free is the Internet space? It’s a big question. Maybe, in time, search engines will adopt the same principle as films in the US and fight for diversity, including people of different genders, ages, races and abilities in the same search results, who knows? Once again, we are not talking about the best-quality content coming out on top.
Thank you for such a thought-provoking piece!
Thanks – what I am really curious about is how ‘deliberate’ the black box is: is it just fed by biased data, or created by paternalistic engineers? This is sort of what you are saying, and it also brings us back to deeper questions about education as consumerism (if students are consumers, then they should be given what they like… not challenging videos in higher-level English!)
Fascinating experiment! The assumptions algorithms make simply based on race, gender, etc. are so profound! While many of us are aware of this happening, it’s so difficult to tell how these assumptions really affect people. How would your worldview, your knowledge, your relationships change if you were provided with different information? Very thought-provoking!
Dear JB,
what an interesting point you raise about devices that are shared amongst many users. With shared devices or shared accounts, the algorithm can never really customise the content to the individual. In terms of educational usage, we find a digital divide here, where less well-off students who share devices will not get content catered to their needs, or ‘what the algorithm thinks they need’. That might not be a bad thing!