‘These algorithmic systems are not standalone little boxes, but massive, networked ones with hundreds of hands reaching into them, tweaking and tuning, swapping out parts and experimenting with new arrangements…we need to examine the logic that guides the hands…’ (Seaver 2013: 10)
Early on in this block, I played with a range of different algorithms, looking at the recommendations served up in response to my existing activity. I created a “recommendations feed” for my lifestream, and also made general notes and screenshots in a number of posts.
Later on, thinking specifically about digital education and tying the activity in with our MOOC exploration, I first looked at my FutureLearn recommendations, and then focused on Coursera – where I was able to provide false data for my “profile” and “learning plan” and observe the resulting changes in recommended courses.
Some initial conclusions
Many of the “recommendation engines” I played with, such as SoundCloud, Spotify and Twitter, led me into what I perceived to be a “you loop”, “filter bubble” or “echo chamber”. Google’s autocomplete showed some signs of reproducing existing biases, perhaps amplifying dominant views held by other users. Google may also be making assumptions based on the data they hold about me, which may have skewed my YouTube search results, although it would be interesting to compare results with other users or in different locations, an approach Kitchin (2017: 21) discusses. I have also written a little about ethical concerns and my Google data.
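As an aside, the basic mechanics of a “you loop” are easy to sketch. The snippet below is purely illustrative (my own invented example in JavaScript, emphatically not the actual code of any of the platforms mentioned): a naive recommender that boosts whatever was clicked before, so the feed narrows with every interaction.

```js
// Purely illustrative: a naive recommender that reinforces past clicks.
// The items and topics are invented; no real platform's code is shown.
const items = [
  { id: 1, topic: "music" },
  { id: 2, topic: "politics" },
  { id: 3, topic: "music" },
  { id: 4, topic: "sport" },
];

const clickCounts = {}; // topic -> number of past clicks

function recordClick(item) {
  clickCounts[item.topic] = (clickCounts[item.topic] || 0) + 1;
}

function recommend() {
  // Rank items by how often their topic was clicked before: the more
  // "music" is clicked, the more "music" floats to the top of the feed.
  return [...items].sort(
    (a, b) => (clickCounts[b.topic] || 0) - (clickCounts[a.topic] || 0)
  );
}

recordClick(items[0]); // a single "music" click...
console.log(recommend().map((i) => i.topic)); // ...and "music" now leads
```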
Moving my focus to Coursera, its recommendations appeared to privilege courses with an information technology/computer science focus, although the range of available courses is clearly a factor too. In any case, the founders’ background in computer science, and education at a Western university, shone through despite my attempts to tweak my “profile” and “learning plan” (I have written a little about the “profile” ethical issues too). This appears to be a common theme in “apps” or websites developed by Western companies, and the design processes used (whereby products are developed first for the “profile” of the founder(s), and only secondarily for others) arguably create exclusions for some (and a strong sense of “this wasn’t designed for me”) and inclusions for others (notably, switching my profile to “software engineer” produced a wealth of “relevant” results which I could tailor to my choosing).
My artefact: Algorithmic systems: entanglements of human-machinic relations
You can see the Coursera ‘algorithmic play’ posts in a lifestream feed, and also through this CodePen (which brings in the same feed but presents, ranks and orders it slightly differently). How might consuming the feed through different ranking systems affect your perception, and how might the preamble surrounding it have changed your conclusions and what you click on?
You can also access the “code behind” the CodePen. However, even though access is granted to this code, it will not necessarily be easy to make sense of it (Kitchin 2017: 20; Seaver 2013). While presented as ‘open’ source, is it really ‘open’, and for whom? What exclusions might this create?
The CodePen (while slightly misleadingly called an “Educational Algorithm”) is a very basic attempt to present the feed of Coursera recommendations in a different fashion for comparison, while provoking some thought around algorithmic systems in general. There is no complex processing, just a little reordering. It also does not store any data (information entered is used within the browser to rank articles, but not stored in an external database), and the text descriptions are fictional – it is just for visual/demonstration purposes.
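To give a flavour of the sort of reordering involved (a simplified sketch with invented course titles and variable names, rather than the CodePen’s actual code), the text a visitor enters can be used to score and sort the feed entirely within the browser:

```js
// Simplified sketch of in-browser ranking; nothing is sent to a server.
const feedItems = [
  { title: "Machine Learning", keywords: ["computer science"] },
  { title: "Modern Art", keywords: ["arts"] },
  { title: "Data Structures", keywords: ["computer science"] },
];

// "profileText" is whatever the visitor typed; it lives only in this
// function call and is never stored in an external database.
function rankFeed(profileText) {
  const score = (item) =>
    item.keywords.filter((k) => profileText.toLowerCase().includes(k)).length;
  return [...feedItems].sort((a, b) => score(b) - score(a));
}

console.log(rankFeed("software engineer interested in computer science"));
// Computer science courses are ranked first for this "profile".
```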
Reflections on the CodePen
Inspired by Kitchin (2017: 23), I have written a few words reflecting on my experiences, while acknowledging the limitations of such a subjective approach.
The CodePen was adapted (“forked”) from an existing CodePen, which provided most of the base; with my very limited skills, I tweaked it slightly to add a basic feed of the Coursera recommendations, first by pasting in one bit of code (which turned out to be old and broken) and ultimately another bit of example code (this from a Google-“led” project). It is very much a case of different bits of code “hacked” together, full of little “bugs” and “glitches”, and not particularly well designed or written! I was keen to strictly limit the time spent on it, although I know much time could be spent tidying and refining it.
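For anyone curious to “fork” and tinker, the general shape of pulling a feed into the browser looks something like this. It is a hedged reconstruction of the approach rather than the exact code stitched into the CodePen, and the feed URL is a placeholder:

```js
// Illustrative sketch: fetch an RSS feed and parse it in the browser.
// The URL below is a placeholder, not the real lifestream feed address.
async function loadFeed(url) {
  const response = await fetch(url); // a CORS-friendly endpoint may be needed
  const xmlText = await response.text();
  const doc = new DOMParser().parseFromString(xmlText, "text/xml");
  return [...doc.querySelectorAll("item")].map((item) => ({
    title: item.querySelector("title")?.textContent ?? "",
    link: item.querySelector("link")?.textContent ?? "",
  }));
}

loadFeed("https://example.com/lifestream/feed").then((items) =>
  console.log(items.map((i) => i.title))
);
```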
Presumably, similar time constraints (albeit with more resources and testing) affect the development of, say, Facebook’s or Google’s algorithms, and lead to mistakes. After all, Facebook’s internal motto used to be ‘move fast and break things’, although this race to create a “minimum viable product” and to “disrupt” (regardless of the consequences) is increasingly criticised.
In any case, this CodePen is like a snapshot of an experimental “work in progress” (which others are welcome to “fork”, use and adapt) and brings to mind the messy, ontogenetic, emergent, and non-static nature of algorithmic systems (Kitchin 2017: 20-21).
Ultimately, my aim is to raise some questions about the details of this process and, since most of the code was “stuck together” from bits of free and ‘open’ source code, about how the culture of the individuals and teams behind it is significant too. As Seaver (2013: 10) puts it…
‘…when our object of interest is the algorithmic system, “cultural” details are technical details — the tendencies of an engineering team are as significant as the tendencies of a sorting algorithm…’
…and, given that a large portion of the code came from a Google-led project (and Twitter too), how might the tendencies of those teams have created inclusions/exclusions? Furthermore, as the focus of my artefact is Coursera, whose founders both have experience working at Stanford University and Google, how might the tendencies there have guided Coursera and, subsequently, my CodePen?
Finally, given that Coursera presents itself as a universal educational space, where its vision is ‘a world where anyone, anywhere can transform their life by accessing the world’s best learning experience’, what are the implications of this for digital education? In my initial explorations, my perceptions are that computer science and information technology disciplines from Western universities are prioritised by Coursera, in general and through their algorithmic systems. However, further research is needed to, as Kitchin (2017: 25) puts it, ‘unpack the full socio-technical assemblage’.
Wow! Another sensational artefact Michael. So incredibly detailed and well researched.
Really great question raised about Silicon Valley’s idea of ‘progress’. That article by Giannella looks fantastic. Go Berkeley!
You really captured the essence of some common types of educational algorithm platforms, and the philosophies behind them, in a creative and engaging way. What’s more, the artefact itself is an educational experience, a kind of adaptive learning program.
Really cool and impressive!
Thanks David! Yes, it was a sort of coincidence that so much of the artefact ended up having connections to Silicon Valley (although I suppose this is common given the dominance/influence of the “tech” industry there) – Coursera’s headquarters are based there, bits of the code came from Google and Twitter teams based there, and so on. I wonder how the “tech” industry culture there has influenced the algorithmic systems I ‘played’ with. The incessant push for a very specific idea of ‘progress’, often framed by a neoliberal agenda, seems to have touched the different areas I played with in any case – in the way things have been designed, or in the apparent philosophy or business model underlying them. Thanks for checking it out anyhow, and I also really enjoyed your artefact!
Wow Michael, very thorough.
I have never heard of Coursera before. Very interesting that the first page was written with so much language intended to be friendly, and seemed to emphasize that it was safe to use and there for your benefit.
Nice work!
Monica
Thanks Monica! Yes, there does seem to be a trend towards “friendly” language in many of today’s “apps”, possibly in an attempt to conceal some of the more ruthless commercial intentions behind a slick “user experience”! I suppose it is nothing particularly new from a marketing perspective, but having taken a little time to glance over the privacy policy (something I’d rarely do, for my own personal apps at least!), I found the commercial intentions more apparent (albeit fairly vague and legalistic), and it did make me reconsider my initial assumptions about Coursera. Thanks for your comments!
This is really impressive, Michael, a very thorough piece of work and a polished reflection on top! I like how you connected different issues… and went all the way to the idea of progress, and, without naming it, the Silicon Valley ethos. I don’t know if you have read about the Californian ideology, a term coined 25 years ago by Barbrook and Cameron (the original essay is here http://www.imaginaryfutures.net/2007/04/17/the-californian-ideology-2/) – it still seems an apt depiction of your experience on Coursera and other platforms: a paradoxical mix of universal, left-leaning liberalism (the Coursera vision) and hopeful technological determinism (hence the focus on technological courses?). I think Kitchin’s paper (forcing us to go beyond the technical) is indeed a great entry point to unravel some of the ‘hidden’ ideology.
Thank you JB! I hadn’t read that particular article before – thanks so much for sharing, it is really helping me to put things into context! As you say, this Silicon Valley ethos seems to influence Coursera and many of the other “apps” and algorithmic systems we’ve been playing with in this block. It has been very revealing looking underneath the polished user interfaces and marketing slogans, and trying to uncover the ideologies, cultures, motivations and political and commercial agendas that underpin it all – often very specific ideologies which benefit a small group of people, yet are problematically presented as “for all”. Thank you for your comments!
Yet again you impress. Great work Michael. I liked the way you used the experience of the MOOC as a springboard to highlight issues with privacy and the collection of data pertaining to user accounts. I suppose that nothing is actually free; the provider will always take something away. Privacy policies never say everything, and there must be big money behind big data. Thank you
Thank you, Val! Absolutely – and always reminds me how many of those privacy policies I’ve accepted yet never read for my personal apps…I’m thinking twice now… Thanks for your comments!
Michael,
I am always in awe of the amount of detail, thought and time that you’ve put into your artefacts. Much kudos and credit to you.
To quote one of your lines “as often seems the way, the developers build something for themselves (ensuring a seamless user experience for their own circumstances) and then only later branch out.”
Those who do not come from the same background as the designers are at a disadvantage: they do not fit, they may not understand, or they may not be able to take full advantage of the course proffered. In order to ‘fit’ the course, the users may have to change themselves. As Eli Pariser states in his interview with Alexis Madrigal (see the ‘you loop’ article you quoted), “these services may end up creating a good fit between you and your media by changing … you”.
Ethical concerns indeed.
Thank you so much for your comments, Adrienne! Absolutely, this idea of people who do not fit a particular background (one which so often appears based around the culture of Silicon Valley) being disadvantaged as a result was picked up by Trebor Scholz when discussing inclusive codesign and platform cooperativism:
https://blog.p2pfoundation.net/participation-codesign-diversity-trebor-scholz-on-platform-cooperativism/2017/12/08
Essentially, by following a more democratic model (influenced by cooperative principles), they hope to involve those populating a platform in the process of building it from the very beginning, and to design for those ‘on the margins’. Thus, the “Silicon Valley way” is not the only way, despite often being presented as “objective”, “universal” or “good for all”.
Lots to mull over – thanks again for your comments!