Below are the results of a YouTube search for ‘algorithms in education’ – on the left I am signed in, on the right I am not.
The results are subtly different – when I am signed in, slightly more ‘advanced’ videos about algorithms (one from Harvard University) are displayed. Perhaps this is due to information Google holds on my age and education, or to the fact that I have watched and liked a number of longer university lectures and interviews on this course.
This speaks to the way algorithms are ‘ontogenetic, performative and contingent’ (Kitchin 2017: 21) – they are neither static nor fixed; they vary from user to user and from location to location, and can often involve randomness.
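To make Kitchin’s point a little more concrete, here is a minimal, purely illustrative sketch – not YouTube’s actual system, and every signal, weight and video title below is invented – of how a per-user signal plus a small random ‘exploration’ term can produce different rankings for different people:

```python
import random

# Hypothetical catalogue with invented metadata; the fields and weights are
# assumptions for illustration, not any platform's real model.
VIDEOS = [
    {"title": "Intro to Algorithms (Harvard lecture)", "advanced": 0.9, "length_min": 75},
    {"title": "Algorithms in 5 minutes",               "advanced": 0.2, "length_min": 5},
    {"title": "How algorithms shape education",        "advanced": 0.5, "length_min": 20},
]

def rank_for_user(videos, watches_long_lectures, seed=None):
    """Score videos using a crude per-user signal plus a random exploration term."""
    rng = random.Random(seed)
    scored = []
    for v in videos:
        score = 1.0
        if watches_long_lectures:
            # Users who watch long lectures get 'advanced' content boosted.
            score += v["advanced"] + v["length_min"] / 100
        score += rng.uniform(0, 0.3)  # small random exploration term
        scored.append((score, v["title"]))
    return [title for _, title in sorted(scored, reverse=True)]

print(rank_for_user(VIDEOS, watches_long_lectures=True, seed=1))
print(rank_for_user(VIDEOS, watches_long_lectures=False, seed=2))
```

Run with different user signals (or different seeds), the same catalogue comes back in a different order – a trivial version of the contingency described above.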
I changed my Coursera “learning plan” to indicate that I am a Teacher/Tutor/Instructor in the Education industry, to compare the results with my previous exploration of Coursera.
The results are more varied (and not exclusively focused on software development or the “tech” industry); however, there are still various programming, data/computer science and business options presented (despite my expressing no preference for these industries):
I am experimenting with inputting false information about myself in Coursera, in order to see the difference in algorithmic recommendations. Here is how I described myself…
… and here are some recommendations provided after entering the above data…
The top listed courses are exclusively technology-based and “offered by” Google, and appear to have no direct connection to my listed industry “Health and Medicine”…
While my explorations here were very limited, this seems fairly consistent with my experience of certain (but not all) MOOC or educational course/video sites (and even more general “apps”): as soon as you step outside computer science, the range of courses narrows, despite the sites presenting themselves as general educational platforms. Looking to change my “learning plan” options (which alter your profile and recommendations) revealed the “default” or “suggested” text presented before you enter your own profile options:
You can see the results of my profile/”learning plan” alterations here. However, at this stage of deciding my profile options, the “software engineer” who works in “tech” seems to be the “default” starting point. This is perhaps no surprise given that Coursera was set up by Stanford computer scientists; as so often seems to be the way, the developers build something for themselves (ensuring a seamless user experience for their own circumstances) and only later branch out.
One example outside of education here is the online bank Monzo, whose early customer base was ‘95% male, 100% iPhone-owning, and highly concentrated in London’ (Guardian 2019). This description mirrors the co-founder Tom Blomfield, as he himself admits:
‘Our early customer was male, they lived in London, they were 31 years old, they had an iPhone and worked in technology. They were me. I’ve just described myself. Which has huge advantages, right? It’s very easy to know what I want.’ (The Finanser 2017)
While Monzo does claim to have a focus on social inclusion (This is Money 2019), why does this always seem secondary to building the app, gaining users (similar to themselves) and getting investors on board? Should social inclusion, whereby apps are designed democratically for all users and everyone has a say, not be inherent from the very beginning of the planning, design and development processes? There may be a place here for considering platform cooperativism, inclusive codesign and participatory design approaches (see Beck 2002; Scholz and Schneider 2016; West-Puckett et al. 2018).
Coming back to education, if Coursera have taken a similar approach to Monzo in designing their platform and building up their catalogue of courses, it is perhaps concerning that those who do not mirror the designers and developers may be left excluded and on the margins.
‘The importance of inclusive codesign has been one of the central insights for us. Codesign is the opposite of masculine Silicon Valley “waterfall model of software design,” which means that you build a platform and then reach out to potential users. We follow a more feminine approach to building platforms where the people who are meant to populate the platform are part of building it from the very first day. We also design for outliers: disabled people and other people on the margins who don’t fit into the cookie-cutter notions of software design of Silicon Valley.’
Trebor Scholz (P2P Foundation 2017)
Eli Pariser’s new book The Filter Bubble is a valuable exposition of what living and learning through Google and Facebook will mean for our lives as citizens.
Here are my recommendations from FutureLearn, at least in part likely informed by some of the MOOCs I signed up to while deciding upon my micro-ethnography. These MOOCs include:
Signing up for these MOOCs appears to have affected these recommendations fairly significantly, given there are recommended courses in the areas of research, security and programming. However, there appear to be few (if any) courses directly touching on the areas of anthropology and music (which my enrolled courses cover); this may be due to a lack of currently available courses, although there may be other reasons.
How have other people been involved in shaping results?
It is not clear (at least from this page) how they make the recommendation decisions, but there may well be algorithmic ranking based on sponsorship, course popularity or “staff picks”. Therefore, it’s possible that other students’ enrolments or FutureLearn staff decisions may alter my recommendations.
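Purely as a thought experiment – FutureLearn does not publish its ranking logic, and the courses, factors and weights below are all invented – a ranking that blended sponsorship, enrolment numbers and “staff picks” might look something like this:

```python
# Illustrative only: invented courses, factors and weights, not FutureLearn's
# actual recommendation logic.
courses = [
    {"title": "Programming for Everyone",     "enrolments": 120_000, "sponsored": True,  "staff_pick": False},
    {"title": "Introduction to Anthropology", "enrolments": 4_000,   "sponsored": False, "staff_pick": True},
    {"title": "Cyber Security Basics",        "enrolments": 60_000,  "sponsored": False, "staff_pick": False},
]

def score(course, w_popularity=1.0, w_sponsor=0.5, w_staff=0.3):
    """Blend popularity, sponsorship and editorial curation into a single score."""
    popularity = course["enrolments"] / 100_000  # crude normalisation
    return (w_popularity * popularity
            + w_sponsor * course["sponsored"]
            + w_staff * course["staff_pick"])

for c in sorted(courses, key=score, reverse=True):
    print(round(score(c), 2), c["title"])
```

Even in this toy version, a heavily enrolled or sponsored course floats to the top regardless of its relevance to any individual learner – which is essentially the concern explored below.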
Do results feel personal or limiting? Is this optimisation, or a ‘you loop’?
I don’t think I would normally make use of the explicitly labelled recommendations; however, I often make use of the search function, which may involve similar algorithmic ordering and ranking. The choices here seem fairly limiting, almost persuading me that – in order to be an “expert” – I should study them. There seems to be the assumption that I would choose to study a similar course to one I have studied before, even though in reality I would probably want to look at something completely different.
What might be the implications?
My concern, looking at both the general catalogue of courses and the recommendations (albeit very briefly), is that certain subjects appear privileged over others (there are a great number of courses on computer programming, for instance). As mentioned above, this may be down to many other factors (such as course availability); however, it would be interesting to see how course enrolment numbers impact upon the ranking. I personally would find this a little disconcerting – I wouldn’t want a course that simply has high enrolment numbers to be privileged in my recommendations. As elsewhere in education, just because a course may have lower numbers or generate less money, it doesn’t mean it is any less important.
There appear to be some fairly binary options about technology being “good” or “bad”, and dominant ideas of ‘success’, presented here… could this be mainly influenced by what others have searched? Or by a prevalence of articles supporting these positions?
Of all the videos posted to YouTube, there is one that the platform recommends more than any other right now, according to a Pew Research study published Wednesday. That video is called “Bath Song | +More Nursery Rhymes & Kids Songs – Cocomelon (ABCkidTV)”.
‘…it is worth reflecting on what one means by ‘self learning’ in the context of algorithms. As algorithms such as deep neural nets and random forests become deployed in border controls, in one sense they do self-learn because they are exposed to a corpus of data (for example on past travel) from which they generate clusters of shared attributes. When people say that these algorithms ‘detect patterns’, this is what they mean really – that the algorithms group the data according to the presence or absence of particular features in the data.
Where we do need to be careful with the idea of ‘self learning’, though, is that this is in no sense fully autonomous. The learning involves many other interactions, for example with the humans who select or label the training data from which the algorithms learn, with others who move the threshold of the sensitivity of the algorithm (recalibrating false positives and false negatives at the border), and indeed interactions with other algorithms such as biometric models.’
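Amoore’s point about humans ‘moving the threshold’ can be illustrated with a toy classifier – the scores and labels below are invented and have nothing to do with real border systems – where the same algorithmic outputs produce different numbers of false positives and false negatives depending on where the human-set threshold sits:

```python
# Toy illustration of threshold-setting: scores and labels are invented.
# 1 = the case the algorithm is meant to flag, 0 = not.
scores = [0.15, 0.35, 0.45, 0.55, 0.65, 0.80, 0.90]
labels = [0,    0,    1,    0,    1,    1,    1]

def confusion(threshold):
    """Count false positives and false negatives at a given decision threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

for t in (0.3, 0.5, 0.7):
    fp, fn = confusion(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

Nothing about the underlying scores changes; the ‘recalibration’ is a human choice about which kind of error to tolerate.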
The field of AI is extremely broad, and somewhere within it sits Machine Learning (ML): systems that, in essence, learn from data, so that their performance improves with experience.
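As a minimal, hand-rolled sketch of that idea (invented data and a deliberately simple model, not any particular ML library or system), the estimate of a simple relationship tends to get closer to the truth as the model sees more examples – its performance improves with ‘experience’:

```python
import random

random.seed(0)

def learn_from_experience(n_samples):
    """Fit a one-parameter model y = w*x by least squares and report its error."""
    xs = [random.uniform(0, 10) for _ in range(n_samples)]
    ys = [2.0 * x + random.gauss(0, 1.0) for x in xs]  # true relation: y = 2x + noise
    w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return abs(w - 2.0)  # how far the learned parameter is from the truth

for n in (5, 50, 500, 5000):
    print(f"{n:>5} examples -> error in learned parameter: {learn_from_experience(n):.4f}")
```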
‘Do algorithms compute beyond the threshold of human perceptibility and consciousness? Can ‘cognizing’ and ‘learning’ digital devices reflect or engage the durational experience of time? Do digital forms of cognition radically transform workings of the human brain and what humans can perceive or decide? How do algorithms act upon other algorithms, and how might we understand their recursive learning from each other? What kind of sociality or associative life emerges from the human-machinic cognitive relations that we see with association rules and analytics?’
…and, as I explore these ‘human-machinic cognitive relations’, I look beyond the polished “app” user interfaces and reflect on how algorithms (despite how they are presented) are far from objective or neutral (Kitchin 2017: 18). I turn my attention to investigating discrimination and bias (Noble 2018 and #AlgorithmsForHer)…
I also investigate the notion of ‘data colonialism’ (Knox 2016: 14), rethink the relation between algorithms and power (Beer 2017) and look to the future of what this might all mean in an educational context (Knox et al. 2020; Williamson 2017).