Week 9 Summary – Algorithmic Bias

David Beer raises the issue of the decision-making power of algorithms and identifies a need to understand how algorithms shape organisational, institutional, commercial and governmental decision making (Beer, 2017). Some hold the view of algorithms as ‘guarantors of objectivity, authority and efficiency’, but others criticise this position, arguing that because algorithms are created by humans, they embed layers of social and political bias into their code, resulting in decisions that are neither benign nor neutral. Furthermore, these “decisions hold all types of values, many of which openly promote racism, sexism and false notions of meritocracy” (Noble, 2018). As such, ‘algorithms produce worlds rather than objectively account for them, and are considered manifestations of power’ (Knox, 2015).

It was this notion of algorithmic bias that drove my inquiry in this week’s section of the lifestream blog, and there was no shortage of social media commentary on the issue. Cathy O’Neil identifies this in her YouTube clip, supporting the view by claiming that algorithms are not objective and are merely ‘opinions embedded into math’. Perhaps most interesting was the work of Joy Buolamwini, whose investigation of artificial intelligence facial recognition software has unearthed inherent racist and sexist elements introduced by its developers.

To what extent are racist values embedded into algorithms?
Joy Buolamwini has carried out extensive research on how the algorithmic code behind facial recognition fails to recognise black women – click the image for more detail

However, where does this notion of algorithmic bias intersect with education, and what type of educational landscape will these algorithms produce? With the rise of anti-plagiarism software, and the growth of intelligent teaching and learning platforms such as Century Tech, many educators fear an incremental dependency on algorithms within schools and colleges, particularly for assessment. This is certainly not without difficulties or tensions. Ben Williamson claims that many studies have highlighted inaccuracies in the Turnitin software, which many institutions use to cross-check student work, incorrectly branding some students as cheats whilst missing other, very clear, instances of plagiarism. This ultimately leads to a growing level of distrust between youngsters and their educators, breaking down relationships as the use of technology, and algorithmic dependency, increases. How else will students and teachers be negatively impacted by algorithmic biases (or errors) and, as dependency on these tools continues to grow, will educators be able even to identify when this happens, let alone mitigate it?


  • Beer, D. (2017) The social power of algorithms, Information, Communication & Society, 20:1, 1–13, DOI: 10.1080/1369118X.2016.1216147
  • Knox, J. 2015. Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI 10.1007/978-981-287-532-7_124-1
  • Noble, S. (2018) Algorithms of Oppression, NYU Press, New York.
  • Williamson, B. (2019). Automating mistrust. Code Acts in Education

Week 8 Summary – How Algorithms Shape Our Lives

The origins of the word algorithm – click for a great BBC clip

The TEDx presentation by Kevin Slavin in this week’s lifestream argues that we “need to rethink a little bit about the role of contemporary math… its transition from being something we extract and derive from the world to something that actually starts to shape it – the world around us and the world inside us.” This has congruence with the themes explored in the core reading, which posits that in recent years algorithms have become “increasingly involved in the arranging, cataloguing and ranking of people, places and knowledge… They are becoming increasingly ubiquitous actors in the global economy, as well as our social and material worlds.” (Knox, 2015). In essence, algorithms are now major actors in contemporary human society and culture.

On personal reflection, it is evident that algorithms are highly influential in my own life, and are certainly shaping my everyday thoughts and actions. I need only consider my Netflix recommendations to see tangible evidence of how an algorithm can shape day-to-day decision making. This was borne out in both news articles in the lifestream, each of which explored the incredible power of major organisations such as Amazon and the impact they have had, and continue to have, on contemporary culture. As the Observer article recognises, this provides these companies with tremendous power, and raises the question of algorithmic objectivity. Are automated processes completely free of biases, or are they, as many would suggest, enmeshed with corporate or political biases?

Knox, J. (2015) Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI 10.1007/978-981-287-532-7_124-1

Week 5 Summary – Liberating MOOCs, redefining education

As I delved further into the study of my chosen MOOC, my social media postings in the lifestream reflected a growing curiosity in the possibility that massive open learning of this nature is potentially offering a new route to formal academic certification. Indeed, with this shift in paradigm, Bayne et al. (2019) argue that “the open education movement has predominantly framed its mission in terms of ‘freedom from’, characterising educational institutions as rigid, antiquated, inaccessible and ultimately ‘closed’, in opposition to which the open movement is cast as a disruptive liberation” (Bayne et al., 2019, 50). This has congruence with the TEDx YouTube talk by Jonathan Schaeffer, who discusses the disruptive effect that MOOCs have on traditional learning in tertiary education, and examines the manner in which this route to formal education could be actualised.

This question of ‘disruptive liberation’ is further examined in the BBC Sounds podcasts, which ask ‘could these new free online courses open higher education to parts of the world in a way that’s been unthinkable up until now, or are MOOCs an experiment that could destroy centuries of tradition?’ In the second of the two podcasts, Measuring MOOCs by Science AAAS, quantifiable measures are shared to demonstrate how disruptive and liberating MOOCs can actually be. By sharing the enrolment figures of the popular Introduction to Computer Science MOOC at the Massachusetts Institute of Technology (an impressive 350k), it is possible to understand the power that MOOCs have in creating ‘freedom from’ the traditional institution.


Bayne, S., Evans, P., Ewins, R., Knox, J., Lamb, J., Macleod, H., O’Shea, C., Ross, J., Sheail, P., Sinclair, C. (2019 DRAFT). The Manifesto for Teaching Online.

Week 4 Summary – Choosing a MOOC and thinking about the digital ethnography

The term MOOC was first coined in 2008. Stanford University offered its first MOOC in 2011, entitled ‘An Introduction to Artificial Intelligence’, which had over 160k individuals enrol, with 20,000 passing the course.

Choosing a MOOC – perhaps easier said than done! After initially opting for ‘An Age of Sustainable Development’ through edX, I soon discovered that the online community within this particular MOOC was significantly lacking in dialogue in the discussion forums. As an alternative, I decided to enrol in Holocaust: The Final Solution with Coursera. As a teacher of secondary history, I was keen to further my own professional understanding of this topic, in parallel with developing my digital ethnography for this task.

My lifestream additions this week have focused on the broader theme of online community and how this differs from a corporeal community, as well as the development of collective intelligence within wiki sites. A common thread throughout the blog entries has been how to make an online community successful, when there are so many challenges and roadblocks that can hinder success. The power of anonymity, intangibility and falsification were all highlighted as potential barriers and points of tension. Some of these entries touched on what had been outlined by Lister (2009), who identifies the difficulty in creating community online when ‘participants are there but not there, in touch but never touching, as deeply connected as they are profoundly alienated’ (Lister, 2009, 209). Mark Willis’s TEDx presentation, however, provides ways in which real online community can be achieved. He posits that when the following four criteria are met – longevity, shared values, community management and trust – the group can truly be deemed a ‘community’.

As I progress in the ethnographic study of this MOOC, I am keen to examine whether my chosen course meets Willis’s four-point criteria for community culture, particularly that of shared values. This is further strengthened by Saadatdoost et al. (2014), who state that “culture components include shared beliefs, values, perspectives and practices.” I am therefore hoping to investigate how far the participants in this MOOC share values and, if so, what these values are. How much do these values provide cohesion within and across the discussion forums? Also, are there tensions that arise due to the presence of disparate, incompatible values? And if so, how are these tensions defused? Lots of interesting questions to take with me as I move forward with my chosen MOOC.

Lister, M. (2009) ‘Networks, users and economics’ in New media: a critical introduction, pp. 163–236, London: Routledge

Saadatdoost, R., Sim. A., Mittal, N., Jafarkarimi, H. & Mei Hee, J. (2014) ‘A netnography study of MOOC Community’, PACIS 2014 Proceedings. 116. http://aisel.aisnet.org/pacis2014/116.

Week 3 Summary – Biohacking and Transhumanism

There are a growing number of transhumanist societies. Humanity+ is one such organisation, with over 6,000 members in over 100 countries

This week my interest was piqued by the unusual practice of biohacking, and how this has contributed to the transhumanism discussion. Newton Lee, in The Transhumanism Handbook, defines a transhumanist as anyone who is “using science and technology to enhance or alter our body chemistry in order to stay healthy and be more in control of our lives.” (Lee, 2019, 5). The YouTube clips and tweets on biohacking from this week’s lifestream were certainly representative of this view, particularly in respect of the idea of greater control, which was a predominant theme throughout most of the social media on this topic. As individuals merge their bodies with technology, ranging from RFID chips inserted into hands as a replacement for contactless debit cards, to more radical cases such as Tim Cannon’s (rather crude) forearm implant that records biometric data from his body, there was consensus amongst all biohackers – these experimental modifications had given them greater agency and control over their individual lives.

There is little doubt that biohacking is a growing trend, and the discussion on the BBC Sounds podcast, as well as the interviews with Michael Laufer and Eric Matzer from this week’s YouTube clips, supports this view. However, many critics of biohacking argue that its growth will ultimately be limited to a niche subculture, and that the movement is unlikely to gain enough traction to become mainstream. Medical ethics and opposition on religious grounds will ultimately curb the movement and limit its potential to grow beyond the very curious.

Conversely, proponents of the movement claim that biohackers are extropians of human change and that by pushing these boundaries today, they are catalysing an inevitable movement towards posthumanism. However, this does raise some very important questions. If this transhumanist movement is an inevitability for the 21st century, what ethical issues must be considered as we progress down this road of human change? Should there be interventions to regulate the range of practices this encompasses and, if so, what should that regulation look like? And if this were to happen, how long will it take before we start to see the appearance of modification clinics on the high street, offering biohacker-esque body augmentations to a mainstream market? Biohackers would certainly argue that this will be sooner, rather than later.


  • Lee, N. (2019) ‘Brave New World of Transhumanism’ in The Transhumanism Handbook, Springer: Switzerland, p. 5.
  • ‘Transhumanism’ (10 February 2020) Wikipedia, available at https://en.wikipedia.org/wiki/Transhumanism (Accessed: 10 February 2020)

Week 2 Summary: Embodiment Relations and mobile technology

Just over half of children in the United States — 53 percent — now own a smartphone by the age of 11. And 84 percent of teenagers now have their own phones, immersing themselves in a rich and complex world of experiences that adults sometimes need a lot of decoding to understand (npr.org)

This week I was intrigued by the concept of embodiment, as it brought to mind some of the school students that I teach. A New Hope questioned ‘at what point do our bodies begin and end? How do we define our most intimate borders?’ This has congruence with what Miller defines as an embodiment relation, in that “when technologies are being used, the tool and the user become one” and the object becomes “part of the body image and overall identity of the person” (Miller, 2011, 219). Vincent delves further into the theory of embodiment relations by examining the intimate relationship that many individuals have with their mobile devices. Citing the work of Richardson (2007), she outlines that the close proximity of mobile phones to the body, and the manner in which they connect to a number of sensory functions, creates a much more powerful connection to humans than any other type of technology we use.

This concept had a significant influence on this week’s lifestream, and I identified some YouTube clips that explored our increasingly complex relationship with mobiles, and how smartphone dependency has become a rapidly growing epidemic. I was particularly interested in the article I tweeted from Psychology Today, which argued that a young person’s attachment to their mobile phone is akin to the relationship a child has with a teddy bear. I was further intrigued by the TEDx talks from Jeff Butler and Anastasia Dedyukhina, who respectively delved into discussions of how mobile phones change the way we think, and whether we could live without them.

In my school this is a particular concern of mine: despite the existence of a ‘silent and invisible’ mobile phone policy, I see youngsters walking around our campus carrying mobile phones as if the device were an appendage to their limbs. There is no doubt that these youngsters have a deeply intimate relationship with their mobiles, and any suggestion of their removal can often lead to anxiety, and in some cases despair. As Vincent argues, the devices are very clearly an extension of themselves, and the social platforms they are accessing are reflections of their identity and self. To forcibly remove the technology would therefore be tantamount to a technological amputation.

However, the question remains as to how much this increasingly symbiotic relationship between humans and mobile technology will actually contribute to human development. Does the embodiment relationship enhance our ability to grow into more advanced versions of humanity, or does it desecrate humanity and stymie its potential to flourish?

Miller, V. (2011) The Body and Information Technology in Miller, V. Understanding digital culture pp. 207 – 223, London: Sage

Week 1 Summary – Thinking about cybernetics

About 12,000 individuals in the UK (including my father) wear a cochlear implant. That’s 12,000 cyborgs to EDC students!

Upon reading The Body and Information Technology (Miller, 2011), my interest was piqued by the manner in which the ‘cyborg’ was represented. The terminology, as a result of popular culture and dystopian notions of cybernetics, has often been framed as something to fear, with the term imbued with pejorative connotations. Citing Gray et al.’s idea that technology can be used as a means to restore, normalise, enhance and reconfigure the human body, Miller makes it possible to view the notion of cyborgs through an entirely different prism. Ten years ago my father was lucky enough to receive a cochlear implant on the National Health Service after years of degeneration in his hearing. Prior to the operation his hearing had diminished to such low levels that he was essentially rendered severely deaf. The implant to restore his hearing was life-changing, and this ‘normalising’ technology significantly improved the quality of my father’s life. His hearing was restored to such a level that he was able to once again hear the sound of a spoon clinking against the side of a mug as he stirred sugar into his tea – an everyday noise that he had not heard for years. Until I read the core paper I had never viewed my father as a ‘cyborg’, but Miller has certainly put forward a reasonable case that has helped realign my perspective on this. Imagining cyborgs as individuals who have benefited from technology to improve the quality of their lives, rather than through the traditional view often put forward in science fiction, establishes a more positive framework for understanding the complex relationship between human and machine. This was very influential in my lifestream this week, with my inaugural tweet about cochlear implants and cybernetics from the Ear Institute.

That said, there are possible ethical concerns on the horizon with this technology. As auditory cybernetics have developed over the past decade, my father’s device has become increasingly connected to the digital world. He can now connect his cochlear implant via Bluetooth and is able to attune the device to his mobile phone, laptop and television. This has brought me to wonder whether, in the not-too-distant future, humans who do not suffer from acute deafness will choose to voluntarily implant the technology in order to enhance their connectivity to digital environments. This of course raises a gamut of ethical concerns over the nature of voluntary augmentations of the human body. Is this something that should be prevented from happening? And if so, can it be stopped?

Miller, V. (2011) “The Body and Information Technology”, from Miller, V. Understanding Digital Culture pp. 207 – 223, London: Sage.