Week 9 Summary – Algorithmic Bias

David Beer raises the issue of the decision-making power of algorithms and identifies a need to understand how algorithms shape organisational, institutional, commercial and governmental decision making (Beer, 2017). The view of algorithms as 'guarantors of objectivity, authority and efficiency' has been criticised, with others arguing that because algorithms are created by humans, they embed layers of social and political bias into their code, resulting in decisions that are neither benign nor neutral. Furthermore, these "decisions hold all types of values, many of which openly promote racism, sexism and false notions of meritocracy" (Noble, 2018). As such, 'algorithms produce worlds rather than objectively account for them', and can be considered manifestations of power (Knox, 2015).

It was this notion of algorithmic bias that drove my inquiry in this week's section of the lifestream blog, and there was no shortage of social media commentary on the issue. Cathy O'Neil identifies this in her YouTube clip, supporting the view by claiming that algorithms are not objective but merely 'opinions embedded into math'. Perhaps most interesting was the work of Joy Buolamwini, whose investigation of artificial intelligence facial recognition software has unearthed racist and sexist biases inherited from its developers.

To what extent are racist values embedded into algorithms?
Joy Buolamwini has carried out extensive research on how facial recognition algorithms fail to recognise Black women.
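At the heart of Buolamwini's approach is disaggregated evaluation: scoring a classifier separately for each demographic subgroup rather than reporting a single headline accuracy. The Python sketch below is a minimal, hypothetical illustration of that idea; the predictions, labels and group names are invented for the example and do not come from her data.

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Per-group accuracy exposes disparities that aggregate accuracy hides."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += (pred == label)
    return {g: correct[g] / total[g] for g in total}

# Invented audit data: gender predictions from a hypothetical face classifier.
predictions = ["f", "f", "m", "m", "m", "f", "m", "m"]
labels      = ["f", "f", "m", "m", "f", "f", "f", "m"]
groups      = ["lighter"] * 4 + ["darker"] * 4

print(accuracy_by_group(predictions, labels, groups))
# {'lighter': 1.0, 'darker': 0.5} while overall accuracy is a respectable 75%
```

A single aggregate score of 75% would look acceptable; only the per-group breakdown reveals that the model fails one subgroup half the time, which is essentially the pattern Buolamwini documented in commercial systems.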

However, where does this notion of algorithmic bias intersect with education, and what type of educational landscape will algorithms produce? With the rise of anti-plagiarism software and the growth of intelligent teaching and learning platforms such as Century Tech, many educators fear an incremental dependency on algorithms within schools and colleges, particularly for assessment. This is certainly not without difficulties or tensions. Ben Williamson (2019) claims that many studies have highlighted inaccuracies in the Turnitin software, which many institutions use to cross-check student work, incorrectly branding some students as cheats whilst missing other, very clear, instances of plagiarism. This ultimately leads to a growing distrust between students and their educators, breaking down relationships as the use of technology, and dependency on algorithms, increases. How else will students and teachers be negatively impacted by algorithmic biases (or errors), and, as dependency on these tools continues to grow, will educators be able even to identify when this happens, let alone mitigate it?
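Williamson's post does not detail Turnitin's internals, and the sketch below is not its algorithm; it is a deliberately crude text-overlap check (Jaccard similarity over word trigrams, a common baseline for such tools) that shows how both failure modes can arise. Legitimately shared phrasing inflates the score, while a light paraphrase of copied material deflates it.

```python
def ngrams(text, n=3):
    """Split text into a set of overlapping word n-grams (shingles)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard overlap of word trigrams: a crude plagiarism signal."""
    sa, sb = ngrams(a, n), ngrams(b, n)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

source = "the mitochondria is the powerhouse of the cell and drives respiration"
# 'False cheat': a student quoting standard textbook phrasing scores high.
quoted = "as textbooks note the mitochondria is the powerhouse of the cell"
# Missed plagiarism: a light paraphrase of the same material scores zero.
paraphrase = "mitochondria power cells by driving the process of respiration"

print(similarity(source, quoted))      # 0.5: flagged despite fair quotation
print(similarity(source, paraphrase))  # 0.0: missed despite copied ideas
```

Real systems are more sophisticated, but the underlying tension is the same: any similarity threshold trades false accusations against missed plagiarism, and neither error is visible to the teacher reading the report.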

References

  • Beer, D. (2017) The social power of algorithms, Information, Communication & Society, 20:1, 1–13. DOI: 10.1080/1369118X.2016.1216147
  • Knox, J. (2015) Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In M. A. Peters (ed.), Encyclopedia of Educational Philosophy and Theory. DOI: 10.1007/978-981-287-532-7_124-1
  • Noble, S. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism, New York: NYU Press.
  • Williamson, B. (2019) Automating mistrust, Code Acts in Education blog.
