Algorithms and decisions

Are algorithms foolproof? Can they transcend human error and bias? The NewEconomy questions algorithms below.

Source: Diigo https://ift.tt/3aHETez via IFTTT

Since time immemorial, people have turned to religion, fortune-telling, talismans and blind faith when faced with difficult decisions. Mantras and crossed fingers have likewise been used to ease the mind into making difficult choices. Yet these routines now seem antiquated when most decisions are taken by personal devices and online services that appear tailor-made for us. Can we trust them?

Far from being neutral and all-knowing decision tools, complex algorithms are shaped by humans, who are, for all intents and purposes, imperfect. Algorithms function by drawing on past data while also influencing real-life decisions, which makes them prone, by their very nature, to repeating human mistakes and perpetuating them through feedback loops. Often, their implications can be unexpected and unintended.
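The feedback loop described above can be made concrete with a minimal, hypothetical Python sketch. Everything here is invented for illustration (the items, the 90% acceptance rate, the starting counts): a recommender trained on past clicks keeps recommending whatever was clicked most, users mostly accept the recommendation, and a near-trivial initial bias in the data snowballs into dominance.

```python
import random

def simulate_feedback_loop(rounds=1000, seed=42):
    """Toy simulation of an algorithmic feedback loop.

    Item A starts with a slight historical advantage. Because the
    system recommends whatever was clicked most before, and users
    tend to click what is recommended, the gap widens over time.
    All numbers are assumptions chosen for the sketch.
    """
    random.seed(seed)
    clicks = {"A": 6, "B": 5}  # slightly biased historical data
    for _ in range(rounds):
        # the algorithm recommends the historically more-clicked item
        recommended = max(clicks, key=clicks.get)
        # users accept the recommendation 90% of the time (an assumption)
        if random.random() < 0.9:
            chosen = recommended
        else:
            chosen = min(clicks, key=clicks.get)
        clicks[chosen] += 1  # today's outcome becomes tomorrow's training data
    return clicks

result = simulate_feedback_loop()
print(result)  # item A ends up with the large majority of clicks
```

The point of the sketch is that nothing in the loop is malicious: each step just "draws on past data", yet the system ends up perpetuating and amplifying the original imbalance.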

The research literature raises similar concerns:

First, algorithms act as part of a wider network of relations which mediate and refract their work; for example, poor input data will lead to weak outcomes (Goffey, 2008; Pasquale, 2014). Second, the performance of algorithms can have side effects and unintended consequences, and left unattended or unsupervised they can perform unanticipated acts (Steiner, 2012). Third, algorithms can have biases or make mistakes due to bugs or miscoding (Diakopoulos and Drucker, as cited in Kitchin, 2017).


References:

Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29. DOI: 10.1080/1369118X.2016.1154087

Week 8 - Algorithms for everyone

Pavstud


It would have been very handy to design an algorithm to filter my most inspired posts on this blog from the more run-of-the-mill ones. On the other hand, this could prove futile in a blog aimed at documenting my train of thought throughout Education and Digital Cultures. Algorithms are as much about filtering out 'undesired' data as about whitelisting user choices.
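That double movement of exclusion and inclusion can be shown in a few lines of hypothetical Python. The posts, tags and whitelist rule below are all invented for the sketch; the point is that any whitelist simultaneously defines a hidden set the reader never sees.

```python
# Toy illustration: a feed algorithm is as much about exclusion as inclusion.
# All post titles, tags and rules here are hypothetical.

posts = [
    {"title": "Week 8 reflections", "tags": ["inspired"]},
    {"title": "Quick admin note", "tags": ["routine"]},
    {"title": "Algorithms and decisions", "tags": ["inspired"]},
]

# 'Whitelisting user choices': keep only posts carrying a desired tag
whitelist = {"inspired"}
visible = [p for p in posts if whitelist & set(p["tags"])]

# The flip side: everything the filter silently removes
hidden = [p for p in posts if not (whitelist & set(p["tags"]))]

print([p["title"] for p in visible])  # ['Week 8 reflections', 'Algorithms and decisions']
print([p["title"] for p in hidden])   # ['Quick admin note']
```

The `hidden` list is the part users are rarely shown, which is precisely the access question raised in this week's readings.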

Two main arguments ran in tandem through this week's posts: the informed public's right to access the algorithms behind some of the most popular social media platforms, and the question of whether algorithms in education help or undermine notions of learning. Since algorithms are 'adjudicating more and more consequential decisions in our lives' (Diakopoulos, as cited in Kitchin, 2017) and are essentially capitalist in nature, one has to question whom they serve. Their chimaeric nature, made up of many networked 'hands' (Seaver, as cited in Kitchin, 2017), is perhaps why studying their effects is not straightforward. Yet algorithms feed on human ingenuity, and on our lack of knowledge about them, and so need to be ethically managed.

Algorithms and AIEd also form a contested field of education because of their return to a behavioural approach to learning. Perhaps this is not the Pavlovian route, where learners are given instant gratification, but a more consumerist one that monitors learning in order to collect data and tailor 'effective' learning solutions through positive behaviour. Reinforcement learning and nudging are perhaps two of the most effective ways to shape learning. Not only are technologies shaping learning; more often than not they are shaping humans to act like machines, stripping them of their autonomy by denying them access to what is being filtered out.

The 'learner' is now an irrational and emotional subject whose behaviours and actions are understood to be both machine-readable by learning algorithms and modifiable by digital hypernudge platforms. (Knox et al., 2020)

References:

Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29. DOI: 10.1080/1369118X.2016.1154087

Knox, J., Williamson, B., & Bayne, S. (2020) Machine behaviourism: future visions of 'learnification' and 'datafication' across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45. DOI: 10.1080/17439884.2019.1623251