Going back over my lifestream this week, I realised how ontogenetic this exercise has been: the lifestream has gently morphed over these months to suit both the requirements of each block and my own. From the short, frequent posts that characterised it at the beginning, my posts grew longer, contained more links and, hopefully, became more varied.
Applying some human algorithmic processing of my own as I went through the posts, I found that most of them were triggered by references to literature and websites, while others contained links to videos. I did not have as many Twitter posts as I would have liked (probably because of my shyness with social media), but I did find some podcasts here and there.
I embrace Kitchin’s (2017) view that algorithms ‘can be conceived in a number of ways’. I believe that several algorithms have been at play in Education and Digital Cultures. The most obvious has been the technical algorithm controlling the automation of posts through IFTTT. It allowed me to customise the algorithmic process to my needs, something that would perhaps have been too ‘extensive and complex a task to be tackled’ manually (Kitchin, 2017). The algorithm behind IFTTT gave me a glimpse into the way triggers act on personal data and behaviour in order to ‘process instructions and produce a result’ (Kitchin, 2017) that is tailor-made, much as modern technology envisages education.
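As a toy illustration (not IFTTT’s actual internals, and with invented feed items and a made-up posting action), the trigger-action logic behind an applet can be sketched in a few lines of Python:

```python
# A minimal sketch of IFTTT-style trigger-action logic.
# The feed items and the posting action are invented stand-ins,
# not IFTTT's real internals or API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Applet:
    """Pairs a trigger (a condition on incoming items) with an action."""
    trigger: Callable[[dict], bool]  # should this item fire the applet?
    action: Callable[[dict], None]   # what to do when it fires

def run_applets(items, applets):
    """Check each incoming item against every applet and fire any matches."""
    for item in items:
        for applet in applets:
            if applet.trigger(item):
                applet.action(item)

# Hypothetical example: forward 'liked' YouTube videos to the lifestream.
incoming = [
    {"service": "youtube", "liked": True,  "title": "Talk on algorithms"},
    {"service": "youtube", "liked": False, "title": "Unrelated clip"},
]

forward_liked = Applet(
    trigger=lambda item: item["service"] == "youtube" and item["liked"],
    action=lambda item: print(f"Posting to lifestream: {item['title']}"),
)

run_applets(incoming, [forward_liked])  # posts only the liked video
```

Even this toy version makes the point: the result depends entirely on which triggers and actions the user (or the provider) chooses to wire together.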
Was I happy to allow an app to trawl my accounts, bring their contents to the public and ‘shape part of my everyday practices and tasks’ (Kitchin, 2017)? Perhaps not completely, and I did create separate accounts on occasion, but the process of ‘domestication’ (Kitchin, 2017) eventually took place. Were my IFTTT algorithms impartial and objective? No, as the choice of applets was mine, as was the choice of which feeds to forward. Yet the lifestream was never meant to be impartial. On the contrary, it helped represent my train of thought over these weeks. Was my algorithm reliable? IFTTT did glitch or shut down on a few occasions; at one point I was forwarded a YouTube video I had not ‘liked’. What would have happened if an algorithm embedded in an educational platform failed? What would have been the outcome?
There was also the cultural and relational dimension to algorithms, defined by the connections between members of Education and Digital Cultures (to whom I am indebted) and the experiences of other communities on the MOOC. Here, algorithms acted within ‘a wider network of relations which mediated and refracted their work’ (Kitchin, 2017). Strong ties encouraged a strong sense of community, triggered by posts and comments that initiated discussions or prompted the sharing of experiences.
The artefacts, which were another form of algorithm, condensed the knowledge from every block into visual representations, incorporating accumulated data from literature, browsing, forums, suggestions and the course experience itself, while encouraging the discovery of new media to represent them.
How has the lifestream experience helped me understand the implications algorithms have for education? Like the algorithms that generated content for my lifestream according to my personal choices, algorithms in education make the collection of large amounts of data possible. Student data is collected (‘datafication’) and curriculum content rendered digitally (‘digitization’) to maximise the learning experience, removing what is extraneous and presenting material in the best way possible for learners, a process human agency alone would find difficult to replicate, as I observed through the lifestream exercise.
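A minimal sketch, with invented event names and a made-up activity log (not drawn from any real platform), of how such datafication reduces learner activity to a handful of summary numbers:

```python
# A toy sketch of 'datafication': collapsing a learner's activity log
# into summary counts that a platform could rank, group or compare.
# The event names and log entries are invented for illustration.

from collections import Counter

activity_log = [
    {"learner": "A", "event": "video_watched"},
    {"learner": "A", "event": "quiz_attempted"},
    {"learner": "A", "event": "forum_post"},
    {"learner": "A", "event": "video_watched"},
]

def summarise(log):
    """Reduce raw activity to counts: the learner becomes a row of numbers."""
    return Counter(entry["event"] for entry in log)

print(summarise(activity_log))
# Counter({'video_watched': 2, 'quiz_attempted': 1, 'forum_post': 1})
```

Everything not captured by the chosen events simply disappears from the profile, which is precisely the reduction I question below.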
Am I thoroughly convinced by the processes of ‘datafication’ of student information and ‘digitization’ of curriculum content (Knox et al., 2020)? Again…not entirely. While datafication is a precious resource in modern educational systems, the adage that the end justifies the means keeps coming to mind.
What happens to the learner when his/her actions are reduced to a collection of numbers (accrued from ‘pervasive data mining and data analytic packages’ (Williamson, 2017)) that can be broken down, interpreted, sectioned and grouped into blocks, much as entertainment media or products are categorised on online shopping and streaming platforms? Is there a risk that learners become a ‘product of consumerist analytic technologies’ (Knox et al., 2020) and of black-boxed, trade-secret algorithms, whereby the value of a person lies in the data obtained by tracking his/her behaviour and success?
My lifestream algorithm has been an occasion for me to be both author and agent of the data selected to represent my activity during the course. This is not always the case with educational platforms. It is therefore necessary that the collection of student data, even when carried out by large corporations, is made as transparent as possible (by questioning and studying it), and that the digitization of the learning experience keeps both learners and teachers at its centre, safeguarding their autonomy so that they can still contribute to the output of the algorithm.
References:
Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29. DOI: 10.1080/1369118X.2016.1154087
Knox, J., Williamson, B., & Bayne, S. (2020). Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), 31-45. DOI: 10.1080/17439884.2019.1623251
Williamson, B. (2017). Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: The digital future of learning, policy and practice. Sage.
Featured image created and modified from images obtained on https://pixabay.com.