“…there are things that computers cannot do – things at the very core of the project of education, things at the very core of our work: computers cannot, computers do not care. It may be less that our machines are becoming more intelligent; it may be that we humans are becoming more like machines.” (Watters, 2015)
“What if we recruited teachers who were deep understanders of students, not necessarily of content?” (TEDx Talks, 2017)
For the purpose of this final post, and in attempting to answer the three questions posed to us, I’ll focus primarily on the YouTube algorithm. It has had the most marked effect on my lifestream choices in comparison to Twitter and Pocket, and has more relevance to my views on algorithms in education. With both Pocket and Twitter, I have been led less by suggestions: Pocket emails me regularly with a list of suggested articles that have no relevance to what I’ve shared, and I have engaged less with Twitter than with YouTube, so have been less likely to follow its suggestions.
I have YouTube clips set to autoplay and have regularly gone off on a video-watching journey as “related” videos queue up next. The same has been true of my ‘recommended’ videos, where I have seen suggestions on topics similar to those I have watched and liked. I have presumed these are suggested because of keyword similarities in the titles, along with clips watched by other users who have also viewed the video I am consuming. This process became apparent to me this week when I started researching examples of algorithms in non-educational contexts (the Amazon article), followed by educational examples (adaptive learning technologies) and then the use of data to inform teaching. As soon as I started searching for the things that make a great teacher, multiple clips began popping up, and are still popping up, on my recommended list. The final clip, about making good teachers great (TEDx Talks, 2017), made a point about teachers using multiple points of data to make a decision on a student’s progress, and this will help me respond to the second question.
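To make my guess about those two signals concrete, here is a deliberately simplified sketch in Python of how keyword overlap between titles might be blended with co-watch counts from other viewers to rank an “up next” list. Every title, count and weighting below is invented for illustration; YouTube’s real recommendation system is far more complex and not public.

```python
import re
from collections import Counter

STOP_WORDS = {"a", "an", "the", "of", "to", "what", "makes", "how", "my"}

def title_keywords(title):
    """Extract lower-cased words from a title, dropping common stop words."""
    return {w for w in re.findall(r"[a-z]+", title.lower()) if w not in STOP_WORDS}

def score(current, candidate, co_watch_counts):
    """Blend title-keyword overlap with how often others watched the candidate next."""
    overlap = len(title_keywords(current) & title_keywords(candidate))
    co_watched = co_watch_counts.get((current, candidate), 0)
    return overlap + 0.5 * co_watched  # weighting is arbitrary, for illustration only

# Invented viewing data: (video watched, video watched next) -> count.
co_watch_counts = Counter({
    ("what makes a good teacher great", "good teachers vs great teachers"): 12,
    ("what makes a good teacher great", "adaptive learning technologies explained"): 3,
})

current = "what makes a good teacher great"
candidates = [
    "good teachers vs great teachers",
    "adaptive learning technologies explained",
    "unboxing my new laptop",
]

# Rank candidates for an imagined "up next" list.
for video in sorted(candidates, key=lambda c: score(current, c, co_watch_counts), reverse=True):
    print(video, score(current, video, co_watch_counts))
```

Even in this toy version, the clip sharing two title keywords and a strong co-watch history rises to the top, which matches my experience of teacher-related clips crowding my recommendations once I had watched one or two of them.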
Kitchin (2017) discusses how algorithms minimise not only costs for businesses but also human error and bias. His point about how they adjudicate increasingly consequential decisions in our lives was relevant to my situation when my algorithm project clip was removed for breaching guidelines. I have no doubt that a human reviewer would have recognised the error in that decision, but involving one would have added cost to the business without any gain. This was an example of the algorithm failing to be a guarantor of objectivity (Knox, 2015). The rather black-and-white nature of that decision was irritating and, as a consumer, I would have preferred a different outcome, but it was also low stakes for me and I will continue to use the platform. I understood how to bypass that system and was still able to find a different platform to share my work. However, what about those who don’t have the knowledge, support or time to find a different solution? This is when algorithms can amplify inequalities (Knox et al., 2020). A real-life example is welfare recipients in Australia who have been issued bills (often unfairly) to repay money by an algorithm, in a scheme that has been dubbed ‘robodebt’. This need to have a person, as opposed to a robot, to understand and advocate on one’s behalf was highlighted by a question that Azul Terronez posed to students: what makes a great teacher? (TEDx Talks, 2017)
Terronez put this question to thousands of children, many of them primary-aged. Many of the responses would have confused a computer, as they needed interpreting and their meaning inferred. These young people had something worthwhile to say but didn’t necessarily have the vocabulary to express it. I experience this many times a day as a primary school teacher, where I either proactively give students the vocabulary they need or respond to “errors” and use them as teaching opportunities. This realisation helps me respond to the third and final question. I was heading towards this understanding in my response to David Yeat’s comment on my algorithm project, where I discussed how I see adaptive technologies as an “enhancement to me rather than a substitute”. Algorithms can help me understand many things about a student in my class that I would otherwise never comprehend, or would comprehend far more slowly. However, I can do many things a robot can’t, and vice versa. Both provide great value, but it’s important not to rely too heavily on one or the other. The third, and I would argue best, way is a blend of the algorithmic and the human.
Kitchin, R. (2017) ‘Thinking Critically about Researching Algorithms’, Information, Communication & Society, 20(1), pp. 14-29.
Knox, J. (2015) ‘Critical Education and Digital Cultures’, in Peters, M.A. (ed.) Encyclopedia of Educational Philosophy and Theory.
Knox, J., Williamson, B. and Bayne, S. (2020) ‘Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies’, Learning, Media and Technology, 45(1), pp. 31-45.
TEDx Talks (2017) ‘What makes a good teacher great?’. Available online at https://www.youtube.com/watch?v=vrU6YJle6Q4 (accessed 21 March 2020).
Watters, A. (2015) ‘The Algorithmic Future of Education’, Hack Education. Available online at http://hackeducation.com/2015/10/22/robot-tutors (accessed 21 March 2020).