Liked on YouTube: Ben Williamson, University of Edinburgh

Ben Williamson asks whether the misuse of algorithms and big data collection can shape the way the public perceives education technology and hence resists it. He gives a number of very significant examples of how technologies and algorithms can go wrong. Some of these cases feature systems that had never been tested; others raise questions about the privacy implications of data collection practices.

He describes a number of studies looking into ways of collecting ‘intimate data about the bodies and the brains of students’, such as DNA IQ tests based on saliva samples and ‘neuro-optimized’ education platforms that collect data ‘leaking’ from children’s brains through brain bands. These systems claim to make predictions about children’s intelligence and attainment, but how accurate are they, and are they even ethical at all?

 

Ben Williamson, Senior Researcher, University of Edinburgh

With the Twitter mob and the critical edtech activists, are we experiencing an edtech push-back? Why?

www.uis.no
via YouTube https://www.youtube.com/watch?v=UVg56JmpGV8

Algorithmic Play Artefact

Please find my Algorithmic Play artefact HERE.

This has been a busy and very interesting week, and I hope my artefact has managed to incorporate a bit of everything: analysis, humour, reflection and some conclusions. I hope you enjoy it.

 

 

How AI will destroy Education

[cartoon]

Is data collected from students always reliable when making decisions on education?

from Diigo https://ift.tt/2zO9KHO
via IFTTT

A number of educational models, such as constructivist and experiential approaches, have shown that performance (in the form of marks) is not necessarily a reliable gauge of whether learning has taken place, and yet marks are often part of the data collected to determine where students go wrong. Furthermore, predictions made on unreliable or inconclusive data can do more harm than good.

One of the less discussed aspects of AI in education is the isolation of the learner from the environment and from peers, and yet robust AI systems should take these variables even more into account. This is perhaps the idea behind modern behavioural approaches that put the environment (of learning) back into the equation by designing ‘architectures’ that take into account the ‘physical, socio-cultural and administrative environments in which choices are framed’ (Knox et al., 2020).

In spite of this, the same arguments used to justify removing the teacher from the learning equation are often voiced when talking about AIEd. There are aspects of the learning process, often related to the community of learning, that remain absent from learning algorithms. These include notions surrounding the emotive aspect of learning. Will these aspects be truncated in favour of ‘cleaner solutions’?

References:

Knox, J., Williamson, B. & Bayne, S. (2020) Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45, DOI: 10.1080/17439884.2019.1623251

People Want to Know About Algorithms—but Not Too Much

When we interact with algorithms, we know we are dealing with machines. Yet somehow their intelligence and their ability to mimic our own patterns of thought and communication confuse us into viewing them as human. 

Kartik Hosanagar

The trust that students place in education systems is a finely balanced thing, and this article shows how too much information can create distrust and a loss of confidence in established systems.

from Diigo https://ift.tt/2Tyq4He
via IFTTT

References:

Hosanagar, K. (2019) People Want to Know About Algorithms – but Not Too Much. Available at: https://www.wired.com/story/book-excerpt-algorithm-transparency/. (Accessed: 8th February 2020).