Our second week continued with questions raised through films (including A New Hope and Cyborg) and books (Machines Like Me and Iain M. Banks’ series). Themes that particularly struck me include:
2) Assuming that ‘human’ is neither an objective nor an inclusive term (Braidotti 2013: 26), how might this affect how we think about ‘artificial intelligence’, power and agency?
2) If we take a ‘dynamic partnership between humans and intelligent machines’ (Hayles 1999: 288) as a point of departure, how might we consider concepts such as consciousness, (distributed) cognition and agency?
3) Can machines make ‘moral’ decisions?
4) Building on a discussion about gender and ‘virtual’ identities, are we ‘performing’ or is it ‘performative’? Should there be a distinction between ‘real’/‘virtual’ here, and how do we define ‘real’? (The Matrix comes to mind here…) How might this play out in our identities on Twitter, lifestream-blogs etc.?
5) Thinking beyond assumptions that the ‘human’ is at the centre of education and that technology is a ‘tool’ or ‘enhancement’, what are the implications of a complex entanglement of education and technology (Bayne 2015: 18) for this course?
Complex entanglement (‘Entanglement’, ellen x silverberg, Flickr)
Many discussions took place via Twitter, drawing in questions from the public:
Great thread! Can i throw in another q? Where would AI forms sit? Perhaps advanced AI life forms? Do we need to consider agency?
— Louise Connelly (@lconnelly09) January 23, 2020
I have also been commenting on others’ lifestream-blogs, bringing them in as feeds.
Following on from last week’s map, I have opened new avenues and revisited old ones:
I have also experimented with visualisations of my feed ahead of our visual artefact task…
On your third point – can robots make moral decisions – I just finished McEwan’s ‘Machines Like Me’. The choice that Adam makes towards the end, regarding Miranda’s criminal act, got me thinking about whether AI can be MORE moral than humans … but morality is cultural, historic, social, subjective in many ways … what would a robot teacher in a Japanese classroom choose to do in an ethical dilemma, as opposed to one in a US classroom?
Absolutely! Some might say Adam’s decisions near the end were more ‘rational’ or ‘moral’ than those of his ‘human’ counterparts… although here we go with the dualisms (‘rational’/‘irrational’, ‘moral’/‘immoral’, ‘human’/‘machine’)…
As you say, it’s all subjective. What may be considered ‘rational’ by some may not be by others, and what someone considered ‘moral’ fifty years ago might be considered ‘immoral’ by the same person today.
There’s some interesting discussion in this paper about ‘rationality’ and education, and the ‘humanistic model of lifelong learning’:
‘Habermas, lifeworld and rationality: towards a comprehensive model of lifelong learning’ (Regmi 2017) https://doi.org/10.1080/02601370.2017.1377776
‘Assuming that ‘human’ is neither an objective nor an inclusive term (Braidotti 2013: 26), how might this affect how we think about ‘artificial intelligence’, power and agency?’
Great question! This is where people are beginning to question the particular assumptions ‘built in’ to intelligent systems. For example, Joy Buolamwini’s much-publicised work showed how facial recognition systems had in-built biases relating to skin type and gender: http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212. The point is that designers of AI already have an idea of what they think is ‘normal’ humanness, and this becomes the basis of potentially significant decision-making.
‘what are the implications of a complex entanglement of education and technology (Bayne 2015: 18) for this course?’
One of the aspects of EDC we might reflect on here is the automated capacity of the lifestream. To what extent do you feel ‘in control’ of your work, where items are added through pre-programmed rules?
Great to see that you’ve set up some comments feeds!
Thank you, Jeremy!
> ‘One of the aspects of EDC we might reflect on here is the automated capacity of the lifestream. To what extent do you feel ‘in control’ of your work, where items are added through pre-programmed rules?’
Absolutely, and while I am able to set up, customise and curate the feeds, I still sometimes feel “guided” in some ways… I wonder if this is a ‘dynamic partnership between humans and intelligent machines’ (Hayles 1999: 288)!
In using the Artsteps application to set up my visual artefact, I also found myself “inspired” simply by the way things appeared or were presented at times, be it “random” or “by design”.
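To make those ‘pre-programmed rules’ a little more concrete, here is a minimal sketch (in Python, using the feedparser library) of the kind of automated rule that pulls items into a lifestream. It is purely illustrative: the feed URLs and function names are hypothetical, and this is not the actual plumbing behind this blog.

```python
# Illustrative sketch only: not the real set-up behind this lifestream.
# A 'pre-programmed rule' that pulls new items from chosen RSS feeds.
# Requires: pip install feedparser
import feedparser

# The curation step: feeds I have chosen to follow (hypothetical URLs).
FEEDS = [
    "https://example-lifestream.wordpress.com/feed/",
    "https://example-lifestream.wordpress.com/comments/feed/",
]

def collect_new_items(feed_urls, seen_links):
    """Return entries not seen before. This is the automated part:
    once the rule is written, the machine decides what arrives and when."""
    new_items = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            link = entry.get("link")
            if link and link not in seen_links:
                seen_links.add(link)
                new_items.append({"title": entry.get("title", "(untitled)"),
                                  "link": link})
    return new_items

if __name__ == "__main__":
    seen = set()
    for item in collect_new_items(FEEDS, seen):
        print(f"Would add to lifestream: {item['title']} ({item['link']})")
```

Once a rule like this runs on a schedule, the machine decides what arrives and when, which is perhaps exactly where that feeling of being ‘guided’ comes from.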