Today we had our first film festival tutorial, and I very much enjoyed the discussion and the way we ran it. We watched four short films together using the Watch2Gether online tool.
The exchange via chat went well, and a lot of different ideas were shared.
The recurring issue was human–cyborg/AI interaction, displayed very interestingly in Retrofit.
Some of the aspects, in my opinion (and others'), were:
- uploading human emotions into a cyborg/AI, with uncontrollable results
- emotional robots vs. unemotional human beings / rational robots vs. overreacting or overemotional humans -> role swapping
- robots judging/identifying human limitations
- utopian visions of robot–human interrelation
- control vs. self determination
- free will vs. algorithm-based behaviour
Altogether, the movies presented a less dystopian vision of the future (with the exception of the 2001 clip) than many sci-fi movies do.
Retrofit, Cyborg, and Intelligence Explosion also experiment with ideas of how robots or AI might be integrated into our society, how they might react, and how society might react.
What are your opinions?
Thank you for summing up the session yesterday, Thomas, and adding your thoughts!
Something that struck me during our discussions was the utopian/dystopian narrative that is so often portrayed: usually one or the other. Similarly, my thinking about autonomous will, where responsibility lies (human or machine?), and so on can often end up as one or the other.
Yet, looking at the Hayles (1999) reading (How we became posthuman: virtual bodies in cybernetics, literature, and informatics), a contrasting view is suggested, one where these boundaries and binary narratives are blurred. Here are a few quotes that stood out to me today:
> ‘In the posthuman view, by contrast, conscious agency has never been “in control”. In fact, the very illusion of control bespeaks a fundamental ignorance about the nature of the emergent processes through which consciousness, the organism, and the environment are constituted.’ (Hayles 1999: 288)
> ‘distributed cognition replaces autonomous will’ (Hayles 1999: 288)
> ‘Just as the posthuman need not be antihuman, so it also need not be apocalyptic’ (Hayles 1999: 288)
Still mulling over all this, but would be great to hear everyone’s thoughts!
Great to see your lifestream up and running, Thomas. Do note that you need to include an end-of-week summary post – see the course guide for all of the details – this should be around 250 words, reflecting on your lifestream activity from the previous week. This will also be the main post that I comment on each week.
‘uploading human emotions into a cyborg/ AI with uncontrollable results’
Yes, it often doesn’t seem to end well! Do you think this assumption, that the mind is ‘information’ capable of being uploaded, is prevalent in some of the ways people talk about educational technologies?
‘robots judging/identifying human limitations’
Yes, there is something about a perceived ability to see ‘us’ from the outside, and with precision. Is this what learning analytic technologies, for example, promise?
‘ideas of how robots or AI might be integrated into our society, how they might react, and how society might react’
This seems to be a key theme, doesn’t it? What might be the issues of integration in education? How would automated agents ‘fit in’? Would they play a supporting role, or would they have some kind of authority?