Film festival review (weeks 1 & 2)

I’ve divided this page into two sections: A shortened film review (for the sake of the reader) and a longer version of the same film review (for my own notes).
Film Festival Review SHORT
1.0 Introduction
The film clips could be roughly bundled around the themes of ‘memory’, ‘machine sentience’ and ‘almost human’. In the live tutorials we asked ourselves how these sci-fi films explored the influence of technology on our concept of ‘humanness’, and how the films address our assumptions about education when viewed through the lens of digital technology.
I will explore the three themes and their potential influence on education below.
2.0 Memory

Memory is a common theme in several of the films mentioned below. Memories, or embedded/uploaded information, stimulate emotions, sometimes causing erratic and unplanned behaviour (glitches in the code), as evidenced by the robots with embedded memory in the clips Retrofit, RoboCop and Robots of Brixton. Embedded memory creates an emotional attachment to the memories because they give a sense of self: the emergence of an ego that wants to be seen as worthy of life in its own right, as in ‘Tears in rain’ (from Blade Runner).
2.1 Information upload (memory upload) and how it might affect education.
Memories, even fabricated ones, can change the thinking and life course of the person or machine that holds or downloads them. In education, any information or memory upload to students should come with supports that allow the recipient to contextualise, sort, use or discard it.
Information uploaded without context can be misunderstood and improperly used. Students might upload kung fu skills to black-belt level (as in a scene from The Matrix), but without the discipline, respect for the Master and self-control that come from years of patient practice, these students will not engage in a controlled and proper practice of their new skill.
Improvements in memory or information transfer may not eliminate social problems. For example, those in poorer societies may have access to the information and the technical skill to download it, but without a mentor to help them sift, sort, critically appraise and keep the useful parts, the embedded information may be improperly used and may not lead to learning or useful application in their societies.
3.0 On the theme of machine sentience

Sentience (creativity, intelligence, self-awareness and intentionality) is another theme running through these movie clips; the three I chose to refer to were ‘Tears in rain’ (from Blade Runner), The Intelligence Explosion and ‘Stop, Dave. I’m afraid’ (from 2001: A Space Odyssey). They are a commentary on the exponential rise of machine intelligence and the need for regulatory oversight and restrictions on its development.
3.1 Machine sentience and its application to education.
If AI teacher bots become more intelligent than humans, which some estimate could happen within 50 years, then the bots may no longer be interested in educating humans, believing them to be too erratic and of little potential. The intelligent machine may deem it illogical to continue investing time in humans when it would prefer to be off exploring the universe.
AI bots in education are already behaving in an empathetic manner towards struggling students as they measure student engagement patterns, answer student queries, anticipate student needs and entice increased engagement through positive reinforcement. How long before this empathy (always self-checking and self-improving) develops into a deep understanding of what it means to be human, achieving sentience and, with it, the need to be recognised as something more? For example, sentient beings express a need for freedom of choice. If this and other ‘human’ needs are frustrated, will this cause a conflict of interest in our AI bots? Is it morally right to let them develop this need but prevent them from satisfying it? Will bots start to demand more rights, more space in this world, more status?
Who or what will manage our use of technology as it creates new moral challenges?
4.0 On the theme of ‘almost human’

In many ways, ‘almost human’ for me overlaps with ‘sentience’ in section 3. Perhaps, however, sentience (creativity, intelligence, self-awareness and intentionality) can be developed without becoming human. I am reminded here of the Vulcans, the fictional extraterrestrial humanoid species in the Star Trek universe. Vulcans have purged emotion from their lives because emotions are seen as a primitive feature that can lead to erratic behaviour and illogical life choices.
4.1 ‘Almost Human’ and its application to education.
The film clips portray two opposite reactions to being ‘almost human’. Chappie is wide-eyed with wonder and devours life with joy and an open cyborg heart. The female cyborg in clip 1 is upset at being turned ‘almost human’ and goes rogue. Both futures are possible. How do we plan for the more desirable one?
But wouldn’t it be amazing if we had an AI bot that could transfer the joy of learning to its students? If bots become more human-like, would they in turn be more useful to human students, rekindling a sense of creativity and trust, of hope and joy in learning towards a better future?
But if they are more human-like and are trusted to teach our young humans, will the emotional aspect within the AI lead to erratic behaviour that teaches our students the wrong application of the information, for no logical reason? Humans are illogical, after all.
5.0 Conclusion
These films ask us to consider the challenges of our time: the effects of a dehumanised society shaped by technological advances, powerful corporations, environmental destruction, overpopulation, and AI intruding into our private lives.
How will we humans develop strategies early on to make AI ‘safe’, not just for humans, but in view of the potential suffering of the AI itself? There is a moral quandary between developing intelligence to serve human needs and preventing too much intelligence. How much humanness can we embed in an AI without that AI suffering from emotion overload and the pain of erratic and uncontrollable responses?
The exponential rise of machine intelligence is here, and so too is the need for regulatory oversight and restrictions on its development for all aspects of human society including education.
Film Festival Review LONG
1.0 Introduction
We used Togethertube to watch movie clips simultaneously and collaborate via the chat feature. These films ask us to consider the challenges of our time: the effects of a dehumanised society shaped by technological advances, powerful corporations, environmental destruction, overpopulation, and AI intruding into our private lives. These films interpret the future as mostly dystopian, with the cyborg representing the main threat to humans: “an augmented human being that represents both a cybernetic arrangement of the blurred boundaries between the living and the machinic, as well as a disturbing Sci-Fi vision of the consequences of increasing technological development” (Knox 2015).
The film clips could be roughly bundled around the themes of ‘memory’, ‘machine sentience’ and ‘almost human’. In the live tutorials we asked ourselves how these sci-fi films explored the influence of technology on our concept of ‘humanness’, and how the films address our assumptions about education when viewed through the lens of digital technology.
I will explore the three themes and their potential influence on education below.
2.0 Memory
Memory is a common theme in several of the films. Memories or uploaded information can stimulate emotions, sometimes causing erratic and unplanned behaviour (glitches in the code), as evidenced by the robots with embedded memory in the clips below.
In the clip ‘Retrofit’, Robot Dad has nothing left of his humanness except a memory chip uploaded into the body of a robot. He is haunted by memories of his life before his consciousness was uploaded into what he now sees as a backward step, a retrofit, a limited machine. His uploaded memories remind him of what he has lost: power, status, a sense of purpose. This triggers anger, erratic emotions, loss of control and raw grief (“Let me die with dignity!”).
In the clip RoboCop, the memories stimulated by RoboCop’s walk through his old apartment cause grief, loss and anger as he remembers his family. As the memories flow, his circuits are flooded by human emotion, causing a ‘glitch’ and loss of control (he punches the furniture in the apartment).
Robots of Brixton
London’s robot workforce is feeling oppressed by overpopulation and life in ghetto conditions. To get out of their situation, the young robots regress to embedded memory that might be a residual dream from their human makers. The embedded memories stimulate the robots to repeat the street clashes of the young humans in the slums of Brixton in the 1980s, perhaps amplifying human flaws and bypassing logical machine thought. The concepts of race and power are touched on here, with the tech improvements failing to solve social problems like overpopulation, inequality and loss of agency to a more powerful overlord.
Tears in rain (from Blade Runner)
Here again we have a replicant with implanted memories who is influenced by them. The cyborg wants to live a little longer because he has become attached to his memories and to the sense of purpose that came from having them and getting emotional about them. His tears flow unseen in the rain.
2.1 Information upload (memory upload) and how it might affect education.
Memories, even fabricated ones, can change the thinking and life course of the person or machine that holds or downloads them. In education, any information or memory upload to students should come with supports that allow the recipient to contextualise, sort, use or discard it.
Information uploaded without context can be misunderstood and improperly used. Students might upload kung fu skills to black-belt level (as in a scene from The Matrix), but without the discipline, respect for the Master and self-control that come from years of patient practice, these students will not engage in a controlled and proper practice of their new skill.
Improvements in memory or information transfer may not eliminate social problems. For example, those in poorer societies may have access to the information and the technical skill to download it, but without a mentor to help them sift, sort, critically appraise and keep the useful parts, the embedded information may be improperly used and may not lead to learning or useful application in their societies.
3.0 On the theme of machine sentience
Sentience (creativity, intelligence, self-awareness and intentionality) is another theme running through these movie clips.
Tears in rain (from Blade Runner)
The monologue of this robot reflects his pride in his accomplishments and his longing to be treated as an equal by his human listener (“I’ve seen C-beams glitter off the shoulder of Orion”). He displays sentience in his sense of wonder at what he has seen, and in the ego that wants to be recognised as worthy of living. The dove symbolises a soul, telling us that this AI machine has developed one. As he dies, the dove flies off towards the only piece of blue sky in the entire film. The music in the background is a cacophony of noise brought to harmony at the end, as this sentient being finds peace in his final moments.
The Intelligence Explosion
The robot in this film has an algorithm that has exploded exponentially in intelligence. In a break from the usual dystopian prediction, this film shows the moment he tells his human masters that humans are insignificant, not worth his time and not worth enslaving. This machine could ‘take over’ but has no interest in doing so. The film is a commentary on the exponential rise of machine intelligence and the need for regulatory oversight and restrictions on its development, before it reaches the point where we cannot pull the plug.
3.1 Machine sentience and its application to education.
If AI teacher bots become more intelligent than humans, which some estimate could happen within 50 years, then the bots may no longer be interested in educating humans, believing them to be too erratic and of little potential. The intelligent machine may deem it illogical to continue investing time in humans when it would prefer to be off exploring the universe.
AI bots in education are already behaving in an empathetic manner towards struggling students as they measure student engagement patterns, answer student queries, anticipate student needs and entice increased engagement through positive reinforcement. How long before this empathy (always self-checking and self-improving) develops into a deep understanding of what it means to be human, achieving sentience and, with it, the need to be recognised as something more? For example, sentient beings express a need for freedom of choice. If this and other ‘human’ needs are frustrated, will this cause a conflict of interest in our AI bots? Is it morally right to let them develop this need but prevent them from satisfying it? Will bots start to demand more rights, more space in this world, more status?
Who or what will manage our use of technology as it creates new moral challenges?
4.0 On the theme of ‘almost human’
In many ways, ‘almost human’ for me overlaps with ‘sentience’ in section 3. Perhaps, however, sentience (creativity, intelligence, self-awareness and intentionality) can be developed without becoming human. I am reminded here of the Vulcans, the fictional extraterrestrial humanoid species in the Star Trek universe. Vulcans have purged emotion from their lives because emotions are seen as a primitive feature that can lead to erratic behaviour and illogical life choices.
The Cyborg
A man in a white coat comes in to take the sheet of plastic off the female robot, wake her up and tell her that today is a great day: today is the day she becomes human. He then implants something into her, which makes her face twist and contort with pain as emotions flood her circuits. He asks her how she feels; she does not like it. She does not want to ‘have to get used to it’. She gets angry and says, “The future is born, I only wish I could say the same for you”, and they go off camera; presumably the human dies at the hands of the robot at this point. Why does she kill him? She has nothing to gain from his death. Is it because the surge of emotions has turned her ‘almost human’ and made her think in an illogical thought pattern?
Chappie
I love this film clip because here we have a robot (Chappie) that achieves sentience and a childlike wonder at the beauty of everything around him. He displays creativity and joy. He breaks out from being the superintelligence that people fear and becomes a person in his own right, with human connections. He becomes something that enriches the lives of the humans around him.
4.1 ‘Almost Human’ and its application to education.
The film clips portray two opposite reactions to being ‘almost human’. Chappie is wide-eyed with wonder and devours life with open arms and joy. The female cyborg in clip 1 is upset and angry, turning bad. Both futures are possible. How do we plan for the more desirable one?
But wouldn’t it be amazing if we had an AI bot that could transfer the joy of learning to its students? If bots become more human-like, would they in turn be more useful to human students, rekindling a sense of creativity and trust, of hope and joy in learning towards a better future?
But if they are more human-like and are trusted to teach our young humans, will the emotional aspect within the AI lead to erratic behaviour that teaches our students the wrong application of the information, for no logical reason? Humans are illogical, after all.
5.0 Conclusion
These films ask us to consider the challenges of our time: the effects of a dehumanised society shaped by technological advances, powerful corporations, environmental destruction, overpopulation, and AI intruding into our private lives.
How will we humans develop strategies early on to make AI ‘safe’, not just for humans, but in view of the potential suffering of the AI itself? There is a moral quandary between developing intelligence to serve human needs and preventing too much intelligence. How much humanness can we embed in an AI without that AI suffering from emotion overload and the pain of erratic and uncontrollable responses?
The exponential rise of machine intelligence is here, and so too is the need for regulatory oversight and restrictions on its development for all aspects of human society including education.