Week 11 Summary – Algorithms in Education

Click for article – How AI is taking over the classroom.

In the final, extended week of the course I decided to extend my social media interactions on algorithms by exploring how they increasingly intersect with secondary education. My school recently invested in Century Tech, a young technology start-up that offers a teaching and learning platform powered by artificial intelligence. Using vast quantities of student data gathered through diagnostic assessment, as well as student responses in learning activities, its algorithms generate individualised pathways for a child’s progress in core subject areas.
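Century Tech does not publish the detail of its algorithms, so by way of illustration only, the sketch below shows one minimal way an adaptive pathway could work: keep a running mastery estimate per topic, nudge it after each answer, and recommend the weakest topic next. The update rule and all names are my own assumptions, not Century Tech’s system.

```python
# A minimal, hypothetical sketch of an adaptive learning pathway.
# The mastery model and update rule are assumptions for illustration,
# not Century Tech's actual algorithms.

def update_mastery(mastery: float, correct: bool, rate: float = 0.2) -> float:
    """Nudge a 0-1 mastery estimate towards 1 or 0 after each answer."""
    target = 1.0 if correct else 0.0
    return mastery + rate * (target - mastery)

def next_topic(mastery_by_topic: dict) -> str:
    """Recommend the topic with the lowest estimated mastery."""
    return min(mastery_by_topic, key=mastery_by_topic.get)

mastery = {"fractions": 0.80, "algebra": 0.35, "geometry": 0.60}
mastery["algebra"] = update_mastery(mastery["algebra"], correct=True)
print(next_topic(mastery))  # -> 'algebra' (still the weakest topic)
```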

This made me ponder the role that algorithms may play in education in the coming years, and the impact that an emerging algorithmic culture will have on classrooms and teachers. Adrien DuBois, in his TEDx talk, is vociferous in his view that ‘teachers are on the path to becoming obsolete’. It is impossible to know how accurate this assertion will prove. That said, other evidence from the lifestream suggests that whilst AI, and the algorithmic culture it creates, certainly threatens the viability of corporeal teachers, there is a strong counter-argument that human creativity and insight, qualities inherent in the teaching profession, are irreplaceable and cannot be replicated by machines.

It is perhaps the middle ground presented in the third YouTube clip that best shows what the future truly holds. The clip explores the growing symbiotic relationship between teacher and technology, with AI and its associated algorithms working as supporting actors alongside the teacher in developing students’ education, particularly in supporting special educational needs and securing student engagement and positive behaviour.

Week 10 Final Summary – Thinking About Algorithms


As I draw to the end of Education and Digital Cultures, there are a number of issues I would like to reflect upon to close the blog. As a nascent student of digital education, algorithms have been a key player in the development of my own knowledge and understanding, ‘sorting, filtering, searching, prioritising, recommending, deciding and so on’ as the course has progressed. As David Beer states, an algorithm provides us the opportunity ‘to shape our knowledge and produce outcomes’ (Beer, 2017, 2). This has certainly been true throughout the EDC module, and there is no doubt algorithms have played a vital role in sculpting my understanding of digital cultures within an educational setting, and in cultivating my success in completing this lifestream.

Despite much of the evidence from the core reading that ‘algorithms produce worlds, rather than objectively account for them’ and that they are ‘manifestations of power’ (Knox, 2015), I would still hold the view that much of the algorithmic governance, in the context of my EDC learning, has been fairly innocuous in nature. Perhaps others would argue this position is naïve, but generally I am confident that the algorithms throughout the lifestream have steered my learning in positive directions, offering sensible and useful links to capture my interest and further my learning. This was most frequently noticeable within my use of YouTube, where recommendations normally had congruence with the prior clip that I had watched or searched for. Whilst my algorithmic play noted the problematic nature of this in other settings, and how it could entrench users in a negative cycle of confirmation bias, within an educational setting there are real benefits for the potential it has to further learning. In short, I have not felt undertones of subliminal messaging encoded into algorithmic suggestions over the duration of this course. That said, I do not deny the existence of algorithmic power, or the manipulative qualities algorithms possess. Indeed, ‘algorithms …are the new power brokers in society’ (Diakopoulos, 2013 cited in Kitchin, 2017). That cannot be denied.

Finally, I wonder where algorithmic governance leaves education, particularly high school children, many of whom are happy to mindlessly watch clip after clip on YouTube, or click on every link or suggestion within their social media. I wonder how much this impacts on their ability to harness enquiry skills, ask valid questions, and steer the direction of their own learning. Do the algorithms exert more influence on their learning pathway than their own processes of enquiry and logical thinking? Are the algorithms encouraging students to think less, and follow more? Is this further contributing to a spoon-feeding, instant-gratification culture that appears to be growing in younger generations? Furthermore, if the algorithms lead students down an incongruous route, how much time would be wasted watching superfluous clips or heading up ‘digital blind alleys’ before a student is able to realign with the task at hand? Perhaps, with this in mind, it is incumbent upon the educator to ensure that use of this media is mitigated, or that digital tasks are directed by the teacher rather than by an algorithm.

References
Beer, D. (2017) The social power of algorithms, Information, Communication & Society, 20:1, 1-13, DOI: 10.1080/1369118X.2016.1216147

Kitchin, R. (2017) Thinking Critically about Researching Algorithms. Information, Communication & Society, 20:1, 14-29, DOI: 10.1080/1369118X.2016.1154087

Knox, J. (2015) Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI: 10.1007/978-981-287-532-7_124-1

Week 10 – Algorithmic Play

https://www.timetoast.com/timelines/2214815

Choosing my social media – For my algorithmic play activity, I decided to focus on my most frequently visited social media site – YouTube. I watch a variety of YouTube clips almost every evening as one of my primary sources of news, and I often follow my selected pathway of clips, as recommended by the YouTube algorithms.
With this in mind, I was interested to monitor more closely how the algorithms ‘guide’ my recommended viewing and personal decision making, and whether or not I was actually in the driving seat.

I decided to start with the type of video that I watch almost every day – The View. This is a daytime American talk show, co-hosted by four women, including Whoopi Goldberg. The show is highly political in nature, and discussion amongst the women mostly centres around American politics, and in particular the Trump Administration.

Methodology – Before starting the algorithmic play I decided to delete my watch history, to avoid algorithmic influence from previous YouTube sessions and other videos I had watched.
When the sidebar of recommendations appeared, I opted for clips that piqued my personal interest, always choosing one from the top six recommended videos. I continued clicking through the recommendations for over 30 videos, and recorded where they took me. The results of this pathway are shown on the Timetoast attached to this blog.

Reflections – It is clear that starting the algorithmic play with a television programme such as The View resulted in a preordained pathway being laid by the algorithms. The View, as mentioned, is highly political, with three-quarters of the panel coming from liberal, left-wing backgrounds. The programme features a high degree of ‘Trump-bashing’ and although there is one Republican panellist, Meghan McCain, unlike the majority of the Republican Party she is an ardent, outspoken critic of President Trump. The initial clip I watched focused on the Trump Administration’s lacklustre response to the coronavirus pandemic.
It seems that the tenor and tone of this particular clip were highly influential on the subsequent recommended pathway suggested by the algorithms. Each of the clips that followed was imbued with one or more of the following themes (a tallying sketch follows the list):

• Left-wing/liberal news organisations, e.g. Vox Media, CNN, MSNBC (20 clips)
• Anti-Trump (9 clips)
• Coronavirus pandemic (7 clips)
• Race relations (4 clips)
• Brexit (2 clips)
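Tallies of this kind are easy to reproduce: if each recorded clip is tagged with the themes it exhibits (a clip can carry more than one tag, which is why the counts above sum to more than the number of clips watched), a few lines of Python produce the breakdown. The log entries below are invented stand-ins, not my actual viewing record.

```python
from collections import Counter

# Illustrative fragment of a viewing log: each clip is tagged with the
# themes it exhibits (one clip can carry several tags).
viewing_log = [
    {"clip": "The View on the pandemic", "themes": ["anti-trump", "coronavirus"]},
    {"clip": "Vox explainer", "themes": ["left-wing media", "coronavirus"]},
    {"clip": "CNN panel", "themes": ["left-wing media", "anti-trump"]},
]

theme_counts = Counter(t for entry in viewing_log for t in entry["themes"])
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} clips")
```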

At several points the algorithms restricted my options, limiting what I could see and choose for a period of time. This happened in particular when I selected the first of the Vox Media clips, which left me ‘stuck’ with only Vox choices for another 12 selections.
To change the options of the algorithm, I purposely selected a video clip that would create a new direction. This worked, and I was able to ‘escape’ the Vox loop and move on to content created by other organisations, although still within left-wing, liberal media.

On reflection, it seems that there was certainly a loop of information, with the algorithms directing me to clips with very similar themes and information. At no time was I directed towards media such as Fox News or other right-wing media groups.

It is clear that with this type of ‘algorithmic power’ or ‘algorithmic governance’, there is a threat of algorithms giving rise to confirmation bias in users. This is highly problematic, particularly in a society that is extremely polarised in political opinion. How can society solve problems if it is unable to see the other side of an argument objectively? If algorithms do not show me alternative political opinion, how will I ever be able to understand opposing perspectives? This type of algorithmic echo chamber is therefore very dangerous. This has congruence with the view put forward by Rob Kitchin, who states that “Far from being neutral in nature, algorithms construct and implement regimes of power and knowledge…. Algorithms are used to seduce, coerce… regulate and control: to guide and reshape how people… and objects interact with and pass through various systems… ” (Kitchin, 2017, 19).

Ethical Issues – There are certainly some ethical issues to consider. For instance, having seen my list of viewed videos, there would be very little doubt as to which end of the political spectrum I belonged. Could this data be misused or manipulated? Do my political affiliations no longer hold the same degree of privacy as they did in the past, now that such data is widely available to large companies and organisations?

Another ethical consideration is how to disentangle the private from the professional. As a user of YouTube both at home and at work, it is important to ensure that the algorithms do not unnecessarily reveal private data and personal preferences in a professional setting, and so appropriate logins should be used in each setting.

References
Kitchin, R. (2017) Thinking Critically about Researching Algorithms. Information, Communication & Society, 20:1, 14-29, DOI: 10.1080/1369118X.2016.1154087

Week 9 Summary – Algorithmic Bias

David Beer raises the issue of the decision-making power of algorithms and identifies the need to understand how algorithms shape organisational, institutional, commercial and governmental decision making (Beer, 2017). There are criticisms of those who hold the view of algorithms as ‘guarantors of objectivity, authority and efficiency’, with others arguing that, because algorithms are created by humans, they embed layers of social and political bias into their code, resulting in decisions that are neither benign nor neutral. Furthermore, these “decisions hold all types of values, many of which openly promote racism, sexism and false notions of meritocracy” (Noble, 2018). As such, ‘algorithms produce worlds rather than objectively account for them’, and are considered manifestations of power (Knox, 2015).

It was this notion of algorithmic bias that drove my inquiry in this week’s section of the lifestream blog, and there was no shortage of social media commentary on the issue. Cathy O’Neil identifies this in her YouTube clip, claiming that algorithms are not objective but merely ‘opinions embedded into math’. Perhaps most interesting was the work of Joy Buolamwini, whose investigation of artificial intelligence facial recognition software has unearthed inherently racist and sexist elements introduced by its developers.

To what extent are racist values embedded in algorithms?
Joy Buolamwini has carried out extensive research on how facial recognition algorithms fail to recognise black women – Click the image for more detail
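The core of Buolamwini’s approach is disaggregated evaluation: measuring a classifier’s accuracy separately for each demographic group rather than in aggregate, which is where the failures surface. A minimal sketch of that idea follows; the figures are invented for illustration and are not her published results.

```python
# Sketch of disaggregated evaluation: overall accuracy can look
# tolerable while one subgroup is failed badly.
# All figures are invented for illustration.
results = [
    # (subgroup, was the classification correct?)
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

by_group = {}
for group, correct in results:
    by_group.setdefault(group, []).append(correct)

overall = sum(correct for _, correct in results) / len(results)
print(f"overall: {overall:.0%}")  # ~62% - looks passable in aggregate
for group, outcomes in by_group.items():
    print(f"{group}: {sum(outcomes) / len(outcomes):.0%}")  # 100% vs 25%
```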

However, where does this notion of algorithmic bias intersect with education, and what type of educational landscape will the algorithms produce? With the rise of anti-plagiarism software, and the growth of intelligent teaching and learning platforms such as Century Tech, many educators fear an incremental dependency on algorithms within schools and colleges, particularly for assessment. This is certainly not without difficulties or tension. Ben Williamson claims that many studies have highlighted inaccuracies in the Turnitin software, which many institutions use to cross-check student work, incorrectly branding some students as cheats whilst missing other, very clear instances of plagiarism. This ultimately breeds distrust between youngsters and their educators, breaking down relationships as the use of technology, and dependency on algorithms, increases. How else will students and teachers be negatively impacted by algorithmic biases (or errors) and, as dependency on these tools continues to grow, will educators be able even to identify when this happens, let alone mitigate it?
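Text-matching tools of this kind generally work by comparing overlapping word sequences (n-grams) between a submission and a source corpus, flagging work whose overlap crosses a threshold. Turnitin’s actual pipeline is proprietary, so the sketch below is only a generic illustration of why threshold-based matching misfires in both directions: quoted or stock phrasing scores high, while light paraphrase scores near zero.

```python
# Generic n-gram overlap check - NOT Turnitin's proprietary algorithm.
# Illustrates why threshold matching can misfire in both directions.

def ngrams(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission: str, source: str) -> float:
    """Fraction of the submission's n-grams that appear in the source."""
    a, b = ngrams(submission), ngrams(source)
    return len(a & b) / len(a) if a else 0.0

source = "the treaty of versailles imposed heavy reparations on germany"
honest = "as historians note the treaty of versailles imposed heavy reparations on germany"
paraphrased = "germany was forced to pay large reparations under the versailles settlement"

print(overlap(honest, source))       # high - quoted text flagged as 'cheating'
print(overlap(paraphrased, source))  # near zero - real copying slips through
```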

References

  • Beer, D. (2017) The social power of algorithms, Information, Communication & Society, 20:1, 1-13, DOI: 10.1080/1369118X.2016.1216147
  • Knox, J. (2015) Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI: 10.1007/978-981-287-532-7_124-1
  • Noble, S. (2018) Algorithms of Oppression, NYU Press, New York.
  • Williamson, B. (2019) Automating mistrust. Code Acts in Education.

Week 8 Summary – How Algorithms Shape Our Lives

The origins of the word algorithm – click for a great BBC clip

The TEDx presentation by Kevin Slavin in this week’s lifestream argues that we “need to rethink a little bit about the role of contemporary math… its transition from being something we extract and derive from the world to something that actually starts to shape it – the world around us and the world inside us.” This has congruence with the themes explored in the core reading, which posits that in recent years algorithms have become “increasingly involved in the arranging, cataloguing and ranking of people, places and knowledge… They are becoming increasingly ubiquitous actors in the global economy, as well as our social and material worlds.” (Knox, 2015). In essence, algorithms are now major actors in contemporary human society and culture.

On personal reflection, it is evident that algorithms are highly influential in my own life, and are certainly shaping my everyday thoughts and actions. I need only consider my Netflix recommendations to see tangible evidence of how an algorithm can shape day-to-day decision making. This was summarised in both news articles in the lifestream, each of which explored the incredible power of major organisations such as Amazon and the impact they have had, and continue to have, on contemporary culture. As the Observer article recognises, this provides these companies with tremendous power, and raises the question of algorithmic objectivity. Are automated processes completely free of biases, or are they, as many would suggest, enmeshed with corporate or political biases?
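Netflix’s and Amazon’s production systems are far more elaborate, but the core mechanic behind ‘people who liked X also liked Y’ can be sketched as item-to-item similarity over user ratings. The tiny rating matrix below is invented purely for illustration.

```python
import math

# Toy item-to-item recommendation - invented ratings, not Netflix's system.
# Each row: user -> {title: rating}.
ratings = {
    "ann":    {"The Crown": 5, "House of Cards": 4, "Bake Off": 1},
    "bashir": {"The Crown": 4, "House of Cards": 5},
    "carla":  {"Bake Off": 5, "The Crown": 1},
}

def item_vector(title):
    """Ratings for a title across all users (0 where unrated)."""
    return [user.get(title, 0) for user in ratings.values()]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Viewers who rated The Crown highly also rated House of Cards highly,
# so its similarity score (and hence its recommendation rank) is strong.
print(cosine(item_vector("The Crown"), item_vector("House of Cards")))  # ~0.96
print(cosine(item_vector("The Crown"), item_vector("Bake Off")))        # ~0.30
```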

Knox, J. (2015) Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI 10.1007/978-981-287-532-7_124-1

Week 7 Summary – Results of the Digital Ethnography

On completion of my ethnographic study, I have uncovered, in one particular discussion thread of my chosen MOOC, a certain degree of ‘shared value’ – a key component of online community stipulated by Mark Willis in his TEDx talk. This was evident in the commonality and repetitive nature of the language used throughout the thread, from post to post. The visual results are shown in the word cloud in the attached ThingLink, along with commentary on the methodology and results.
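The word cloud itself is simply a visualisation of token frequency: strip common stop words, count what remains, and the most repeated terms surface as shared language. A minimal sketch of the counting step is below, with invented forum posts standing in for the real thread.

```python
import re
from collections import Counter

# Invented forum posts standing in for the real MOOC thread.
posts = [
    "The survivor testimony in week two was deeply moving.",
    "I agree, the testimony gave the statistics a human face.",
    "Reading the testimony alongside the lecture was very moving.",
]

STOP_WORDS = {"the", "a", "in", "was", "i", "very", "and"}

words = Counter(
    w for post in posts
    for w in re.findall(r"[a-z']+", post.lower())
    if w not in STOP_WORDS
)
print(words.most_common(3))  # repeated terms signal shared language
# e.g. [('testimony', 3), ('moving', 2), ...]
```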

Some key reflections: In carrying out this study I was frustrated by the fact that I encountered far more limitations in the digital ethnography than anticipated. Firstly, finding a MOOC discussion forum that engendered enough dialogue for a study to take place was perhaps the biggest challenge. And once this was finally done, immersing myself fully in the MOOC discussion was not always possible due to time constraints; as such, I found myself acting as a passive observer rather than an active participant. Furthermore, the qualitative nature of the results made them difficult to analyse and draw precise conclusions from. This is likely because the study was small in scale. Had it been scaled up and carried out over a longer period of time, a clearer perspective on community and shared values could have been extrapolated from the results. Consequently, I do not feel that the results of my ethnography give a true representation of the community culture that exists within the MOOC forum, but merely a tiny fragment of what it may be.

Week 6 Summary – Conducting an Ethnography

Some suggestions for conducting an ethnography – Click on the image

Christine Hine argues that ethnography is ‘a methodology that offers little in the way of prescription to its practitioners and has no formula for judging the accuracy of its results’ (Hine, 2000, 3). To someone such as myself, having never carried out an ethnographic study before, these words are not particularly reassuring. However, as I have been working towards the completion of my digital ethnography, I have been buoyed by the abundant presence of online advice regarding the skills, motivations and qualities required to conduct a study of this nature. Consequently, this week’s lifestream blog has mostly crystallised around these pockets of guidance. ‘How to do Ethnography’ by the Visual Communication Guy (VCG) was particularly useful in establishing eight clear steps for conducting ethnographic research, although, as the scope of our digital ethnography is limited and small-scale, some of those steps (e.g. Steps 6 and 7) are not pertinent. The YouTube video of Professor Sienna Craig outlining ethical considerations was also very helpful in establishing some of the ethical parameters I should be considering as I continue to observe, and be a part of, my online community.

However, it was the YouTube videos by Robert Kozinets and Daniel Miller that were most pivotal in helping to bridge the gap between the ethnographic aspect of the study and the world of digital education. As Miller states, the study of anthropology, in its desire to understand people, is essentially the most appropriate crucible for understanding the digital world.

References
Hine, C. (2000) The virtual objects of ethnography, in Hine, C. Virtual Ethnography, pp. 41 – 66, London: Sage

Week 5 Summary – Liberating MOOCs, redefining education

As I delved further into the study of my chosen MOOC, my social media postings in the lifestream reflected a growing curiosity about the possibility that massive open learning of this nature offers a new route to formal academic certification. Indeed, reflecting this paradigm shift, Bayne et al. (2019) argue that “the open education movement has predominantly framed its mission in terms of ‘freedom from’, characterising educational institutions as rigid, antiquated, inaccessible and ultimately ‘closed’, in opposition to which the open movement is cast as a disruptive liberation” (Bayne et al., 2019, 50). This has congruence with the TEDx YouTube talk by Jonathan Schaeffer, who discusses the disruptive effect that MOOCs have on traditional learning in tertiary education, and examines the manner in which this route to formal education could be actualised.

This question of ‘disruptive liberation’ is further examined in the BBC Sounds podcasts, which ask ‘could these new free online courses open higher education to parts of the world in a way that’s been unthinkable up until now, or are MOOCs an experiment that could destroy centuries of tradition?’ In the second of the two podcasts, Measuring MOOCs by Science AAAS, quantifiable measures are shared to demonstrate how disruptive and liberating MOOCs can actually be. By sharing some of the enrolment figures for the popular Introduction to Computer Science MOOC at the Massachusetts Institute of Technology (an impressive 350k), it is possible to understand the power that MOOCs have in creating ‘freedom from’ the traditional institution.


References
Bayne, S., Evans, P., Ewins, R., Knox, J., Lamb, J., Macleod, H., O’Shea, C., Ross, J., Sheail, P., Sinclair, C. (2019 DRAFT). The Manifesto for Teaching Online.

Week 4 Summary – Choosing a MOOC and thinking about the digital ethnography

The term MOOC was first coined in 2008. Stanford University offered its first MOOC in 2011, entitled ‘An Introduction to Artificial Intelligence’; over 160k individuals enrolled, with 20,000 passing the course (a completion rate of around 12.5 per cent).

Choosing a MOOC – perhaps easier said than done! After initially opting for ‘An Age of Sustainable Development‘ through edX, I soon discovered that the online community within this particular MOOC was significantly lacking in dialogue in the discussion forums. As an alternative, I decided to enrol in Holocaust: The Final Solution with Coursera. As a teacher of secondary history, I was keen to further my own professional understanding of this topic, in parallel with developing my digital ethnography for this task.

My lifestream additions this week have focused on the broader theme of online community and how this differs from a corporeal community, as well as the development of collective intelligence within wiki sites. A common thread throughout the blog entries has been how to make an online community successful when there are so many challenges and roadblocks that can hinder success. Anonymity, intangibility and falsification were all highlighted as potential barriers and points of tension. Some of these entries touched on what was outlined by Lister (2009), who identified the difficulty of creating community online when ‘participants are there but not there, in touch but never touching, as deeply connected as they are profoundly alienated’ (Lister, 2009, 209). Mark Willis’ TEDx presentation, however, provides ways in which real online community can be achieved. He posits that when four criteria are met – longevity, shared values, community management and trust – then the group can truly be deemed a ‘community’.

As I progress in the ethnographic study of this MOOC, I am keen to examine whether my chosen course meets Willis’ four-point criteria for community culture, particularly that of shared values. This is further strengthened by Saadatdoost et al. (2014), who state that “culture components include shared beliefs, values, perspectives and practices.” I am therefore hoping to investigate how far the participants in this MOOC share values and, if so, what these are. How much do these values provide cohesion within and across the discussion forums? Are there tensions that arise due to the presence of disparate, incompatible values? And if so, how are these tensions defused? Lots of interesting questions to take with me as I move forward with my chosen MOOC.

Lister, M. (2009) ‘Networks, users and economics’ in New media: a critical introduction, pp163 – 236, London: Routledge

Saadatdoost, R., Sim, A., Mittal, N., Jafarkarimi, H. & Mei Hee, J. (2014) ‘A netnography study of MOOC Community’, PACIS 2014 Proceedings. 116. http://aisel.aisnet.org/pacis2014/116.

Week 3 Summary – Biohacking and Transhumanism

There are a growing number of Transhumanist societies. Humanity+ is one such organisation; it has over 6,000 members in over 100 countries

This week my interest was piqued by the unusual practice of biohacking, and how this has contributed to the transhumanism discussion. Newton Lee, in The Transhumanism Handbook, defines a transhumanist as anyone who is “using science and technology to enhance or alter our body chemistry in order to stay healthy and be more in control of our lives” (Lee, 2019, 5). The YouTube clips and tweets on biohacking from this week’s lifestream were certainly representative of this view, particularly in respect of the idea of greater control, which was a predominant theme throughout most of the social media on this topic. As individuals merge their bodies with technology, ranging from RFID chips inserted into hands as a replacement for contactless debit cards, to more radical cases such as Tim Cannon’s (rather crude) forearm implant that records biometric data from his body, there was consensus amongst all biohackers: these experimental modifications had given them greater agency and control over their individual lives.

There is little doubt that biohacking is a growing trend, and the discussion on the BBC Sounds podcast, as well as the interviews with Michael Laufer and Eric Matzer from this week’s YouTube clips, supports this view. However, many critics of biohacking argue that its growth will ultimately be limited to a niche subculture, and that the movement is unlikely to gain enough traction to become mainstream. Medical ethics and opposition on religious grounds will ultimately curb the movement and limit its potential to grow beyond the very curious.

Conversely, proponents of the movement claim that biohackers are extropians of human change and that by pushing these boundaries today, they are catalysing an inevitable movement towards posthumanism. However, this does raise some very important questions. If this transhumanist movement is an inevitability for the 21st century, what ethical issues must be considered as we progress down this road of human change? Should there be interventions to regulate the range of practices this encompasses and, if so, what should that regulation look like? And if this were to happen, how long will it take before we start to see the appearance of modification clinics on the high street, offering biohacker-esque body augmentations to a mainstream market? Biohackers would certainly argue that this will be sooner, rather than later.

References

  • Lee, N. (2019) ‘Brave New World of Transhumanism’ in The Transhumanism Handbook, Springer: Switzerland, p5.
  • ‘Transhumanism’ (10 February 2020) Wikipedia, available at https://en.wikipedia.org/wiki/Transhumanism (Accessed: 10 February 2020)

Week 2 Summary: Embodiment Relations and mobile technology

Just over half of children in the United States — 53 percent — now own a smartphone by the age of 11. And 84 percent of teenagers now have their own phones, immersing themselves in a rich and complex world of experiences that adults sometimes need a lot of decoding to understand (npr.org)

This week I was intrigued by the concept of embodiment, as it brought to mind some of the school students that I teach. A New Hope questioned ‘at what point do our bodies begin and end. How do we define our most intimate borders?’ This has congruence with what Miller defines as an embodiment relationship, in that “when technologies are being used, the tool and the user become one” and the object becomes “part of the body image and overall identity of the person” (Miller, 2011, 219). Vincent delves further into the theory of embodiment relations by examining the intimate relationship that many individuals have with their mobile devices. Citing the work of Richardson (2007), he outlines how the close proximity of mobile phones to the body, and the manner in which they connect to a number of sensory functions, creates a much more powerful connection to humans than any other type of technology we use.

This concept had a significant influence on this week’s lifestream, and I identified some YouTube clips that explored our increasingly complex relationship with mobiles, and how smartphone dependency has become a rapidly growing epidemic. I was particularly interested in the article I tweeted from Psychology Today, which argued that the attachment of a young person to their mobile phone is akin to the relationship a child has with a teddy bear. I was further intrigued by the TEDx talks from Jeff Butler and Anastacia Dedykina, who respectively delved into how mobile phones change the way we think, and whether we could live without them.

In my school this is a particular concern of mine, and despite the existence of a ‘silent and invisible’ mobile phone policy, I see youngsters walking around our campus carrying mobile phones as if the device were an appendage to their limbs. There is no doubt that these youngsters have a deeply intimate relationship with their mobiles, and any suggestion of their removal can often lead to anxiety, and in some cases despair. As Vincent argues, the devices are very clearly an extension of themselves, and the social platforms they access are reflections of their identity and self. To forcibly remove the technology would therefore be tantamount to a technological amputation.

However, the question remains as to how much this increasingly symbiotic relationship between humans and mobile technology will actually contribute to human development. Does the embodiment relationship enhance our ability to grow into more advanced versions of humanity, or does it desecrate humanity and stymie its potential to flourish?

Miller, V. (2011) ‘The Body and Information Technology’, in Miller, V. Understanding Digital Culture, pp. 207 – 223, London: Sage

Week 1 Summary – Thinking about cybernetics

About 12,000 individuals in the UK (including my father) wear a cochlear implant. That’s 12,000 cyborgs to EDC students!

Upon reading The Body and Information Technology (Miller, 2011), my interest was piqued by the manner in which the ‘cyborg’ was represented. The terminology, as a result of popular culture and dystopian notions of cybernetics, has often been framed as something to fear, with the term imbued with pejorative connotations. Citing Gray et al.’s idea that technology can be used as a means to restore, normalise, enhance and reconfigure the human body, Miller shows it is possible to view the notion of the cyborg through an entirely different prism. Ten years ago my father was lucky enough to receive a cochlear implant on the National Health Service after years of degeneration in his hearing. Prior to the operation his hearing had diminished to such low levels that he was essentially severely deaf. The implant to restore his hearing was life-changing, and this ‘normalising’ technology significantly improved the quality of my father’s life. His hearing was restored to such a level that he was able once again to hear the sound of a spoon clinking against the side of a mug as he stirred sugar into his tea – an everyday noise he had not heard for years. Until I read the core paper I had never viewed my father as a ‘cyborg’, but Miller has certainly put forward a reasonable case that has helped realign my perspective on this. Imagining cyborgs as individuals who have benefited from technology to improve the quality of their lives, rather than through the traditional view often put forward in science fiction, establishes a more positive framework for understanding the complex relationship between human and machine. This was very influential in my lifestream this week, with my inaugural tweet about cochlear implants and cybernetics from the Ear Institute.

That said, there are possible ethical concerns on the horizon with this technology. As auditory cybernetics have developed over the past decade, my father’s device has become increasingly connected to the digital world. He can now connect his cochlear implant via Bluetooth and attune the device to his mobile phone, laptop and television. This has led me to wonder whether, in the not-too-distant future, humans who do not suffer from acute deafness will choose to voluntarily implant the technology in order to enhance their connectivity to digital environments. This of course raises a gamut of ethical concerns over the nature of voluntary augmentations to the human body. Is this something that should be prevented from happening? And if so, can it be stopped?

Miller, V. (2011) “The Body and Information Technology”, from Miller, V. Understanding Digital Culture pp. 207 – 223, London: Sage.