Michael saved in Pocket: ‘The politics of artificial intelligence: an interview with Louise Amoore’ (Open Democracy, Louise Amoore and Krystian Woznicki, 2018)

Excerpt (Louise Amoore)

‘…it is worth reflecting on what one means by ‘self learning’ in the context of algorithms. As algorithms such as deep neural nets and random forests become deployed in border controls, in one sense they do self-learn because they are exposed to a corpus of data (for example on past travel) from which they generate clusters of shared attributes. When people say that these algorithms ‘detect patterns’, this is what they mean really – that the algorithms group the data according to the presence or absence of particular features in the data.

Where we do need to be careful with the idea of ‘self learning’, though, is that this is in no sense fully autonomous. The learning involves many other interactions, for example with the humans who select or label the training data from which the algorithms learn, with others who move the threshold of the sensitivity of the algorithm (recalibrating false positives and false negatives at the border), and indeed interactions with other algorithms such as biometric models.’

View full article
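Amoore's point about humans 'who move the threshold of the sensitivity of the algorithm' can be made concrete with a toy sketch. Everything below is invented for illustration (hypothetical scores and labels, not any real border-control system): a model emits risk scores, and a human-chosen cut-off decides who gets flagged, trading false negatives against false positives.

```python
# Toy illustration: a classifier outputs 'risk' scores, and a
# human-chosen threshold decides which cases are flagged.
# Scores and labels are invented purely for illustration.

scores = [0.10, 0.35, 0.55, 0.62, 0.80, 0.91]   # model's risk scores
truth  = [0,    0,    1,    0,    1,    1   ]   # 1 = genuinely 'risky' (toy labels)

def confusion(threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, t in zip(scores, truth) if s >= threshold and t == 0)
    fn = sum(1 for s, t in zip(scores, truth) if s < threshold and t == 1)
    return fp, fn

# Moving the threshold recalibrates the trade-off Amoore describes:
for thr in (0.3, 0.6, 0.9):
    fp, fn = confusion(thr)
    print(f"threshold={thr}: false positives={fp}, false negatives={fn}")
```

Lowering the threshold flags more people (more false positives, fewer false negatives); raising it does the reverse. The point of the sketch is that this recalibration is a human judgement, not something the algorithm learns 'by itself'.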

Michael commented on Susan’s lifestream – Week 2 Summary – enhancement & (dis)embodiment

Week 2 Summary – enhancement & (dis)embodiment

Michael Wolfindale:

Great summary and fascinating points!

Reflecting specifically on the idea of ‘distributed cognition’, and what this might mean for education, led me to an article in which Hayles (2008) discusses the idea in the context of ‘slippingglimpse’, a verbal-visual collaboration involving a videographer, poet and programmer, consisting of videos of moving water associated with scrolling poetic text.

Amongst other things, Hayles (2008: 23) discusses the ‘collision/conjunction of human and non-human cognition’, as well as ‘non-conscious parts of cognition’. One example of the latter might be a musician who has learnt a piece ‘by heart’ and ‘knows the moves in her body better than in her mind’ (I remember the phrase ‘muscle memory’ from piano lessons!).

She also discusses the ‘non-conscious performance of the intelligent machine’ (for example, learning from ‘computed information’), as well as ‘the capacity of artificial evolution for creative invention’ (such as using image-editing software).

Another example is reading, which some describe as ‘a whole-body activity that involves breathing rhythms, kinaesthesia, proprioception, and other unconscious or non-conscious cognitive activities’ (Hayles 2008: 16). The work ‘slippingglimpse’ itself ‘requires and meditates upon multimodal reading as a whole body activity’ (ibid.: 18).

While I am still processing the implications of these ideas for education (particularly the way they complicate individual agency), these examples have certainly been food for thought and helped me to think beyond the Cartesian mind/body dualism!

Michael saved in Pocket: ‘Machines Like Me by Ian McEwan review – intelligent mischief’

Machines Like Me

Excerpt

By a strange twist of fate, I read this book while on a visit to the Falkland Islands, where the British victory over Argentina in the 1982 war feels as though it might have happened last week.

View full article


This book came up during the film festival discussions, and it raises some interesting questions around ‘moral decisions’ being made by ‘machines’. Yet even writing these words brings me back to this quote from Hayles:

‘distributed cognition replaces autonomous will’ (Hayles 1999: 288)

Thinking back to the aircraft/pilot example from Miller (2011: 211), I continue to reflect on this idea of decision-making being distributed across human and non-human actors as I read this dialogue between Albert Borgmann and N. Katherine Hayles on humans and machines.

This is helping me to further deconstruct and question the idea of ‘autonomous will’, the boundaries of the ‘human subject’ and the notion that agency lies with either ‘human’ or ‘machine’, reflecting instead on this concept of distributed cognition.

The end of our first week on cyberculture

Today marks the end of an exciting first week!

Before we started, I set up Twitter, YouTube, SoundCloud and Pocket feeds and shared resources on artificial intelligence and social science, machines and cognition, and posthumanism, as well as a track and article demonstrating music and algorithms. This mix was intended to test out different feeds and to save and share content to revisit later.

I began the first day reflecting on a short clip from Blade Runner, referencing the ‘more human than human’ quote mentioned in the Miller (2011) reading. As I worked through the readings and films, contemplating the figure of the ‘cyborg’ through Haraway, I reconsidered my assumptions about the boundaries between ‘human’ and ‘machine’. This theme kept cropping up, both in the Voight-Kampff test in Blade Runner and during a Twitter exchange about ‘testing’ for a ‘human’.

This ‘human’/‘machine’ boundary was just one assumption I found myself deconstructing, encouraged by Sterne (2006: 24) to question, examine and reclassify categories and boundaries and avoid importing existing biases. Thinking about ‘feedback loops’ and questioning the ‘boundaries of the autonomous subject’ (Hayles 1999: 2) brought me to this video, and inspired my header image (the Mandelbrot set), a visualisation created through feedback and iteration. I went on to explore posthumanism through videos and readings from Braidotti and Hayles, reconsidering my ideas about autonomous will and the neutrality of the term ‘human’.
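The ‘feedback and iteration’ behind that header image can be sketched in a few lines: each point c is fed back through z → z² + c, and whether it belongs to the Mandelbrot set depends on whether the iteration stays bounded. This is a minimal sketch of the idea, not the renderer used to produce the actual image.

```python
# Minimal sketch of the feedback loop behind the Mandelbrot set:
# feed z back into z*z + c and see whether it stays bounded.

def escapes(c, max_iter=50):
    """Return the iteration at which |z| exceeds 2, or None if bounded."""
    z = 0
    for i in range(max_iter):
        z = z * z + c          # the feedback: output becomes next input
        if abs(z) > 2:         # once |z| > 2 the orbit diverges
            return i
    return None                # still bounded: treated as 'in the set'

print(escapes(0))    # stays at 0 forever: in the set
print(escapes(1))    # grows without bound: escapes quickly
print(escapes(-1))   # cycles between 0 and -1: bounded
```

It is this loop, run for every point of the complex plane, that produces the intricate boundary of the set: a nice image of how simple feedback generates something irreducible to any single step.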

My journey this week had tangents (including the #AlgorithmsForHer conference) which I hope to revisit. I’ve sketched out my journey below…

EDC week 1

View references