As we end our first block on cyberculture, it continues to strike me how many ideas about technology and education appear rooted in dualisms which tend to centre (a certain kind of) ‘human’, whilst othering the ‘digital’ (Knox 2015).
What kind of ‘human’, however, influences the design of ‘artificial intelligence’, and what assumptions may be baked into the algorithms that influence the choice of content we include in our lifestreams? Does this reproduce existing biases or privilege a certain view of ‘human’ ‘intelligence’? What might be the implications for education and learning analytics?
If ‘machines’ can ‘learn’, does responsibility still lie with the programmer? If ‘distributed cognition replaces autonomous will’ (Hayles 1999: 288), should we instead think in terms of ‘cognitive assemblages’ and ‘nonconscious cognition’? Reflecting on this, I found slippingglimpse (Hayles 2008) a striking example of distributed cognition.
I continued this week to consider how technology is often visualised as a ‘tool’ or ‘enhancement’ (‘Ping Body’, Stelarc). Moving beyond technology-‘enhanced’ learning (Bayne 2015a) and towards a critical posthumanist view, can we imagine education in which the human subject is neither separate nor central, but in which human and non-human are entangled in a ‘creative “gathering”’ (Bayne 2015b)? How might we visualise this?