As we end our first block on cyberculture, it continues to strike me how many ideas about technology and education appear rooted in dualisms which tend to centre (a certain kind of) ‘human’, whilst othering the ‘digital’ (Knox 2015).
What kind of ‘human’, however, influences the design of ‘artificial intelligence’, and what assumptions may be baked into the algorithms that influence the choice of content we include in our lifestreams? Does this reproduce existing biases or privilege a certain view of ‘human’ ‘intelligence’? What might be the implications for education and learning analytics?
If ‘machines’ can ‘learn’, does the responsibility still lie with the programmer? If ‘distributed cognition replaces autonomous will’ (Hayles 1999: 288), should we instead think in terms of ‘cognitive assemblages’ and ‘nonconscious cognition’? Reflecting on this, I found an example of distributed cognition in slippingglimpse (Hayles 2008).
I continued this week to consider how technology is often visualised as a ‘tool’ or ‘enhancement’ (‘Ping Body’, Stelarc). Moving beyond technology ‘enhanced’ learning (Bayne 2015a) and towards a critical posthumanist view, can we imagine a view of education where the human subject is neither separate nor central, but where the human and non-human are entangled in a ‘creative “gathering”’ (Bayne 2015b)? How might we visualise this?
Finally, as use of the ‘cyber’ prefix has declined (Knox 2015), how might we think about the ‘digital’? What might a ‘postdigital’ perspective mean for education (Knox 2019)? I continue to explore…
‘it continues to strike me how many ideas about technology and education appear rooted in dualisms which tend to centre (a certain kind of) “human”, whilst othering the “digital”’
Yes, I think this is one of the key ideas we can take forward from the block: a quite problematic exceptionality of the human (self-directing, autonomous) that seems ingrained in education particularly. Questioning ‘humanness’ is long established in other disciplines, but education seems less inclined to move beyond particular views of the relationships between people and technologies.
‘What kind of “human”, however, influences the design of “artificial intelligence”’
Indeed. This is often down to the datasets used to train machine learning systems. The AI Now Institute has recently announced a specific project on this: https://ainowinstitute.org/announcements/announcing-ai-nows-data-genesis-program.html
I think this is a key part of where we might ‘locate’ bias, and it extends to educational contexts: what kind of training data is used for learning analytics?
‘should we instead think in terms of ‘cognitive assemblages’ and ‘nonconscious cognition’?’
This could be a way to connect with block 2: rather than relying on the somewhat uncritical notions of ‘community’ that accompany the promotion of ‘web 2.0’ and *social* media in education, are there other concepts that might define the ways people and technologies relate? A ‘creative gathering’?
Thank you, Jeremy!
> This could be a way to connect with block 2: rather than relying on the somewhat uncritical notions of ‘community’ that accompany the promotion of ‘web 2.0’ and *social* media in education, are there other concepts that might define the ways people and technologies relate? A ‘creative gathering’?
I have been pondering this, and contemplating the concept of agential realism (Barad 2003: 828) – where ‘there is no…exterior observational point’ and ‘we are part of the world in its ongoing intra-activity’.
I hope this will help me to reconsider my assumptions about how people and technologies relate as I begin my micro-ethnography!