What transpires in the unmediated space-time excess that moves, at once, between and alongside cognition and recognition, between and alongside formation and information, between and alongside prehension and comprehension? Following on from their most recent books, N. Katherine Hayles's Unthought: The Power of the Cognitive Nonconscious (University of Chicago Press, 2017) and Tony D. Sampson's The Assemblage Brain: Sense Making in Neuroculture (University of Minnesota Press, 2016), the convergences and divergences that emerge and weave throughout this conversation are quite revealing.
I continue to consider the lines that often seem drawn between education/technology, ‘human’/‘machine’, conscious/nonconscious and so on…
An article I shared previously asked ‘How do machines think?’ Yet, from a critical posthumanist perspective, what does it mean ‘to think’?
I reflect on this question whilst exploring the ideas of ‘cognitive assemblages’ and ‘nonconscious cognition’ in this discussion between N. Katherine Hayles and Tony D. Sampson…
Illustration: Zbyněk Baladrán
Reading N. Katherine Hayles’ Unthought (University of Chicago Press, 2017), I’m struck by her notion of ‘cognitive assemblages’ to describe human-technical interaction, which she presents as fully imbricated. I wonder if the women and men whose careers in technology-driven work contexts we are exploring in Nordwit understand themselves as cognitive assemblages. In Hayles’ work agency is distributed, as are many other things, such as responsibility – but do our research participants think of themselves in that way? The people I have interviewed in the context of Digital Humanities tend to take a rather instrumentalist view of technology, and we might want to ask what difference it makes if you understand yourself as a ‘cognitive assemblage’, as someone who makes use of technology – or, as academics can often feel, as a ‘victim’ of technology (the Skype in my office isn’t working, we’re unable to project images, etc.).
1) Assuming that ‘human’ is neither an objective nor an inclusive term (Braidotti 2013: 26), how might this affect how we think about ‘artificial intelligence’, power and agency?
2) If we take a ‘dynamic partnership between humans and intelligent machines’ (Hayles 1999: 288) as a point of departure, how might we consider concepts such as consciousness, (distributed) cognition and agency?
3) Can machines make ‘moral’ decisions?
4) Building on a discussion about gender and ‘virtual’ identities, are we ‘performing’ or is it ‘performative’? Should there be a distinction between ‘real’/‘virtual’ here, and how do we define ‘real’? (The Matrix comes to mind here…) How might this play out in our identities on Twitter, lifestream-blogs etc.?
5) Thinking beyond assumptions that the ‘human’ is at the centre of education, and technology is a ‘tool’ or ‘enhancement’, what are the implications of a complex entanglement of education and technology (Bayne 2015: 18) for this course?
Many discussions were via Twitter, drawing in questions from the public:
Great thread! Can i throw in another q? Where would AI forms sit? Perhaps advanced AI life forms? Do we need to consider agency?
— Louise Connelly (@lconnelly09) January 23, 2020
Following on from last week’s map, I have opened new and revisited old avenues:
I have also experimented with visualisations of my feed ahead of our visual artefact task…
Following my first film review, on A New Hope, here is a second, shorter post on The Cyborg, inspired by a theme (fearing technology) from Matthew Taylor’s review of the same film:
The Cyborg touches on many of the themes we have been exploring; however, one in particular struck me on rewatching it this week after a Twitter exchange: how should we think about agency with regard to technology (for example, around the issues of fear and control), and should we even consider things in this way?
Great questions! 🤔 Thinking about AI and agency brings to mind this from Hayles (1999: 288): ‘In the posthuman view…conscious agency has never been “in control”…distributed cognition replaces autonomous will…’ #mscedc (1/2)
— Michael Wolfindale (@mwolfindale) January 23, 2020
In our contemporary moment, when machine learning algorithms are reshaping many aspects of society, the work of N. Katherine Hayles stands as a powerful corpus for understanding what is at stake in a new regime of computation. A renowned literary theorist whose work bridges the humanities and sciences, Hayles has, among her many works, detailed ways to think about embodiment in an age of virtuality (How We Became Posthuman, 1999), how code as performative practice is located (My Mother Was a Computer, 2005), and the reciprocal relations among human bodies and technics (How We Think, 2012). This special issue follows the 2017 publication of her book Unthought: The Power of the Cognitive Nonconscious, in which Hayles traces the nonconscious cognition of biological life-forms and computational media. The articles in the special issue respond in different ways to Hayles’ oeuvre, mapping the specific contours of computational regimes and developing some of the ‘inflection points’ she advocates in the deep engagement with technical systems.
This article, from a Theory, Culture and Society special issue on Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles, relates to some of the articles and videos I have recently shared on ‘machines’ and cognition (particularly around this idea of ‘nonconscious cognition’). I’m also saving it here as it will no doubt be relevant to our later block on algorithmic cultures!
In November 2019, Leon Kowalski found himself in the offices of a large corporation in Los Angeles, answering some odd questions. “You’re in a desert. You look down and you see a tortoise…” When the questioner moved on to ask about his mother, things didn’t end well.