Disentanglement from my lifestream: wrapping up algorithmic cultures and EDC 2020

‘Entanglement’ (ellen x silverberg, Flickr)

As I disentangle myself from my lifestream feeds, and reflect on the course, I consider how I have perceived and been influenced by the algorithmic systems involved.

Google and Twitter were consistent influences, the latter through new and existing connections and via #mscedc, #AlgorithmsForHer and #ds106, and I saved/favourited (often highly ranked) resources to Pocket, YouTube and SoundCloud (and other feeds).

While I had some awareness of these algorithms, alterations to my perception of the ‘notion of an algorithm’ (Beer 2017: 7) have shaped my behaviour. Believing I “understand” how Google “works”, reading about the Twitter algorithm and reflecting on ranking/ordering have altered my perceptions, and reading about ‘learning as “nudging”’ (Knox et al. 2020: 38) made me think twice before accepting the limiting recommendations presented to me.

Referring to the readings, these algorithmic operations are interwoven with, and cannot be separated from, the social context, in terms of commercial interests involved in their design and production, how they are ‘lived with’ and the way this recursively informs their design (Beer 2017: 4). Furthermore, our identities shape media but media also shapes our identities (Pariser 2011). Since ‘there are people behind big data’ (Williamson 2017: x-xi), I am keen to ‘unpack the full socio-technical assemblage’ (Kitchin 2017: 25), uncover ideologies, commercial and political agendas (Williamson 2017: 3) and understand the ‘algorithmic life’ (Amoore and Piotukh 2015) and ‘algorithmic culture’ (Striphas 2015) involved.

During my ‘algorithmic play’ with Coursera, its “transformational” “learning experiences” and self-directed predefined ‘learning plans’ perhaps exemplify Biesta’s (2005) ‘learnification’. Since ‘algorithms are inevitably modelled on visions of the social world’ (Beer 2017: 4), suggesting education needs “transforming” and (implied through Coursera’s dominance of “tech courses”) ‘the solution is in the hands of software developers’ (Williamson 2017: 3) exposes a ‘technological solutionism’ (Morozov 2013) and Californian ideology (Barbrook and Cameron 1995) common to many algorithms entangled in my lifestream. Moreover, these data-intensive practices and interventions, tending towards ‘machine behaviourism’ (Knox et al. 2020), could profoundly shape notions of learning and teaching.

As I consider questions of power with regards to algorithmic systems (Beer 2017: 11) and the possibilities for resistance, educational institutions accept commercial “EdTech solutions” designed to “rescue” them during the coronavirus crisis. This accelerated ‘datafication’ of education, seen in context of wider neoliberal agendas, highlights a growing urgency to critically examine changes to pedagogy, assessment and curriculum (Williamson 2017: 6).

However, issues of authorship, responsibility and agency are complex, for algorithmic systems are works of ‘collective authorship’, ‘massive, networked [boxes] with hundreds of hands reaching into them’ (Seaver 2013: 8-10). As ‘processes of “datafication” continue to expand and…data feeds-back into people’s lives in different ways’ (Kennedy et al. 2015: 4), I return to the concept of ‘feedback loops’ questioning the ‘boundaries of the autonomous subject’ (Hayles 1999: 2). If human-machinic boundaries are blurred and autonomous will is problematic (ibid.: 288), we might consider algorithmic systems/actions in terms of ‘human-machinic cognitive relations’ (Amoore 2019: 7) or ‘cognitive assemblages’ (Hayles 2017), entangled intra-relations seen in context of sociomaterial assemblages and performative in nature (Barad 2007; Introna 2016; Butler 1990) – an ‘entanglement of agencies’ (Knox 2015).

I close with an audio/visual snippet and a soundtrack to my EDC journey…

 

My EDC soundtrack:

My EDC soundtrack cover image


View references

Reflecting back on EDC 2020

As I come to the end of the lifestream blog, I return to questions and aspects I considered early on in the course…

EDC week 3

‘Entanglement’ has been a key theme throughout the course – the entangled ‘boundaries of the autonomous subject’ (Hayles 1999: 2), reconsidering the dualisms and false binaries which have increasingly appeared entangled – human/machine, real/virtual, open/closed, public/private and so on…

Dualisms visual artefact
Dualisms visual artefact

Rather than assume I could remain an impartial observer, I became entangled in my “object” of research – the ds106 communities and networks…

Miro board

Finally, my ‘algorithmic play’ artefact – rather than examining standalone, discrete “black boxes” of code – instead revealed ‘massive, networked ones with hundreds of hands reaching into them’ (Seaver 2013: 10)…

Photo by Federico Beccari on Unsplash.

…multiple codebases entangled with one another, with collective authorship but tangled up with a “Silicon Valley ethos”, commercial interests, neoliberal ideologies and specific notions of “progress”…

‘Algorithmic systems: entanglements of human-machinic relations’

…a messy and complex entanglement of multiple algorithmic systems and human-machinic cognitive relations (Amoore 2019: 7), algorithmic systems with ‘a cultural presence’ (Beer 2017: 11), both modelled on and influenced by ‘visions of the social world’ (ibid.: 4).

Contemplating how we might think about agency in this context encourages me to consider the ‘broader debates about the status of agency as processes of “datafication” continue to expand and as data feeds-back into people’s lives in different ways’ (Kennedy et al. 2015: 4).

This “feeding back” of data into people’s lives in turn brings me back to the concept of ‘feedback loops’ which question the ‘boundaries of the autonomous subject’ (Hayles 1999: 2) and loops me back to the beginning of the course, when I put up my lifestream blog header image (the Mandelbrot set), a visualisation itself created through feedback and iteration…

The Mandelbrot set (created by Wolfgang Beyer with Ultra Fractal, CC BY-SA 3.0).
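To make that feedback concrete, here is a minimal Python sketch (not Beyer’s rendering, which is far more sophisticated) of the iteration behind the image: each point c in the complex plane is fed through z → z² + c again and again, and the escape time is plotted.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample a grid of complex numbers c around the Mandelbrot set
width, height, max_iter = 800, 600, 80
xs = np.linspace(-2.5, 1.0, width)
ys = np.linspace(-1.25, 1.25, height)
c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]

# The feedback loop: z is repeatedly fed back into z**2 + c
z = np.zeros_like(c)
escape = np.zeros(c.shape)
for i in range(max_iter):
    active = np.abs(z) <= 2          # points that have not yet escaped
    z[active] = z[active] ** 2 + c[active]
    escape[active] = i               # record the latest iteration survived

plt.imshow(escape, extent=(-2.5, 1.0, -1.25, 1.25), cmap="magma")
plt.title("Mandelbrot set: an image generated by feedback and iteration")
plt.show()
```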

View references

Michael commented on Susan’s lifestream – Week 2 Summary – enhancement & (dis)embodiment

Week 2 Summary – enhancement & (dis)embodiment

Michael Wolfindale:

Great summary and fascinating points!

Reflecting specifically on the idea of ‘distributed cognition’, and what this might mean for education, led me to an article where Hayles (2008) discusses the idea in context of ‘slippingglimpse’, a verbal-visual collaboration involving a videographer, poet and programmer and consisting of videos of moving water associated with scrolling poetic text.

Amongst other things, Hayles (2008: 23) discusses the ‘collision/conjunction of human and non-human cognition’, as well as ‘non-conscious parts of cognition’. One example of the latter might be a musician who has learnt a piece ‘by heart’ and ‘knows the moves in her body better than in her mind’ (I remember the phrase ‘muscle memory’ from piano lessons!).

She also discusses the ‘non-conscious performance of the intelligent machine’ (for example, learning from ‘computed information’), as well as ‘the capacity of artificial evolution for creative invention’ (such as using image-editing software).

Another example is reading, which some describe as ‘a whole-body activity that involves breathing rhythms, kinaesthesia, proprioception, and other unconscious or non-conscious cognitive activities’ (Hayles 2008: 16). The work ‘slippingglimpse’ itself ‘requires and meditates upon multimodal reading as a whole body activity’ (ibid.: 18).

While I am still processing the implications of these ideas for education (particularly the way they complicate individual agency), these examples have certainly been food for thought and helped me to think beyond the Cartesian mind/body dualism!

Michael commented on Val Muscat’s EDC lifestream – Computers start composing

Computers start composing.

Michael Wolfindale:

Fascinating article, Val!

I also came across an article about algorithms being involved in the composition/improvisation of music while I was reflecting on how ‘machines’ ‘think’, how ‘humans’ ‘think’, and the blurred boundaries between the two from a posthuman standpoint.

Talking of computers being able to ‘swing’, jazz pianist and programmer Dan Tepfer uses a special ‘player piano’ (a piano with an onboard computer that can ‘play’ itself). In practice, the piano is able to ‘listen’ to what Dan plays and ‘respond’ (e.g. play additional notes) through an algorithm Dan has written.

It’s interesting how Dan speaks about the process (“I’m not writing a piece, I’m writing the way the piece works”), and how this article describes the piano as ‘his composing partner’ (rather than as a ‘tool’ he controls):

NPR – Fascinating Algorithm: Dan Tepfer’s Player Piano Is His Composing Partner
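Out of curiosity about what such a listen-and-respond algorithm might look like in its simplest form, here is a toy sketch (emphatically not Tepfer’s system, just the general pattern) using Python’s mido library, assuming a MIDI-capable piano is connected on the default ports: every note the human plays is echoed back an octave higher.

```python
import mido

# Assumes the default MIDI ports are the connected piano (hypothetical setup)
inport = mido.open_input()
outport = mido.open_output()

# Listen-and-respond loop: for each note played, 'answer' with another note
for msg in inport:
    if msg.type == 'note_on' and msg.velocity > 0:
        response = msg.copy(note=min(msg.note + 12, 127))  # an octave up
        outport.send(response)
```

A real system like Tepfer’s would of course respond with musically informed material rather than a fixed transposition, but the structure (listen, compute, play back) is the same.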

Michael saved in Pocket: ‘Cognitive Assemblages?’


Illustration: Zbyněk Baladrán

Excerpt

Reading N. Katherine Hayles’ Unthought (University of Chicago Press, 2017), I’m struck by her notion of ‘cognitive assemblages’ to describe human-technical interaction which she discusses as fully imbricated. I wonder if the women and men whose careers in technology-driven work contexts we are exploring in Nordwit understand themselves as cognitive assemblages? In Hayles’ work agency is distributed, as are many other things such as responsibility – but do our research participants think of themselves in that way? The people I have interviewed in the context of Digital Humanities tend to take a rather instrumentalist view of technology, and we might want to ask, what difference does it make if you understand yourself as a ‘cognitive assemblage’ or as someone who makes use of technology – or, as academics can often feel, as a ‘victim’ of technology (the skype in my office isn’t working, we’re unable to project images etc.)?

View full article

The end of our second week on cyberculture

Our second week continued with questions raised through films (including A New Hope and Cyborg) and books (Machines Like Me and Iain M. Banks’ series). Themes that particularly struck me include:

1) Assuming that ‘human’ is neither an objective nor an inclusive term (Braidotti 2013: 26), how might this affect how we think about ‘artificial intelligence’, power and agency?

2) If we take a ‘dynamic partnership between humans and intelligent machines’ (Hayles 1999: 288) as a point of departure, how might we consider concepts such as consciousness, (distributed) cognition and agency?

3) Can machines make ‘moral’ decisions?

4) Building on a discussion about gender and ‘virtual’ identities, are we ‘performing’ or is it ‘performative’? Should there be a distinction between ‘real’/‘virtual’ here, and how do we define ‘real’? (The Matrix comes to mind here…) How might this play out in our identities on Twitter, lifestream blogs etc.?

5) Thinking beyond assumptions that the ‘human’ is at the centre of education, and technology is a ‘tool’ or ‘enhancement’, what are the implications of a complex entanglement of education and technology (Bayne 2015: 18) for this course?

Complex entanglement (‘Entanglement’, ellen x silverberg, Flickr)

Many discussions were via Twitter, drawing in questions from the public:

I have also been commenting on others’ lifestream blogs, bringing them in as feeds.

Following on from last week’s map, I have opened new and revisited old avenues:

EDC week 2

I have also experimented with visualisations of my feed ahead of our visual artefact task…

InfraNodus: Text network visualisation and discourse analysis (or ‘postsingularity thinking tool’)
Binaries/dualisms
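For a rough sense of what such a text network involves (a toy version, not InfraNodus’s actual method), one can treat words as nodes and co-occurrence within a post as edges, for example with Python’s networkx:

```python
import itertools
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical mini-corpus standing in for my lifestream feed
posts = [
    "algorithms shape culture and culture shapes algorithms",
    "education and technology are entangled assemblages",
    "feedback loops entangle humans and algorithms",
]

G = nx.Graph()
for post in posts:
    words = {w for w in post.lower().split() if len(w) > 3}
    # Link every pair of words that co-occur in the same post
    for a, b in itertools.combinations(sorted(words), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

nx.draw_networkx(G, with_labels=True, node_size=300, font_size=8)
plt.show()
```

Clusters of densely connected words then hint at the recurring topics (the ‘discourse’) running through the feed.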

View references

Film review – ‘Cyborg’

Following my first film review on A New Hope, here is a second, shorter post on The Cyborg, inspired by Matthew Taylor’s review of the same film to pick up on one of its themes (the fear of technology):

The Cyborg

The Cyborg includes many aspects relevant to the themes we have been exploring; however, one theme in particular struck me on rewatching it this week after a Twitter exchange: how, or indeed whether, we should think about agency with regards to technology (for example, around issues of fear and control).

The Cyborg portrays the ‘human’ exerting power over the ‘cyborg’ (the ‘human’ choosing its name and date of birth, as if it were a ‘tool’ without agency). This brings to mind the way technology is often seen as a ‘tool’ in education, rather than technology and education being ‘co-constitutive of each other, entangled in cultural, material, political and economic assemblages of great complexity’ (Bayne 2015: 18).

How, then, might we consider agency in this complex entanglement? Hayles (1999: 288) argues that ‘in the posthuman view…conscious agency has never been “in control”…distributed cognition replaces autonomous will’ and, in this talk and book, discusses the idea of the ‘cognitive nonconscious’.

I plan to dig further into how we might consider consciousness, cognition and agency with regards to technology and education as we continue with the course.


View references

Michael saved in Pocket: ‘Watch: N. Katherine Hayles on Nonconscious Cognition and Material Processes’

Excerpt

Hayles delivered the following lecture—the first of two covering “Nonconscious Cognition and Material Processes”—on 8 May 2015.

“This talk discusses the relation of nonconscious cognition to consciousness/unconscious, which I call the modes of awareness. It develops the idea of cognition in technical systems, particularly computational media, showing how principles of selection and specification of contexts lead to the creation of meaning out of information inflows/ingresses and outflows/egresses. It discusses the relation of agency within technical systems to human agency, arguing for a model of “punctuated agency” analogous to the “punctuated equilibrium” proposed by Stephen Jay Gould and others. It proposes the idea of “evolutionary potential” as a way to talk about trajectories of technological developments, arguing that computational media have a greater evolutionary potential than any other technology ever invented by humans. Finally, it argues that technical cognitive systems are interpenetrating human complex systems so pervasively and ubiquitously as to change the nature of what it means to be human, and the challenges that this interpenetration poses particularly to the humanities.”

View full article