Disentanglement from my lifestream: wrapping up algorithmic cultures and EDC 2020

‘Entanglement’ (ellen x silverberg, Flickr)

As I disentangle myself from my lifestream feeds, and reflect on the course, I consider how I have perceived and been influenced by the algorithmic systems involved.

Google and Twitter were consistent influences, the latter through new and existing connections and via #mscedc, #AlgorithmsForHer and #ds106, and I saved/favourited (often highly ranked) resources to Pocket, YouTube and SoundCloud (and other feeds).

While I had some awareness of these algorithms, alterations to my perception of the ‘notion of an algorithm’ (Beer 2017: 7) have shaped my behaviour. Believing I “understand” how Google “works”, reading about the Twitter algorithm and reflecting on ranking/ordering have altered my perceptions, and reading about ‘learning as “nudging”‘ (Knox et al. 2020: 38) made me think twice before accepting the limiting recommendations presented to me.

Referring to the readings, these algorithmic operations are interwoven with, and cannot be separated from, the social context, in terms of commercial interests involved in their design and production, how they are ‘lived with’ and the way this recursively informs their design (Beer 2017: 4). Furthermore, our identities shape media but media also shapes our identities (Pariser 2011). Since ‘there are people behind big data’ (Williamson 2017: x-xi), I am keen to ‘unpack the full socio-technical assemblage’ (Kitchin 2017: 25), uncover ideologies, commercial and political agendas (Williamson 2017: 3) and understand the ‘algorithmic life’ (Amoore and Piotukh 2015) and ‘algorithmic culture’ (Striphas 2015) involved.

During my ‘algorithmic play’ with Coursera, its “transformational” “learning experiences” and self-directed predefined ‘learning plans’ perhaps exemplify Biesta’s (2005) ‘learnification’. Since ‘algorithms are inevitably modelled on visions of the social world’ (Beer 2017: 4), suggesting education needs “transforming” and (implied through Coursera’s dominance of “tech courses”) ‘the solution is in the hands of software developers’ (Williamson 2017: 3) exposes a ‘technological solutionism’ (Morozov 2013) and Californian ideology (Barbrook and Cameron 1995) common to many algorithms entangled in my lifestream. Moreover, these data-intensive practices and interventions, tending towards ‘machine behaviourism’ (Knox et al. 2020), could profoundly shape notions of learning and teaching.

As I consider questions of power with regards to algorithmic systems (Beer 2017: 11) and the possibilities for resistance, educational institutions accept commercial “EdTech solutions” designed to “rescue” them during the coronavirus crisis. This accelerated ‘datafication’ of education, seen in context of wider neoliberal agendas, highlights a growing urgency to critically examine changes to pedagogy, assessment and curriculum (Williamson 2017: 6).

However, issues of authorship, responsibility and agency are complex, for algorithmic systems are works of ‘collective authorship’, ‘massive, networked [boxes] with hundreds of hands reaching into them’ (Seaver 2013: 8-10). As ‘processes of “datafication” continue to expand and…data feeds-back into people’s lives in different ways’ (Kennedy et al. 2015: 4), I return to the concept of ‘feedback loops’ questioning the ‘boundaries of the autonomous subject’ (Hayles 1999: 2). If human-machinic boundaries are blurred and autonomous will is problematic (ibid.: 288), we might consider algorithmic systems/actions in terms of ‘human-machinic cognitive relations’ (Amoore 2019: 7) or ‘cognitive assemblages’ (Hayles 2017), entangled intra-relations seen in the context of sociomaterial assemblages and performative in nature (Barad 2007; Introna 2016; Butler 1990) – an ‘entanglement of agencies’ (Knox 2015).

I close with an audio/visual snippet and a soundtrack to my EDC journey.

 

My EDC soundtrack:

My EDC soundtrack cover image


View references

Resisting through alternative spaces and practices: some initial thoughts

Image by John Hain (Pixabay)

‘If politics is about struggles for power, then part of the struggle is to name digital technologies as a power relation and create alternative technology and practices to create new spaces for citizens to encounter each other to struggle for equality and justice.’ (Emejulu and McGregor 2016: 13)

Much of this block on algorithmic cultures has involved us examining the algorithmic systems and cultures at play in our lives, ‘unpack[ing] the full socio-technical assemblage’ (Kitchin 2017: 25) and uncovering ideologies, commercial and political agendas (Williamson 2017: 3).

As I explore power and (the notion of) the algorithm (Beer 2017), the complexities of agency with regard to these entangled human-machinic relations (Knox 2015; Amoore 2019; Hayles 1999), and their implications for my lifestream here and for education in general, I increasingly wonder what form resistance might take.

With this in mind, I have gathered some rough notes and links below into a reading list of sorts; this is something I hope to explore in the future.

Collating a reading list…

Protest and resistance

A Guide for Resisting Edtech: the Case against Turnitin (Morris and Stommel 2017)

‘Students protest Zuckerberg-backed digital learning program and ask him: “What gives you this right?”‘ (Washington Post 2018)

Why parents and students are protesting an online learning program backed by Mark Zuckerberg and Facebook (Washington Post 2018)

Brooklyn students hold walkout in protest of Facebook-designed online program (New York Post 2018)

Hope (2005) outlines several case studies in which school students have resisted surveillance.

Tanczer et al. (2016) outline various ways in which researchers might resist surveillance, such as using “The Onion Router”, or Tor (although the Tor Project website may be blocked for some).

Noiszy offers a browser extension which aims to mislead algorithmic systems by filling sites of your choosing with “noise”, or “meaningless data”.

The #RIPTwitter hashtag is often used to resist changes to Twitter’s algorithmic systems. I noticed it earlier in the course, when considering changes to algorithms that might affect my social media timelines. See DeVito et al. (2017).

‘Integrated & Alone: The Use of Hashtags in Twitter Social Activism’ (Simpson 2018). Examines viral hashtags associated with social movements: #metoo, #takeaknee, #blacklivesmatter.

Alternative models (such as platform cooperativism, participatory democracy/design and inclusive codesign)

‘There are people behind big data – not just data scientists, but software developers and algorithm designers, as well as the political, scientific and economic actors who seek to develop and utilise big data systems for their diverse purposes. And big data is also about the people who constitute it, whose lives are recorded both individually and at massive population scale. Big data, in other words, is simultaneously technical and social.’ (Williamson 2017: x-xi)

What/who might we resist? Surveillance, ‘datafication’ and data-intensive practices that discriminate against the marginalised (Noble 2018; Knox et al. 2020)? ‘Learnification’, neoliberal ideologies and the marketisation of education (Biesta 2005)? “Technological solutionism” (Morozov 2011; 2013)?

Looking a little deeper into the models and processes that followers of the “Silicon Valley” model often evangelise about reveals the Agile Manifesto (Beck et al. 2001), written by a group of people (calling themselves ‘The Agile Alliance’ and described as ‘organizational anarchists’) who met at a Utah ski resort in 2001.

David Beer (2017: 4) argues that ‘algorithms are inevitably modelled on visions of the social world, and with outcomes in mind, outcomes influenced by commercial or other interests and agendas.’ Yet can we imagine a future where these agendas – rather than based on a Silicon Valley ethos, or as Barbrook and Cameron (1995) call it, Californian ideology (thank you JB Falisse for introducing me to this) – are rooted in cooperative or participatory principles?

‘Cooperative creativity and participatory democracy should be extended from the virtual world into all areas of life. This time, the new stage of growth must be a new civilisation.’ (Barbrook 2007, Imaginary Futures)

Alternative business models rooted in democracy, such as platform cooperativism (see Scholz and Schneider 2016).

Alternative design processes, rooted in participation and inclusivity, such as participatory design (see Beck 2002) and inclusive codesign:

‘The importance of inclusive codesign has been one of the central insights for us. Codesign is the opposite of masculine Silicon Valley “waterfall model of software design,” which means that you build a platform and then reach out to potential users. We follow a more feminine approach to building platforms where the people who are meant to populate the platform are part of building it from the very first day. We also design for outliers: disabled people and other people on the margins who don’t fit into the cookie-cutter notions of software design of Silicon Valley.’ (P2P Foundation 2017)

What might this look like in the context of education?

Can Code Schools Go Cooperative? (Gregory 2016)

Looking into critical pedagogy (see Freire 2014 [1970]; Freire 2016 [2004]; Stommel 2014), and If bell hooks Made an LMS: Grades, Radical Openness, and Domain of One’s Own (Stommel 2017).

As we see signs that “EdTech” companies stand to potentially gain from or exploit the current coronavirus crisis…

…it is ever more urgent to consider the potentially significant effects that these “EdTech solutions” may have on educational policy and pedagogies (Williamson 2017: 6). As Williamson (2020) writes:

‘Emergency edtech eventually won’t be needed to help educators and students through the pandemic. But for the edtech industry, education has always been fabricated as a site of crisis and emergency anyway. An ‘education is broken, tech can fix it’ narrative can be traced back decades. The current pandemic is being used as an experimental opportunity for edtech to demonstrate its benefits not just in an emergency, but as a normal mode of education into the future.’ (Williamson 2020)


View references

Week nine and algorithmic systems: unpacking the socio-technical assemblage

Algorithmic systems as ‘massive, networked [boxes] with hundreds of hands reaching into them’ (Seaver 2013: 10)? (Photo: Federico Beccari, Unsplash.)
Having just published my artefact Algorithmic systems: entanglements of human-machinic relations, I have started commenting on others’ artefacts, finding insightful (but sobering) explorations of the algorithmic systems of Facebook, Netflix and others. Facebook’s foray into ‘personalized learning platforms’ and the resistance against it is very relevant as I reflect on the issues of surveillance and automation in education (Williamson 2019).

While focusing on Coursera for my artefact, I observed inclusions/exclusions, tendencies to privilege “tech” courses from Western institutions, and experienced the complexities of researching algorithmic systems (Kitchin 2017). Does Silicon Valley-based Coursera – with its vision of life transformation for all – subscribe to Silicon Valley’s notion of ‘progress’ while blind to its exclusions? Is it another example of an attempt to ‘revolutionise’ education, as Williamson (2017) details, framed by neoliberal and commercial agendas yet presented as an objective and universal ‘learning experience’?

I reflect on the ‘black box’ metaphor, yet ‘algorithmic systems are not standalone little boxes, but massive, networked ones with hundreds of hands reaching into them’ (Seaver 2013: 10) and we ‘need to examine the work that is done by those modelling and coding [them]’ (Beer 2017: 10). Thus, rather than stripping individual algorithms of their wider social and political context, we should ‘unpack the full socio-technical assemblage’ (Kitchin 2017: 25) and examine the complex ‘human-machinic cognitive relations’ (Amoore 2019: 7), ‘entanglement of agencies’ (Knox 2015) and the implications for education in an era of ‘datafication’ (Knox et al. 2020).


View references

‘Algorithmic play’ artefact – ‘Algorithmic systems: entanglements of human-machinic relations’

‘These algorithmic systems are not standalone little boxes, but massive, networked ones with hundreds of hands reaching into them, tweaking and tuning, swapping out parts and experimenting with new arrangements…we need to examine the logic that guides the hands…’ (Seaver 2013: 10)

Early in this block, I played with a range of different algorithms, looking at the recommendations served up on the basis of my existing activity. I created a “recommendations feed” for my lifestream, and also made general notes and screenshots in a number of posts.

Later, thinking specifically about digital education and tying the activity to our MOOC exploration, I first looked at my FutureLearn recommendations and finally focused on Coursera, where I was able to provide false data for my “profile” and “learning plan” and see the alterations in recommended courses.

Some initial conclusions

Many of the “recommendation engines” I played with, such as SoundCloud, Spotify and Twitter, led me into what I perceived to be a “you loop“, “filter bubble” or “echo chamber”. Google’s autocomplete showed some signs of reproducing existing biases, perhaps amplifying dominant views held by other users. Google may also be making assumptions based on data they hold about me, which may have skewed YouTube search results, although it would be interesting to compare results with other users or in different locations, an approach Kitchin (2017: 21) discusses. I have written a little about ethical concerns and my Google data also.
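To make the “you loop” idea a little more concrete, below is a minimal, purely illustrative sketch of a content-based recommender. The item data, feature vectors and function names are my own assumptions for demonstration; none of the platforms mentioned above publish their actual ranking code, and this is not how they work in practice.

```typescript
// A toy content-based recommender, included only to illustrate how a "you loop"
// can emerge from feedback. All names and data are hypothetical.
type Item = { id: string; features: number[] };

const catalogue: Item[] = [
  { id: "ambient-1", features: [1, 0, 0] },
  { id: "ambient-2", features: [0.9, 0.1, 0] },
  { id: "jazz-1", features: [0, 1, 0] },
  { id: "punk-1", features: [0, 0, 1] },
];

const cosine = (a: number[], b: number[]): number => {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
};

// Recommend the unseen items closest to the average of the listening history.
function recommend(history: Item[], k = 1): Item[] {
  const seen = new Set(history.map((i) => i.id));
  const taste = history[0].features.map(
    (_, col) => history.reduce((sum, item) => sum + item.features[col], 0) / history.length
  );
  return catalogue
    .filter((item) => !seen.has(item.id))
    .sort((a, b) => cosine(taste, b.features) - cosine(taste, a.features))
    .slice(0, k);
}

// Feeding each recommendation back into the history reinforces the same taste
// profile, so suggestions narrow round after round: the loop lies in the
// feedback, not in any single recommendation.
let history: Item[] = [catalogue[0]];
for (let round = 0; round < 3; round++) {
  history = history.concat(recommend(history));
}
console.log(history.map((i) => i.id)); // "similar" items surface first
```

Even this toy version suggests how recommending “more of the same”, and then treating those recommendations as fresh evidence of taste, can steadily narrow what is surfaced.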

Moving my focus to Coursera, its recommendations appeared to privilege courses with an information technology/computer science focus, although the range of available courses is clearly a factor too. In any case, the founders’ backgrounds in computer science, and their education at Western universities, shone through despite attempts to tweak my “profile” and “learning plan” (I have also written a little about the ethical issues around the “profile”). This appears to be a common theme in “apps” or websites developed by Western companies, and the design processes used (whereby products are developed first for the “profile” of the founder(s), and only secondarily for others) arguably create exclusions for some (and a strong sense of “this wasn’t designed for me”) and inclusions for others (notably, switching my profile to “software engineer” produced a wealth of “relevant” results which I could tailor to my choosing).

My artefact: Algorithmic systems: entanglements of human-machinic relations

You can see the Coursera ‘algorithmic play’ posts in a lifestream feed, and also through this CodePen (which brings in the same feed but presents, ranks and orders it slightly differently). How might consuming the feed through different ranking systems affect your perception, and how might the preamble surrounding it have changed your conclusions and what you click on?

View the CodePen

‘Algorithmic systems: entanglements of human-machinic relations’

You can also access the “code behind” the CodePen. However, even though access is granted to this code, it will not necessarily be easy to make sense of (Kitchin 2017: 20; Seaver 2013). While presented as ‘open’ source, is it really ‘open’, and for whom? What exclusions might this create?

The CodePen (while slightly misleadingly called an “Educational Algorithm”) is a very basic attempt to present the feed of Coursera recommendations in a different fashion for comparison, while provoking some thought around algorithmic systems in general. There is no complex processing, just a little reordering. It also does not store any data (information entered is used within the browser to rank articles, but not stored in an external database), and the text descriptions are fictional – it is just for visual/demonstration purposes.
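For anyone curious what “just a little reordering” might look like in practice, here is a minimal sketch of the general approach, written from scratch rather than taken from the CodePen itself: the feed URL, the Post fields and the keyword-counting score are all assumptions for illustration, not the actual CodePen code or Coursera’s API.

```typescript
// Minimal sketch of client-side re-ranking, loosely in the spirit of the CodePen.
// The feed URL, Post fields and scoring rule are illustrative assumptions.
type Post = { title: string; description: string; link: string };

async function loadAndRank(feedUrl: string, interest: string): Promise<Post[]> {
  const response = await fetch(feedUrl);       // fetch the recommendations feed
  const posts: Post[] = await response.json(); // assume it is exposed as JSON

  // Score each post by how often the visitor's stated interest appears in its text.
  // Nothing is sent or stored anywhere: the ranking exists only in the browser session.
  const score = (p: Post): number => {
    const text = `${p.title} ${p.description}`.toLowerCase();
    return text.split(interest.toLowerCase()).length - 1;
  };

  return [...posts].sort((a, b) => score(b) - score(a));
}

// Usage (hypothetical feed URL):
// loadAndRank("/coursera-recommendations.json", "data science").then(console.log);
```

Even with something this trivial, whichever scoring rule the author happens to choose determines what floats to the top, which is part of the point the artefact tries to make.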

Reflections on the CodePen

Inspired by Kitchin (2017: 23), I have written a few words reflecting on my experiences, while acknowledging the limitations and subjectivity of this approach.

The CodePen was adapted (“forked“) from an existing CodePen, which provided the majority of the base; with my very limited skills, I could tweak it very slightly to add a basic feed of the Coursera recommendations by pasting in another bit of code (which turned out to be old and broken) and then, ultimately, another bit of example code (from a Google-“led” project). It is very much a case of different bits of code “hacked” together, full of little “bugs” and “glitches”, and not particularly well designed or written! I was keen to strictly limit the time spent on it, although I know much time could be spent tidying and refining it.

Presumably, similar time constraints (albeit with more resources and testing) affect the development of, say, Facebook’s or Google’s algorithms and lead to mistakes. After all, Facebook’s internal motto used to be ‘move fast and break things’, although this race to create a “minimum viable product” and “disrupt” (regardless of the consequences) is increasingly criticised.

In any case, this CodePen is like a snapshot of an experimental “work in progress” (which others are welcome to “fork”, use and adapt) and brings to mind the messy, ontogenetic, emergent, and non-static nature of algorithmic systems (Kitchin 2017: 20-21).

Ultimately, my aim is to raise some questions about the details of this process and, since most of the code was “stuck together” from bits of free and ‘open’ source code, about how the culture of the individuals/teams involved is significant too. As Seaver (2013: 10) puts it…

‘…when our object of interest is the algorithmic system, “cultural” details are technical details — the tendencies of an engineering team are as significant as the tendencies of a sorting algorithm…’

…and, given that a large portion of the code came from a Google-led project (and Twitter too), how might the tendencies of those teams have created inclusions/exclusions? Furthermore, as the focus of my artefact is Coursera, whose founders both have experience working at Stanford University and Google, how might the tendencies there have guided Coursera and, subsequently, my CodePen?

Finally, given that Coursera presents itself as a universal educational space, where its vision is ‘a world where anyone, anywhere can transform their life by accessing the world’s best learning experience’, what are the implications of this for digital education? In my initial explorations, my perceptions are that computer science and information technology disciplines from Western universities are prioritised by Coursera, in general and through their algorithmic systems. However, further research is needed to, as Kitchin (2017: 25) puts it, ‘unpack the full socio-technical assemblage’.


View references

Data collected through Coursera profile – what are the ethical issues at stake?

The data you can enter through the Coursera “profile”, while not compulsory, promises potential career opportunities, new connections and course recommendations. This may be of less interest to those who have an existing level of education, a job and existing connections; however, if you are visiting Coursera in the hope of increased career opportunities, it may be significant (although it is difficult to tell without further analysis or Coursera data).

A quick check of the Coursera privacy policy reveals that ‘general course data’ and site ‘activity’ may be shared with ‘Content Providers and other business partners’, including personally identifiable information, and ‘Content Providers and other business partners may share information about their products and services that may be of interest to you where they are legally entitled to do so’.

In addition to the site activity data (presumably course searches, enrolments and so on) collected by Coursera, the additional information you can provide to personalise your ‘learning experience’ and recommendations is fairly extensive, including work experience, education, career goals, location, age and gender:

Coursera profile

While it is possible through this page to limit the information to ‘only me’, ‘the Coursera community’ or ‘everyone on the web’, presumably the privacy policy will still allow Coursera staff, and associated ‘content providers’ or ‘business partners’, to access and analyse this data. The options presented (for which I selected ‘only me’ in all cases) give a slightly false and misleading sense of privacy, since the privacy policy suggests the first option should really read ‘only me, plus Coursera staff, Content Providers and business partners’ – that is, assuming I have understood the policy correctly, although presumably many will never read it at all.

There do not seem to be any options (easily visible on this page, at least) for hiding all your data from everyone else (including Coursera staff, ‘Content Providers’ and ‘business partners’), nor do there appear to be any options for customising who can view your site activity. For an educational site – where some may be following a promise of improved career opportunities, and where not everyone is beginning from the same ‘starting point’ – it seems appropriate, in my opinion, to be able to hide all data from everyone.

Michael saved in Pocket: ‘Algorithming the Algorithm’ (Uprichard and Mahnke 2014)

Excerpt

Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we would like to emphasize another side to the algorithmic everyday life. We argue that algorithms can instigate and facilitate imagination, creativity, and frivolity, while saying something that is simultaneously old and new, always almost repeating what was before but never quite returning. We show this by threading together stimulating quotes and screenshots from Google’s autocomplete algorithms. In doing so, we invite the reader to re-explore Google’s autocomplete algorithms in a creative, playful, and reflexive way, thereby rendering more visible some of the excitement and frivolity that comes from being and becoming part of the riddling rhythm of the algorithmic everyday life.

View full article

Viewing different Coursera courses to influence my recommendations

The tweaks to my Coursera profile and learning plan have had a fairly limited effect so far on my Coursera recommendations.

I notice that my recently viewed courses have an impact, so I will look to alter these:

Recently viewed courses in Coursera

First, I am switching my profile and learning plan to a nurse in the healthcare industry:

Coursera profile
Coursera learning plan

…courses that others identifying themselves as nurses have taken now appear…

Coursera – ‘People who are Nurses took these courses’

Notably, the first is “nursing informatics” – could this be another example of information technology dominating results?

I view some courses related to ‘Everyday Parenting’, ‘Mindfulness’, ‘Well-Being’, ‘Buddhism and Modern Psychology’ and ‘Social Psychology’.

Below are some more computer science/information technology degree recommendations…

Coursera ‘Earn Your Degree’

Some courses on ‘Personal Development’ are also displayed. Many are not particularly related to the areas I specified; however, it is a rare opportunity to see recommended courses that are not computer science or information technology.

Coursera Personal Development

My explorations again seem to show a privileging of computer science subjects – not surprising given the founders’ backgrounds.

However, this limited focus does seem slightly at odds with Coursera’s own slogan:

‘We envision a world where anyone, anywhere can transform their life by accessing the world’s best learning experience.’
(About Coursera)

As previously discussed, the approach whereby those from a Western, university-educated computer science background build something for themselves, raise funds through investment and then market it as a “universal” solution that is “best” for all appears quite common.

Tweaking my “profile” in Coursera to “software engineer”

Further to setting my initial (false) “profile” and playing with my Coursera “learning plan”, I have now tweaked my profile to indicate that I am a software engineer at “Executive Level” at Facebook, with a master’s degree:

Tweaking my Coursera profile

I have also set my learning plan so that I am a “software engineer” in the “technology” industry:

Coursera learning plan

The key difference here is the recommended course list, which now suggests courses that other software engineers have taken:

Coursera recommendations

There seems to be a wealth of courses in this area, which is perhaps unsurprising given my other experiences of the site so far.

SoundCloud and Spotify recommendations – a “you loop”?

Here is the Spotify playlist which has been connected to my lifestream…

EDC Spotify playlist

…and today’s recommended songs…

Spotify recommended songs

…which, with a few exceptions, are largely very “similar” songs or tracks from the same albums.

My SoundCloud recommendations appear to be partly influenced by listening to a podcast from Meet The Education Researcher…

SoundCloud ‘Artists You Should Know’

 

Are these examples of the algorithm pushing “similar” content and perhaps also changing my perception of what I should listen to? Have I been in a “you loop“? Have my recommendations been influenced by others listening to them?