week 10/11 final reflections, lifestream summary, and covid-19

Major crises, such as COVID-19, make paradoxes and tensions in society more salient. I wish to close this course with two that have been central to this lifestream.

  1. We typically worry about the ‘automated’ (e.g. my tags a reasonable view on AI, disrupting the world, and tech saves the world): the cybernetic part of the cyborg (Hayles 1999 on the boundaries of the autonomous subject), the nudges of MOOCs (Adam 2019), and the intelligent algorithm (Knox, Williamson & Bayne 2020). Yet these do very well at creating and reinforcing social behaviours: the social distancing the public now massively supports or, in education, the integration of norms (meant to become spontaneous dispositions). With digital cultures, education often finds itself torn between the desire to ‘emancipate the self’ and the will to make students norm-abiding citizens. Concepts such as transhumanism (Bayne 2015), entanglements of agencies (Knox 2015) and assemblages (Kitchin 2017) have been useful discoveries that helped me avoid getting stuck.

  2. Learning takes place in context, and a constant worry of my lifestream has been to question the relevance of cyborgs and MOOCs (also building on Adam 2019) beyond the West (e.g. my tags gaming the system and look South). Moving to algorithms, a tension appears obvious (contrast, for instance, mine and Iryna’s artefacts for this block): we worry both that digital cultures homogenise learning and that they discriminate by making the experience singular (and therefore making people miss out on things). Here the contesting of learnification and datafication (Biesta 2017; Knox, Williamson & Bayne 2020) appears a possible way forward, as it seeks to restore student autonomy while also challenging the idea of students as fully informed consumers.

Let me now reflect more on this lifestream as an artefact co-constructed with the support of algorithms. Automation failed me a little (e.g. my comments not appearing in my lifestream) and I feel I missed out a little during block 2: maybe I have already written too much on ‘communities’ as an academic, or maybe I found it hard to try a MOOC again, or maybe I simply had less headspace because of travelling. Anyhow, more interesting is probably the feeling that, overall, I have tried to resist algorithms (Beer 2017). Opting out of Twitter was one such act of resistance, but I remain a prisoner of Google recommendations (even with all the obfuscation tools installed on my browser) and of echo chambers on Pocket and YouTube (as explored in my artefacts).

What strikes me in my lifestream, and perhaps this is in part testament to my difficulty in embracing the open space (à la Manifesto for Teaching Online) in which it sits, is the level of editing and curation that I did (as did many, though not all to the same extent, of my excellent classmates). However, this is maybe less stubborn Luddism than an acknowledgement of the entanglement of agencies and the assemblage of socio-materialities (Kitchin 2017). Maybe the algorithms are indeed making us cyborgs, and it is not that terrifying.

references

Bayne, S. (2015) What’s the matter with ‘Technology Enhanced Learning’?, Learning, Media and Technology, 40:1, 5-20, DOI: 10.1080/17439884.2014.915851

Beer, D. (2017) The social power of algorithms, Information, Communication & Society, 20:1, 1-13, DOI: 10.1080/1369118X.2016.1216147

Biesta, G.J. (2017) The Rediscovery of Teaching. Taylor & Francis.

Hayles, N.K. (1999) Toward embodied virtuality, in How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, pp. 1-25, 293-297. Chicago, IL: University of Chicago Press.

Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29.

Knox, J., Williamson, B. & Bayne, S. 2020. Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45, DOI: 10.1080/17439884.2019.1623251

Adam, T. (2019) Digital neocolonialism and massive open online courses (MOOCs): colonial pasts and neoliberal futures, Learning, Media and Technology, 44:3, 365-380, DOI: 10.1080/17439884.2019.1640740

a quick note on the formatting of my lifestream (and why I disappeared from Twitter)

This should have happened earlier… but I have finally cleaned up my lifestream (slightly). The categories correspond to the main building blocks of this course, i.e. the broad thematic activities as well as the type of assignment or source. The tags are the key themes.

A quick word on two things that happened to my lifestream:

  • I started the course tweeting a lot and then stopped: you will see tweets for a few weeks and then none. It was a deliberate choice I made after I realised, in the first group call, that my lifestream could live without Twitter. Why then stop totally? There are two reasons, which I am not expecting people to share: (1) my Twitter feed is tiny (the numbers speak for themselves: 69 tweets, half of which were in this course, 130 following and 386 followers) and I wish to keep it that way. It is mainly, if not only, a professional tool for me, and I didn’t feel like mixing the course with the couple of tweets I have on papers I have published. I could probably have created another account, but it was too late when I realised the issue. Also, (2) I decided five or six years ago to withdraw from Facebook, and that made me happier. Social media can be fascinating and very useful, but the ‘problem’ in my life is not really to get more news, stay in touch with people, or even reach out to new people (ok, social media is great for that last one); rather, it is to curate and digest information and make more informed and pondered decisions, and I don’t find social media such as Twitter very useful for that. What I need is headspace, and Twitter doesn’t give me much of that. Okay, there is actually a third reason: I am just really bad with social media; when on it, I would just lose hours watching what’s going on. So I simply don’t do it much, to keep sane. I am not expecting people to share my take on Twitter, but I think my lifestream will show it is possible to do decent work without social media. [on this: https://www.newyorker.com/tech/annals-of-technology/escaping-twitters-self-consciousness-machine]

  • Something went wrong with IFTTT and it did not record a series of comments I had made on my coursemates’ lifestreams, and in particular on their artefacts. It doesn’t matter much: some of it did work, and I also copy-pasted some in my last post.

a few of my comments the RSS didn’t catch (ok, it’s probably me who did not set it up properly)

I just realised my RSS feed didn’t work. Well, it did work, but only for some lifestreams; I am not too sure what happened, as I did run some tests, but anyway… here are just a few of the comments I made, already some weeks ago, on the algorithmic play of some others…

#mscedc My algorithmic play artefact: https://t.co/n2L6TTwenI

Nice thought-provoking piece! I quite like the approach, a really cool idea and clear evidence that there are many different ways of researching algorithms (how did you mobilise such a network? that’s an impressive effort). And yes, that’s a nice pandemic map 😉. What I find striking is that both Python and Feminism lead to similar results. I guess Python was to be expected, but feminism maybe not so much… it would be interesting to repeat the exercise and ask not about the first five results but, say, the results between 20 and 25. In my exploration of YouTube, I found that the first results are indeed the same when varying the profiles (maybe because of self-feeding bubbles or the powerful interests behind them), but they diverge later on… then again, who looks at results ranked 20-25!

‘Algorithmic play’ artefact – ‘Algorithmic systems: entanglements of human-machinic relations’

This is really impressive, Michael, a very thorough piece of work and a polished reflection on top! I like how you connected different issues… and went all the way to the idea of progress and, without naming it, the Silicon Valley ethos. I don’t know if you have read about the Californian ideology, a term coined 25 years ago by Barbrook and Cameron (the original essay is here http://www.imaginaryfutures.net/2007/04/17/the-californian-ideology-2/). It still seems an apt depiction of your experience on Coursera and other platforms: a paradoxical mix of universal, left-leaning liberalism (the Coursera vision) and hopeful technological determinism (hence the focus on technological courses?). I think Kitchin’s paper (forcing us to go beyond the technical) is indeed a great entry point to unravel some of the ‘hidden’ ideology.

Algorithmic Play Artefact

This is a really exciting piece, and there is so much in it; it shows very hard work, and I am really impressed by the way you have disentangled the different elements of no fewer than four major websites. I quite like the idea of some of our agency being taken away by the algorithms. At the end of the Netflix and Amazon sections, you worry about the monetisation of education by online platforms (datafication, in line with Williamson and others). I couldn’t agree more, but I wonder: what do you think would be the right regulation mechanism? Is it even about regulation?


Algorithms and Ideology

From the French-German ARTE.tv – this is great (but the iframe doesn’t work)

What exactly are algorithms and how do they affect politics and civic society? Are they ideologically neutral or can they be manipulated? Raphaël Enthoven discusses with Italian political journalist Giuliano da Empoli and algorithm expert, Aurélie Jean.

https://www.arte.tv/en/videos/092170-008-A/algorithms-and-ideology/


Can computers ever replace the classroom?

With 850 million children worldwide shut out of schools, tech evangelists claim now is the time for AI education. But as the technology’s power grows, so too do the dangers that come with it. For a child prodigy, learning didn’t always come easily to Derek Haoyang Li. via Pocket https://ift.tt/2Wx4nHA

Comment on algorithmic play by jfalisse

Nice job! I like the presentation but also the reflection. Great point on apophenia… I guess it gives us some perspective too, in the sense that most algorithms are nothing evil, they just try to best capture our attention, don’t they? I think this is where the big difference between YouTube and (hopefully) proper education platforms lies: one’s aim is just to attract our attention, whether it is with relevant stuff doesn’t really matter!

source https://edc20.education.ed.ac.uk/jkwon/2020/03/16/this-is-my-algorithmic-play-with-youtube-i-hope-you-all-have-enjoyed-this-activity-https-t-co-zvhmw3n2ue-mscedc/#comment-45

The end of nudging?

This is not really my last post, more of an in-between reflection about some of the things I have been posting recently on nudging (and also some of what I read on Valerian’s excellent lifestream – http://edc20.education.ed.ac.uk/vmuscat/), and then obviously the coronavirus debacle and especially the recent U-turn in the British approach (which seems to mark a decline in the influence of the nudge unit in the coronavirus response).

Two thoughts on nudges then, trying to circle back to education:

  1. There are many instances in which nudging doesn’t work, or results are simply overclaimed (see my lifestream – just another example here https://www.npr.org/sections/money/2020/02/04/801341011/the-limits-of-nudging-why-cant-california-get-people-to-take-free-money). Nudges are often context- and experiment-specific and suffer from the general replication crisis in psychology (https://www.theatlantic.com/science/archive/2018/11/psychologys-replication-crisis-real/576223/). This may reassure Williamson, Knox, and colleagues (see below): maybe we should pay less attention to the overhyped idea of nudging in the first place, certainly in education where the evidence base is even thinner… and go back to the old (good?) idea of the learning environment?
  2. The term nudge is disputed, but everybody seems to agree it is a small change in the environment: using one word instead of another, one shape instead of another. What most search engines do is not nudging. It is nothing like a small touch; it is a rather big and complex enterprise of social engineering, often with commercial motives. This is not nudge theory in the sense of tweaking individual behaviour by playing on people’s less rational side, without a theory of learning or of the individual (although even that idea of a lack of foundations is challenged; some say nudging still rests on the old homo economicus framework). Rather, the logic is one of maximisation of a quantified target, sometimes a test score but more often attention or money, through non-disclosed means (algorithms). Very much classical economics.

references

Williamson, B. (2017) Introduction: Learning machines, digital data and the future of education (chapter 1), in Big Data in Education: The Digital Future of Learning, Policy, and Practice. Sage. Access the ebook version here (EASE login required)

Knox, J., Williamson, B. & Bayne, S. 2020. Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45, DOI: 10.1080/17439884.2019.1623251