Do algorithms make us behave?


This video explains the concept of reinforcement learning in machines and gives some very good examples, showing how the algorithm behind reinforcement learning continuously feeds particular actions (responses) into its environment (in this case a game). When a positive result is achieved and a reward is given, the sequence of steps leading to that reward is reinforced. The process continues in order to accrue as many positive behaviours as possible. When the concept of reward is less straightforward, because the steps needed to reach a reward are much more complex, reward shaping (adding intermediate rewards for every scenario) is possible, although time-consuming. Training without rewards is very hard in reinforcement learning, a technique which closely echoes the behaviourist learning patterns of early educational systems.
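The action–state–reward loop described above can be sketched in a few lines of code. This is a minimal tabular Q-learning toy, not the algorithm from the video: the corridor environment, the states and all hyperparameters are my own illustrative assumptions.

```python
import random

random.seed(0)  # for reproducibility

# Toy environment: a 5-state corridor. The agent starts at state 0
# and receives a reward only on reaching the goal state 4.
N_STATES = 5
ACTIONS = [-1, +1]                 # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action] stores the learned value of taking `action` in `state`
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}

def step(state, action):
    """Environment response: returns (next_state, reward)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise exploit the best-known action
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        nxt, reward = step(state, action)
        # The update "saves" the steps that led to reward by propagating
        # value backwards from rewarding states towards earlier ones
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[nxt].values()) - Q[state][action]
        )
        state = nxt

# After training, the greedy policy moves right in every state
policy = {s: max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)}
print(policy)  # {0: 1, 1: 1, 2: 1, 3: 1}
```

Reward shaping, in this sketch, would amount to editing `step` so that intermediate states also return small rewards: as noted above, this speeds learning but has to be hand-crafted for every scenario.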

The idea of algorithmic systems that pepper student learning with occasions for enjoying reward (as in the case of easy quizzes in MOOCs) may act as the carrot dangled before the donkey, promoting the self-directed learner while providing an occasion for ‘datafication’ and the collection of data (Williamson, 2017). In this case, student behaviour becomes a very ‘valuable commodity’ (Knox et al., 2020), providing the ‘action to the state’ described in the video, because it can help predict outcomes. Ironically, students are then providing their behaviour patterns for free to the operators of CMSs, VLEs and MOOCs.

not only is data positioned before the desires of the learner as the authoritative source for educational action, but the role of the learner itself is also recast as the product of consumerist analytic technologies. (Knox et al., 2020)

Educational systems that study and collect data in order to provide ‘the best possible learning experience’ and ‘limit’ the online learner to a simple reward system are an example of Biesta’s concept of ‘learnification’, whereby the system is merely interested in producing successful students, and in growing numbers of them (Biesta, 2012). This kind of ‘solutionism’ is a far cry from the learning process Biesta envisaged. The social dimension of education is absent from the start, and learning is reduced to playing a basic video game (like Pong) in which the reward, rather than the playing experience, is what ultimately counts, reducing the learner to a ‘product’ (Rushkoff, cited in Knox et al., 2020). This is a view deeply enshrined in radical behaviourism, and a concept built upon the binary determinism of computer systems that are able to break down responses to knowledge into a system of ‘ons’ and ‘offs’ that will eventually (even thanks to developments in quantum computing) challenge or even outperform the best human minds, as seen below.

References:

Biesta, G. (2012). Giving teaching back to education: Responding to the disappearance of the teacher. Phenomenology & Practice, 6(2), pp. 35-49.

Knox, J., Williamson, B., & Bayne, S., (2020) Machine behaviourism:
future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45, DOI: 10.1080/17439884.2019.1623251

Williamson, B. 2017. Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: the digital future of learning, policy, and practice. Sage.

How AI will destroy Education


Is data collected from students always reliable when taking decisions on education?

from Diigo https://ift.tt/2zO9KHO
via IFTTT

A number of educational models, such as constructivist and experiential approaches to education, have shown that performance (in the form of marks) is not necessarily a reliable gauge that learning has taken place… and yet marks are often part of the data collected in order to determine where students go wrong. Furthermore, predictions made on unreliable or inconclusive data can do more harm than good.

One of the less discussed aspects of AI in education is the isolation of the learner from the environment and from peers, and yet robust AI systems should take these variables even more into account. This is perhaps the idea behind modern behavioural approaches that put the environment (of learning) back into the equation by designing ‘architectures’ that take into account the ‘physical, socio-cultural and administrative environments in which choices are framed’ (Knox et al., 2020).

In spite of this, the same arguments that go into removing the teacher from the learning equation are often voiced when talking about AIEd. There are aspects of the learning process, often related to the community of learning, that remain absent from learning algorithms, including notions surrounding the emotive side of learning. Will these aspects be truncated in favour of ‘cleaner solutions’?

References:

Knox, J., Williamson, B., & Bayne, S., (2020) Machine behaviourism:
future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45, DOI: 10.1080/17439884.2019.1623251

Digital public: looking at what algorithms actually do

‘Ever have that feeling where you’re not sure if you’re awake or dreaming?’ (Neo, The Matrix)

It is equally interesting and terrifying what algorithms can gather from the unsuspecting browser or social media user who spends a second longer hovering over an image, or as soon as the ‘Like’ button is pressed.

What would it mean to developers and advertisers to make their algorithms more accessible? How can the informed user become more aware of what data is being collected from them and for what purpose? Are the hidden machinations behind most of today’s platforms kept hidden because of competitors, or is there a darker reason? Groups like AlgorithmWatch advocate transparency in the use of algorithms while studying their effects.

Since ‘algorithms will play an ever-increasing role in the exercise of power’ (Kitchin, 2017), it is vital that a code of ethics is in place, both to protect the user and to allow researchers access to some of the workings behind complex algorithms, in an effort to study the ramifications and effects they have on the public, thereby ascertaining the extent to which they are ‘carefully crafted fictions’ (Gillespie, cited in Kitchin, 2017) and avoiding the spread of fake news mentioned below:

from Diigo https://ift.tt/2E4tg5u
via IFTTT

References:

Kitchin, R. (2017) Thinking critically about and researching algorithms,
Information, Communication & Society, 20:1, 14-29, DOI: 10.1080/1369118X.2016.1154087

#mscedc Should legislation push social media platforms to reveal their data gathering algorithms and control them? https://t.co/y0dTna1iUQ

The UK government’s advisory body on AI ethics believes that social media platforms should be regulated when it comes to the algorithms used to promote media on social networks. It also proposes that the government should push social networks to allow independent researchers access to their data, thereby removing one of the main obstacles to research into the way algorithms work as identified by Kitchin (2017): ‘black boxing’.

Such legislation would control the display of ‘high-risk’ adverts, such as political adverts but also adverts related to jobs and ‘age-restricted products’. It aims to find a middle ground between two opposing poles: the liberal American view of online freedom and the strict controls imposed by China.

from http://twitter.com/MVJ12518369
via IFTTT

References:

Kitchin, R. (2017) Thinking critically about and researching algorithms,
Information, Communication & Society, 20:1, 14-29, DOI: 10.1080/1369118X.2016.1154087

When algorithms get the upper hand.

Et nondum quod opera.

01000101 01110100 00100000 01101110 01101111 01101110 01100100 01110101 01101101 00100000 01110001 01110101 01101111 01100100 00100000 01101111 01110000 01100101 01110010 01100001 00101110

And yet it works
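Out of curiosity, the binary string above can be turned back into the Latin caption with a couple of lines of Python (the variable names are mine):

```python
# Decode a space-separated string of 8-bit binary codes into text.
bits = (
    "01000101 01110100 00100000 01101110 01101111 01101110 01100100 "
    "01110101 01101101 00100000 01110001 01110101 01101111 01100100 "
    "00100000 01101111 01110000 01100101 01110010 01100001 00101110"
)
# int(b, 2) parses each 8-bit group as base-2; chr() maps it to a character
text = "".join(chr(int(b, 2)) for b in bits.split())
print(text)  # Et nondum quod opera.
```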

The Guardian view on digital injustice: when computers make things worse


Garbage in, garbage out.

It seems that in spite of everything, software algorithms can still produce even worse results than humans if created irresponsibly or given the wrong data (remember this). On the other hand, ‘with careful training and well-understood, clearly defined problems, and when they are operating on good data, software systems can perform much better than humans ever could’ (Guardian, 2019). When the system is programmed to accept garbage for a political or economic reason, then we can hardly fault the machine (are we even able to do that?). It seems that, deliberately or not, human error can still make its way into an algorithm.

from Diigo https://ift.tt/2VEW2S2
via IFTTT

My ethnography – A community with a focus

My micro-ethnography artefact is available here.

I feel that my ethnography of the MOOC ‘Launching Innovation in Schools’ is a mixture of different things: some statistical data, an evaluation of posts, use of language and some personal observations. At one point I felt that I spent more time than necessary on the statistical data, but I found that patterns in replies and activity between participants shed some light on the type of community that was (and still is) forming. I decided to choose a discussion around a video thread that seemed to have more activity than other threads.

Over the last few weeks, I did feel like I was ‘discovering a cozy little world that had been flourishing without me, hidden within the walls of my house’ (Rheingold, 2000). Accessing the MOOC every few days, or receiving reminders of posts on my mobile, did feel like being part of something else. I felt that my MOOC was ‘a community of practice’ more than anything else, but the depth of experience expressed was at once enlightening and, for my profession, comforting.

References:

Rheingold, H. (2000). The Virtual Community. Available at: http://www.caracci.net/dispense_enna/The%20Virtual%20Community%20by%20Howard%20Rheingold_%20Table%20of%20Contents.pdf. (Accessed: 1st March 2020).

References used in ethnography:

Lister, Martin … [et al.], (2009) “Chapter 3. Networks, users and economics” from Martin Lister … [et al.], New media: a critical introduction pp.163-236, London: Routledge

Kozinets, R. V. (2010). Chapter 2: ‘Understanding Culture Online’, Netnography: doing ethnographic research online. London: Sage. pp. 21-40.

Kozinets, R. V. (2018). Netnography: Robert Kozinets. Available at: https://www.youtube.com/watch?v=F8axfYomJn4. (Accessed: 27th February 2020)

Bibliography:

Harrison, R. & Michael, T. (2009) Identity in Online Communities: Social Networking Sites and Language Learning Identity in Online Communities: Social Networking Sites and Language Learning. Available at: https://www.researchgate.net/publication/265631150_Identity_in_Online_Communities_Social_Networking_Sites_and_Language_Learning_Identity_in_Online_Communities_Social_Networking_Sites_and_Language_Learning. (Accessed: 25th February 2020).

Knox, J., (2013). Five critiques of the open educational resources movement. Teaching in higher education, 18(8), pp.821-823.

Vasilescu, B., Capiluppi, A. & Serebrenik, A. (2012) Gender representation and online participation: A quantitative study. Available at: https://bvasiles.github.io/papers/iwc13.pdf. (Accessed: 25th February 2020).


Week 7 – An overview of online cultures

This has been the concluding week to online cultures, with a few (I was relatively busy with the ethnography) links to other studies (here and here) conducted on online cultures. Some of these studies were based on quantitative techniques and done a while ago, but they still shed light on the interest in online communities generated by the Web 2.0 technologies boom. Most of these studies reveal multiple opportunities for people to come together for a number of reasons, and show ‘that, rather than being socially-impoverished and “lean”, there were detailed and personally enriching social worlds being constructed by online groups’ (Kozinets, 2010).

Many people, I believe, have experienced the feeling aptly described by Rheingold (2000), ‘Finding the WELL was like discovering a cozy little world that had been flourishing without me…’, on first joining an online culture. Joining my MOOC slightly later, I did get a feel of this ‘becoming part of something already there’. The first post might be a bit intimidating, but it only takes the first reply to feel that your presence has registered.

I often wonder what it would feel like to meet members of an online community, or perhaps other people on this course, in person one day, somewhere: perhaps a pub, a village square or a university hall. Would it be the feeling of meeting old friends, or of experiencing new relationships? Mobile technologies and fast internet speeds have allowed us to be online all the time, making us a continuous presence in online communities, and we worry when the community is quiet or has not posted in some time.

The study of my MOOC online community over these past few weeks has been interesting, not only in the study of the MOOC itself, which is covered in the micro-ethnography, but also in the first days, when I was bouncing between three MOOCs until I found the one I wanted to study. Each of these three MOOCs had its own specific community, with members that made it unique.

References:

Kozinets, R. V. (2010) Chapter 2 ‘Understanding Culture Online’, Netnography: doing ethnographic research online. London: Sage. pp. 21-40.

Rheingold, H. (2000). The Virtual Community. Available at: http://www.caracci.net/dispense_enna/The%20Virtual%20Community%20by%20Howard%20Rheingold_%20Table%20of%20Contents.pdf. (Accessed: 1st March 2020).


The cherry on the cake

I found a series of Kozinets videos on YouTube which are short and very illuminating. The pity is that I found them slightly late in the day, when I could have made better use of them for my micro-ethnography.