The Truman Syndrome and Surveillance Capitalism

from Diigo https://ift.tt/2sHfHE8

I happened to watch The Truman Show yesterday and could not help but notice a number of similarities with issues arising from the literature around algorithmic cultures. The script is peppered with examples of nudging on a subconscious (and then conscious) level that helped keep Truman oblivious to the fiction going on around him while stopping him from travelling outside the dome.

I then spent some time on the internet, looking for literature pertaining to the nudging of Truman and found this instead which shows how the film was inspirational at a time when social media hadn’t yet been popular. There are several parallel themes between the film and concerns of social media nowadays, especially the fact that

Truman is watched and recorded everywhere he goes, and this is used to build up information about his patterns and behaviour, which are then used to provide him with a version of things he wants.

Trott (2018)

References:

Trott, T. (2018). How ‘The Truman Show’ Warned Us About Social Media (Before It Was Invented). Available at: https://medium.com/framerated/how-the-truman-show-warned-us-about-social-media-before-it-was-invented-f19819f1c87a. (Accessed: 22nd March 2020).

Liked on YouTube: Ben Williamson, University of Edinburgh

Ben Williamson questions whether the misuse of algorithms and big data collection can affect the way the public perceives education technology and hence resists it. He gives a number of very significant examples of how technologies and algorithms can go wrong. Some of these cases feature systems that had never been tested; others raise questions about privacy in data-collection practices.

He describes a number of studies looking into ways of collecting ‘intimate data about the bodies and the brains of students’, such as DNA IQ tests based on saliva samples and neuro-optimized education platforms that collect data ‘leaking’ from children’s brains through brain bands. These systems are able to make predictions about children’s intelligence and attainment, but how accurate are they, and are they ethical at all?


Ben Williamson, Senior Researcher, University of Edinburgh

Through the Twitter mob and critical edtech activists, are we experiencing an edtech push-back? Why?

www.uis.no
via YouTube https://www.youtube.com/watch?v=UVg56JmpGV8

Week 9 – Algorithms and the future

Images obtained and modified from https://pixabay.com

This post brings me closer to the end of the block on Algorithmic Cultures. Most of my time this week was dedicated to the artefact, which brings together the literature, observations and experimentation with algorithms. My interest over the past few days has been evaluating the socio-economic dimension of algorithms in popular platforms and education. Williamson (2017) describes the impetus of Silicon Valley enterprises and entrepreneurs and their interest in developing ‘incubators’ as prototypes for a new wave of education.

Williamson’s (2017) concept of sociotechnical imaginaries describes the way large corporations approach education…and ‘whose aspirations are therefore becoming part of how collectively and publicly shared visions of the future are accepted, implemented and taken up in daily life’. This raises the question of whether education within this vision can ever be free from the bias that exists when it is filtered through the strata of political, commercial and legislative machines. How unbiased can education be when the concept of learning and teaching becomes a set of data that can be studied, categorised and developed in a software lab?

Another case in point is the concept of nudging, also mentioned in a couple of my posts this week. While nudging can help students by providing them with timely feedback, support and content, one wonders whether this useful tool can be used to promote ideals that go beyond educational aims, acting as part of models ‘to which certain actors hope to make reality conform, serving as “distillations of practices” for the shaping of behaviours and technologies for visualizing and governing particular ways of life and forms of social order’ (Huxley, cited in Williamson, 2017).


References

Williamson, B. (2017). Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: the digital future of learning, policy, and practice. Sage.

Algorithmic Play Artefact

Please find my Algorithmic Play artefact HERE.

This has been a busy and very interesting week, and I hope my artefact has managed to incorporate a bit of everything: analysis, humour, reflection and some conclusions. I hope you enjoy it.

What is Nudging and how can Nudging help Students to Enroll in College

This is a short video on nudging that presupposes humans are irrational beings and that ‘most human decision-making is inherently irrational, habitual and predictable’ (Knox et al., 2020). It represents one side of the story. Below is another view of nudging, questioning the ethics behind it. Does the fact that people nowadays lack the time and objectivity to reason logically give industries carte blanche to choose for them? Is there really freedom of choice, or is the concept just an illusion?

Below is a SoundCloud link describing how nudging can help students make the right choices when enrolling in college. Knight (cited in Knox et al., 2020) also makes reference to this:

The UK higher education regulator, the Office for Students, has also adopted aspects of behavioural design to inform how it presents data to prospective university students – thus nudging them to make favourable choices about which degrees to study and which institutions to choose for application – while the Department for Education’s new ‘Education Lab’ positions behavioural science as a key source of scientific expertise in policy design (Knight 2019).

https://ift.tt/3aVwJ2w

A growing body of research has found that small-scale behavioral nudge campaigns can get students to complete complex tasks, such as refiling for federal financial aid to attend college. But researchers do not yet know enough about why certain nudges have worked in the past or whether they would still work on a larger scale.

On this episode of On the Evidence, we talk with Jenna Kramer, an associate policy researcher at RAND Corporation, and Kelly Ochs Rosinger, an assistant professor in the Department of Education Policy Studies at The Pennsylvania State University, about efforts to use large-scale nudges to increase college and financial aid applications, increase college enrollment, and bolster college students’ persistence in completing college.

This episode is part of a series produced by Mathematica in support of the Association for Public Policy Analysis and Management (APPAM) and its fall research conference.

Kramer and Rosinger participated in an APPAM panel about scaling nudge interventions in post-secondary education. A summary of the panel as well as links to papers discussed in the session is available here: https://ift.tt/2IHWash

To keep up with Kramer and Rosinger’s work, follow them on Twitter. Kramer is @j_w_kramer and Rosinger is @kelly_rosinger.

References:

Knox, J., Williamson, B. & Bayne, S. (2020). Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45:1, 31-45. DOI: 10.1080/17439884.2019.1623251


Amazon, Google, and the Ethics of Data: How Berkeley Students Can Compete with Tech Giants


from Diigo https://ift.tt/2IFcMR8

Companies like Netflix have systems in place to pass on user information and preferences to third parties while hiding or encrypting private data and its provenance. In essence, data is collected about users without revealing who they are. If it is not happening already, it will not take long for large corporations like Google and Amazon to sell such services to smaller companies that need consumer data to make decisions. Identities may stay hidden, but the data is still being sold.
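By way of illustration, a minimal sketch of such a scheme (my own example, not Netflix’s actual system; the salt and field names are invented) replaces a real user identifier with a keyed hash, so records can be shared and aggregated without exposing who the user is:

```python
import hashlib
import hmac

# Hypothetical secret held server-side by the platform; the privacy
# claim rests entirely on this never being shared with third parties.
SECRET_SALT = b"keep-this-server-side"

def pseudonymise(user_id: str) -> str:
    """Replace a real user ID with a stable token that cannot be reversed
    without the secret salt."""
    return hmac.new(SECRET_SALT, user_id.encode(), hashlib.sha256).hexdigest()[:16]

# The record shared with a third party keeps the behavioural data
# but hides the user's identity and its provenance.
record = {
    "user": pseudonymise("alice@example.com"),
    "watched": "Black Mirror",
    "minutes": 47,
}
print(record["user"])
```

Because the same user always maps to the same token, patterns can still be mined across records; the anonymity holds only as long as the salt stays private, which is partly why ‘the data is still being sold’ remains a fair concern.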

Our behaviours online can seemingly predict many things, not only what we buy and watch but who we vote for, what our income bracket is and those times in a year when we can afford a holiday. This may sound worrying but it is the price that we have to pay for having our lives tailor-made, at least online.


Is Education still in the hands of Educators?

Why is education so irresistible for Silicon Valley entrepreneurs?

from Diigo https://ift.tt/2xlzCLh

The field of education has always been fertile ground for Silicon Valley entrepreneurs in search of new opportunities to develop hardware and software advertised as solutions for bettering education under the ‘liberal politics of the technology’ (Williamson et al., 2018). The sense of political and economic freedom these companies enjoy, as described by Williamson et al. (2018), is one of the driving forces behind the implementation of new technological trends in various sections of society.

Ferenstein (2015) terms the new Silicon Valley liberals ‘civicrats,’ or ‘techDemocrats,’ whose goal is to make everyone innovative, healthy, civic and educated, and see government’s role as an investor in maximizing people’s contribution to the economy and society.
(cited in Williamson et al., 2018)
What Ferenstein describes is a mentality which believes that anything can be solved given the right tools and resources: that humans are fundamentally ‘faulty’ and that issues concerning human limitations can be solved through technology. It is perhaps the same impetus that drives pioneers of technology to redefine schooling in terms of what their technologies can ‘solve’ rather than how technologies can change pedagogies and syllabus design. They fly their banner of innovation on the idea of ‘charter’ schools, independent centres funded to experiment free from the academic legislation governing other types of schools, with the aim of showing how technologies are one big solution to most educational limitations (Williamson et al., 2018).

The idea of education as a capitalist goldmine has meant that any serious enterprise willing to invest in education has been required to break the learning process into quantifiable data (datafication), much like the live update feeds of Wall Street stock markets. Williamson (2017) describes how recent developments in technology have concerned themselves with the real-time collection of data on the way people learn in order to provide more personalised learning experiences. An example of this is the Silicon Schools Fund in America, which promotes the creation of

‘laboratories of innovation and proof points for personalized learning’ (Williamson et al., 2018):

Schools that give each student a highly-personalized education, by combining the best of traditional education with the transformative power of technology

 Students gaining more control over the path and pace of their learning, creating better schools and better outcomes

Software and online courses that provide engaging curriculum, combined with real-time student data, giving teachers the information they need to support each student

Teachers developing flexibility to do what they do best: inspire, facilitate conversations, and encourage critical thinking

Personalised learning also advocates against standardized methods of testing and learning, pushing instead towards tailor-made solutions dependent on the collection of data similar to that gathered when shopping online, using GPS, searching or booking flights. This personalization is possible because huge amounts of data can be collected and stored while algorithms constantly check for developments in patterns and behaviour. While admirable in many ways, one questions how this data might also be used in ways that help companies promote their goods.

References:
Williamson, B. (2017). Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: the digital future of learning, policy, and practice. Sage.

Williamson, B., Means, A. & Saltman, K. (2018). Startup Schools, Fast Policies, and Full-Stack Education Companies. Available at: https://www.researchgate.net/publication/327404706_Startup_Schools_Fast_Policies_and_Full-Stack_Education_Companies. (Accessed: 11th March 2020).

Images obtained and modified from https://pixabay.com

Algorithms and decisions

Are algorithms foolproof? Can they transcend human error and bias? The New Economy questions algorithms below.

from Diigo https://ift.tt/3aHETez

Since time immemorial, people have used religion, fortune-telling, talismans and blind faith when faced with difficult decisions. Similarly, mantras and finger-crossing have been practised to ease the mind into making difficult choices. Yet nowadays these routines seem antiquated and obsolete when most decisions are taken by personal devices and online services that appear tailor-made for us. Can we trust them?

Far from being neutral and all-knowing decision tools, complex algorithms are shaped by humans, who are, for all intents and purposes, imperfect. Algorithms function by drawing on past data while also influencing real-life decisions, which makes them prone, by their very nature, to repeating human mistakes and perpetuating them through feedback loops. Often, their implications can be unexpected and unintended.

and similarly:

First, algorithms act as part of a wider network of relations which mediate and refract their work, for example, poor input data will lead to weak outcomes (Goffey, 2008; Pasquale, 2014). Second, the performance of algorithms can have side effects and unintended consequences, and left unattended or unsupervised they can perform unanticipated acts (Steiner, 2012). Third, algorithms can have biases or make mistakes due to bugs or miscoding (Diakopoulos and Drucker as cited in Kitchin, 2017).
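The first two points are easy to demonstrate with a toy simulation (my own illustration, not an example from Kitchin): two districts share the same underlying incident rate, but an algorithm allocates attention in proportion to past observations, so whatever imbalance random chance produces is fed back into the next round rather than corrected:

```python
import random

random.seed(1)

# Two districts with the SAME underlying incident rate.
TRUE_RATE = 0.1          # probability of recording an incident per patrol visit
patrols = [50, 50]       # initial allocation of 100 units of attention
observed = [0, 0]        # cumulative incidents recorded (only where we look)

for _ in range(20):
    # Incidents are only observed where attention is directed.
    for d in (0, 1):
        observed[d] += sum(random.random() < TRUE_RATE for _ in range(patrols[d]))
    # The "algorithm": allocate next round in proportion to past data.
    total = sum(observed) or 1
    patrols = [round(100 * observed[d] / total) for d in (0, 1)]

# The 50/50 split is not self-correcting: whichever imbalance emerges
# early is reinforced, even though the true rates never differed.
print(patrols)
```

Since incidents are only recorded where attention is directed, the input data is shaped by the algorithm’s own previous output: exactly the feedback loop described above, with poor (skewed) input data producing weak outcomes.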


References:

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20:1, 14-29. DOI: 10.1080/1369118X.2016.1154087

Week 8 – Algorithms for everyone

Pavstud

 

It would have been very handy to design an algorithm to filter my most inspired posts on this blog from the more run-of-the-mill ones. On the other hand, this could prove futile in a blog aimed at documenting my train of thought throughout Education and Digital Cultures. Algorithms are as much about filtering out ‘undesired’ data as about whitelisting user choices.

Two main arguments ran in tandem through the posts this week: the informed public’s right to access the algorithms behind some of the most popular social media platforms, and the question of whether algorithms in education help or destroy notions of learning. Since algorithms are ‘adjudicating more and more consequential decisions in our lives’ (Diakopoulos, cited in Kitchin, 2017) and they are essentially capitalist in nature, one has to question who they are serving. Their chimaeric nature, made up of many networked ‘hands’ (Seaver, cited in Kitchin, 2017), is perhaps why studying their effect is not straightforward. Yet algorithms feed on human ingenuity, or on our lack of knowledge about them, and so need to be ethically managed.

Algorithms and AIEd also constitute a field of education that is often contested because of a return to a behaviourist approach to learning. Perhaps this is not the Pavlovian route, where learners are given instant gratification, but more of a consumerist perspective that monitors learning to collect data and tailor effective learning solutions through positive behaviour. Reinforcement learning and nudging are perhaps two of the most effective ways to shape learning. Not only are technologies shaping learning, but more often than not they are shaping humans to act like machines, stripping them of their autonomy by denying them access to what is being filtered out.

The ‘learner’ is now an irrational and emotional subject whose behaviours and actions are understood to be both machine-readable by learning algorithms and modifiable by digital hypernudge platforms. (Knox et al., 2020)

References:

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20:1, 14-29. DOI: 10.1080/1369118X.2016.1154087

Knox, J., Williamson, B. & Bayne, S. (2020). Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45:1, 31-45. DOI: 10.1080/17439884.2019.1623251

Do algorithms make us behave?


This video explains the concept of reinforcement learning in machines and gives some very good examples, showing how the algorithm behind reinforcement learning continuously feeds particular actions (responses) into the environment (in this case a game). When a positive result is achieved and a reward is given, the set of steps leading to that reward is saved. This continues in order to accumulate as many rewarded behaviours as possible. When the concept of reward is not straightforward, because the steps needed to reach it are more complex, reward shaping is possible: adding intermediate rewards for every scenario, although this is time-consuming. Training without rewards is very hard in reinforcement learning, a technique which closely echoes the behavioural learning patterns of early educational systems.
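The reward-saving loop described here can be sketched with tabular Q-learning, a standard reinforcement learning algorithm (the toy ‘corridor game’ below is my own illustration, not the example used in the video):

```python
import random

random.seed(0)

# A toy corridor game: the agent starts at position 0 and earns a reward
# only on reaching position 4 (the goal). Q-learning gradually learns to
# prefer the steps that led to reward, much as the video describes.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

def greedy(s):
    # Best-known action for state s, breaking ties at random.
    m = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == m])

for episode in range(300):
    s = 0
    for _ in range(200):                 # cap episode length
        # epsilon-greedy: mostly exploit, occasionally explore
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # sparse reward: goal only
        # Q-update: nudge the estimate towards reward + discounted future value.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2
        if s == GOAL:
            break

# After training, stepping right is valued higher in every non-goal state.
policy = [greedy(s) for s in range(GOAL)]
print(policy)
```

Because the only reward sits at the goal, the value of reaching it is gradually propagated back along the successful sequence of steps; this is the ‘saving’ of rewarded behaviour the video describes, and reward shaping would amount to hand-placing extra rewards along the corridor.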

The idea of algorithmic systems that pepper student learning with occasions for enjoying reward (as in the case of easy quizzes in MOOCs) may act as the carrot before the donkey, promoting the self-directed learner while providing an occasion for ‘datafication’ and the collection of data (Williamson, 2017). In this case, student behaviour becomes a very ‘valuable commodity’ (Knox et al., 2020) in providing the ‘action to the state’, as explained in the video, because it can help predict outcomes. Ironically, students are then providing their behaviour patterns for free to the providers of CMSs, VLEs and MOOCs.

not only is data positioned before the desires of the learner as the authoritative source for educational action, but the role of the learner itself is also recast as the product of consumerist analytic technologies. (Knox et al., 2020)

Educational systems that study and collect data in order to provide ‘the best possible learning experience’ and ‘limit’ the online learner to a simple reward system are an example of Biesta’s concept of ‘learnification’, whereby the system is merely interested in producing successful students, and in growing numbers of them (Biesta, 2012). This kind of ‘solutionism’ is a far cry from the learning process Biesta envisaged. The social dimension of education is absent for a start, and learning is reduced to playing a basic video game (like Pong), in which the reward rather than the playing experience is what ultimately counts, reducing the learner to a ‘product’ (Rushkoff, cited in Knox et al., 2020). This is a view deeply enshrined in radical behaviourism, built upon the binary determinism of computer systems able to break down responses to knowledge into a system of ‘ons’ and ‘offs’ that will eventually (even thanks to developments in quantum computing) challenge or even outperform the best human minds, as seen below.

References:

Biesta, G. (2012). Giving teaching back to education: Responding to the disappearance of the teacher. Phenomenology & Practice, 6(2), pp. 35-49.

Knox, J., Williamson, B. & Bayne, S. (2020). Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45:1, 31-45. DOI: 10.1080/17439884.2019.1623251

Williamson, B. (2017). Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: the digital future of learning, policy, and practice. Sage.