Week 10 – Final post: Summary of my Lifestream

Going back over my lifestream this week, it became apparent how ontogenetic this exercise has been: my lifestream has gently morphed over these months to suit both the requirements of each block and my own needs. From the short, frequent posts that characterised the beginning, my posts grew longer, contained more links and, hopefully, became more varied.

Indulging in some human algorithmic processing as I went through the posts, I found that most of them were triggered by references to literature and websites, while others contained links to videos. I did not have as many Twitter posts as I wanted (probably because of my shyness with social media), but I did find some podcasts here and there.

I embrace Kitchin’s (2017) view that algorithms ‘can be conceived in a number of ways’. I believe that there have been several algorithms at play in Education and Digital Cultures. The most obvious has been the technical algorithm, controlling the automation of posts using IFTTT. This allowed me to customise the algorithmic process to my needs, something that would perhaps have been too ‘extensive and complex a task to be tackled’ manually (Kitchin, 2017). The algorithm behind IFTTT gave me a glimpse into the way triggers act on personal data and behaviour in order to ‘process instructions and produce a result’ (Kitchin, 2017) that is tailor-made, much as modern technology envisages education to be.
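Conceptually, the applets I used follow the classic trigger-action (‘if this, then that’) pattern. Below is a minimal sketch of that pattern, assuming nothing about IFTTT’s real API: the event fields, channel names and the post_to_lifestream action are all invented stand-ins.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    channel: str   # hypothetical source, e.g. "youtube" or "diigo"
    kind: str      # hypothetical event type, e.g. "liked_video"
    payload: dict  # whatever data the source attaches

@dataclass
class Applet:
    trigger: Callable[[Event], bool]   # "if this..."
    action: Callable[[Event], None]    # "...then that"

def post_to_lifestream(event: Event) -> None:
    # Stand-in for the action that published posts to my blog.
    print(f"New lifestream post from {event.channel}: {event.payload['title']}")

applets = [
    Applet(trigger=lambda e: e.channel == "youtube" and e.kind == "liked_video",
           action=post_to_lifestream),
    Applet(trigger=lambda e: e.channel == "diigo" and e.kind == "new_bookmark",
           action=post_to_lifestream),
]

# Every incoming event is checked against every applet's trigger.
incoming = Event("youtube", "liked_video", {"title": "Ben Williamson, University of Edinburgh"})
for applet in applets:
    if applet.trigger(incoming):
        applet.action(incoming)
```

Seen this way, the ‘algorithm’ is disarmingly simple; the power lies in which triggers and actions one chooses to wire together.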

Was I happy to allow an app to trawl my accounts, bring them to the public and ‘shape part of my everyday practices and tasks’ (Kitchin, 2017)? Perhaps not completely, and I did create separate accounts on occasion, but the process of ‘domestication’ (Kitchin, 2017) eventually took place. Were my IFTTT algorithms impartial and objective? No: the choice of applets was mine, as was the choice of which feeds to forward. Yet the lifestream was never meant to be impartial; on the contrary, it helped represent my train of thought over these weeks. Was my algorithm reliable? IFTTT did glitch or shut down on a few occasions, and at one point I was forwarded a YouTube video I had not liked. What would have happened if an algorithm embedded in an educational platform had failed in the same way? What would the outcome have been?

There was also a cultural and relational dimension to algorithms, defined by the connections between members of Education and Digital Cultures (to whom I am indebted) and the experiences of other communities on the MOOC. Here, algorithms acted as part of ‘a wider network of relations which mediate and refract their work’ (Kitchin, 2017). Strong ties encouraged strong community feelings, triggered by posts and comments sent to members which initiated discussions or prompted the sharing of experiences.

The artefacts, another form of algorithm, condensed the knowledge from every block into visual representations, incorporating accumulated data from literature, browsing, forums, suggestions and the course experience itself, while encouraging the discovery of new media to represent them.

How has the lifestream experience helped me understand the implications algorithms have for education? Like the algorithms involved in generating content for my lifestream according to my personal choices, algorithms in education make the collection of large amounts of data possible. Data is collected (datafication) to maximise the learning experience (digitization), removing what is superfluous and presenting the rest in the best way possible for learners, a process human agency alone would find difficult to replicate, as I observed through the lifestream exercise.

Am I thoroughly convinced by the processes of ‘datafication’ of student information and ‘digitization’ of curriculum content (Knox et al., 2020)? Again, not entirely. While datafication is a precious resource in modern educational systems, the adage that the end justifies the means keeps coming to mind.

What happens to the learner when his or her actions are reduced to a collection of numbers (accrued from ‘pervasive data mining and data analytic packages’ (Williamson, 2017)) that can be broken down, interpreted, sectioned and grouped into blocks, similar to the way entertainment media or products are categorised on online shopping and entertainment platforms? Is there a risk that learners become a ‘product of consumerist analytic technologies’ (Knox et al., 2020) and of black-boxed, trade-secret algorithms, whereby the value of a person lies in the data obtained by tracking his or her behaviour and success?
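To make the worry concrete, here is a deliberately crude sketch of how a handful of tracked numbers can ‘section and group’ learners into blocks. Every metric, threshold and label is invented for illustration; real learning-analytics packages are far more elaborate, and far more opaque.

```python
# Invented learner records: each person reduced to three tracked numbers.
learners = {
    "learner_01": {"logins_per_week": 9, "avg_quiz_score": 0.84, "videos_watched": 12},
    "learner_02": {"logins_per_week": 2, "avg_quiz_score": 0.41, "videos_watched": 3},
    "learner_03": {"logins_per_week": 5, "avg_quiz_score": 0.67, "videos_watched": 7},
}

def segment(metrics: dict) -> str:
    """Reduce a learner to a marketing-style label from their tracked numbers."""
    engagement = metrics["logins_per_week"] + metrics["videos_watched"]
    if engagement > 15 and metrics["avg_quiz_score"] > 0.7:
        return "high performer"
    if engagement < 6:
        return "at risk / disengaged"
    return "average"

for name, metrics in learners.items():
    print(name, "->", segment(metrics))
```

The unsettling part is not the arithmetic, which is trivial, but what is lost: everything about the learner that was never tracked in the first place.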

My lifestream algorithm has been an occasion for me to be both author and agent of the data selected to represent my activity during the course. This is not the case with all educational platforms. It is, therefore, necessary that the collection of student data, even when carried out by large corporations, is made as transparent as possible (by questioning and studying it), and that the digitization of the learning experience keeps both learners and teachers at its centre, safeguarding their autonomy so that they can still contribute to the output of the algorithm.

References:

Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29. DOI: 10.1080/1369118X.2016.1154087

Knox, J., Williamson, B. & Bayne, S. (2020) Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45. DOI: 10.1080/17439884.2019.1623251

Williamson, B. (2017) Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: the digital future of learning, policy, and practice. Sage.

Featured image created and modified from images obtained on https://pixabay.com.

The Truman Syndrome and Surveillance Capitalism

from Diigo https://ift.tt/2sHfHE8
via IFTTT

I happened to watch The Truman Show yesterday and could not help but notice a number of similarities with issues arising from the literature around algorithmic cultures. The script is peppered with examples of nudging on a subconscious (and then conscious) level that helped keep Truman oblivious to the fiction going on around him while stopping him from travelling outside the dome.

I then spent some time on the internet looking for literature pertaining to the nudging of Truman, and found this instead, which shows how prescient the film was at a time when social media had not yet become popular. There are several themes the film shares with present-day concerns about social media, especially the fact that

Truman is watched and recorded everywhere he goes, and this is used to build up information about his patterns and behaviour, which are then used to provide him with a version of things he wants.

Trott, T. (2018)

References:

Trott, T. (2018) How ‘The Truman Show’ Warned Us About Social Media (Before It Was Invented). Available at: https://medium.com/framerated/how-the-truman-show-warned-us-about-social-media-before-it-was-invented-f19819f1c87a (Accessed: 22nd March 2020).

Liked on YouTube: Ben Williamson, University of Edinburgh

Ben Williamson questions whether the misuse of algorithms and big data collection can affect the way the public perceives education technology, leading to resistance. He gives a number of very significant examples of how technologies and algorithms can go wrong: some of these cases feature systems that had never been tested before, while others raise privacy concerns around data collection practices.

He describes a number of studies looking into ways of collecting ‘intimate data about the bodies and the brains of students’, such as DNA IQ tests based on saliva samples and neuro-optimised education platforms that collect data ‘leaking’ from children’s brains through brain bands. These systems make predictions about children’s intelligence and attainment, but how accurate are they, and are they even ethical at all?


Ben Williamson, Senior Researcher, University of Edinburgh

Through the Twitter Mob and the Critical EdTech Activists, do we experience an Edtech push-back? Why?

www.uis.no
via YouTube https://www.youtube.com/watch?v=UVg56JmpGV8

Week 9 – Algorithms and the future

Images obtained and modified from https://pixabay.com

This post brings me closer to the end of the block on Algorithmic Cultures. Most of my time this week was dedicated to the artefact, which brings together the literature, observations and experimentation with algorithms. My interest over the past few days has been evaluating the socio-economic dimension of algorithms in popular platforms and in education. Williamson (2017) describes the impetus of Silicon Valley enterprises and entrepreneurs and their interest in developing ‘incubators’ as prototypes for a new wave of education.

Williamson’s (2017) concept of sociotechnical imaginaries describes the way large corporations approach education, ‘whose aspirations are therefore becoming part of how collectively and publicly shared visions of the future are accepted, implemented and taken up in daily life’. This raises the question of whether education within this vision can ever be free from the bias that exists when it is filtered through the strata of political, commercial and legislative machines. How unbiased can education be when the concept of learning and teaching becomes a set of data that can be studied, categorised and developed in a software lab?

Another case in point is the concept of nudging, also mentioned in a couple of my posts this week. While nudging can help students by providing them with timely feedback, support and content, one wonders whether this useful tool could also be used to promote ideals beyond educational aims, acting as part of models ‘to which certain actors hope to make reality conform, serving as “distillations of practices” for the shaping of behaviours and technologies for visualizing and governing particular ways of life and forms of social order’ (Huxley, cited in Williamson, 2017).
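As a thought experiment, a basic nudge of the kind discussed here could be little more than a set of rules over tracked student data. The sketch below is entirely hypothetical: the field names, thresholds and messages are invented, and real ‘hypernudge’ platforms are considerably more sophisticated.

```python
from datetime import date, timedelta
from typing import Dict, List

# Invented threshold: how long an absence triggers a message.
INACTIVITY_THRESHOLD = timedelta(days=5)

def nudges_for(student: Dict, today: date) -> List[str]:
    """Return the nudge messages this student would receive today."""
    messages = []
    if today - student["last_login"] > INACTIVITY_THRESHOLD:
        messages.append("We miss you! Pick up where you left off.")
    if not student["assignment_submitted"]:
        # A social-norm nudge: comparison with peers.
        messages.append("Your assignment is still open. Most of your peers have already submitted.")
    return messages

students = [
    {"name": "A", "last_login": date(2020, 3, 1), "assignment_submitted": False},
    {"name": "B", "last_login": date(2020, 3, 12), "assignment_submitted": True},
]

for s in students:
    print(s["name"], nudges_for(s, today=date(2020, 3, 14)))
```

Even at this toy scale, the designer’s values are baked into every threshold and every message; scale it up and wire it to behavioural data, and the question of whose aims the nudge serves becomes unavoidable.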


References:

Williamson, B. (2017) Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: the digital future of learning, policy, and practice. Sage.

Algorithmic Play Artefact

Please find my Algorithmic Play artefact HERE.

This has been a busy and very interesting week, and I hope my artefact has managed to incorporate a bit of everything: analysis, humour, reflection and some conclusions. I hope you enjoy it.


What is Nudging and how can Nudging help Students to Enroll in College

This is a short video on nudging that presupposes that humans are irrational beings and that ‘most human decision-making is inherently irrational, habitual and predictable’ (Knox et al., 2020). It represents one side of the story. Below is another view, questioning the ethics behind nudging. Does the fact that people nowadays lack the time and objectivity to reason logically give industries carte blanche to choose for them? Is there really freedom of choice, or is the concept just an illusion?

Below is a SoundCloud link describing how nudging can be useful in helping students make the right choices when enrolling in college. Knight (cited in Knox et al., 2020) also makes reference to this:

The UK higher education regulator, the Office for Students, has also adopted aspects of behavioural design to inform how it presents data to prospective university students – thus nudging them to make favourable choices about which degrees to study and which institutions to choose for application – while the Department for Education’s new ‘Education Lab’ positions behavioural science as a key source of scientific expertise in policy design (Knight 2019).

https://ift.tt/3aVwJ2w

A growing body of research has found that small-scale behavioral nudge campaigns can get students to complete complex tasks, such as refiling for federal financial aid to attend college. But researchers don’t yet know enough about why certain nudges have worked in the past or whether they would still work on a larger scale.

On this episode of On the Evidence, we talk with Jenna Kramer, an associate policy researcher at RAND Corporation, and Kelly Ochs Rosinger, an assistant professor in the Department of Education Policy Studies at The Pennsylvania State University, about efforts to use large-scale nudges to increase college and financial aid applications, increase college enrollment, and bolster college students’ persistence in completing college.

This episode is part of a series produced by Mathematica in support of the Association for Public Policy Analysis and Management (APPAM) and its fall research conference.

Kramer and Rosinger participated in an APPAM panel about scaling nudge interventions in post-secondary education. A summary of the panel as well as links to papers discussed in the session is available here: https://ift.tt/2IHWash

To keep up with Kramer and Rosinger’s work, follow them on Twitter. Kramer is @j_w_kramer and Rosinger is @kelly_rosinger.
via IFTTT

References:

Knox, J., Williamson, B. & Bayne, S. (2020) Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45. DOI: 10.1080/17439884.2019.1623251


Amazon, Google, and the Ethics of Data: How Berkeley Students Can Compete with Tech Giants


from Diigo https://ift.tt/2IFcMR8
via IFTTT

Companies like Netflix have systems in place to pass user information and preferences on to third parties in a way that hides or encrypts private data and its provenance: data about users is collected and shared without revealing who they are. If they are not already doing so, it will not take long for large corporations like Google and Amazon to sell such services to smaller companies that need consumer data to make decisions. It is true that identities will not be revealed, but the data is still being sold.
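One common technique for sharing preferences without revealing identities is pseudonymisation: replacing the real user ID with an irreversible token before the data leaves the company. The sketch below is a generic illustration of that idea, not Netflix’s actual system; the salt, schema and records are all invented.

```python
import hashlib

# Invented salt: in a real system this secret never leaves the data holder.
SECRET_SALT = b"keep-this-private"

def pseudonymise(user_id: str) -> str:
    """Replace a real user ID with a stable, irreversible token."""
    return hashlib.sha256(SECRET_SALT + user_id.encode()).hexdigest()[:16]

# Invented viewing records.
records = [
    {"user_id": "alice@example.com", "watched": "documentary", "rating": 5},
    {"user_id": "bob@example.com", "watched": "thriller", "rating": 3},
]

# What a third party would receive: preferences intact, identities hidden.
shared = [{**record, "user_id": pseudonymise(record["user_id"])} for record in records]
print(shared)
```

Note that pseudonymised records can still sometimes be re-identified by linking distinctive patterns across datasets, which is partly why the sale of such data remains worrying even when no names change hands.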

Our behaviours online can seemingly predict many things: not only what we buy and watch, but who we vote for, what our income bracket is and the times of year when we can afford a holiday. This may sound worrying, but it is the price we pay for having our lives tailor-made, at least online.


Is Education still in the hands of Educators?

Why is education so irresistible for Silicon Valley entrepreneurs?

from Diigo https://ift.tt/2xlzCLh
via IFTTT

The field of education has always been fertile ground for Silicon Valley entrepreneurs in search of new opportunities to develop hardware and software advertised as solutions for bettering education under the ‘liberal politics of the technology’ (Williamson et al., 2018). The sense of freedom these companies enjoy in political and economic terms, as described by Williamson et al. (2018), is one of the driving forces behind the implementation of new technological trends in various sections of society.

Ferenstein (2015) terms the new Silicon Valley liberals ‘civicrats,’ or ‘techDemocrats,’ whose goal is to make everyone innovative, healthy, civic and educated, and see government’s role as an investor in maximizing people’s contribution to the economy and society.
(cited in Williamson et al., 2018)
What Ferenstein describes is a mentality which believes that anything can be solved given the right tools and resources: that humans are fundamentally ‘faulty’ and that issues concerning human limitations can be solved through technology. It is perhaps the same impetus that drives pioneers of technology to redefine schooling in terms of what their technologies can ‘solve’ rather than how technologies can change pedagogies and the design of syllabi. They fly their banner of innovation from ‘charter’ schools, independent centres funded to experiment free from the academic legislation governing other types of schools, with the aim of showing how technologies are one big solution to most educational limitations (Williamson et al., 2018).

The idea of education as a capitalist goldmine has meant that any serious enterprise willing to invest in it has been required to break the learning process into quantifiable data (datafication), much like the live update feeds of a Wall Street stock ticker. Williamson (2017) describes how recent developments in technology have concerned themselves with the real-time collection of data about the way people learn, in order to provide more personalised learning experiences. An example of this is the Silicon Schools Fund in America, which promotes the creation of

‘laboratories of innovation and proof points for personalized learning’ (Williamson et al., 2018):

Schools that give each student a highly-personalized education, by combining the best of traditional education with the transformative power of technology

Students gaining more control over the path and pace of their learning, creating better schools and better outcomes

Software and online courses that provide engaging curriculum, combined with real-time student data, giving teachers the information they need to support each student

Teachers developing flexibility to do what they do best: inspire, facilitate conversations, and encourage critical thinking

Personalised learning also advocates against standardised methods of testing and learning, pushing instead towards tailor-made solutions dependent on the collection of data similar to that gathered when we shop online, use GPS or search for and book flights. This personalisation is possible because huge amounts of data can be collected and stored while algorithms constantly check for developments in patterns and behaviour. While admirable in many ways, one questions how this data might also be used in ways that benefit companies promoting their goods.
References:

Williamson, B. (2017) Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: the digital future of learning, policy, and practice. Sage.

Williamson, B., Means, A. & Saltman, K. (2018) Startup Schools, Fast Policies, and Full-Stack Education Companies. Available at: https://www.researchgate.net/publication/327404706_Startup_Schools_Fast_Policies_and_Full-Stack_Education_Companies (Accessed: 11th March 2020).

Images obtained and modified from https://pixabay.com

Algorithms and decisions

Are algorithms foolproof? Can they transcend human error and bias? The New Economy questions algorithms below.

from Diigo https://ift.tt/3aHETez
via IFTTT

Since time immemorial, man has used religion, fortune-telling, talismans and blind faith when faced with difficult decisions; mantras and finger-crossing have likewise been practised to coax the mind into making hard choices. Yet nowadays these routines seem antiquated when most decisions are taken for us by personal devices and online services which appear tailor-made to our needs. Can we trust them?

Far from being neutral and all-knowing decision tools, complex algorithms are shaped by humans, who are, for all intents and purposes, imperfect. Algorithms function by drawing on past data while also influencing real-life decisions, which makes them prone, by their very nature, to repeating human mistakes and perpetuating them through feedback loops. Often, their implications can be unexpected and unintended.

and similarly…

First, algorithms act as part of a wider network of relations which mediate and refract their work, for example, poor input data will lead to weak outcomes (Goffey, 2008; Pasquale, 2014). Second, the performance of algorithms can have side effects and unintended consequences, and left unattended or unsupervised they can perform unanticipated acts (Steiner, 2012). Third, algorithms can have biases or make mistakes due to bugs or miscoding (Diakopoulos and Drucker as cited in Kitchin, 2017).
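A toy simulation can make the feedback-loop point tangible. In the sketch below, two groups behave identically, but the algorithm allocates attention in proportion to past (biased) counts, and what gets observed generates the next round of data. All numbers are invented; this is a minimal illustration of the dynamic, not a model of any real system.

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

# Past "incident" counts for two groups; group A starts over-reported
# (the historical bias baked into the input data).
history = {"A": 15, "B": 10}

for step in range(50):
    total = sum(history.values())
    # The "algorithm": allocate attention in proportion to past counts.
    attention = {group: count / total for group, count in history.items()}
    # Both groups behave identically, but what gets observed tracks attention.
    for group in history:
        if random.random() < attention[group]:
            history[group] += 1

print(history)  # the initial gap is never corrected; the loop perpetuates it
```

Nothing in the loop ever questions the starting numbers, which is exactly how poor input data gets laundered into apparently objective output.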


References:

Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29. DOI: 10.1080/1369118X.2016.1154087

Week 8 – Algorithms for everyone


It would have been very handy to design an algorithm to filter my most inspired posts on this blog from the more run-of-the-mill ones. On the other hand, this could prove futile in a blog aimed at documenting my train of thought throughout Education and Digital Cultures. Algorithms are as much about filtering out ‘undesired’ data as they are about whitelisting user choices.

There were two main arguments running in tandem through the posts this week: the right of an informed public to have access to the algorithms behind some of the most popular social media platforms, and the question of how algorithms in education can either help or destroy notions of learning. Since algorithms are ‘adjudicating more and more consequential decisions in our lives’ (Diakopoulos, cited in Kitchin, 2017) and are essentially capitalist in nature, one has to question whom they are serving. Their chimaeric nature, made up of many networked ‘hands’ (Seaver, cited in Kitchin, 2017), is perhaps why studying their effect is not straightforward. Yet algorithms feed on human ingenuity, and on our lack of knowledge about them, and so need to be managed ethically.

Algorithms and AIEd also constitute a field of education that is often contested because of a return to a behaviourist approach to learning. Perhaps this might not be the Pavlovian route, where learners are given instant gratification, but more of a consumerist perspective that monitors learning in order to collect data and tailor effective learning solutions through positive behaviour. Reinforcement learning and nudging are perhaps two of the most effective ways to shape learning. Not only are technologies shaping learning but, more often than not, they are shaping humans to act like machines, stripping them of their autonomy by denying them access to what is being filtered out.

The ‘learner’ is now an irrational and emotional subject whose behaviours and actions are understood to be both machine-readable by learning algorithms and modifiable by digital hypernudge platforms.  (Knox et al, 2020)
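To ground the reinforcement point, below is a minimal epsilon-greedy bandit, a standard reinforcement-learning sketch rather than any named platform’s method. The three nudges and their response rates are entirely invented.

```python
import random

random.seed(0)  # fixed seed for a reproducible toy run

# Invented nudges and response rates; no real platform data involved.
nudges = ["reminder email", "progress badge", "peer comparison"]
true_response_rate = {"reminder email": 0.10, "progress badge": 0.25, "peer comparison": 0.15}

counts = {n: 0 for n in nudges}
values = {n: 0.0 for n in nudges}  # running estimate of each nudge's payoff
EPSILON = 0.1                      # fraction of the time we explore at random

for _ in range(5000):
    if random.random() < EPSILON:
        choice = random.choice(nudges)                 # explore
    else:
        choice = max(nudges, key=lambda n: values[n])  # exploit the best so far
    # "Reward" is simply whether the simulated student responded.
    reward = 1 if random.random() < true_response_rate[choice] else 0
    counts[choice] += 1
    values[choice] += (reward - values[choice]) / counts[choice]  # incremental mean

print({n: round(v, 3) for n, v in values.items()})
```

The loop converges on whichever nudge ‘works’ as measured, regardless of whether it is pedagogically sound, which is precisely the autonomy worry raised above.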

References:

Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29. DOI: 10.1080/1369118X.2016.1154087

Knox, J., Williamson, B. & Bayne, S. (2020) Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45. DOI: 10.1080/17439884.2019.1623251