[ Week 10 ] Final Summary: how algorithms have influenced my lifestream blog

  • What kind of algorithms have been involved, and how do you perceive them to have operated? How were your activities influenced?
  • How do the identified algorithmic operations relate to particular readings from this block? Which ideas from the readings help you to explain what might have been happening?

I mainly used Google, YouTube, and Twitter to feed my lifestream, so searching, filtering, and recommending systems have all been involved. However, despite the hype around 'personalized' algorithms, recommendations based on big data did not work for me. For example, when I searched for and clicked on articles critical of TEL or learning analytics, I rarely found relevant materials. Instead, the advantages of edtech and commercial content were prioritized and placed at the top of the page. In this sense, digital data and algorithms may not be free from 'political agendas, commercial interests, entrepreneurial ambitions', as Williamson (2017) claims; they can lead us to click on unrelated materials that serve the platforms' (Google's, YouTube's) own interests. As Kitchin (2017, 18) puts it, 'algorithms are created for purposes that are often far from neutral: to create value and capital; to nudge behavior and structure preferences in a certain way'.

However, Twitter was a bit different, although its recommended feeds still did not work for me. There, my peers' provoking, reflexive ideas enabled me to rethink the themes and search for inspiring articles.

In general, though, because of these impersonal algorithms, I needed to dedicate much more time than I expected to finding materials that matched my interests. As a non-native English speaker, this was hard and time-consuming: I had to read each article in the sequence suggested by the algorithms to identify whether it matched my interests, which was not easy.

In this sense, the searching and ranking algorithms were virtually obstacles, rather than guides, for me.

  • What does this algorithmic activity potentially mean for notions of learning and teaching, authorship and agency?

I tried not to passively consume the information that algorithms suggested, but to interpret, evaluate, and defy it. Participating in (re)developing algorithms is also important for human agency over algorithms. Although I am not sure whether it helped, I clicked the 'like' button on YouTube videos and linked articles on Twitter and Facebook that took critical views of digital culture, and I hope it made a difference. And because of this agency, I think I share 'co-authorship' of my blog with the writers of the linked articles, as I tried to share, remix, and make meaningful sense of them.

In this regard, this lifestream blog does not fully represent my learning journey, because the time and effort behind my decisions about what to include and what to leave out remain unseen. So I would describe my blog as a photo album of photographs, each captured at one moment during a learning journey.

However, without the ideas from the reading lists, my peers' activities, and the teachers' provoking comments, I would not have this agency over algorithms. Each fact and idea about digital technology may be right, and born with a good purpose. But teaching and learning may be about which values we should stand for and how we participate in algorithmic culture as students and educators. Otherwise, agency over algorithms would become the exclusive property of central governments, top universities, and mega-corporations that can access and possess our data, which may exacerbate existing inequalities in our society.



Kitchin, R. (2017). Thinking critically about researching algorithms. Information, Communication & Society, 20(1), 14-29. DOI: 10.1080/1369118X.2016.1154087

Williamson, B. (2017). Introduction: Learning machines, digital data and the future of education (Chapter 1). In Big Data in Education: The digital future of learning, policy and practice. Sage.

Williamson, B. (2019). Automating mistrust. Code Acts in Education.

[ Week 9 Summary ] Human involvement is needed in algorithmic culture

This article reports on how we use learning analytics given the racial bias and limitations embedded in algorithms. Georgia State University has not only introduced learning analytics but also tripled its number of student advisers, so that humans, not just data and algorithms, can actively intervene and consult with students in person. However, these efforts are not enough, because an unseen but powerful force is behind and embedded in the algorithms.

Whether a certain search result stays or goes is decided by for-profit corporations that can collect and access huge volumes of data, and school administrations may use unproven algorithms to monitor students, which may have unequal, stigmatizing effects on them. In addition, if some students can use tech-powered personalized learning while students in low-income areas cannot, it may perpetuate injustice in (re)developing algorithms that ignore the variety of contexts, as well as in access to fancy devices and software. This problem of inequality also applies in the contrary case: if personalized learning is applied only to students from less privileged backgrounds, as a way to handle the lack of teachers, then they would lose the luxury of face-to-face interaction with teachers.

Maybe the most important feature of datafication and algorithmic culture is its 'unseen' character, which can also be used to hide systems of control, so it is not enough just to intervene in the interpreting process. We should participate in remaking algorithms to make them fairer and more meaningful, teach our students algorithmic literacy, and claim our rights to algorithms.

[ Week 8 Summary ] Impersonalised algorithms

It was not long ago that I recognized that almost wherever I went, data about me and the materials I clicked on was being collected, and algorithms were at work. The very first time I noticed algorithms working was when I was shopping online. I could see additional items recommended by big data: if I clicked on item A, the algorithms would recommend an item B that had been seen or bought by other people who had clicked on or purchased item A.
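The 'people who bought A also bought B' logic described above can be sketched as a simple co-occurrence count. This is only a toy illustration with made-up data (the item names and purchase histories are invented), not the actual system any shop uses, which would be far more elaborate:

```python
from collections import Counter, defaultdict

def build_cooccurrence(purchase_histories):
    """Count how often each pair of items appears in the same shopper's history."""
    co = defaultdict(Counter)
    for basket in purchase_histories:
        for a in basket:
            for b in basket:
                if a != b:
                    co[a][b] += 1
    return co

def recommend(co, item, k=3):
    """Items most often bought alongside `item`, most frequent first."""
    return [other for other, _ in co[item].most_common(k)]

# Toy data: each list is one (hypothetical) shopper's purchases.
histories = [
    ["A", "B"],
    ["A", "B", "C"],
    ["A", "C"],
    ["B", "C"],
]
co = build_cooccurrence(histories)
print(recommend(co, "A"))  # items other A-buyers also bought
```

Even this tiny sketch shows why the recommendations felt impersonal to me: nothing in it depends on who I am, only on what other people bought together.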

At first, I thought it was interesting and efficient, but in fact I didn't like the recommended items, and I was concerned about how long they had been collecting people's data without consent. Moreover, I felt uneasy that the website just led me to more clicks and purchases using the rhetoric of 'suggestions for me', when it actually uses far more of other people's data than mine. So I was suspicious about whether their recommendations were really for 'me'.

During this third block, I have seen much hype about big data, learning analytics, and algorithms, claiming that they enable us to provide students with tailored, personalized education. However, they can create unexpected harms: students are reduced to numbers and codes, and algorithms may be developed around the 'majority', meaning the 'normal' and 'standardized', so that small numbers of people are ignored or even flagged as cheaters. And as the speaker of this video claims, we don't even decide what gets in and what gets left out.

So, in this light, algorithms may be not efficient and personalized but rather impersonalised.



[ Week 7 Summary ] Transformation of MOOC

This week, I explored the transformation of MOOCs in terms of more interaction, drawing in excluded participants, localized contexts, and collaborative teaching.

Exploring MOOCs for four weeks (including the micro-ethnography) was a great chance to look into them: from their birth, when many people predicted an innovation or even a revolution that would subvert traditional education, to the criticism of their lack of interaction and their commercialization, and, at the same time, to transformative MOOCs that we might hesitate even to call MOOCs.

However, despite many criticisms and changes, I cannot say MOOCs are meaningless. At least in Korea, before MOOCs, digital education (or e-learning) was considered just a supplementary tool, for example for preparing for promising future jobs or for delivering popular private tutors' lectures for the college entrance exam. But the advent of MOOCs made the digital education discourse mainstream among citizens, in contrast to the past, when that discourse was largely the preserve of academics and policymakers.

So in this regard, even though MOOCs may have failed to meet the original expectations, I think they were a great starting point for expanding our ideas and rethinking what we have been missing in education.

[ Week 6 Summary ] Changing MOOCs

This week, I compared past predictions about MOOCs to their current state. I remembered the rosy prospects: anyone who wanted higher education could be taught by professors from world-class universities for free. Predictions also imagined democratized, global communities sharing ideas from all over the world.

However, criticisms such as low completion rates and lack of interaction have been persistent issues that course providers have been working to handle, like this. Despite these attempts, MOOCs have gradually been charging fees and introducing career development programs for corporations.

Then, are MOOCs really dead? I am not sure, but personally I wonder whether we are limiting the possibilities of what MOOCs can do. After reading about the struggles and changes of MOOCs, I have started to think that they are at least worth trying. Maybe they will work in other contexts, and we should see where they go.


[ Week 5 Summary ] Unintended harms caused by participatory culture

In the previous week, I explored the hidden commercial and political interests, and the deep-seated cultures in schools, that lie behind community culture.

However, putting aside those external power relations, participatory community culture may itself engender certain biases, such as this and this, and because of their unintentional and invisible character, we too may contribute to strengthening existing biases if there is no intervention at all.

This article gives us a reason (or excuse) why the completion rate of MOOCs is so low despite their ambitious goal of ensuring universal access to higher education for all. Incompleteness can be translated as exploration, but can we know when that exploration ends, or where it goes, amid the flood of MOOCs of various kinds (not only world-famous ones but also localized ones) and lectures on YouTube? As students, can we find out which one suits us and follow, on our own, all the processes designed by people with whom we have never interacted?

To handle this, we can do our best to enter more keywords to find the 'best one' to which we can dedicate ourselves as participants. However, this may lead us into very localized, fragmented online communities and cause other social problems, like this.

[ Week 4 Summary ] Hidden and embodied power dynamics of virtual community

This and this video claim that virtual community culture can make a change if we passionately look for the right community and participate, or even that 'one tweet can change the world'. The speakers could find where they wanted to belong, so that they recovered their self-esteem and secured a site of change through participation.

However, virtual communities do not always work like this. They may create a new culture that is different from the present one, but they can also be an extension of our offline world and its existing power dynamics. Monica Lewinsky did not complain about the vicious behavior of individuals or appeal for sympathy about what she had gone through, but revealed the hidden power: industries that pursue profit at any cost. It may all start from connection, as the term World Wide Web implies, but as time goes by, communication and participation are being replaced by commercialization.

Beyond commercial interests, political decisions, the existing cultural context, and organizational resistance against change make our society more complicated. Combined with existing power relations, these may cause unexpected results.

However, this does not mean that the digital community cannot change anything. 'One tweet' may not change the world, but I think it can be a starting point if we recognise the implicit values and relationships and continuously ponder our next move.


[ Week 3 Summary ] Questions about what we should consider when making digital technology policies

This week, I have thought about the role of policymakers regarding digital technology. As we can see in the UK policy paper and this article, in reality technology has served as a tool for certain political intentions, and it has not necessarily proved its efficiency, as the OECD report and this news article claim.

Putting it crudely, I would say there are three types of policies in practice: fostering, supplementing a deficiency, and monitoring. And whether a certain policy survives may depend on whether it manages to meet the goal described in its proposal at the budget-securing stage.

When I was working in the special education department of the Korean government, technology-related policies focused on 'supplementing the deficiency', such as providing assistive technologies for students with disabilities, like this, or providing online courses for students with health impairments, which is similar to the Chinese government's response to the recent coronavirus crisis. These kinds of policies easily survive and are supported by participants in schools, because in this context providing help is itself the output: it contributes to equality of access to education.

However, in relation to 'fostering', it is not easy to decide what the goal, input, and output are. For example, if we decide to establish partnerships between digital experts and schools, as the OECD raises the issue, then what are the goal and the outcome? Or should governments focus only on monitoring the misuse of technology in education, where defining the goal, input, and outcome is rather more evident?

[ Week 2 Summary ] The way we perceive technology in education

This week, I have explored perceptions of the human/machine in educational contexts. First, although it is not specific to education, I wanted to know how and why we perceive that some technologies are adopted and some are not, because I thought certain technologies are adopted due to their cultural and political context. So I simply searched on Google, and the results looked like this. These titles are examples of essentialism, which asserts that we 'should' adopt technology, using words such as barrier, slow, haven't yet, and lag in tech adoption; one even says the technology 'adoption challenge is a human problem, not technical'.

In educational contexts, this video, this news article, and this commercial video reflect this essentialist view in its utopian version of technologies. Technology cannot solve the educational problems that we have faced and that teachers have not been able to fix. So whenever I hear and see these claims, one question comes up. And as Neil Selwyn claims, teachers' roles seem to be reduced to functional dimensions, such as the delivery of knowledge as labor. This reminded me of the strange feeling I had when I traveled to Taiwan and saw this. These guards look like robots although they are human: they are required to do the same work at a given time and in a given place, without context, which is totally different from dynamic classrooms.

So rather than considering technology as 'other-worldly', which leads to essentialism and instrumentalism as Knox (2015) mentions, I thought this article throws up serious questions for us.


[ Week 1 Summary ] Questions about Relationships with Technology in Everyday lives


This video and that video made me reconsider what I had taken for granted until then, and I started to look around at the machines in my daily life to rethink my relationships with them.

Both come from technology, but a cloned dog is actually a 'life form', while a robot dog is not.

But what about a cloned dog? It is seemingly a life form, and you could never know it had been generated in a lab unless the owner told you.

So I wonder which one people would consider more 'machine-like' when they experience both in reality, and whether their perception might change once they found out the dog was cloned.

I think a robot cleaner is the reverse example of a cloned dog in terms of the perception of machine/human. You may feel a robot cleaner is 'human-like' although it is a machine, but you may feel a cloned dog is 'machine-like' once you notice it is cloned, although it is a life form.


Although I had just finished IDEL, I still thought human and machine were totally different and separate from each other. Now, even the effort to define 'what it is to be human' seems to me to strengthen the human/machine dichotomy. I am still struggling to understand the 'instrumentalism' that pervades our society, but I would say that overcoming this binary concept of human/machine in our everyday lives is the starting point.