Week 9 Review

Week 9 was the second week of our algorithmic culture block and the week of our algorithmic play artefact, located here. In the artefact I discuss how Instagram makes suggestions for me based on my search history, my demographic, and my previous likes.

It was also the week we had to leave work and set up home offices; as IT support staff, that meant 12-hour days for my colleagues and me, getting through the calls for support from faculty and staff in all aspects of online teaching. As my classmate Sean has mentioned, though, the global situation is a game changer for our sector, leading to huge opportunities.

In a roundup of this week’s blog additions, I pulled in some posts related to my Instagram artefact. The post on ‘the AI generator of fictitious faces’ shows how easy it has become for AI to generate an influencer-type photograph that will get lots of Instagram ‘likes’, based on social norms for ‘what is considered attractive’; 10,000 fake faces were created by the algorithm over several hours.

In addition, the posts on Instagram’s use of AI here and here demonstrate that technology can no longer be seen as a passive instrument of humans; it is being used as an agent by humans and non-humans to “nudge towards the predetermined path as designed by the algorithm” (Knox et al. 2020).

Seeing as there is an “increased entanglement of agencies in the production of knowledge and culture” (Knox 2015), it is very hard to drill down and see who is managing, or benefiting from, the behavioural governance of these algorithms. This is particularly pertinent to education and the factors influencing young people. From a privacy or discrimination point of view, educational transcripts, unlike credit reports or juvenile court records, are currently considered fair game for gatekeepers like colleges and employers, as noted in this post, and information from these can be used to apply unfair bias against potential scholarship applicants or employees.

Algorithmic code is influenced by “political agendas, entrepreneurial ambitions, philanthropic goals and professional knowledge to create new ways of understanding, imagining and intervening” in our decision making (Williamson 2017).

Therefore we should maintain a critical perspective on algorithmic culture and continue to question the “objectivity and authority assumed of algorithms” (Knox 2015), and how they are shaping, categorising and privileging knowledge, places and people. I’m glad to see that there is a growing awareness of algorithms that appear to discriminate against people based on age, race, religion, gender, sexual orientation or citizenship status, as noted in the development of an ‘accountability bill’ in the US. Culpability can be difficult to prove in these cases because companies remain very secretive about how their models work, but it is a start.

References for the readings

Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29, DOI: 10.1080/1369118X.2016.1154087

Knox, J. (2015) Algorithmic cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory, M. A. Peters (ed.). DOI: 10.1007/978-981-287-532-7_124-1

Knox, J., Williamson, B. & Bayne, S. (2020) Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45, DOI: 10.1080/17439884.2019.1623251

Williamson, B. (2017) Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: the digital future of learning, policy, and practice

Instagram popularity model algorithm

The most interesting thing about the Instagram popularity algorithm, which pulls images from a dataset of 10,000 faces, is that none of the faces are real; they were created by computers. The generator looks for predefined signs of beauty (big eyes, rosy lips, mostly female faces) and assembles faces according to what it predicts humans will find attractive. The popularity algorithm then scores these faces according to how beautiful it considers them and how likely the images are to be ‘liked’ by other users.
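The trained model itself is not public, so as a rough illustration only, the scoring-and-ranking step described above can be sketched as a linear scorer over visual features; every feature name, weight and face value below is invented for the example.

```python
# Hypothetical sketch of how a popularity model might rank generated faces.
# The real model was trained on Instagram data and is not public; these
# features and weights are invented purely for illustration.

def popularity_score(face, weights):
    """Linear score: a weighted sum of visual features (all values toy)."""
    return sum(weights[f] * face.get(f, 0.0) for f in weights)

# Toy features loosely matching the 'predefined signs of beauty' in the text.
weights = {"eye_size": 0.5, "lip_colour": 0.3, "smile": 0.2}

faces = [
    {"id": "face_001", "eye_size": 0.9, "lip_colour": 0.8, "smile": 0.4},
    {"id": "face_002", "eye_size": 0.4, "lip_colour": 0.5, "smile": 0.9},
]

# Rank the generated faces by predicted popularity, highest first.
ranked = sorted(faces, key=lambda f: popularity_score(f, weights), reverse=True)
print([f["id"] for f in ranked])
```

The real system would learn its weights from millions of like counts rather than having them hand-set, but the ranking principle is the same.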

random face

The second most interesting thing about this algorithm is that the above image is the top-ranked face from the set of 10,000 machine-generated random faces. I was a bit surprised that this is deemed the ‘most influential’ photo, because it has noticeable imperfections: her teeth are not great, and the left side of her jaw appears swollen. The popularity score is designed to predict which photos will be the most liked on Instagram, and these may not necessarily be the most realistic photos or the most attractive faces. In addition, as the video below explains, the 32 highest-scoring photos alongside this image are mostly female and mostly Asian. The theory is that the researchers from the University of Hong Kong who trained the Instagram popularity model used a data-scraping method that collected too many photos of Asian women, and the model was therefore biased.

[embedded video]

How does Instagram use AI?

AI inside Instagram is used in three ways:

1) Newsfeed ranking (sorting your posts);

2) Targeted advertising, based on your demographic and what you ‘liked’ in the past; and

3) Deep learning, to manage community moderation.

Deep learning looks at the words in post comments, groups them together, and classifies text as good or bad. Bad text might be what it considers trolling, hate speech or words associated with cyberbullying. The algorithm is a closely guarded secret: we do not know how the model works, what types of comments are being targeted, or how many comments are being removed. Therefore we don’t know how biased it is, and how much this algorithm is “used to coerce, discipline, regulate and control: to guide and reshape how people, … interact with and pass through .. systems” (Kitchin 2017).
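Since the real moderation model is secret, here is only a minimal toy sketch of the idea of scoring words and hiding comments that fall below a threshold; the word list, scores and threshold are all invented, and a real deep-learning system would learn such associations rather than use a hand-made table.

```python
# Toy sketch of comment moderation by word scoring. Instagram's actual
# deep-learning system is not public; these scores are invented examples.

# Hypothetical "learned" scores: negative = likely abusive, positive = benign.
WORD_SCORES = {"idiot": -0.9, "hate": -0.8, "loser": -0.7,
               "great": 0.6, "love": 0.8, "nice": 0.5}

def comment_score(comment):
    """Average the scores of known words; unknown words contribute 0."""
    words = comment.lower().split()
    return sum(WORD_SCORES.get(w, 0.0) for w in words) / max(len(words), 1)

def should_hide(comment, threshold=-0.2):
    """Hide comments whose average word score falls below the threshold."""
    return comment_score(comment) < threshold

print(should_hide("you are an idiot"))    # flagged as bad text
print(should_hide("great photo love it")) # allowed
```

The opacity problem described above is visible even in this sketch: unless the score table and threshold are published, an outsider cannot tell which comments will be removed or why.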

Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29, DOI: 10.1080/1369118X.2016.1154087


New York City Moves to Create Accountability for Algorithms — ProPublica

This article discusses the development of an ‘accountability bill’ in the US, which aims to penalise companies whose algorithms appear to discriminate against people based on age, race, religion, gender, sexual orientation or citizenship status.

Whilst it would be a good law in theory, in reality it would be very hard to implement. Algorithmic systems are difficult to break down: they are designed to be increasingly complex, gather millions of data points, and are “woven together with hundreds of other algorithms to create algorithmic systems” (Williamson 2017). Added to that, companies are very secretive about how their models work and what parameters make up the design of their algorithms, so that “the rules generated by (algorithms) are compressed and hidden” (Williamson 2017). Deliberate discrimination would be very hard to prove in these cases, but hopefully the bill will encourage scientifically sound, appropriately validated data models, and eventually make them more transparent to the public.

Full article: https://www.propublica.org/article/new-york-city-moves-to-create-accountability-for-algorithms
via Pocket

Williamson, B. (2017) Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: the digital future of learning, policy, and practice

Instagram explains how it uses AI to choose content for your Explore tab

Not a huge amount of detail was handed over by Instagram in the writing of this article, but I did glean one thing:

Instagram identifies accounts that are similar to one another by adapting a common machine learning method known as “word embedding.” Word embedding systems study the order in which words appear in text to measure how related they are. Instagram uses a similar process to determine how related any two accounts are to one another; if an account is similar to one you’ve already liked, it will be recommended to you.
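The "how related any two accounts are" step can be pictured as comparing embedding vectors, for instance with cosine similarity. Instagram's actual embedding method and dimensions are not disclosed, so the account names and vectors below are purely illustrative stand-ins.

```python
# Sketch of account similarity via embeddings, by analogy with word
# embedding. These vectors are toy values standing in for the learned
# account representations that Instagram does not reveal.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = most similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

accounts = {
    "food_blog_a": [0.9, 0.1, 0.0],
    "food_blog_b": [0.8, 0.2, 0.1],
    "sports_page": [0.0, 0.1, 0.9],
}

# The user liked food_blog_a; recommend the most similar other account.
liked = accounts["food_blog_a"]
best = max((name for name in accounts if name != "food_blog_a"),
           key=lambda n: cosine_similarity(liked, accounts[n]))
print(best)  # food_blog_b
```

In this toy space the two food blogs point in nearly the same direction, so the food blog, not the sports page, gets recommended; the real system does the same comparison over vectors learned from user behaviour.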

There are no details on what signals are used to identify spam or misinformation; the algorithm’s details are not revealed. So whilst algorithms are playing an increasingly important role in producing content and mediating the relationships between us and other internet products, precisely how they do that is not made clear to us:

“Such conclusions have led a number of commentators to argue that we are now entering an era of widespread algorithmic governance, wherein algorithms will play an ever-increasing role in the exercise of power, a means through which to automate the disciplining and controlling of societies and to increase the efficiency of capital accumulation.” (Kitchin 2017)


Full article: https://www.theverge.com/2019/11/25/20977734/instagram-ai-algorithm-explore-tab-machine-learning-method
via Pocket

Reference
Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29, DOI: 10.1080/1369118X.2016.1154087