Older parents and technology

The global situation means long but rewarding working days for those of us supporting teachers as they create online lectures and assessment plans. Glad to help. Take a moment to cheer on our isolated older population as they navigate the internet! #mscedc
https://t.co/SRSwwM9elr

Instagram popularity model algorithm

The most interesting thing about the Instagram popularity algorithm, which pulls images from a dataset of 10,000 faces, is that none of the faces are real: they were generated by a computer. The algorithm scores these faces according to how beautiful it considers them and how likely the images are to be 'liked' by other users. The generator looks for predefined signs of beauty (big eyes, rosy lips, mostly female) and assembles faces according to what it predicts humans will find attractive.
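
The actual model and its features are not public, so the following is only a toy sketch of the idea described above: each generated face gets a score from a weighted combination of invented 'beauty' features, and the faces are then ranked by predicted likeability.

```python
# Hypothetical sketch of a popularity model scoring generated faces.
# Feature names and weights are invented for illustration; the real
# model's inputs are not public.

def popularity_score(face):
    """Score a face (dict of 0-1 features) by a made-up weighted sum."""
    weights = {"eye_size": 0.4, "lip_colour": 0.35, "symmetry": 0.25}
    return sum(weights[k] * face.get(k, 0.0) for k in weights)

# Rank a small set of faces from highest to lowest predicted score,
# as the researchers did for their 10,000 images.
faces = [
    {"id": 1, "eye_size": 0.9, "lip_colour": 0.8, "symmetry": 0.6},
    {"id": 2, "eye_size": 0.4, "lip_colour": 0.5, "symmetry": 0.9},
]
ranked = sorted(faces, key=popularity_score, reverse=True)
print([f["id"] for f in ranked])  # → [1, 2]: highest predicted 'likes' first
```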

random face

The second most interesting thing about this algorithm is that the image above is the top-ranked face from the set of 10,000 machine-generated random faces. I was a bit surprised that this is deemed the 'most influential' photo, because it has noticeable imperfections: her teeth are not great, and the left side of her jaw looks swollen. The popularity score is designed to predict which photos will be most liked on Instagram, and these are not necessarily the most realistic photos or the most attractive faces. In addition, as the video below explains, the 32 highest-scoring photos alongside this image are mostly female and mostly Asian. The theory is that the researchers from the University of Hong Kong who trained the Instagram popularity model used a data-scraping method that collected too many Asian and too many female photos, and the model is therefore biased.
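
This kind of sampling bias is easy to see in principle. Here is a toy check, with entirely invented labels and proportions (the paper's actual metadata is not public): if one group dominates the scraped training set, the model will learn that group's features as the norm.

```python
# Toy check for sampling bias in a scraped training set.
# The labels and the 70/20/10 split are assumptions, for illustration only.
from collections import Counter

scraped = ["female_asian"] * 70 + ["female_other"] * 20 + ["male_other"] * 10

counts = Counter(scraped)
total = len(scraped)
proportions = {k: v / total for k, v in counts.items()}
print(proportions)  # a balanced scrape would be much closer to uniform
```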


How does Instagram use AI?

Instagram uses AI in three ways:

1) the news feed (sorting your posts),

2) targeted advertising, based on your demographic and what you have 'liked' in the past, and

3) deep learning, to manage community moderation.
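
The signals behind the first of these, the ranked feed, are not public. As a minimal sketch of the idea, assume just two made-up signals: your 'affinity' with the poster and how recent the post is. The feed then becomes a sort by predicted engagement rather than by time.

```python
# Minimal sketch of ranked-feed sorting. 'affinity', 'recency' and the
# 0.7/0.3 weights are assumptions; Instagram's real signals are not public.
from datetime import datetime, timedelta

now = datetime(2020, 4, 1, 12, 0)

posts = [
    {"id": "a", "posted": now - timedelta(hours=8), "affinity": 0.9},
    {"id": "b", "posted": now - timedelta(hours=1), "affinity": 0.2},
]

def rank(post):
    hours_old = (now - post["posted"]).total_seconds() / 3600
    recency = max(0.0, 1 - hours_old / 24)  # linear decay over a day
    return 0.7 * post["affinity"] + 0.3 * recency

feed = sorted(posts, key=rank, reverse=True)
print([p["id"] for p in feed])  # the older post wins on affinity alone
```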

Deep learning looks at the words in a post's comments, groups those words together, and classifies the text as good or bad. Bad text might be what the model considers trolling, hate speech, or words associated with cyberbullying. The algorithm is a closely guarded secret: we do not know how the model works, what types of comments it targets, or how many comments are being removed. Therefore we don't know how biased it is, or how much this algorithm is "used to coerce, discipline, regulate and control: to guide and reshape how people, … interact with and pass through .. systems" (Kitchin, 2017).
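
At its simplest, this kind of moderation is text classification. Instagram's actual model is proprietary, so the word list and threshold below are invented; a real deep-learning system would learn its features from data rather than use a hand-written list, but the flagging logic is the same shape.

```python
# Hedged sketch of comment moderation as text classification.
# BAD_WORDS and the threshold are invented stand-ins for learned features.

BAD_WORDS = {"loser", "ugly", "stupid"}

def flag_comment(comment, threshold=1):
    """Flag a comment if it contains enough 'bad' tokens."""
    tokens = comment.lower().split()
    hits = sum(1 for t in tokens if t.strip(".,!?") in BAD_WORDS)
    return hits >= threshold

print(flag_comment("great photo!"))     # → False
print(flag_comment("you look stupid"))  # → True
```

The opacity the paragraph above describes maps directly onto this sketch: users never see the equivalent of `BAD_WORDS` or `threshold`, so they cannot tell what is being removed or why.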

Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29, DOI: 10.1080/1369118X.2016.1154087