New York City Moves to Create Accountability for Algorithms — ProPublica

This article discusses the development of an ‘accountability bill’ in New York City, which aims to hold companies to account when their algorithms appear to discriminate against people based on age, race, religion, gender, sexual orientation or citizenship status.

Whilst it would be a good law in theory, in reality it would be very hard to implement. Algorithmic systems are difficult to break down: they are designed to be increasingly complex, gathering millions of data points and “woven together with hundreds of other algorithms to create algorithmic systems” (Williamson 2017). Added to that, companies are very secretive about how their models work and what parameters make up the design of their algorithms, so that “the rules generated by (algorithms) are compressed and hidden” (Williamson 2017). Deliberate discrimination would therefore be very hard to prove in these cases, but hopefully the bill will encourage scientifically sound data builds, validated in appropriate ways, and eventually make them more transparent to the public.

Full article: https://www.propublica.org/article/new-york-city-moves-to-create-accountability-for-algorithms
via Pocket

Williamson, B. 2017. Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: the digital future of learning, policy, and practice

AI Knows How to Take Away Your Health Insurance

China’s largest insurer, Ping An, has apparently started employing artificial intelligence to identify untrustworthy and unprofitable customers via facial recognition software designed to interpret expressions of stress, ethnicity and body weight (BMI). Its purpose is to maximize profits by avoiding expensive customers, with no constraints for fairness or long-term community health. AI used in this way may allow insurers to reject sick people (or people they expect to get sick) and accept only those customers who they don’t expect to use the insurance. Insurance that excludes people who might need it is no longer insurance.
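
To make the risk-selection logic concrete for myself, here is a minimal, purely hypothetical sketch; none of the feature names, weights or figures describe Ping An’s actual system, they are invented only to illustrate how a profit-driven score filters out exactly the people who most need cover:

```python
# Hypothetical illustration only: a toy "profitability" filter.
# Feature names and weights are invented; they do NOT describe any real insurer's model.

applicants = [
    {"name": "A", "predicted_claims": 12000, "stress_signal": 0.8},
    {"name": "B", "predicted_claims": 400,   "stress_signal": 0.1},
]

PREMIUM = 3000  # annual premium income per customer, arbitrary figure for the example

def expected_profit(applicant):
    # A purely profit-driven score: premium income minus predicted claims,
    # discounted further when the facial-analysis "stress" signal is high.
    return PREMIUM - applicant["predicted_claims"] * (1 + applicant["stress_signal"])

accepted = [a for a in applicants if expected_profit(a) > 0]
print([a["name"] for a in accepted])  # only the low-need applicant "B" survives the filter
```

The point of the toy example is simply that nothing in the objective function rewards covering the people who will actually claim; fairness has to be imposed from outside the score.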

This is an example of the hierarchizing of people. I’m not sure yet whether it will find its way into education, other than to disadvantage people who might already be disadvantaged in terms of ethnicity; being unable to access insurance may in turn limit access to education. Additionally, if teachers are unable to access health insurance they may not be able to get a job as an educator, and this would disadvantage the sector. As has been advocated by Knox, Williamson and Kitchin, it remains imperative to be aware of the power of algorithms and to give critical attention to the forces at play behind their development: “Businesses with products to sell, venture capital firms with return on investment to secure, think tanks with new ideas to promote, policy makers with problems to solve and politicians with agendas to set have all become key advocates for data driven education” (Williamson 2017).

Full article: https://www.bloomberg.com/opinion/articles/2019-06-14/china-knows-how-to-take-away-your-health-insurance
via Pocket

References

Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29, DOI: 10.1080/1369118X.2016.1154087

Knox, J. (2015). Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI 10.1007/978-981-287-532-7_124-1

Knox, J., Williamson, B., & Bayne, S. (2020) Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45, DOI: 10.1080/17439884.2019.1623251

Williamson, B. 2017. Introduction: Learning machines, digital data and the future of education (chapter 1). In Big Data and Education: the digital future of learning, policy, and practice

Using Data Science to Reduce Fire Risk

A positive article about machine learning; it is sometimes hard to find the positive ones.

Funded by Data Science for Social Good, safety and building inspection officers in Atlanta teamed up with some data science students to create a Predictive Community Risk Reduction database, using data science to reduce fires. They called it ‘Firebird’.

Firebird then ran its parameters and assigned a fire risk score to each property on the fire department’s inspection list, and also created “heat maps” so AFRD inspectors can see at a glance which areas of the city of Atlanta have more at-risk buildings.
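
The NFPA article doesn’t reproduce Firebird’s code, but the general approach it describes (train a classifier on past fire outcomes, then score every property on the inspection list) can be sketched roughly as below. The feature names and the use of scikit-learn’s RandomForestClassifier are my assumptions for illustration, not a description of the actual Firebird model:

```python
# Rough sketch of property fire-risk scoring; feature names and model choice are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Historical data: one row per property, labelled with whether a fire occurred.
history = pd.DataFrame({
    "building_age":              [80, 12, 45, 60, 5],
    "floor_area_m2":             [900, 300, 1200, 450, 200],
    "last_inspection_years_ago": [6, 1, 4, 8, 0],
    "had_fire":                  [1, 0, 1, 1, 0],
})

features = ["building_age", "floor_area_m2", "last_inspection_years_ago"]
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(history[features], history["had_fire"])

# Score every property on the inspection list with a fire-risk probability;
# binned by location, these scores are what a "heat map" would visualise.
inspection_list = pd.DataFrame({
    "building_age":              [70, 3],
    "floor_area_m2":             [1000, 250],
    "last_inspection_years_ago": [7, 1],
})
inspection_list["fire_risk_score"] = model.predict_proba(inspection_list[features])[:, 1]
print(inspection_list)
```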

What I like about this story is that the students who worked on the project decided to make the Firebird model publicly available as open-source code, so it can be used and improved by anyone with an Internet connection. I think it also demonstrates a positive use of AI or machine learning in education, giving our student data scientists projects to focus on that are not financially incentivised but socially so.

Full article: https://www.nfpa.org/News-and-Research/Publications-and-media/NFPA-Journal/2016/May-June-2016/Features/Embracing-Analytics
via Pocket

The Algorithms Aren’t Biased, We Are

I like the symbol of the broken algorithmic mirror in this article. Decisions vital to our welfare and freedoms are made using, and supported by, algorithms that improve with data: machine learning (ML) systems. Algorithms can be taught to score teachers and students, sort résumés, grant (or deny) loans, evaluate workers, target voters, set parole, and monitor our health insurance. However, determining a decision with an algorithm does not automatically make it reliable and trustworthy, just as quantifying something with data doesn’t automatically make it true.

We must not assume the transparency and necessity of automation (Knox 2015), and we must maintain a “more general, critical account of algorithms, their nature and how they perform work” (Kitchin 2017).

This article reminds us that teachers matter. This is even more true in machine learning: machines do not bring prior experience or contextual beliefs into the decision-making process. They reflect the biases in our questions and our data. These biases get baked into machine learning projects in both feature selection and training data. As more and more learner data is harvested, learners, teachers and administrators are being prompted to choose a certain path of learning. Knox et al. (2020) describe this as a ‘crucial shift’ away from Biesta’s learnification model, which is led by learner needs (learner as consumer). As datafication takes over, we move away from learnification and become more influenced by behavioural psychology and algorithmic generative rules (Kitchin 2017) that nudge us towards ‘correct’ forms of performance and conduct that have already been decided (Knox et al. 2020).
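
A tiny, contrived sketch of the “baked-in bias” point: if the historical labels already encode a skewed outcome for one group, a model trained on them will faithfully reproduce that skew. The group encoding and numbers here are invented purely for illustration:

```python
# Contrived example: historical approval labels are skewed against group "B",
# so the learned model simply reproduces that skew as a "prediction".
from sklearn.linear_model import LogisticRegression

# Single feature: group membership encoded as 0 (group A) or 1 (group B).
X = [[0], [0], [0], [0], [1], [1], [1], [1]]
# Historical outcomes: group A was almost always approved, group B almost never.
y = [1, 1, 1, 0, 0, 0, 0, 1]

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0]])[0][1])  # high approval probability for group A
print(model.predict_proba([[1]])[0][1])  # low approval probability for group B
```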

All of this can have implications for education: what is taught, why it is taught, what is excluded from the curriculum, and the efforts we make to provide corrective lenses for the algorithmic mirror.

How do these “automated, non-human agents influence contemporary educational practices?” (Knox 2015) If the Amazon recommendation algorithm, which has significant sway over our buying habits, were used in an educational setting, it could, for example, steer a student with weaker maths skills away from choosing elective maths modules, based on their previous choices and on the algorithm’s bias that they are probably not going to succeed anyway. Further down the line, in their final paper, they would not be able to use statistical analysis to query research data sets, or to cross-reference and extrapolate new comparisons and results.
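
As a deliberately simplified, invented illustration of how such a recommender could quietly close off options, here is a toy rule that filters module suggestions by past grades; the module names and cutoff are made up, and real recommendation systems are far more complex, but the effect on what a student is ever shown is the same:

```python
# Toy illustration (invented thresholds and module names): a recommender that
# never surfaces maths electives to students with weaker past maths grades.

student = {"name": "C", "grades": {"maths": 48, "history": 72}}
electives = ["Statistics for Research", "Data Analysis", "Modern History", "Creative Writing"]
MATHS_MODULES = {"Statistics for Research", "Data Analysis"}

def recommend(student, electives, maths_cutoff=55):
    # The "bias" lives in this rule: below the cutoff, maths options simply disappear.
    if student["grades"]["maths"] < maths_cutoff:
        return [m for m in electives if m not in MATHS_MODULES]
    return electives

print(recommend(student, electives))
# ['Modern History', 'Creative Writing'] - the student never sees the maths options,
# and so never builds the statistical skills needed later for their final paper.
```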

References

Kitchin, R. (2017) Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, 14-29, DOI: 10.1080/1369118X.2016.1154087

Knox, J. (2015). Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI 10.1007/978-981-287-532-7_124-1

Knox, J., Williamson, B., & Bayne, S. (2020) Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies, Learning, Media and Technology, 45:1, 31-45, DOI: 10.1080/17439884.2019.1623251

Full article: https://medium.com/mit-media-lab/the-algorithms-arent-biased-we-are-a691f5f6f6f2
via Pocket

The Tension between Professional Control and Open Participation

This article by Lewis (2012) explores how the professions of law, medicine and academia have endured an ongoing contest with a do-it-yourself culture that challenges traditional forms of elite expertise. It is a struggle over boundaries: about the rhetorical and material delimitations of insiders and outsiders.

This incursion of the ‘ordinary person’ into the bastions of media privilege is experienced as both opportunity and threat by the industries themselves (Lister 2005).

Emerging research also suggests the possibility of a hybrid logic of adaptability and openness, an ethic of participation, emerging to resolve this tension going forward (Lewis 2012): a trickle of empirical data is beginning to suggest a “slow philosophical shifting” towards a resolution of the professional–participatory tension.

Reference

Lewis (2012). The Tension between Professional Control and Open Participation: Journalism and its Boundaries. https://conservancy.umn.edu/bitstream/handle/11299/123290/1/iCS%20-%20The%20Tension%20between%20Professional%20Control%20and%20Open%20Participation%20-%20Journalism%20and%20its%20Boundaries.pdf

Accessed 16 Feb 2020