Using Data Science to Reduce Fire Risk

A positive article about machine learning; it is sometimes hard to find the positive ones.

Funded by Data Science for Social Good, safety and building inspection officers in Atlanta teamed up with some data science students to create a predictive community risk reduction database aimed at using data science to reduce fires. They called it ‘Firebird’.

Firebird then assigned a fire risk score to each property on the fire department’s inspection list, and also created “heat maps” so that Atlanta Fire Rescue Department (AFRD) inspectors could see at a glance which areas of the city contain more at-risk buildings.
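
To make the mechanics concrete, here is a minimal sketch of the kind of risk-scoring pipeline the article describes. It is not the actual Firebird code (which is what the students open-sourced); the property features, the labels, and the choice of a random-forest model are all assumptions for illustration.

```python
# Minimal sketch of a Firebird-style risk scorer.
# The features, data, and model choice are hypothetical;
# the real Firebird code is published as open source.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy property records: [building_age_years, floor_area_m2, past_violations]
X = rng.uniform(low=[0, 50, 0], high=[120, 5000, 10], size=(500, 3))
# Toy labels: 1 = a fire incident was recorded at the property, 0 = none.
y = (0.01 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(0, 1, 500) > 2).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# The "risk score" is the predicted probability of a fire incident;
# an inspection list can then be sorted (or drawn as a heat map) by it.
inspection_list = rng.uniform(low=[0, 50, 0], high=[120, 5000, 10], size=(5, 3))
scores = model.predict_proba(inspection_list)[:, 1]
for prop, score in sorted(zip(inspection_list, scores), key=lambda t: -t[1]):
    print(f"age={prop[0]:5.1f}y  area={prop[1]:6.0f}m2  "
          f"violations={prop[2]:4.1f}  ->  risk={score:.2f}")
```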

What I like about this story is that the students who worked on the project decided to make the Firebird model publicly available as open-source code, so it can be used and improved by anyone with an Internet connection. I also think it demonstrates a positive use of AI and machine learning in education, giving our student data scientists projects to focus on that are not financially incentivised but socially so.

Full article: https://www.nfpa.org/News-and-Research/Publications-and-media/NFPA-Journal/2016/May-June-2016/Features/Embracing-Analytics
via Pocket

The Algorithms Aren’t Biased, We Are

I like the symbol of the broken algorithmic mirror in this article. Decisions vital to our welfare and freedoms are made using, and supported by, algorithms that improve with data: machine learning (ML) systems. Algorithms can be taught to score teachers and students, sort résumés, grant (or deny) loans, evaluate workers, target voters, set parole, and monitor our health insurance. However, determining a decision with an algorithm does not automatically make it reliable and trustworthy, just as quantifying something with data doesn’t automatically make it true.

We must not assume the transparency and necessity of automation (Knox 2015), and we should maintain a “more general, critical account of algorithms, their nature and how they perform work” (Kitchin 2017).

This article reminds us that teachers matter. This is even more true in machine learning: machines do not bring prior experience or contextual beliefs into the decision-making process. They reflect the biases in our questions and our data. These biases get baked into machine learning projects through both feature selection and training data. As more and more learner data is harvested, learners, teachers and administrators are being prompted to choose a certain path of learning. Knox et al. (2020) describe this as a ‘crucial shift’ away from Biesta’s learnification model, which is led by learner needs (learner as consumer). As datafication takes over, we move away from learnification and become more influenced by behavioural psychology and algorithmic generative rules (Kitchin 2017) that nudge us towards ‘correct’ forms of performance and conduct that have already been decided (Knox et al. 2020).
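
As a deliberately simplified sketch of how training data bakes bias in, consider the toy example below. The ‘group’ feature, the penalty, and all the numbers are invented, but the pattern is general: a model trained on historically skewed decisions learns the skew as if it were signal.

```python
# Toy illustration: a classifier trained on biased historical decisions
# reproduces the bias. The "group" feature and all numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

ability = rng.normal(0, 1, n)      # true merit, identical across groups
group = rng.integers(0, 2, n)      # an arbitrary demographic label, 0 or 1

# Historical decisions were merit-based, but group 1 was penalised.
accepted = (ability - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# The model treats the group label as just another informative feature...
model = LogisticRegression().fit(np.column_stack([ability, group]), accepted)

# ...and learns to penalise group 1, despite identical ability.
probe = np.array([[0.0, 0], [0.0, 1]])    # same ability, different group
print(model.predict_proba(probe)[:, 1])   # group 1 scores lower
```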

All of this can have implications for education: for what is taught, why it is taught, what is excluded from the curriculum, and for the efforts we make to provide corrective lenses for the algorithmic mirror.

How do these “automated, non-human agents influence contemporary educational practices?” (Knox 2015) If the Amazon recommendation algorithm, which has significant sway over our buying habits, were used in an educational setting, it could, for example, steer a student with weaker maths skills away from choosing elective maths modules, based on their previous choices and on the algorithm’s bias that they are probably not going to succeed anyway. Further down the line, in their final paper, they would be unable to use statistical analysis to query research data sets, or to cross-reference and extrapolate new comparisons and results.
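
Here is a hypothetical sketch of that nudge (the module names, skill weights, and grades below are all invented, and this is not Amazon’s algorithm): a recommender that ranks electives by predicted success will quietly bury maths-heavy modules for a student with a weak maths history.

```python
# Hypothetical sketch of a success-predicting elective recommender.
# Module names, skill weights, and grades are invented for illustration.
past_grades = {"maths": 0.45, "history": 0.80, "literature": 0.85}

electives = {
    "statistics":       {"maths": 0.9, "history": 0.0, "literature": 0.1},
    "data analysis":    {"maths": 0.8, "history": 0.1, "literature": 0.1},
    "creative writing": {"maths": 0.0, "history": 0.2, "literature": 0.8},
}

def predicted_success(module_skills, grades):
    """Weight the student's past grades by how much the module leans on each skill."""
    total = sum(module_skills.values()) or 1.0
    return sum(w * grades[skill] for skill, w in module_skills.items()) / total

# Ranking by predicted success pushes the maths-heavy modules to the bottom,
# so the "weak at maths" student is never nudged towards them.
for name, skills in sorted(electives.items(),
                           key=lambda kv: predicted_success(kv[1], past_grades),
                           reverse=True):
    print(f"{name}: {predicted_success(skills, past_grades):.2f}")
```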

References

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20:1, 14-29. DOI: 10.1080/1369118X.2016.1154087

Knox, J. (2015). Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In M. A. Peters (Ed.), Encyclopedia of Educational Philosophy and Theory. DOI: 10.1007/978-981-287-532-7_124-1

Knox, J., Williamson, B., & Bayne, S. (2020). Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45:1, 31-45. DOI: 10.1080/17439884.2019.1623251

Full article: https://medium.com/mit-media-lab/the-algorithms-arent-biased-we-are-a691f5f6f6f2
via Pocket