Researching the datafication of family life and the impact of tech on children’s rights in the Child Data Citizen project led me to a new line of research. As I listened to parents telling me how they were being falsely ‘read’ or ‘understood’ by algorithms crunching their data traces, and analyzed the impacts of inaccurate automated decisions on children’s life trajectories, I realized that such data injustices or ‘human errors’ could harm any human being.
The Human Error project specifically investigates the fallacy of algorithms when it comes to reading humans. My team and I argue that the claimed fairness and objectivity of machines fall short when it comes to making decisions about humans or judging them. As artificial intelligence is increasingly used in fields such as education, healthcare, and human resources, the systemic ‘errors’, ‘biases’, and ‘inaccuracies’ it carries can severely impact humans’ life trajectories and therefore need to be unveiled and explained.
This two-year project was launched in September 2020 at the University of St. Gallen in Switzerland and combines anthropological theory with critical data and AI research. Our goal is “to shed light on the fact that the race for AI innovation is often shaped by stereotypical and reductionist understandings of human nature, and by new emerging conflicts about what it means to be human.”
As I continue to advocate for closer attention to children’s data rights when using AI-enabled technologies, I am thrilled to pursue the conversation around AI, ethics, and human rights through the Human Error project. Visit our website to find out more about the Human Error Project and the team.