The Call for Human-Centered AI: Experts Urge a Shift in Approach to Technology Development

A team of global experts emphasizes the need for human-centered AI design to prioritize the well-being and needs of individuals.

Artificial intelligence (AI) has become an integral part of our lives, but there is growing concern that its development is focused more on innovation than on meeting human needs. A new book titled “Human-Centered AI – A Multidisciplinary Perspective for Policy-Makers, Auditors, and Users” brings together the insights of fifty experts from various disciplines and countries to explore the importance of human-centered AI. These experts argue that AI should be designed to support and empower humans rather than replace or devalue them. This article delves into the concept of human-centered AI, its potential benefits, and the challenges that need to be addressed.

Human-Centered AI: Aligning Technology with Human Flourishing
Human-centered AI, according to Shannon Vallor from the University of Edinburgh, means aligning technology with the health and well-being of individuals. It focuses on creating AI that supports and enriches human lives rather than competing with or replacing humans. Vallor criticizes the current development of generative AI for prioritizing power and capability over human needs. The result, she argues, is technology that humans must adapt to, rather than technology designed to meet their specific requirements.

The Problem with AI: Systemic Biases and Privacy Concerns
The contributors to the book highlight several concerns about the current trajectory of AI development. Malwina Anna Wójcik from the University of Bologna and the University of Luxembourg points to systemic biases in AI development that lead to discrimination and power imbalances, and she emphasizes the need for diversity in research and for interdisciplinary collaboration to address these issues. Matt Malone from Thompson Rivers University discusses the challenge AI poses to privacy, as people often have little understanding of how their data is collected and used. He suggests that privacy will continue to play a crucial role in defining the boundaries between humans and technology.


AI and Human Behavior: Impacts on Self and Social Media
The book also explores the behavioral impacts of AI use. Oshri Bar-Gil from the Behavioral Science Research Institute discusses how using platforms like Google can change our thinking processes, diminishing our agency and autonomy. Alistair Knott from Victoria University of Wellington, Tapabrata Chakraborti from the Alan Turing Institute, and Dino Pedreschi from the University of Pisa investigate the use of AI in social media and its potential role in pushing users toward extremist positions. They propose greater transparency about recommender systems and further study of their effects on users' attitudes toward harmful content.

Making Human-Centered AI a Reality: Extending Existing Laws and Confidence in Policymakers
Pierre Larouche from the Université de Montréal argues that instead of creating new legislation for AI, existing laws should be extended and applied to address its challenges. He emphasizes the importance of framing the debate within existing legal frameworks to avoid prolonged discussions that hinder progress. Benjamin Prud’homme from Mila – Quebec Artificial Intelligence Institute calls for policymakers to have confidence in their ability to regulate AI responsibly. He encourages policymakers to invite diverse perspectives, including those of marginalized communities and end-users, to ensure effective governance mechanisms are put in place.

The call for human-centered AI design has gained momentum as experts from various disciplines and countries emphasize the need to prioritize human well-being and needs. The book “Human-Centered AI – A Multidisciplinary Perspective for Policy-Makers, Auditors, and Users” provides valuable insights into the risks and missed opportunities of not adopting a human-centered approach. By addressing systemic biases, privacy concerns, and behavioral impacts, and by extending existing laws and involving diverse perspectives, we can shape AI technology that truly enhances human experiences and empowers individuals. Policymakers must embrace the challenge and strike a balance between innovation and responsible regulation, ensuring that AI serves humanity rather than the other way around.
