Leading with Consciousness in the Age of Artificial Intelligence

Center for Responsible Business & Leadership
Tuesday, November 11, 2025 - 10:30

Artificial intelligence (AI) is no longer a distant promise; it has become an integral part of our daily lives: from product recommendations to CV screening, from medical decisions to bank credit. Yet as AI becomes ever more pervasive, so too grows the risk that our organisations become more efficient but less wise.

Luciano Floridi, the Italian scholar renowned for his pioneering work in the philosophy and ethics of information, reminds us that the essential question is not whether AI is intelligent, but what kind of intelligence it possesses. AI can be efficient, but it is not conscious; it can recognise patterns, but it does not understand their meaning or the moral context in which it operates. The ethical challenge lies not only in what machines do, but also in how we programme them to decide and in the kinds of human values we embed within them. It is therefore up to us to redefine the role of human intelligence in this new Age of AI. That role is less about solving a clearly defined problem and more about deciding which problems are worth solving, why and to what end, and which costs, trade-offs, and consequences are acceptable.

The case of Uber’s self-driving car, which in 2018 caused the death of a pedestrian in Arizona, exposed the moral fragility of technological progress when innovation outpaces caution. When algorithms make fatal decisions, who is responsible? The company that disabled the emergency brakes? The software engineer who programmed the system? The regulator who authorised the tests? Or the safety driver who placed too much trust in the technology? The episode illustrates the risk of diffused or fragmented moral responsibility, one of the greatest ethical dilemmas of the digital age.

Responsible leadership, in this context, is the art of balancing prudence with ambition. It is not about stifling innovation, but about ensuring that technological creativity does not eclipse moral commitment. As Reid Blackman argues in his recent book Ethical Machines, companies must not only reflect on the ethical risks of adopting AI, but also build formal structures to ensure that those risks are properly addressed. On the side of risks, responsible leaders must favour explainability over algorithmic opacity, diversity and inclusion over bias, and information privacy over immediate profit. On the side of structures, leadership should assign roles responsible for the continuous assessment and monitoring of these issues, and design processes that translate ethical principles into business practice.

AI is a powerful tool for the common good, but only if it is guided by human values. Beneficence, justice, and critical thinking are not abstract concepts. They are practical leadership skills in a world where the power to decide is no longer exclusively human. The true test of AI is not technical, but moral: to ensure that our machines are fair and reflect, as faithfully as possible, the fundamental values of humanity.

Filipa Lancastre, Assistant Professor of Strategy, Organizations and Entrepreneurship 

Yunus Social Innovation Center