The outlook on the future of artificial intelligence tends to vary from alarmist to utopian, but humanity still has time to shape the future of AI in positive directions. An in-depth resource for how that can happen is Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future, the new book by Hamilton Mann, Group Vice-President Digital at Thales and lecturer at INSEAD and HEC Paris.
I enjoyed working with Hamilton, in my role as managing editor of Leader to Leader, on his Fall 2024 article “Building an AI Compact to Uphold Artificial Integrity.” Hamilton, whose book is published by Wiley, also the co-publisher of our journal, was recommended to us by our editor-in-chief, Sarah McArthur.
The Global Peter Drucker Forum recently published Hamilton’s article “The Only Code That Matters Is Integrity — Not Intelligence,” he was recently interviewed by Jyoti Guptara on video for Drucker Forum TV, and in November he was a panelist at the 16th Global Peter Drucker Forum in Vienna.
I’m grateful to Hamilton for answering my questions about the book itself, his concept of Artificial Integrity, and how it relates to his career as an executive and an educator.
How would you characterize the research, lived experience, and key messages represented in your recent book Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future?
In Artificial Integrity, I address one of the most critical challenges of our time: how to ensure that AI serves humanity, not the other way around.
Weaving together research and lived experience, I advocate for a vision of AI that prioritizes integrity over intelligence.
The core of the book challenges the prevailing narrative that AI’s evolution should be driven purely by making systems exhibit ever more so-called intelligence, for the sake of surpassing what we think we know about human intelligence on the way to superintelligence or AGI.
Instead, I advocate for AI systems that are guided by integrity as a foundational and critical point of design, especially in further advancing AI capability, ensuring their outcomes are human-centered, which I define as AI systems having an autonomous capacity to be ethically, socially, and morally aligned with societal values in their core functioning.
It’s about radically rethinking what we expect from technology and the role it plays in our lives. For too long, AI has been about raw power and mimicking intelligence. But intelligence alone isn’t enough—it’s like building the fastest car in the world without brakes. What’s the point if you can’t control where you’re going?
Warren Buffett famously said, “In looking for people to hire, look for three qualities: integrity, intelligence, and energy. And if they don’t have the first, the other two will kill you.” I think this principle equally applies to AI systems.
AI should amplify the best of humanity, not replace it, nor exploit our weaknesses, for example, by surreptitiously hacking the reward system of our brains. It should enhance our creativity, decision-making, and values. I’ve seen firsthand how technology can transform lives when it’s designed with people at the center. And I’ve also seen what happens when it’s driven purely by profit, power, or speed—chaos, mistrust, and unintended consequences.
In seeking AI systems that could benefit all of humanity, it is no longer enough to create systems that compute value; we must create systems that comprehend values.
That’s the frontier of AI we should be aiming for: designing technology that doesn’t just work but works for us, shaping a future we all want to live in.
Taking into consideration your research, writing, and promoting the book, how well does this fit into/complement your day-to-day work, including teaching?
At its core, the book is an extension of my professional and academic pursuits—it represents a step in my journey of research and practical insights into responsible AI, which I also like to call AI for Good, to emphasize that good is not exclusive to ‘not-for-profit’ ventures.
Working in the fields of aerospace, defense, and cybersecurity—which encompass nothing less than supporting critical and vital infrastructures in our digitalized societies—I see every day how much technology is an asset and a driver of progress for humanity. The AI revolution is no exception. It represents a major step forward, with great promise in delivering good to society.
However, it is essential to acknowledge that technology, even when created with the best intentions, is not inherently immune from producing adverse effects in society.
In my lectures, I strive to share with students and professionals the importance of thinking critically about the role of technology in shaping society, and the book has provided me with a platform to foster conversations and reflections in the classroom.
Moreover, promoting the book has been an opportunity to engage with a global audience, sparking dialogues and research questions aimed at cracking the code to further advance the development of Artificial Integrity systems.
The narrative that AI will inevitably determine every environmental, economic, or societal outcome isn’t a given. We have choices in what AI systems we design.
In many ways, Artificial Integrity is a catalyst for the broader mission that drives my work: empowering ourselves to approach technology with both curiosity and a sense of responsibility, ensuring that innovation remains aligned with human values.
If an employee book group at a major international institution such as the World Bank, IMF, or United Nations, or at a trade association such as The AI Association, The Association for the Advancement of Artificial Intelligence, or the European Association for Artificial Intelligence, were to read your book, what particular parts would they likely find most relevant to the mission of their organization?
Several chapters would stand out. I would cite three, both from a business perspective and from the standpoint of multilateral international institutions like the ones you mentioned.
Starting at the beginning, Chapter 1, The Stakes for Building Artificial Integrity, holds relevance from a business perspective as it examines the transformative potential of AI in reshaping industries and creating competitive advantages while leveraging the approach of digital for good. It highlights the duality of AI as both a driver of business innovation and societal progress, emphasizing the need for businesses to embed integrity into their AI strategies to ensure long-term sustainability and trust, both of which are, and will increasingly become, their license to remain competitive.
Conversely, if I look at Chapter 1 from a multilateral perspective, for the IMF, the chapter underscores the profound implications of AI on global economic stability and workforce dynamics. In light of concerns about the transformative impact of AI on labor markets, the chapter provides a timely framework for understanding how AI can drive economic shifts while presenting challenges to job security and equitable growth.
If I pick one chapter in the middle, for businesses, Chapter 4, What Navigating Artificial Integrity Transitions Implies, provides a blueprint for navigating the challenges of embedding integrity within the dynamic of Human and AI Co-Intelligence. It examines the relationship between humans and AI through four modes: Marginal Mode (where only limited AI contribution and human intelligence are required—think of it as ‘less is more’), AI-First Mode (where AI takes precedence over human intelligence), Human-First Mode (where human intelligence takes precedence over AI), and Fusion Mode (where a synergy between human intelligence and AI is required). It draws on the approaches and guideposts to consider when embedding integrity capability into AI systems while transitioning from one mode to another.
Changing lenses, and looking at this chapter from the standpoint of a multilateral agency such as the United Nations, it also offers critical insights for addressing pressing concerns about inclusivity in human interaction with AI by enforcing mechanisms within the system itself rather than relying solely on external guidelines. Specifically, it explores how transitions between modes like Human-First, AI-First, and Fusion can be designed to uphold human dignity and ensure equitable access to AI benefits.
Closing with the final one, Chapter 8, What Change to Envision in Economic AIquity and Societal Values, resonates with a business perspective by emphasizing the strategic importance of building fair and transparent data-driven economies, given the growing concerns around data ownership and consentship in the context of AI training.
On this aspect, thinking, for example, about the missions of the World Bank, this chapter’s exploration of data as a common currency and its potential to foster economic equity provides valuable insights into how AI can drive sustainable economic development and address disparities. It emphasizes the need to mitigate the risks of data monopolies in our global economies, ensuring that data ownership and access are distributed fairly.
As for the neologisms consentship and AIquity: I define “Consentship,” a combination of “Consent” and “Leadership,” as a leadership approach rooted in the principle of obtaining and respecting the informed consent of individuals when it comes to the use of their data.
I define “AIquity,” a combination of “AI” and “Equity,” as the pursuit of fairness and equity in AI applications, ensuring AI systems do not perpetuate or exacerbate social inequalities.