Real-World Action for AI: The Summit and Beyond
AI opportunity vs AI safety is a false dichotomy
This past weekend I attended a superb event, ‘Governing in the Age of AI’, held in Paris by the Tony Blair Institute (TBI) ahead of the AI Action Summit on February 10-11. The TBI panelists were a truly impressive set of AI leaders, notably including Yoshua Bengio, among whose many major accomplishments is leading the International AI Safety Report published ahead of the summit.
The AI Action Summit itself asserted high-level aspirations for ethical AI and regulation, but produced little real consensus. I have cited many previous examples of well-intended pablum elsewhere. But the TBI event featured candid views on real issues, and civilized debate between people with differing opinions. We all need more of that.
The discussions at the TBI event both consolidated and profoundly altered my thinking on how AI should be deployed. Three points are key:
AI is a real-world technology, and its human and technical effects are not separable.
AI is moving too rapidly and too unpredictably for effective ‘control in advance’—e.g. by regulation.
The ongoing debate between AI opportunity and AI safety creates a false dichotomy.
Below I explore each of these points. In short, rather than being dazzled by the technological capabilities of AI, we should look at AI through a practical lens of what it actually does to change the world as we know it.
AI (and intelligence) in the real world
There is no agreed definition of ‘artificial intelligence’, but most widely-accepted definitions say or imply something about replication of human intelligence. The term was originally defined by John McCarthy in 1955 as “the science and engineering of making intelligent machines”, and it’s fairly obvious that “intelligent machines” here refers to machines that could do things that only humans could previously do. When I asked ChatGPT this morning, it was more explicit:
Artificial Intelligence (AI) refers to the field of computer science that focuses on creating machines or software capable of performing tasks that typically require human intelligence.
This makes AI different from other technologies. In the 1960s, Marshall McLuhan wrote in Understanding Media how new technologies and media extend human capabilities:
We become what we behold. We shape our tools, and thereafter our tools shape us.
AI arguably has a similar effect, but it goes a step further by replacing rather than merely extending human capabilities. And because of this replacement, AI extends the capabilities of those who deploy it at the expense of those it displaces, multiplying societal power dynamics. This has recently been obvious in the apparent plans of Elon Musk and the Department of Government Efficiency (DOGE) to replace US government employees with AI systems. Replacement and disruption of jobs have also been features of many previous technologies, but in this respect AI is in a class by itself.
In a world where AI is increasingly doing what humans do, we should expect effects that involve all or most of the complexity of human society—with these effects:
somewhat mitigated by the fact that AI will not quickly do all that humans can do
intensified because AI is far from identical to human intelligence and introduces new complexities to societal interactions.
I have written multiple times in this blog about the need for responsible AI solutions to address the specific context in which they are deployed. Thinking about AI in terms of doing human work makes clear that the diversity of real-world situations in which AI is used cannot easily be broken down into categories of use cases—the uses of AI are manifold, highly situation-specific and evolving quickly.
The final panel at the TBI event this week brought home the nature of situation-specificity through the divergent views of Yoshua Bengio and Fu Ying, China’s former Vice Minister of Foreign Affairs and UK ambassador and now professor at Tsinghua University. While Professor Fu expressed confidence in the ability to prevent AI misuse through legislation, Bengio was much less sanguine about this solution. This disagreement highlighted that solutions for the same problem can differ depending upon societal context: i.e. the difference between China’s relatively authoritarian society (which allows particularly effective top-down regulatory solutions) and the greater autonomy of Western society.
Moving at ‘AI speed’
The speed at which AI is moving was an important theme at the TBI event. Both Marc Warner (CEO of UK AI implementation company Faculty) and Laura Gilbert (former UK government AI official at 10 Downing Street and current Head of AI for Government at Oxford’s Ellison Institute of Technology) made the simple but perhaps non-obvious point that it’s nearly impossible to predict what’s coming next with AI—even developments that now seem inevitable (notably the phenomenal success of large language models, beginning with ChatGPT) were almost entirely unexpected just a few years ago. Warner also made a related exhortation to “trust the exponential”—i.e. to trust that the ongoing exponential progress of AI will continue.
In the halcyon days of the late 1990s dotcom bubble, we used to talk about moving at ‘Internet speed’. Well … ‘AI speed’ is much faster.
The International AI Safety Report also emphasizes that the rapid pace of AI innovation presents major challenges:
The pace and unpredictability of advancements in general-purpose AI pose an ‘evidence dilemma’ for policymakers. Given sometimes rapid and unexpected advancements, policymakers will often have to weigh potential benefits and risks of imminent AI advancements without having a large body of scientific evidence available.
A possible conclusion is that this uncertainty means that society must choose either to let AI take its course, or to take strong action to restrain the progress of AI. For example, this was the approach of the unsuccessful (and I would argue misguided) March 2023 call for a six-month “pause” in development of advanced AI models.
In my view, this conclusion is unwarranted, even putting to one side the facts that the AI genie is out of the bottle, and that coordinated global action to restrain it is for practical purposes impossible.
The false dichotomy between AI opportunity and safety
The seeds of an alternative solution lie in the third point: that there is ultimately a false dichotomy between AI opportunity and AI safety. That’s not to say that there are no trade-offs between functionality and safety—there are, inevitably—but the two do not need to be inconsistent in the medium to long term, as I aim to explain below.1
This idea of a false dichotomy was explicit at the TBI event. It began with Lucilla Sioli, Director of the European AI Office, emphasizing the dual role of her office in both promoting European AI and implementing the EU AI Act. Later in the same panel, Rajeev Chandrasekhar, who was India’s Minister for Electronics and Information Technology until June 2024, touted the benefits of Indian legislation on AI, suggesting that it (compared to European legislation) does a better job of ensuring that innovation is not the enemy of citizens’ rights. In response, Sioli (cheerfully) repeated that she genuinely meant what she said—that European regulation aims at both goals consistently.
Some (such as AI accelerationists) might dismiss this as an exchange between bureaucrats who don’t grok the corrosive effect of regulation on innovation. But Sioli and Chandrasekhar are onto something. This takes us back to the first point about AI in the real world.
What is to be done?
The question we need to answer about AI is the practical one: “What is to be done?”—Lenin’s question. Fortunately, in the case of AI, we don’t need the Russian Revolution. Rather we need more of what we have always done as humans, even though AI looks so new and different.
This is my central insight from the discussions last weekend:
Because AI is a technology that substitutes for human capabilities in circumstances that are wide-ranging, unpredictable and rapidly changing, we have no choice but to continue engaging with those circumstances practically and in detail, in a way that benefits individuals and society.
The world is a messy place, with much progress and destruction, intelligence and stupidity. AI will certainly create both good and bad, and the good and bad are not separable, or at least not fully separable—this is where Sioli and Chandrasekhar are plainly correct about the false dichotomy. In this messy, complicated world, each of us should act in a way that benefits individuals and society. Of course, there will be selfish people, and misguided people, and oligarchs who want power, and others who take things in wrong directions. It has always been so. The forces of destruction will inevitably exploit AI—and we must continually tip the balance towards it being a force for good.
This may seem only a theoretical, aspirational perspective, so let me take a practical example, involving the work of Elon Musk and DOGE in the US government. If we believe that he is not properly respecting the humans involved, let’s take action as voters, executives and thinkers to blunt the effect of his actions. On the other hand, part of what Musk is doing is an effort to stem a massive volume of fraudulent payments by the US government—and it is reasonable to believe that AI can help, subject to significant challenges. In such a complex set of interacting circumstances, we have no choice but to dive in and engage with the details.
There may be a role for AI regulation in this approach, but we must recognize the inherent challenges with regulation of fast-moving AI, the lack of international consensus on how to regulate this inherently global technology, and the outright hostility to AI regulation from important quarters (e.g. the speech by US Vice President JD Vance at the AI Action Summit).
So what I intend to do, within my sphere of influence—as founder of an AI-driven edtech company, and in other AI-related activities—is engage with the details of delivering AI applications (like PlaylistBuilder) effectively and responsibly, aiming to make things better rather than worse by considering the consequences of my actions. No one knows these consequences in my domain better than I do, because this is where I live. My hope is that in society more broadly—in many other domains—there will be enough of us pulling in the right direction to make things better.
There is extensive evidence that the arc of history has bent towards progress and greater human flourishing over a period of many centuries, set out in detail in books like Factfulness by Hans Rosling and Enlightenment Now by Steven Pinker. This is not to argue that there are no serious problems in the world (e.g. disease, climate change, inequality and poverty) or bad actors, but rather that there is good reason to believe that things can be better through collective action.
On balance, I won’t boldly claim that the future of AI is for the good of humanity, but I am optimistic, and what other rational choice do we have than to aim for the best? Please join me on this journey.
Caveat lector: these ideas are nascent and I plan to explore them later in more depth.