Note to readers: The name of this Substack has changed from ‘Maury’s Substack’ to ‘LearnTech’: to capture its twin focuses, learning about technology and technology for learning; to align with the LearnTech meetup; and to create the possibility of co-authorship. Now, on to my regular content …
AI safety is about “preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems.”1
Well … yes and no. Preventing those bad things is desirable, but prevention alone is an impractical goal in the lightning-fast world of AI. As I recently wrote, we must avoid the false dichotomy between AI opportunity and AI safety, and make AI work practically in the real world. Andrew Ng recently wrote that ‘responsible AI’ is a much better term than ‘AI safety’, for similar reasons.
This post is about why those views are consistent with how the world actually works—globally and locally.
Clean and messy virtue
Cutting through the choice of terms, most of us want—in the words of Spike Lee—to “do the right thing” with AI. But reasonable people will differ about what the ‘right thing’ is.
The Founding Fathers of the United States thought about right action in terms of ‘virtue’. For example, Thomas Jefferson wrote:
[W]ithout virtue, happiness cannot be.2
[L]iberty … is the great parent of science & of virtue; and … a nation will be great in both always in proportion as it is free.3
Jefferson’s idea of practical virtue—linked with liberty and science—can be used as a lever to clarify the distinction between ‘AI safety’ and ‘responsible AI’.
Oxford Languages defines ‘virtue’ (in modern usage) as “behaviour showing high moral standards”. This ‘clean’ definition of virtue is a lot like the cul de sac of high principle in which the AI safety community has gotten stuck. “High moral standards” are great in principle, but they must be applied to be useful.
By contrast, Etymonline explains that ‘virtue’ comes from Latin virtutem, meaning “moral strength, high character, goodness; manliness; valor, bravery, courage (in war); excellence, worth”.4 This older definition is more specific, and messier. Courage in war, for example, is tightly linked to the manifold, changing and unpredictable circumstances of war. In short, real virtue is messy.
The current frenzy of AI progress is not a war, but the analogy has merit.
Think global, act local
A more modern analogy for the challenges of getting AI right can be found in the 1970s slogan ‘Think global, act local’. Originally used in the environmental context, the slogan urges us to remember that global problems (like environmental damage, or risks of AI) are addressed most effectively by consistent, repeated action in specific local contexts.
The slogan draws on earlier thinking, like that of Patrick Geddes in 1915 on urban planning:5
'Local character' … is attained only in course of adequate grasp and treatment of the whole environment, and in active sympathy with the essential and characteristic life of the place concerned.
In sum, “high moral standards” and other global ideals do us good primarily when we get our hands dirty by engaging with the specific environments in which we find ourselves.
Let’s take a few examples, shall we?
Examples of the messy virtue of acting locally are not hard to find.
A recent Stanford study on large language models and bias is a great example in the AI space. The researchers identified some potential technical interventions to reduce bias in LLMs, but found that, overall, bias is a complex phenomenon that cannot be isolated to specific LLM neurons or weights. They therefore concluded that responsibility for reducing bias (think global) should rest primarily with users and deployers of LLMs (act local) rather than with LLM developers.
This is opposite to the approach taken by the EU AI Act, which places limited obligations on model deployers and much more extensive obligations on developers of AI systems, including general-purpose AI (GPAI) models. Not surprisingly, the attempt to specify such details in a code of practice for GPAI models is tying the EU AI Office in knots.
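To make the ‘act local’ conclusion concrete, here is a minimal sketch—entirely my own illustration, not code from the Stanford study or any real deployment—of what deployer-side mitigation can look like: the deployer encodes its local norms as a filter at the point of use, rather than waiting for the model developer to retrain weights. All names and patterns below are hypothetical.

```python
# A minimal sketch of deployer-side ("act local") bias mitigation.
# Hypothetical illustration only; not from the Stanford study.

from typing import Callable


def make_local_guardrail(blocked_patterns: list[str],
                         fallback: str) -> Callable[[str], str]:
    """Return a deploy-time filter encoding one deployment's local norms."""
    def guardrail(model_output: str) -> str:
        lowered = model_output.lower()
        for pattern in blocked_patterns:
            if pattern in lowered:
                # Act locally: override the output rather than retrain the model.
                return fallback
        return model_output
    return guardrail


# Each deployer configures the same mechanism differently for its own context.
hiring_guardrail = make_local_guardrail(
    blocked_patterns=["ideal candidate is a man", "too old for this role"],
    fallback="[response withheld: flagged by local bias policy]",
)

print(hiring_guardrail("The ideal candidate is a man in his twenties."))
# -> [response withheld: flagged by local bias policy]
```

A real deployment would use something far more sophisticated than substring matching, but the allocation of responsibility is the point: the list of what counts as unacceptable lives with the deployer, who knows the local context.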
Examples of the same principle abound in other domains. I’ll look first at another technical safety domain, and then at politics / economics.
Aviation safety rules have developed over many years, producing—usually—a remarkably safe air transport system, with more than a 50X reduction in fatalities per mile since 1970 (after equally significant gains before that). Global best practice has been learned from many local mistakes, which unfortunately and inevitably continue, as two recent incidents illustrate.
On January 29, a Bombardier CRJ approaching Ronald Reagan Washington National Airport collided with a US Army Black Hawk helicopter, killing everyone on both aircraft. The collision resulted from a series of local mistakes, including a risky helicopter route, reduced air traffic control staffing, pilot confusion in a complex visual environment, and communications errors.
On February 17, another Bombardier CRJ flipped on landing at Toronto Pearson International Airport due in part to adverse weather conditions, but there were no fatalities despite an explosion. The relatively good outcome was attributed to aircraft design and effective response by the cabin crew and rescue teams.
So what is ‘virtue’ in response to these accidents? Fairly obviously, aviation authorities will engage in painstaking detail with the local mistakes and unanticipated circumstances, and draw whatever lessons can be drawn—possibly resulting in changes to global rules. That approach is how aviation safety works, and is a major reason for its long-term success. AI safety could work in a similar way as it matures.
Moving on to politics, it’s a truism that “all politics is local”—a statement associated with, but not originated by, Tip O’Neill, who was Speaker of the US House of Representatives in the 1970s and 1980s. O’Neill was a leader in a Democratic Party that was still good at local politics. The roots of the modern Democratic Party are in things like organizing workers to defend their rights—Cory Doctorow recently blogged an excellent (and typically polemical) summary of the history of union organizing. But the Democrats seem to have lost much of that local energy to a focus on high-level, rights-driven principles. This focus (while admirable in some details) was effectively dismissed by Republicans as “woke”, and appears to have been a major reason for the Republican victory in the 2024 US elections—more specifically, a perception that the Democrats had lost touch with real (local) popular needs.
A starker political and economic example is the progressive economic failure of the centrally planned economy of the Soviet Union, culminating in the collapse of the Soviet government in 1991. China faced similar economic malaise until the late 1970s, when Deng Xiaoping allowed the introduction of local private business initiative, among other capitalist elements, into the Chinese economy—called “socialism with Chinese characteristics”—resulting in more than 40 years of startling economic growth. Most recently, the re-centralization of the Chinese economy under Xi Jinping and the accompanying economic slowdown have led to an apparent weakening of Xi’s grip on Chinese politics and a return to private-sector initiative.
It is simply not possible to run a country effectively—particularly one as large as the United States, Russia or China—without enabling significant political and economic localism.
Local, messy action for responsible AI
The point, which by now is presumably obvious, is that responsible AI (using Andrew Ng’s preferred term—and I think now mine!) requires a detailed focus on local conditions. These local conditions include specific applications, types of users, training and inference data, AI models and other technical details, locations, and various other factors. Combined across their many dimensions and permutations, such features produce a highly complex (and often messy) environment for AI deployment. But the messes likely to ensue from a failure to engage with these conditions are far more troublesome than the ones we will inevitably encounter when we engage with them properly.
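As a purely hypothetical sketch of what engaging with those conditions might look like in practice, a deployer could capture them in a structured record at review time. The fields below simply restate the dimensions listed above; none of this is a standard or anyone’s actual framework.

```python
# A hypothetical record of the "local conditions" for one AI deployment.
# Field names are my own illustration of the dimensions listed above.

from dataclasses import dataclass, field


@dataclass
class DeploymentContext:
    application: str           # e.g. resume screening, tutoring
    user_population: str       # who actually uses the system
    training_data_notes: str   # known gaps or skews in training data
    inference_data_notes: str  # what the system sees at run time
    model: str                 # model family, version, hosting arrangement
    location: str              # jurisdiction and physical setting
    known_risks: list[str] = field(default_factory=list)


ctx = DeploymentContext(
    application="resume screening",
    user_population="HR staff at a mid-sized employer",
    training_data_notes="web-scale corpus; occupational stereotypes likely",
    inference_data_notes="free-text resumes in several languages",
    model="vendor-hosted general-purpose LLM",
    location="EU (likely high-risk under the AI Act)",
    known_risks=["gender bias in shortlisting"],
)
print(ctx)
```

The value is not in the data structure itself but in forcing the act-local questions to be asked, deployment by deployment.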
Letter from Thomas Jefferson to Amos J. Cook, 1816.
Letter from Thomas Jefferson to Joseph Willard, 1789.
This definition/etymology is sexist, which is not surprising given its Roman origins. I ask readers to put that to one side and focus on my core point.
Wikipedia, Think globally, act locally.