It is really hard to know who to believe about artificial intelligence. Prominent (and apparently highly intelligent) people say, with confidence, wildly divergent things about AI. Below, I look at three leading examples of AI topics that reflect this confusion of views. Two are related in subject matter yet orthogonal as dimensions of belief (explanation below), while the third is conceptually distinct.
LLM promoters vs. LLM skeptics. With the huge amount of capital being invested in companies whose products are based on large language models (LLMs), it is unsurprising that the biggest boosters of LLM technology include the leaders of those companies, such as Sam Altman and Dario Amodei (although Altman has been sounding more cautious since the mixed reception of GPT-5 and a recent MIT study showing very weak returns on enterprise AI investments). On the other hand, Gary Marcus and Ed Zitron have been consistent and vocal critics of LLMs, and AI OG1 Yann LeCun has more recently joined the LLM skeptic camp.
AI optimists vs. AI doomers. Strongly optimistic views about AI tend to be closely linked to the digital optimism that we see from people like Marc Andreessen and Azeem Azhar. Andrew Ng also falls into the optimistic camp, with a focus more specifically on AI. At the same time, a significant number of prominent people have estimated frighteningly large p(doom), i.e. the probability of an existentially bad outcome like human extinction due to AI2. Advocacy organization ControlAI3 has pursued a campaign around “AI extinction risk”. This optimist/doomer spectrum is largely orthogonal to the first dimension of LLM promoters vs. skeptics: for example, Gary Marcus is generally optimistic about AI despite his strong skepticism of LLMs, while Dario Amodei is an LLM promoter but has estimated a material p(doom).
Believers and skeptics in machine consciousness. On an entirely different dimension, people like AI OG Geoff Hinton and Ilya Sutskever argue that AIs are likely to become conscious, or may already be so, while others like Mustafa Suleyman and Bernardo Kastrup doubt that AI will ever become conscious. And AI OG Yoshua Bengio argues that “It does not (or should not) really matter to our safety whether you want to call an AI conscious or not.”4
These divergent opinions are interesting in their own right, not least because AI seems to attract more divisive opinions than other technologies.5 Sam Altman recently noted the
attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology … .
My working hypothesis is that such strong views arise largely because AI appears to act with intelligence, and so evokes a strange and evolving combination of our hopes and fears about both humanity and technology. There is also a significant element of emotion whipped up by the current hype surrounding AI.6
So what should we do about this? Opinions about AI matter, in a world in which AI has ever-increasing effects on individuals and society. Who should we believe?
There is no easy answer to these questions, but to achieve a better understanding of wtf is going on with AI, I’d suggest a few simple rules of thumb7:
Recognize the uncertainty. A fundamental reason why views on AI diverge is that there is huge uncertainty about the future trajectory of AI. This uncertainty leads individuals to form divergent beliefs, shaped by incentives, philosophical commitments, and the influence of particular individuals, among other factors. Starting with a recognition that views on AI are actually highly uncertain beliefs is crucial.8
Look at motivations. As in any area of life, what people say is heavily influenced by their motivations and contexts. I give a few examples above.
Consider a middle path. Situations in which two sides have widely diverging views frequently indicate the existence of a middle path (to borrow a term most associated with Buddhism). There may be truths lurking in opposing points of view that help to inform a more balanced position. Searching for a sensible middle path about AI is a big part of what I aim to do with this Substack, LearnTech. At the same time, given the uncertainties around AI, it is entirely possible that, at least on some AI issues, the extremes may turn out to be where the truth lies.9 Certainly there have been some significant surprises in the recent history of AI, such as the remarkable effectiveness of training LLMs on huge data sets with huge compute resources.
Learn, and form your own view. Given the uncertainty around AI and the consequent opportunities to choose different paths, forming one’s own view is crucial. This is also a key element of what I am attempting to do with LearnTech, and it applies as much to you as it does to me. Don’t necessarily believe what you read in my posts, or in those of anyone else. Go out and continue to learn about AI, and then form your own view. In the complex, evolving AI ecosystem, there is a lot of room for people to have impact by developing and acting on their own convictions.
LeCun, Geoff Hinton and Yoshua Bengio are often referred to as ‘godfathers’ of AI, a term that I dislike. ‘OG’ seems more apt, and fun.
The various predictions of p(doom) appear to be highly contextual, and hard to compare. For example, Eliezer Yudkowsky, who is associated with a p(doom) estimate of greater than 95%, has more recently criticized use of the concept.
I have been interested by ControlAI since they launched a previous campaign on AI deepfakes at a splashy event at the Royal Society of Arts in London in December 2023. ControlAI is an obviously well-funded non-profit, but they decline to disclose the source of their funding (as confirmed to me more than once by ControlAI personnel).
I tend to agree with Bengio about this debate, which seems to be essentially a reformulation of the longstanding philosophical hard problem of consciousness.
The three areas of divergent opinion about AI that I selected are important ones, but far from the only ones. Another fascinating (and more subtle) debate is between AI as normal technology and AI as impending superintelligence (as exemplified by AI 2027). The exponents of the ‘normal technology’ approach, Arvind Narayanan and Sayash Kapoor, published a brilliant Substack post just yesterday on this debate.
There is a powerful rebuttal to AI hype in the ‘AI as Normal Technology’ paper mentioned in the previous footnote.
I was influenced to suggest rules of thumb by the excellent post ‘Academics are kidding themselves about AI’ by Harry Law, which suggests a larger number of dos and don’ts for criticism of AI.
With respect to the dimensions of belief summarized in this post, this uncertainty is more salient for LLM promoters vs. skeptics and AI optimists vs. doomers than it is for the AI consciousness debate, particularly if one accepts the view that the latter debate is unresolvable.
This presents an interesting contrast with real-world conflicts, in which, except where one party is much stronger, the best outcome usually involves some element of compromise. My favorite examples of such real-world compromise are the mostly peaceful resolutions of the conflicts in South Africa and Northern Ireland towards the end of the last century.