Data Center 2035: Sam Altman and “Intellectual Capacity”

Is AI getting “smarter”? Are we getting more stupid?

Last week at UTokyo GlobE #14, Sam Altman suggested that, in a decade’s time, “a single data center [will be] smarter than the current total intellectual capacity of earth.”

This is partly a comment on the rapid scaling of AI compute. But it’s also a claim about the increasing intelligence of the models themselves, coming a few weeks after Altman reflected in a podcast interview that his unborn child “is never gonna grow up being smarter than AI”.

Sam Altman speaking at UTokyo GlobE #14.

There are plenty of things that large language models (LLMs) are good at, from assisting with certain kinds of coding to producing those SEO-optimised recipes that tell a long story before sharing the ingredients or how to cook them.

However, I’m suspicious of claims about their intelligence, and of the validity of the benchmarks used to justify them. There are two main reasons for this:

  1. Training AI models to pass tests isn’t the same as teaching them to think. As Gary Marcus puts it, “doing well on a benchmark is not actually tantamount to being intelligent.”
  2. Any claims should be viewed in the context of an industry that’s desperately trying to demonstrate the commercial viability of its product. Since the launch of DeepSeek—a less resource-intensive, open-source competitor—these pressures have grown considerably.

Of course, this all depends on how you define intelligence.1 For the purposes of Altman’s prediction, the “total intellectual capacity on earth” is a quantifiable resource consisting of “all of the people, all of the coordination, all the AI”. Thinking is something you get MORE of. By 2035, we’ll have LOADS.

Meanwhile, Cambridge Dictionary defines intellectual capability as the “ability to think and understand things”. Considered in these terms, today’s LLMs are nothing special. They simulate the way people use language to produce outputs that imitate intelligence. Reasoning models show their working, or at least pretend to.

Tellingly, OpenAI and Microsoft’s contractual definition of so-called “Artificial General Intelligence” (AGI) is any AI system that can generate at least $100 billion in profits. In their current agreement, “intelligent” is whatever makes eye-watering amounts of money.

Again, these are for-profit companies (to all intents and purposes, in the case of OpenAI). Take their claims with a pinch of salt.

To be fair, Altman qualified his 2035 claim. First off, he said “maybe” to hedge his bets.

Secondly, he noted that “we’re all going to individually get more capable”. His point was that both human and AI intelligence will increase over time, so the capabilities of each in 2025 have to be compared with those in 2035.

However, Altman’s assumption that humans will necessarily become “more capable” as a result of AI use is questionable. In fact, in a recent study of 319 knowledge workers, Microsoft researchers found that there may be an inverse relationship between the two.

There’s a mix of quantitative and qualitative analysis in the paper. It’s all solid—no salt needed here. The gist is that GenAI tools harm critical thinking. As the researchers put it…

“[GenAI] can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.”

The study suggests that knowledge work will increasingly shift from “task execution” to forms of (hopefully critical) “AI oversight” and “stewardship”. However, this process of outsourcing cognitive tasks and then checking the results is undermined by diminished critical thinking: routinely using AI makes people less effective at tackling the outliers. As the paper puts it…

“a key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.”

Crucially, the more confident people become in the output of models, the more extreme this effect is: “Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort.” 

Over time, this could chip away at our intellectual capability to think and understand things effectively. People will get less intelligent as trust in AI tools increases and their cognitive muscles atrophy.

Credit: SanyaRambo.

Of course, Altman’s proposed 2035 scenario is just another hot take in the endless discussion of “AGI”, “superintelligence”, the “AI singularity”, and quantum computing (a contender for the next speculative bubble).

But when researchers from Microsoft—a major partner and “minority owner” of OpenAI—find that knowledge workers risk getting worse at critical thinking and problem solving as a result of GenAI tools, it’s worth listening. 

AI models are likely to perform better and better on benchmarks over time, whether or not that constitutes intelligence. They’ll get more efficient too, enabling them to be smaller and/or accomplish more with the same amount of compute.

However, there are no guarantees that human intelligence will benefit from these advances. The opposite may well be true.

Who knows? A version of Altman’s prediction could prove correct—just not for the reason he thinks. An AI data center might end up “smarter” by virtue of quite how stupid we become…


  1. It’s worth noting that the idea of quantifiable “intelligence” has always been racialised to a greater or lesser extent. It can’t be entirely untangled from that and shouldn’t be reproduced uncritically. ↩︎
