Tech CEOs have long argued that for exponential improvements and emergent properties to develop in AI, increased scale of computing power is all you need. This narrative has driven billions of dollars of investment in semiconductors. But in recent weeks, the Chinese AI company DeepSeek has claimed to have produced models comparable to OpenAI's with a fraction of the compute and cost. Was the Tech CEO narrative about scale simply self-serving all along, and at its core little more than a money grab? Dr Danyal Akarca argues here that scale is not sufficient to take AI to the new heights we have been promised.
It is hard to overstate the seismic shift that AI has undergone over the last year, reshaping both the capabilities of these systems and the world's perception of what is possible. With perhaps some exceptions, it is fair to say these systems have exceeded many lofty expectations. Public opinion and understanding are evolving at a rapid pace, and governments are priming themselves for a new future.
___
I believe the motif at the centre of this rapid recalibration of perception is our understanding of scale: what to scale, why to scale, and how to scale intelligence.
___
At the same time, the economic landscape surrounding the building and deployment of AI is increasingly murky. As I write, NVIDIA is recovering from a 17% fall in its market valuation, worth just shy of $600 billion, the largest single-day drop by any US-listed company in history. The fall was triggered by the (delayed) realisation that the breakthrough model from DeepSeek, China's most famous AI company, reportedly required orders of magnitude less capital to build. Commentators are now questioning the central economic assumptions about the cost of intelligence.