
Supply Side Economics for Compute


In tech, perhaps more than in any other industry, problems are constrained not just by physics or practicality, but by imagination. We’ll come back to that later.

We’ve written before about how neoclouds and decentralized compute platforms like Theta EdgeCloud are helping to organize existing GPU supply more efficiently. But the case for doing so is getting more urgent and a lot more obvious.

Numbers out just this week tell the story. ChatGPT’s launch in 2022 kicked off a wave of AI innovation, and the hunger for processing power capable of handling AI workloads has set off a race for the hardware that makes it possible.

The massive ramp-up in demand has, unsurprisingly, outpaced supply, and as any amateur student of economics knows, prices have risen accordingly. In just the last four months, Nvidia, AMD and Intel GPU prices have climbed 15%, according to an investigation published by TechSpot just days ago. A single model, the RTX 5090, is up an incredible 31%. Rising demand against static supply means higher prices. No surprises there; this is economics 101.

The AI boom is also having a number of less well-predicted secondary effects. Beyond graphics cards, shortages and price rises have spread across the core parts of a typical gaming PC, especially RAM and SSD storage, as data-centre builders consume vast quantities of the same memory chips used in consumer hardware. The net effect is that gaming PCs are radically more expensive to put together than they were a few years ago, possibly threatening one of the only profitable media industries we have left.

As if we didn’t have enough to worry about.

Here is where imagination comes in.

The supply problem is real, but it’s also incomplete. There is an enormous amount of compute already out there, sitting idle. According to Microsoft’s 2022 annual report, more than 1.4 billion devices run Windows 10 or 11 globally. Nvidia has described a consumer RTX installed base of over 100 million GPUs. These machines are not doing much of anything for large stretches of the day. An Energy Star discussion guide found that desktops spend roughly two-thirds of their time in sleep or off modes, and notebooks even more.

It’s not just consumer hardware either. A 2024 report from Lawrence Berkeley National Laboratory models average server utilisation as low as 20% in smaller data centres and only around 50% even at hyperscale. This idleness problem goes back to at least 2015: research by Jonathan Koomey and Jon Taylor found over a decade ago that roughly 25 to 30 percent of enterprise servers qualify as “zombie” machines, racked and powered but delivering no useful work for six months or more.

That is a staggering amount of wasted capacity in a world that is supposedly desperate for more of it.
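The figures above are rough, and so is any arithmetic built on them, but a quick back-of-envelope sketch gives a sense of scale. The installed base and idle fraction come from the Nvidia and Energy Star figures quoted above; the average per-card throughput is our own assumption, not a sourced number:

```python
# Back-of-envelope estimate of idle consumer GPU capacity, using the
# figures quoted above. All inputs are rough public estimates, not
# measurements; AVG_TFLOPS in particular is an illustrative assumption.

RTX_INSTALLED_BASE = 100_000_000   # Nvidia's stated consumer RTX installed base
IDLE_FRACTION = 2 / 3              # Energy Star: desktops in sleep/off ~2/3 of the time
AVG_TFLOPS = 20.0                  # assumed average FP32 TFLOPS per consumer RTX card

idle_gpu_equivalents = RTX_INSTALLED_BASE * IDLE_FRACTION
idle_exaflops = idle_gpu_equivalents * AVG_TFLOPS / 1_000_000  # TFLOPS -> EFLOPS

print(f"~{idle_gpu_equivalents / 1e6:.0f} million GPU-equivalents idle at any time")
print(f"~{idle_exaflops:.0f} EFLOPS of nominal FP32 capacity sitting unused")
```

Even if only a small fraction of that nominal capacity could ever be harvested, the pool is enormous relative to what any single data-centre build-out adds.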

Now, admittedly, none of this is a magic fix. Consumer devices are intermittent. Enterprise servers carry compliance and security baggage. But decentralized compute networks and edge computing platforms like Theta EdgeCloud are already proving that idle hardware can be aggregated into something useful.

And we should, because we aren’t putting AI back in the box. Nor should we want to. AI is already accelerating drug discovery for complex illnesses, improving climate modelling, making precision agriculture viable, and helping researchers process genomic data at speeds that were unthinkable five years ago. The benefits are tangible and growing.

But AI also has a perception problem that as an industry we can’t afford to ignore. When people see GPU prices climbing and their gaming hobby getting more expensive, it’s harder to sell them on the upside. One of the most practical things the industry can do to address that is expand the supply side of the equation. Not just by building more data centres, but by bringing the compute we already have into circulation.

Call it supply-side economics for compute. The theory being that if you increase the available pool of resources, prices come down and more people benefit. Reaganomics it is not, but the principle holds.
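The economics-101 intuition can be made concrete with a toy linear supply/demand model. Every coefficient here is made up purely for illustration; the point is only that shifting the supply curve outward lowers the clearing price:

```python
# Toy linear supply/demand model, purely illustrative: all coefficients
# are invented. Demand falls with price, supply rises with it; bringing
# idle capacity online shifts the supply curve outward and the
# equilibrium (market-clearing) price drops.

def equilibrium_price(demand_intercept, demand_slope,
                      supply_intercept, supply_slope):
    """Solve demand_intercept - demand_slope*p = supply_intercept + supply_slope*p."""
    return (demand_intercept - supply_intercept) / (demand_slope + supply_slope)

# Baseline market for compute.
p0 = equilibrium_price(demand_intercept=100, demand_slope=2,
                       supply_intercept=10, supply_slope=3)

# Same demand, but idle compute brought into circulation adds 25 units
# of supply at every price level (an outward shift of the supply curve).
p1 = equilibrium_price(demand_intercept=100, demand_slope=2,
                       supply_intercept=35, supply_slope=3)

print(f"baseline price: {p0:.1f}, with idle compute online: {p1:.1f}")
```

With these made-up numbers the clearing price falls from 18 to 13; the direction of the effect, not the magnitude, is the point.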

That’s the imagination part, and we have the infrastructure to do it.

If you would like to add to the supply and contribute your spare compute, follow our Theta EdgeCloud client guide here.

This article is part of Theta’s Thought Leadership Series. To read more articles like this, follow our Medium page.


Supply Side Economics for Compute was originally published in Theta Network on Medium, where people are continuing the conversation by highlighting and responding to this story.
