Decentralized GPU networks are pitching themselves as a lower-cost layer for running AI workloads, while training the latest models remains concentrated inside hyperscale data centers.
Frontier AI training involves building the largest and most advanced systems, a process that requires thousands of GPUs to run in tight synchronization.
That level of coordination makes decentralized networks impractical for top-end AI training, where internet latency and reliability cannot match the tightly coupled hardware in centralized data centers.
Most AI workloads in production do not resemble large-scale model training, opening space for decentralized networks to handle inference and everyday tasks.
“What we are beginning to see is that many open-source and other models are becoming compact enough and sufficiently optimized to run very efficiently on consumer GPUs,” Mitch Liu, co-founder and CEO of Theta Network, told Cointelegraph. “This is creating a shift toward open-source, more efficient models and more economical processing approaches.”
Training frontier AI models is highly GPU-intensive and remains concentrated in hyperscale data centers. Source: Derya Unutmaz
From frontier AI training to everyday inference
Frontier training is concentrated among a few hyperscale operators, as running large training jobs is costly and complex. The latest AI hardware, like Nvidia’s Vera Rubin, is designed to optimize performance within integrated data center environments.
“You can think of frontier AI model training like building a skyscraper,” Nökkvi Dan Ellidason, CEO of infrastructure company Ovia Systems (formerly Gaimin), told Cointelegraph. “In a centralized data center, all the workers are on the same scaffold, passing bricks by hand.”
That level of integration leaves little room for the loose coordination and variable latency typical of distributed networks.
“To build the same skyscraper [in a decentralized network], they have to mail each brick to one another over the open internet, which is highly inefficient,” Ellidason continued.
AI giants continue to absorb a growing share of global GPU supply. Source: Sam Altman
Meta trained its Llama 4 AI model using a cluster of more than 100,000 Nvidia H100 GPUs. OpenAI does not disclose the size of the GPU clusters used to train its models, but infrastructure lead Anuj Saharan said GPT-5 was launched with support from more than 200,000 GPUs, without specifying how much of that capacity went to training versus inference or other workloads.
Inference refers to running trained models to generate responses for users and applications. Ellidason said the AI market has reached an “inference tipping point.” While training dominated GPU demand as recently as 2024, he estimated that as much as 70% of demand will be driven by inference, agents and prediction workloads in 2026.
“This has turned compute from a research cost into a continuous, scaling utility cost,” Ellidason said. “Thus, the demand multiplier through internal loops makes decentralized computing a viable option in the hybrid compute conversation.”
Related: Why crypto’s infrastructure hasn’t caught up with its ideals
Where decentralized GPU networks really fit
Decentralized GPU networks are best suited to workloads that can be split, routed and executed independently, without requiring constant synchronization between machines.
“Inference is the volume business, and it scales with every deployed model and agent loop,” Evgeny Ponomarev, co-founder of decentralized computing platform Fluence, told Cointelegraph. “That is where cost, elasticity and geographic spread matter more than perfect interconnects.”
In practice, that makes decentralized networks and gaming-grade GPUs in consumer environments a better fit for production workloads that prioritize throughput and flexibility over tight coordination.
Low hourly prices for consumer GPUs illustrate why decentralized networks target inference rather than large-scale model training. Source: Salad.com
“Consumer GPUs, with lower VRAM and home internet connections, do not make sense for training or workloads that are highly sensitive to latency,” Bob Miles, CEO of Salad Technologies, an aggregator for idle consumer GPUs, told Cointelegraph.
“Today, they are more suited to AI drug discovery, text-to-image/video and large-scale data processing pipelines. For any workload that is cost sensitive, consumer GPUs excel on price performance.”
Decentralized GPU networks are also well-suited to tasks such as collecting, cleaning and preparing data for model training. Such tasks often require broad access to the open web and can be run in parallel without tight coordination.
This kind of work is hard to run efficiently inside hyperscale data centers without extensive proxy infrastructure, Miles said.
When serving users around the world, a decentralized model can have a geographic advantage: it can cut the distance requests have to travel and the number of network hops before reaching a data center, which add latency.
“In a decentralized model, GPUs are distributed across many locations globally, often much closer to end users. As a result, the latency between the user and the GPU can be significantly lower compared to routing traffic to a centralized data center,” said Liu of Theta Network.
Theta Network is facing a lawsuit filed in Los Angeles in December 2025 by two former employees alleging fraud and token manipulation. Liu said he could not comment on the matter because it is pending litigation. Theta has previously denied the allegations.
Related: How AI crypto trading will make and break human roles
A complementary layer in AI computing
Frontier AI training will remain centralized for the foreseeable future, but AI computing is shifting toward inference, agents and production workloads that require looser coordination. Those workloads reward cost efficiency, geographic distribution and elasticity.
“This cycle has seen the rise of many open-source models that are not at the scale of systems like ChatGPT, but are still capable enough to run on personal computers equipped with GPUs such as the RTX 4090 or 5090,” Liu’s co-founder and Theta tech chief, Jieyi Long, told Cointelegraph.
With that level of hardware, users can run diffusion models, 3D reconstruction models and other meaningful workloads locally, creating an opportunity for retail users to share their GPU resources, according to Long.
Decentralized GPU networks are not a replacement for hyperscalers, but they are becoming a complementary layer.
As consumer hardware grows more capable and open-source models become more efficient, a widening class of AI tasks can move outside centralized data centers, allowing decentralized models to fit into the AI stack.
Magazine: 6 weirdest devices people have used to mine Bitcoin and crypto
