Zero Latency Launches Zerogrid Closed Beta, a Distributed AI Inference Grid

Select Fortune 1000 enterprises, tier 1 telcos and leading DevOps platforms join the first constraint-aware AI inference grid.


CHARLOTTESVILLE, VA, May 06, 2026 (GLOBE NEWSWIRE) -- Zero Latency (0.lat), formerly known as Hyphastructure, today announced the closed beta launch of Zerogrid, a distributed AI inference grid that routes inference workloads by matching each inference decision to edge capacity that simultaneously satisfies low-latency, data-gravity and bursting constraints.

The beta program is now open to a select cohort of Fortune 1000 enterprises, tier 1 telecommunications and fiber operators, and leading enterprise DevOps application platforms.

Beta participants gain immediate access to a Zerogrid workload and image management dashboard. A command-line interface (CLI) will be introduced during the program and refined based on user feedback.

The Architecture: A Virtual Power Plant for Compute      

Zerogrid is architecturally modeled on behind-the-meter distributed virtual power plants (VPPs), a design pattern that Zero Latency’s founders have spent nearly a hundred collective years building across the energy sector. Zero Latency owns and operates a network of edge computing clusters across the United States and coordinates them as a single pool of capacity.

Rather than provisioning capacity statically, Zero Latency aggregates these clusters and dispatches them against workloads on a day-ahead and real-time basis, as well as under longer-term arrangements, mirroring how distributed energy resources are operated in modern power markets. A simplified illustration of this pooling model appears below.
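The sketch below shows, in general terms, what that pooling model could look like; the cluster names, GPU counts and field names are hypothetical illustrations, not Zerogrid's actual interfaces. Each cluster commits part of its capacity a day ahead, and whatever remains is offered into a shared real-time pool, loosely mirroring how distributed energy resources are scheduled.

```python
# Illustrative sketch only: hypothetical clusters and numbers, not Zerogrid's API.
# Each edge cluster commits some capacity day-ahead; the rest joins a real-time pool.
clusters = {
    "edge-richmond": {"total_gpus": 64, "day_ahead_committed": 40},
    "edge-ashburn":  {"total_gpus": 32, "day_ahead_committed": 10},
}

def real_time_pool(clusters: dict) -> int:
    """Capacity left over after day-ahead commitments, dispatchable on demand."""
    return sum(c["total_gpus"] - c["day_ahead_committed"] for c in clusters.values())

print(real_time_pool(clusters))  # -> 46 GPUs available for real-time dispatch
```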

This approach aligns with the AI grid concept that Nvidia articulated, a networked, dispatched layer of compute that treats inference as a grid service. Zerogrid extends that vision with constraint-aware dispatch, ensuring each workload reaches not just available capacity, but capacity that satisfies its specific operational envelope.
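The following sketch illustrates what constraint-aware dispatch of this kind could look like in general terms; the data structures, cluster names and thresholds are hypothetical examples for illustration, not Zerogrid's API. A request is placed only on a cluster that satisfies its latency bound, data-residency region and burst headroom at the same time, rather than on whatever capacity happens to be nearest.

```python
# Hypothetical illustration of constraint-aware dispatch; not Zerogrid's actual API.
from dataclasses import dataclass

@dataclass
class EdgeCluster:
    name: str
    region: str           # where the cluster (and its data) physically sits
    latency_ms: float     # measured round-trip latency to the requesting workload
    burst_headroom: int   # spare concurrent requests the cluster can absorb right now

@dataclass
class InferenceRequest:
    max_latency_ms: float  # hard latency bound for this decision
    data_region: str       # region the input data must not leave (data gravity / residency)
    burst_size: int        # concurrent requests this call will fan out into

def dispatch(request: InferenceRequest, clusters: list[EdgeCluster]) -> EdgeCluster | None:
    """Return a cluster that satisfies every constraint at once, or None if the
    request cannot be placed. Among feasible clusters, prefer the lowest latency."""
    feasible = [
        c for c in clusters
        if c.latency_ms <= request.max_latency_ms
        and c.region == request.data_region
        and c.burst_headroom >= request.burst_size
    ]
    return min(feasible, key=lambda c: c.latency_ms, default=None)

# Example: a request that must stay in "us-east", finish within 20 ms,
# and burst to 50 concurrent calls.
clusters = [
    EdgeCluster("edge-richmond", "us-east", 8.0, 120),
    EdgeCluster("edge-dallas", "us-central", 5.0, 500),  # fastest, but wrong region
    EdgeCluster("edge-ashburn", "us-east", 12.0, 30),    # right region, too little headroom
]
print(dispatch(InferenceRequest(20.0, "us-east", 50), clusters))  # -> edge-richmond
```

The point of the example is the selection rule: the lowest-latency cluster overall (edge-dallas) is rejected because it violates the data-residency constraint, so the dispatcher settles on the best cluster within the feasible set.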

The result is an infrastructure tier purpose-built for a world where regulatory fracturing, sovereign AI requirements and heterogeneous enterprise constraints mean compute must come to the workload, not the other way around.

The Zero Latency team has pioneered and deployed first-of-a-kind battery, solar, demand response, electric bus, vehicle and distributed natural gas infrastructure. The team is applying the same architectural logic that unlocked over a billion dollars of decentralized power infrastructure behind customer meters to a new challenge: routing the right compute capacity to the right place, under constraint, at the moment it is needed.

"Innovation through decentralization is not a thesis we arrived at recently," said Michael Huerta, Co-founder of Zero Latency. "It is the lens through which we have built, financed and operated infrastructure for decades. We have applied the successes and hard lessons from deploying decentralized power infrastructure to unlock architectural and routing innovations for AI workloads. Zerogrid is the result: infrastructure designed for an inference world that the cloud was never built to serve."          

The Problem: Cloud and On-Prem Leave Inference on the Table

For AI training, the market is well-served. Hyperscalers and neoclouds have built enormous capacity, and Zero Latency does not compete in that space. But AI inference is a structurally different problem, particularly inference that must satisfy hard constraints around latency, data residency or regulatory geography. Cloud providers route regionally, not by constraint, while on-premises deployments are rigid by design.

Zero Latency was founded on the conviction that AI workloads deserve to be treated as first-class routing primitives. Not "pick a region." Route this specific inference decision to where burst, data-gravity and latency requirements are all satisfied. That is the problem Zerogrid was built to solve, and the one that neither cloud nor on-premises architectures address at scale.

 

About Zero Latency

Zero Latency (0.lat) is a Charlottesville, Virginia-based distributed AI infrastructure company. Zerogrid, its AI inference grid, treats AI workloads as first-class routing primitives, dispatching inference decisions to edge capacity that satisfies latency, geo-spatial and other operational workload constraints. Zerogrid is currently in closed beta. Learn more at www.0.lat.

Attachments

 
Zero Latency (0.lat)

Contact Data

GlobeNewswire