Westlake Village, Dec. 29, 2025 (GLOBE NEWSWIRE) -- Microcaps.com curates and contextualizes news, analysis, and market data across the public micro- and small-cap ecosystem, with a focus on emerging growth themes shaping investor sentiment. As AI infrastructure spending accelerates and capital flows increasingly target data centers, GPU clouds, and adjacent platforms, Microcaps is examining how these developments are being reflected in public-market valuations, from established leaders like Nvidia (NVDA) to newer, infrastructure-adjacent entrants gaining investor attention.
The artificial intelligence data center industry has quickly evolved from a niche buildout into one of the most capital-intensive sectors in technology. What began as a race to secure GPUs has turned into a broader competition for energy capacity, cooling systems, high-speed interconnection, real estate, and long-term power contracts. Increasingly, this transformation is moving into public markets, where investors are looking to the physical infrastructure enabling AI as the next frontier of opportunity.
Why AI data centers are different
AI-focused data centers are fundamentally different from traditional enterprise facilities. They are designed for large-scale model training and inference, requiring dense accelerator deployments, high rack power density, liquid or other advanced cooling, and low-latency networking. According to McKinsey & Co., global investment in AI-ready data centers could reach $5.2 trillion by 2030, reflecting the scale and intensity of demand (McKinsey).
These technical requirements are driving new partnerships, facility types, and capital strategies as AI shifts from experimental to operational at scale.
The role of GPU supply
AI infrastructure is often defined by access to high-performance compute, especially graphics processing units. Nvidia, one of the most prominent suppliers of GPUs, plays a central role in powering many of the world’s largest AI compute clusters (Wikipedia). Though hardware companies are key players, the larger story involves how power, physical capacity and integration models affect the scale and resilience of AI deployments—particularly as some operators explore hybrid approaches that blend owned infrastructure with third-party and network-based GPU capacity.
Valuation multiples and public market signals
Investor expectations are increasingly reflected in valuation multiples. In both public and private markets, companies exposed to AI infrastructure have seen enterprise value-to-revenue ratios in the 20-to-30-times range, particularly where growth visibility is high (Aventis Advisors). By contrast, the average price-to-sales ratio for S&P 500 companies remains around 2.8 (Eqvista).
These valuation differences underscore the market’s confidence in the long-term value of infrastructure that enables AI, even when it’s capital-intensive or early in monetization.
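To make that gap concrete, here is a minimal back-of-envelope sketch in Python. The $100 million revenue figure is purely hypothetical; the 25x multiple is the midpoint of the 20-to-30-times range cited above, and 2.8x is the S&P 500 average also cited above.

```python
# Back-of-envelope sketch: how revenue multiples translate into implied
# valuations. The $100M revenue figure is hypothetical; 25x is the midpoint
# of the 20-30x EV/revenue range and 2.8x the S&P 500 average price-to-sales
# cited in the text. (EV/revenue and price-to-sales are not identical
# metrics, but they serve here as a rough like-for-like comparison.)

def implied_value(revenue: float, multiple: float) -> float:
    """Implied valuation from an annual revenue figure and a multiple."""
    return revenue * multiple

revenue = 100_000_000  # hypothetical annual revenue: $100M

ai_infra = implied_value(revenue, 25.0)  # midpoint of the 20-30x range
sp500 = implied_value(revenue, 2.8)      # average S&P 500 price-to-sales

print(f"At 25x EV/revenue: ${ai_infra / 1e9:.1f}B")   # $2.5B
print(f"At 2.8x P/S:       ${sp500 / 1e6:.0f}M")      # $280M
print(f"Implied premium:   {ai_infra / sp500:.1f}x")  # ~8.9x
```

In other words, identical revenue can support a valuation nearly nine times larger when the market attaches AI-infrastructure growth expectations to it.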
AI infrastructure and data centers
GPU-focused cloud providers have emerged to serve training and inference workloads at scale. Some of these companies have attracted significant attention for their capital efficiency and growth rates. According to a recent Wall Street Journal article, such platforms have traded at revenue multiples as high as 13 times, reflecting investor appetite for scalable infrastructure models (Wall Street Journal).
Data center operators with experience in large-scale buildouts have also benefited. Oliver Wyman notes that infrastructure specialists often trade at 20-to-30-times EBITDA, given the strength of long-term leases and strategic real estate positioning (Oliver Wyman).
Infrastructure-adjacent valuations
Providers of supporting infrastructure, including power delivery, thermal systems, and high-performance fiber, are increasingly recognized as critical to AI expansion. These companies, while not compute providers themselves, have traded at revenue multiples of 20 or more during periods of elevated interest (Skeptically Optimistic).
Their role in enabling denser, more efficient compute environments makes them essential to the broader AI infrastructure stack.
The rise of ‘neoclouds’
A new class of cloud platforms, sometimes called “neoclouds,” has emerged to address AI-native workloads. These providers specialize in GPU infrastructure and orchestration tools tailored for AI applications. Their ability to secure scarce GPU supply and scale quickly has made them influential examples of how infrastructure strategy meets AI compute demand (Wikipedia – CoreWeave).
However, the capital intensity of these models—combined with execution risk—remains a topic of investor debate.
Infrastructure-adjacent public entrants
Some newer publicly traded firms are entering the AI market from adjacent angles—building data center infrastructure, facilitating energy deployments or offering highly specialized GPU-hosting services. In several cases, companies that originated in AI software or applied research have repositioned toward compute enablement as enterprise demand for flexible GPU access accelerates. Axe Compute (NASDAQ: AGPU), which recently rebranded from its earlier life sciences focus, is one example of this shift, reflecting how public-market entrants are adapting their strategies to align with the infrastructure layer of the AI economy. These companies often see their valuations driven more by future AI-aligned revenue potential than by current earnings. Their presence reflects the growing complexity and diversity of infrastructure models supporting the AI boom (Global Equity Briefing).
The enabling layer: landlords, interconnection and cooling
Data center landlords and interconnection platforms play a key role in the ecosystem, even when they do not directly offer AI compute. These providers benefit from long-term contracts, scarce land and high interconnection demand from cloud platforms and hyperscalers. As AI growth continues, these firms are seeing renewed attention from both investors and partners seeking strategic locations.
Cooling and energy delivery vendors, such as those specializing in liquid cooling or immersion systems, are increasingly vital as AI density grows. Their ability to adapt to next-generation hardware is becoming a competitive advantage.
The access layer: asset-light models
Asset-light platforms that aggregate compute capacity—rather than owning it—are emerging to meet short-term demand. These companies act as intermediaries between data center partners and end users, offering flexible pricing and availability; some public operators, including newer entrants like Axe Compute, are pursuing this model to monetize enterprise GPU access without assuming full hyperscale buildout risk. Their value is often measured through recurring revenue and partner depth, rather than hardware ownership.
This model appeals to developers and smaller enterprises that need scalable compute without long-term infrastructure commitments.
Capital intensity and execution risk define the space
Across all categories, AI data infrastructure is shaped by two opposing forces: massive demand growth and extremely high buildout costs. Capital requirements for land, power, cooling, and chips can run into the billions of dollars for a single deployment. At the same time, operators must navigate execution challenges such as grid access, supply chain constraints, permitting and sustainability.
As McKinsey notes, while AI workloads will continue to scale, so too will the need for long-term infrastructure strategy and financing innovation (McKinsey).
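A similarly hedged sketch illustrates why per-deployment costs reach into the billions. Every input below is an assumption chosen for scale intuition only; the facility size, cost per megawatt, accelerator price, and accelerator density are illustrative, not sourced figures.

```python
# Illustrative cost model for a hypothetical AI data center deployment.
# All inputs are assumptions for rough scale intuition, not sourced figures.

capacity_mw = 100            # hypothetical facility size, megawatts of IT load
capex_per_mw = 12_000_000    # assumed all-in shell/power/cooling cost per MW
gpu_cost = 30_000            # assumed cost per accelerator
gpus_per_mw = 600            # assumed accelerator count per MW of IT load

facility_capex = capacity_mw * capex_per_mw            # land, power, cooling
hardware_capex = capacity_mw * gpus_per_mw * gpu_cost  # chips

total = facility_capex + hardware_capex
print(f"Facility buildout: ${facility_capex / 1e9:.1f}B")  # $1.2B
print(f"Accelerators:      ${hardware_capex / 1e9:.1f}B")  # $1.8B
print(f"Total:             ${total / 1e9:.1f}B")           # $3.0B
```

Under these assumptions, a single 100 MW campus lands at roughly $3 billion before grid upgrades, networking, or financing costs, which is why execution and capital discipline dominate the operator conversation.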
Looking ahead
If 2023 and 2024 were the years of the GPU race, 2025 is increasingly defined by infrastructure readiness. Power, cooling, interconnection and capital planning are now as important as the chips themselves. In public markets, firms that combine technical leadership with scalable infrastructure and capital discipline are likely to lead the next wave of growth.
As AI infrastructure continues to expand, the companies positioned across compute, enablement, and access layers will play a central role in how the AI economy is built.
References
McKinsey & Co. The cost of compute: A $7 trillion race to scale data centers.
https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers
Wikipedia. Nvidia.
https://en.wikipedia.org/wiki/Nvidia
The Wall Street Journal. Why AI Darling CoreWeave’s Bid for Its Own Landlord Spooked Investors.
https://www.wsj.com/finance/stocks/why-ai-darling-coreweaves-bid-for-its-own-landlord-spooked-investors-c552b913
Oliver Wyman. Leave Data Centers to the Specialists.
https://www.oliverwyman.com/our-expertise/insights/2020/dec/leave-data-centers-to-the-specialists.html
Aventis Advisors. AI Valuation Multiples in 2025.
https://aventis-advisors.com/ai-valuation-multiples/
Eqvista. Price-to-Sales Ratio by Industry (2025).
https://eqvista.com/price-to-sales-ratio-by-industry/
Skeptically Optimistic. The AI Gold Rush’s Best Kept Secret.
https://skepticallyoptimistic.substack.com/p/the-ai-gold-rushs-best-kept-secret
Wikipedia. CoreWeave.
https://en.wikipedia.org/wiki/CoreWeave
Global Equity Briefing. Exciting AI Infrastructure Business Poised for Strong Growth.
https://www.globalequitybriefing.com/p/exciting-ai-infrastructure-business
About Microcaps
Microcaps.com is a digital media and market intelligence platform focused on the public micro- and small-cap universe. The platform aggregates company news, press releases, and third-party research while providing editorial context around emerging sectors, valuation trends, and investor themes. Microcaps is designed to help investors, issuers, and market participants better understand how developing businesses and industries are being positioned and priced in the public markets.