Here's a number that should bother you: 60 percent. That's the share of GPU capacity sitting idle across most AI infrastructure providers right now. Companies are spending billions on graphics processing hardware, racking it in data centers on three continents, and then watching more than half of it do absolutely nothing. Hosted.ai, a startup with deep roots in virtualization and cloud infrastructure, just closed a $19 million seed round to make that waste disappear.

The round was led by Creandum, the Stockholm firm behind Spotify and Klarna in their earliest days. Repeat VC followed, alongside existing backers People Ventures, Z21 Ventures, Golden Sparrow, Hersir Ventures, and Tekton. For a seed round in an infrastructure layer most people never think about, the investor lineup carries serious conviction.

And that's sort of the point. The AI gold rush created a market obsessed with acquiring GPUs. Hosted.ai thinks the bigger opportunity is in actually using the ones we've already got.

The 60 Percent Problem Nobody Talks About

Traditional cloud compute scales dynamically. You pay for what you use, more or less. GPUs don't work that way. Customers rent fixed instances sized for their peak workload, which means the average AI deployment burns capacity around the clock even when demand drops to a whisper. Industry-wide, GPU utilization hovers near 40 percent. The other 60 percent? Paid for, powered on, doing nothing.
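To make that arithmetic concrete, here's a rough back-of-the-envelope sketch in Python. The hourly demand figures are invented for illustration, not Hosted.ai data; the point is simply that renting a fixed instance sized for the daily peak leaves the average hour well under half utilized.

```python
# Back-of-the-envelope sketch of why peak-sized GPU instances sit idle.
# Hourly GPU demand over one day for a hypothetical deployment (made-up numbers).
hourly_demand = [3, 2, 2, 2, 2, 3, 5, 8, 14, 18, 20, 19,
                 18, 16, 14, 12, 10, 8, 6, 5, 4, 4, 3, 3]

provisioned = max(hourly_demand)                  # rent for the peak: 20 GPUs, all day
used_gpu_hours = sum(hourly_demand)               # what the workload actually consumes
paid_gpu_hours = provisioned * len(hourly_demand)

utilization = used_gpu_hours / paid_gpu_hours
print(f"Average utilization: {utilization:.0%}")  # ~42% with these illustrative numbers
print(f"Idle, but paid for:  {1 - utilization:.0%}")
```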

That creates a compounding problem on three fronts. Service providers, the so-called neoclouds, face enormous upfront capital expenditure with razor-thin margins. Their customers overpay for capacity they don't touch. And regional providers outside the big hyperscalers can't afford to enter the game at all, which concentrates the entire AI infrastructure market in the hands of a few giants.

"The GPU market has a waste problem, not a scarcity problem," says Ditlev Bredahl, Hosted.ai's CEO. "We've spent 25 years building infrastructure software that makes service providers competitive, and the GPU opportunity is the biggest we've seen."

A Founding Team That Has Built This Before

Bredahl isn't exaggerating the resume. Before Hosted.ai, the founding team, which includes Narendar Shankar, Julian Chesterfield, and James Withall, scaled UK2 Group and founded OnApp, a company that brought cloud infrastructure as a service to mainstream hosting providers. They've held senior roles at VMware, Expedia, XenSource, and NVIDIA.

That matters because GPU infrastructure isn't a software problem you can solve with a clever algorithm and a pitch deck. It requires deep understanding of hardware abstraction, multi-tenant orchestration, and the business model pressures facing the providers who actually run GPU fleets. This team has spent two decades inside exactly those systems. They know where the waste lives because they've built the plumbing that creates it.

Founded in 2024 and launched commercially in 2025, the company already operates across the US, EMEA, and Asia-Pacific. That's unusually wide distribution for a seed-stage company, which tells you the product isn't theoretical.

Three Products, One Stack, Zero Idle GPUs

Hosted.ai's approach isn't a single product. It's a software stack with three layers, each attacking a different piece of the utilization puzzle.

The core platform, hosted.ai itself, is a GPUaaS system that pools GPU resources across tenants, optimizes workload placement, and enables GPU overcommit. The company claims it can deliver up to a 5x improvement in utilization. If true, that changes everything. A neocloud provider that previously needed $50 million in GPU hardware to serve its customer base could theoretically serve the same demand with $10 million. The economics flip from bleeding money to printing it.
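The mechanism behind a claim like that is statistical multiplexing: individual tenants are bursty, but their bursts rarely line up, so a shared pool sized for the aggregate peak needs far fewer GPUs than a peak-sized dedicated instance per tenant. The toy simulation below uses made-up demand patterns and none of Hosted.ai's actual placement logic; it only shows the shape of the effect, and the exact ratio depends entirely on how bursty and how correlated the workloads are.

```python
import random

# Toy illustration of the pooling argument (not Hosted.ai's scheduler):
# bursty tenants each reserve GPUs for their own peak, while a shared
# pool only has to cover the aggregate peak across all tenants.

random.seed(0)
HOURS, TENANTS = 24, 20

# Each tenant is busy a few hours a day and mostly idle otherwise (made-up pattern).
demand = [[random.choice([0, 0, 0, 1, 2, 8]) for _ in range(HOURS)]
          for _ in range(TENANTS)]

dedicated = sum(max(tenant) for tenant in demand)  # one peak-sized instance per tenant
pooled = max(sum(tenant[h] for tenant in demand) for h in range(HOURS))  # aggregate peak

print(f"GPUs needed with dedicated instances: {dedicated}")
print(f"GPUs needed with a shared pool:       {pooled}")
print(f"Hardware reduction from pooling:      {dedicated / pooled:.1f}x")
```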

Layer two is packet.ai, a neocloud built on top of customer infrastructure. It generates demand directly for Hosted.ai partners while offering GPUaaS at competitive pricing. Think of it as both a reference implementation and a revenue engine. The third piece, GPUaaS.com, works as wholesale matchmaking: enterprises with custom GPU cluster needs get connected to providers who can fill them.

Next on the roadmap is GPU Mesh, a resource exchange where providers buy and sell spare capacity without purchasing additional hardware. If Hosted.ai can make that work at scale, it essentially creates a liquid market for GPU compute, something that doesn't exist today and would reshape how the entire industry allocates resources.

Round: Seed
Amount: $19M
Lead Investor: Creandum (Stockholm)
Follow-on: Repeat VC, People Ventures, Z21, Golden Sparrow, Hersir, Tekton
Industry Avg GPU Utilization: ~40%
Claimed Improvement: Up to 5x
Founded: 2024
Commercial Launch: 2025
Active Markets: US, EMEA, Asia-Pacific

Why Creandum Bet on Plumbing Over Products

Creandum doesn't chase hype. The firm built its reputation by backing companies that created infrastructure layers for massive markets. Spotify changed how music reached ears. Klarna changed how payments reached merchants. The thesis here runs along the same tracks: as AI inference scales beyond training, the bottleneck won't be model quality. It'll be whether the infrastructure underneath can deliver compute efficiently enough to make the economics work.

The timing matters. AI workloads are shifting decisively from training, where massive clusters run for weeks, to inference, where millions of small requests need low-latency responses close to the user. Inference demands distributed, regional infrastructure, and that plays to exactly the kind of providers who currently can't afford to compete with AWS, Azure, and Google Cloud. Hosted.ai is betting its software stack lowers the barrier enough to let them in.

The GPU Mesh concept is particularly ambitious. Today, if a provider in Frankfurt has spare capacity and a customer in Amsterdam needs compute, there's no standardized way to broker that transaction. Each provider operates its own isolated pool. GPU Mesh would create a federated network where spare cycles flow to wherever demand exists, turning individual GPU fleets into nodes in a larger compute fabric. VMware did something similar for CPUs with vMotion and DRS. Nobody has done it for GPUs at this scale.
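In software terms, the missing piece is a broker that can take standardized capacity offers from many providers and fill a request against them. The sketch below is purely hypothetical, with invented names and fields rather than anything Hosted.ai has published, but it illustrates the kind of cross-provider matching a federated exchange would standardize.

```python
from dataclasses import dataclass

# Hypothetical sketch of the brokering a GPU capacity exchange would need.
# All names and fields are invented for illustration, not Hosted.ai's API.

@dataclass
class Offer:
    provider: str
    region: str
    spare_gpus: int
    price_per_gpu_hour: float

@dataclass
class Request:
    customer: str
    region: str
    gpus: int

def match(request: Request, offers: list[Offer]) -> list[tuple[Offer, int]]:
    """Greedily fill a request from the cheapest offers, preferring the same region."""
    ranked = sorted(offers, key=lambda o: (o.region != request.region, o.price_per_gpu_hour))
    allocation, remaining = [], request.gpus
    for offer in ranked:
        if remaining == 0:
            break
        take = min(offer.spare_gpus, remaining)
        if take:
            allocation.append((offer, take))
            remaining -= take
    return allocation

offers = [Offer("frankfurt-dc", "eu-central", 40, 2.10),
          Offer("amsterdam-dc", "eu-west", 10, 2.40)]
for offer, gpus in match(Request("ml-startup", "eu-west", 24), offers):
    print(f"{gpus} GPUs from {offer.provider} at ${offer.price_per_gpu_hour}/GPU-hr")
```

A production exchange would add settlement, SLAs, and real-time availability, but the core transaction is this kind of cross-provider matching.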

There's a subtle land-grab dynamic here too. Whoever controls the orchestration layer that connects GPU providers into a federated network effectively becomes the clearing house for AI compute. That's an enormously powerful position. It's the kind of infrastructure play that starts boring and ends up essential.

The Competitive Landscape Has a Gap

Hosted.ai isn't operating in a vacuum. CoreWeave, Lambda Labs, and a growing list of neoclouds are all competing for AI infrastructure dollars. But most of them are competing on the hardware side: who can acquire the most GPUs and offer them at the best price. Hosted.ai sits one layer up, providing software that makes any provider's hardware more efficient. That's a complementary position rather than a directly competitive one.

NVIDIA's own DGX Cloud and partnerships with cloud providers create another dynamic. As the GPU manufacturer moves further into the stack, independent software providers like Hosted.ai need to offer something NVIDIA can't or won't build internally. The multi-vendor, multi-provider orchestration layer is that something. NVIDIA wants to sell more chips. Hosted.ai wants to make each chip work harder. Those goals don't conflict yet.

Europe's Sovereignty Play Needs an Operating System

There's a geopolitical layer here too, and it's thickening fast. As AI inference moves closer to end users, companies and governments increasingly want compute that stays within their borders. The EU's data sovereignty push isn't slowing down. Neither is demand from regulated industries such as banks, healthcare systems, and defense contractors, which need to prove their AI runs on infrastructure they actually control.

Hosted.ai's model was designed for exactly this. By enabling regional service providers to operate competitive GPU infrastructure with standard server hardware, the company creates a path to sovereign AI compute that doesn't require a hyperscaler's balance sheet. That's a powerful pitch in a Europe that's simultaneously embracing AI and tightening its grip on data governance.

Denmark, where the company's CEO is based, has become an increasingly relevant node in this conversation. The country sits at the crossroads of Nordic energy infrastructure and European regulatory frameworks, making it a natural launch point for companies trying to serve both markets simultaneously.

A Seed Round That Behaves Like a Series A

$19 million is a big seed. It signals that investors expect Hosted.ai to reach commercial scale before coming back for more money. The company's already-global footprint and production-ready product suggest the capital will go toward customer acquisition and platform expansion rather than R&D.

The real question is whether a software-only approach can deliver the utilization gains the company promises. A 5x improvement sounds like a marketing slide until you see it running in production. But the founding team's track record of building and selling infrastructure platforms at scale gives the claim more credibility than most.

There's an elegance to Hosted.ai's positioning that's easy to miss. The company doesn't need to win customers away from AWS or Azure. It needs to win the providers who serve the customers the hyperscalers don't want: mid-market enterprises and regional organizations that need local, compliant compute but can't afford to build it themselves. That's a huge and growing market, especially as AI inference pushes compute closer to end users.

Creandum's network will help here. The firm's portfolio companies span fintech, mobility, and enterprise SaaS across Europe. Each of those companies will eventually need cost-effective AI inference infrastructure. Whether they become Hosted.ai customers or simply validate the demand signal, the relationship web matters. In infrastructure investing, distribution often beats technology. Hosted.ai seems to understand both.

For the broader Nordic ecosystem, this round is another signal that the region's tech ambitions extend well beyond fintech and SaaS. Hosted.ai is building the kind of foundational layer that, if it works, every AI company will eventually touch. GPU waste isn't glamorous. But $19 million says someone thinks it's the biggest unsolved problem in the AI stack.
