
OrtCloud co-founder and CEO Enyegue Carl Dimick

Golden Gate Ventures and Antler are betting that Singapore-based OrtCloud has cracked a problem most cloud providers prefer to ignore: inconsistent performance and unpredictable bills are making life hard for AI developers. The pre-seed round worth US$1.7 million, announced today, will fuel the startup’s push into what its founders call “deterministic cloud infrastructure”, a technical term for virtual machines that behave the same way every single time, without the performance lottery that plagues shared cloud environments.

Cloud computing holds the key

It’s an unsexy problem with serious consequences. When AI agents require sandboxed compute environments or enterprises run sensitive workloads, the traditional hyperscaler model — where resources are shared across customers and performance fluctuates based on neighbour activity — creates billing surprises, inconsistent benchmarks, and compliance headaches. OrtCloud’s solution is straightforward: fixed-resource virtual machine (VM) tiers that don’t share capacity.

When a workload outgrows its tier, it upgrades to the next size up. No elastic scaling drawing from mysterious shared pools. No performance variability.

No surprise invoices at month’s end.

What OrtCloud actually does

Strip away the jargon, and OrtCloud is selling cloud computing without the chaos. Traditional cloud providers like AWS, Google Cloud, and Azure achieve efficiency by sharing physical hardware across multiple customers.

That works brilliantly for general-purpose workloads. But when you’re running AI agent workflows that need isolated, predictable environments — or enterprise applications with strict compliance requirements — shared infrastructure becomes a liability. OrtCloud provisions virtual machines with guaranteed, dedicated resources.

Each VM tier has fixed CPU, memory, and storage allocations. Performance remains consistent because there’s no “noisy neighbour” problem, the industry term for when another customer’s spike in usage throttles your workload. The company serves two deployment models: a hosted cloud for teams wanting managed simplicity, and an on-premises option for enterprises with data residency or network isolation requirements.
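The fixed-tier model can be sketched in a few lines of Python. This is an illustrative sketch only, not OrtCloud’s actual API: the tier names and resource sizes below are assumptions, but the logic shows how “upgrade to the next size up” replaces elastic scaling from a shared pool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VMTier:
    """A fixed-resource tier: allocations never flex at runtime."""
    name: str
    vcpus: int
    memory_gb: int
    storage_gb: int

# Hypothetical tier ladder -- OrtCloud has not published its actual sizes.
TIERS = [
    VMTier("small", 2, 8, 100),
    VMTier("medium", 4, 16, 200),
    VMTier("large", 8, 32, 400),
]

def pick_tier(vcpus_needed: int, memory_gb_needed: int) -> VMTier:
    """Return the smallest fixed tier that fits the workload.

    Rather than elastically borrowing from a shared pool, a workload
    that outgrows its tier is promoted to the next fixed size up, so
    both performance and the monthly bill stay predictable.
    """
    for tier in TIERS:
        if tier.vcpus >= vcpus_needed and tier.memory_gb >= memory_gb_needed:
            return tier
    raise ValueError("workload exceeds the largest available tier")

print(pick_tier(3, 12).name)  # prints "medium"
```

Because each tier is a discrete, dedicated allocation, the provider can quote a flat price per tier, which is where the “no surprise invoices” claim comes from.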

The latter deploys OrtCloud’s orchestration and policy layer onto customer-owned hardware, delivering cloud-like provisioning without compromising compliance.

Seven-figure revenue and heavyweight customers

What makes this pre-seed raise unusual is the traction. OrtCloud claims seven-figure annual recurring revenue, backed by a customer list that includes OpenAI, Samsung, LG Innotek, MEMS Korea, Konkuk University, and KAIST University.

That level of enterprise adoption at the pre-seed stage suggests the pain point is real and urgent. These aren’t experimental pilot projects — they’re production deployments from organisations that typically move slowly on infrastructure decisions. The presence of OpenAI on the customer roster is particularly telling.

If anyone understands the infrastructure requirements for AI agent workloads, it’s the company that sparked the current generative AI wave. Its use of OrtCloud signals that deterministic compute environments matter for AI development.

Why Southeast Asia should care

Southeast Asia’s cloud infrastructure market is approaching US$50 billion, but the real growth story lies in AI workloads, a category OrtCloud estimates at over US$20 billion in the region.

The significance for Southeast Asia runs deeper than market size. The region has struggled with issues of infrastructure sovereignty for years. Government agencies, financial institutions, and healthcare providers face data residency regulations that prevent public cloud adoption, forcing them to rely on expensive, difficult-to-manage on-premises infrastructure.

OrtCloud’s on-premises deployment model offers a middle path: cloud-like agility with data staying firmly within geographic and network boundaries. For a region where regulatory fragmentation is the norm, that’s strategically important.

Singapore’s position as a regional tech hub makes it a natural launch point.

The city-state has invested heavily in AI research and development, whilst maintaining strict data protection standards. OrtCloud’s ability to serve both the hosted cloud and on-premises markets positions it to capture demand across the region’s diverse regulatory landscape.

The AI agent infrastructure opportunity

Agent-based workloads represent OrtCloud’s most compelling growth vector, and it’s worth understanding why.

AI agents — autonomous software that can plan, execute tasks, and interact with tools and APIs — require fundamentally different infrastructure than traditional applications. They need persistent, isolated compute environments that remain available 24/7. They execute unpredictable workloads that can