Sam Altman recently wrote in a blog post that he wants OpenAI to eventually produce 1 GW of new compute per week. For comparison, all the data centres in the world today draw about 55 GW of power at peak.

(source: Michael Cembalest, J.P. Morgan)

Sam Altman's goal: add, every year, as much new data-centre power as the entire world has online today. Stargate's first 15 GW commitment, split across ten sites, is only Phase 1.
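A quick back-of-the-envelope check, taking the 1 GW-per-week pace and the ~55 GW global figure above at face value:

```python
# Sanity check: 1 GW of new compute per week vs. today's ~55 GW
# global data-centre footprint (figures as cited above).
weekly_add_gw = 1
global_dc_gw = 55

annual_add_gw = weekly_add_gw * 52
share_of_installed_base = annual_add_gw / global_dc_gw

print(f"Annual build-out at target pace: {annual_add_gw} GW")
print(f"vs. today's installed base: {share_of_installed_base:.0%}")
# One year at the target pace (~52 GW) nearly replicates the entire
# installed base, which is what the 'every year' framing implies.
```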

Such statements force us to revisit the economics of who pays, who builds and who profits. This week we trace OpenAI's road to AGI and a trillion-dollar valuation, and dive into the Taiwan–Korea–Japan–China semiconductor map through the lens of supply-chain bottlenecks and demand lags.

OpenAI's path to the trillion-dollar club

OpenAI began as a research lab founded by Elon Musk, Sam Altman, Ilya Sutskever, Greg Brockman, Andrej Karpathy, and others.

It is now a “miniscaler” valued at $500B: essentially a sub-hyperscale (one rung below GAFAM), vertically integrated AI platform, extending beyond ChatGPT into infrastructure and applications.

In January, OpenAI announced Project Stargate, a venture backed by SoftBank, Oracle, and other partners to build cutting-edge AI data centres across the United States. The plan targets $500 billion over four years, with an initial $100 billion phase anchored by a large Texas campus. The build aims for at least 10 GW of AI capacity, roughly equivalent to the power draw of 8 million U.S. homes.

The market is recognising that a pure LLM provider is unlikely to clear the trillion-dollar bar. The rationale for raising ever larger sums only to plough them into third-party GPUs is weakening.

OpenAI's answer is to become the hyperscaler of the AI era by verticalising around infrastructure, AI-native devices, and applications.

Competition remains intense. Google has consolidated its efforts under Gemini, fuelled by large in-house fleets of TPUs (Tensor Processing Units), and momentum is visible in products like Gemini 2.5 Flash Image (“Nano Banana”), which topped the App Store.

(source: vellum.ai)

Project Stargate

Stargate’s supply-chain alliances read like a Fortune magazine cover, given how many billionaires have lined up to invest.

SoftBank’s Masayoshi Son serves as chairman; SoftBank supplies much of the capital, OpenAI sets the operating direction, and Oracle contributes cloud and data-centre execution.

The consortium says it has accelerated toward roughly 7 GW of capacity by 2025, implying about $400 billion of investment. It recently announced additional sites in Ohio and Texas, and it now signals an ambition that could exceed the previously stated $500 billion commitment.

Key technology partners include Nvidia for accelerators, Oracle for cloud and data-centre expertise, Arm for CPU and IP, and Microsoft as a long-time backer. OpenAI is hedging its footprint: it will continue scaling on Microsoft Azure while partnering with Oracle to build dedicated superclusters under Stargate.

On its latest earnings call, Oracle highlighted a surge in multi-billion-dollar AI infrastructure contracts and a sharply larger backlog. The message is clear. Hyperscalers are racing to meet AI demand, and Stargate is designed to lock in supply at scale.

Intricate web of IOUs

To bankroll its planetary-scale ambitions, OpenAI has found some creative ways to fund its growth.

Circumventing venture capitalists and going directly to the source turns out to be quite efficient: cashflow-rich Nvidia needs a vanguard customer to keep companies buying chips, and Oracle needs a way back to the centre of the tech universe. It is the perfect recipe.

The arrangement reminds us of IOUs, informal acknowledgements of debt. This is especially interesting because OpenAI itself does not have the balance sheet to foot the bill for all the GPUs it will end up using, which is also why these moves make sense…on paper.

Nvidia has reportedly agreed to invest up to $100 billion in OpenAI and to lease Blackwell GPUs. This unusual vendor-financing setup has raised antitrust questions, since OpenAI could just deploy Nvidia’s capital to buy more Nvidia chips.

Despite being Nvidia’s largest customer, OpenAI aims to reduce single-vendor risk: a reported $10 billion co-design contract with Broadcom, with fabrication by TSMC, targets bespoke chips in 2025–2026 that could lower cost per workload.

As Altman puts it, “everything starts with compute”. OpenAI is pulling every lever, from equity and pre-buys to leases and supplier capital, to lock in the critical resource of the AI age: computing power.

(source: Deloitte TMT Research)

Energy & Cost

With xAI’s Colossus, OpenAI’s Stargate, and Alibaba’s new campuses, the cost and energy requirements have taken off.

Industry estimates put required global data-centre investment near $7 trillion by 2030 to meet AI demand. OpenAI’s own plan calls for at least 15 GW of new capacity, roughly a dozen top-tier data centres’ worth.

(source: Lazard LCOE+ June 2025 Report)

The “data center blob” we saw on the cover is a gluttonous one. Ten gigawatts, for example, is about 8% of current US data-centre grid load. McKinsey projects that compute may need to triple by 2030, with about 70% of the load tied to AI training and inference. A single high-end AI site can cost $10–25 billion and take more than five years to build.
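The tripling projection implies a steep compounding rate. A minimal sketch, assuming the tripling plays out over the five years from 2025 to 2030 (the exact window is our assumption, not McKinsey's framing):

```python
# Implied annual growth rate if compute demand triples by 2030.
# The 2025-2030 window is an assumption for illustration.
years = 2030 - 2025
multiple = 3.0

implied_cagr = multiple ** (1 / years) - 1
print(f"Implied compute CAGR: {implied_cagr:.1%}")  # roughly 25% per year
```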

OpenAI and its backers want to compress that timeline by deploying capital now. Even a $500 billion commitment could be only a down payment if model sizes and usage surge across consumer & industrial use cases.

These facilities stretch power and cooling to the limit. Today’s hyperscale sites run 50–200 MW. AI roadmaps now contemplate 1 GW+ campuses. Utilities will feel the strain. Cloud providers are seeking large grid upgrades and on-site generation, competing with rising EV charging demand for the same electricity supply.

(source: Bain & Co Technology Report 2025)

Geopolitical race to the top

AI’s rise is amplifying tensions in the global semiconductor supply chain, as countries vie for control over “the new oil”: advanced chips and the capacity to produce them. This evolving landscape can be understood through the four horsemen of East Asia: China, Japan, South Korea, and Taiwan.

The four live players are each pursuing supply-chain resilience through their own policy lens, despite entrenched cross-border dependencies.

Taiwan

Taiwan remains the crown jewel of the global semiconductor ecosystem, which is both its defining strength and its greatest vulnerability. 

TSMC, the island’s flagship, now commands roughly 64% of the foundry market and about 92% of sub-10 nm capacity, with Taiwan’s semiconductor output reaching $165 billion in 2024, up 22% year on year. This concentration underpins the “Silicon Shield,” the notion that world-leading technical capability can act as a deterrent to conflict; estimates suggest a full Taiwan contingency could subtract $2.7 trillion to $10 trillion from global output. 

On one side, TSMC’s dominance supplies the West and keeps China at arm’s length; on the other, China continues to import from Taiwan while accelerating its domestic build-out. As a geopolitical hedge, TSMC is diversifying production with roughly $40 billion committed to new fabs in Arizona and Japan, yet the majority of bleeding-edge 2 nm capacity will still reside in Taiwan. 

High-volume EUV manufacturing is not easily transplanted: the dense supplier base and deep talent pools clustered around Hsinchu and Tainan, two cities on Taiwan's western coastline, confer durable cost and execution advantages, reinforcing TSMC’s pricing power and making the company a quasi-political actor in its own right.

China

(source: ITIF Comments to US Dept. of Commerce, 2025)

China now accounts for roughly 29% of the global semiconductor market and holds an upstream monopoly in refined gallium. Over the past decade it has used that base to vertically integrate across design, manufacturing, and materials, positioning itself to challenge global leadership in sub-5 nm semiconductors.

Flashpoints since 2020, focused on access to leading-edge tools and chips:

  • 2020: The first Trump administration barred Chinese access to ASML EUV systems and placed SMIC on a trade blacklist.
  • 2022: The Biden administration tightened controls on advanced processors, restricting Nvidia and AMD exports and widening component curbs.
  • August 2025: The second Trump administration revoked waivers that had allowed SK Hynix, Samsung, Intel, and TSMC to move U.S. chipmaking equipment into China.
  • Recent Chinese response: Beijing instructed major platforms to halt purchases of Nvidia products and accelerated domestic alternatives.

Despite mounting pressure, China’s manufacturing resilience is evident. SMIC has shipped 7 nm-class chips, and Huawei’s Ascend 910C has entered mass shipment.

The flywheel is increasingly software-led: Huawei and Alibaba are building domestic AI frameworks and toolchains to reduce reliance on Nvidia’s CUDA, pairing them with custom, locally made accelerators.

South Korea

South Korea sits in the middle of the US-China tech tug-of-war. Its champions, Samsung Electronics and SK Hynix, lead global memory and remain meaningful in logic. Seoul’s playbook has been heavy investment and careful navigation of geopolitics, aiming to stay ahead while keeping markets open.

Key developments since 2023:

  • March 2023: Government outlines a $230 billion, 20-year plan for a semiconductor megacluster near Seoul.
  • October 2023: Waivers renewed for SK Hynix and Samsung to ship U.S. chipmaking equipment to China, supporting major NAND and DRAM fabs in Xi’an and Wuxi.
  • August 2025: Waivers revoked amid escalating U.S.–China controls, tightening equipment flows into China.
  • Early September 2025: U.S. immigration authorities reportedly detain more than 400 South Korean employees in raids at LG-Hyundai Energy sites tied to next-gen power needs for AI data centres, dampening Korean appetite for additional U.S. capex.

Korea remains the pivotal supplier of AI memory. The geopolitical turn is nudging alignment toward the “Fab 4” (U.S., Taiwan, Japan, Korea) rather than a balanced stance between Washington and Beijing. Near term, HBM demand looks set to surge: SK Hynix guided to sharply higher HBM sales after a record quarter, and Samsung’s Nvidia qualification for 12-high HBM3E supports momentum at least into mid-2026.

Japan

Japan dominated semiconductors in the 1980s, then slipped to roughly 10% share by 2024. A renewed sense of urgency now underpins corporate and policy reform aimed at restoring competitiveness.

Key developments since 2022:

  • 2022: The government enacted the Economic Security Promotion Act (ESPA) to channel strategic investment into critical sectors, including semiconductors.
  • 2022: The U.S. and Japan agreed to collaborate on 2 nm and beyond, leading to the creation of the Leading-edge Semiconductor Technology Center (LSTC) and multiple joint ventures.
  • 2022: Major JVs formed and advanced: Rapidus (backed by Denso, Kioxia, MUFG, NEC, NTT, SoftBank) signed an agreement with IBM on 2 nm design; JASM (TSMC, Sony, Denso, Toyota) ramped its manufacturing collaboration. Aggregate subsidies exceeded $12 billion.
  • 2024: TSMC’s Kumamoto fab (JASM) opened, with combined investments surpassing $20 billion and plans for a second site.
  • April 2025: Further U.S. and Netherlands export-control tightening created positive spillovers for Japanese test, packaging, inspection, and metrology leaders (for example, Advantest).
  • Recent: Rapidus reported progress toward a working 2 nm prototype a few months after starting pilot production in April; the second Kumamoto site faces delays.

Signals point to an industrial revival. Policy and governance reforms have simplified corporate structures and encouraged divestment of non-core assets, while bleeding-edge manufacturing commitments have anchored capex in Japan and among allies. The second-order effects favour Japan’s testing, packaging, inspection, and metrology ecosystem.

Companies to watch

Micron (NASDAQ: MU) - HBM Memory Duopoly

Together with SK Hynix, the two control about 83% of the HBM market. What stands out with Micron is its ability to navigate geopolitics while scaling leading-edge product cycles across the U.S., Japan, and Taiwan.

Micron posted record Q4 FY2025 revenue of $11.32B (+46% YoY) with non-GAAP EPS at $3.03, guiding Q1 FY2026 to $12.5B revenue at ~51.5% gross margins. HBM revenue run-rate is ~$8B annualised (Q4 HBM ≈ $2B) with 2026 capacity essentially sold out; HBM3E is in volume and HBM4 samples are with multiple customers. 

Data-centre exposure hit record levels in FY2025 (company says 56% of full-year revenue). The company secured ~$6.1B in CHIPS Act funding for U.S. fab expansion, maintaining sole U.S. advanced memory manufacturer status. Stock trades ~$127 with several analysts lifting targets (some to $180–$200).

Wiwynn (TWSE: 6669) - AI ODM Pure-Play 

Wiwynn delivered Q2 2025 revenue of ~US$6.9B (+184.9% YoY) with EPS US$2.04 and operating margin ~7.2%; H1 2025 revenue reached US$12.3B (+166.1% YoY), already above full-year 2024.

The company specializes in liquid-cooled GB200 NVL72 systems commanding high ASPs. Meta represents a significant portion of revenue alongside key relationships with Microsoft, AWS, and Google, positioning the company to capture a premium in complex AI infrastructure assembly.

SanDisk (Nasdaq: SNDK) - NAND Storage Scaler

Spun out of Western Digital in Feb 2025, SanDisk is a flash pure-play scaling ultra-dense enterprise SSDs for AI data. 

Q4 FY2025 revenue was $1.90B with non-GAAP EPS $0.29; Q1 FY2026 guidance is $2.10–$2.20B revenue and $0.70–$0.90 EPS. Cloud buyers drove the mix higher in 2025 (~13%), with sell-side models pointing to ~20% in 2026 as AI qualifications ramp. 

The UltraQLC roadmap introduces 128TB test units now and a 256TB NVMe SSD shipping 1H26, aimed at lowering $/TB for read-heavy AI datasets. Management highlights favorable supply-demand and margin recovery into FY2026.

Smaller companies to invest in

To dig further into smaller companies bound to benefit from the AI Infrastructure buildout, create your own StockScreener like we did:
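The embedded screener isn't reproduced here, but the idea can be sketched in a few lines. The universe, market caps, and growth figures below are illustrative placeholders, not live data:

```python
# Minimal stock-screener sketch for smaller AI-infrastructure suppliers.
# All figures are hypothetical placeholders; wire in your own data
# provider for live fundamentals.
candidates = [
    # (ticker, market_cap_usd_bn, revenue_growth_yoy, exposure)
    ("MU",   140.0, 0.46, "HBM memory"),
    ("6669",  65.0, 1.85, "AI server ODM"),
    ("SNDK",  10.0, 0.10, "NAND storage"),
    ("XYZ",    2.0, 0.05, "legacy networking"),  # hypothetical laggard
]

def screen(rows, min_growth=0.10, max_cap_bn=100.0):
    """Keep smaller names with meaningful revenue growth."""
    return [ticker for (ticker, cap, growth, _) in rows
            if growth >= min_growth and cap <= max_cap_bn]

print(screen(candidates))  # ['6669', 'SNDK']
```

Swapping in live fundamentals and adding filters (HBM exposure, order backlog, cash conversion) gets you most of the way to a usable screen.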

Bonus: ETF screener

To find out more about ETFs available to index the AI infrastructure buildout, make your own ETF Screener like we did:
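The same pattern works for ETFs: filter a candidate list by theme and expense ratio. The tickers and expense ratios below are illustrative, not recommendations; check current fund documents for real figures:

```python
# Minimal ETF-screener sketch for the AI infrastructure theme.
# Expense ratios below are illustrative placeholders.
etfs = [
    # (ticker, expense_ratio, theme)
    ("SMH",  0.0035, "semiconductors"),
    ("SOXX", 0.0035, "semiconductors"),
    ("AIQ",  0.0068, "artificial intelligence"),
    ("SPY",  0.0009, "broad market"),
]

ai_themes = {"semiconductors", "artificial intelligence"}
low_cost_ai = [ticker for (ticker, er, theme) in etfs
               if theme in ai_themes and er <= 0.0050]
print(low_cost_ai)  # ['SMH', 'SOXX']
```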

Our take

AI has shifted from code to concrete. Sam Altman’s 1 GW-per-week target signals a buildout measured in turbines, transformers, and trillions. The capital now lining up, from Stargate’s $500B vision to Nvidia’s $100B vendor-financed stack, is an attempt to bend the physical world to AI’s demand.

That ambition runs through the Asian semiconductor corridor. TSMC’s foundries, Korea’s memory capacity, and Japan’s materials set the pace while the US–China tech war sets the rules. Taiwan’s “Silicon Shield” is the global system’s biggest single point of failure. There is no neutral ground.

For portfolios, the message is simple. Durable value sits with the bottleneck owners. Own the picks and shovels that ration progress: HBM suppliers like Micron, high-ASP server integrators such as Wiwynn, and high-capacity storage makers like SanDisk. Avoid operators that require cheap power, permissive policy, or heroic utilisation just to clear capex.

AGI is now a contest for scarce inputs: compute, power, packaging, memory, optics. Buy the tollbooths, size to real geopolitical and power risks, and insist on cash conversion and order visibility.

Stay invested, cautiously.