Oracle’s $455B Cloud Explosion, AMD’s “No‑Asterisk” AI Gamble, and NVIDIA’s $4T Infrastructure Wave
Welcome, AI & Semiconductor Investors,
Could Oracle’s half-trillion-dollar cloud backlog reshape the AI landscape? With a staggering $455 billion in RPO driven by multi-cloud and AI database innovations, Oracle is betting aggressively that enterprise AI adoption is about to go vertical. Meanwhile, AMD’s EVP Forrest Norrod boldly frames the MI450 as AMD’s “no-asterisk” moment—directly challenging NVIDIA’s reign in AI hardware. And with NVIDIA itself projecting an infrastructure build-out of up to $4 trillion by decade’s end, the semiconductor giants are racing into an unprecedented AI gold rush. — Let’s Chip In.
What The Chip Happened?
🚀 Oracle’s “Half‑Trillion” Cloud Backlog Fuels Bold AI Forecasts
🧠 AMD’s MI450: the “no‑asterisk” bet to win AI training & inference
💰 NVIDIA: $3–$4T AI Build‑Out, Rubin Next, and a Networking Surge
[Oracle FY26 Q1: RPO Hits $455B as AI Demand Lifts Cloud]
Read time: 7 minutes
Get 15% OFF FISCAL.AI — ALL CHARTS ARE FROM FISCAL.AI —
Oracle (NYSE: ORCL)
🚀 Oracle’s “Half‑Trillion” Cloud Backlog Fuels Bold AI Forecasts
What The Chip: On September 9, 2025, Oracle posted Q1 FY26 results headlined by Remaining Performance Obligations (RPO) of $455B (+359% YoY), driven by four new multi‑billion‑dollar contracts and surging AI demand. Cloud revenue hit $7.2B (+28%), with OCI (IaaS) at $3.3B (+55%).
Details:
✅ Backlog goes parabolic: RPO rocketed to $455B (+359% YoY) after “four multi‑billion‑dollar contracts.” CEO Safra Catz called it an “astonishing quarter” and said RPO will likely exceed half a trillion dollars soon. Quote: “We signed four multi‑billion‑dollar contracts… RPO… likely to exceed half‑a‑trillion dollars.”
☁️ Cloud momentum: Total cloud revenue reached $7.2B (+28%); OCI (IaaS) rose to $3.3B (+55%). Management said OCI consumption rose ~57%, highlighting demand that “dramatically outstrips supply.”
🤝 Multi‑cloud flywheel: Multi‑cloud database revenue from AWS, Azure, and GCP grew 1,529% as Oracle embeds OCI regions inside partners’ clouds; Oracle plans 37 more partner data centers, bringing the total to 71.
🧠 New product: “Oracle AI Database”: Oracle will unveil a service that lets customers run the LLM of their choice (ChatGPT, Gemini, Grok, etc.) directly on Oracle Database—so companies can query their own data securely with top‑tier AI. In plain English: it vectorizes enterprise data so modern AI models can reason over it without exposing sensitive information. Quote (Larry Ellison): “AI changes everything.”
📈 Bigger OCI ambition: Oracle raised its OCI plan: +77% this fiscal year to $18B, then $32B, $73B, $114B, and $144B over the subsequent four years (implied growth rates are worked out in the quick check after this list). Most of that is already booked in RPO, per Catz.
💵 Profitability & cash: Non‑GAAP EPS $1.47 (+6%); GAAP EPS $1.01 (‑2%). LTM operating cash flow $21.5B (+13%); Q1 OCF $8.1B. Free cash flow is negative as Oracle builds capacity (LTM FCF ‑$5.9B on $27.4B CapEx; Q1 FCF ‑$362M on $8.5B CapEx). FY26 CapEx guide: ~$35B. Dividend: $0.50 (record Oct 9, pay Oct 23).
⚠️ Watch these risks: Converting backlog depends on capacity adds (GPUs, networking) and customer acceptance; the heavy CapEx and near‑term negative FCF pressure cash returns. Oracle also flags tariffs/trade as a macro risk in its disclosures.
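A quick back‑of‑the‑envelope check of the arithmetic behind those bullets: a sketch in Python using only the figures reported above. The small Q1 FCF gap versus the reported ‑$362M is rounding in the $8.1B/$8.5B inputs.

```python
# Sanity check on Oracle's reported figures (all in $B).

# Implied YoY growth along the raised OCI plan (FY26 -> FY30).
oci_targets = [18, 32, 73, 114, 144]
for prev, nxt in zip(oci_targets, oci_targets[1:]):
    print(f"${prev}B -> ${nxt}B: {100 * (nxt - prev) / prev:+.0f}% YoY")
# -> +78%, +128%, +56%, +26%: growth front-loads, then decelerates.

# Free cash flow = operating cash flow - capital expenditures.
print(f"LTM FCF ~ {21.5 - 27.4:.1f} | Q1 FCF ~ {8.1 - 8.5:.1f}")
# -> LTM ~ -5.9, Q1 ~ -0.4 (reported: -$5.9B and -$362M).
```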
Why AI/Semiconductor Investors Should Care:
Oracle’s $455B backlog and aggressive OCI targets signal a multi‑year wave of AI data‑center spend that could re‑allocate cloud share and tighten supply in GPUs, networking, and power infrastructure. The AI Database is strategically important: if enterprises can safely point best‑in‑class LLMs at their existing Oracle data, that accelerates inferencing use cases (and cloud consumption) without rip‑and‑replace migrations—supporting growth even as training cycles ebb and flow (a minimal sketch of the retrieval pattern behind this follows below). The flip side: execution hinges on build‑out speed, supply availability, and tariff‑inflated costs, with near‑term FCF trade‑offs as Oracle races to meet demand.
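Oracle hasn’t published the AI Database interface here, but the pattern Ellison describes (vectorize enterprise data in place, retrieve the relevant rows, let the customer’s chosen LLM reason over them) reads like standard retrieval‑augmented generation. A minimal, hypothetical sketch; embed_text and llm_complete are toy placeholders, not Oracle APIs:

```python
import numpy as np

# Toy stand-ins so the sketch runs end to end -- NOT Oracle's API. In the
# real product the embedding happens inside the database and the LLM is the
# customer's choice (ChatGPT, Gemini, Grok, ...).
def embed_text(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding (stable within a run); a real system uses an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def llm_complete(prompt: str) -> str:
    return f"[LLM answer grounded in]\n{prompt}"  # placeholder for a real call

def answer_over_private_data(question: str, rows: list[str], k: int = 3) -> str:
    # 1) Vectorize each row (Oracle: done inside the database, per Ellison).
    row_vecs = np.stack([embed_text(r) for r in rows])
    # 2) Embed the question; rank rows by cosine similarity (vectors are unit-norm).
    top_idx = np.argsort(row_vecs @ embed_text(question))[-k:]
    # 3) Only the retrieved rows reach the LLM -- the rest never leaves the database.
    context = "\n".join(rows[i] for i in top_idx)
    return llm_complete(f"Context:\n{context}\n\nQuestion: {question}")

print(answer_over_private_data(
    "Which region grew fastest last quarter?",
    ["EMEA revenue +4% QoQ", "APAC revenue +11% QoQ", "Americas revenue +7% QoQ"]))
```

The “without exposing sensitive information” claim maps to step 3: the model only ever sees the narrow retrieved context, not the full dataset.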
Get 15% OFF FISCAL.AI — ALL CHARTS ARE FROM FISCAL.AI —
Advanced Micro Devices (NASDAQ: AMD)
🧠 AMD’s MI450: the “no‑asterisk” bet to win AI training & inference
What The Chip: On September 8, 2025, AMD’s EVP of Data Center Solutions Forrest Norrod (who leads AMD’s server and AI businesses) told Goldman Sachs semiconductor analyst Jim Schneider that AMD’s next accelerator, the MI450 (2026 launch window), is intended to be AMD’s “no asterisk” generation—aimed at leadership in both training and inference—with a full rack‑level solution (“Helios”) to match.
Details:
🚀 Roadmap clarity & goal: Norrod framed MI450 as AMD’s “Milan moment”—the way 3rd‑gen EPYC (Milan) became the best x86 CPU “period.” Quote: “MI450 is our ‘no asterisk’ generation… the best training, inference, distributed inference, reinforcement learning solution available.” The plan: inference leadership with MI300/MI325, add training with MI355 (mid‑2025), and unify leadership with MI450 in 2026.
🧩 System‑first design (Helios): AMD shifted from selling chips to delivering rack‑level infrastructure. Co‑developed with ZT Systems, Helios keeps the 72‑GPU‑per‑pod footprint and matches the air/liquid cooling ratios customers already plan for, easing deployment alongside Nvidia builds. It’s physically larger than an NVL72 to simplify manufacturing, serviceability, and reliability—an explicit risk‑reduction choice.
🛜 Networking & distributed AI are bottlenecks: Norrod stressed that distributed compute and networking efficiency increasingly set system performance, especially for agentic and clustered workloads. Small inefficiencies in networking or work distribution can swing utilization and TCO (see the quick utilization math after this list).
🧱 Software focus where spend is concentrated: AMD targets the top ~20% of customers driving ~80–85% of spend, ensuring day‑one support for priority frameworks and libraries. Example: JAX support has advanced substantially versus a year ago. Norrod: “We’re trying to be fully there at MI450 for the customers that really matter… Not the very long tail.”
💸 Pricing power via TCO, but concentrated buyers pressure margins: Data center accelerators are sold in few very large deals (Norrod noted Nvidia recently disclosed ~50% of revenue tied to a handful of major customers). That concentration pressures margins, but AMD aims to price to cluster‑level TCO value—mirroring CPUs where AMD’s ASPs exceed its rival’s: “We charge more… because we’re giving superior value.”
📈 CPU franchise keeps compounding: Mercury Research pegs AMD at 41% server CPU share, +8 pts YoY, with momentum in North American hyperscale and faster share gains in enterprise. Norrod: “We aspire to absolute server CPU leadership.” AI adoption itself is increasing CPU demand (more scenarios/agents drive more general compute).
📦 2026 GPU revenue setup: AMD expects “material contribution” from MI450 next year, citing tight work with OpenAI (“very close partnership… excitement over 450”), Oracle, Microsoft, Meta, and others. AMD emphasized predictable execution and a rigorous validation/derisk plan across mechanics, power/thermals, cabling, and signal integrity.
⚠️ Risks to watch: (1) Rack‑level complexity and validation; (2) Software breadth for the long tail (AMD isn’t chasing it near‑term); (3) Buyer concentration and competitive pricing; (4) Data center power availability, which Norrod said now modulates the pace of AI build‑outs; (5) Annual cadence is hard on customers, so AMD designs for infrastructure reuse across generations.
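Norrod’s point about small networking inefficiencies is easy to quantify. An illustrative sketch (numbers assumed, not AMD’s): when communication isn’t overlapped with compute, effective utilization is compute_time / (compute_time + exposed_comm_time), so a few milliseconds per step turn directly into idle dollars:

```python
# Illustrative only -- assumed numbers, not AMD's. Shows how un-overlapped
# ("exposed") communication time erodes cluster utilization and TCO.
def effective_utilization(compute_ms: float, exposed_comm_ms: float) -> float:
    """Fraction of wall-clock time GPUs spend on useful math per step."""
    return compute_ms / (compute_ms + exposed_comm_ms)

for comm_ms in (1, 5, 10):  # exposed networking per 100 ms of compute
    u = effective_utilization(100, comm_ms)
    print(f"{comm_ms:>2} ms exposed comm -> {u:.1%} utilization")
# ->  1 ms: 99.0% | 5 ms: 95.2% | 10 ms: 90.9%
# On a cluster burning ~$1M/day, the 10 ms case idles roughly $90k/day.
```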
Why AI/Semiconductor Investors Should Care
If AMD ships MI450 on time with mature software and a serviceable Helios rack, it can contest Nvidia on performance and TCO—the two levers hyperscalers care about most—opening a path to meaningful double‑digit GPU share (~20% as an intermediate milestone) and better pricing mix over time. Meanwhile, the CPU flywheel (now 41% share) benefits directly from AI adoption, adding a second engine to data center growth. Execution, power constraints, and concentrated buyer dynamics remain real, but Norrod’s “no asterisk” posture, named customer collaborations, and system‑level focus give AMD a credible shot to scale AI revenue.
NVIDIA (NASDAQ: NVDA)
💰 NVIDIA: $3–$4T AI Build‑Out, Rubin Next, and a Networking Surge
What The Chip: On September 8, 2025, CFO Colette Kress told the Goldman Sachs Communacopia + Technology Conference that AI infrastructure needs could hit $3–$4 trillion by decade‑end, while NVIDIA’s one‑year cadence moves from Blackwell to Vera Rubin (six chips taped out). Data center revenue rose 12% sequentially in Q2 and NVIDIA guided +17% sequential for Q3, with networking a standout.
Details:
💰 Capex super‑cycle: Kress framed AI as a platform shift, noting the industry is “on track” toward $1 trillion of data‑center capital “in a couple of years,” with top CSPs doubling capex versus two years ago. Quote: “We’re really talking about a new computing platform for the next decades.”
🗺️ Roadmap & cadence: NVIDIA says its 1‑year cadence is “going quite well.” Blackwell GB200 and GB300 Ultra are shipping at scale; Vera Rubin (six chips) has taped out and is maturing. NVIDIA is emphasizing scale out, scale up, and scale across—the last of which spreads performance across racks and clusters.
🧩 NVLink = rack‑scale moat: NVLink (Gen‑5) moves from 8‑GPU boxes to rack‑scale with 72 GPUs, a key driver of Blackwell’s jump. NVLink Fusion will let “many different other chips” participate in the fabric—opening NVIDIA’s platform while keeping it the hub. PCIe versions remain for enterprises not yet ready for liquid‑cooling.
📡 Networking outperforms: Networking attaches tightly to compute because NVLink revenue sits in the networking line. Enterprise Ethernet for AI posted strong growth; InfiniBand—“the gold standard”—nearly doubled sequentially. Some customers “wallpaper” the data center with networking before compute arrives, so timing can make networking growth run ahead of GPU revenue.
📈 Data center momentum: Excluding China H20, NVIDIA’s data center revenue grew 12% QoQ in Q2; guidance implies +17% QoQ for Q3. NVIDIA says the GB300 Ultra ramp was smoother (and larger) than many expected.
🇨🇳 China optionality (H20): NVIDIA obtained licenses for several key China customers. Shipments still hinge on geopolitics, but Kress sees a “strong possibility” and sized the opportunity at $2–$5 billion if it clears.
🔋 TCO & power are king: NVIDIA is optimizing performance per watt and per dollar, stressing that power dominates total cost of ownership over a 4–6 year system life. Kress highlighted a ~30× inference uplift vs. Hopper for Grace Blackwell deployments—freeing Hopper to keep doing prep and other work (a quick power‑budget sketch follows below).
💸 Margins, cash & M&A: Gross margin is tracking back to the mid‑70s (percent) by Q4 on better cycle times and mix. Capital goes first to strategic ecosystem investments; buybacks offset dilution (plus a dividend). “Best acquisition on the planet” Mellanox sets the bar; large M&A isn’t ruled out but must fit.
“We’re maybe at the first or second inning of this journey… The world needs to transition to accelerated, parallel computing—and agentic AI that does work for us.” — Colette Kress, CFO
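Kress’s power framing is easy to make concrete: when the site’s power envelope, not the hardware budget, is the constraint, perf‑per‑watt sets the output ceiling. A hedged sketch (the 50 tokens/sec‑per‑kW baseline is an assumed placeholder; only the ~30× ratio comes from the call):

```python
# Illustrative only -- assumed throughput numbers, not NVIDIA's. In a
# power-limited site, output scales with perf-per-watt, not chip count.
def site_throughput(site_mw: float, tokens_per_sec_per_kw: float,
                    utilization: float = 0.8) -> float:
    """Aggregate inference throughput for a fixed power envelope."""
    return site_mw * 1_000 * utilization * tokens_per_sec_per_kw

baseline = site_throughput(100, tokens_per_sec_per_kw=50)       # Hopper-class (assumed)
uplifted = site_throughput(100, tokens_per_sec_per_kw=50 * 30)  # ~30x uplift applied
print(f"{baseline:,.0f} -> {uplifted:,.0f} tokens/sec from the same 100 MW")
# Same power envelope, ~30x the inference output -- which is why power
# efficiency, not sticker price, dominates the 4-6 year TCO math.
```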
Why AI/Semiconductor Investors Should Care:
NVIDIA’s comments point to a durable, multi‑year demand stack—not just for GPUs but for networking fabric and power‑efficient rack‑scale systems that lock in platform share. The roadmap (Blackwell → Rubin) plus NVLink scale (to 72 GPUs per rack) reinforce NVIDIA’s system‑level moat, while gross margin marching back to the mid‑70s supports earnings power even as product mix evolves. Watch the swing factors: China H20 shipments ($2–$5B upside), power availability/TCO as the new bottleneck, potential ASIC/alt‑accelerator encroachment via NVLink Fusion, and any 2025 tariff shifts that could nudge costs.
Youtube Channel - Jose Najarro Stocks
X Account - @_Josenajarro
Get 15% OFF FISCAL.AI — ALL CHARTS ARE FROM FISCAL.AI —
Disclaimer: This article is intended for educational and informational purposes only and should not be construed as investment advice. Always conduct your own research and consult with a qualified financial advisor before making any investment decisions.
The overview above provides key insights every investor should know, but subscribing to the premium tier unlocks deeper analysis to support your Semiconductor, AI, and Software journey. Behind the paywall, you’ll gain access to in-depth breakdowns of earnings reports, keynotes, and investor conferences across semiconductor, AI, and software companies. With multiple deep dives published weekly, it’s the ultimate resource for staying ahead in the market. Support the newsletter and elevate your investing expertise—subscribe today!
[Paid Subscribers] Oracle FY26 Q1: RPO Hits $455B as AI Demand Lifts Cloud
Date of Event: September 9, 2025
Executive Summary
*Reminder: We do not talk about valuations, just an analysis of the earnings/conferences
Oracle opened fiscal 2026 with results that underscore how artificial intelligence (AI) workloads are reshaping its business. Total remaining performance obligations (RPO)—a forward‑revenue backlog—surged 359% year over year to $455 billion, driven by four multi‑billion‑dollar contracts signed in the quarter. Revenue rose 12% to $14.9 billion, with cloud (infrastructure plus applications) up 28% to $7.2 billion. Oracle reported GAAP EPS of $1.01 (down 2%) and non‑GAAP EPS of $1.47 (up 6%). CEO Safra Catz called it “an astonishing quarter” and raised Oracle Cloud Infrastructure (OCI) targets, previewing 77% OCI growth to $18 billion in FY26 and a multi‑year path to $144 billion. The board declared a quarterly dividend of $0.50 per share (record date October 9, payable October 23).
Chairman and CTO Larry Ellison highlighted multi‑cloud momentum—Oracle embeds OCI regions inside AWS, Azure, and Google Cloud—with multi‑cloud database revenue up 1,529%. He also previewed the Oracle AI Database, a new service designed to let customers run the large language model (LLM) of their choice (e.g., Gemini, ChatGPT, Grok) directly atop Oracle Database to analyze existing enterprise data.
Growth Opportunities
OCI capacity expansion and multi‑cloud distribution. Oracle expects to deliver 37 additional data centers to hyperscaler partners, bringing the total to 71, a footprint that expands OCI’s addressable market and shortens time to revenue as customers spin up workloads where they already operate. Ellison described the near‑term opportunity in simple terms: demand for inferencing capacity is outpacing supply, a dynamic that is accelerating OCI consumption.