What The Chip Happened?
Agents, Consoles & Custom Chips: Amazon, AMD & Marvell’s AI Power Plays

Jose Najarro
Jun 18, 2025

Welcome, AI & Semiconductor Investors,
What do billion-agent visions, next-gen Xbox silicon, and hyperscaler bespoke chips have in common? Today’s leaders, Amazon, AMD, and Marvell, are all sprinting to own the infrastructure layer that will fuel the next AI boom. As Jassy readies “billions of agents,” Microsoft locks down AMD’s Ryzen-powered consoles, and Marvell maps out a $94 billion AI-compute turf, investors must ask: which platform will capture the fastest-growing slice of this $630 billion market? — Let’s Chip In

What The Chip Happened?

🎯 Agents Everywhere — Amazon Doubles Down on Generative AI
🎮 AMD Locks Down the Next Xbox—And More
🚀 Marvell Doubles Down on Custom AI Silicon, Eyes 20% Market Share
[Custom Silicon Takes Center Stage at Marvell’s AI Investor Event]

Read time: 7 minutes
Get 15% OFF Finchat — MY FAVORITE STOCK MARKET DATA PLATFORM —


Amazon (NASDAQ: AMZN)
🎯 Agents Everywhere — Amazon Doubles Down on Generative AI


What The Chip: On June 17, 2025, CEO Andy Jassy told employees Amazon is “investing quite expansively” in Generative AI, already running 1,000‑plus Gen‑AI projects and gearing up to build “billions of agents.” Jassy says the push will both reinvent customer experiences—from Alexa+ to AWS Bedrock—and shrink certain corporate roles through efficiency gains.

The Situation Explained:

🤖 Agents set to explode. Jassy sees “software systems that use AI to perform tasks on behalf of users” becoming ubiquitous and vows to “make it much easier to build agents” across every Amazon business.

🛒 Smarter shopping & Alexa. Alexa+ now takes real actions; the AI shopping assistant serves tens of millions of customers, while “Lens,” “Buy for Me,” and Recommended Size sharpen conversion and reduce returns.

🏗️ AWS silicon advantage. Home‑grown Trainium 2 claims 30–40% better price‑performance versus NVIDIA’s top GPUs, pairing with Bedrock, Nova, SageMaker, Q, and the Q CLI to anchor an end‑to‑end Gen‑AI stack.

📊 Market tailwind. IDC pegs global AI spend at $235 billion in 2024, racing to $630 billion by 2028; generative AI’s five‑year 60% CAGR and infrastructure’s ~30% share of that spend mean AWS monetizes the fastest‑growing slice.

💼 Head‑count shake‑up. Jassy warns “we expect this will reduce our total corporate workforce” as AI automates rote tasks; the WEF likewise finds 40% of employers plan cuts even as net jobs rise long‑term.

📈 Early monetization proof. In Q1‑25, 50,000 advertisers and roughly 500,000 sellers tapped Amazon’s Gen‑AI tools, creating “measurably better” listings and ad campaigns.

⚠️ Execution risks remain. Writer’s 2025 survey shows only one‑third of enterprises hitting Gen‑AI ROI targets and 68% grappling with internal power struggles; cultural alignment may rival tech prowess.

💸 Capex watch. Massive AI investment could pressure margins near‑term; investors will track how quickly AWS converts cost‑efficient silicon plus Nova‑powered agents into sticky, high‑margin services.

Why AI/Semiconductor Investors Should Care: Generative‑AI infrastructure already absorbs one‑third of every AI dollar, and AWS still holds 29% of IaaS share—Amazon’s silicon‑plus‑software stack positions it to seize disproportionate value as agent workloads surge. If Trainium 2 truly undercuts GPU economics by ~35%, Amazon not only defends cloud margins but also widens the moat against Azure and Google. The flip side: productivity gains must outpace workforce and capex drag, and management must translate a sprawling AI vision into disciplined execution. Keep an eye on AWS growth, gross‑margin trendlines, and real‑world agent adoption over the next 12 months.
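Those IDC market figures imply a growth rate you can verify with back‑of‑the‑envelope arithmetic. Here is a minimal Python sketch (the `cagr` helper is my own naming, not from any cited source) of the compound annual growth rate implied by $235 B of AI spend in 2024 reaching $630 B by 2028:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# IDC figures cited above: $235B of AI spend in 2024 -> $630B by 2028 (4 years)
overall = cagr(235, 630, 4)
print(f"Implied overall AI-spend CAGR: {overall:.1%}")  # prints roughly 28%
```

Note this ~28% figure is blended across all AI spend; the 60% CAGR cited above refers specifically to the faster‑growing generative‑AI slice within it.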




AMD (NASDAQ: AMD), Microsoft (NASDAQ: MSFT)
🎮 AMD Locks Down the Next Xbox—And More


What The Chip: On June 17, 2025, Microsoft unveiled a “strategic, multi‑year” pact with AMD to co‑engineer the silicon for its next‑generation Xbox console, upcoming Xbox Cloud Gaming blades, and a new Xbox‑branded handheld line.

The Situation Explained:

🕹️ Console cycle secured. Xbox President Sarah Bond says the deal covers “next‑generation Xbox consoles in your living room and in your hands,” locking AMD into the full hardware stack launching late‑2026/early‑2027.

👾 Handheld upside. The freshly announced ROG Xbox Ally and Ally X portables run custom Ryzen Z‑series APUs and an Xbox UI on Windows, opening a higher‑ASP handheld sub‑segment that didn’t exist last cycle.

💸 Semi‑custom trough behind us. CFO Jean Hu told investors that while semi‑custom sales fell 30% y/y in Q1‑25, “console channel inventories have normalized and demand signals have strengthened for 2025.”

📉 Context: the Gaming slump. AMD’s Gaming segment dropped from $6.2 bn in 2023 to $2.6 bn in 2024 as PS5/Xbox Series demand cooled; Q1‑25 logged $647 m (‑30% y/y) but ticked up sequentially.

🏗️ Margin math. Semi‑custom chips carry mid‑20% to low‑30% gross margins—below AMD’s ~48% corporate average—but Microsoft and Sony fund the non‑recurring engineering (NRE), keeping the business cash‑flow positive even in down years.

⚔️ Competitive watch. Qualcomm’s Snapdragon X Elite handheld SoCs and any in‑house Arm designs from Microsoft/Sony remain the primary vendor‑switch risks through 2027, but today’s announcement reduces that probability near‑term.

🚀 Cloud blades = smoother revenues. Dedicated Xbox Cloud servers could soften console‑cycle lumpiness by adding steadier B2B demand beginning 2026+.

Why AI/Semiconductor Investors Should Care: Semi‑custom console wins won’t drive AMD’s valuation the way Instinct AI accelerators and EPYC CPUs do, but they anchor R&D spend and wafer capacity, keeping fabs busy and cash flowing during PC or data‑center slowdowns. If the next cycle repeats FY‑2023’s ~$5–6 bn peak and handhelds add incremental billions, Gaming can swing from today’s drag back to a mid‑teens share of revenue—enough to pad earnings, negotiate better TSMC pricing, and fund AMD’s AI roadmap without diluting shareholders.
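The margin dynamic described above is simple weighted‑average arithmetic. The sketch below (the revenue weights are illustrative assumptions, not AMD’s reported mix) shows how a lower‑margin semi‑custom slice dilutes a ~48% corporate gross margin:

```python
def blended_margin(segments: list[tuple[float, float]]) -> float:
    """Revenue-weighted gross margin; segments = [(revenue, gross_margin), ...]."""
    total = sum(rev for rev, _ in segments)
    return sum(rev * gm for rev, gm in segments) / total

# Illustrative mix (assumed numbers, not AMD's actual figures):
# $20B of core business at 48% GM plus $5B of semi-custom silicon at 28% GM.
mix = [(20.0, 0.48), (5.0, 0.28)]
print(f"Blended gross margin: {blended_margin(mix):.1%}")  # prints 44.0%
```

The dilution is modest precisely because customer‑funded NRE keeps the semi‑custom programs cash‑flow positive even at those lower margins.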


Marvell Technology (NASDAQ: MRVL)
🚀 Marvell Doubles Down on Custom AI Silicon, Eyes 20% Market Share


What The Chip: On June 17, 2025, Marvell’s leadership rolled out a sweeping update at its Custom AI Investor Event. Management revealed that custom, AI‑focused silicon is now Marvell’s core growth engine, boosting the company’s addressable data‑center market to $94 billion by 2028 and securing 18 multi‑generational design wins across the world’s hyperscalers.

The Situation Explained:

🚀 TAM turbo‑charged: Marvell lifted its 2028 data‑center total addressable market from $75 billion to $94 billion—a 25% jump in just one year, driven by AI compute and interconnect demand.

🏆 18 sockets locked in: The team now counts 5 custom XPUs and 13 XPU‑attach chips in production or late design, up from three total sockets a year ago. CEO Matt Murphy put it plainly: “Custom is happening—in every cloud, today.”

📈 Even bigger pipeline: Beyond the wins, Marvell is chasing 50+ additional opportunities worth ~$75 billion in lifetime revenue; one‑third are full XPUs, two‑thirds are attach chips.

⚡ Tech edge keeps widening:

  • Demonstrated the industry’s first 400 Gb/s SerDes; power‑efficient die‑to‑die links hit 0.3 pJ/bit.

  • Unveiled 2 nm custom SRAM delivering 17× the bandwidth density and 66% lower standby power.

  • Showcased a custom HBM base die that frees up 1.7× more compute area and cuts memory I/O power by 75%.

  • New in‑package voltage regulation trims IR losses by 85%, slashing total product power by up to 15%.

🛠 Full‑service moat: From architecture co‑design to advanced 4‑D packaging and co‑packaged optics, Marvell handles the entire stack—“We have a seat at every table for these opportunities,” said SVP Will Chu.

💰 Share and margin runway: Management targets 20% share of the $55 billion custom compute segment by 2028 while keeping gross margins at the high end of the company’s model, helped by customer‑funded NRE.

⚠️ Execution still critical: Success hinges on hitting tape‑outs in 3 nm, 2 nm, and A16 on schedule and scaling supply—areas where Marvell claims long‑term contracts and 5‑year capacity planning with foundry partners.

Why AI/Semiconductor Investors Should Care: Custom silicon is rapidly becoming the “picks‑and‑shovels” layer of the AI gold rush. By stitching together leading‑edge SerDes, in‑package memory, and power‑optimized architectures, Marvell positions itself as the go‑to enabler for hyperscalers that must squeeze every watt and millisecond from colossal AI clusters. If management executes, the leap from 13% data‑center share in 2024 to 20% by 2028 could accelerate revenue growth well above consensus—and expand margins as customer NRE offsets R&D. For investors, Marvell’s bet on bespoke silicon isn’t just about bigger chips; it’s about carving a durable, technology‑rich moat in the heart of AI infrastructure.
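Marvell’s headline targets reduce to quick arithmetic worth checking yourself. A minimal sketch (the `pct_change` helper name is my own) of the TAM revision and the revenue implied by a 20% share of the custom‑compute segment:

```python
def pct_change(old: float, new: float) -> float:
    """Fractional change from old to new."""
    return (new - old) / old

tam_prior, tam_now = 75.0, 94.0   # $B, 2028 data-center TAM before/after the event
custom_compute = 55.0             # $B custom-compute slice of that TAM
target_share = 0.20               # management's 2028 share target

print(f"TAM revision: {pct_change(tam_prior, tam_now):+.1%}")  # prints +25.3%
print(f"Implied custom-compute revenue: ${custom_compute * target_share:.0f}B")  # prints $11B
```

That ~$11 billion implied custom‑compute run rate is the number to weigh against Marvell’s total data‑center revenue trajectory when judging whether the 20% share target moves the needle.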


Youtube Channel - Jose Najarro Stocks
X Account - @_Josenajarro

Disclaimer: This article is intended for educational and informational purposes only and should not be construed as investment advice. Always conduct your own research and consult with a qualified financial advisor before making any investment decisions.


The overview above provides key insights every investor should know, but subscribing to the premium tier unlocks deeper analysis to support your Semiconductor, AI, and Software journey. Behind the paywall, you’ll gain access to in-depth breakdowns of earnings reports, keynotes, and investor conferences across semiconductor, AI, and software companies. With multiple deep dives published weekly, it’s the ultimate resource for staying ahead in the market. Support the newsletter and elevate your investing expertise—subscribe today!

[Paid Subscribers] Custom Silicon Takes Center Stage at Marvell’s AI Investor Event

Date of Event: June 17, 2025 – Marvell Technology, Inc. Custom AI Investor Event

Executive Summary

*Reminder: We do not discuss valuations, only an analysis of the earnings calls and conferences.

Marvell Technology (NASDAQ: MRVL) devoted its 2025 AI Investor Event almost entirely to one theme: custom, cloud‑optimized silicon. Chief Executive Officer Matt Murphy and a parade of senior technologists outlined how hyperscale and emerging cloud providers are reshaping their infrastructure around application‑specific chips—and how Marvell has positioned itself to capture 20 percent of that market by 2028.

Key numbers and milestones highlighted on stage include:

  • $94 billion total data‑center addressable market (TAM) for Marvell in 2028, up 25 percent from last year’s projection.

  • $55 billion of that TAM sits in custom compute, split into $40 billion for main compute dies (the “XPU” sockets) and $15 billion for companion devices (“XPU attach”) such as memory poolers, scale‑up fabrics, and specialized network interface controllers.

  • 18 multigenerational custom sockets already won—five XPUs and thirteen XPU‑attach—at the top four United States hyperscalers and at six “emerging hyperscalers.”

  • 50 additional active engagements representing up to $75 billion in lifetime revenue potential.

  • Gross margin for these programs is “on the higher end of the custom range,” according to Chief Financial Officer Willem Meintjes, supported by co‑funded non‑recurring engineering (NRE) contributions.

“Custom is happening. It’s happening in every cloud, and it’s here today,” Murphy declared, noting that hyperscaler capital expenditure on data‑center infrastructure has leapt from $150 billion in 2023 to more than $300 billion this year.


Growth Opportunities

Murphy identified cloud‑optimized silicon as “one of the largest growth drivers in the entire semiconductor industry.” Cloud providers are no longer satisfied with general‑purpose graphics processing units (GPUs) or traditional server processors; instead, they are commissioning bespoke accelerators tuned for training, inference, video, or autonomous‑vehicle workloads.

This post is for paid subscribers

© 2025 Jose Najarro