AI Memory Squeeze: Micron Raises Q4 on Pricing; onsemi Doubles AI Power; GlobalFoundries Pushes Optics
Welcome, AI & Semiconductor Investors,
What if AI’s next bottleneck isn’t GPUs, but memory, power, and optics? Micron lifted FQ4 guidance to about $11.2B in revenue and 44.5% gross margin on pricing, not units, and says its calendar 2026 HBM supply is on track to sell out, while onsemi’s AI data‑center power revenue doubled again and GlobalFoundries is leaning into specialty nodes and photonics toward a 30% gross‑margin target despite 2025 tariff costs. — Let’s Chip In
What The Chip Happened?
🚀 Micron: Pricing Power Pops, HBM Booked for ’26
⚡ onsemi: Stabilization Now, SiC Conviction & AI Power Ramps
🔗 GlobalFoundries leans into specialty nodes, optics, and auto amid tariff tailwinds
[Micron Lifts Q4 Outlook as AI Memory Pricing Surges]
Read time: 7 minutes
Get 15% OFF FISCAL.AI — ALL CHARTS ARE FROM FISCAL.AI —
Micron Technology, Inc. (NASDAQ: MU)
🚀 Micron: Pricing Power Pops, HBM Booked for ’26
What The Chip: On August 11, 2025, Micron raised its fiscal Q4 outlook at KeyBanc’s Technology Leadership Forum: revenue to $11.2B ± $0.1B (from $10.7B), gross margin to 44.5% ± 0.5 pp (from 42%), and non‑GAAP EPS to $2.85 ± $0.07 (from $2.50). EVP & Chief Business Officer Sumit Sadana said the upside came “very significantly [from] pricing improvements,” with shipments largely in line; tariff‑related pull‑ins weren’t the driver.
The Situation Explained:
💰 Pricing-led beat, not volume. Micron lifted FQ4 revenue, margin and EPS guidance while keeping shipment plans “largely consistent” with prior guidance—clear evidence of ASP strength across PCs, mobile, and data center.
🧠 AI data center demand stays hot. Sadana flagged 2025 CapEx for the top five hyperscalers at >$400B, with a “large chunk” aimed at AI infrastructure. The HBM‑to‑DDR5 wafer “trade ratio” of ~3:1 continues to tighten non‑HBM DRAM supply, supporting pricing across DDR5/LPDDR5.
🧩 HBM ramp & pipeline. HBM3E 12‑high yields ramped faster than the prior 8‑high, and 12‑high volume now exceeds 8‑high. Micron has sampled HBM4 and, after recent customer discussions, expects to sell out CY’26 HBM supply (HBM3E 12‑high + HBM4). Quote: “We’re confident we’ll be able to sell out our CY ’26 supply.”
⚡ Differentiation where it matters: power. Management claims its HBM today delivers 30% lower power than the next competitor and called it the “best HBM product on the planet.” In power‑constrained data centers, every watt saved can expand cluster size or reduce TCO (see the sketch after this list). Micron also stressed U.S. proximity (R&D teams co‑located/time‑zoned with customers) and its $200B U.S. commitment ($150B manufacturing, $50B R&D), with a front‑end fab underway in Idaho as a de‑risking angle amid tariffs.
🧱 Node choice advantage (HBM4). Micron will build HBM4 on its mature 1‑beta DRAM node (same node as HBM3E), while at least one competitor targets a newer 1C node—potentially more ramp risk.
🧬 HBM4E goes custom—fewer suppliers, higher ROI. Customers want logic in the HBM base die (some GPU functions embedded), turning parts of DRAM into an ASIC‑like business with deeper, multi‑year codevelopment. Sadana expects many buyers will practically work with only 1–2 suppliers per custom device—supporting premium pricing and higher ROI vs. commodity DRAM.
📲 Edge AI is next. Micron sees on‑device AI lifting smartphone DRAM from ~8GB → ~12GB averages over the next product cycles, plus AI moving into smart cars and even eyeglasses. Micron also highlighted shipping LPDRAM into data centers (lower‑power DRAM historically used in phones/laptops) and form‑factor work like SOCAMM/CAMM modules.
⚠️ Watch these. Tariff‑related pull‑ins exist but weren’t the quarter’s driver; DDR4/LPDDR4 shortages persist but DDR4 is a low‑single‑digit revenue slice, so limited impact. Key risks: HBM capacity/packaging constraints, timing of GPU/ASIC qualifications for HBM4, and hyperscaler CapEx sensitivity.
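To put the power point in perspective, here is a minimal, purely illustrative sketch of how lower‑power memory can expand a cluster under a fixed facility power budget. Only the 30% HBM power‑savings figure comes from Micron’s comments; the power budget, per‑accelerator draw, and HBM’s share of accelerator power are hypothetical assumptions, not company figures.

```python
# Hypothetical sketch: lower-power HBM frees watts that can be spent on more accelerators.
# Only the 30% HBM power-savings figure comes from Micron's comments; everything else is assumed.

CLUSTER_POWER_BUDGET_W = 1_000_000   # assumed fixed facility power budget (1 MW)
ACCEL_POWER_W = 1_000                # assumed power draw per accelerator with baseline HBM
HBM_SHARE_OF_ACCEL_POWER = 0.20      # assumed fraction of accelerator power consumed by HBM
HBM_POWER_SAVINGS = 0.30             # Micron's claimed 30% lower HBM power vs. the next competitor

hbm_power_w = ACCEL_POWER_W * HBM_SHARE_OF_ACCEL_POWER              # watts drawn by the HBM stacks
improved_accel_w = ACCEL_POWER_W - hbm_power_w * HBM_POWER_SAVINGS  # per-accelerator draw with lower-power HBM

baseline_count = CLUSTER_POWER_BUDGET_W // ACCEL_POWER_W
improved_count = CLUSTER_POWER_BUDGET_W // improved_accel_w

print(f"Accelerators per 1 MW, baseline HBM:        {baseline_count:.0f}")
print(f"Accelerators per 1 MW, 30% lower-power HBM: {improved_count:.0f}")
```

Under these assumptions, the same megawatt supports roughly 6% more accelerators, which is the sense in which memory power translates directly into cluster size or TCO.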
Why AI/Semiconductor Investors Should Care: A pricing‑driven guide raise, without extra units, signals a tight supply‑demand setup that can sustain margins into FY‑26. HBM remains the choke point of AI compute, and Micron’s claims of lower power, faster 12‑high ramp, and sold‑out CY’26 position the company for premium ASPs and structurally better ROI as HBM4/4E arrive. The emerging custom base‑die model could chip away at commodity pricing dynamics in DRAM; if it sticks, MU’s gross margin ceiling may rise. Balance that with execution risk in HBM node transitions, customer quals, and any wobble in hyperscaler spend—or a policy shock from new tariffs—and you’ve got the key checklist for MU holders into the next leg of the AI build‑out.
Get 15% OFF FISCAL.AI — ALL CHARTS ARE FROM FISCAL.AI —
ON Semiconductor Corporation (NASDAQ: ON)
⚡ onsemi: Stabilization Now, SiC Conviction & AI Power Ramps
What The Chip: At the KeyBanc Capital Markets Technology Leadership Forum on August 11, 2025, CEO Hassane El‑Khoury and CFO Thad Trent said onsemi sees stabilization—not yet a full recovery—with 2H expected to outgrow 1H as backlog layers in. Auto is mixed (China strong, U.S./Europe weak), while AI data center revenue doubled YoY for the second straight quarter.
The Situation Explained:
🔄 Cycle check: “It’s stabilization… I need to see demand driving replenishment and replenishment driving order patterns before I call a recovery,” El‑Khoury said. The team monitors fill rates, turns needed, and order behavior versus historical patterns.
🚗 Auto snapshot: Q2 auto declined for onsemi, but management called it the bottom and expects Q3 auto up. China automotive +23% in Q2, driven by EVs; U.S. and Europe remain soft. xEV (BEV + PHEV) still grows, aided by new programs like a Schaeffler plug‑in hybrid and a new China PHEV platform.
🧱 Silicon carbide (SiC) momentum: SiC usage in current production remains low, but about 90% of RFQs now specify SiC—setting up secular growth even if EV units are flat. onsemi claims device leadership: “Our 4th‑gen trench is a leapfrog over what’s in the market.” The company qualifies both internal and external substrates (including Chinese vendors) but doesn’t sell substrates, using a mixed model for tariff and supply resilience.
📦 Orders & lead times: Lead times sit around 14 weeks. “We entered the quarter better than 90 days ago,” Trent said, noting lower turns needed to hit the guide and faster backlog layering versus 90–120 days ago. The 2025 tariff headlines caused an early‑Q2 pause; orders resumed later in the quarter.
🖥️ AI data center uptrend: AI data center revenue doubled YoY again. onsemi is attacking from high power first (PSUs, battery‑backups >100 kW), then moving down the power tree with controllers, POL, and a 5×5 SPS (compact synchronous power stage). Treo products are bringing core‑level PMICs; “we are in production and sampling the dual,” El‑Khoury said. (Scale not broken out.)
⚡ 800‑V transition: onsemi focuses investors on $ per kW of content, not just device voltage. 1,200‑V parts carry higher ASPs than 650–750‑V, but modules and total system power drive the real ASP uplift.
🧹 Portfolio cleanup & exits: Roughly 5% of 2025 revenue won’t repeat in 2026 (Intelligent Sensing Group (ISG) pruning, planned exits, end‑of‑life products). ISG: onsemi will walk away from $50–$100M in 2026 as it repositions to machine vision. Planned exits were $300M for 2025; tracking ~$200M this year with ~$100M pushed into 2026. This separates legacy headwinds from higher‑margin growth.
💵 Capital returns & cash: Management expects to generate FCF through the transition and has raised buybacks to 100% of FCF—“we’ve done 107% YTD,” El‑Khoury noted.
Why AI/Semiconductor Investors Should Care: onsemi is leaning into two durable spend pockets—EV power (SiC) and AI data‑center power—where its device and packaging know‑how translate into content per system and margin mix. Stabilization plus 2H>1H order patterns could set up a recovery if replenishment finally kicks in, but investors should watch Western auto softness, tariff risk, and short lead times that keep business more turns‑driven and forecasts noisier. If management executes on the portfolio exits while scaling SiC and AI power, the mix shift and buybacks could support EPS and returns even if top‑line growth stays choppy near term.
GlobalFoundries (NASDAQ: GFS)
🔗 GlobalFoundries leans into specialty nodes, optics, and auto amid tariff tailwinds
What The Chip: At the KeyBanc Capital Markets Technology Leadership Forum on August 11, 2025, CFO John Hollister and SVP Sam Franklin outlined GF’s playbook: focus on specialty nodes (12/14nm–180nm), expand cross‑fab fungibility across Singapore/Dresden/Malta, and ride AI‑driven optics and auto. They also addressed 2025 tariffs (about $20M 2H cost) and mapped a path from 25% to 30% gross margin via mix and utilization.
Details:
🧩 Strategy & process mix. GF positions as a top‑5 foundry focused on analog/mixed‑signal from 12/14nm FinFET to 180nm, highlighting 130NSX and 22FDX (22nm FD‑SOI: lower power, RF‑friendly) plus silicon photonics on 45nm/180nm. Hollister: 22FDX enables “very low‑power operation, outstanding RF performance and high levels of CMOS integration.” Management believes this specialty focus supports double‑digit multi‑year revenue CAGR.
🏭 Footprint & fungibility = supply surety. Three 300mm fabs—Singapore, Dresden (Germany), Malta (NY)—with fungibility already between Singapore/Dresden and now expanding 22/28/40nm qualifications into Malta (originally a 12/14nm FinFET corridor). “It’s almost never been more important … to shore up supply,” Franklin said, noting customers can qualify multiple GF sources.
🛡️ Geopolitics & tariffs. With U.S. tariff noise (the moderator cited “100% tariffs” on non‑U.S. chips), GF says its “global operation” that’s also localized by region is resonating with customers. H2 tariff impact is ~$20M (<1% of COGS); GF plans pricing actions, alternate sourcing, and exemption work. The new “China‑for‑China” strategy uses a local partner while GF controls tech transfer and operations—serving demand without adding an organic fab.
📈 End‑market mix & design wins. Smart mobile is ~40% of revenue; Q2 was strong and Q3 looks solid, but it remains a stable (not hyper‑growth) market. Automotive is a standout, with mid‑teens 2025 growth expected. IoT faces high inventories and softer consumer patterns. Comms infra/data center is a bright spot. Franklin cited “~200 design wins in Q2,” a record, bolstering future growth.
🔗 AI infrastructure via optics & SATCOM. 2025 silicon photonics ~ $200M revenue; SATCOM ~ $100M—together nearly half of the Comms/Infra/Data Center mix. Today’s growth is pluggables (rack‑to‑rack optics); co‑packaged optics (CPO)—embedding optics near the switch/ASIC for higher bandwidth/lower power—looks like a 2027+ revenue story and should be accretive, without displacing pluggables.
🚗 Auto engines broaden. Auto climbed from ~$100M (2020) to ~$1.2B (2024); management still sees mid‑teens growth in 2025. Beyond leading 40nm NVM MCUs, wins are building in sensing/safety/ADAS on 22FDX and power management on 130/55 BCD (bipolar‑CMOS‑DMOS). Silicon per car is rising: $500 → $750 → ~$1,000 next year → ~$1,500 longer term—supporting structural growth even if SAAR wobbles.
🧮 Margins & CapEx discipline. Targeting 30% gross margin from ~25% via richer mix and higher fab utilization (“low 80s” in 1H); a simple utilization sketch follows this list. GF invested $7B in capacity in prior years; now it’s in CapEx‑light mode for 2025, with a potential uptick late 2026. Priority: build “within the four walls”—Fab 8.2 (Malta) supported by the CHIPS Act, unutilized BTF capacity in Dresden, and options in Singapore—before any greenfield spend. Hollister also cited a “maniacal focus” on input costs.
⚔️ Competition & market size. On TSMC moving harder into mature nodes, Franklin pointed to a GF SAM of $70–$80B today expanding to >$120B by decade‑end—“plenty of room for both participants to thrive.” Watch items: smart‑mobile dependence, IoT digestion, tariff policy shifts, mature‑node pricing pressure, and CPO adoption timing.
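As referenced above, here is a minimal sketch of the utilization lever behind the 25% → 30% gross‑margin path: spreading fixed fab costs (depreciation, overhead) over more wafers lifts margin even before mix improves. All dollar figures and the cost split are illustrative assumptions; only the ~25% starting margin, “low 80s” utilization, and 30% target come from management’s comments.

```python
# Illustrative fixed-cost-absorption model for a foundry fab (all inputs assumed, not GF data).

ASP_PER_WAFER = 3_000            # assumed blended revenue per wafer
VARIABLE_COST_PER_WAFER = 1_300  # assumed materials/labor/consumables per wafer
FIXED_COST = 780_000             # assumed quarterly depreciation + overhead (arbitrary units)
CAPACITY_WAFERS = 1_000          # assumed quarterly wafer capacity (same arbitrary units)

def gross_margin(utilization: float) -> float:
    """Gross margin at a given fab utilization, holding price and cost structure constant."""
    wafers = CAPACITY_WAFERS * utilization
    revenue = wafers * ASP_PER_WAFER
    cogs = wafers * VARIABLE_COST_PER_WAFER + FIXED_COST
    return (revenue - cogs) / revenue

for u in (0.82, 0.90, 0.95):
    print(f"Utilization {u:.0%}: gross margin {gross_margin(u):.1%}")
```

In this toy model, moving utilization from the low 80s toward the mid‑90s closes most of the gap to 30% on its own; richer mix (more photonics and auto content per wafer) would do the rest.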
Why AI/Semiconductor Investors Should Care
Optical I/O is becoming the plumbing of AI data centers, and GF already pegs ~$200M photonics revenue in 2025 with a line of sight to CPO later this decade—an under‑appreciated earnings lever as bandwidth and power constraints bite. Pair that with secular auto content growth and a fungible, three‑region footprint that benefits from 2025’s tariff regime, and you get a clearer path to utilization‑driven margin expansion toward 30%. The setup isn’t risk‑free (IoT inventory, smartphone stability, mature‑node competition), but the mix shift to optics/auto and disciplined “build‑within‑the‑walls” CapEx argue GF can grow and compound in a supply‑constrained, geopolitically sensitive cycle.
Youtube Channel - Jose Najarro Stocks
X Account - @_Josenajarro
Get 15% OFF FISCAL.AI — ALL CHARTS ARE FROM FISCAL.AI —
Disclaimer: This article is intended for educational and informational purposes only and should not be construed as investment advice. Always conduct your own research and consult with a qualified financial advisor before making any investment decisions.
The overview above provides key insights every investor should know, but subscribing to the premium tier unlocks deeper analysis to support your Semiconductor, AI, and Software journey. Behind the paywall, you’ll gain access to in-depth breakdowns of earnings reports, keynotes, and investor conferences across semiconductor, AI, and software companies. With multiple deep dives published weekly, it’s the ultimate resource for staying ahead in the market. Support the newsletter and elevate your investing expertise—subscribe today!
[Paid Subscribers] AMD’s Q2 2025: Record Revenue, MI350 Ramp, and Rising CPU Demand
Date of Event: August 11, 2025 — KeyBanc Capital Markets Technology Leadership Forum
Executive Summary
*Reminder: We do not talk about valuations, just an analysis of the earnings/conferences
Micron raised fiscal Q4 2025 guidance in three key places:
Revenue: midpoint up $0.5 billion to $11.2 billion (±$100 million), a 4.7% lift versus prior guidance.
Gross margin: midpoint up 2.5 percentage points to 44.5% (±0.5 pp).
Non‑GAAP EPS: midpoint up $0.35 to $2.85 (±$0.07), a 14% improvement.
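As a quick sanity check, the percentage lifts follow directly from the midpoints; here is a minimal calculation (figures from the guidance above, rounding ours):

```python
# Verify the guidance-raise math from Micron's prior and updated FQ4 midpoints.
prior = {"revenue_b": 10.7, "gross_margin_pct": 42.0, "eps": 2.50}
new   = {"revenue_b": 11.2, "gross_margin_pct": 44.5, "eps": 2.85}

rev_lift_pct = (new["revenue_b"] - prior["revenue_b"]) / prior["revenue_b"] * 100
gm_lift_pp = new["gross_margin_pct"] - prior["gross_margin_pct"]
eps_lift_pct = (new["eps"] - prior["eps"]) / prior["eps"] * 100

print(f"Revenue midpoint: +${new['revenue_b'] - prior['revenue_b']:.1f}B ({rev_lift_pct:.1f}% lift)")  # ~4.7%
print(f"Gross margin midpoint: +{gm_lift_pp:.1f} pp")                                                  # +2.5 pp
print(f"EPS midpoint: +${new['eps'] - prior['eps']:.2f} ({eps_lift_pct:.0f}% lift)")                   # ~14%
```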
The drivers are clear: pricing gains across data center, PC, and mobile, with shipments tracking prior plans. Management emphasized resilient AI infrastructure spending—Sadana cited more than $400 billion of 2025 capital expenditures among the top five hyperscalers—and the HBM (High Bandwidth Memory) mix effect that constrains non‑HBM DRAM supply. The wafer trade ratio “is approximately 3:1 HBM wafers to DDR5,” he said, a trade‑off that “is creating a supply squeeze for the non‑HBM portion of the market,” supporting higher average selling prices.
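To make the wafer math concrete, here is a hypothetical sketch of that squeeze; the ~3:1 trade ratio comes from Sadana’s comment, while the total wafer count and HBM mix shares are illustrative assumptions.

```python
# Illustrative sketch of the ~3:1 HBM-to-DDR5 wafer trade ratio: delivering one DDR5 wafer's
# worth of bits as HBM consumes roughly three wafers, so shifting starts to HBM shrinks
# conventional DRAM supply faster than it adds HBM bits. Wafer counts and shares are assumed.

TOTAL_DRAM_WAFER_STARTS = 100.0  # assumed industry wafer starts (arbitrary units)
TRADE_RATIO = 3.0                # ~3 HBM wafers per DDR5-wafer-equivalent of bits

for hbm_share in (0.10, 0.20, 0.30):
    hbm_wafers = TOTAL_DRAM_WAFER_STARTS * hbm_share
    non_hbm_wafers = TOTAL_DRAM_WAFER_STARTS - hbm_wafers   # starts left for conventional DRAM
    hbm_bits_ddr5_equiv = hbm_wafers / TRADE_RATIO          # bit output of those HBM wafers
    print(f"HBM share of starts {hbm_share:.0%}: {non_hbm_wafers:.0f} conventional wafers remain; "
          f"the {hbm_wafers:.0f} HBM wafers yield only ~{hbm_bits_ddr5_equiv:.1f} DDR5-wafer-equivalents of bits")
```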
Notable comments also touched on a long AI adoption runway (from today’s ANI—artificial narrow intelligence—to AGI and eventually ASI) and the expected proliferation of on‑device AI that pushes memory content higher in edge devices, starting with smartphones.
Growth Opportunities
AI infrastructure is the near‑term engine. Micron sees “very strong” data center demand, backed by rising customer CapEx plans. That demand anchors HBM3E today and paves the path to HBM4 and then HBM4E, where integration of custom logic in the base die will tie memory more tightly to leading AI accelerators.