Welcome, AI & Semiconductor Investors.
Nvidia's upcoming B300 & GB300 GPUs promise a massive leap in memory and performance, just in time to shake up the entire AI supply chain.
But how do a Chinese open-source LLM breakthrough and a shifting memory market tie it all together, and what does it mean for investors right now? Let's find out...
What The Chip Happened?
B300 & GB300: Analysts Share Nvidia's Next GPU Leap
DeepSeek-AI's Low-Cost Open-Source LLM: A CapEx Disruptor?
Micron's Memory Storm: DRAM Downturn, But HBM Could Be a Silver Lining
Read time: 6 minutes
Nvidia (NASDAQ: NVDA)
B300 & GB300: Analysts Share Nvidia's Next GPU Leap
What The Chip: While Nvidia hasn't publicly revealed these GPUs, semiconductor research group SemiAnalysis just released fresh details about the B300 and GB300 platforms. The leaks point to major performance boosts and a sweeping rework of Nvidia's supply chain model.
Details:
50% More Firepower: The B300 GPU, built on TSMC's 4NP node, promises up to 50% higher FLOPS than the B200, thanks partly to a TDP increase to 1.4 kW (from 1.2 kW).
Bigger Memory, Bigger Gains: Both B300 and GB300 move to 12-Hi HBM3E stacks (up to 288 GB), boosting reasoning-model inference and enabling longer sequences.
Supply Chain Shakeup: SemiAnalysis reports that Nvidia will no longer provide a complete GPU board, only the SXM "Puck" module, the Grace CPU, and a new HMC (hardware management controller) from Axiado. This lets hyperscalers customize boards further, while complicating final design validation.
Winners & Losers: Nvidia's board partners like Wistron lose some share, while Foxconn (FII) gains by assembling the new SXM Puck. VRM suppliers could get reshuffled; some may lose business while newcomers scoop up share.
Margin Watch: As more components come from suppliers outside Nvidia's direct umbrella, its gross margins could shift.
Deployment Timelines: SemiAnalysis indicates that design complexities will accelerate some hyperscalers' roadmaps while slowing others. Microsoft, for instance, is said to be balancing GB200 orders before fully embracing the B300/GB300 wave.
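One back-of-the-envelope implication of the first bullet (my calculation, not a SemiAnalysis figure): if FLOPS rise up to 50% while TDP climbs from 1.2 kW to 1.4 kW, performance per watt improves by roughly 29%, so the gain is not purely from burning more power.

```python
# Back-of-the-envelope: B300 vs. B200 performance per watt,
# using the figures quoted above (up to +50% FLOPS, TDP 1.2 kW -> 1.4 kW).
flops_gain = 1.50          # up to 50% more FLOPS
power_gain = 1.4 / 1.2     # TDP increase
perf_per_watt_gain = flops_gain / power_gain
print(f"Perf/W gain: {perf_per_watt_gain:.2f}x")  # -> Perf/W gain: 1.29x
```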
Why AI/Semiconductor Investors Should Care: SemiAnalysis's data suggests that Nvidia's push into large-memory inference solutions remains a top driver of advanced AI workloads. Investors should watch how this new modular supply chain unfolds, especially its impact on gross margins and on emerging winners in the GPU ecosystem. With hyperscalers demanding more flexibility, suppliers who adapt quickly stand to gain in 2025 and beyond.
Moore Semiconductor Investing
Unlock Q3 Semiconductor Earnings - 50% OFF
What The Chip: Get a front-row seat to the financials shaping the semiconductor industry. This continuously updated e-book by Jose Najarro distills the latest quarterly insights, from wafer production trends to AI chip breakthroughs, into a single comprehensive resource.
Details:
Dynamic Updates: Starting with giants like TSMC and ASML, the e-book expands as Q3 2024 earnings roll in; it already covers more than 30 companies.
Huge Value for Half the Price: For a limited time, the e-book is discounted from $49.07 USD to $24.54 USD, offering a robust market guide at a significant value.
Expert Analysis: Curated by Jose Najarro (Master's in Electrical Engineering, contributor at The Motley Fool), delivering reliable, accessible breakdowns.
Key Metrics & Trends: Follow critical financial indicators, market shifts, and executive comments shaping the sector's trajectory.
Broad Coverage: From traditional chipmakers to cutting-edge AI semiconductor players, get the full picture as it emerges.
Why AI/Semiconductor Investors Should Care: This evolving earnings handbook gives you a strategic edge. Understanding quarterly earnings data is crucial for gauging industry health, discovering new growth leaders, and aligning your investment approach with emerging technological waves.
Disclaimer: For educational and informational purposes only. Not financial advice. Consult with a qualified professional before making any investment decisions.
DeepSeek-AI (Unlisted)
DeepSeek-AI's Low-Cost Open-Source LLM: A CapEx Disruptor?
What The Chip: A Chinese LLM company, DeepSeek-AI, just unveiled an impressive 671B-parameter open-source model, trained on cheaper, export-approved Nvidia GPUs. The big surprise? It delivers performance on par with high-end solutions, potentially rewriting CapEx projections for AI training.
Details:
Minimal Budget, Major Impact: Training reportedly cost around $5.6M using Nvidia's lower-specced H800 GPUs, significantly cheaper than standard high-end U.S.-based HPC systems.
Mixture-of-Experts (MoE): DeepSeek-V3 uses an MoE approach, activating only a fraction of its parameters for each token, slashing real-time hardware needs.
Open-Source Momentum: The release closes the gap between closed-source giants and community-driven projects, reinforcing the narrative that major LLM breakthroughs can come from outside the U.S. market.
Fast & Efficient: DeepSeek-AI leveraged advanced pipeline parallelism, memory optimizations, and FP8 mixed precision, pushing hardware utilization to new heights.
Long Context Windows: With up to 128K tokens of context, the model handles tasks like code interpretation and multi-step math across extended inputs.
Shifting CapEx Outlook?: This achievement raises the question: if advanced LLM performance can be attained on cheaper, export-approved chips, will data centers need the priciest GPUs for every workload?
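To make the MoE point concrete, here is a minimal sketch of top-k expert routing in NumPy. The sizes, router, and expert weights are toy values for illustration, not DeepSeek-V3's actual architecture; the idea it shows is that each token only exercises `top_k` of the `n_experts` expert networks, so compute per token scales with the active fraction rather than total parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration only (DeepSeek-V3's real config is far larger).
d_model, n_experts, top_k = 64, 16, 2
tokens = rng.standard_normal((4, d_model))                     # 4 tokens
gate_w = rng.standard_normal((d_model, n_experts))             # router weights
expert_w = rng.standard_normal((n_experts, d_model, d_model))  # one FFN matrix per expert

def moe_layer(x):
    scores = x @ gate_w                           # (tokens, experts) router logits
    top = np.argsort(scores, axis=1)[:, -top_k:]  # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # Softmax over only the selected experts' scores.
        w = np.exp(scores[t, top[t]])
        w /= w.sum()
        for weight, e in zip(w, top[t]):
            # Only the top_k chosen experts ever run for this token.
            out[t] += weight * (x[t] @ expert_w[e])
    return out

y = moe_layer(tokens)
print(y.shape)  # (4, 64)
print(f"active experts per token: {top_k}/{n_experts}")
```

With 2 of 16 experts active per token here, only 1/8 of the expert weights are touched per token; that is the mechanism behind the "fraction of parameters per token" claim.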
Why AI/Semiconductor Investors Should Care: DeepSeek-AI's move spotlights a potential inflection point where cost-efficient GPU deployments deliver powerful AI models. Investors need to watch for changing spending patterns at hyperscalers and startups alike, especially if competitive performance can come from hardware once deemed "inferior." This could reshape supply-chain alignments and margin considerations in the near future.
Micron Technology (NASDAQ: MU)
Micron's Memory Storm: DRAM Downturn, But HBM Could Be a Silver Lining
What The Chip: Amid weak consumer DRAM demand, Micron and others see memory softness persisting well into 1H25. The silver lining: HBM (High Bandwidth Memory) is built from DRAM, so surging HBM demand could help rationalize overall supply.
Details:
Prolonged DRAM Weakness: Silicon Motion and Micron both forecast tepid DRAM demand through early 2025, citing weaker PC and smartphone sales.
HBM's Hidden Contribution: HBM production consumes roughly 3x the DRAM wafer capacity per bit of standard parts, potentially tightening overall DRAM supply even while consumer demand lags. The catch: this only helps memory companies that can make HBM, and at the moment that means just SK hynix and Micron.
Chinese Memory Rise: CXMT's rapid DDR5 ramp (from 2% to 10% market share by year-end 2024) signals fiercer competition for leaders like Samsung, Micron, and SK hynix.
Price War Looms: Chinese makers undercut rivals by 10-20%, per Economic Daily News, turning the domestic Chinese market into a battleground where export restrictions might matter less.
NAND Outlook: Silicon Motion sees NAND demand lagging until at least mid-2025, with a fresh growth cycle possibly kicking off in 2026.
Why AI/Semiconductor Investors Should Care: Although the DRAM and NAND slump could drag earnings near-term, HBM uptake in AI applications may act as a demand lever, helping balance the supply equation. As Chinese memory makers gain ground, watch pricing pressure and potential supply chain shifts for Micron, SK hynix, Samsung, and emerging players.
YouTube Channel - Jose Najarro Stocks
Semiconductor Q3 Earnings Book - 50% OFF
X Account - @_Josenajarro
Disclaimer: This article is intended for educational and informational purposes only and should not be construed as investment advice. Always conduct your own research and consult with a qualified financial advisor before making any investment decisions.