TL;DR
- Broadcom has quietly become the second-most important AI semiconductor company after NVIDIA, with AI-related revenue (custom XPU chips + networking) estimated at $16–20 billion in fiscal 2025 and growing 40–50% annually.
- The XPU custom chip business serves three confirmed hyperscaler customers — Google, Meta, and ByteDance — with a potential fourth (Apple or Microsoft) that could add $5–8 billion in incremental revenue. CEO Hock Tan has guided to a $60–90 billion serviceable addressable market by 2027.
- AI networking is the overlooked growth driver. Broadcom's Ethernet-based switching and NIC silicon is gaining share against NVIDIA's InfiniBand in large-scale AI clusters, particularly as hyperscalers seek vendor diversity and cost optimization.
- The VMware acquisition ($69 billion, closed November 2023) has been more accretive than bears expected, with annualized subscription revenue conversion tracking ahead of schedule and operating margins improving quarter over quarter.
- At ~30x forward earnings, Broadcom is not cheap but offers better risk-adjusted AI exposure than NVIDIA for investors concerned about customer concentration and valuation extremes. We believe Broadcom is the most compelling risk-reward in AI semiconductors today.
The Quiet AI Giant: How Broadcom Became Essential
Broadcom does not have an AI narrative problem. It has a marketing problem. While NVIDIA's Jensen Huang commands keynote stages in a leather jacket and Palantir's Alex Karp philosophizes about Western civilization, Broadcom's Hock Tan delivers deliberately understated earnings calls, provides conservative guidance, and lets the numbers speak. This communication style has contributed to a persistent valuation discount relative to Broadcom's actual AI exposure — a discount we believe represents an opportunity.
Consider the facts. Broadcom designs the custom AI accelerator chips (XPUs) that power Google's TPU infrastructure. It designs custom chips for Meta's AI training and inference workloads. It designs custom chips for ByteDance's recommendation and video processing systems. These are three of the five largest AI compute buyers on Earth. In parallel, Broadcom supplies the networking silicon — switches, routers, and network interface cards — that connects GPU and TPU clusters in every major AI data center globally. If NVIDIA provides the compute engine for the AI revolution, Broadcom provides both alternative engines and the nervous system that ties them together.
Total AI-related semiconductor revenue was approximately $12.2 billion in fiscal 2024 (ending October) and is tracking toward $16–20 billion in fiscal 2025. This represents roughly 45% of Broadcom's total semiconductor revenue, up from 15% just two years ago. The pace of this transformation is extraordinary and not yet fully reflected in the stock's valuation.
XPU Custom Chips: The Anti-NVIDIA Play
Why Hyperscalers Want Custom Silicon
The motivation for custom AI chips is straightforward: cost and efficiency. NVIDIA's H100 and B200 GPUs are general-purpose accelerators designed to run any AI workload. This flexibility comes at a price — literally (H100s cost $25,000–40,000 each) and in terms of computational efficiency. A chip designed for one specific task will always outperform a general-purpose chip on that task, just as a Formula 1 car outperforms a Toyota Camry on a racetrack.
Google understood this earliest. Its first TPU (Tensor Processing Unit), designed in collaboration with Broadcom, was deployed in 2015 — years before the current AI boom. TPUs are optimized for the matrix multiplications that dominate neural network computation, and they power the majority of Google's internal AI workloads: Search ranking, YouTube recommendations, Gmail spam filtering, Translate, and Gemini inference. Google has never disclosed the exact number of TPUs deployed, but estimates range from 1–2 million chips across its global data center fleet.
Meta followed a similar logic. Its MTIA (Meta Training and Inference Accelerator) chips, designed with Broadcom, are optimized for the recommendation models that drive Facebook and Instagram ad targeting. These models are the economic engine of Meta's $160 billion annual revenue — even a 5% improvement in recommendation quality translates to billions in incremental ad revenue. ByteDance, operator of TikTok, has similar motivations for its custom AI chips, which optimize for the video recommendation algorithms that power the world's fastest-growing social media platform.
Broadcom's Role: Design IP, Not Manufacturing
A critical nuance: Broadcom does not manufacture custom chips. It provides the design IP, chip architecture, SerDes (high-speed serializer/deserializer) technology, packaging design, and TSMC manufacturing relationship that enables hyperscalers to bring custom silicon to market. Think of Broadcom as the architect and general contractor, while TSMC is the construction company and the hyperscaler is the building owner.
This model carries significant advantages. Broadcom does not bear the capital expenditure of building fabrication facilities (which cost $20–40 billion). It does not take inventory risk on finished chips. Its revenue is driven by design fees, licensing royalties on IP blocks, and per-chip royalties on manufactured units. Gross margins on XPU revenue are estimated at 70–75%, comparable to NVIDIA's data center margins, but with lower capital intensity and more predictable revenue streams from multi-year design engagements.
The typical XPU engagement generates revenue across three phases: NRE (non-recurring engineering) fees of $100–300 million over 2–3 years during design, followed by per-unit royalties during volume production, followed by next-generation design engagements. This creates a compounding revenue stream where each customer relationship deepens over time rather than eroding. Google is now on its seventh-generation TPU with Broadcom — a 10-year relationship with increasing revenue per generation.
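The three-phase economics described above can be made concrete with a minimal revenue sketch. All figures here (a $200 million NRE fee spread over two design years, the unit volumes, and a $300 per-chip royalty) are hypothetical illustrations for modeling purposes, not disclosed Broadcom contract terms:

```python
def engagement_revenue(nre_total: float, nre_years: int,
                       units_per_year: list[float],
                       royalty_per_unit: float) -> list[float]:
    """Year-by-year revenue for one hypothetical XPU engagement:
    NRE fees spread evenly over the design phase, followed by
    per-unit royalties once the chip reaches volume production."""
    design_phase = [nre_total / nre_years] * nre_years
    production_phase = [units * royalty_per_unit for units in units_per_year]
    return design_phase + production_phase

# Illustrative: $200M NRE over 2 years, then 0.5M / 1.0M / 1.5M units
# shipped per year at an assumed $300 royalty per chip.
rev = engagement_revenue(200e6, 2, [0.5e6, 1.0e6, 1.5e6], 300.0)
# rev = [100e6, 100e6, 150e6, 300e6, 450e6] -- revenue ramps as volume grows
```

Even with these toy numbers, the shape matters: royalty revenue in the production years dwarfs the design-phase fees, and a next-generation engagement layers a new NRE stream on top of the prior generation's royalties.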
AI Networking: The Overlooked $15 Billion Opportunity
Investors fixated on the GPU and custom chip narrative are missing the networking story. Training a large language model on 10,000+ GPUs requires each GPU to communicate with thousands of others, exchanging gradient updates, activation values, and model parameters at terabit-per-second speeds. The networking fabric that connects these GPUs is as critical to training performance as the GPUs themselves. A cluster with inferior networking is like a supercar on a dirt road — the engine's potential is wasted.
NVIDIA has historically dominated this market through InfiniBand, a high-performance networking standard it acquired with its 2020 purchase of Mellanox Technologies. InfiniBand delivers superior latency and bandwidth for tightly coupled workloads, which is why it powers the majority of existing large-scale training clusters.
But the market is shifting. Ethernet — the open networking standard that Broadcom dominates — is closing the performance gap while offering significant advantages in cost, vendor flexibility, and operational simplicity. Broadcom's Tomahawk 5 switch ASIC delivers 51.2 terabits per second of switching capacity, sufficient for the largest AI clusters. Its Jericho3-AI chip enables ultra-low-latency fabric architectures optimized for AI training. And because Ethernet is an open standard, hyperscalers can avoid vendor lock-in with NVIDIA's proprietary InfiniBand ecosystem.
The Ultra Ethernet Consortium — backed by Microsoft, Meta, AMD, Broadcom, and Intel — is explicitly building Ethernet specifications for AI workloads, aiming to match InfiniBand's performance while maintaining Ethernet's cost and flexibility advantages. Meta has publicly committed to Ethernet-based AI networking for its next-generation data centers. Microsoft is deploying Ethernet alongside InfiniBand. Google has always used Ethernet internally.
We estimate Broadcom's AI networking revenue at $4–5 billion in fiscal 2025, growing at 40%+ annually. By fiscal 2027, this could reach $8–10 billion as Ethernet captures a majority share of new AI data center builds. Combined with XPU revenue, Broadcom's total AI-related semiconductor revenue is on a path to $25–30 billion by fiscal 2027 — a figure that would represent 55–60% of total company revenue. For a broader perspective on how the custom chip versus GPU dynamic is evolving, see our deep dive on custom AI chips versus NVIDIA GPUs.
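The networking projection above is straightforward compound-growth arithmetic, which is worth verifying: a $4–5 billion base compounding at roughly 40% annually for two fiscal years lands almost exactly on the stated $8–10 billion range. A quick sketch:

```python
def compound(base: float, growth: float, years: int) -> float:
    """Project a revenue base forward at a constant annual growth rate."""
    return base * (1 + growth) ** years

# Networking revenue of $4-5B in FY2025, compounding at 40% through FY2027.
low = compound(4.0, 0.40, 2)   # ~7.84 ($B)
high = compound(5.0, 0.40, 2)  # ~9.8  ($B)
```

The exercise also shows how sensitive these out-year figures are to the growth assumption: shaving the rate to 30% drops the FY2027 range to roughly $6.8–8.5 billion.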
VMware: The Cash Cow the Market Undervalues
Broadcom closed its $69 billion acquisition of VMware in November 2023, making it one of the largest technology acquisitions in history. The market's initial reaction was skepticism — VMware was a mature infrastructure software company with decelerating growth, and the acquisition added significant debt to Broadcom's balance sheet. Two years later, the integration has exceeded expectations.
Hock Tan's playbook is by now well understood: acquire large software companies, eliminate cost redundancies, convert perpetual licenses to subscriptions, and focus R&D on the highest-margin product lines. With VMware, the subscription conversion is tracking ahead of plan. VMware's annualized booking value (ABV) for subscription licenses exceeded $5 billion in the most recent quarter, up from near zero pre-acquisition. Operating margins for the VMware segment have expanded from the mid-20s to mid-30s percent, with a path to 65%+ as the subscription transition completes — consistent with Broadcom's other software businesses (CA Technologies, Symantec).
The strategic relevance for the AI thesis: VMware's VCF (VMware Cloud Foundation) is the virtualization layer that many enterprises use to manage their on-premise and hybrid cloud infrastructure. As enterprises deploy AI workloads on-premise (for data security and latency reasons), VCF becomes the management plane for those deployments. This creates a synergy between Broadcom's AI semiconductor business and its software business that no competitor can match — the chips and the software that manages them come from the same company.
Broadcom vs. AI Semiconductor Peers
| Metric | Broadcom (AVGO) | NVIDIA (NVDA) | AMD (AMD) | Marvell (MRVL) |
|---|---|---|---|---|
| Market Cap | ~$1T | ~$3.2T | ~$200B | ~$80B |
| AI Revenue (Est. FY2025) | $16-20B | $105-115B | $7-9B | $2-3B |
| AI Revenue Growth | 40-50% | 50-60% | 80-100% | 60-80% |
| AI as % of Semi Revenue | ~45% | ~80% | ~30% | ~35% |
| Forward P/E | ~30x | ~30x | ~38x | ~45x |
| Gross Margin | 75% | 74% | 52% | 48% |
| Software Business | VMware, CA, Symantec | CUDA, DGX Cloud | ROCm (limited) | None material |
| Primary AI Moat | Custom design IP, networking | CUDA ecosystem | Price/performance | Custom chip design |
Risk Factors: The Fourth Customer and Concentration Concerns
Customer Concentration in XPU
Three customers represent the entirety of Broadcom's XPU revenue. Google alone may account for 50%+ of XPU revenue given the scale and maturity of its TPU program. This concentration creates outsized risk from any single customer's decision to insource chip design, shift to a competitor (Marvell is the primary alternative), or reduce AI capex spending. The loss of even one XPU customer would remove $4–6 billion in revenue and trigger a significant re-rating of the stock.
The much-anticipated “fourth customer” — rumored to be Apple or Microsoft — would meaningfully diversify this risk and could add $5–8 billion in incremental revenue by fiscal 2027–2028. Hock Tan has been cagey about confirming additional customers, stating only that “we are in active engagement with several potential XPU partners.” A formal announcement of a fourth customer would be a significant positive catalyst.
TSMC Dependency and Geopolitical Risk
All of Broadcom's advanced chips are manufactured at TSMC, primarily at the 5nm and 3nm nodes. Any disruption to TSMC — whether from a Taiwan Strait conflict, earthquake, or capacity allocation dispute — would halt Broadcom's AI chip production entirely. This is not a Broadcom-specific risk (NVIDIA and AMD face the same exposure), but it is worth noting that Broadcom has no viable manufacturing alternative for its most advanced designs. Intel Foundry Services and Samsung are years behind TSMC on leading-edge process technology.
VMware Integration Execution
While the VMware integration is tracking well by most metrics, Broadcom's aggressive license conversion strategy has alienated some customers. Reports of 2–5x price increases on VMware renewals have pushed some enterprises to evaluate alternatives like Nutanix, Red Hat OpenShift, and Proxmox. If customer attrition exceeds expectations, it could offset the margin expansion that the subscription conversion is designed to deliver. We estimate VMware customer churn at 10–15% by count but less than 5% by revenue, as larger customers with more complex environments are the least likely to migrate.
Investment Thesis: Why We Prefer Broadcom to NVIDIA at These Levels
This is our contrarian take, and we recognize it will be unpopular. We believe Broadcom offers better risk-adjusted returns than NVIDIA for new money entering the AI semiconductor trade today. Here is the reasoning.
First, diversification. Broadcom generates roughly 55% of revenue from non-AI businesses (legacy semiconductor + VMware software), which provides downside protection if AI spending decelerates. NVIDIA generates 80%+ from data center, making it more exposed to any AI capex slowdown. Second, the VMware software business produces highly predictable recurring revenue at 65%+ operating margins, creating a valuation floor that NVIDIA lacks. Third, Broadcom's XPU customers (Google, Meta, ByteDance) are locked into multi-year design partnerships that provide forward revenue visibility NVIDIA does not have — NVIDIA sells GPUs on shorter-cycle procurement.
Fourth, and most importantly, Broadcom trades at a similar forward P/E to NVIDIA (~30x) but with a fundamentally different risk profile. NVIDIA's 30x requires the AI capex boom to continue unabated and for custom silicon not to erode its market share. Broadcom's 30x requires more modest AI growth continuation plus the VMware margin expansion that is already in motion. The downside scenario for Broadcom (AI spending slows but VMware grows steadily) is far less severe than the downside for NVIDIA (AI spending slows, period).
We would add to Broadcom positions on any pullback to the $200–220 range (25–27x forward earnings), which would likely coincide with a broader semiconductor sector correction. For additional context on how the AI semiconductor value chain is evolving, see our analysis of where smart money is deploying in the AI capex boom.
Frequently Asked Questions
What is Broadcom's XPU business and why does it matter?
XPU refers to Broadcom's custom AI accelerator chip business, where it designs application-specific integrated circuits (ASICs) for hyperscaler customers who want alternatives to NVIDIA's general-purpose GPUs. Broadcom's three confirmed XPU customers are Google (which uses Broadcom-designed TPUs), Meta, and ByteDance. These custom chips are optimized for each customer's specific AI workloads — training and inference for large language models, recommendation systems, and video processing. The XPU business matters because it represents Broadcom's most direct exposure to the AI compute buildout, with estimated revenue of $12–15 billion in fiscal 2025 and a path to $25 billion+ by fiscal 2027. CEO Hock Tan has described the serviceable addressable market for XPUs at $60–90 billion by 2027.
How does Broadcom compete with NVIDIA in AI?
Broadcom does not compete head-to-head with NVIDIA for the same customers. NVIDIA sells general-purpose GPUs that run any AI workload via the CUDA software ecosystem. Broadcom designs custom chips optimized for a single customer's specific workload. The tradeoff: custom chips deliver 30–50% better performance per watt and lower total cost of ownership for a specific task, but they lack the flexibility to run arbitrary workloads. Broadcom competes for the portion of hyperscaler AI spending that goes to internal, well-defined workloads (like Google's search ranking or Meta's ad targeting), while NVIDIA dominates the external-facing cloud compute market where customers need flexibility. They are complementary more than competitive — most hyperscalers buy both NVIDIA GPUs and Broadcom custom ASICs.
What is Broadcom's role in AI networking?
Broadcom is the dominant supplier of networking silicon for AI data centers. Its Tomahawk 5 and Jericho3-AI switch chips, along with custom network interface cards, are essential for connecting thousands of GPUs or TPUs in large AI training clusters. AI training requires massive data transfer between accelerators — a 10,000-GPU cluster generates petabytes of inter-node traffic. Broadcom's Ethernet-based networking solutions compete with NVIDIA's InfiniBand (acquired through Mellanox) for this market. Broadcom's advantage is that Ethernet is an open standard with broader industry support, lower cost, and more vendor flexibility. AI networking revenue is estimated at $4–5 billion annually and growing 40%+ year-over-year.
Is Broadcom stock fairly valued at 30x forward earnings?
Broadcom trades at approximately 30x forward earnings as of February 2026, a significant premium to its historical average of 15–18x but a discount to pure-play AI infrastructure companies like NVIDIA (28–35x) and AMD (35–40x). The premium reflects the AI-driven transformation of its revenue mix — AI-related revenue (XPU + networking) has grown from roughly 15% of semiconductor revenue in fiscal 2023 to an estimated 45%+ in fiscal 2025. If AI revenue continues to grow at 40–50% annually while legacy semiconductor and VMware revenue grows at 5–10%, the blended growth profile justifies a 25–30x multiple. The risk is that AI revenue growth decelerates or that VMware integration challenges weigh on margins.
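The blended-growth claim is easy to check with a revenue-weighted average. The sketch below assumes the split described in this article (roughly 45% AI revenue, 55% legacy semiconductor plus VMware); under those assumptions, blended growth lands in the low-to-high 20s percent, which is the arithmetic behind the 25–30x multiple argument:

```python
def blended_growth(segments: list[tuple[float, float]]) -> float:
    """Revenue-weighted blended growth rate from (revenue_share, growth_rate) pairs.
    Shares should sum to 1.0."""
    return sum(share * growth for share, growth in segments)

# Assumed mix: ~45% AI revenue growing 40-50%, ~55% non-AI growing 5-10%.
low = blended_growth([(0.45, 0.40), (0.55, 0.05)])   # ~0.21, i.e. ~21% blended growth
high = blended_growth([(0.45, 0.50), (0.55, 0.10)])  # ~0.28, i.e. ~28% blended growth
```

Note that as the AI share of revenue rises toward the 55–60% projected for fiscal 2027, the same segment growth rates mechanically pull the blended rate higher.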
What is the risk that hyperscalers build custom chips in-house instead of using Broadcom?
This is a real but manageable risk. Google has its own chip design team (which collaborates with Broadcom on TPU physical design, packaging, and the TSMC manufacturing relationship), and Amazon has Annapurna Labs for Trainium and Graviton chips. However, building a custom AI accelerator from scratch requires hundreds of specialized engineers, 2–3 year design cycles, and billions in NRE (non-recurring engineering) costs. Broadcom's value proposition is that it provides the design expertise, IP libraries, and manufacturing relationships (particularly with TSMC) that allow hyperscalers to get custom chips to market faster and at lower risk than building everything internally. As long as the pace of AI hardware iteration remains rapid — requiring new chip generations every 18–24 months — most hyperscalers will continue to rely on Broadcom's design capabilities rather than fully insourcing.
Track AI Semiconductor Revenue Across the Value Chain
Broadcom's AI thesis depends on XPU design wins, networking share gains, and VMware margin expansion — metrics buried in earnings calls, 10-Q filings, and management commentary. DataToBrief automatically extracts and tracks these signals across Broadcom, NVIDIA, AMD, Marvell, and 30+ adjacent semiconductor names, giving you the data-driven clarity to position ahead of consensus.
This article is for informational purposes only and does not constitute investment advice. The opinions expressed are those of the authors and do not reflect the views of any affiliated organizations. Past performance is not indicative of future results. Always conduct your own research and consult a qualified financial advisor before making investment decisions.