A departure from the last post, mainly in source and structure. This was produced using Sonnet 4.5 after feeding it thousands of market signals, and is shared without amendment. It is structured around some of my prompts and the analysis they shaped as output.

Disclaimer: May be accurate, may be bollocks; there is certainly a high dose of cognitive dissonance in using a foundation model to synthesise this... but maybe not, as there's no ROI in this work, just a modicum of sanity preservation. The market doesn't tend to price such uses well. Sources are all there for you to examine. Better out than in, as they say. My brain is now a less messy place.


The Power Constraint Thesis

"Sam Altman's quote about a GPU glut and people getting stung for long power contracts seems to link to this. One of the threads I pulled in this complex picture was multiple Mag 7 firms saying power is the new pinch point - another big CapEx requirement."

The evidence for this observation is now overwhelming. In early November 2025, both Sam Altman and Satya Nadella made remarkably candid admissions about the power constraint bottleneck. Nadella stated on the BG2 podcast: "The biggest issue we are now having is not a compute glut, but it's a power... If you can't do that, you may actually have a bunch of chips sitting in inventory that I can't plug in. In fact, that is my problem today."

Altman went further, predicting a catastrophic scenario: "If a very cheap form of energy comes online soon at mass scale, then a lot of people are going to be extremely burned with existing contracts they've signed."

This isn't theoretical. Goldman Sachs analysis shows that eight of thirteen US regional power markets are already at or below critical spare capacity levels. By 2030, with faster data centre growth, multiple regions, including ERCOT (Texas), PJM (Mid-Atlantic), and SE Georgia, will hit critical reliability thresholds, as will the US Lower 48 in aggregate.

Yet the CapEx continues. The fundamental question becomes acute: can returns surface quickly enough to stave off investor liquidity problems and an erosion of financing appetite?

The Revenue Gap

The mathematics are stark. Bain estimates that AI data centres will need to generate £1.5 trillion ($2 trillion) in annual revenue by 2030 to justify their costs. Current AI revenues globally? £15 billion ($20 billion). That requires 100-fold growth in five years.

J.P. Morgan's analysis is even more sobering: achieving merely a 10% return on AI investments through 2030 would require approximately £500 billion ($650 billion) of annual revenue in perpetuity. That's equivalent to £26.60 ($34.72) per month from every current iPhone user, or £138 ($180) from every Netflix subscriber—forever.
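
As a sanity check on those per-user figures, here is a minimal back-of-envelope sketch. The installed-base assumptions (roughly 1.5 billion active iPhones and 300 million Netflix subscribers) are mine, not J.P. Morgan's inputs, so treat the output as illustrative rather than a reproduction of their model.

```python
# Back-of-envelope check on the J.P. Morgan revenue requirement quoted above.
# The installed-base figures are assumptions for illustration, not the bank's inputs.
required_annual_revenue = 650e9   # ~$650bn per year for a 10% return
iphone_users = 1.5e9              # assumed active iPhone installed base
netflix_subscribers = 300e6       # assumed global Netflix subscribers

per_iphone_monthly = required_annual_revenue / (iphone_users * 12)
per_netflix_monthly = required_annual_revenue / (netflix_subscribers * 12)

print(f"Per iPhone user:        ${per_iphone_monthly:,.2f} per month")    # ~$36
print(f"Per Netflix subscriber: ${per_netflix_monthly:,.2f} per month")   # ~$181
```

On those assumed user bases the charges land at roughly $36 and $181 per month respectively, which is close to the quoted figures and suggests the Netflix number is a monthly charge, in perpetuity.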


The China Question: Innovation or Imitation?

"Is there a credible comparison to the compute and power capacity in China that seems to be driving a low compute overhead innovation wave - Deepseek as the core example... The Chinese have always been an expert 2nd follower - unless that is a poor statement."

The characterisation of China as "expert 2nd follower" was historically accurate, but appears increasingly obsolete. DeepSeek's R1 model, released in January 2025, achieved performance comparable to OpenAI's o1 reasoning model whilst being built for £4.3 million ($5.58 million), compared to OpenAI's estimated £4.6 billion ($6+ billion) investment.

The cost efficiency extends to operations: DeepSeek R1 costs £130 ($169.80) to process 100 million tokens monthly, compared to OpenAI o1's £3,565 ($4,650), a 96% cost reduction.
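
A quick sketch of how those ratios fall out of the figures quoted above; the prices are the ones cited, and only the arithmetic is added here.

```python
# Ratio checks on the DeepSeek vs OpenAI figures quoted above (USD).
deepseek_training, openai_training = 5.58e6, 6e9           # training cost
deepseek_100m_tokens, openai_100m_tokens = 169.80, 4650.0  # 100m tokens per month

print(f"Training cost ratio:   ~{openai_training / deepseek_training:,.0f}x")          # ~1,075x
print(f"Inference cost saving: {1 - deepseek_100m_tokens / openai_100m_tokens:.1%}")   # ~96.3%
```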

The Compute Capacity Reality

Despite this efficiency breakthrough, raw capacity still favours the West. As of May 2025, the United States contains approximately 75% of global GPU cluster performance, with China in second place at 15%. However, China's strategic advantage lies elsewhere.

China has 3,200 gigawatts of installed electricity generation capacity compared to 1,293 gigawatts in the US. More critically, China maintains an 80-100% electricity reserve margin nationwide, meaning it has roughly double the capacity it needs at peak. US regional grids typically operate with just a 15% reserve margin.
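
Reserve margin is simply spare capacity over peak demand. A small sketch of what the quoted margins imply for headroom; the peak-demand figures are back-solved from the capacity and margin numbers above, not sourced independently.

```python
# Reserve margin = (installed capacity - peak demand) / peak demand.
# Peak demand below is back-solved from the capacity and margin figures quoted above.
def implied_peak_gw(capacity_gw: float, reserve_margin: float) -> float:
    return capacity_gw / (1 + reserve_margin)

for label, capacity, margin in [
    ("China (80% margin)", 3200, 0.80),
    ("China (100% margin)", 3200, 1.00),
    ("US (15% margin)", 1293, 0.15),
]:
    peak = implied_peak_gw(capacity, margin)
    print(f"{label:<20} peak ~{peak:,.0f} GW, headroom ~{capacity - peak:,.0f} GW")
```

On those implied figures, China's spare headroom alone (roughly 1,400-1,600 GW) is larger than the entire installed US fleet.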

As one AI researcher noted after touring Chinese AI hubs: "Everywhere we went, people treated energy availability as a given." In China, data centres are seen as a convenient way to "soak up oversupply." In the US, they're a grid stability threat.

The Distillation Controversy

The efficiency gains come with controversy. OpenAI claims it found evidence of "distillation" carried out through accounts believed to belong to DeepSeek using OpenAI's API. Research suggests that DeepSeek-V3 and Qwen-Max demonstrated higher distillation levels, aligning closely with GPT-4o.

However, DeepSeek's technical paper states that knowledge was distilled from R1 to Llama and Qwen to enhance their reasoning capabilities—the opposite direction from what OpenAI alleges. The truth likely involves multiple pathways, but the cost reduction and architectural innovations appear legitimate regardless of training data provenance.


The Silicon Alternative: Gemini 3 and the NVIDIA Bypass

Whilst DeepSeek demonstrated efficiency could bypass compute requirements, another development—released just hours before NVIDIA's earnings—struck at the foundation itself: Google proved NVIDIA chips are optional for frontier AI.

The Technical Achievement

On 19th November 2025, Google released Gemini 3.0, achieving the top score of 73 on Artificial Analysis benchmarks and an Elo rating of 1501 on LMArena—the first LLM to cross the 1500 threshold. On GPQA Diamond, it reached 91.9%; on MathArena Apex, 23.4% versus 0.5% for Gemini 2.5 Pro.

The achievement itself was extraordinary. The method was revolutionary: Gemini 3.0 was "trained entirely on TPUs and not Nvidia's GPUs," as CNBC's Deirdre Bosa reported, marking "a first at this level."

Google's head of Cloud TPU stated: "there is A LOT of great ML happening on TPUs with zero dependence on GPUs or CUDA... Anthropic, Midjourney, Salesforce, and many others are already building their stack on TPUs."

For years, TPUs were dismissed as "a powerful but niche tool that only Google could really use." Gemini 3 proved "these chips are now capable of training and serving a true frontier system at global scale."

The Adoption Cascade

The external validation came swiftly. Anthropic committed to "tens of billions of dollars" for "as much as a million" TPUs. Apple, OpenAI, and Meta reportedly began "using or testing TPU infrastructure in some capacity."

Midjourney, producing "the most-impressive image model," reportedly uses TPUs. The cost advantage: TPUs "cost 70% less than GPUs and are also cheaper (and likely superior) for inference."

Most tellingly: Apple surveyed the field—testing Gemini, ChatGPT, and Claude—then chose Google's TPU-trained Gemini to overhaul Siri. When your biggest rival in smartphones selects your AI infrastructure, the technical validation is complete.

The Economics of Independence

The insight from industry observers: "Google shows if you go back in time a decade and start working on custom silicon and eat billions in losses before seeing any real payoff and attract/retain top talent across multiple highly competitive disciplines: you too can stop paying NVIDIA."

But crucially, as one analyst noted: "Google has the muscle to develop and maintain its own alternative to CUDA. 99% of other companies don't."

The hyperscaler divergence creates a two-tier market:

Tier One: Self-Sufficient

  • Google: TPU infrastructure operational
  • Amazon: Trainium and Inferentia chips in production
  • Microsoft: "Athena"/MAIA accelerators in development
  • Apple: Custom silicon expertise from decades of iPhone development

Tier Two: NVIDIA-Dependent

  • Startups lacking capital for custom silicon
  • Mid-tier enterprises without hardware expertise
  • Traditional corporations adopting AI

The problem for NVIDIA: Tier One represents its largest customers. "NVIDIA's biggest customers are hyperscalers which can afford to even if it takes time. The market is forward looking."

Impact on the $500bn Order Book

NVIDIA's claimed $500bn order visibility through 2026 faces multiple compression vectors:

  1. Google self-sufficient – No NVIDIA chips needed for Gemini training/inference
  2. Anthropic switching – Multi-billion TPU commitment reduces GPU dependence
  3. Amazon deploying Trainium – AWS offering non-NVIDIA alternatives
  4. Microsoft building Athena – Reducing reliance on external chips
  5. Apple choosing Gemini – Validates TPU infrastructure for external customers

One analyst's timeline assessment, "I'd expect a Chinese tech company to achieve parity with NVDA/TPU within a decade", suggests the custom silicon movement extends beyond Western hyperscalers.

The Moat Erosion

NVIDIA's competitive advantages historically rested on two pillars:

Pillar One: CUDA Software Lock-In

The TPU head's quote undermines this: "A LOT of great ML happening on TPUs with zero dependence on GPUs or CUDA." Anthropic, Midjourney, and Salesforce "building their stack on TPUs" demonstrates CUDA is bypassable at scale.

Pillar Two: Performance Superiority

Gemini 3 didn't just match NVIDIA-trained models; it beat them. The performance data: "TPU v5p, Google's current production chip, is roughly 2.8× faster than NVIDIA's flagship H100 for training large models."

When the world's top-performing LLM trains on non-NVIDIA hardware, the "only NVIDIA can deliver peak performance" narrative collapses.

The Timing Catastrophe

Gemini 3's release on 19th November 2025, just hours before NVIDIA reported earnings that evening, created a devastating context:

Wednesday 19th November:

  • Gemini 3 tops all benchmarks
  • Trained entirely on TPUs
  • Proves custom silicon works at frontier scale
  • Anthropic announces multi-billion TPU commitment

Wednesday Evening (Post-Market):

  • NVIDIA reports record $57bn revenue
  • Huang rebuts AI bubble concerns
  • Confirms $500bn order book
  • Stock initially surges 5%

Thursday 20th November:

  • NVIDIA shares reverse, close down 3.15%
  • $365bn market cap erased
  • Broader market sells off

The market connected the dots: if Google doesn't need NVIDIA, and Anthropic is switching to TPUs, and the top model proves custom silicon works, why is NVIDIA's order book secure?

The Convergence: Efficiency + Hardware Alternatives

DeepSeek and Gemini 3 create a pincer movement:

DeepSeek Path: Don't need massive compute

  • £4.3m ($5.58m) training cost vs £4.6bn ($6bn+)
  • 96% operational cost reduction
  • Proves efficiency over brute force

Gemini 3 Path: Don't need NVIDIA chips

  • Custom silicon reaches frontier performance
  • 70% cost advantage on TPUs
  • Eliminates NVIDIA's margin capture

Combined message: You need neither massive compute nor NVIDIA hardware to compete. The infrastructure spending justified by "AI necessity" increasingly resembles sunk cost.

As one analyst summarised: "Google has capital. Google has training data. Google has proprietary TPUs. And now Google has proven these TPUs can reach the frontier. NVIDIA sells shovels. Google is building the mine. If the AI race is a marathon, Google has the infrastructure advantage."

The 99% Problem

The critical nuance: custom silicon benefits only hyperscalers. The reality check: "Google started the TPU project over a decade ago. It takes an absurd amount of time, talent, and money to recreate this."

For 99% of companies, NVIDIA remains necessary. But that 99% doesn't represent 99% of NVIDIA's revenue. The hyperscalers—now developing alternatives—are the bulk of demand.

When your largest customers prove they don't need you, pricing power collapses. When your moat (CUDA) is bypassed, margins compress. When alternatives work at frontier scale, order visibility becomes questionable.

Gemini 3 didn't create a new problem for NVIDIA. It made the existing problem visible: the company's biggest customers are becoming its competitors, and they're proving they can win without NVIDIA's chips.


Three Mile Island: The Infrastructure Scarcity Play

"Noting there are reported moves to reopen 3 Mile Island, not necessarily for nuclear, but because sites with grid capacity to take increased loads before feed in to distribution grid portions are scarce and suffer from national infrastructure under investment - is that a rational thesis?"

Absolutely rational and likely the core economic driver.

Microsoft's £12.3 billion ($16 billion) deal with Constellation Energy to restart Three Mile Island Unit 1 isn't primarily about nuclear power—it's about grid connection infrastructure. The plant already has transmission infrastructure designed for 837 megawatts that took decades to build.

The alternative? In Virginia, connecting a data centre to the grid now takes seven years, up from around four years. Delays are largely due to power complexity and capacity constraints, permitting, and supply-chain issues.

PJM Interconnection projects that peak load will grow by 32 gigawatts from 2025 to 2030, with data centres comprising 30 gigawatts of that growth. The independent market monitor found that current tight capacity conditions are "almost entirely the result of large data centre load additions."

Most new greenfield data centres require an onsite substation rated between 50 MVA and 1 GW from day one. Legacy data centres served by local distribution networks are requesting load growth in the 5x, 10x, and 20x range. Power availability has emerged as a limiting factor, extending data centre construction timelines by 24-72 months.

The nuclear plant is almost incidental; it's the grid connection infrastructure that's actually scarce. This validates a critical observation: US underinvestment in national infrastructure creates artificial scarcity that China doesn't face.


Private Credit: The Canary in the Coal Mine

The private credit market is showing stress signals that presage broader problems. Multi-Color, a labelling company owned by private equity firm Clayton Dubilier & Rice, has more than £3.8 billion ($5 billion) in debt coming due over the next few years. Its £530 million of 10.5% unsecured notes due in 2027 are now trading at around 63 cents on the pound, down from 91 cents in late August.

Cornerstone Building's 9.5% bond due in 2029 currently trades at about 80 cents, down from nearly 98 cents on 9th September. Cloudera's £1.7 billion ($2.19 billion) term loan due in 2028 was quoted at about 95 cents, down from 98.5 cents at the start of October.
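
To see why 63 cents is a distress signal rather than a modest markdown, here is a rough yield-to-maturity sketch for the Multi-Color notes mentioned above. It assumes annual coupons and roughly two years to maturity, which are simplifications (the actual notes pay semi-annually and the exact maturity date isn't given here), so the output is indicative only.

```python
# Rough yield-to-maturity for a bond priced at 63 with a 10.5% coupon.
# Simplifications: annual coupons and exactly two years to maturity.
def bond_price(ytm: float, coupon: float = 10.5, face: float = 100.0, years: int = 2) -> float:
    flows = [coupon] * (years - 1) + [coupon + face]
    return sum(cf / (1 + ytm) ** t for t, cf in enumerate(flows, start=1))

# Bisection: price falls as yield rises, so search for the yield that matches 63.
low, high = 0.0, 2.0
for _ in range(100):
    mid = (low + high) / 2
    if bond_price(mid) > 63:
        low = mid
    else:
        high = mid

print(f"Implied yield to maturity: ~{mid:.0%}")  # roughly 40%
```

A yield north of 40% on 2027 paper is the market pricing in a serious chance of restructuring, not a routine markdown.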

This matters because private credit funding of AI is running at around £38 billion ($50 billion) per quarter at minimum. The £1.3 trillion ($1.7 trillion) private credit market has no Fed emergency facility: unlike the September 2019 repo crisis, where the Fed could intervene, private credit is illiquid and opaque.

Meta raised £23 billion ($30 billion) in October, the largest corporate bond issuance in more than two years. Alphabet followed with a £19 billion ($25 billion) debt issue, whilst Oracle issued about £14 billion ($18 billion) at the end of the third quarter.

Man Group warned that a "glut" of lower quality AI names may prove "too much for markets to stomach" amid the rush to secure financing. Oracle's 5-year credit default swap spreads widened 18 basis points to 105 basis points, compared with its yearly low of 33 basis points.

Blue Owl: The AI-Private Credit Nexus

"Blue Owl blocking redemptions, then proposing a merger that would mean 20% value drop for clients, then calling the merger off - quite the week."

This observation from 23rd November 2025 identified the transmission mechanism between the AI bubble and private credit crisis as it was happening in real-time. Blue Owl Capital, managing $273 billion in assets, represents the direct link between AI infrastructure financing and institutional investor exposure.

The AI Exposure

Blue Owl's concentration in AI infrastructure is extraordinary:

  • Meta Hyperion (Louisiana): $27-30bn (£20-23bn); Meta 20%, Blue Owl 80%; SPV structure
  • Crusoe/Abilene (Texas): $15bn (£11.4bn); joint venture; active
  • OpenAI Stargate (New Mexico): $3bn (£2.3bn); Blue Owl investment; active
  • Oracle/OpenAI (Texas): $14bn (£10.7bn); credit package; active
  • Total AI exposure: ~$59bn (£45bn) across these structures, ~22% of AUM

Twenty-two per cent of Blue Owl's assets under management concentrated in a single thesis—that AI data centres will generate sufficient revenue to service 24-year debt on infrastructure that may become obsolete within 3-5 years.
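
A minimal tally of the exposure list above against the $273bn AUM figure, using the low end of the Hyperion range to match the ~$59bn total quoted; only the arithmetic is added here.

```python
# Tally Blue Owl's quoted AI-infrastructure exposures against its $273bn AUM.
exposures_bn = {
    "Meta Hyperion (Louisiana)": 27.0,     # low end of the quoted $27-30bn range
    "Crusoe/Abilene (Texas)": 15.0,
    "OpenAI Stargate (New Mexico)": 3.0,
    "Oracle/OpenAI (Texas)": 14.0,
}
aum_bn = 273.0

total_bn = sum(exposures_bn.values())
print(f"Total AI exposure: ~${total_bn:.0f}bn, ~{total_bn / aum_bn:.0%} of AUM")  # ~$59bn, ~22%
```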

The SPV Structure: Off-Balance-Sheet Leverage

The Meta Hyperion deal exemplifies the financial engineering:

  • Blue Owl creates Special Purpose Vehicle (SPV)
  • SPV owns 80% of data centre; Meta retains 20% equity and operational control
  • $27bn debt issued by SPV (not appearing on Meta's balance sheet)
  • PIMCO provides £13.7bn ($18bn) as anchor lender, BlackRock £2.3bn ($3bn)
  • Debt rated A+ (backed by Meta guarantee)
  • Maturity: 2049 (24 years)
  • Yield: 225 basis points over Treasuries

The risk transfer is complete: Meta gets the data centre without balance sheet debt. Blue Owl, insurance companies, and pension funds absorb the risk. If AI demand disappoints, they hold stranded assets whilst Meta can simply walk away from its 20% equity stake.

Who holds this debt? Insurance companies seeking "long-duration yield" and pension funds seeking "stable infrastructure returns"—the same institutions that cannot now redeem from Blue Owl's gated funds.
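
To make the servicing burden concrete, here is a rough sketch of the annual interest bill on the Hyperion SPV debt described above. The long-dated Treasury yield is an assumption for illustration (the deal's pricing isn't stated here beyond the 225bps spread), so the figures are indicative only.

```python
# Indicative debt-service sketch for the $27bn Hyperion SPV debt (2049 maturity).
# The Treasury yield is an assumed input, not a sourced figure.
principal = 27e9
assumed_treasury_yield = 0.045   # assumption: ~4.5% long-dated Treasury yield
spread = 0.0225                  # 225 basis points over Treasuries, as quoted
term_years = 24

coupon = assumed_treasury_yield + spread
annual_interest = principal * coupon
print(f"Implied coupon:       ~{coupon:.2%}")                    # ~6.75%
print(f"Annual interest bill: ~${annual_interest / 1e9:.1f}bn")  # ~$1.8bn
print(f"Interest over term:   ~${annual_interest * term_years / 1e9:.0f}bn before any principal")
```

Under those assumptions, roughly $2 billion a year of data centre cash flow is needed just to keep this one SPV current, against demand and hardware assumptions that may not survive a fraction of the 24-year term.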

The Redemption Crisis Timeline

May & August 2025: Early Warning

  • Tender offers oversubscribed
  • Blue Owl met requests using 7% allowance (5% standard + 2% discretionary)
  • Signal: Sophisticated investors wanted out

October 2025: Portfolio Stress Visible

  • Tricolor bankruptcy (subprime auto lender in portfolio)
  • First Brands bankruptcy (automotive parts distributor)
  • Morningstar assessment: Bankruptcies "fueling speculation about inflated valuations and liquidity in private credit"
  • Market begins questioning Blue Owl's marks

5th November 2025: Desperation Move

  • Blue Owl announces merger of OBDC II ($1.7bn/£1.3bn) into OBDC ($17.1bn/£13bn)
  • OBDC trading at 20% discount to NAV
  • Would force private fund investors to accept shares worth 80 cents on the pound
  • Morningstar: "Deal's inherent unfairness"
  • Blue Owl stock falls 11% over the following 8 days

16th November 2025: Redemption Gate

  • Financial Times reveals Blue Owl blocked redemptions
  • Investors trapped until Q1 2026 merger completion
  • Stock falls additional 6%

17th-20th November 2025: Class Action Cascade

Three separate law firms filed securities fraud suits:

  • Law Offices of Howard G. Smith
  • Levi & Korsinsky
  • Law Offices of Frank R. Cruz

Allegations: Blocking redemptions whilst forcing 20% losses, failing to disclose liquidity problems that "could leave investors with large losses."

19th November 2025: Merger Cancelled

  • Boards announce merger termination: "did not see benefits outweighing volatility"
  • But redemptions remain frozen
  • Investors now gated with no exit path and no merger compensation

The Insurance and Pension Exposure Chain

The contagion mechanism operates through multiple channels:

Insurance Companies ($500-800bn exposure to private credit)

  • Hold Blue Owl funds (cannot redeem—gated)
  • Hold 24-year AI infrastructure debt (225bps over Treasuries)
  • Must meet ongoing claims obligations
  • When redemptions blocked, must sell liquid holdings (public equities, corporate bonds)
  • Creates forced selling cascade in public markets

Pension Funds (£230-307bn/$300-400bn exposure)

  • Direct investors in Blue Owl funds (gated)
  • Holders of AI infrastructure debt (long-duration)
  • Holders of AI stocks (Meta, NVIDIA, Oracle, Alphabet)
  • Triple exposure to AI thesis
  • For a typical £769bn ($1tn) pension fund:
    • 30% equities, with 20% of the portfolio in AI stocks = £154bn ($200bn) exposed
    • 15% private credit (Blue Owl et al) = £115bn ($150bn) gated
    • 10% infrastructure debt (AI data centres) = £77bn ($100bn) at risk
    • Total AI-linked exposure: £346bn ($450bn), roughly 45% of the portfolio (tallied in the sketch below)
    • Potential losses: 15-25% of total portfolio value
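
A minimal tally of that illustrative allocation, using the portfolio size and weights stated above; only the arithmetic is added here.

```python
# Tally the illustrative $1tn pension fund's AI-linked exposure described above.
portfolio = 1_000e9  # $1tn, illustrative
exposures = {
    "AI equities (20% of portfolio)": 0.20 * portfolio,
    "Private credit, gated (15%)":    0.15 * portfolio,
    "AI infrastructure debt (10%)":   0.10 * portfolio,
}

total = sum(exposures.values())
for name, amount in exposures.items():
    print(f"{name:<34} ${amount / 1e9:,.0f}bn")
print(f"{'Total AI-linked exposure':<34} ${total / 1e9:,.0f}bn ({total / portfolio:.0%} of portfolio)")
```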

The Fed Cannot Rescue This

Unlike the 2008 banking crisis:

  • No lender of last resort for private credit
  • No deposit insurance
  • No emergency facilities
  • No discount window access
  • $1.3 trillion (£1 trillion) market with zero backstop

CNBC's assessment connected the dots: "It also added to concerns about the state of the private credit industry among investors, especially the area that has started to heavily finance the artificial intelligence data centre build-out that many fear is overhyped."

The market is beginning to understand: Blue Owl gating is not an isolated liquidity event. It is the first domino in a $1.3 trillion private credit market with massive AI concentration and no regulatory backstop.


The AGI Paradox: Who Buys When Nobody Works?

"Hard to square these two messages - real economy vs AI AGI messaging against all this backdrop... where does consumer demand come from as the folk with private wealth that don't have to work are not going to be spending on survival essentials."

This is the catastrophic logical impossibility at the heart of the entire thesis.

To justify current AI valuations (Palantir at 449x earnings, for instance), you need AGI achieving near-term work replacement. But AGI destroying employment destroys the consumer base that would generate the revenue needed to service the £400 billion annual CapEx spend.

The wealthy won't pick up the slack; they're not buying 100x more toothpaste because they have money. Everyone must keep investing (survival depends on not being left behind), but success destroys the economic conditions needed to generate returns.

It's a game of musical chairs where the music stopping is baked into the win condition.


The Extractive Iteration

"AI rapidly finding it's way back there via Agentic browsers for retail and Microsoft's vision of users being agent managers where agents invisibly carry out tasks on machines with user permissions... shifting compute and therefore depreciation, capacity, and resilience burden towards the edge."

The pattern is depressingly familiar:

  • Streaming: Started as Netflix disrupting cable. Now costs more than cable with ads everywhere
  • Prime: Free fast shipping. Now costs £95/year + ads + "buy to remove ads"
  • Social Media: Connect with friends. Now algorithmic rage farming + pay for reach
  • Search: Find information. Now ads disguised as results

AI's trajectory:

  • Started: "Talk to your data! Amazing capabilities!"
  • Current: Agentic browsers shopping for you (with affiliate cuts), Microsoft Copilot requiring subscriptions
  • Next: Pay to remove AI "recommendations," pay to stop agents "optimising" your purchases

Amazon is suing Perplexity over its agentic shopping tool, not because it doesn't work, but because Amazon wants that extractive layer for itself.

The Edge Compute Shift

Microsoft's "agent manager" vision (Work IQ, announced at Ignite) and Google Recall weren't fumbles—they're the roadmap:

  • Depreciation burden shifts to consumer hardware
  • "AI PC" requirements = forced upgrade cycle
  • Your device does the processing, Microsoft extracts the value
  • Nvidia wins (edge AI chips), consumers pay (new hardware + electricity), Microsoft monetises (agent subscription)

Nadella's idle chips comment? Shifting inference to edge devices solves their power problem by making it your electricity bill.

The opportunity cost is staggering. £400 billion annually could rebuild transmission infrastructure, fund fusion research properly, or provide universal healthcare and education. Instead: building infrastructure to charge subscriptions for agents to show ads on devices you paid for using electricity you paid for.


Enterprise Capture: The Work IQ Surveillance Layer

Microsoft publicly admitted in November 2025 that major Windows 11 core features are broken. They shut down replies when the "Agentic OS" messaging received backlash. Yet they're pushing Work IQ, which:

  • Monitors projects, collaborators, communication style
  • Tracks your work rhythm (strategic work morning, admin evening)
  • "Joins threads" across reports, proposals, client requests
  • "Behaves consistently across everything you do"

Total workplace surveillance marketed as productivity enhancement.

The Operational Reality

Heavy on-device agent deployment creates:

  • API calls you can't monitor
  • Token costs you can't predict
  • Attack surface you can't see
  • Failures you can't debug
  • Cascading failure risks
  • No observability layer
  • Access controls still undefined

You're running applications you licensed, on infrastructure you own, with agents making API calls you don't control, to services billing by token, with failure modes you can't observe, whilst Microsoft monitors your work patterns.

Every layer of abstraction removes control whilst adding cost, fragility, and surveillance.


The Vertical Collapse: From Competition to Contagion

"Multiple other cross-fertilisation deals - it is building into an increasingly cash poor, barter rich, bond issuing monolith vs the intra firm fiefdoms with apparent protection of discrete verticals... now that seems to be reshaping to interdependence via chips, data centres, power, and linked market exposure."

The transformation is stark.

FROM: Discrete verticals with moats

  • Microsoft: OS + Azure
  • Apple: Devices + App Store
  • Amazon: Retail + AWS
  • Google: Search + GCP
  • Meta: Social Graph + Ad tech

TO: Shared bottlenecks creating interdependence

  • All need NVIDIA chips (oligopoly)
  • All need data centre capacity (constrained)
  • All need power (grid at critical spare)
  • All need capital (debt markets tightening)

Microsoft couldn't pipe Claude through AWS into Office Copilot two years ago; that would have handed Amazon access to the crown jewels. They're doing it now because they're desperate for inference capacity and model diversity.

OpenAI restructuring for IPO whilst Microsoft reduces exposure? That's not confidence, that's SoftBank demanding liquidity and Microsoft hedging. SoftBank exiting NVIDIA (where gains are) whilst retaining OpenAI (where costs are) is telling.

The Contagion Architecture

Previously: Google stumbles, Microsoft benefits.

Now: Oracle defaulting on £14 billion of bonds would halt data centre construction → Microsoft, Amazon, and Google capacity plans affected, grid operators adjust forecasts, power contracts reprice, everyone's CapEx assumptions break.

They've recreated 2008 banking-system dynamics: risk "diversified" by spreading it around, which in practice created contagion channels.

The Evidence:

  • Meta £23 billion bond issuance (October 2025)
  • Alphabet £19 billion (October 2025)
  • Oracle £14 billion (Q3 2025)
  • Private credit at £1.3 trillion (no Fed backstop)
  • Multi-Color debt at 63 cents
  • Cross-licensing deals proliferating
  • Shared power purchase agreements

This isn't competition, it's a cartel bound by shared infrastructure constraints and mutual default risk.


Physics Meets Hubris

"In among the deathly complexity and historical dependence on selling the future then building it, then monetising it... Huang leaked meeting saying we are at the centre of world equity market health. Physics and the maths plus market sentiment is the ultimate blocker. Not something these firms have often had to deal with so publicly."

The historical playbook - sell the future, build it, monetise it - worked for 40 years:

  • Software scales infinitely (marginal cost zero)
  • Moore's Law delivered hardware improvements
  • Cloud abstracted infrastructure away
  • Problems solved by throwing engineers at code

The OpenAI effect made trillion-pound CEOs feel slow. They got careless. Sundar, Satya, Zuck all started racing each other with startup-style announcements whilst running companies with £2 trillion market caps.

The inauguration theatre, tech CEOs front row at Trump's swearing-in, confirmed they think they're beyond constraints. Jensen Huang saying NVIDIA is "basically holding the planet together—and it's not untrue" isn't a warning; it's hubris.

In a leaked internal all-hands meeting on 21st November 2025, shortly after NVIDIA's Q3 earnings beat, Huang revealed the company's predicament with remarkable candour. According to audio reviewed by Business Insider, he told employees that market expectations had placed NVIDIA in a no-win situation: "If we delivered a bad quarter, it is evidence there's an AI bubble. If we delivered a great quarter, we are fueling the AI bubble."

He continued: "If we delivered a bad quarter, if we're off by just a hair, if it just looked a little bit creaky, the whole world would've fallen apart. There's no question about that, OK? You should've seen some of the memes that are on the internet. Have you guys seen some of them? We're basically holding the planet together—and it's not untrue."

Huang also reflected on NVIDIA's market value volatility with dark humour, joking about "the 'good old days' when the company had a $5 trillion market capitalisation" before noting: "Nobody in history has ever lost $500 billion in a few weeks. You've got to be worth a lot to lose $500 billion in a few weeks."

The Earnings Paradox: 19th-20th November 2025

NVIDIA's Q3 fiscal 2026 earnings, reported after the close on 19th November, seemingly vindicated the company's position. Revenue reached $57.01 billion (£43.5bn), up 62% year-over-year, beating analyst expectations of $54.92 billion (£41.9bn). Data centre revenue hit $51.2 billion (£39bn), up 66% annually. Q4 guidance of $65 billion (£49.6bn) exceeded the $61.66 billion (£47bn) consensus.

Huang used the earnings call to directly rebut bubble concerns: "There's been a lot of talk about an AI bubble. From our vantage point, we see something very different." He outlined three growth vectors: non-AI software increasingly running on GPUs, new AI applications, and "agentic AI" requiring additional computing power.

CFO Colette Kress confirmed the $500 billion order book through 2026 remained "on track" and would "probably grow," noting it didn't yet include recent deals with Anthropic or Saudi Arabia expansion. Melius Research analysts enthused: "More than just good numbers, we believe investors needed some hand-holding from Jensen which he provided in spades."

The Market's Verdict: Rejection

NVIDIA shares initially surged 5% in after-hours trading, then reversed sharply, closing down 3.15% the following session. The stock wiped out post-earnings gains, erasing approximately $365 billion in market capitalisation within 24 hours of the earnings beat.
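
The $365 billion figure only squares with a 3.15% close if the swing is measured from the after-hours high rather than the prior close. A quick back-solve makes that explicit; the implied market cap is derived from the quoted percentages, not sourced independently.

```python
# Back-solve the market cap implied by the quoted swing: +5% after hours, then a
# close 3.15% below the prior close, adding up to a ~$365bn peak-to-close round trip.
after_hours_gain = 0.05
next_day_close_change = -0.0315
swing_bn = 365.0

peak_to_close = after_hours_gain - next_day_close_change       # ~8.15% of the prior close
implied_market_cap_tn = swing_bn / peak_to_close / 1000
print(f"Peak-to-close swing: {peak_to_close:.2%}")
print(f"Implied market cap:  ~${implied_market_cap_tn:.1f}tn")  # ~$4.5tn
```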

NBC News captured the disconnect: "Losses were further compounded by ongoing concerns about AI — specifically, how much more profitable the companies buying chips like Nvidia's will be." The Nasdaq fell 2%, S&P 500 dropped 1.5%, with "ongoing skepticism about the longevity of the artificial intelligence boom."

The Analyst Divide

The bullish consensus remained intact on the surface. Evercore raised its target to $352, Melius to $320, HSBC to $320, with median targets around $250 (38% upside from $180 levels). Wall Street's response: overwhelming Buy ratings from 43 of 48 analysts.

But the cautious voices proved prescient. Ross Seymore at Deutsche Bank maintained Hold, raising his target only from $180 to $215 and noting the shares were "fairly valued." Seaport's sole "Underperform" rating suddenly looked less lonely, with analyst Doug Goldberg warning about "overreliance on OpenAI" and questioning "how long customers can keep spending billions on Nvidia chips."

The most telling observation came from 24/7 Wall Street: "The bearish argument that prevailed on Wall Street early this year is not entirely gone, though. While the AI rally may continue, it remains speculative, whereas the reasons for Nvidia stock's decline in the spring were genuine."

The Depreciation Defence

NVIDIA attempted to counter Burry's depreciation critique directly. Kress stated on the earnings call: "CUDA's compatibility in our massive installed base extend the life [of] NVIDIA systems well beyond their original estimated useful life. Thanks to CUDA, the A100 GPUs we shipped six years ago are still running at full utilization today, powered by vastly improved software stack."

Ben Reitzes at Melius Research noted: "Nvidia did a good job hinting at how depreciation schedules at their big customers are accurate as software updates prove to extend lives of older chips."

This defence inadvertently creates a problem: if A100 chips from six years ago remain at "full utilization," why would customers urgently need Blackwell at premium prices? The argument protecting existing revenue undermines the growth narrative.
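
Why the useful-life argument cuts both ways: a longer depreciation schedule flatters near-term earnings, but it also implies less urgency to replace the installed base. Below is a minimal straight-line depreciation sketch on a constant annual accelerator spend; the spend level and both schedules are chosen for illustration only, not drawn from any company's accounts.

```python
# Straight-line depreciation of a constant annual accelerator spend, comparing a
# 3-year and a 6-year useful-life assumption. The spend level is illustrative only.
annual_capex_bn = 100.0  # assumed annual GPU/accelerator spend, $bn

def depreciation_in_year(year: int, useful_life: int) -> float:
    # Each of the most recent `useful_life` purchase cohorts contributes one tranche.
    cohorts = min(year, useful_life)
    return cohorts * (annual_capex_bn / useful_life)

for life in (3, 6):
    charges = ", ".join(f"${depreciation_in_year(y, life):.0f}bn" for y in range(1, 8))
    print(f"{life}-year schedule: {charges}")
```

Stretching the schedule roughly halves the early-year charge, but at a constant spend rate both schedules converge on the full annual capex; the relief is a deferral, not an escape, which is why the "A100s still at full utilisation" defence sits uneasily next to the Blackwell upgrade pitch.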

The Circular Funding Elephant

CNN Business identified the core concern: "At the heart of the bubble concerns are the circular funding deals that Nvidia and other chipmakers have formed with AI companies, such as a $100 billion investment in OpenAI in exchange for chip purchases that was announced in September."

OpenAI CFO Sarah Friar's suggestion that government should backstop tech debt for AI infrastructure—quickly walked back—heightened anxiety. Yet NVIDIA continued making such deals post-earnings, with Kress confirming the order book excluded recent Anthropic and Saudi Arabia announcements.

The Institutional Assessment

FinancialContent's analysis captured the market psychology: "This perplexing market reaction underscores a pervasive and growing apprehension among investors regarding a potential 'AI bubble,' drawing uncomfortable parallels to the dot-com bust of the early 2000s... While Nvidia continues to deliver exceptional financial results, the market's response suggests that a strong balance sheet alone may no longer be enough to assuage fears of overvaluation."

A Bank of America survey revealed that 45% of global fund managers perceived an AI bubble as the biggest "tail risk," with a majority believing AI stocks were already overvalued.

The Timing Collision

The earnings came as:

  • Blue Owl gated redemptions (16th November)
  • Huang admitted the "no-win situation" in leaked meeting (21st November)
  • Gemini 3 proved TPU viability at frontier scale (19th November)
  • DeepSeek efficiency gains undermined compute necessity
  • Power constraints limited deployment capacity
  • Private credit markets showed stress

NVIDIA delivered a "perfect" quarter—62% revenue growth, $500bn order visibility, commanding margins—and the market sold off anyway. The disconnect between fundamentals and price action suggests investors are pricing a different question: not "can NVIDIA execute?" but "will the returns ever justify the investment?"

When record earnings and future guidance no longer support the stock price, the market is signalling something more profound than quarterly concerns. It's questioning the viability of the entire AI infrastructure thesis.

The Immovable Object

They've hit physical constraints they can't code around:

  • Can't generate power faster (grid takes 10-15 years)
  • Can't transmit it (substations take 4-7 years)
  • Can't depreciate it slower (physics determines GPU lifespan)
  • Can't conjure revenue faster (humans buy at human speed)
  • Can't print chips instantly (fabs take 5+ years)

This is the first time these companies have faced visible, undeniable physical constraints:

  • Nadella admitting chips sitting idle (can't hide power shortage)
  • Goldman showing 8/13 grids critical (can't dispute)
  • Oracle CDS widening (markets pricing default risk)
  • DeepSeek proving £4 million beats £4 billion (maths doesn't lie)

Software companies could always claim "we'll optimise it." You can't optimise physics.


The Final Metaphor: The Emperor Has No Looms

"The emperor has no power to run the looms to make the clothes to wow the people because he spent the loom Capex on marketing and the looms and power supply for them will take 5-6 years to come online and the marketing, noble, and merchant classes can't stretch belief long enough to see the fantabulous clothes."

The Timeline Collision

  • CapEx spent: NOW (£400 billion annually)
  • Marketing/announcements: NOW (Ignite, keynotes, demos)
  • Revenue needed to service debt: 2-3 years
  • Power infrastructure delivery: 5-7 years (grid connections)
  • Nuclear plants: 10-15 years (if building new)
  • Transmission upgrades: 7-10 years (substations, lines)
  • Full CapEx recoupment needed: 5 years (to hit £1.5 trillion revenue target)

The Belief Exhaustion

The marketing class (VCs, analysts) believed for 3 years.
The noble class (institutional investors) believed for 2 years.
The merchant class (corporate buyers) is asking "where's the ROI?"

  • Private credit at 63 cents says: belief exhausted
  • Oracle CDS at 105bps says: belief exhausted
  • Eight grids at critical spare says: physics doesn't care about belief

The Final Irony

Even if the power arrives (2030), and the looms work (2031), and the clothes get made (2032)...

DeepSeek already showed everyone the emperor could've bought a perfectly good suit at the charity shop for £4 million instead of spending £400 billion on custom looms.

The crowd is starting to notice there are no clothes. And won't be. For years.

The emperor has no grid connection.


Appendix: Sources and References

Power Infrastructure and Grid Constraints

  1. TechCrunch (3 November 2025) - "Altman and Nadella need more power for AI, but they're not sure how much"
    https://techcrunch.com/2025/11/03/altman-and-nadella-need-more-power-for-ai-but-theyre-not-sure-how-much/
    Nadella and Altman quotes on power constraints and idle chips
  2. White & Case (2025) - "Grid operators propose innovative measures to manage electricity demand from data centers"
    https://www.whitecase.com/insight-alert/grid-operators-propose-innovative-measures-manage-electricity-demand-data-centers
    PJM 32GW growth forecast, data centres comprising 30GW
  3. IT Brew (26 March 2025) - "Data center energy requirements leading to grid connectivity problems"
    https://www.itbrew.com/stories/2025/03/26/data-center-energy-requirements-leading-to-grid-connectivity-problems-and-solutions
    Virginia 7-year grid connection timeline, up from 4 years
  4. DISTRIBUTECH (2025) - "Navigating data center demand and load growth"
    https://www.distributech.com/2025-technical-conference-sessions/navigating-data-center-demand-and-load-growth-strategies-for-utilities-and-grid-operators-in-the-new-era-of-power-distribution-1
    50MVA-1GW substation requirements, 5x-20x load growth requests
  5. World Resources Institute (2025) - "Powering the US Data Center Boom: The Challenge of Forecasting Electricity Needs"
    https://www.wri.org/insights/us-data-centers-electricity-demand
    24-72 month delays due to power availability constraints

Three Mile Island and Nuclear Restart

  1. Pennsylvania Capital-Star (26 June 2025) - "Microsoft describes Three Mile Island plant as a once-in-a-lifetime opportunity"
    https://penncapital-star.com/economy/microsoft-describes-three-mile-island-plant-as-a-once-in-a-lifetime-opportunity/
    £12.3 billion deal details, 837MW capacity, 2028 restart target
  2. Data Center Dynamics (20 September 2024) - "Three Mile Island nuclear power plant to return as Microsoft signs 20-year PPA"
    https://www.datacenterdynamics.com/en/news/three-mile-island-nuclear-power-plant-to-return-as-microsoft-signs-20-year-835mw-ai-data-center-ppa/
    Plant shut down in 2019 due to poor economics, 20-year PPA details

AI Revenue and Return Requirements

  1. Sparkling Capital (23 October 2025) - "Surviving the AI Capex Boom"
    https://www.sparklinecapital.com/post/surviving-the-ai-capex-boom
    Bain £1.5 trillion revenue requirement by 2030, current £15 billion revenue
  2. Tom's Hardware (November 2025) - "J.P. Morgan calls out AI spend, says $650 billion in annual revenue required"
    https://www.tomshardware.com/tech-industry/artificial-intelligence/usd650-billion-in-annual-revenue-required-to-deliver-10-percent-return-on-ai-buildout-investment-j-p-morgan-claims-equivalent-to-usd35-payment-from-every-iphone-user-or-usd180-from-every-netflix-subscriber-in-perpetuity
    £500 billion annual revenue for 10% return, perpetuity requirement
  3. Morgan Stanley (2025) - "AI Capex Amid 2025 Bull Market: What's Next?"
    https://www.morganstanley.com/insights/articles/ai-spending-bull-market-2025
    Free cash flow turning negative, -16% forecast over 12 months

DeepSeek and China AI Development

  1. RD World Online (23 January 2025) - "DeepSeek-R1 RL model: 95% cost cut vs. OpenAI's o1"
    https://www.rdworldonline.com/this-week-in-ai-research-a-0-55-m-token-model-rivals-openais-60-flagship/
    £4.3 million training cost vs OpenAI's £4.6 billion, 95% cost reduction
  2. Analytics Vidhya (4 April 2025) - "DeepSeek R1 vs OpenAI o1: Which One is Faster, Cheaper and Smarter?"
    https://www.analyticsvidhya.com/blog/2025/01/deepseek-r1-vs-openai-o1/
    2.78 million GPU hours vs Meta's 30.8 million, resource optimization details
  3. Aloa - "ChatGPT vs DeepSeek - AI Model Comparison"
    https://aloa.co/ai/comparisons/llm-comparison/chatgpt-vs-deepseek
    £130 vs £3,565 for 100 million tokens monthly, 96% cost reduction
  4. VentureBeat (26 August 2025) - "DeepSeek-R1's bold bet on reinforcement learning"
    https://venturebeat.com/ai/deepseek-r1s-bold-bet-on-reinforcement-learning-how-it-outpaced-openai-at-3-of-the-cost
    Pure RL training methodology, bypassing supervised fine-tuning
  5. Federal Reserve (6 October 2025) - "The State of AI Competition in Advanced Economies"
    https://www.federalreserve.gov/econres/notes/feds-notes/the-state-of-ai-competition-in-advanced-economies-20251006.html
    China 3,200GW capacity vs US 1,293GW, 429GW added in 2024 alone
  6. Fortune (14 August 2025) - "AI experts return from China stunned: The U.S. grid is so weak"
    https://fortune.com/2025/08/14/data-centers-china-grid-us-infrastructure/
    China 80-100% reserve margin vs US 15%, treating data centres as "oversupply absorption"
  7. Epoch AI (5 June 2025) - "The US hosts the majority of GPU cluster performance"
    https://epoch.ai/data-insights/ai-supercomputers-performance-share-by-country
    US 75% of global GPU cluster performance, China 15%

DeepSeek Distillation Controversy

  1. BGR (29 January 2025) - "OpenAI says it has evidence DeepSeek used ChatGPT to train its AI"
    https://www.bgr.com/tech/openai-says-it-has-evidence-deepseek-used-chatgpt-to-train-its-ai/
    OpenAI allegations of distillation, account blocking details
  2. Recode China AI (2 February 2025) - "Inside the OpenAI-DeepSeek Distillation Saga"
    https://recodechinaai.substack.com/p/decoding-the-openai-deepseek-distillation
    Distillation Quantification research, DeepSeek-V3 and Qwen-Max findings
  3. Hugging Face - "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
    https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
    Official technical documentation showing distillation FROM R1 TO Qwen/Llama

Private Credit and Bond Market Stress

  1. Bloomberg (19 November 2025) - "Creditors Brace for an Ugly Fight Over $5 Billion Debt at CD&R's Multi-Color"
    https://www.bloomberg.com/news/newsletters/2025-11-19/creditors-brace-for-an-ugly-fight-over-5-billion-debt-at-cd-r-s-multi-color
    Multi-Color £3.8 billion debt, 10.5% notes at 63 cents (from 91 cents August)
  2. Fortune (24 August 2025) - "Credit fuels the AI boom — and fears of a bubble"
    https://fortune.com/2025/08/24/private-credit-bonds-loans-debt-ai-boom-bubble/
    £38 billion per quarter private credit AI funding, £1.3 trillion market size
  3. The American Prospect (19 November 2025) - "The AI Bubble Is Bigger Than You Think"
    https://prospect.org/2025/11/19/ai-bubble-bigger-than-you-think/
    Private credit £1.3 trillion with no Fed emergency facility, SPV structure details
  4. CNBC (20 November 2025) - "AI capex spending fears spread to the bond market"
    https://www.cnbc.com/2025/11/20/ai-capex-spending-fears-spread-to-bond-market-following-tech-jitters.html
    Meta £23 billion October issuance, Alphabet £19 billion, Oracle £14 billion, Man Group warning
  5. Connect Money (19 November 2025) - "AI's Capital Crunch Hits the Bond Market"
    https://www.connectmoney.com/stories/ais-capital-crunch-hits-the-bond-market/
    Oracle CDS widening 18bps to 105bps, from 33bps yearly low

Market Concentration and Systemic Risk

  1. Mill Creek Capital (8 September 2025) - "Artificial Intelligence is Leading a CapEx Boom"
    https://millcreek.com/perspectives/charts-of-the-week-artificial-intelligence-is-leading-a-capex-boom/
    CapEx 4% of US GDP, responsible for 0.5-1% GDP growth, hyperscalers 70% of revenues to AI
  2. GWK Invest (7 October 2025) - "When Will AI Investments Start Paying Off?"
    https://www.gwkinvest.com/insight/macro/when-will-ai-investments-start-paying-off/
    Top 10 stocks 58% of S&P 500 market cap increase since 2022, Magnificent 7 now 30% of S&P 500 CapEx
  3. RIA Advisors (21 November 2025) - "Capex Spending On AI Is Masking Economic Weakness"
    https://realinvestmentadvice.com/resources/blog/capex-spending-on-ai-is-masking-economic-weakness/
    Bank of America forecast: global hyperscale spending up 67% in 2025, 31% in 2026, £467 billion total

Enterprise Risk and Microsoft Work IQ

  1. Twitter/X - Pirat_Nation (22 November 2025) - Windows 11 broken features admission
    https://twitter.com/Pirat_Nation/status/1859753489234735615
    Microsoft admits almost all major Windows 11 core features broken
  2. Twitter/X - robmay70 (23 November 2025) - Work IQ announcement analysis
    https://twitter.com/robmay70/status/1859915537891315715
    Work IQ intelligence layer details, monitoring capabilities, UK rollout December 2025

China AI Strategy and Innovation Transition

  1. Centre for International Governance Innovation - "DeepSeek and China's AI Innovation"
    https://www.cigionline.org/articles/deepseek-and-chinas-ai-innovation-in-us-china-tech-competition/
    Liang Wenfeng quotes on China as follower, cost optimization achievements
  2. Chatham House (29 January 2025) - "The world should take Chinese tech dominance seriously"
    https://www.chathamhouse.org/2025/01/world-should-take-prospect-chinese-tech-dominance-seriously-and-start-preparing-now
    China transition from follower to leader in specific domains, diffusion vs innovation
  3. MIT Technology Review (3 November 2025) - "The State of AI: Is China about to win the race?"
    https://www.technologyreview.com/2025/11/03/1126780/the-state-of-ai-is-china-about-to-win-the-race/
    China 22.6% of AI citations vs US 13%, 69.7% of AI patents
  4. MIT Technology Review (26 March 2025) - "China built hundreds of AI data centers. Now many stand unused"
    https://www.technologyreview.com/2025/03/26/1113802/china-ai-data-centers-unused/
    80% of China's new computing resources unused, DeepSeek moment causing strategic shift

Additional Market Indicators

  1. Goldman Sachs (November 2025) - "China's AI providers expected to invest $70 billion in data centers"
    https://www.goldmansachs.com/insights/articles/chinas-ai-providers-expected-to-invest-70-billion-dollars-in-data-centers-amid-overseas-expansion
    China £54 billion 2025 investment, 65% growth year-over-year, 30GW electricity capacity
  2. NextBigFuture (21 October 2025) - "China AI Chip and AI Data Centers Versus US"
    https://www.nextbigfuture.com/2025/10/china-ai-chip-and-ai-data-centers-versus-us-ai-data-centers.html
    China £75 billion AI CapEx potential in 2025, 550 planned data centres with 125GW capacity
  3. S&P Global (14 October 2025) - "Data center grid-power demand to rise 22% in 2025"
    https://www.spglobal.com/commodity-insights/en/news-research/latest-news/electric-power/101425-data-center-grid-power-demand-to-rise-22-in-2025-nearly-triple-by-2030
    61.8GW in 2025, 134.4GW by 2030, Virginia 12.1GW, Texas 9.7GW state-level demand
  4. Fortune / Business Insider (21 November 2025) - "Nvidia CEO says the company is in a no-win situation amid AI-bubble chatter, leaked meeting reveals"
    https://fortune.com/2025/11/21/nvidia-ceo-jensen-huang-q3-earnings-beat-no-win-situation-ai-bubble-fears-nvda-stock-selloff/
    Jensen Huang internal all-hands meeting audio: "We're basically holding the planet together—and it's not untrue," no-win bubble situation, $500 billion market cap loss
  5. Tom's Hardware (21 November 2025) - "Nvidia CEO Jensen Huang complains about stock price slide during all-hands meeting"
    https://www.tomshardware.com/tech-industry/nvidia-ceo-jensen-huang-complains-about-stock-price-slide-during-all-hands-meeting-says-market-did-not-appreciate-companys-incredible-quarter
    Additional context on Huang's "AI bubble" dilemma quotes and market reactions
  6. The American Prospect (19 November 2025) - "The AI Bubble Is Bigger Than You Think"
    https://www.americanprospect.org/2025/11/19/ai-bubble-bigger-than-you-think/
    David Dayen reporting on Blue Owl AI infrastructure financing, SPV structures, Meta Hyperion deal, private credit market mechanics, redemption blocking
  7. Morningstar / Bloomberg (19 November 2025) - "Creditors Brace for an Ugly Fight Over $5 Billion Debt at CD&R's Multi-Color"
    https://www.bloomberg.com/news/newsletters/2025-11-19/creditors-brace-for-an-ugly-fight-over-5-billion-debt-at-cd-r-s-multi-color
    Blue Owl portfolio bankruptcies (Tricolor, First Brands), "inflated valuations and liquidity" concerns in private credit
  8. Law Offices of Howard G. Smith (17-20 November 2025) - Blue Owl class action securities fraud lawsuit
    Allegations: Blocking redemptions, forcing 20% investor losses, failure to disclose liquidity problems
  9. Levi & Korsinsky (17-20 November 2025) - Blue Owl class action securities fraud lawsuit
    Parallel allegations regarding redemption gates and investor losses
  10. Law Offices of Frank R. Cruz (17-20 November 2025) - Blue Owl class action securities fraud lawsuit
    Additional securities fraud claims related to OBDC merger and redemption blocking
  11. NVIDIA Corporation (19 November 2025) - "NVIDIA Announces Financial Results for Third Quarter Fiscal 2026"
    https://investor.nvidia.com/news/press-release-details/2025/NVIDIA-Announces-Financial-Results-for-Third-Quarter-Fiscal-2026/default.aspx
    Q3 FY2026: $57.01bn revenue, $51.2bn data center, $65bn Q4 guidance
  12. CNBC (19 November 2025) - "Nvidia (NVDA) earnings report Q3 2026"
    https://www.cnbc.com/2025/11/19/nvidia-nvda-earnings-report-q3-2026.html
    Earnings beat, data center revenue detail, Huang bubble rebuttal comments
  13. CNBC (20 November 2025) - "Nvidia earnings takeaways: Bubble talk, 'half a trillion' forecast and China orders"
    https://www.cnbc.com/2025/11/20/nvda-stock-earnings-ai-bubble-china.html
    $500bn order book confirmation, circular funding concerns, Kress quotes
  14. CNBC (20 November 2025) - "Nvidia stock closes nearly 3% lower, wiping out post-earnings rally"
    https://www.cnbc.com/2025/11/20/nvidia-shares-rise-after-after-stronger-than-expected-3q-results.html
    Stock price reversal, market reaction analysis, analyst commentary
  15. TrendSpider (21 November 2025) - "NVDA Stock Slides After Strong Q3 Earnings"
    https://trendspider.com/blog/nvidia-stock-falls-despite-blowout-q3-earnings-and-ai-momentum/
    3.15% decline, $365bn market cap loss, analyst upgrades and targets
  16. NBC News (20 November 2025) - "Stock market sinks as AI and interest rate worries grip investors"
    https://www.nbcnews.com/business/business-news/stock-market-nvidia-walmart-jobs-rcna244960
    Broader market selloff, Burry "just because something is used" comment, AI profitability concerns
  17. CNBC (20 November 2025) - "Nvidia rebuts 'Big Short' investor Burry's bear case. What the company says"
    https://www.cnbc.com/2025/11/20/nvidia-refutes-big-short-investor-burrys-bear-case-what-the-company-says.html
    Kress CUDA depreciation defence, A100 utilization claims, Melius Research analysis
  18. CNN Business (19 November 2025) - "Nvidia beats earnings expectations, even as bubble concerns mount"
    https://www.cnn.com/2025/11/19/tech/nvidia-earnings-ai-bubble-fears
    Circular funding deals, OpenAI $100bn investment, Sarah Friar government backstop comment
  19. Stocktwits (21 November 2025) - "Is Nvidia's Lone Bear Vindicated? Seaport Analyst Slams 'Circular Deals'"
    https://stocktwits.com/news-articles/markets/equity/is-nvidia-s-lone-bear-vindicated-seaport-analyst-slams-circular-deals/cLPMKdtREOc
    Doug Goldberg Underperform rating, OpenAI overreliance concerns, $140 price target
  20. FinancialContent (21 November 2025) - "Nvidia Stumbles Amidst AI Slump Fears Despite Stellar Earnings"
    https://markets.financialcontent.com/stocks/article/marketminute-2025-11-21-nvidia-stumbles-amidst-ai-slump-fears-despite-stellar-earnings
    Dot-com bubble parallels, Bank of America 45% "tail risk" survey, valuation concerns
  21. 24/7 Wall Street (21 November 2025) - "Nvidia (NVDA) Bull, Base, & Bear Price Prediction and Forecast"
    https://247wallst.com/forecasts/2025/11/21/nvidia-nvda-bull-base-bear-price-prediction-and-forecast/
    "Bearish argument not entirely gone" analysis, 80% market share, speculative concerns
  22. VentureBeat (19 November 2025) - "Google unveils Gemini 3 claiming the lead in math, science, multimodal, and agentic AI benchmarks"
    https://venturebeat.com/ai/google-unveils-gemini-3-claiming-the-lead-in-math-science-multimodal-and
    Benchmark scores: 73 Artificial Analysis, 1501 Elo, 91.9% GPQA Diamond, 23.4% MathArena Apex
  23. StartupHub.ai (19 November 2025) - "Google's Gemini 3.0 and the Strategic Resurgence of TPUs"
    https://www.startuphub.ai/ai-news/ai-video/2025/googles-gemini-3-0-and-the-strategic-resurgence-of-tpus/
    CNBC Deirdre Bosa reporting, "trained entirely on TPUs", Anthropic tens of billions commitment, Google Cloud TPU head quotes
  24. Aakash G / News.aakashg.com (19 November 2025) - "Gemini 3 isn't just the top model, it's rewriting AI infrastructure"
    https://www.news.aakashg.com/p/gemini-3
    "Google has the infrastructure advantage" analysis, capital/data/TPU combination
  25. InvestingLive (19 November 2025) - "Google proved that you don't need Nvidia and so much more"
    https://investinglive.com/stocks/google-proved-that-you-dont-need-nvidia-and-so-much-more-20251119/
    70% TPU cost advantage, Midjourney TPU usage, Apple Siri Gemini selection
  26. Team Blind Forum (19 November 2025) - "Gemini 3 proves you don't need NVIDIA chips for LLMs"
    https://www.teamblind.com/post/gemini-3-proves-you-dont-need-nvidia-chips-for-llms-105ptqfq
    Industry commentary: "99% can't afford", "hyperscalers are biggest customers", decade timeline for Chinese parity
  27. Vincent Schmalbach (31 March 2025) - "The Best AI Model Doesn't Run on NVIDIA Chips"
    https://www.vincentschmalbach.com/the-best-ai-model-doesnt-run-on-nvidia-chips/
    TPU v5p 2.8× faster than H100 for training, Amazon Trainium/Inferentia, Microsoft Athena development

Analysis Compiled: 23rd November 2025
Sources Verified: 23rd November 2025
Market Data Current As Of: 23rd November 2025

This analysis represents observations of publicly available market signals and does not constitute investment advice. All currency conversions use approximate exchange rates current at time of analysis.

The Emperor Has No Grid Connection: A Real-Time Analysis of the AI Infrastructure Bubble