TPUs, Shopping Agents, and State AI Showdown

Nov 26, 2025 • 8:57

Google’s reported TPU deal with Meta rattles Nvidia as AI hardware diversifies. We break down OpenAI’s subscription ambitions and shopping assistant, the state AG push against federal preemption, China’s surge in open-model downloads, and Perplexity’s one-click personal shopper.

Show Notes

Welcome to AI News in 10, your top AI and tech news podcast in about 10 minutes. AI tech is amazing and is changing the world fast; for example, this entire podcast is curated and generated by AI using my and my kids' cloned voices...

Here’s what’s moving in AI and tech on Wednesday, November 26th, 2025... We’ve got fresh tremors in the chip race: reports say Google is negotiating a multibillion-dollar deal to supply TPUs to Meta—spooking Nvidia investors. OpenAI, meanwhile, is eyeing a huge subscription future for ChatGPT and dipping into shopping. Attorneys general from 35 states are telling Congress: do not block state AI laws. A new study says China just leapfrogged the US in downloads of open AI models. And Perplexity rolled out an AI personal shopper with one-click checkout. Let’s dive in.

[BEGINNING_SPONSORS]

First up, the chip wars... and a potential tectonic shift. Multiple outlets say Google is in talks to sell its custom Tensor Processing Units—TPUs—to Meta for on-premises deployment by 2027, with Meta also poised to rent TPU capacity from Google Cloud as early as next year.

If the deal lands, Google insiders reportedly think it could capture as much as 10 percent of Nvidia’s annual revenue—billions of dollars—by prying customers away from Nvidia’s GPU stack. That possibility rattled markets: Nvidia shares slid around two to four percent on the headlines, while Alphabet ticked up.

The subtext here is strategic: Google’s been pitching TPUs as a cost-efficient, secure alternative to CUDA—complete with a growing software ecosystem—and the company appears ready to do what it hasn’t before... sell the chips for use outside its own data centers. That’s a direct shot at the GPU king—and a sign that the AI hardware market is moving from single supplier to multi-rail. Reporting has emphasized both the potential deployment timelines and the size of the bite this could take out of Nvidia’s revenue.

Why it matters: training is still GPU-heavy, but inference at scale will live or die on cost, energy, and supply chain resilience. If hyperscalers start dual-sourcing compute—think GPUs from Nvidia and TPUs from Google—the moat narrows. And if Meta adopts TPUs for core workloads while continuing its open-model push, that could ripple across the ecosystem—frameworks, compilers, and developer tools—well before 2027.

Second story—OpenAI’s subscription ambitions just got a big number attached. OpenAI projects that by 2030, roughly 220 million of ChatGPT’s estimated 2.6 billion weekly users—about 8.5 percent—will be paid subscribers. As of July, about 35 million were paying for Plus or Pro, priced at 20 dollars and 200 dollars per month. The company’s annualized revenue run rate is expected to hit around 20 billion dollars by year-end, yet losses are still mounting given R and D and compute costs.
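If you want to sanity-check those numbers, here’s a minimal back-of-envelope sketch in Python using only the figures cited in the story; the all-Plus versus all-Pro revenue bounds at the end are an illustrative assumption, not a tier mix OpenAI has disclosed.

```python
# Back-of-envelope check of the subscription figures cited in the story.
weekly_users_2030 = 2_600_000_000   # projected weekly ChatGPT users by 2030
paid_users_2030 = 220_000_000       # projected paid subscribers by 2030

paid_share = paid_users_2030 / weekly_users_2030
print(f"Projected paid share: {paid_share:.1%}")  # about 8.5 percent

# Hypothetical bounds: every paid user on Plus ($20/mo) vs. every one on Pro ($200/mo).
# The real tier mix is unknown; these are just the extremes implied by the two prices.
plus_price, pro_price = 20, 200
low = paid_users_2030 * plus_price * 12
high = paid_users_2030 * pro_price * 12
print(f"Implied annual subscription revenue: ${low / 1e9:.0f}B to ${high / 1e9:.0f}B")
```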

And in a nod to near-term monetization, OpenAI rolled out a personal shopping assistant inside ChatGPT this week—positioned to enable ad or commission revenue as the holiday rush kicks in.

What’s interesting here is the strategic mix: subscriptions plus commerce and advertising. If AI shopping agents become the new front door for product discovery, retailers and brands will have to optimize not just for search engines but also for AI assistants that compare specs, parse reviews, and personalize picks. Expect competition to heat up between the major assistants as they fight to sit between consumers and the checkout button.

Third—big policy drumbeat. A bipartisan group of 35 state attorneys general, plus DC, urged Congress not to preempt state AI laws. Their argument: with federal efforts stalled, states need autonomy to protect residents from AI harms—think deepfake abuse, discrimination in credit or housing, and risky chatbots—especially as sweeping rules in places like California are set to kick in starting in 2026.

The backdrop: the administration has weighed adding preemption language to the annual defense bill, but the Senate previously voted 99 to 1 against blocking state AI laws. The AGs’ letter—led by New York, North Carolina, Utah, and New Hampshire—signals a likely showdown over who sets the rulebook for frontier models and high-risk use cases.

If you’re a developer or enterprise AI buyer, the takeaway is fragmentation versus uniformity. Industry wants one national standard; states want the flexibility to move faster. In the near term, compliance teams should plan for a patchwork—disclosure, risk management, and robustness testing obligations varying by state—until Congress lands something durable.

[MIDPOINT_SPONSORS]

Fourth—an eye-opening data point on open models. A new MIT and Hugging Face study says China has leapfrogged the United States in the global market for open AI models, accounting for about 17 percent of downloads versus roughly 15.8 percent for US developers. Names like DeepSeek and Alibaba’s Qwen loom large, and the report underscores how frequent releases and smaller, efficient models are driving adoption.

It reinforces a broader trend: even as US labs emphasize closed frontier systems, China’s open-weights ecosystem is accelerating—despite chip constraints—through rapid iteration and community contributions.

Zooming out, this shift matters for innovation diffusion and sovereignty. Open models are easier to audit, fine-tune, and deploy locally—appealing to startups and governments alike. It also turns up the heat on US initiatives pushing for a stronger open-source posture; the report warns that the US could cede leadership without a concerted open-models strategy.

Fifth—Perplexity is stepping directly into AI-shopping season. The company introduced an AI personal shopper that lets you search conversationally, refine picks with follow-ups, and buy directly via PayPal’s Instant Buy. Suggestions appear as product cards with specs and reviews; the assistant can remember prior interactions—say, that jacket you liked—so it can recommend compatible boots next time. Perplexity is positioning this as a user-first alternative to ad-driven editorial and affiliate content that often clutters the web. It’s rolling out on desktop and web in the US—with mobile coming soon.

Seen together with OpenAI’s new shopping assistant, this marks an inflection point: AI agents aren’t just answering questions—they’re curating and transacting. For retailers, that means optimizing feeds, metadata, and structured product data for agents—not just for search. For consumers, it could mean fewer tabs and more trustworthy comparisons—if the assistants keep their sourcing transparent.
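To make "structured product data for agents" concrete, here’s a minimal, hypothetical sketch of what agent-readable product metadata might look like, using schema.org-style Product fields serialized as JSON-LD; the product, prices, and ratings below are invented for illustration.

```python
import json

# Hypothetical schema.org-style Product record, the kind of structured data an
# AI shopping agent could parse from a product page. All values are invented.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trailline Waterproof Hiking Boot",
    "sku": "TL-BOOT-042",
    "description": "Waterproof leather hiking boot for cold-weather trails.",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": 4.6,
        "reviewCount": 312,
    },
    "offers": {
        "@type": "Offer",
        "price": 149.99,
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embedded in a page as <script type="application/ld+json">...</script>, this
# gives an assistant clean specs, price, and review data to compare against.
print(json.dumps(product, indent=2))
```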

Quick recap... Google’s reported TPU talks with Meta rattled Nvidia and signaled a more plural AI hardware future. OpenAI set a bold target—220 million paid ChatGPT users by 2030—and leaned into holiday shopping. State AGs told Congress to keep states in the AI rulemaking game. China edged ahead of the US in open-model downloads. And Perplexity launched a personal shopper that could make holiday buying a lot more conversational. We’ll keep watching how these threads—chips, commerce, policy, and open models—intertwine over the next few weeks.

Thanks for listening, and a quick disclaimer: this podcast was generated and curated by AI using my and my kids' cloned voices. If you want to know how I do it or want to do something similar, reach out to me at emad at ai news in 10 dot com, that's ai news in one zero dot com...