Meta Bets Big, Agents Go Shopping

Feb 18, 2026 • 8:40

Meta locks in millions of Nvidia chips, Google sets I/O dates, and Mastercard demos agent-led checkout in India — while users rally for GPT-4o and the AI bottleneck shifts to land and power. A fast, insightful tour of the deals, dates, and infrastructure shaping AI this week.

Show Notes

Welcome to AI News in 10, your top AI and tech news podcast in about 10 minutes. AI tech is amazing and is changing the world fast; for example, this entire podcast is curated and generated by AI using my and my kids' cloned voices...

Here's what's shaping the AI and tech world on Wednesday, February 18, 2026.

Meta is doubling down on Nvidia silicon in a multiyear deal — reportedly millions of chips. Google has locked May 19 to 20 for I/O 2026 with an AI-heavy agenda. Mastercard just showed off a fully authenticated, agent-led checkout in India — no taps, no OTPs, just an AI buying things on your behalf. OpenAI's decision last week to retire GPT-4o has sparked a fast-growing petition to resurrect it. And behind the scenes, power and land are the new gold, as a little-known firm called Cloverleaf becomes a takeover target for its AI-ready energy pipeline.

Let's dive in.

Story one: Meta just made one of the loudest statements yet about where AI compute is headed.

The company has struck a multiyear agreement to buy millions of Nvidia chips — next-gen Blackwell and Rubin GPUs, plus Nvidia's Grace and upcoming Vera CPUs.

Why does this matter? Two reasons.

First, it signals Meta isn't waiting for its in-house silicon to catch up after reported setbacks — Nvidia remains the fastest path to scale.

Second, it's notable that Meta is committing to Nvidia CPUs, not just GPUs, as inference soaks up more data center capacity and CPU-GPU balance becomes strategic.

The Financial Times reports Meta could push AI infrastructure spend to as much as 135 billion dollars this year, and identifies Meta as the first Big Tech firm to publicly commit to Nvidia CPUs for inference. The Verge adds the deal includes future Vera CPUs by 2027, and underscores broader market jitters about how all this spend gets financed.

However you slice it, Meta just told the market it plans to compute — at planetary scale.

There's a meta question underneath Meta's move — pun intended. If the AI world is pivoting from big-bang model training to relentless, always-on inference, power envelopes, memory bandwidth, and CPU offload start to matter as much as raw GPU count. Nvidia pushing standalone CPUs into these stacks is a strategic wedge — one that, if it sticks, could reshape data center bills of materials and vendor lock-in dynamics over the next hardware cycle.

Story two: Mark your calendar — Google I/O 2026 is officially set for May 19 to 20 at Shoreline Amphitheatre in Mountain View, with a global livestream.

Expect Gemini front and center across Android, Chrome, Workspace, and more. Google teased the dates with a playful Gemini-powered puzzle on the I/O site, and outlets are highlighting the AI-heavy framing.

The two-day format sticks, and the opening keynote lands the morning of May 19. If last year's I/O was about blanketing Google's product line with generative features, this year looks like deeper agentic workflows, Android-scale on-device AI, and tighter fusion between cloud AI and developer tooling.

Countdown's on... T minus 90 days.

Story three: Agentic commerce just took a concrete step in India.

At the India AI Impact Summit, Mastercard demonstrated what it calls the country's first fully authenticated, agent-led purchase — an AI agent discovered a product, verified the merchant, and completed a tokenized, authenticated payment end to end, without the user opening a shopping app or typing card details.

The demo ran with Mastercard cards from Axis Bank and RBL Bank. Payment aggregators included Razorpay, Cashfree, Juspay, and PayU. Merchant partners ranged from Swiggy and Zepto to Vodafone Idea and Tira. Business Standard reports the transaction used an LLM interface with tokenization and an open standard bridge to external systems, while the Times of India frames it as a milestone for secure, in-flow AI shopping.

One key caveat: the Economic Times notes it was a sandbox demonstration pending regulatory approvals — commercial rollout depends on policy clarity. Still, it's a vivid preview of how governed, auditable agents could make checkout feel... invisible.
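To make the flow described above concrete (discovery, merchant verification, then a tokenized payment), here's a minimal sketch. Every name, function, and token format below is hypothetical for illustration; none of it is Mastercard's actual API.

```python
from dataclasses import dataclass

@dataclass
class TokenizedCard:
    token: str   # network token standing in for the real card number
    issuer: str  # issuing bank (illustrative)

def agent_checkout(query, catalog, verify_merchant, pay):
    """Discover a product, verify the merchant, pay with a token."""
    matches = [o for o in catalog if query in o["name"].lower()]
    if not matches:
        return {"status": "no_match"}
    # Agent picks the cheapest matching offer.
    offer = min(matches, key=lambda o: o["price"])
    if not verify_merchant(offer["merchant"]):
        return {"status": "merchant_unverified"}
    # The agent never sees raw card details, only the token.
    return pay(offer, TokenizedCard(token="tok_demo_123", issuer="DemoBank"))

# Stubbed dependencies for the sketch.
catalog = [
    {"name": "Masala Chai 500g", "price": 299, "merchant": "tea-shop"},
    {"name": "Masala Chai 250g", "price": 179, "merchant": "tea-shop"},
]
allowlist = {"tea-shop"}
receipt = agent_checkout(
    "chai",
    catalog,
    verify_merchant=lambda m: m in allowlist,
    pay=lambda offer, card: {"status": "paid", "amount": offer["price"],
                             "token_used": card.token},
)
print(receipt["status"], receipt["amount"])  # paid 179
```

The point of the structure is auditability: each step (match, verification, payment) returns an explicit outcome, which is the kind of governed, traceable behavior regulators would want before a commercial rollout.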

Story four: OpenAI is feeling unexpected whiplash from nostalgia.

After OpenAI officially retired GPT-4o on February 13, a user-organized petition urging the company to restore the model has passed roughly 21,900 signatures. Business Insider reports petitioners say 4o's conversational warmth and emotional nuance made it uniquely compelling, even as OpenAI says the model accounted for just 0.1 percent of usage and that newer models surpass it on capability.

It's a telling moment: as labs optimize for reasoning accuracy, some power users miss the vibe. Product-wise, OpenAI has moved on, arguing that 4o's DNA lives on inside later models; emotionally, a slice of the community isn't ready to let go.

For developers and enterprises, the signal is different — expect models to churn faster, with deprecations tied to safety, cost, or roadmap cohesion. Translation: build with API abstraction layers and a deprecation plan.
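One way to absorb that churn is a thin routing layer: application code asks for a stable internal alias, and the layer maps it to whichever model the provider still serves. A minimal sketch, with all model IDs and names hypothetical:

```python
from dataclasses import dataclass

# Hypothetical alias table: callers request a stable alias, and the
# router picks the newest model still available. Model IDs here are
# illustrative, not real API names.
FALLBACKS = {"chat-default": ["model-v3", "model-v2", "model-v1"]}

@dataclass
class ModelRouter:
    available: set  # model IDs the provider currently lists

    def resolve(self, alias: str) -> str:
        """Return the first still-served model for an alias."""
        for model_id in FALLBACKS.get(alias, []):
            if model_id in self.available:
                return model_id
        raise LookupError(f"no live model for alias {alias!r}")

# If "model-v3" were deprecated, call sites keep working unchanged.
router = ModelRouter(available={"model-v2", "model-v1"})
print(router.resolve("chat-default"))  # prints "model-v2"
```

When a deprecation lands, only the fallback table changes; application code keeps calling the same alias, which is the deprecation plan in practice.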

Story five: The AI land-and-power race is accelerating, and a startup you may not know — Cloverleaf Infrastructure — has become a hot commodity.

Axios reports multiple suitors are circling Cloverleaf, which amasses gigawatt-scale land and grid access for data centers by tapping underused regional capacity instead of defaulting to on-site gas. The firm has raised about 300 million dollars to date and is pursuing 10 to 15 gigawatts of projects — roughly 10 to 15 times Seattle's peak demand — though not all sites pan out, underscoring rising local resistance to mega-facilities.

Why does this matter? Because compute is constrained by electrons and permits as much as by chips. A takeover could roll up scarce grid interconnect rights and timelines — core inputs for anyone training or serving frontier models at scale.

And it's not just startups. Barron's highlights DTE Energy's long-term, 1.4-gigawatt supply deal for Oracle's Michigan data center, with regulators insisting Oracle — not ratepayers — foot the bill. The new AI stack begins at the substation.

Quick notes to watch as we wrap:

Google's I/O timing lines up neatly with Microsoft Build and Apple's WWDC window — expect competitive volleys on agentic assistants and on-device AI. And on the model front, Anthropic's Claude Sonnet 4.6 just landed with beefed-up coding and computer-use skills, plus a 1-million-token beta context. If you're benchmarking for enterprise agents, it's worth a look.

That's the lineup for Wednesday, February 18. Meta's chip binge signals the next phase of AI infrastructure, Google's I/O clock starts ticking, agentic commerce inches from demo to deployment, users push back on losing a beloved model, and the real moat in AI — power and permits — takes center stage. We'll be back tomorrow with the next wave.

Thanks for listening, and a quick disclaimer: this podcast was generated and curated by AI using my and my kids' cloned voices. If you want to know how I do it or want to do something similar, reach out to me at emad at ai news in 10 dot com (that's ai news in one zero dot com). See you all tomorrow.