Owning the Stack: Agents, Power, and Memory
Microsoft pushes toward its own frontier models, Anthropic vows to pay for grid upgrades, Google brings checkout to Gemini and AI Search, Mistral funds a Swedish data center for EU sovereignty, and SK hynix teases a hybrid memory design that could slash inference costs. We unpack the stakes, the timelines, and how these moves reshape AI's power, parts, and purchase flows.
Show Notes
Welcome to AI News in 10, your top AI and tech news podcast in about 10 minutes. AI tech is amazing and is changing the world fast. For example, this entire podcast is curated and generated by AI using my and my kids' cloned voices...
It's Thursday, February 12.
Here's what's on deck. Microsoft is signaling a major shift toward building its own frontier models — think professional-grade AGI — while keeping OpenAI in the mix. Anthropic is trying to defuse backlash over AI's power bills by pledging to cover grid upgrades itself. Google is turning Gemini and AI Search into places where you can actually buy things — directly — thanks to new retail integrations. France's Mistral AI is putting 1.2 billion euros into a Swedish data center to double down on European AI sovereignty. And SK hynix is pitching a fresh memory architecture that could supercharge inference efficiency... we'll unpack the numbers and what they really mean.
[BEGINNING_SPONSORS]
Let's start with Microsoft.
The Financial Times says Redmond is pivoting toward true self-sufficiency in AI — investing heavily in its own data, compute, and talent to build cutting-edge foundation models, not just leaning on OpenAI. Even after the two companies restructured their partnership in October 2025, Microsoft keeps access to OpenAI tech through 2032 and still holds a big equity stake. But Mustafa Suleyman, Microsoft's AI chief, is casting the next phase as professional-grade AGI aimed at automating large chunks of white-collar work over the next 12 to 18 months.
That vision sits on top of a massive capital spending plan — about 140 billion dollars for AI infrastructure — alongside health care ambitions like medical superintelligence. If that timeline holds, expect agents that don't just chat... they manage workflows, evolve with your org, and execute tasks with minimal oversight.
Why does this matter? Because it signals a move from being primarily a platform for others' models to owning more of the model stack itself. For customers, that could mean tighter integration with Microsoft 365, Azure, and security tooling — and potentially more predictable pricing and roadmaps. For competitors, it raises the stakes: Anthropic, OpenAI, and Google now face an even more vertically integrated Microsoft — one that can tune models directly against enterprise data and compliance requirements.
Story two: Anthropic is trying to get ahead of the "AI is raising my utility bill" narrative.
The company says it will pay 100 percent of grid-connection upgrade costs for its U.S. data centers — expenses that often get passed on to ratepayers. CEO Dario Amodei argues the costs of powering AI should fall on Anthropic, not everyday Americans. The company also committed to tools that cut load during peak demand and to supporting new power sources near its facilities.
This follows a previously announced plan to invest roughly 50 billion dollars in U.S. AI infrastructure, starting in Texas and New York. Context... utilities sought about 31 billion dollars in rate hikes in 2025 amid soaring data center demand, so there's real political heat. If Anthropic's model sticks, peers will feel pressure to follow.
What to watch next: will regulators and local utilities let hyperscalers assume those upgrade costs cleanly... and will voluntary peak-shaving commitments become contractual requirements tied to interconnection? Also watch how these pledges intersect with clean energy procurement timelines — matching AI loads to new generation is going to be a big theme in 2026.
Story three: Google is pushing harder on AI-native shopping.
New features let you buy select items from partners like Etsy and Wayfair directly inside Gemini and AI Mode in Search — no tab hopping, no separate checkout. A new Direct Offers feature brings promotional pricing right into AI results. And Google is courting retailers with a Universal Commerce Protocol so agents can handle discovery, checkout, and post-purchase support while the merchant remains the merchant of record.
The throughline is agentic commerce. The assistant doesn't just recommend — it completes the transaction. For retailers, that means access to Google's AI surfaces without ceding brand and order ownership. For consumers, it's fewer steps between "I need..." and "It's on the way."
Strategically, this puts Google on a collision course with both OpenAI and Amazon for control of the last mile of intent. If agentic checkout becomes the default, whoever owns the AI surface — and the standards underneath — could reshape retail traffic flows. Watch adoption among big-box retailers and marketplaces, and whether payments networks and loyalty programs wire in deeply enough to make this feel native.
[MIDPOINT_SPONSORS]
Story four: Europe's AI sovereignty push just got more concrete.
Mistral AI is investing 1.2 billion euros to build an AI-focused data center in Borlänge, Sweden — its first outside France — in partnership with EcoDataCenter. The facility, targeting 2027, emphasizes renewable power, advanced cooling, and local data residency. Early details point to an initial 23-megawatt footprint and several hundred eventual jobs as it scales.
CEO Arthur Mensch is pitching a fully European AI stack — from models to metal — so governments and enterprises can keep sensitive data on EU soil while avoiding lock-in to American cloud providers. Expect Nvidia's next-gen GPUs to anchor the deployment.
Beyond the symbolism, there's market logic — surging European demand for sovereign AI, strict privacy regimes, and public-sector procurement rules. If Mistral executes, this could become a template for regional AI infrastructure: low-carbon power, short supply lines, and compliance by design. It also intensifies competition to site the most power-hungry compute in regions with favorable energy economics.
Closing with chips — specifically, memory.
SK hynix is floating a hybrid architecture that marries today's High Bandwidth Memory with something new: High Bandwidth Flash, or HBF. In simulations presented this week, pairing eight HBM3E stacks with eight HBF stacks alongside an Nvidia Blackwell-class GPU reportedly delivered up to 2.69 times better performance per watt for certain AI workloads.
The idea is to offload gigantic key-value caches and long-context data to a dense, NAND-based tier right next to the GPU, freeing pricey HBM and compute for real-time work. In some simulated scenarios, batch throughput jumped as much as 18.8 times — shrinking what used to demand 32 GPUs down to just two. If even a slice of that shows up in production, inference economics could change fast.
Caveats matter. NAND writes are slower than DRAM, controller design is hard, and standards have to gel. But there's already a standardization drumbeat around HBF, with SK hynix and partners pushing toward samples and broader adoption later in the decade. For buyers staring at eye-watering inference bills, a fatter, cheaper adjacent memory tier could be a release valve — especially as context windows balloon and agents juggle more tools.
Quick recap... Microsoft is moving to own more of the AI stack while still tapping OpenAI. Anthropic says it will foot the bill for grid upgrades to blunt rate hikes. Google is turning Gemini and AI Search into real checkout surfaces with retailers like Etsy and Wayfair. Mistral is spending big in Sweden to build a sovereign European AI cloud. And SK hynix is pitching a hybrid HBM-plus-flash memory design that could make inference far more efficient.
Those are the signals to watch as AI shifts from splashy demos to the gritty economics of power, parts, and purchase flows.
Thanks for listening, and a quick disclaimer: this podcast was generated and curated by AI using my and my kids' cloned voices. If you want to know how I do it, or want to do something similar, reach out to me at emad at ai news in 10 dot com. That's ai news in one zero dot com. See you all tomorrow.