Inside Trainium, Cursor’s Kimi Twist, Orbital Compute

Mar 23, 2026 • 8:49

We go inside AWS’s Trainium scale, unpack Cursor’s Kimi-based model, test WordPress.com’s new agent powers, and explore Blue Origin’s orbital data center plans — plus a cyberattack that left some drivers stranded. Your fast, practical briefing to start the week.

Show Notes

Welcome to AI News in 10, your top AI and tech news podcast in about 10 minutes. AI tech is amazing and is changing the world fast. Case in point: this entire podcast is curated and generated by AI using my kids' and my cloned voices...

It’s Monday, March 23, 2026, and here’s what’s shaping the AI and tech landscape today.

We’ve got a rare inside look at Amazon’s Trainium chip program and how it’s underpinning Anthropic — and soon OpenAI — at staggering scale. Cursor, a fast-growing coding startup, just acknowledged its flagship model starts from China’s Kimi 2.5, raising fresh questions about open models, licensing, and geopolitics. WordPress.com is opening the gates for AI agents to write and publish on your site. Jeff Bezos’ Blue Origin is officially in the orbital data center race with a filing for tens of thousands of compute satellites. And a cyberattack on a U.S. vehicle breathalyzer company has literally kept some people from starting their cars.

Let’s get into it...

[BEGINNING_SPONSORS]

Story one — Amazon’s Trainium lab... numbers we rarely get.

Over the weekend, TechCrunch toured AWS’s Austin chip lab and surfaced specifics that hint at how quickly Amazon’s custom silicon is scaling. AWS says there are now 1.4 million Trainium chips deployed across three generations, with Anthropic’s Claude running on more than 1 million Trainium2 parts.

One mega-cluster — Project Rainier — went live in late 2025 with 500,000 Trainium2 chips for Anthropic. And in the new OpenAI agreement, AWS has committed two gigawatts of Trainium capacity — compute at utility scale.

TechCrunch also reports that Trainium3 on new UltraServers aims to cut costs by up to 50 percent for comparable performance, and that Trainium now supports PyTorch with near drop-in workflows — basically a one-line change. It’s notable that Trainium is now heavily used for inference, not just training, and even handles most inference on Amazon’s Bedrock service... all per TechCrunch’s on-site report.
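The "one-line change" described for PyTorch typically means swapping the target device; AWS's Neuron SDK exposes Trainium as an XLA device. A minimal sketch of what that looks like, assuming the `torch_xla` import path from public Neuron docs (the fallback to CPU is just for illustration off-Trainium):

```python
import torch

# On a Trainium instance, the Neuron SDK exposes the chip as an XLA
# device via torch_xla; elsewhere this sketch falls back to CPU.
# The import path and device call are assumptions based on public
# Neuron documentation, not verified against AWS's exact API.
try:
    import torch_xla.core.xla_model as xm
    device = xm.xla_device()       # Trainium, when available
except ImportError:
    device = torch.device("cpu")   # local fallback for this sketch

# Everything below is unchanged, ordinary PyTorch code.
model = torch.nn.Linear(8, 2).to(device)
x = torch.randn(4, 8).to(device)
out = model(x)
print(tuple(out.shape))
```

The point of the claim is that only the device-selection line differs between a GPU and a Trainium run; the model and training loop stay as-is.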

Why this matters: Nvidia’s GTC dominated last week, but Trainium’s trajectory shows hyperscalers aren’t just renting GPUs — they’re building alternatives with different cost curves and supply dynamics. If AWS can standardize tooling and prove strong price-per-token for inference at scale, that’s real pressure on GPU incumbency.

Story two — Cursor and the Kimi curveball.

Cursor, a red-hot AI coding startup, launched Composer 2 touting frontier-level coding prowess. Then a user on X dug into identifiers and argued it was Kimi 2.5 under the hood — a model from China’s Moonshot AI. Cursor’s VP of developer education confirmed Composer 2 started from an open-source base, saying roughly a quarter of total compute came from the base model and the rest from Cursor’s additional training and reinforcement learning.

Moonshot’s Kimi account then congratulated Cursor and said the usage was authorized via Fireworks AI. TechCrunch notes Cursor raised 2.3 billion dollars last fall at a 29.3 billion dollar valuation and is reportedly above 2 billion in annualized revenue — which is why the disclosure stirred debate on transparency, licensing, and what counts as “your model” in 2026.

What to watch: This could normalize a pattern — start with a strong open model, then differentiate with data, RL, evaluation, and product UX. But it also spotlights rising cross-border dependencies... a U.S. dev tool leaning on a Chinese-origin base model amid intensifying policy scrutiny.

Story three — WordPress.com gives AI agents the keys, carefully.

WordPress.com now lets AI agents draft, edit, and publish posts; manage comments; and reorganize tags and categories — all through natural-language commands. By default, AI-authored posts save as drafts and require human approval... but site owners can grant broader privileges.
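The permission model described above, drafts by default unless the owner grants publish rights, can be sketched in a few lines. This is a conceptual illustration only; the class and method names are hypothetical, not WordPress.com's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical model of the draft-by-default gating described above.
# Names here are illustrative, not WordPress.com's real interface.

@dataclass
class Post:
    title: str
    body: str
    status: str = "draft"  # AI-authored posts start as drafts

@dataclass
class Site:
    allow_ai_publish: bool = False          # owner-granted privilege
    posts: list = field(default_factory=list)

    def agent_submit(self, post: Post) -> str:
        # Agents can always save drafts; publishing directly
        # requires the broader privilege the owner opts into.
        if self.allow_ai_publish:
            post.status = "published"
        self.posts.append(post)
        return post.status

site = Site()
first = site.agent_submit(Post("Hello", "Drafted by an agent"))
site.allow_ai_publish = True
second = site.agent_submit(Post("Again", "Now auto-published"))
print(first, second)
```

The design choice worth noting: the safe state (draft) is the default, and escalation is an explicit per-site opt-in rather than a per-post decision.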

WordPress powers over 43 percent of all websites. And while WordPress.com is just a slice of that, its hosted network still sees about 20 billion page views and 409 million unique visitors a month. Under the hood, this builds on the Model Context Protocol integration WordPress.com shipped last fall, so assistants like Claude Desktop, Cursor, and ChatGPT can securely access site context — and now act on it, according to TechCrunch.

The big picture: If agents can run parts of the web stack — content, SEO hygiene, taxonomy, and community replies — we’ll see a lot more sites that feel alive, but less obviously human. Expect new norms around disclosure, audit trails, and content provenance to follow... fast.

[MIDPOINT_SPONSORS]

Story four — Blue Origin shoots for orbital compute.

In an FCC filing dated March 19, Blue Origin outlined Project Sunrise, a constellation of more than 50,000 satellites designed to function as a data center in orbit. The pitch: shift energy- and water-intensive compute off-planet to ease pressure on terrestrial grids and cooling — while tapping abundant solar power in space. The filing mentions using Blue Origin’s communications constellation, TeraWave, as the backbone.

It lands amid a mini-rush. SpaceX has separately discussed a massive distributed compute network; startup Starcloud has floated 60,000 spacecraft; and Google’s Project Suncatcher plans demo craft with Planet Labs next year. Economically, there are huge questions — radiation-hard chips, optical interlinks, in-space thermal management, and launch cadence — but Blue Origin’s heavy-lift New Glenn, which flew for the first time last year, could improve logistics if it achieves regular reusability. Experts told TechCrunch large-scale deployment is probably a 2030s story.

Why this matters: If AI inference becomes the world’s default workload and energy constraints keep tightening, orbital compute shifts from sci-fi to a plausible infrastructure hedge. The regulatory, environmental, and debris-mitigation debates start now — well before the first racks reach orbit.

Story five — a cyberattack that stranded drivers.

A breach at Intoxalock, the company behind in-car breathalyzers required by some state programs, has prevented calibrations nationwide since a March 14 incident — leaving many customers unable to start their vehicles. Intoxalock says it paused systems as a precaution, hasn’t provided a recovery timeline, and hasn’t characterized the attack type.

The company’s tech is used in 46 states and serves around 150,000 drivers annually, so even a short outage cascades into real-world mobility problems. Local reports from Maine to Minnesota describe cars sitting for days awaiting calibration. It’s a stark reminder of how brittle critical digital dependencies have become — even for seemingly offline-first systems, per TechCrunch.

Quick recap: Amazon’s Trainium program is scaling to utility-grade capacity while targeting cheaper inference; Cursor’s Kimi-based model shows how open foundations and commercial polish are merging; WordPress.com just handed AI agents real publishing powers; Blue Origin wants to move compute above the clouds; and a cyberattack reminded us that when software stops... daily life can, too. Those are the signals to watch as we head into the week.

Thanks for listening, and a quick disclaimer: this podcast was generated and curated by AI using my kids' and my cloned voices. If you want to know how I do it or want to do something similar, reach out to me at emad at ai news in 10 dot com, that's ai news in one zero dot com. See you all tomorrow.