Moonshots, AI Diplomacy, and the Chip Crunch
Artemis II beams back images of Earth from deep space as Washington opens a new AI export push. We unpack Microsoft’s reliability stumbles, rising hardware costs, and Meta’s custom inference chips — and what it all means in the weeks ahead.
Episode Infographic
Show Notes
Welcome to AI News in 10, your top AI and tech news podcast in about 10 minutes. AI tech is amazing and is changing the world fast. For example, this entire podcast is curated and generated by AI using my and my kids' cloned voices...
Here’s what we’re diving into today: NASA’s Artemis II crew is officially outbound and sending back jaw dropping images of Earth... the White House just kicked off a new phase to export full stack American AI abroad... Microsoft’s massive AI spending is meeting some down to earth reliability hiccups... new signals show AI demand and global tensions are pushing hardware costs higher... and Meta is laying out an aggressive roadmap for its own inference chips. Let’s get into it.
[BEGINNING_SPONSORS]
First up, Artemis II... the photos are as historic as the flight path.
After lifting off on April 1, NASA’s Orion spacecraft performed perigee raise maneuvers to leave Earth orbit and is now on a lunar free return trajectory — think Apollo era physics with 21st century tech. The crew — Reid Wiseman, Victor Glover, Christina Koch, and Canada’s Jeremy Hansen — shared fresh shots of our blue marble as they head outward, a reminder of both how small we are... and how big this mission feels.
NASA’s flight updates confirmed the burns that refined Orion’s orbit. Expect a close lunar pass before the crew swings home late next week, if timelines hold. The mission runs roughly ten days, with key trajectory corrections along the way. If all goes well, this is the dress rehearsal for sustained lunar operations later in the decade. Sources include NASA mission blog updates in early April, plus reporting from AP News and Axios.
Second, a policy move with global ripples.
The U.S. Commerce Department just opened a 90 day window — starting April 1 — for industry coalitions to propose full stack AI export packages. The idea is to bundle compute, models, tools, and services so allies can deploy American AI systems quickly, with Commerce selecting proposals in consultation with State, Defense, and Energy. Axios reports the program’s next phase is officially under way, and earlier briefings detailed what Commerce is seeking and how selection will work.
This dovetails with the administration’s broader AI legislative framework released March 20 — pushing for national rules and federal preemption of conflicting state laws. For U.S. AI vendors, there’s a lot at stake: new markets, new geopolitics, and new compliance overhead... especially as export control debates continue. Sources include Axios coverage from March and April, and the White House framework summary.
Third, Microsoft’s “AI at any cost” era is colliding with a very human reality — software reliability.
Even as Redmond and its Big Tech peers collectively point toward hundreds of billions in 2026 AI capital expenditures, Outlook headaches made headlines — yes, even NASA astronauts reportedly had to work around email issues amid all the Artemis buzz.
The juxtaposition is striking: breathtaking demos, huge data center bets, and then... the mundane friction of daily tools not behaving. The investor angle is just as stark. With spending lines surging, markets are asking when efficiency and revenue scale will catch up. The lesson for IT leaders is familiar — before you ask AI to automate the office, make sure the office still works. Source: Fortune on April 3.
[MIDPOINT_SPONSORS]
Fourth, if you’ve tried to price a build — or even replace a component — you’ve felt this next one.
Costs are rising not just for the obvious pieces like GPUs and memory, but also for motherboards, printed circuit boards, plastics, and other inputs. Why? A perfect storm: the AI build out is devouring supply, while new geopolitical risk premiums stack up across energy and materials routes.
It’s not just gamers — enterprises are paying more for everything from networking gear to power systems. Add to that a world where high bandwidth memory — H B M — has become the limiting reagent for AI accelerators, and the squeeze gets tighter. Recent reporting outlines how H B M scarcity is taxing the broader tech economy, while industry data shows how hot the semiconductor market ran last year — setting the stage for tough supply and demand math in 2026. Translation: delivery dates stretch, bills of materials bloat, and CFOs brace. Sources include Tom’s Hardware in early April, Fortune in March, and WSTS data from March.
Fifth, Meta is getting louder about its homegrown AI silicon — specifically for inference.
In mid March, the company outlined four new MTIA chips on a roughly six month cadence, with claims of big bandwidth and throughput gains from generation to generation, plus a software stack that runs against PyTorch, v L L M, and Triton with minimal friction.
The pitch is simple: move more inference to custom silicon where it’s cost efficient, free scarce GPU supply for training, and iterate faster than traditional chip cycles. Analysts note this won’t replace GPUs outright, but it could meaningfully change how large scale inference is provisioned across Meta’s apps. Competitive stakes are high — Nvidia and AMD are pushing their own inference optimized parts, while cloud players roll out custom silicon of their own. Sources include Tom’s Hardware analyses from March, and industry roundups.
Quick reality check on overlaps and timelines...
For Artemis II, NASA’s day by day briefings confirm the outbound trajectory and the perigee raise burns completed this week, with fresh images from space documented by AP and Axios on Friday, April 3. That puts today’s coverage squarely in mission in progress territory — with a lunar swing by expected in the coming days.
On AI exports, the proposals window opened Wednesday, April 1, giving industry 90 days to respond. This isn’t hypothetical — it’s active procurement of ideas, partnerships, and packaged offerings.
On hardware cost pressure, multiple data points tell the same story: component prices trending up, tight supply in high bandwidth memory, and last year’s huge chip sales compounding demand into 2026. Expect more inventory triage, more long term supply agreements, and a higher total cost of ownership for AI projects this year.
That’s our lineup today — moonbound astronauts beaming back perspective... Washington courting allies with a turnkey American AI stack... Microsoft confronting the unglamorous parts of scale... the hardware market tightening under AI’s weight... and Meta aiming to reshape inference economics with custom chips.
We’ll keep tracking Artemis II milestones over the next 48 to 72 hours — and the early jockeying around those AI export proposals as the 90 day clock starts ticking.
Thanks for listening, and a quick disclaimer: this podcast was generated and curated by AI using my and my kids' cloned voices. If you want to know how I do it or want to do something similar, reach out to me at emad at ai news in 10 dot com — that's ai news in one zero dot com. See you all tomorrow.