Sovereign Compute Rises, WhatsApp Faces EU Heat

Dec 5, 2025 • 9:16

Australia and OpenAI plan a massive sovereign AI campus, while the EU probes Meta’s WhatsApp policies and Arm seeds chip design talent in South Korea. Demis Hassabis pegs AGI around 2030 while warning of present-day risks, and U.S. senators push a 30-month freeze on easing advanced AI chip exports to China and other adversaries.

Show Notes

Welcome to AI News in 10, your top AI and tech news podcast in about 10 minutes. AI tech is amazing and is changing the world fast; for example, this entire podcast is curated and generated by AI using my and my kids’ cloned voices...

Here’s what’s new today in AI and tech... Australia lands a major sovereign AI build with OpenAI backing, Europe takes aim at Meta over WhatsApp’s AI policies, Arm pushes a skills program in South Korea, DeepMind’s chief talks timelines and risks for AGI, and in Washington a bipartisan bill seeks to freeze any loosening of AI chip exports to China. Let’s jump in...

[BEGINNING_SPONSORS]

Story one—Australia just secured a flagship piece of sovereign AI infrastructure.

OpenAI and Australian data center operator NEXTDC signed a memorandum of understanding to plan a next-generation AI campus and GPU supercluster at NEXTDC’s S7 site in Eastern Creek, Sydney. Capacity is targeted at roughly 550 megawatts, with investment around seven billion Australian dollars—making it one of the Southern Hemisphere’s largest AI builds. NEXTDC shares jumped as much as 11 percent on the news.

The campus is designed to provide sovereign compute for sensitive government, enterprise, and research workloads—with OpenAI as an initial offtaker, and room to scale. The partners say this sits under OpenAI for Australia, a national initiative that also includes upskilling programs for more than a million workers through partners like Commonwealth Bank, Coles, and Wesfarmers. Reuters first flagged the supercluster plan and the share move, while OpenAI and Australian officials outlined the skills push and the seven-billion-dollar figure.

Here’s why this matters... Australia has been racing to boost digital sovereignty and grid capacity as AI demand soars. A local, high-security supercluster could keep sensitive workloads onshore, and catalyze an ecosystem around skills, startups, and suppliers. The open question is power—projects at this scale put real pressure on the grid. Expect energy debates to intensify as the campus moves from memorandum of understanding to build phase. Backers say it could create thousands of construction and long-term technical roles... if the power and supply chain pieces click into place.

Story two—the European Commission opened a formal antitrust investigation into Meta over new WhatsApp policies that curb third-party AI chatbots.

Meta’s rule change—effective for new providers on October 15, 2025, and for existing ones on January 15, 2026—bars AI providers from using WhatsApp’s Business Solution when the AI chatbot is the primary service, while Meta’s own assistant, Meta AI, remains available. Regulators say that could block rivals from reaching users across the EU, potentially harming competition in fast-growing AI markets. Meta counters that the Business API wasn’t designed for chatbot distribution at scale, and says competing services are accessible elsewhere. The probe covers all twenty-seven member states, with Italy running a separate case. If violations are found, fines could reach up to ten percent of global revenue.

The strategic takeaway... Messaging platforms are becoming distribution highways for AI assistants. If regulators decide platform owners can wall off their own assistants while limiting rivals, it could reshape where and how consumers encounter AI—pushing more activity into app stores, OS-level assistants, or the open web. Conversely, if the EU forces interoperability on WhatsApp’s AI access, we could see a more level playing field for independent chatbots trying to reach Europe’s massive messaging ecosystems.

Story three—Arm is deepening its footprint in Asia’s chip talent pipeline.

The SoftBank-controlled IP giant plans a chip design training facility in South Korea, under a memorandum of understanding with the industry ministry, aiming to train roughly 1,400 high-level chip design specialists. In meetings with South Korea’s leadership, SoftBank’s Masayoshi Son tied the move to surging AI demand—and flagged energy supply as an industry bottleneck. The initiative comes as Seoul courts global partners, and as local champions Samsung and SK Hynix expand AI-related commitments. Reuters reports the program targets weaknesses in South Korea’s fabless and system-semiconductor segments, and is part of a broader national push to lead in AI hardware and skills.

Why it matters... If 2024 and 2025 were about capacity and capital, 2026 and beyond look like a talent race. Arm sits at the center of mobile—and increasingly, data center—designs. By seeding thousands of chip designers, South Korea is shoring up a critical layer in the AI stack. Expect more nations to follow with training centers tied to specific architectures or ecosystems—especially where domestic fabs or packaging capacity are ramping.

[MIDPOINT_SPONSORS]

Story four—Demis Hassabis says AGI is within striking distance... but the risks are here already.

At Axios’ AI+ Summit in San Francisco, the DeepMind CEO reiterated his view that artificial general intelligence—systems matching or exceeding human capabilities—could arrive around 2030. He highlighted world models as a key research direction for the next year: AI systems that build internal simulations of how the world works in order to plan, reason, and act. Hassabis also warned that some dangers aren’t hypothetical, pointing to the risk of AI-enabled cyberattacks on critical infrastructure and putting his own estimate of the probability of catastrophic outcomes at non-zero. The comments follow a year of rapid progress in reasoning-focused models, and growing emphasis on alignment and safety.

What to listen for next... If world models are the next frontier, expect benchmarks—and funding—to shift toward long-horizon planning, simulation fidelity, and cross-modal understanding. You’ll also hear more on defense-first AI—using the same techniques to harden grids, water systems, and health infrastructure against the risks Hassabis flagged.

Story five—on Capitol Hill, a bipartisan group moved to lock in tough AI chip export controls.

Senators Pete Ricketts and Chris Coons introduced legislation dubbed the SAFE CHIPS Act to block the Commerce Department from approving licenses to sell advanced AI chips to China, Russia, Iran, and North Korea for thirty months—and to require congressional notification before any future rule changes. The push reflects deep concern over military end-use of AI hardware, and comes amid debate over how strictly to enforce and update export rules, including those affecting Nvidia’s China-bound products. It’s a rare instance of Congress seeking to limit executive leeway on tech trade with strategic rivals. Reuters broke down the bill’s scope and timeline.

The bigger picture... Between export controls in the U.S., industrial policy in Asia, and Europe’s competition and AI rule-making, geopolitics is increasingly steering the AI stack—from where chips are designed, to how power is procured, to which assistants can run on which platforms. That fragmentation creates new moats for incumbents—and new openings for countries and companies that can align skills, energy, and policy quickly.

Quick recap... Australia’s seven-billion-dollar OpenAI and NEXTDC plan signals the rise of sovereign AI campuses; the EU’s probe of WhatsApp’s AI policy could set ground rules for assistant distribution; Arm’s South Korea training hub underscores the talent race; DeepMind’s Hassabis pegs AGI near 2030 while warning of present-day risks; and U.S. senators want a thirty-month freeze on easing AI chip exports to adversaries. That’s your AI News in 10 for Friday, December 5, 2025.

Thanks for listening, and a quick disclaimer: this podcast was generated and curated by AI using my and my kids’ cloned voices. If you want to know how I do it or want to do something similar, reach out to me at emad at ai news in 10 dot com, that’s ai news in one zero dot com. See you all tomorrow.