Zuckerberg on Trial, India Sets AI Norms

Feb 19, 2026 • 8:36

Governments ratchet up pressure on tech as India unveils Frontier AI commitments and the UN pitches a $3 billion Global AI Fund. We break down Ireland’s GDPR probe into Grok, Brussels’ DSA case against Shein, the UK’s 48-hour takedown rule, and Mark Zuckerberg’s high-stakes testimony in Los Angeles.

Show Notes

Welcome to AI News in 10, your top AI and tech news podcast in about 10 minutes. AI tech is amazing and is changing the world fast. For example, this entire podcast is curated and generated by AI using my and my kids' cloned voices...

It’s Thursday, February 19, 2026.

Today, we’re tracking governments turning the screws on tech while the world’s biggest AI summit sets new norms. We’ve got India’s new Frontier AI Impact Commitments and a United Nations call for a three-billion-dollar Global AI Fund... Ireland opens a formal GDPR investigation into X over Grok’s alleged sexualized deepfakes... Brussels widens enforcement with a Digital Services Act probe into Shein... the United Kingdom rolls out a 48-hour takedown rule for intimate images... and Mark Zuckerberg faces a high-stakes jury trial in Los Angeles over social media’s impact on youth.

Let’s get into it.

Story one — India’s AI Impact Summit is producing concrete pledges.

This morning in New Delhi, India’s IT minister, Ashwini Vaishnaw, unveiled the New Delhi Frontier AI Impact Commitments — a voluntary framework that global frontier-model companies and Indian innovators have agreed to adopt.

Two headline pledges. First, sharing anonymized, aggregated usage insights to inform evidence-based policy on jobs and skills. Second, building stronger multilingual, context-aware evaluations so systems work reliably across India’s diverse languages — and beyond.

The framing is explicit — innovation with equity, and with a Global South lens.

The summit’s global stage also came with a loud call from the United Nations. Secretary-General António Guterres urged governments and industry to back a three-billion-dollar Global AI Fund aimed at helping developing countries build core AI capacity — skills, data access, and affordable compute. He warned that AI’s future can’t be left to the whims of a few billionaires, noting the fund is less than one percent of a single big tech firm’s annual revenue... yet could be pivotal in avoiding a new digital divide.

Meanwhile, the summit itself has had some turbulence. Organizers extended the expo to Saturday, February 21, but closed public access today for security around diplomatic programming, and Bill Gates, once slated to keynote, officially withdrew. The event drew huge crowds earlier in the week, and today's tight logistics underscored both its scale and the stakes.

Story two — a fresh EU privacy probe zeroes in on Grok.

Ireland’s Data Protection Commission — the lead regulator for X in the EU — has opened a large-scale GDPR inquiry into the platform over reports that its Grok chatbot generated or surfaced non-consensual, sexualized deepfake images, including images involving minors.

Regulators will test X’s compliance with core GDPR duties — lawfulness and fairness, data protection by design, and whether a proper Data Protection Impact Assessment was done. If violations are found, fines can reach four percent of global annual turnover — real money for a company of X’s size.

This doesn’t come out of the blue. Multiple European authorities have been pressing the company in recent weeks. While X has reportedly limited some Grok image features, regulators say that’s not enough. For the broader picture, this probe sits alongside Digital Services Act actions elsewhere in the EU, reflecting rising concern over AI-assisted nudification and child safety.

Story three — Brussels opens a new front, this time at a fast-fashion giant.

The European Commission has launched formal proceedings against Shein under the Digital Services Act. The case spans three issues — whether the marketplace adequately prevents illegal products, like child-like sex dolls, from appearing on its platform... whether its addictive design features, such as points and rewards, harm users... and whether it provides required transparency and a non-profiling option for its recommender systems. Penalties under the DSA can reach six percent of global annual revenue.

The Commission’s move follows warnings from multiple member states last year and raises the bar for all very large online platforms to show they can identify and mitigate systemic risks — from illegal goods to dark-pattern engagement loops. Shein says it’s cooperating and has boosted safeguards, but the Commission has wide latitude to impose interim measures if it sees ongoing harm.

Story four — London just put platforms on a 48-hour clock.

The UK government announced it will amend the Crime and Policing Bill to require tech firms to remove non-consensual intimate images — think deepfake nudes and so-called revenge porn — within 48 hours of being flagged, or face fines of up to ten percent of global revenue and, in extreme cases, service blocking in the UK. Enforcement will run through Ofcom under the Online Safety framework.

A crucial wrinkle — report once, remove everywhere. The plan envisions digital fingerprints so images taken down on one platform can be auto-blocked on others if re-uploaded. It also contemplates treating this content on par with child sexual abuse material and terrorism in enforcement priority — another sign that policymakers see AI-amplified intimate-image abuse as an urgent national safety issue.

Story five — Mark Zuckerberg on the stand in Los Angeles.

In a bellwether jury trial over youth mental health harms, Meta’s CEO was questioned about Instagram’s impact on teens, the company’s efforts to keep under-13 users off the platform, and internal metrics on time spent. He told the court it’s very difficult to enforce age limits because many users lie about their age, and he defended decisions around features like beauty filters as issues of free expression rather than profit. The plaintiff — a 20-year-old — alleges that heavy use of Instagram and YouTube during childhood contributed to serious mental health struggles.

Trial exhibits and press coverage highlight competing narratives. Meta points to investments in teen safety, while internal documents and expert testimony argue the platforms were engineered to maximize engagement among young users. One revelation drawing attention — reporting that Zuckerberg overruled well-being experts to allow certain appearance-altering filters back on Instagram years ago — evidence plaintiffs say shows a pattern of choices that prioritized growth over guardrails. However the jury rules, this case could shape strategy across thousands of related suits and spur further regulation of design features — not just content — on social apps.

Quick recap — India pushed a Global South-led AI agenda, and the UN put a three-billion-dollar funding target on the table... Ireland’s watchdog turned up the heat on Grok under GDPR... Brussels opened a DSA case testing Shein’s marketplace and recommender design... the UK set a 48-hour takedown standard with massive penalties... and Mark Zuckerberg’s testimony in Los Angeles underscored how courts are zeroing in on platform design and youth safety.

We’ll keep watching how these threads reshape AI and tech in practice over the next few weeks.

Thanks for listening, and a quick disclaimer: this podcast was generated and curated by AI using my and my kids' cloned voices. If you want to know how I do it or want to do something similar, reach out to me at emad at ai news in 10 dot com (that's ai news in one zero dot com). See you all tomorrow.