New Year, New Rules: AI, CES, Euro Moves

Jan 1, 2026 • 11:00

We kick off 2026 with new AI and privacy laws in China and across U.S. states, a CES preview with Nvidia and AMD, a Texas youth-safety law halted in court, and Europe’s latest monetary and policy shifts. Get the need-to-know analysis and what it means for builders, policymakers, and anyone tracking AI.

Show Notes

Welcome to AI News in 10, your top AI and tech news podcast in about 10 minutes. AI tech is amazing and is changing the world fast; for example, this entire podcast is curated and generated by AI using my and my kids' cloned voices...

Happy New Year... and a big one for AI and tech.

Today's headlines — China flips the switch on a major cybersecurity law that explicitly folds in AI. A wave of U.S. state AI and privacy rules starts today, from Texas to California to the Midwest. CES week is upon us, with Nvidia and AMD primed to frame the agenda. A federal judge halts Texas's app store age verification law hours before it would have taken effect. And across the Atlantic, Bulgaria joins the euro as Cyprus steps in to lead the EU — moves with real implications for fintech and digital policy.

Let's get into it.

[BEGINNING_SPONSORS]

We'll start with China, where it's not just a 'new year, new you'... it's 'new year, new rules.'

As of today — January 1, 2026 — amendments to China's foundational Cybersecurity Law take effect. Why does that matter for AI? Because the revision explicitly backs AI research and development — including computing infrastructure and training data resources — while tightening obligations for network operators and critical infrastructure on security, content, and incident response. In short, more carrots for AI development, and bigger sticks on compliance. China Briefing has the details.

The law's AI language comes with a push to standardize governance. Ethics, risk monitoring, and security supervision are explicitly called out in the statute, and regulators are expected to follow with detailed implementation rules. For firms operating in China, the signal is clear — document your model development, tighten end-to-end data governance, and prepare for stronger enforcement. IAPP has a helpful overview.

Zooming in on data flows — complementary rules published in October establish new certification requirements for certain cross-border personal data transfers, and they also take effect today. That raises the bar for companies that move user data offshore. Expect more paperwork... and more scrutiny... around exports of sensitive or large-scale data sets. Reuters has more.

There's also a patent backdrop worth watching. China's intellectual property office tightens AI patent examination standards starting today, emphasizing genuine technical contributions and detailed disclosure of training data, model architecture, and parameters. Think fewer 'apply model to new scenario' patents — and more demand for substantive algorithmic novelty. Patent firm Mathys and Squire breaks down the changes.

And if you're tracking the 'AI that feels human' debate — late last week Beijing floated draft rules for emotionally interactive, human-like AI. They would require usage warnings, addiction safeguards, algorithm reviews, and strict content controls. They're not in force yet, but they outline where oversight is headed in consumer AI products. Reuters reports.

From Beijing to the U.S., January 1 is also a reset date for a lot of state-level AI and privacy policy.

In Texas, the Responsible AI Governance Act — TRAIGA — enters into force today. It's a different flavor than the EU's risk-tiered approach. Texas focuses on banning intentional misuse — things like AI that incites self-harm or violence, infringes constitutional rights, or creates unlawful deepfakes — and it sets up an AI sandbox and an advisory council. The Texas Attorney General can demand high-level documentation of models, data, and safeguards during investigations, with penalties for violations. Latham and Watkins has a good breakdown.

California, meanwhile, leans into transparency. Lawmakers passed the Transparency in Frontier AI Act — widely known as SB 53 — requiring large AI developers to publish safety frameworks aligned to industry and international standards, protect whistleblowers, and establish a mechanism to report critical safety incidents to the state. Given California's outsized tech footprint, disclosure obligations here tend to ripple nationwide. The Verge has coverage.

California's breach-notification clock also speeds up today. Companies must notify affected individuals within 30 calendar days of discovering a breach, and send a copy of that notice to the Attorney General within 15 days if 500 or more residents are affected. Oklahoma's updated breach law also kicks in today, expanding personal information to include biometric identifiers and requiring Attorney General notice for large incidents. Faster clocks and broader definitions mean incident-response teams need to be buttoned up. JD Supra summarizes the changes.

Two more privacy acts light up as well — Indiana's Consumer Data Protection Act and Kentucky's, both effective January 1, 2026. They largely follow the Virginia model, with thresholds for applicability, a right to opt out of targeted advertising and data sales, impact assessment expectations for higher-risk processing, and Attorney General enforcement without a private right of action. If your data stack touches consumers in those states, those rights and duties apply starting now. ArentFox Schiff has details.

Okay — let's talk CES.

The show opens in a few days, but the agenda is already taking shape. Nvidia is staging a live presentation in Las Vegas on January 5, with CEO Jensen Huang expected to outline what's next in accelerated computing, robotics, and physical AI — the stuff that turns models into machines. The schedule highlights sessions on industrial AI, general-purpose robotics, and even agentic AI for planning and reasoning. Translation: we'll hear about moving beyond chatbot demos into embodied systems and enterprise workflows.

AMD is also headlining. Dr. Lisa Su delivers the official CES opening keynote the evening of January 5, with AMD signaling updates across cloud, enterprise, edge, and devices — so watch for data center silicon news alongside AI PCs and automotive compute. CES is where these two companies frame the year's silicon story, and this time the through line is clear — more performance per watt for generative workloads, and more edge AI for robotics, cars, and local inference.

If you want a guide to the livestreams, Engadget has rounded up how to watch Nvidia's keynote and what to expect — a blend of cutting-edge AI, robotics, simulation, gaming, and creator tools, plus a lot of ecosystem partners on stage. We'll be watching for anything on training at lower cost, inference optimizations, and signals about the pace of agentic features in mainstream software.

[MIDPOINT_SPONSORS]

Back to Texas for a legal curveball. Just before the ball dropped, a federal judge in Austin issued a preliminary injunction blocking the state's App Store Accountability Act from taking effect today. The law would have required app stores to verify users' ages and obtain parental consent before minors could download apps or make in-app purchases — raising a tangle of privacy, speech, and implementation questions for Apple, Google, developers, and families. The court found the law likely violates the First Amendment and isn't the least restrictive way to achieve the state's goals. Texas plans to appeal, but for now, enforcement is paused. Reuters reports.

This is an important signal case. States are experimenting with screen time limits and youth online safety rules, but when statutes compel broad gating of speech or mandate intrusive age checks, constitutional headwinds get strong. Keep an eye on whether narrower measures — like targeted design obligations or clearer parental control tooling — gain favor as these cases wind through appeals.

And finally, Europe opens 2026 with two moves worth flagging for tech and fintech.

First, Bulgaria officially adopts the euro today, becoming the twenty-first member of the currency union. That means ATMs in Sofia are dispensing euros, the lev and the euro will coexist for cash payments through January, and the euro becomes the sole legal tender on February 1. For payment companies and banks, this simplifies cross-border settlement and SEPA integration, and reduces foreign exchange exposure in a fast-digitizing market. The Associated Press has the basics.

The Financial Times notes the transition arrives amid political turbulence and disinformation campaigns... but economically, Bulgaria has long pegged the lev to the euro — so the near-term impact should be orderly, while giving the country a seat at the ECB table.

Second, Cyprus assumes the rotating presidency of the Council of the European Union today. The presidency's published program stresses autonomy, security, and competitiveness, and industry groups are already pressing for common-sense streamlining of digital rules — think harmonized enforcement and clearer timelines, including around the AI Act's 2026 obligations. That's a storyline to watch as Brussels calibrates innovation, compliance, and growth.

Quick recap... China's amended Cybersecurity Law and related data export rules are now live, signaling tougher compliance alongside state-backed AI development. A cluster of U.S. AI and privacy laws turns on — from Texas's TRAIGA to California's transparency and faster breach notices, plus new Indiana and Kentucky privacy regimes. CES 2026 is days away, with Nvidia and AMD anchoring the AI narrative. A federal judge blocks Texas's app store age verification law from taking effect. And in Europe, Bulgaria adopts the euro while Cyprus takes the EU presidency with digital priorities front and center.

It's a packed first day of 2026... and we're just getting started.

Thanks for listening, and a quick disclaimer: this podcast was generated and curated by AI using my and my kids' cloned voices. If you want to know how I do it or want to do something similar, reach out to me at emad at ai news in 10 dot com, that's ai news in one zero dot com. See you all tomorrow.