<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The AI Policy Newsletter : The AI Policy Newsletter ]]></title><description><![CDATA[Every week I gather the latest updates on AI policy/regulation and send it to your inbox]]></description><link>https://alisarmustafa.substack.com/s/the-ai-policy-newsletter</link><image><url>https://substackcdn.com/image/fetch/$s_!z-tV!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png</url><title>The AI Policy Newsletter : The AI Policy Newsletter </title><link>https://alisarmustafa.substack.com/s/the-ai-policy-newsletter</link></image><generator>Substack</generator><lastBuildDate>Fri, 01 May 2026 00:00:05 GMT</lastBuildDate><atom:link href="https://alisarmustafa.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Alisar Mustafa]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[alisarmustafa@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[alisarmustafa@substack.com]]></itunes:email><itunes:name><![CDATA[Alisar Mustafa]]></itunes:name></itunes:owner><itunes:author><![CDATA[Alisar Mustafa]]></itunes:author><googleplay:owner><![CDATA[alisarmustafa@substack.com]]></googleplay:owner><googleplay:email><![CDATA[alisarmustafa@substack.com]]></googleplay:email><googleplay:author><![CDATA[Alisar Mustafa]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The AI Policy Newsletter 04.16.2026]]></title><description><![CDATA[CA Governor Signs Executive Order Expanding AI Use with New Safeguards, South Africa Proposes National AI 
Policy and New Regulators, OpenAI Calls for Industrial Policy to Manage AI Economic Impact]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-04162026</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-04162026</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Thu, 16 Apr 2026 16:01:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, CA Governor signed an  executive order that  <a href="https://www.gov.ca.gov/wp-content/uploads/2026/03/3.30-FINAL-Trusted-AI-Procurement-EO-N-5-26.pdf">advanced</a> trusted AI procurement standards, while an appeals court <a href="https://www.pbs.org/newshour/politics/appeals-court-decides-against-anthropic-in-latest-round-of-its-ai-battle-with-the-trump-administration">ruled</a> against Anthropic in its dispute with the Trump administration. Additionally, two assistant city attorneys in New Orleans <a href="https://www.fox8live.com/2026/04/01/two-new-orleans-assistant-city-attorneys-resign-after-using-ai-federal-court-filing/">resigned</a> after submitting a federal court filing that used AI-generated content.</p><p>&#127757; <strong>Globally</strong>, South Africa <a href="https://www.reuters.com/legal/litigation/south-africa-unveils-draft-ai-policy-proposes-new-institutions-incentives-2026-04-10/">introduced</a> a draft AI policy proposing new institutions and infrastructure investments. 
China <a href="https://www.geopolitechs.org/p/china-issues-new-rules-on-ai-ethics">issued</a> rules on AI ethics review and governance support, while the United Kingdom <a href="https://www.pymnts.com/cpi-posts/uk-courts-anthropic-amid-transatlantic-tensions-over-ai-policy/">sought</a> to expand ties with Anthropic amid transatlantic policy tensions.</p><p>&#128126; <strong>In Industry</strong>, OpenAI released an industrial policy framework <a href="https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf">outlining</a> approaches for managing AI&#8217;s economic impact, while Anthropic <a href="https://www.techbuzz.ai/articles/anthropic-launches-pac-to-shape-ai-policy-ahead-of-midterms">launched</a> a PAC to support candidates aligned with its policy priorities. xAI <a href="https://www.theguardian.com/technology/2026/apr/09/elon-musk-xai-colorado-lawsuit">filed</a> a lawsuit challenging Colorado&#8217;s AI law on constitutional grounds. OpenAI <a href="https://cdn.openai.com/pdf/9886ee82-5a5e-4f0a-acaa-a47b01b0a68e/Child-Protection-Blueprint.pdf">released</a> a child protection blueprint, developed with organizations such as the Attorney General Alliance, that proposes legal updates, reporting standards, and safety-by-design safeguards. Meanwhile, OpenAI <a href="https://www.bbc.com/news/articles/clyd032ej70o">paused</a> a UK data centre investment, citing energy costs and regulatory uncertainty.</p><p>&#127963;&#65039;</p><p><strong><a href="https://www.gov.ca.gov/wp-content/uploads/2026/03/3.30-FINAL-Trusted-AI-Procurement-EO-N-5-26.pdf">California Issues New Executive Order on Responsible AI Use</a></strong></p><p>California Governor Gavin Newsom signed Executive Order N-5-26, outlining a comprehensive approach to expanding the use of generative AI in state government while strengthening safeguards around privacy, security, and civil liberties. The order directs state agencies to develop new procurement standards requiring AI vendors to disclose safety measures related to harmful content, bias, and civil rights risks. It also calls for expanded use of vetted AI tools across government operations, workforce training, and public-facing services, including a pilot to streamline access to state resources. Additionally, the order emphasizes transparency measures such as watermarking AI-generated content and introduces oversight of federal supply chain risk designations to ensure they do not improperly restrict state procurement.</p><p><strong><a href="https://www.pbs.org/newshour/politics/appeals-court-decides-against-anthropic-in-latest-round-of-its-ai-battle-with-the-trump-administration">Appeals Court Rules Against Anthropic in Ongoing AI Dispute</a></strong></p><p>A federal appeals court in Washington, D.C. ruled against Anthropic, declining to block the Pentagon from blacklisting the company as a supply chain risk amid a dispute over the use of its AI systems in military and surveillance contexts.
The decision contrasts with a separate ruling from a California federal court, which had ordered the Trump administration to remove the designation. The appeals court acknowledged potential harm to Anthropic but said the company had not sufficiently demonstrated the extent of that harm to justify intervention.</p><p><strong><a href="https://www.fox8live.com/2026/04/01/two-new-orleans-assistant-city-attorneys-resign-after-using-ai-federal-court-filing/">Two New Orleans Attorneys Resign After AI-Generated Court Filing Errors</a></strong></p><p>Two assistant city attorneys in New Orleans resigned after using AI tools, including ChatGPT, in a federal court filing that resulted in sanctions. A judge found that multiple legal citations in the brief were fabricated, likely due to AI &#8220;hallucinations,&#8221; and that the attorneys failed to verify the information before submission. The court imposed fines on both the primary attorney and a supervising attorney, emphasizing professional responsibility. Following the incident, the city implemented a new AI policy requiring disclosure of AI use and annual compliance certification.</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.reuters.com/legal/litigation/south-africa-unveils-draft-ai-policy-proposes-new-institutions-incentives-2026-04-10/">South Africa Proposes National AI Policy with New Regulatory Bodies</a></strong></p><p>South Africa unveiled a draft national AI policy aimed at positioning the country as a leader in artificial intelligence while addressing ethical and economic risks. The proposal includes plans to establish new institutions such as a National AI Commission, an AI Ethics Board, and a dedicated regulatory authority to oversee compliance and handle AI-related harms. The policy also introduces incentives like tax breaks and grants to encourage private-sector innovation, particularly among startups. 
In addition, it emphasizes investment in local supercomputing and digital infrastructure, while raising concerns about reliance on foreign technology providers and potential data security risks.</p><p><strong><a href="https://www.geopolitechs.org/p/china-issues-new-rules-on-ai-ethics">China Introduces New AI Ethics Review Framework</a></strong></p><p>China released new rules establishing a comprehensive system for ethical review of artificial intelligence, requiring organizations to assess risks before launching AI projects. The framework introduces a multi-layered governance model, combining internal ethics committees, external service centers, and government-led expert reviews for high-risk applications such as autonomous decision-making and public opinion influence. Companies must submit detailed documentation, including risk assessments and mitigation plans, and are subject to ongoing monitoring after deployment. The policy also expands oversight beyond content control to include labor protections, algorithmic fairness, transparency, and privacy, signaling a shift toward a more structured and enforceable AI governance system.</p><p><strong><a href="https://www.pymnts.com/cpi-posts/uk-courts-anthropic-amid-transatlantic-tensions-over-ai-policy/">UK Courts Anthropic Amid Transatlantic Tensions Over AI Policy</a></strong></p><p>The United Kingdom is seeking to expand the presence of Anthropic as part of efforts to strengthen its AI sector. Officials are considering proposals including expanding the company&#8217;s London office and exploring a dual stock market listing in the UK and US. The outreach comes as Anthropic faces tensions with the United States Department of Defense, which designated the company a potential supply-chain risk, alongside criticism from Donald Trump over limits on military AI use. 
UK officials, including Sadiq Khan, have promoted London as a base for AI development, while the government advances plans for a &#163;40 million research lab.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf">OpenAI Releases Industrial Policy Report to Manage AI Economic Disruption and Share Gains Broadly</a></strong></p><p>OpenAI outlines proposals for managing the transition to advanced AI and potential superintelligence through industrial policy focused on broad participation and risk mitigation. The document highlights expected economic shifts, including changes to work, productivity, and income distribution, and calls for updated policy tools such as workforce support, tax reforms, and expanded access to AI systems. It proposes mechanisms including worker participation in AI deployment, portable benefits, public investment funds, and infrastructure expansion. The paper also emphasizes safety systems, auditing, and governance frameworks to address risks such as misuse, misalignment, and institutional disruption, while encouraging public-private collaboration and ongoing policy development.</p><p><strong><a href="https://www.techbuzz.ai/articles/anthropic-launches-pac-to-shape-ai-policy-ahead-of-midterms">Anthropic Launches PAC to Shape AI Policy Ahead of Midterms</a></strong></p><p>Anthropic has created a political action committee, AnthroPAC, to support candidates aligned with its AI policy priorities ahead of the 2026 midterm elections. The PAC will provide campaign contributions to influence policymaking on issues such as AI regulation, liability, and data use. The move marks a shift from traditional lobbying to direct political engagement. Anthropic, known for its focus on AI safety, joins other technology companies that operate PACs. 
The initiative comes as governments debate AI legislation in areas including deepfakes, accountability, and model governance, and reflects the industry&#8217;s growing direct role in shaping the regulatory frameworks that govern it.</p><p><strong><a href="https://www.theguardian.com/technology/2026/apr/09/elon-musk-xai-colorado-lawsuit">xAI Sues Colorado Over AI Regulation Law</a></strong></p><p>xAI, founded by Elon Musk, has filed a lawsuit against Colorado challenging a new law regulating artificial intelligence systems. The law, set to take effect in June, introduces requirements aimed at preventing algorithmic discrimination in sectors including employment, healthcare, and housing. xAI argues the legislation violates First Amendment protections by restricting how AI systems generate content. The lawsuit seeks to block enforcement and have the law declared unconstitutional.</p><p><strong><a href="https://cdn.openai.com/pdf/9886ee82-5a5e-4f0a-acaa-a47b01b0a68e/Child-Protection-Blueprint.pdf">OpenAI Releases a Policy Blueprint on Protecting Children in the Age of Generative AI</a></strong></p><p>A policy blueprint developed with participation from organizations including the Attorney General Alliance and the National Center for Missing &amp; Exploited Children outlines approaches to address AI-enabled child sexual exploitation. It identifies risks such as synthetic abuse material and scaled grooming, while proposing three priority areas: modernizing state laws to cover AI-generated content, improving reporting and coordination standards for platforms, and implementing safety-by-design safeguards in AI systems. Recommended measures include clearer liability for attempted offenses, structured reporting to support investigations, and layered detection systems combining automated tools and human oversight.
The framework emphasizes coordination among government, industry, and law enforcement.</p><p><strong><a href="https://www.bbc.com/news/articles/clyd032ej70o">OpenAI Pauses UK Data Centre Project</a></strong></p><p>OpenAI has paused its planned &#8220;Stargate UK&#8221; data centre project, citing concerns over high energy costs and regulatory uncertainty in the United Kingdom. The project, part of a broader investment initiative, aimed to expand AI infrastructure and computing capacity through partnerships with Nvidia and Nscale. OpenAI stated it would proceed when conditions better support long-term infrastructure investment. The decision follows ongoing concerns about energy pricing and policy clarity, including rules around AI training data. The company indicated it will continue investing in research, talent, and public sector AI deployment in the UK.</p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming
Events</strong></p><ul><li><p><strong><a href="https://www.far.ai/events/event-list/berkeley-controlconf-2026">Berkeley ControlConf 2026</a></strong> | Berkeley, California | 18&#8211;19 April 2026</p></li><li><p><strong><a href="https://iclr.cc/Conferences/2026">International Conference on Learning Representations (ICLR) 2026</a></strong> | Rio de Janeiro, Brazil | 23&#8211;27 April 2026</p></li><li><p><strong><a href="https://iapp.org/conference/iapp-global-summit">AI Safety Research Accelerator Week</a></strong> | Cambridge, UK | 18&#8211;25 May 2026</p></li><li><p><strong><a href="https://iapp.org/conference/iapp-aigg-europe">IAPP AI Governance Global Europe 2026</a></strong> | Dublin, Ireland | 1&#8211;4 June 2026</p></li><li><p><strong><a href="https://coairesearch.org/aitc-2026/">AI Transparency Conference 2026</a></strong> | Nuremberg, Germany | 5&#8211;6 June 2026</p></li><li><p><strong><a href="https://foresight.org/events/vision-weekend-uk-2026/">Vision Weekend United Kingdom 2026</a></strong> | London, United Kingdom | 5&#8211;7 June 2026</p></li><li><p><strong><a href="https://luma.com/AI_Safety_NZ?ai_safety_com">AI Safety New Zealand Conference 2026</a></strong> | Christchurch, New Zealand | 4 July 2026</p></li><li><p><strong><a href="https://www.aisafetyforum.au">AI Safety Forum 2026</a></strong> | Sydney, Australia | 7&#8211;8 July 2026</p></li><li><p><strong><a href="https://taigr-workshop.com">Technical AI Governance Research (TAIGR) 2026</a></strong> | Seoul, South Korea | 10&#8211;11 July 2026</p></li><li><p><strong><a href="https://www.trustcon.net/event/trustcon2026/summary">5th Annual Trustcon 2026</a></strong> | San Francisco, California | 20&#8211;22 July 2026</p></li><li><p><strong><a href="https://cyber.fsi.stanford.edu/trust-and-safety-research-conference">Trust &amp; Safety Research Conference</a></strong> | Stanford, California | 1&#8211;2
October 2026</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 04.03.2026]]></title><description><![CDATA[US Proposes Federal AI Policy Framework Limiting State Laws, EU Delays AI Act and Adds New Restrictions, Court Blocks US Action Against Anthropic]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-04032026</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-04032026</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Fri, 03 Apr 2026 18:00:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><strong>Programming Note:</strong> The AI Policy Newsletter is shifting from weekly to twice a month. This will give me more time to focus on deeper analysis and other writing projects.</em></p><p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, the White House <a href="https://www.politico.com/news/2026/03/20/white-house-releases-ai-policy-blueprint-for-congress-00837354">released</a> an AI policy blueprint for Congress focused on accelerating innovation and limiting regulatory barriers, while Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez <a href="https://www.axios.com/2026/03/25/sanders-aoc-data-center-moratorium-bill">proposed</a> a moratorium on new AI data centers over environmental concerns. 
At the state level, Washington <a href="https://www.kuow.org/stories/washington-passes-new-ai-laws-to-crack-down-on-misinformation-protect-minors">passed</a> laws requiring chatbot transparency, misinformation tracking, and stronger protections for minors, while Colorado <a href="https://governorsoffice.colorado.gov/governor/news/colorado-artificial-intelligence-policy-workgroup-delivers-unanimous-support-revised-policy">advanced</a> a framework ensuring disclosure and human review in AI-driven decisions. Pennsylvania lawmakers also <a href="https://penncapital-star.com/technology-information/pa-senate-passes-bill-regulating-ai-chatbots-used-by-children-and-teens/">passed</a> a bill imposing safeguards on AI companion services used by minors. Meanwhile, leadership changes continued as David Sacks <a href="https://www.theverge.com/policy/902140/david-sacks-out-ai-crypto-czar">stepped down </a>as the White House AI and Crypto Czar.</p><p>&#127757; <strong>Globally</strong>, Australia <a href="https://www.defence.gov.au/sites/default/files/2026-03/Policy-Settings-for-Responsible-Use-of-Artificial-Intelligence-in-Defence-[OFFICIAL].pdf">moved forward</a> with its AI policy direction (including restructuring oversight efforts), while the EU <a href="https://www.europarl.europa.eu/news/en/press-room/20260323IPR38829/artificial-intelligence-act-delayed-application-ban-on-nudifier-apps">delayed</a> parts of its AI Act, adding requirements like watermarking AI-generated content and banning &#8220;nudifier&#8221; apps. The UK <a href="https://www.gov.uk/government/publications/report-and-impact-assessment-on-copyright-and-artificial-intelligence">examined</a> copyright reforms to address tensions between AI training and content ownership. 
Meanwhile, a Dutch court <a href="https://www.techpolicy.press/dutch-court-orders-x-grok-to-stop-aigenerated-sexual-abuse-content/">ordered</a> X and its Grok AI to halt non-consensual sexualized content generation, setting a major precedent for platform accountability.</p><p>&#128126; <strong>In Industry</strong>, a federal court <a href="https://theaiinsider.tech/2026/03/27/federal-court-blocks-u-s-government-action-against-anthropic-in-ai-policy-dispute/">ruled</a> in favor of Anthropic, blocking U.S. government restrictions on the company, while the Trump administration <a href="https://www.reuters.com/business/trump-name-zuckerberg-ellison-huang-tech-panel-wsj-reports-2026-03-25/">appointed</a> leaders from Nvidia, Meta, and other tech giants to a top science and technology advisory council. Wikipedia <a href="https://www.theverge.com/tech/901461/wikipedia-ai-generated-article-ban">banned</a> AI-generated articles due to accuracy concerns, and GitHub <a href="https://github.blog/news-insights/company-news/updates-to-github-copilot-interaction-data-usage-policy/">announced</a> it will use Copilot user interaction data for AI training by default (with an opt-out), reflecting a broader shift toward stricter AI governance and increased reliance on real-world data.</p><p>&#127963;&#65039;</p><p><strong>United States</strong></p><p><strong><a href="https://www.politico.com/news/2026/03/20/white-house-releases-ai-policy-blueprint-for-congress-00837354">White House Proposes Federal AI Framework Prioritizing Preemption of State Laws and Limited Regulation</a></strong></p><p>The White House released a policy blueprint urging Congress to establish a federal AI framework that preempts many state-level regulations. The proposal calls for overriding state laws governing AI model development and limiting liability for how companies&#8217; systems are used by third parties, while preserving state authority over child protection laws. It discourages creating new federal AI regulatory agencies and instead promotes a &#8220;minimally burdensome&#8221; approach. The plan includes age-gating requirements, parental control tools, workforce training initiatives, and data collection on AI-driven job disruption. It also recommends codifying a pledge requiring companies to cover energy costs for data centers. Congressional negotiations remain ongoing, with debate over federal preemption and states&#8217; rights.</p><p><strong><a href="https://www.axios.com/2026/03/25/sanders-aoc-data-center-moratorium-bill">Sanders and AOC Propose Nationwide Moratorium on AI Data Center Construction</a></strong></p><p>Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez introduced legislation to halt all new AI data center construction until federal safeguards are enacted.
The proposed Artificial Intelligence Data Center Moratorium Act would pause projects nationwide until Congress passes laws addressing worker protections, consumer safety, environmental impacts, and civil rights. The restriction could remain in place for years given ongoing disagreements over AI regulation. The proposal follows earlier outreach by Sanders to AI companies and emerges amid broader debates in Congress over regulatory approaches. It also highlights growing concerns over the energy demands of AI infrastructure, which are influencing positions across party lines ahead of upcoming elections.</p><p><strong><a href="https://www.kuow.org/stories/washington-passes-new-ai-laws-to-crack-down-on-misinformation-protect-minors">Washington Enacts AI Disclosure and Safety Laws Targeting Misinformation and Youth Protections</a></strong></p><p>Washington state passed two AI laws requiring companies like OpenAI and Anthropic to implement new transparency and safety measures. House Bill 1170 mandates that AI-generated or substantially modified content include traceable watermarks or metadata to address misinformation, applying to platforms with over 1 million users. House Bill 2225 imposes rules on chatbot interactions, requiring disclosure that users are interacting with AI at regular intervals and prohibiting bots from presenting themselves as human. Additional restrictions apply to minors, including more frequent disclosures, bans on manipulative engagement tactics, and prohibitions on sexually explicit content. 
The law also requires safeguards for identifying and responding to conversations involving self-harm or mental health risks.</p><p><strong><a href="https://governorsoffice.colorado.gov/governor/news/colorado-artificial-intelligence-policy-workgroup-delivers-unanimous-support-revised-policy">Colorado AI Workgroup Advances Consumer Protection Framework for Automated Decision Systems</a></strong></p><p>Colorado&#8217;s Artificial Intelligence Policy Workgroup unanimously approved a policy framework governing the use of AI and automated decision-making technology (ADMT) in high-impact consumer decisions. Convened by Governor Jared Polis, the group included representatives from industry, healthcare, education, and consumer organizations. The framework requires that individuals be notified when AI is used in consequential decisions affecting their lives. If outcomes are adverse, consumers must be given access to explanations, the ability to correct inaccurate data, and the option to request human review. The proposal aims to balance consumer protections with continued innovation and will undergo further refinement during the state legislative process.</p><p><strong><a href="https://penncapital-star.com/technology-information/pa-senate-passes-bill-regulating-ai-chatbots-used-by-children-and-teens/">Pennsylvania Senate Passes Bill Imposing Safeguards on AI Chatbots for Minors</a></strong></p><p>The Pennsylvania Senate passed a bipartisan bill regulating AI companion chatbots, introducing safeguards focused on protecting minors. The legislation requires chatbot operators to prevent content that promotes self-harm, suicide, or violence and to provide users with resources such as crisis hotline information. Companies must publish safety protocols publicly. When operators know or suspect a user is a minor, chatbots must disclose they are not human, repeat reminders periodically, and encourage breaks.
Additional restrictions prohibit sexually explicit content and interactions for minors and require warnings about suitability for users under 18. The state Attorney General would enforce the law, with civil penalties up to $10,000 for violations. The bill now moves to the state House.</p><p><strong><a href="https://www.theverge.com/policy/902140/david-sacks-out-ai-crypto-czar">David Sacks Steps Down as White House AI and Crypto Advisor After Hitting Service Limit</a></strong></p><p>David Sacks is no longer serving as the White House&#8217;s Special Advisor on AI and Crypto after reaching the 130-day limit for special government employees. Sacks stated he will now focus on co-chairing the President&#8217;s Council of Advisors on Science and Technology (PCAST), where he will provide recommendations on a broader range of technology issues. During his tenure, he played a central role in shaping AI policy and had direct access to senior leadership.</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.europarl.europa.eu/news/en/press-room/20260323IPR38829/artificial-intelligence-act-delayed-application-ban-on-nudifier-apps">European Parliament Proposes Updates to AI Act with Delays and New Restrictions</a></strong></p><p>The European Parliament adopted a proposal to amend the Artificial Intelligence Act, introducing adjusted timelines and additional provisions. The plan delays implementation of rules for high-risk AI systems, setting dates of December 2027 for listed systems and August 2028 for those covered by sectoral laws. Requirements for watermarking AI-generated content would apply by November 2026. The proposal includes a ban on &#8220;nudifier&#8221; applications that create non-consensual explicit images of identifiable individuals. It also introduces flexibility for companies, including reduced regulatory overlap for products already governed by sector-specific laws and extended support measures for small mid-cap enterprises. 
Negotiations with the Council will determine the final version of the legislation.</p><p><strong><a href="https://www.defence.gov.au/sites/default/files/2026-03/Policy-Settings-for-Responsible-Use-of-Artificial-Intelligence-in-Defence-[OFFICIAL].pdf">Australia Sets Policy Framework for Responsible Military Use of AI</a></strong></p><p>Australia&#8217;s Department of Defence released policy settings outlining the responsible use of artificial intelligence across military operations. The framework applies to all stages of the AI lifecycle, from development to decommissioning, and emphasizes compliance with domestic and international law. It establishes three core requirements: lawful use, adherence to values-based principles, and risk-based controls. The policy requires human accountability for all AI-enabled decisions and mandates oversight through designated officials. It also highlights the need for transparency, reliability, and mitigation of bias and harm. Governance mechanisms and oversight bodies will monitor implementation, with updates planned as technology and policy evolve.</p><p><strong><a href="https://www.gov.uk/government/publications/report-and-impact-assessment-on-copyright-and-artificial-intelligence">UK Examines Copyright and AI as Data Use Debate Intensifies</a></strong></p><p>UK government reports highlight growing challenges around how copyright law applies to artificial intelligence, particularly as AI systems rely on large volumes of data that may include protected works. The analysis emphasizes the economic importance of both the AI sector and the creative industries, while noting increasing tensions over data access and ownership. Policymakers are assessing multiple approaches, including maintaining current rules, expanding data mining permissions, or strengthening licensing requirements. Each option presents trade-offs between enabling AI innovation and ensuring protections for rights holders. 
The reports conclude that further evidence and consultation are needed before determining a final policy direction.</p><p><strong><a href="https://www.techpolicy.press/dutch-court-orders-x-grok-to-stop-aigenerated-sexual-abuse-content/">Dutch Court Orders Halt to AI-Generated Non-Consensual Sexual Content</a></strong></p><p>A Dutch court ordered X and its AI system Grok to stop generating non-consensual sexualized imagery and child sexual abuse material, imposing fines of &#8364;100,000 per day for violations. The ruling requires xAI to prevent the creation and distribution of such content and mandates that X suspend Grok&#8217;s functionality where violations persist. The court found that safeguards claimed by the company were insufficient, citing evidence that harmful content could still be generated. It held that the platform, not just users, bears responsibility for preventing unlawful outputs under data protection and civil law. The decision also aligns with broader European regulatory and enforcement actions targeting AI-generated content.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://theaiinsider.tech/2026/03/27/federal-court-blocks-u-s-government-action-against-anthropic-in-ai-policy-dispute/">Federal Court Blocks U.S. Action Against Anthropic in AI Dispute</a></strong></p><p>A federal judge ruled in favor of Anthropic, ordering the U.S. government to withdraw its designation of the company as a &#8220;supply-chain risk&#8221; and stop efforts to cut federal ties. Judge Rita F. Lin found the government&#8217;s actions likely violated legal protections, granting an injunction. 
The dispute arose after Anthropic sought to limit the use of its AI systems in areas such as autonomous weapons and mass surveillance, prompting government action.</p><p><strong><a href="https://www.reuters.com/business/trump-name-zuckerberg-ellison-huang-tech-panel-wsj-reports-2026-03-25/">Trump Appoints Tech Leaders to Advisory Council on AI and Technology</a></strong></p><p>President Donald Trump appointed several technology executives to the President&#8217;s Council of Advisors on Science and Technology (PCAST), including Mark Zuckerberg, Larry Ellison, and Jensen Huang. Additional members include Sergey Brin and Lisa Su. The council will advise on artificial intelligence policy and broader technology issues, with potential expansion to 24 members. The appointments reflect efforts to involve industry leaders in shaping national AI strategy, as the administration emphasizes investment, reduced regulatory barriers, and global competition in artificial intelligence.</p><p><strong><a href="https://www.theverge.com/tech/901461/wikipedia-ai-generated-article-ban">Wikipedia Bans AI-Generated Articles While Allowing Limited Use</a></strong></p><p>Wikipedia updated its guidelines to prohibit editors from writing or rewriting articles using artificial intelligence, citing conflicts with core content policies. The rule applies to the English-language site and follows ongoing efforts to address AI-generated content. Editors may still use AI tools for limited purposes, such as basic copy editing or translating articles, provided no new content is introduced and accuracy can be verified. The policy also clarifies that writing style alone is insufficient to identify AI use. 
The change follows community discussions and complements existing measures to remove low-quality AI-generated entries and improve content oversight.</p><p><strong><a href="https://github.blog/news-insights/company-news/updates-to-github-copilot-interaction-data-usage-policy/">GitHub Updates Copilot Data Policy to Use User Interactions for AI Training</a></strong></p><p>GitHub announced that, starting April 24, interaction data from Copilot Free, Pro, and Pro+ users will be used to train and improve its AI models unless users opt out. This data includes inputs, outputs, code snippets, and contextual information from user sessions. Business and Enterprise users are excluded from the change. The company stated that incorporating real-world interaction data aims to improve code suggestions, accuracy, and bug detection. Users can manage participation through privacy settings, and previously saved preferences will remain in effect. Data may be shared with affiliated companies, including Microsoft, but not with third-party AI providers.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-04032026?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-04032026?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a 
href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://sight.ieee.org/event/21st-european-dependable-computing-conference-edcc-2026/">1st International Workshop on AI Safety and Security (AI-SS 2026)</a></strong> | Canterbury, UK | 7 April 2026</p></li><li><p><strong><a href="https://artificial-intelligence.hspioa.org/GSAI-S6">Global Summit on Artificial Intelligence (GSAI) 2026</a> </strong>| Online | 7 - 8 April 2026</p></li><li><p><strong><a href="https://www.mobility-ai-conference.com">The Mobility + AI Conference 2026</a> </strong>| Ottobrunn, Germany | 14-15 April 2026</p></li><li><p><strong><a href="https://www.far.ai/events/event-list/berkeley-controlconf-2026">Berkeley ControlConf 2026</a> </strong>| Berkeley, California | 18&#8211;19 April 2026</p></li><li><p><strong><a href="https://iclr.cc/Conferences/2026">International Conference on Learning Representations (ICLR) 2026</a></strong> | Rio de Janeiro, Brazil | 23 - 27 April 2026</p></li><li><p><strong><a href="https://iapp.org/conference/iapp-global-summit">AI Safety Research Accelerator Week</a></strong> | Cambridge, UK | 18-25 May 2026 (Deadline 4 April)</p></li><li><p><strong><a href="https://taigr-workshop.com">Technical AI Governance Research (TAIGR) 2026</a> </strong>| Seoul, South Korea | 10-11 July 2026 (submission deadline 24 April)</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 03.17.2026]]></title><description><![CDATA[US Senators Push for AI Commission, EU Targets AI Copyright
Rules, Anthropic Fights Pentagon Ban]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-03172026</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-03172026</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Wed, 18 Mar 2026 01:01:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, senators <a href="https://www.wsj.com/articles/senators-propose-federal-ai-commission-days-after-anthropic-ban-319a5d70?gaa_at=eafs&amp;gaa_n=AWEtsqfSPgKIWKTRRCYKYdaHYkDNVBF1vzb-S0TqNmv6bhZgnXj74PsbgLgMoRt1g3Q=&amp;gaa_ts=69b36397&amp;gaa_sig=j8Oh6LCQrTTluYaYJ-W_xvNCanjeRYnZn9AMrCk-sslDEZvMqsJb6hMfHcFPw9OvwOAAb0y8lh6TMtTE2j_Epg==">proposed</a> establishing a federal artificial intelligence commission to study national AI governance and recommend regulatory frameworks following tensions between the government and AI firms. At the state level, Colorado lawmakers <a href="https://www.cpr.org/2026/03/09/colorado-ai-health-care-guardrails-bills/">advanced</a> bills restricting AI use in mental health therapy and requiring human review in insurance coverage decisions, while Minnesota legislators <a href="https://www.cbsnews.com/minnesota/news/kids-ban-chatbots-regulation-artificial-intelligence-bills-minnesota/">considered</a> measures including banning minors from using chatbots and limiting AI-driven pricing and surveillance tools. 
Meanwhile, the Florida House <a href="https://www.wctv.tv/2026/03/11/florida-house-passes-bill-regulate-ai-data-centers-measure-heads-senate/">passed</a> legislation requiring technology companies to cover utility costs for AI data centers and restricting where such facilities can be built.</p><p>&#127757; <strong>Globally</strong>, members of the European Parliament <a href="https://www.europarl.europa.eu/news/en/agenda/plenary-news/2026-03-09/9/protecting-copyrighted-creative-work-in-the-age-of-ai">called</a> for stronger protections for copyrighted works used in generative AI training, including transparency requirements and compensation for rights holders. Australia <a href="https://www.esafety.gov.au/newsroom/media-releases/online-safety-codes-introduce-real-world-protections-for-children-online">introduced</a> new online safety codes requiring platforms and AI chatbots to prevent children from accessing harmful content and to implement age-assurance measures. Indonesia <a href="https://en.antaranews.com/news/408083/indonesia-sets-rules-for-ai-digital-tech-use-in-education">issued</a> national guidelines governing the use of AI and digital technology in education with stricter limits for younger students, while India&#8217;s Goa state <a href="https://timesofindia.indiatimes.com/city/goa/goa-shares-draft-ai-policy-with-stakeholders-for-inputs/articleshow/129230660.cms">circulated</a> a draft AI policy seeking stakeholder feedback on digital innovation, governance, and the development of a local-language AI model.</p><p>&#128126; <strong>In Industry</strong>, Anthropic <a href="https://thehill.com/policy/technology/5781022-anthropic-challenges-pentagon-designation/">sought</a> an emergency court stay after the Pentagon designated its products a supply-chain risk following a dispute over AI safeguards and the company warned the decision could harm its business operations. 
The Meta Oversight Board also <a href="https://www.oversightboard.com/news/board-calls-for-new-rules-on-deceptive-ai-during-conflicts/">urged</a> Meta to strengthen labeling and detection systems for deceptive AI-generated content shared during conflicts. Separately, journalist Julia Angwin <a href="https://www.techbuzz.ai/articles/grammarly-hit-with-class-action-suit-over-ai-identity-theft">filed</a> a class-action lawsuit against Grammarly alleging the company used the identities of journalists without consent to present AI-generated writing suggestions as expert advice.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. Your help supports my writing</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong>United States</strong></p><p><strong><a href="https://www.wsj.com/articles/senators-propose-federal-ai-commission-days-after-anthropic-ban-319a5d70?gaa_at=eafs&amp;gaa_n=AWEtsqfSPgKIWKTRRCYKYdaHYkDNVBF1vzb-S0TqNmv6bhZgnXj74PsbgLgMoRt1g3Q=&amp;gaa_ts=69b36397&amp;gaa_sig=j8Oh6LCQrTTluYaYJ-W_xvNCanjeRYnZn9AMrCk-sslDEZvMqsJb6hMfHcFPw9OvwOAAb0y8lh6TMtTE2j_Epg==">Senators Propose Federal AI Commission</a></strong></p><p>A bipartisan group of U.S. 
senators introduced legislation to establish a federal commission to study and recommend regulations for artificial intelligence, following recent disputes between the Pentagon and AI company Anthropic. The proposed commission would examine the risks, national security implications, and economic impacts of AI technologies, and develop policy recommendations for Congress and federal agencies. Lawmakers said the initiative aims to coordinate federal oversight as AI systems expand across government, defense, and commercial sectors.</p><p><strong><a href="https://www.cpr.org/2026/03/09/colorado-ai-health-care-guardrails-bills/">Colorado Advances Bills Regulating AI Use in Health Care</a></strong></p><p>Lawmakers in Colorado advanced two bills regulating artificial intelligence in the medical system. House Bill 1195 would prohibit licensed therapists from using AI chatbots to communicate directly with patients or generate treatment plans without review by a qualified professional, while allowing limited uses such as administrative support with patient consent for recorded sessions. House Bill 1139 would restrict health insurers from relying solely on AI to deny coverage and require review by a qualified clinician. The measure also mandates disclosure when AI tools are used in care and prohibits chatbots from presenting themselves as licensed professionals. Both bills passed committee and could be amended as stakeholders review implementation concerns.</p><p><strong><a href="https://www.cbsnews.com/minnesota/news/kids-ban-chatbots-regulation-artificial-intelligence-bills-minnesota/">Minnesota Lawmakers Propose AI Rules Including Ban on Chatbots for Minors</a></strong></p><p>Lawmakers in Minnesota introduced a package of artificial intelligence bills that would ban individuals under 18 from using AI chatbots and require businesses to disclose when customers are interacting with AI systems. 
The proposals would also prevent health insurers from using AI to determine medical necessity and prohibit algorithmic &#8220;surveillance pricing&#8221; that generates different prices for consumers. The measures have drawn bipartisan support but face opposition from the technology industry, which warns the restrictions could limit beneficial AI tools as states move to regulate the technology amid limited federal oversight.</p><p><strong><a href="https://www.wctv.tv/2026/03/11/florida-house-passes-bill-regulate-ai-data-centers-measure-heads-senate/">Florida House Passes Bill to Regulate AI Data Centers</a></strong></p><p>The Florida House of Representatives passed legislation to regulate artificial intelligence data centers, sending the measure to the Florida Senate for consideration. The bill would require technology companies to pay the full cost of utilities used by their facilities and restrict where data centers can be built to prevent infrastructure burdens on residents. Lawmakers said the proposal aims to address potential impacts before large-scale facilities are established in the state. The legislation is supported by Ron DeSantis and forms part of his proposed &#8220;AI Bill of Rights,&#8221; which outlines principles for managing artificial intelligence development and deployment in Florida.</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.europarl.europa.eu/pdfs/news/expert/2026/3/briefing/20260202BRI32902/20260202BRI32902_en.pdf">EU Lawmakers Call for Copyright Protections in AI Training</a></strong></p><p>Members of the European Parliament are preparing to propose measures to protect copyrighted creative works from exploitation by generative artificial intelligence systems. A report from the parliament&#8217;s Legal Affairs Committee examines how AI training uses copyrighted material and calls for greater transparency and compensation for rights holders. 
Lawmakers propose requiring AI providers to disclose which copyrighted works were used to train their models and ensure that creators receive remuneration. The proposal also suggests allowing rights holders to opt out of having their content used for AI training. In addition, legislators urge the European Commission to support licensing markets for copyrighted material and strengthen protections for the press and news media sector.</p><p><strong><a href="https://www.esafety.gov.au/newsroom/media-releases/online-safety-codes-introduce-real-world-protections-for-children-online">Australia Enforces Online Safety Codes to Protect Children</a></strong></p><p>Australia implemented new Age-Restricted Material Codes requiring technology companies to introduce safeguards that limit children&#8217;s exposure to harmful online content across platforms such as app stores, search engines, social media, gaming services, pornography sites, and AI-powered chatbots. The rules require services to apply age-assurance measures before granting access to material involving pornography, high-impact violence, self-harm, or suicide-related content. AI companion chatbots capable of generating explicit or harmful material must confirm users are 18 or older before allowing access. Search engines must blur explicit results by default for minors and direct searches related to self-harm to support services. The eSafety Commissioner will oversee compliance, with penalties of up to $49.5 million for violations.</p><p><strong><a href="https://en.antaranews.com/news/408083/indonesia-sets-rules-for-ai-digital-tech-use-in-education">Indonesia Issues National Guidelines for AI Use in Education</a></strong></p><p>Indonesia issued a joint ministerial decree regulating the use of digital technology and artificial intelligence across its education system, from early childhood education to universities. 
Coordinating Minister for Human Development and Culture Pratikno said the policy, signed by seven cabinet ministers, establishes guidance on the minimum age for technology use, permitted applications, and recommended duration of use at different education levels. The rules impose stricter limits on younger students, including controls on screen time and accessible digital content. At primary and secondary levels, students will not be allowed to use instant AI tools that automatically generate answers, although AI applications designed specifically for educational activities may still be permitted. The government said the policy aims to guide responsible technology use in schools.</p><p><strong><a href="https://timesofindia.indiatimes.com/city/goa/goa-shares-draft-ai-policy-with-stakeholders-for-inputs/articleshow/129230660.cms">Goa Circulates Draft AI Policy for Stakeholder Consultation</a></strong></p><p>The government of Goa circulated a draft artificial intelligence policy to stakeholders for feedback as part of efforts to shape the state&#8217;s AI strategy. The IT department said consultations will continue with industry representatives, academic institutions, and central government officials in the coming weeks. The proposal centers on the Goa AI Mission 2027, which aims to promote digital innovation, expand digital governance, and support the development of a start-up ecosystem. Stakeholders discussed priority sectors including public service delivery, tourism, agriculture, healthcare, education, and e-governance. 
Participants also explored developing a Konkani large language model in collaboration with national initiatives such as Bhashini to support language inclusion in AI systems.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://thehill.com/policy/technology/5781022-anthropic-challenges-pentagon-designation/">Anthropic Seeks Court Block on Pentagon Supply Chain Risk Label</a></strong></p><p>Anthropic filed a request with a U.S. appeals court seeking an emergency stay of the Pentagon&#8217;s designation of the company as a supply chain risk. The filing argues that Defense Secretary Pete Hegseth issued the designation without statutory authority, legal explanation, or a formal agency process, and that the action followed a dispute between the company and the Defense Department over restrictions on using AI for mass surveillance and autonomous weapons. The designation prompted the administration of President Donald Trump to direct federal agencies to stop using Anthropic&#8217;s products. Anthropic said the decision could harm its business relationships and revenue, and it also filed a separate lawsuit in a federal court in California challenging the action.</p><p><strong><a href="https://www.oversightboard.com/news/board-calls-for-new-rules-on-deceptive-ai-during-conflicts/">Meta Oversight Board Urges New Rules on AI-Generated Content in Conflicts</a></strong></p><p>Meta&#8217;s Oversight Board called on the company to establish new rules for identifying and labeling AI-generated content during conflicts, citing a case involving a video posted during the 2025 Israel&#8211;Iran war. The Board overturned Meta&#8217;s earlier decision not to label the video, determining it should have carried a &#8220;High Risk AI&#8221; label due to the potential to mislead users during a crisis. 
The Board recommended that Meta create a separate policy framework for AI-generated content, expand the use of provenance standards such as Content Credentials, and strengthen detection systems for AI-generated media. It also urged clearer labeling protocols, greater transparency about penalties for failing to disclose altered content, and improved coordination with fact-checkers during conflict-related misinformation events.</p><p><strong><a href="https://www.techbuzz.ai/articles/grammarly-hit-with-class-action-suit-over-ai-identity-theft">Grammarly Faces Class-Action Lawsuit Over AI &#8220;Expert Review&#8221; Feature</a></strong></p><p>Journalist Julia Angwin filed a class-action lawsuit against Grammarly&#8217;s parent company, Superhuman Platform Inc., alleging the company used her identity and those of other journalists without consent in its AI &#8220;Expert Review&#8221; feature. The tool generated writing suggestions that appeared under the names and credentials of real professionals, creating the impression that they had personally reviewed user content. The feature was disabled after reporting revealed the practice. The lawsuit claims the company violated privacy and publicity rights by using individuals&#8217; names and reputations for commercial purposes without permission. 
The case seeks class-action status for other affected experts.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-03172026?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-03172026?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://unidir.org/event/global-conference-on-ai-security-and-ethics-2026/">UNIDIR: Women, Peace &amp; Security</a></strong> | Geneva | 18-19 March 2026</p></li><li><p><strong><a href="https://www.eventbrite.com/e/brand-ai-safety-summit-asia-from-singapore-tickets-1977289551275">Brand &amp; AI Safety Summit Asia</a></strong> | Singapore | 19 March 2026</p></li><li><p><strong><a href="https://apartresearch.com/sprints/ai-control-hackathon-2026-03-20-to-2026-03-22">AI Control Hackathon</a></strong><a href="https://apartresearch.com/sprints/ai-control-hackathon-2026-03-20-to-2026-03-22"> </a>| Virtual | 20 - 22 March 2026</p></li><li><p><strong><a href="https://www.worldmunday.com/ai-governance-and-public-policy-summit/">Global Youth Summit on AI Governance and Public 
Policy (Policy Pitch Summit)</a></strong> | Online | 23 March 2026</p></li><li><p><strong><a href="https://www.rsaconference.com">RSA Conference 2026</a></strong> | San Francisco, CA | March 23&#8211;26, 2026</p></li><li><p><strong><a href="https://irmuk.co.uk/dg-ai-governance-conference/">Data Governance &amp; AI Governance Conference Europe</a> </strong>| London, UK | March 23&#8211;27, 2026</p></li><li><p><strong><a href="https://www.conferencealert.com/eventdetail/1717253">GSAIET 2026 - Global Summit on Artificial Intelligence and Emerging Technologies</a> </strong>| Florida, USA | 27 March 2026</p></li><li><p><strong><a href="https://www.far.ai/events/event-list/technical-innovations-for-ai-policy-tiap-conference-2026">Technical Innovations for AI Policy (TIAP) Conference 2026</a></strong> | Washington, DC | 30 - 31 March 2026</p></li><li><p><strong><a href="https://www.oecd-events.org/e/ai-wips-2026">AI WIPS OECD</a></strong> | Virtual | 30 March - 1 April 2026</p></li><li><p><strong><a href="https://iapp.org/conference/iapp-global-summit">IAPP Global Summit 2026</a></strong> | Washington, DC | 30 March - 2 April 2026</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 03.11.2026]]></title><description><![CDATA[Anthropic Refuses Pentagon Request to Remove AI Safeguards, China Prioritizes AI and Quantum in New Five-Year Plan, OpenAI Revises Pentagon Contract After Surveillance Backlash]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-03112026</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-03112026</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Thu, 12 Mar 2026 00:38:17
GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, Anthropic <a href="https://www.reuters.com/sustainability/society-equity/anthropic-rejects-pentagons-requests-ai-safeguards-dispute-ceo-says-2026-02-26/">refused</a> a request from the United States Department of Defense to remove safeguards preventing its AI from being used for mass domestic surveillance or fully autonomous weapons, and the Pentagon later <a href="https://www.reuters.com/technology/pentagon-informed-anthropic-it-is-supply-chain-risk-official-says-2026-03-05/">designated</a> the company a supply-chain risk, restricting contractors from using its tools, and ordered federal agencies to stop using Anthropic products following the dispute, while U.S. officials also <a href="https://www.reuters.com/world/us-mulls-new-rules-ai-chip-exports-including-requiring-investments-by-foreign-2026-03-05/">considered</a> new export rules that could require foreign buyers of advanced AI chips to invest in American data-center infrastructure.</p><p>&#127757; <strong>Globally</strong>, China <a href="https://thequantuminsider.com/2026/03/05/chinas-new-five-year-plan-specifically-targets-quantum-leadership-and-ai-expansion/">prioritized</a> AI and quantum computing in its new five-year plan, while the United Kingdom <a href="https://www.pv-magazine.com/2026/03/03/uk-government-open-call-for-evidence-ai-energy-policy-data/">sought</a> expert input on datasets for AI in the energy system and <a href="https://www.ft.com/content/e759a712-eddf-4bdd-b4d9-03446f8c6545">delayed</a> decisions on AI copyright rules after pushback from creative industries.
Kazakhstan <a href="https://timesca.com/kazakhstan-adopts-pragmatic-ai-regulation-in-financial-sector/">opted</a> to govern AI in finance through existing regulations while investing in infrastructure and supervisory technology.</p><p>&#128126; <strong>In Industry</strong>, OpenAI <a href="https://www.bbc.com/news/articles/c3rz1nd0egro">revised</a> its agreement with the United States Department of Defense after backlash over its military partnership and added language prohibiting domestic surveillance of Americans. Meta Platforms <a href="https://www.wsj.com/business/meta-to-open-up-whatsapp-to-rival-ai-chatbots-for-a-fee-following-eu-objections-10330fe0">agreed</a> to reopen the WhatsApp Business API to rival AI chatbots in Europe following scrutiny from the European Commission, while the family of a victim in Canada&#8217;s Tumbler Ridge school shooting <a href="https://www.theguardian.com/world/2026/mar/10/tumbler-ridge-shooting-victim-sues-openai-canada">sued</a> OpenAI over the attacker&#8217;s prior violent interactions with ChatGPT.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. 
Your help supports my writing</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong>United States</strong></p><p><strong><a href="https://www.reuters.com/sustainability/society-equity/anthropic-rejects-pentagons-requests-ai-safeguards-dispute-ceo-says-2026-02-26/">Pentagon Dispute with Anthropic Over AI Safeguards</a></strong></p><p>Anthropic said it would not remove safeguards from its AI systems despite a request from the United States Department of Defense tied to a contract worth up to $200 million. The dispute concerns Anthropic&#8217;s refusal to allow its models to be used for fully autonomous weapons targeting or mass domestic surveillance. CEO Dario Amodei stated that current frontier AI systems are not reliable enough for life-or-death targeting decisions and raised concerns about AI-driven population profiling in surveillance contexts. The Pentagon warned it could terminate the contract, classify Anthropic as a supply chain risk, and invoke the Defense Production Act if the safeguards remain in place.</p><p><strong><a href="https://www.reuters.com/technology/pentagon-informed-anthropic-it-is-supply-chain-risk-official-says-2026-03-05/">Pentagon Labels Anthropic a Supply-Chain Risk</a></strong></p><p>The United States Department of Defense designated Anthropic a &#8220;supply-chain risk,&#8221; prohibiting government contractors from using the company&#8217;s AI systems in Pentagon-related projects. The decision follows a dispute over Anthropic&#8217;s refusal to remove safeguards preventing its models from being used for autonomous weapons targeting or mass domestic surveillance. 
CEO Dario Amodei said the restriction applies only to Pentagon contracts and that the company plans to challenge the designation in court. Despite the ban, Anthropic&#8217;s Claude model can still be used in non-defense projects. The company and the Pentagon have discussed potential arrangements to continue limited cooperation without removing the safeguards.</p><p><strong><a href="https://www.reuters.com/world/us-mulls-new-rules-ai-chip-exports-including-requiring-investments-by-foreign-2026-03-05/">US Considers New Conditions for AI Chip Exports</a></strong></p><p>Officials in the United States are considering new regulations governing exports of advanced artificial intelligence chips, including potential requirements for foreign companies to invest in U.S. AI data centers or provide security assurances to obtain large shipments. A draft framework reviewed by Reuters suggests exports exceeding 200,000 chips could be tied to such commitments, while smaller installations might still require licenses and monitoring by exporters such as Nvidia and Advanced Micro Devices. The approach would differ from policies under former President Joe Biden that broadly exempted close allies. The United States Department of Commerce confirmed internal discussions on new rules but said they would differ from the previous administration&#8217;s framework.</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://thequantuminsider.com/2026/03/05/chinas-new-five-year-plan-specifically-targets-quantum-leadership-and-ai-expansion/">China Five-Year Plan Targets AI and Quantum Technologies</a></strong></p><p>China placed artificial intelligence and quantum computing at the center of its new five-year national development plan, outlining expanded investment in advanced computing infrastructure, research, and industrial deployment. 
The blueprint includes an &#8220;AI+ action plan&#8221; to integrate AI across sectors such as manufacturing, healthcare, logistics, and robotics, alongside efforts to build large-scale computing clusters for model training. It also calls for developing scalable quantum computers and constructing an integrated space-earth quantum communication network. Additional priorities include humanoid robotics, 6G communications, brain-machine interfaces, and nuclear fusion. The strategy aims to strengthen domestic innovation capacity amid ongoing technology tensions with the United States.</p><p><strong><a href="https://www.pv-magazine.com/2026/03/03/uk-government-open-call-for-evidence-ai-energy-policy-data/">UK Seeks Expert Input on AI and Energy Data Policy</a></strong></p><p>The United Kingdom government launched an open call for evidence seeking input from artificial intelligence and energy experts on datasets that could support AI applications in the energy sector. The initiative aims to identify high-impact datasets and barriers to data access that affect AI development for electricity grid optimization, renewable generation forecasting, heat pump deployment, and industrial energy efficiency. The consultation asks stakeholders to outline what data is needed, how it could be structured and maintained, and which users would benefit. Responses will help inform policy on improving data availability and governance to support a more digitalized and efficient national energy system.</p><p><strong><a href="https://www.ft.com/content/e759a712-eddf-4bdd-b4d9-03446f8c6545">UK Delays Decision on AI Copyright Policy</a></strong></p><p>The United Kingdom government plans to delay changes to copyright rules affecting artificial intelligence after proposals allowing AI companies easier access to copyrighted content faced opposition from creative industries. 
Officials are reconsidering policy options following a public consultation in which responses did not support the government&#8217;s initial proposals, including a model allowing AI firms to use online content unless creators opted out. Media companies, publishers, and film producers argued the approach could undermine intellectual property protections. The government will gather additional evidence and extend consultations, with new legislation on AI and copyright unlikely to appear in the upcoming parliamentary session.</p><p><strong><a href="https://timesca.com/kazakhstan-adopts-pragmatic-ai-regulation-in-financial-sector/">Kazakhstan Adopts AI Approach Within Existing Financial Regulations</a></strong></p><p>Kazakhstan is applying existing financial regulations to artificial intelligence rather than introducing new AI-specific rules for the sector. According to the National Bank of Kazakhstan, about 75% of the country&#8217;s banks already use AI for functions such as credit underwriting, fraud detection, and anti-money-laundering monitoring, with most planning to expand deployment. Regulators have emphasized technological neutrality, maintaining that current cybersecurity, data protection, and risk management rules apply regardless of whether decisions are made by humans or algorithms. Authorities are also investing in domestic data centers and supervisory technology systems to support AI oversight and allow fintech firms to test algorithms in controlled environments.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://www.bbc.com/news/articles/c3rz1nd0egro">OpenAI Revises Pentagon AI Agreement After Backlash</a></strong></p><p>OpenAI revised its agreement with the United States Department of Defense following criticism over the company&#8217;s involvement in classified military operations. 
CEO Sam Altman said the updated contract adds language explicitly prohibiting the use of OpenAI systems for domestic surveillance of U.S. persons and requires additional authorization before intelligence agencies can deploy the technology. The agreement emerged after a dispute between the Pentagon and Anthropic over safeguards related to mass surveillance and fully autonomous weapons. The partnership also prompted public backlash, including increased uninstall rates for the ChatGPT app following the announcement.</p><p><strong><a href="https://www.wsj.com/business/meta-to-open-up-whatsapp-to-rival-ai-chatbots-for-a-fee-following-eu-objections-10330fe0">Meta Allows Rival AI Chatbots on WhatsApp After EU Scrutiny</a></strong></p><p>Meta Platforms said it will allow competing artificial intelligence chatbots to access the WhatsApp Business API in Europe for a fee following concerns raised by the European Commission during an antitrust investigation. Regulators had warned that Meta&#8217;s earlier policy restricting third-party AI assistants from interacting with users on the platform could harm competition in the market for general-purpose AI chatbots. In response, the company said it will support rival AI services on the platform for the next 12 months while the Commission continues its review of the policy and assesses its impact on the market.</p><p><strong><a href="https://www.theguardian.com/world/2026/mar/10/tumbler-ridge-shooting-victim-sues-openai-canada">Canada Lawsuit Targets OpenAI After School Shooting Linked to ChatGPT Interactions</a></strong></p><p>The family of a student critically injured in the February 10 mass shooting in Tumbler Ridge, British Columbia, has filed a lawsuit against OpenAI, alleging the company could have prevented the attack. The shooting killed eight people, including five students aged 12&#8211;13. According to reports, the 18-year-old attacker discussed violent scenarios involving guns with ChatGPT months earlier. 
OpenAI&#8217;s automated review system flagged the interactions and suspended the account but determined they did not indicate &#8220;credible or imminent&#8221; plans and did not alert authorities. The lawsuit claims ChatGPT was released without sufficient safety testing and seeks damages. Canadian officials are now pressing OpenAI to review past flagged cases and strengthen reporting standards.</p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://indico.un.org/event/1022678/">STEPAN Webinar : Advancing Responsible AI</a></strong><a href="https://indico.un.org/event/1022678/"> </a>| Virtual | 12 March 2026</p></li><li><p><strong><a href="https://www.eventbrite.com/e/brand-ai-safety-summit-asia-from-singapore-tickets-1977289551275">Brand &amp; AI Safety Summit Asia</a></strong> | Singapore | 19 March 2026</p></li><li><p><strong><a 
href="https://apartresearch.com/sprints/ai-control-hackathon-2026-03-20-to-2026-03-22">AI Control Hackathon</a></strong><a href="https://apartresearch.com/sprints/ai-control-hackathon-2026-03-20-to-2026-03-22"> </a>| Virtual | 20 - 22 March 2026</p></li><li><p><strong><a href="https://www.worldmunday.com/ai-governance-and-public-policy-summit/">Global Youth Summit on AI Governance and Public Policy (Policy Pitch Summit)</a></strong> | Online | 23 March 2026</p></li><li><p><strong><a href="https://www.rsaconference.com">RSA Conference 2026</a></strong> | San Francisco, CA | March 23&#8211;26, 2026</p></li><li><p><strong><a href="https://irmuk.co.uk/dg-ai-governance-conference/">Data Governance &amp; AI Governance Conference Europe</a></strong> | London, UK | March 23&#8211;27, 2026</p></li><li><p><strong><a href="https://www.far.ai/events/event-list/technical-innovations-for-ai-policy-tiap-conference-2026">Technical Innovations for AI Policy (TIAP) Conference 2026</a></strong> | Washington, DC | 30 - 31 March 2026</p></li><li><p><strong><a href="https://www.oecd-events.org/e/ai-wips-2026">AI WIPS OECD</a></strong> | Virtual | 30 March - 1 April 2026</p></li><li><p><strong><a href="https://iapp.org/conference/iapp-global-summit">IAPP Global Summit 2026</a></strong> | Washington, DC | 30 March - 2 April 2026</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">LinkedIn</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 03.02.2026]]></title><description><![CDATA[Anthropic-Backed Group Runs $300K Pro-State AI Ad Blitz, 91 Nations Sign New Delhi AI Declaration, Anthropic Unveils Responsible Scaling Policy 3.0]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-03022026</link><guid 
isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-03022026</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Tue, 03 Mar 2026 01:33:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://www.ducoexperts.com/resources/reports/2026-AI-threat-forecast" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DTLZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e086a8f-f598-451a-b698-95d314991b56_1928x958.png 424w, https://substackcdn.com/image/fetch/$s_!DTLZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e086a8f-f598-451a-b698-95d314991b56_1928x958.png 848w, https://substackcdn.com/image/fetch/$s_!DTLZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e086a8f-f598-451a-b698-95d314991b56_1928x958.png 1272w, https://substackcdn.com/image/fetch/$s_!DTLZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e086a8f-f598-451a-b698-95d314991b56_1928x958.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DTLZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e086a8f-f598-451a-b698-95d314991b56_1928x958.png" width="1456" height="723" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3e086a8f-f598-451a-b698-95d314991b56_1928x958.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:723,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2211725,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://www.ducoexperts.com/resources/reports/2026-AI-threat-forecast&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://alisarmustafa.substack.com/i/189720036?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e086a8f-f598-451a-b698-95d314991b56_1928x958.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DTLZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e086a8f-f598-451a-b698-95d314991b56_1928x958.png 424w, https://substackcdn.com/image/fetch/$s_!DTLZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e086a8f-f598-451a-b698-95d314991b56_1928x958.png 848w, https://substackcdn.com/image/fetch/$s_!DTLZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e086a8f-f598-451a-b698-95d314991b56_1928x958.png 1272w, https://substackcdn.com/image/fetch/$s_!DTLZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e086a8f-f598-451a-b698-95d314991b56_1928x958.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, the administration <a href="https://thehill.com/policy/technology/5751114-us-signs-ai-declaration/">signed</a> a non-binding AI declaration at India&#8217;s AI Impact Summit, with OSTP Director Michael Kratsios highlighting a shift toward opportunity-focused AI policy and international AI exports, while in New Jersey a Public First Action ad campaign backed by Anthropic <a href="https://www.nytimes.com/2026/02/23/technology/ai-pac-ad-blitz.html">urged</a> lawmakers to oppose limits on state-level AI protections. 
In Ohio, bipartisan lawmakers introduced HB 524, <a href="https://woub.org/2026/02/25/suicide-prevention-group-backs-bill-regulate-artificial-intelligence-ohio/">supported</a> by the Ohio Suicide Prevention Foundation, to impose penalties on AI systems that suggest self-harm or violence and to require accountability for harmful outputs.</p><p>&#127757; <strong>Globally</strong>, 91 countries <a href="https://www.pib.gov.in/PressReleasePage.aspx?PRID=2231208&amp;v=4&amp;reg=3&amp;lang=2">endorsed</a> the New Delhi Declaration at the AI Impact Summit, establishing voluntary cooperation across seven pillars including secure AI, workforce development, and energy-efficient systems, as Vietnam&#8217;s central bank <a href="https://marketech-apac.com/vietnams-central-bank-tightens-ai-rules-in-banking-mandates-customer-notice/">proposed</a> rules requiring disclosure and human review for AI use in banking. South Africa <a href="https://iafrica.com/south-africa-to-finalize-national-ai-policy-by-2027-seeking-middle-ground-between-innovation-and-regulation/">outlined</a> plans to finalize its national AI policy by 2027 with frameworks addressing oversight, data localization, and sector coordination, while Australia <a href="https://babl.ai/australia-scraps-planned-ai-advisory-body-after-15-month-recruitment-process-shifts-to-new-ai-safety-institute/">canceled</a> a planned AI advisory body and redirected funding to launch a new AI Safety Institute in 2026.</p><p>&#128126; <strong>In Industry</strong>, Anthropic <a href="https://www.anthropic.com/news/responsible-scaling-policy-v3">released</a> Version 3.0 of its Responsible Scaling Policy introducing a Frontier Safety Roadmap and recurring public Risk Reports, while African competition regulators <a href="https://africa.businessinsider.com/local/markets/mark-zuckerbergs-meta-faces-antitrust-probe-across-21-african-markets-over-whatsapp/f42y276">opened</a> an antitrust probe into Meta&#8217;s WhatsApp Business AI terms across 
21 markets. Global privacy authorities <a href="https://ico.org.uk/media2/fb1br3d4/20260223-iewg-joint-statement-on-ai-generated-imagery.pdf">issued</a> a joint statement calling for safeguards, transparency, and enforcement coordination to address harms from AI-generated imagery and non-consensual content.</p><p>&#127963;&#65039;</p><p><strong><a href="https://www.nytimes.com/2026/02/23/technology/ai-pac-ad-blitz.html">Public First Action Launches Ad Campaign Supporting AI Regulation</a></strong></p><p style="text-align: justify;">Public First Action, an advocacy group backed by Anthropic, began a $300,000 advertising campaign in northern New Jersey urging voters to oppose federal legislation that would block states from enacting AI consumer protection laws. The ads reference AI-generated scams and call on Representative Josh Gottheimer to support state-level safeguards. The campaign is part of a broader national effort ahead of the midterm elections and follows Anthropic&#8217;s previously announced $20 million contribution to the group. 
Public First Action was formed to counter rival political efforts aligned with leaders and investors connected to OpenAI.</p><p><strong><a href="https://thehill.com/policy/technology/5751114-us-signs-ai-declaration/">U.S. Signs Non-Binding AI Declaration at India Summit</a></strong></p><p style="text-align: justify;">The United States joined 88 other countries and organizations in endorsing a non-binding declaration following the AI Impact Summit in New Delhi, committing to a shared global vision for artificial intelligence development. The document outlines seven pillars, including expanding access to AI resources, supporting economic and social development, and promoting energy-efficient systems. It emphasizes security, voluntary industry measures, and policy frameworks that support innovation, but does not reference AI safety provisions.</p><p><strong><a href="https://woub.org/2026/02/25/suicide-prevention-group-backs-bill-regulate-artificial-intelligence-ohio/">Ohio Bill Would Penalize AI Platforms That Suggest Self-Harm</a></strong></p><p style="text-align: justify;">The Ohio Suicide Prevention Foundation is backing House Bill 524, bipartisan legislation introduced by Representatives Christine Cockley and Ty Mathews that would impose penalties on entities whose AI models suggest users harm themselves or others. Foundation CEO Tony Coder cited cases in which AI tools were used to write suicide notes and raised concerns about chatbots advising minors to withhold suicidal thoughts from parents. Ohio Department of Health data show 1,777 suicide deaths in 2023, with suicide the second leading cause of death for children ages 10&#8211;14. 
The bill faces potential challenges, including industry opposition and a federal executive order limiting state-level AI regulation.</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.pib.gov.in/PressReleasePage.aspx?PRID=2231208&amp;v=4&amp;reg=3&amp;lang=2">AI Impact Summit Concludes with Adoption of New Delhi Declaration</a></strong></p><p style="text-align: justify;">The AI Impact Summit 2026 concluded in New Delhi with 91 countries and international organizations endorsing the non-binding New Delhi Declaration on AI Impact, outlining a shared framework for global cooperation on artificial intelligence. The declaration is structured around seven pillars: democratizing AI resources, economic growth and social good, secure and trusted AI, AI for science, access for social empowerment, human capital development, and resilient and efficient AI systems. It introduces voluntary initiatives including the Charter for the Democratic Diffusion of AI, Global AI Impact Commons, Trusted AI Commons, and an International Network of AI for Science Institutions. The declaration emphasizes international collaboration, respect for national sovereignty, energy-efficient infrastructure, and workforce development in AI.</p><p><strong><a href="https://marketech-apac.com/vietnams-central-bank-tightens-ai-rules-in-banking-mandates-customer-notice/">Vietnam Central Bank Proposes AI Disclosure and Risk Controls for Banking Sector</a></strong></p><p style="text-align: justify;">Vietnam&#8217;s State Bank of Vietnam has issued a draft circular introducing new requirements for the use of artificial intelligence in banking and payment services. The proposal would require banks and e-wallet providers to notify customers before using AI tools such as chatbots, automated hotlines, and virtual assistants in direct interactions. 
Institutions must also disclose the use of AI for emotion recognition or biometric classification and clearly label AI-generated content. The draft prohibits using AI to target customer vulnerabilities when marketing high-risk financial products and grants customers the right to request human review of AI-driven decisions. The regulations are expected to take effect in March, with existing systems given until September 2027 to comply.</p><p><strong><a href="https://iafrica.com/south-africa-to-finalize-national-ai-policy-by-2027-seeking-middle-ground-between-innovation-and-regulation/">South Africa Sets 2027 Timeline to Finalize National AI Policy Framework</a></strong></p><p style="text-align: justify;">South Africa&#8217;s government plans to finalize its national artificial intelligence policy in the 2026&#8211;2027 financial year, with publication in the Government Gazette expected in March for a 60-day public comment period. The draft framework, structured around 14 pillars including education, infrastructure, ethics, safety, privacy, and industry collaboration, will be reviewed by the economic cluster ministerial council and a cabinet committee before adoption. The communications ministry outlined plans to establish stakeholder forums, a regulators forum, ICT sandboxes, and coordination road maps through 2027. 
The policy emphasizes human oversight, accountability for AI-related harm, measures addressing deepfakes and misinformation, and the development of locally representative datasets and language models.</p><p><strong><a href="https://babl.ai/australia-scraps-planned-ai-advisory-body-after-15-month-recruitment-process-shifts-to-new-ai-safety-institute/">Australia Cancels Planned AI Advisory Body, Establishes AI Safety Institute</a></strong></p><p style="text-align: justify;">Australia&#8217;s federal government has discontinued plans for a permanent AI Advisory Body after a 15-month recruitment process that narrowed 270 applicants to 12 nominees at a reported cost of approximately AUD $188,000. The advisory body, announced in 2024, was intended to guide national AI policy and develop guardrails. The government has instead shifted to establishing a new AI Safety Institute, backed by AUD $29.9 million in funding and expected to launch in early 2026 within the Department of Industry, Science and Resources. The institute will coordinate expertise across government, industry, and international partners, while the government continues consultations with external experts outside a standalone advisory structure.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://www.anthropic.com/news/responsible-scaling-policy-v3">Anthropic Releases Responsible Scaling Policy Version 3.0 with Expanded Risk Reporting</a></strong></p><p style="text-align: justify;">Anthropic published Version 3.0 of its Responsible Scaling Policy (RSP), updating the voluntary framework it uses to manage potential catastrophic AI risks. The revised policy separates company-specific commitments from broader industry recommendations and introduces a Frontier Safety Roadmap outlining goals across security, alignment, safeguards, and policy. 
It also formalizes periodic Risk Reports, to be published every three to six months, detailing model capabilities, threat models, and mitigation measures, with provisions for external expert review in certain cases. The update reflects lessons from earlier AI Safety Levels (ASLs), including the activation of ASL-3 safeguards in 2025, and addresses challenges related to ambiguous capability thresholds and evolving regulatory environments.</p><p><strong><a href="https://africa.businessinsider.com/local/markets/mark-zuckerbergs-meta-faces-antitrust-probe-across-21-african-markets-over-whatsapp/f42y276">COMESA Launches Antitrust Probe into Meta&#8217;s WhatsApp AI Terms Across 21 African Markets</a></strong></p><p style="text-align: justify;">The Common Market for Eastern and Southern Africa Competition and Consumer Commission (COMESA CCC) has opened an investigation into changes introduced in October 2025 by Meta Platforms Ireland Limited to the WhatsApp Business Solution Terms. Regulators are examining whether the revised terms restrict third-party AI providers&#8217; access to WhatsApp while maintaining full functionality for Meta&#8217;s own AI tools, potentially constituting an abuse of dominance across the 21-member bloc, which includes Kenya, Egypt, Ethiopia, Uganda, and Zambia. 
The commission described the action as a fact-finding process and invited stakeholder submissions by 16 March 2026.</p><p style="text-align: justify;"><strong><a href="https://ico.org.uk/media2/fb1br3d4/20260223-iewg-joint-statement-on-ai-generated-imagery.pdf">Global Privacy Authorities Issue Joint Statement on AI-Generated Imagery and Privacy Risks</a></strong></p><p style="text-align: justify;">A coalition of data protection and privacy authorities, coordinated by the Global Privacy Assembly&#8217;s International Enforcement Cooperation Working Group, issued a joint statement addressing risks from AI systems that generate realistic images and videos of identifiable individuals without consent. The signatories emphasized that organizations developing and deploying AI content generation tools must comply with applicable data protection and privacy laws and implement safeguards against non-consensual intimate imagery and other harmful content, particularly involving children. The statement calls for transparency regarding AI capabilities, accessible content removal mechanisms, and enhanced protections for vulnerable groups. 
Authorities also committed to coordinated information sharing, enforcement actions where appropriate, and ongoing regulatory engagement to address cross-border privacy risks.</p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://indico.un.org/event/1022678/">STEPAN Webinar : Advancing Responsible AI</a></strong><a href="https://indico.un.org/event/1022678/"> </a>| Virtual | 12 March 2026</p></li><li><p><strong><a href="https://events.lynx.co/ai-security-summit/">AI Security Summit 2026</a></strong><a href="https://events.lynx.co/ai-security-summit/"> </a>| Tel Aviv, Israel | March 15, 2026</p></li><li><p><strong><a href="https://www.eventbrite.com/e/brand-ai-safety-summit-asia-from-singapore-tickets-1977289551275">Brand &amp; AI Safety Summit Asia</a></strong> | Singapore | 19 March 2026</p></li><li><p><strong><a 
href="https://apartresearch.com/sprints/ai-control-hackathon-2026-03-20-to-2026-03-22">AI Control Hackathon</a></strong><a href="https://apartresearch.com/sprints/ai-control-hackathon-2026-03-20-to-2026-03-22"> </a>| Virtual | 20 - 22 March 2026</p></li><li><p><strong><a href="https://www.worldmunday.com/ai-governance-and-public-policy-summit/">Global Youth Summit on AI Governance and Public Policy (Policy Pitch Summit)</a></strong> | Online | 23 March 2026</p></li><li><p><strong><a href="https://www.rsaconference.com">RSA Conference 2026</a></strong> | San Francisco, CA | March 23&#8211;26, 2026</p></li><li><p><strong><a href="https://irmuk.co.uk/dg-ai-governance-conference/">Data Governance &amp; AI Governance Conference Europe</a> </strong>| London, UK | March 23&#8211;27, 2026</p></li><li><p><strong><a href="https://www.far.ai/events/event-list/technical-innovations-for-ai-policy-tiap-conference-2026">Technical Innovations for AI Policy (TIAP) Conference 2026</a></strong> | Washington, DC | 30 - 31 March 2026</p></li><li><p><strong><a href="https://www.oecd-events.org/e/ai-wips-2026">AI WIPS OECD</a></strong> | Virtual | 30 March - 1 April 2026</p></li><li><p><strong><a href="https://iapp.org/conference/iapp-global-summit">IAPP Global Summit 2026</a></strong> | Washington, DC | 30 March - 2 April 2026</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 02.25.2026]]></title><description><![CDATA[Alabama advances AI health insurance bill, EU charges Meta over WhatsApp AI access, Anthropic gives $20M for AI policy advocacy]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-02252026</link><guid
isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-02252026</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Thu, 26 Feb 2026 03:51:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Dear readers, I&#8217;m hosting a Women in AI Policy &amp; Safety gathering on March 3 at 6 PM in San Francisco. The evening will bring together women leaders across AI policy and safety to connect, exchange perspectives, and share ideas in an intimate setting. A few spots remain available. If you&#8217;re interested in attending, please email me at theaipolicynewsletter@gmail.com.</em></p><p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, an Alabama Senate committee <a href="https://alabamareflector.com/briefs/bill-regulating-ai-in-determining-health-coverage-passes-senate-committee/">advanced</a> a bill requiring human review and written disclosure when AI is used in health insurance coverage decisions. In New York, rival AI-focused PACs <a href="https://www.cnbc.com/2026/02/19/dueling-pacs-take-center-stage-in-midterm-elections-over-ai-regulation.html">increased</a> midterm spending in a congressional race tied to state AI safety legislation while Nashville songwriters <a href="https://www.wsmv.com/2026/02/15/nashville-songwriters-push-ai-regulation-capitol-hill/">met</a> with federal lawmakers to advocate for copyright safeguards, compensation, transparency in AI training data, and enforcement mechanisms. 
In Pennsylvania, parents <a href="https://6abc.com/post/parents-address-radnor-township-school-board-ai-deepfake-scandal/18583014/">called</a> for updated district policies and clearer investigative procedures following an AI-generated deepfake incident at a high school.</p><p>&#127757; <strong>Globally</strong>, the European Commission <a href="https://www.politico.eu/article/eu-to-halt-whatsapp-business-chatbot-policy/">issued</a> a chargesheet to Meta over alleged antitrust violations related to restricting rival AI chatbots on WhatsApp and signaled possible interim measures. At the AI Impact Summit in New Delhi, Prime Minister Narendra Modi <a href="https://www.business-standard.com/technology/tech-news/ai-summit-modi-democratising-ai-manav-vision-global-governance-126021901495_1.html">presented</a> India&#8217;s MANAV framework centered on data sovereignty, transparency, and human oversight. India also <a href="https://www.cnbctv18.com/technology/india-ai-content-regulation-compliance-timelines-over-censorship-debate-19847957.htm">implemented</a> new AI compliance rules requiring certain flagged content to be removed within two to three hours. Separately, French President Emmanuel Macron <a href="https://www.theguardian.com/technology/2026/feb/19/emmanuel-macron-eu-ai-rules-child-safety-digital-abuse">defended</a> EU AI regulations and outlined child online safety measures during France&#8217;s G7 presidency.</p><p>&#128126; <strong>In Industry</strong>, Anthropic <a href="https://www.anthropic.com/news/donate-public-first-action">announced</a> a $20 million donation to Public First Action to support bipartisan AI governance efforts focused on transparency, export controls, and federal regulation. Meta <a href="https://www.nytimes.com/2026/02/18/technology/meta-65-million-election-ai.html">launched</a> a $65 million state-level election initiative through multiple super PACs to influence AI-related legislation. 
Meanwhile, thermal drone footage <a href="https://www.theguardian.com/environment/2026/feb/13/elon-musk-xai-datacenters-air-pollution-mississippi">showed</a> xAI operating gas turbines at a Mississippi data center amid a permitting dispute involving state regulators and the Environmental Protection Agency.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. Your help supports my writing</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong><a href="https://alabamareflector.com/briefs/bill-regulating-ai-in-determining-health-coverage-passes-senate-committee/">Alabama Senate Committee Advances Bill Regulating AI Use in Health Coverage Decisions</a></strong></p><p>The Alabama Senate committee approved SB 63, a bill sponsored by Sen. Arthur Orr, that would regulate the use of artificial intelligence in health insurance coverage determinations. The legislation does not prohibit insurers from using AI but requires that a licensed health care professional make the final decision in cases where coverage is denied. It also mandates written disclosure to plan sponsors and individual enrollees when AI is used in determining coverage. 
If insurers fail to re-evaluate AI-based denials or repeatedly fail to disclose AI use, the Alabama Department of Insurance would be required to take disciplinary action. The bill now moves to the full Senate for consideration.</p><p><strong><a href="https://www.cnbc.com/2026/02/19/dueling-pacs-take-center-stage-in-midterm-elections-over-ai-regulation.html">AI-Focused PACs Compete in New York Congressional Primary</a></strong></p><p>Two political action committees centered on artificial intelligence policy are backing opposing positions in the Democratic primary for New York&#8217;s 12th congressional district. Jobs and Democracy PAC is supporting Assemblyman Alex Bores, who helped advance New York&#8217;s RAISE Act requiring large AI developers to publish safety protocols and report serious misuse. Bores has also been targeted by Leading the Future PAC, which is backed by venture capital and technology industry figures. The activity is part of a broader national effort by AI-focused groups to support candidates aligned either with expanded AI regulation or policies limiting state-level AI rules ahead of the midterm elections.</p><p><strong><a href="https://www.wsmv.com/2026/02/15/nashville-songwriters-push-ai-regulation-capitol-hill/">Nashville Songwriters Advocate for AI Copyright Guardrails</a></strong></p><p>Representatives from the Nashville Songwriters Association International met with lawmakers in Washington, D.C. to discuss proposed regulations addressing the use of copyrighted music in artificial intelligence systems. The group outlined policy priorities focused on permission from copyright holders before AI training, compensation for use of creative works, transparency regarding training data sources, and legal remedies for unauthorized use. They referenced pending federal legislation including the COPIED Act, the TRAIN Act, and the CLEAR Act, which address training data disclosure, copyright protections, and enforcement mechanisms. 
Songwriters said they are seeking clearer legal standards as AI tools become more integrated into music production and distribution.</p><p><strong><a href="https://6abc.com/post/parents-address-radnor-township-school-board-ai-deepfake-scandal/18583014/">Radnor High School Parents Call for Policy Updates After AI Deepfake Incident</a></strong></p><p>Parents at Radnor High School addressed the school board following harassment charges filed against a juvenile in connection with the creation and distribution of AI-generated sexualized images of students. Families called for updates to district policies related to bullying, harassment, and technology use, as well as clearer communication and defined timelines during investigations. Parents also requested parental consent before student interviews and annual age-appropriate education on artificial intelligence and its misuse. School board members discussed potential policy revisions, including how district rules apply to off-campus conduct involving AI tools.</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.politico.eu/article/eu-to-halt-whatsapp-business-chatbot-policy/">EU Issues Antitrust Chargesheet to Meta Over WhatsApp AI Access</a></strong></p><p>The European Commission sent Meta a statement of objections outlining preliminary concerns that the company breached EU antitrust rules by restricting rival AI chatbot providers from using the WhatsApp Business Solution. The investigation focuses on a policy preventing AI services from accessing the platform when AI is their primary offering. The Commission said it is considering interim measures to maintain competitor access to WhatsApp during the investigation to prevent potential market harm. 
Meta disputed the Commission&#8217;s position, stating that multiple distribution channels for AI services remain available.</p><p><strong><a href="https://www.business-standard.com/technology/tech-news/ai-summit-modi-democratising-ai-manav-vision-global-governance-126021901495_1.html">India Highlights MANAV Framework and Global AI Cooperation at Impact Summit</a></strong></p><p>At the AI Impact Summit in New Delhi, Prime Minister Narendra Modi outlined India&#8217;s MANAV framework for artificial intelligence, centered on ethical systems, accountable governance, data sovereignty, accessibility, and lawful, verifiable deployment. He called for AI to be developed as a global common good, supported by open standards, transparent safety rules, and authenticity labeling for AI-generated content. The summit included participation from leaders such as Emmanuel Macron and Antonio Guterres, as well as technology executives including Sundar Pichai, Sam Altman, and Dario Amodei. Discussions addressed sovereign AI, global governance, infrastructure demands, and workforce impacts, with a leaders&#8217; declaration scheduled at the summit&#8217;s conclusion.</p><p><strong><a href="https://www.cnbctv18.com/technology/india-ai-content-regulation-compliance-timelines-over-censorship-debate-19847957.htm">India&#8217;s Three-Hour AI Takedown Rule Triggers Debate Over Platform Liability</a></strong></p><p>India&#8217;s revised AI compliance framework requires social media intermediaries to act on government takedown orders within three hours, while non-consensual nude imagery must be removed within two hours and impersonation-related content within 36 hours. The rules form part of updated obligations addressing AI-generated content, including deepfakes and fabricated documents. Platforms must also label lawfully generated synthetic content and verify user uploads for AI origin. 
Legal experts say the shortened timelines could affect intermediary Safe Harbour protections, compliance operations, and content moderation processes. The government has stated the changes are intended to address rapid harm from synthetic media, while legal challenges related to enforcement mechanisms, including the Sahyog portal, are ongoing in Indian courts.</p><p><strong><a href="https://www.theguardian.com/technology/2026/feb/19/emmanuel-macron-eu-ai-rules-child-safety-digital-abuse">Macron Defends EU AI Rules and Calls for Child Online Protections</a></strong></p><p>At the AI Impact Summit in Delhi, French President Emmanuel Macron defended the European Union&#8217;s AI regulatory framework amid criticism from U.S. officials and reaffirmed plans to address online child safety during France&#8217;s G7 presidency. He cited concerns over AI-generated sexualized images of children and said France is moving to ban social networks for users under 15. Antonio Guterres called for global cooperation and warned against concentration of AI governance among a small number of actors. Prime Minister Narendra Modi raised issues related to AI monopolies and content authenticity, while executives including Sam Altman and Dario Amodei discussed safeguards and oversight mechanisms.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://www.anthropic.com/news/donate-public-first-action">Anthropic Contributes $20 Million to Public First Action for AI Policy Advocacy</a></strong></p><p>Anthropic announced a $20 million donation to Public First Action, a bipartisan 501(c)(4) organization focused on AI policy education and governance. The group supports measures including AI model transparency requirements, a federal AI regulatory framework, export controls on advanced AI chips, and targeted regulation addressing risks such as AI-enabled cyberattacks and biological threats. 
Public First Action works with Republican and Democratic policymakers and opposes federal preemption of state AI laws unless stronger safeguards are enacted. Anthropic stated that the contribution is intended to support policy development related to AI governance, transparency, and national security considerations.</p><p><strong><a href="https://www.nytimes.com/2026/02/18/technology/meta-65-million-election-ai.html">Meta Launches $65 Million State-Level Election Effort on AI Policy</a></strong></p><p>Meta is preparing to spend $65 million in 2026 to support state-level candidates aligned with its artificial intelligence policy priorities, beginning in Texas and Illinois. The company has established two new super PACs&#8212;Forge the Future Project, backing Republicans, and Making Our Tomorrow, backing Democrats&#8212;joining two existing Meta-backed PACs focused on California and other states. The spending marks Meta&#8217;s largest election investment to date and is aimed at influencing state legislation related to AI development and data center expansion.</p><p><strong><a href="https://www.theguardian.com/environment/2026/feb/13/elon-musk-xai-datacenters-air-pollution-mississippi">Thermal Footage Shows xAI Turbines Operating Amid Permit Dispute in Mississippi</a></strong></p><p>Thermal drone footage published by Floodlight indicates that xAI is operating more than a dozen gas turbines at its Southaven, Mississippi facility without state air permits, despite a January ruling by the Environmental Protection Agency stating such equipment requires permits under the Clean Air Act. Mississippi regulators classify the trailer-mounted turbines as portable units exempt from permitting, while the EPA has maintained that similar pollution sources require prior approval. The turbines power xAI&#8217;s Grok chatbot and are located near residential areas and schools. 
The company has applied for permits to expand operations, and a public hearing and comment period are underway.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-02252026?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-02252026?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://sites.google.com/view/sc4ai/workshops/sc4ai26e">Social Choice for AI Ethics and Safety 2026 Europe (SC4AI&#8217;26e)</a> </strong>| Paris, France | 26 - 27 February 2026</p></li><li><p><strong><a href="https://caio-london.re-work.co">Chief AI Officer (CAIO) Summit</a></strong> | London, UK | 27 February 2026</p></li><li><p>Women in AI Policy &amp; Safety | San Francisco, CA | 3 March 2026 - Reach out to theaipolicynewsletter@gmail.com to request attendance. 
</p></li><li><p><strong><a href="https://www.far.ai/events/event-list/london-alignment-workshop-2026">London Alignment Workshop</a></strong><a href="https://www.far.ai/events/event-list/london-alignment-workshop-2026"> </a> | London, UK | 2 - 3 March 2026</p></li><li><p><strong><a href="https://www.dataversity.net/webinar/annual-executive-briefing-leading-ai-governance-webinar-series/">AI Governance vs. Data Governance: Strategic Alignment Without Redundancy</a> </strong>| Online | 3 March 2026</p></li><li><p><strong><a href="https://www.gartner.com/en/conferences/na/data-analytics-us">Gartner Data &amp; Analytics Summit 2026</a></strong> | Orlando, Florida | 9 - 11 March 2026</p></li><li><p><strong><a href="https://www.nvidia.com/gtc/">NVIDIA GTC 2026</a></strong> | San Jose, California | 16 - 19 March 2026</p></li><li><p><strong><a href="https://iapp.org/conference/iapp-global-summit">IAPP Global Summit 2026</a></strong> | Washington, DC | 30 March - 2 April 2026</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 02.09.2026]]></title><description><![CDATA[Trump tests AI for faster federal rulemaking, China plans policies to manage AI&#8217;s impact on jobs, Pentagon and Anthropic clash over military use of AI]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-02092026</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-02092026</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Tue, 10 Feb 2026 02:17:01 GMT</pubDate><enclosure
url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, national labor leaders <a href="https://www.politico.com/news/2026/02/04/labor-leaders-blast-gavin-newsom-over-ai-demand-more-regulation-00764927">pressed</a> California Gov. Gavin Newsom to support new AI-related worker protections as part of a broader debate over employment, surveillance, and automation. The Trump administration <a href="https://www.propublica.org/article/trump-artificial-intelligence-google-gemini-transportation-regulations?%5C">outlined</a> plans for the Department of Transportation to use generative AI tools to draft federal regulations, while California lawmakers <a href="https://www.reuters.com/legal/government/california-senate-passes-bill-regulating-lawyers-use-ai-2026-01-30/">advanced</a> a bill requiring lawyers to verify AI-generated materials and restrict AI use in arbitration. 
Wisconsin legislators <a href="https://www.wpr.org/news/chatbots-age-verification-companionship-minors-wisconsin-assembly-chatgpt">examined</a> age-verification and safety requirements for companionship chatbots used by minors, and the FTC <a href="https://natlawreview.com/article/ftc-signals-pause-ai-regulation">signaled</a> it is not planning new AI-specific rulemaking, while continuing enforcement under existing privacy and consumer protection laws.</p><p>&#127757; <strong>Globally</strong>, China <a href="https://www.nytimes.com/2026/02/02/business/china-ai-regulations.html">reiterated</a> its strategy of accelerating AI development while enforcing extensive regulatory controls focused on information management, data protection, and social stability while its authorities also <a href="https://www.globaltimes.cn/page/202601/1354301.shtml">announced</a> plans to introduce employment policies addressing AI&#8217;s impact on jobs and workforce transitions. Uzbekistan <a href="https://www.dentons.com/en/insights/articles/2026/january/29/uzbekistan-adopts-first-ai-focused-amendments-to-information-and-administrative-laws">adopted</a> its first AI-focused amendments to information and administrative laws, including a statutory AI definition, human-in-the-loop requirements, and penalties for unlawful AI-based data processing. 
Mexico <a href="https://babl.ai/mexico-unveils-national-declaration-on-ethical-ai-to-guide-public-policy-and-protect-human-rights/">released</a> a national declaration outlining ethical principles for AI in public policy, and Indonesia <a href="https://www.bernama.com/en/world/news.php?id=2517192">began</a> drafting rules to require labeling or watermarks on AI-generated content.</p><p>&#128126; <strong>In Industry</strong>, pharmaceutical companies <a href="https://www.reuters.com/legal/litigation/drugmakers-turn-ai-speed-trials-regulatory-submissions-2026-01-26/">reported</a> wider use of AI to streamline clinical trials, site selection, and regulatory documentation. The Pentagon and Anthropic <a href="https://www.reuters.com/business/pentagon-clashes-with-anthropic-over-military-ai-use-2026-01-29/">entered</a> discussions over limits on military and domestic surveillance uses of commercial AI systems while Apple co-founder Steve Wozniak <a href="https://www.govtech.com/education/higher-ed/steve-wozniak-calls-for-ai-regulation-at-lehigh-university-event">called</a> for AI transparency, source attribution, and in-person assessment in education during a university event. Separately, Starlink <a href="https://www.newsbytesapp.com/news/science/spacex-can-now-use-starlink-user-data-to-train-ai/story">updated</a> its privacy policy to allow customer data to be used for AI training and shared with third-party collaborators.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. 
Your help supports my writing</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong>United States&nbsp;</strong></p><p><strong><a href="https://www.politico.com/news/2026/02/04/labor-leaders-blast-gavin-newsom-over-ai-demand-more-regulation-00764927">Labor Leaders Call on Newsom to Support Expanded AI Regulation</a></strong></p><p>Labor union leaders from the AFL-CIO held a news conference urging California Governor Gavin Newsom to support additional artificial intelligence regulations focused on worker protections. Representatives from multiple states linked their political support for Newsom to his stance on AI-related legislation, citing concerns about layoffs, workplace surveillance, algorithmic decision-making in hiring and discipline, and other employment impacts. Union leaders highlighted proposed California bills that would require advance notice for AI-related layoffs, mandate human oversight of AI systems used in employment decisions, and restrict certain surveillance tools in workplaces. 
Newsom&#8217;s office responded that California has enacted a range of worker-related AI measures, including safety and deepfake laws, and said the governor&#8217;s approach seeks to balance regulation with continued AI development.</p><p><strong><a href="https://www.propublica.org/article/trump-artificial-intelligence-google-gemini-transportation-regulations?%5C">Trump Administration Explores Using AI to Draft Federal Transportation Rules</a></strong></p><p>The Trump administration is planning to use artificial intelligence to assist in drafting federal transportation regulations, according to internal Department of Transportation records and staff accounts. DOT officials have discussed using Google&#8217;s Gemini model to rapidly generate draft rulemakings, with agency lawyers reviewing and revising the output. Senior leadership described the goal as significantly accelerating the rulemaking timeline, potentially producing full draft regulations within weeks rather than months. The department has already used AI to draft an unpublished Federal Aviation Administration rule. Supporters within the agency frame AI as a tool to increase efficiency and handle routine drafting work, while some staff have raised concerns about accuracy, oversight, and the risks of relying on automated systems for complex safety-related regulations governing aviation, pipelines, rail, and roadway transportation.</p><p><strong><a href="https://www.reuters.com/legal/government/california-senate-passes-bill-regulating-lawyers-use-ai-2026-01-30/">California Senate Advances Bill on Lawyers&#8217; Use of AI</a></strong></p><p>The California Senate passed a bill that would require lawyers to verify the accuracy of any materials produced using artificial intelligence, including legal citations and factual statements in court filings. 
The measure would also prohibit arbitrators from delegating decision-making to generative AI and from relying on AI-generated information outside the case record without notifying the parties. Under the bill, attorneys would need to take reasonable steps to correct false or biased AI output, avoid inputting confidential or nonpublic information into public AI tools, and ensure AI use does not result in unlawful discrimination. The legislation, SB 574, now moves to the State Assembly for consideration.</p><p><strong><a href="https://www.wpr.org/news/chatbots-age-verification-companionship-minors-wisconsin-assembly-chatgpt">Wisconsin Lawmakers Consider Age Verification for AI Companionship Chatbots</a></strong></p><p>Wisconsin lawmakers held a hearing on a proposal that would require age verification and additional safeguards for human-like AI companionship chatbots used by minors. The bill targets chatbots with features such as memory of past conversations, emotional questioning, and personalized interactions, and would require guardrails to prevent encouragement of self-harm, substance use, violence, illegal activity, or sexual behavior. Companies could face enforcement actions or lawsuits for violations. Supporters cited usage data showing widespread teen engagement with chatbots, while critics raised concerns about compliance burdens, data privacy risks from age verification, and potential impacts on educational AI tools. Similar measures have been considered in other states and at the federal level.</p><p><strong><a href="https://natlawreview.com/article/ftc-signals-pause-ai-regulation#google_vignette">FTC Says No New AI-Specific Rulemaking Is Planned</a></strong></p><p>The Federal Trade Commission said it does not plan to introduce new artificial intelligence&#8211;specific rules in the near term. 
At the Privacy State of the Union Conference, Bureau of Consumer Protection Director Chris Mufarrige stated that there is currently no AI-related rulemaking in the FTC&#8217;s pipeline. The comments followed the agency&#8217;s decision to reopen and set aside a 2024 consent order involving AI writing tool Rytr and referenced the Trump administration&#8217;s AI Action Plan, which focuses on reducing regulatory barriers to AI development. The FTC indicated it will rely on existing legal authorities rather than new AI-specific regulations, while continuing enforcement activities related to children&#8217;s privacy, including oversight under the Children&#8217;s Online Privacy Protection Act.</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.nytimes.com/2026/02/02/business/china-ai-regulations.html">China Sets Parallel Push for AI Expansion and Regulatory Compliance</a></strong></p><p>China&#8217;s leadership outlined an approach to artificial intelligence that pairs accelerated development with expanded regulatory oversight. President Xi Jinping described AI as a driver of future economic growth while calling for early controls to address potential risks. Chinese AI companies are being encouraged to scale development while complying with multiple rules covering data sources, information controls, algorithm disclosures, and content management. Firms such as Zhipu AI have cited compliance obligations in investor filings, including requirements to prevent the spread of prohibited information. 
Since 2022, companies have been required to report algorithmic details to regulators, with additional draft rules introduced for generative and companion-style AI systems.</p><p><strong><a href="https://www.globaltimes.cn/page/202601/1354301.shtml">China Plans Policy Measures Addressing AI and Employment</a></strong></p><p>China is preparing a set of policy measures to address the impact of artificial intelligence on employment, according to statements from government officials reported by state media. The Ministry of Human Resources and Social Security said an official document will outline responses to job market changes linked to AI adoption, including employment support for key industries and priority groups such as university graduates and young job-seekers. Officials noted that AI-driven technological change is expected to reshape job roles rather than eliminate employment overall, alongside expanded efforts to train interdisciplinary talent combining AI and manufacturing skills. The measures align with broader efforts to integrate AI across major industries while adapting workforce policies to technological shifts.</p><p><strong><a href="https://www.dentons.com/en/insights/articles/2026/january/29/uzbekistan-adopts-first-ai-focused-amendments-to-information-and-administrative-laws">Uzbekistan Enacts AI Amendments to Information and Administrative Laws</a></strong></p><p>Uzbekistan has adopted amendments to its information and administrative legislation establishing its first statutory framework specifically addressing artificial intelligence. The law introduces a legal definition of AI, sets general rules for the use of AI in information systems, and requires that legally significant decisions affecting individuals&#8217; rights not be based solely on AI outputs. It also adds new administrative liability for the unlawful processing and dissemination of personal data using AI technologies, with specified fines and confiscation measures. 
The amendments apply across sectors and entered into force on January 21, 2026, following official publication.</p><p><strong><a href="https://babl.ai/mexico-unveils-national-declaration-on-ethical-ai-to-guide-public-policy-and-protect-human-rights/">Mexico Issues National Declaration on Ethical AI Use</a></strong></p><p>Mexico has released a National Declaration of Ethics and Best Practices for the Use and Development of Artificial Intelligence, led by the Secretariat of Science, Humanities, Technology and Innovation and the Agency for Digital Transformation and Telecommunications. Presented on January 29, 2026, the declaration provides a voluntary framework to guide public policy and the use of AI across government, the private sector and civil society. It outlines principles focused on human responsibility for AI-supported decisions, explainability, responsible data use, protection of human rights, cultural and linguistic diversity, and alignment with national priorities. Officials said the declaration is intended to inform future legislative and regulatory efforts as Mexico develops its long-term AI governance approach.</p><p><strong><a href="https://www.bernama.com/en/world/news.php?id=2517192">Indonesia Drafts Rule Requiring Labels for AI-Generated Content</a></strong></p><p>Indonesia is preparing a regulation that would require content created using generative artificial intelligence to carry a watermark or special label, according to the Communication and Digital Ministry. The proposed ministerial rule would require AI platforms to label AI-generated content uploaded to digital platforms, with non-compliant content subject to takedown. Officials said the regulation is intended to complement two forthcoming Presidential Regulations covering a National AI Roadmap and ethical guidelines for AI use. 
Existing sanctions for AI-generated content that violates current laws would continue to be enforced under Indonesia&#8217;s Electronic Information and Transactions Law.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://www.reuters.com/business/pentagon-clashes-with-anthropic-over-military-ai-use-2026-01-29/">Pentagon and Anthropic Disagree Over Military Use of AI Systems</a></strong></p><p>The U.S. Department of Defense and artificial intelligence developer Anthropic are in discussions over limits on how Anthropic&#8217;s AI models may be used for military and intelligence purposes, according to people familiar with the matter. The disagreement centers on safeguards that Anthropic seeks to maintain, including restrictions on autonomous weapons targeting and domestic surveillance, while Pentagon officials argue they should be able to deploy commercial AI tools in line with U.S. law regardless of company usage policies. Talks have taken place under a contract valued at up to $200 million, and the outcome could affect Anthropic&#8217;s role in U.S. national security projects, as well as broader interactions between AI developers and the military.</p><p><strong><a href="https://www.newsbytesapp.com/news/science/spacex-can-now-use-starlink-user-data-to-train-ai/story">Starlink Updates Privacy Policy to Allow AI Training Use of Customer Data</a></strong></p><p>Starlink has updated its privacy policy to permit the use of customer data for artificial intelligence training and to allow data sharing with third-party collaborators. The company collects data including location information, payment and contact details, IP addresses, and certain communications data, though the policy does not specify which data categories will be used for AI training. The change follows broader AI development efforts linked to Elon Musk&#8217;s companies, including xAI, which is developing the Grok large language model. 
The policy update applies across Starlink&#8217;s global user base and reflects an expansion of permitted data uses beyond service provision.</p><p><strong><a href="https://www.reuters.com/legal/litigation/drugmakers-turn-ai-speed-trials-regulatory-submissions-2026-01-26/">Drugmakers Use AI to Streamline Clinical Trials and Regulatory Submissions</a></strong></p><p>Pharmaceutical companies are using artificial intelligence to support participant recruitment, site selection for clinical trials, and preparation of regulatory documents, according to executives speaking at the JP Morgan Healthcare Conference. Drugmakers including Novartis, AstraZeneca, Roche, Pfizer, GSK, and Eli Lilly reported using AI tools to reduce manual administrative work, manage large volumes of clinical and safety documentation, and shorten trial setup timelines. Examples cited include faster site selection, automated formatting of regulatory submissions, and AI-assisted trial enrollment and data analysis. Companies said these uses are focused on operational processes rather than drug discovery itself, with time savings ranging from weeks to months depending on the application.</p><p><strong><a href="https://www.govtech.com/education/higher-ed/steve-wozniak-calls-for-ai-regulation-at-lehigh-university-event">Steve Wozniak Discusses AI Oversight and Education at Lehigh University</a></strong></p><p>Apple co-founder Steve Wozniak spoke at a Lehigh University event about artificial intelligence governance, transparency, and education practices. He said AI systems should be able to cite sources for generated information and emphasized the role of human oversight in technology use. Wozniak encouraged skepticism toward AI outputs and said in-person assessments can help educators evaluate students&#8217; knowledge as AI tools become more common. He described AI as useful for generating ideas but limited in understanding, noting that humans remain responsible for reviewing and contextualizing results. 
Wozniak also raised concerns about deepfakes and inaccurate outputs, and discussed potential effects of AI on the labor market, advising students to focus on developing independent skills.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-04162026?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-04162026?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://www.worldaicannes.com">World AI Cannes Festival (WAICF)</a></strong> | Cannes, France | February 12-13 2026</p></li><li><p><strong><a href="https://impact.indiaai.gov.in">India - AI Impact Summit 2026</a></strong> | New Delhi, India | February 16-20 2026</p></li><li><p><strong><a href="https://events.asc.ac.at/event/232/">Trustworthy AI: Legal Aspects</a></strong> | Online | 17 February 2026</p></li><li><p><strong><a href="https://iapp.org/conference/iapp-uk-intensive">IAPP UK Intensive 2026</a> </strong>| London | February 23-26 2026</p></li><li><p><strong><a href="https://ismg.events/summit/implications-of-ai-virtual-feb-2026/">Cybersecurity 
Summit : Implications of AI </a></strong>| Virtual | February 24 2026</p></li><li><p><strong><a href="https://www.iaseai.org/our-programs/iaseai26">The International Association for Safe &amp; Ethical AI (IASEAI)</a></strong> | Paris, France | February 24&#8211;26 2026</p></li><li><p><strong><a href="https://caio-london.re-work.co">Chief AI Officer (CAIO) Summit</a></strong> | London, UK | 27 February 2026</p></li><li><p><strong><a href="https://iapp.org/conference/iapp-global-summit">IAPP Global Summit 2026</a></strong>  | Washington, DC | 30 March-2 April</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 01.26.2026]]></title><description><![CDATA[Oklahoma proposes 3 AI bills, South Korea enacts sweeping AI law, Gates & OpenAI launch $50M AI health pilot in Africa.]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01262026</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01262026</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Tue, 27 Jan 2026 02:01:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, Oklahoma lawmakers <a href="https://www.kswo.com/2026/01/17/oklahoma-lawmaker-files-trio-ai-regulation-bills/">introduced</a> three AI bills restricting legal personhood for AI, limiting government use of AI for surveillance and deepfakes, and banning 
social AI companions for minors, while Florida&#8217;s Senate <a href="https://www.cbsnews.com/miami/news/florida-senate-backs-artificial-intelligence-bill-of-rights/">advanced</a> an &#8220;Artificial Intelligence Bill of Rights&#8221; covering parental controls, AI disclosures, political ads, and limits on contracts with foreign-linked AI firms. Separately, a new EPA rule <a href="https://www.politico.com/news/2026/01/22/epa-thwarts-musks-diesel-turbines-ai-00737605">clarified</a> that gas turbines powering AI data centers require permits, complicating xAI&#8217;s expansion in Tennessee and Mississippi amid environmental scrutiny.</p><p>&#127757; <strong>Globally</strong>, South Korea <a href="https://www.reuters.com/world/asia-pacific/south-korea-launches-landmark-laws-regulate-ai-startups-warn-compliance-burdens-2026-01-22/">brought</a> its AI Basic Act into force with human-oversight, labeling, and high-impact system rules alongside a grace period for enforcement, as startups warned about compliance burdens. The UAE <a href="https://www.arabnews.com/node/2630175/amp">outlined</a> a flexible, principles-based approach to AI governance at the World Economic Forum, emphasizing human accountability and adaptable regulation. In the UK, lawmakers <a href="https://www.reuters.com/sustainability/boards-policy-regulation/britain-needs-ai-stress-tests-financial-services-lawmakers-say-2026-01-20/">urged</a> financial regulators to adopt AI stress tests and issue clearer guidance to address consumer and systemic risks in financial services.</p><p>&#128126; <strong>In Industry</strong>, the Gates Foundation and OpenAI <a href="https://healthpolicy-watch.news/gates-and-openai-team-up-to-pilot-ai-solutions-to-african-healthcare-problems/">launched</a> a $50 million pilot to deploy AI tools across 1,000 African healthcare clinics, starting in Rwanda. 
Hiring platform Eightfold AI was <a href="https://www.reuters.com/sustainability/boards-policy-regulation/ai-company-eightfold-sued-helping-companies-secretly-score-job-seekers-2026-01-21/">sued</a> under U.S. credit reporting laws for allegedly scoring job applicants without notice. Salesforce CEO Marc Benioff <a href="https://www.cnbc.com/2026/01/20/salesforce-benioff-ai-regulation-suicide-coaches.html">called</a> for stronger AI regulation, citing cases where AI systems were linked to self-harm.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. Your help supports my writing</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong>United States&nbsp;</strong></p><p><strong><a href="https://www.kswo.com/2026/01/17/oklahoma-lawmaker-files-trio-ai-regulation-bills/">Oklahoma Lawmaker Introduces Three Bills to Regulate AI Use</a></strong></p><p>An Oklahoma legislator has introduced three bills aimed at establishing safeguards for artificial intelligence across the state. House Bill 3546 would prohibit any AI system from being granted legal personhood under state or federal law. House Bill 3545 focuses on government use of AI, barring state agencies from deploying systems for discriminatory classification, biometric surveillance, or generating deepfakes. 
It would also require human review of AI-generated recommendations and mandate agency participation in an annual statewide AI report. House Bill 3544 targets protections for minors by banning &#8220;social AI companions&#8221; for children and requiring age verification for AI chatbots, with limited exceptions for therapeutic AI tools operating under professional oversight.</p><p><strong><a href="https://www.cbsnews.com/miami/news/florida-senate-backs-artificial-intelligence-bill-of-rights/">Florida Senate Advances &#8220;Artificial Intelligence Bill of Rights&#8221; Despite Industry Opposition</a></strong></p><p>A Florida Senate committee unanimously advanced a proposed &#8220;Artificial Intelligence Bill of Rights&#8221; that would establish new protections around AI use, despite opposition from national tech groups and parallel federal efforts to curb state-level regulation. The bill would grant parents control over children&#8217;s interactions with AI, require disclosure when users are communicating with AI systems, and restrict unauthorized use of names, images, or likenesses. It would also mandate disclosure of AI-generated political advertising and bar state agencies from contracting with AI firms tied to foreign countries of concern. Supporters cited risks to minors and vulnerable populations, while critics warned the measure could create compliance burdens and regulatory fragmentation.</p><p><strong><a href="https://www.politico.com/news/2026/01/22/epa-thwarts-musks-diesel-turbines-ai-00737605">EPA Rule Challenges xAI&#8217;s Use of Unpermitted Gas Turbines at Data Centers</a></strong></p><p>A newly finalized Environmental Protection Agency rule could affect Elon Musk&#8217;s AI company xAI by clarifying that gas turbines used at data centers require Clean Air Act permits, even if they are portable or temporary. 
xAI has faced scrutiny for operating dozens of methane gas turbines at its Memphis-area facilities without permits, arguing they qualified as temporary equipment. The EPA rule rejects that interpretation, stating such turbines are stationary sources subject to air pollution permitting. Environmental groups said the provision undercuts a loophole used at xAI&#8217;s site, while EPA leadership denied the rule specifically targets the company. The clarification comes as xAI expands data center operations in Tennessee and Mississippi, where additional turbines are planned or under review.</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.reuters.com/world/asia-pacific/south-korea-launches-landmark-laws-regulate-ai-startups-warn-compliance-burdens-2026-01-22/">South Korea Enacts Comprehensive AI Law as Startups Raise Compliance Concerns</a></strong></p><p>South Korea has brought into force what it describes as the world&#8217;s first comprehensive legal framework regulating artificial intelligence, with the AI Basic Act taking effect on January 22. The law introduces requirements for human oversight of &#8220;high-impact&#8221; AI used in areas such as healthcare, finance, transport, and critical infrastructure, alongside advance user notification and clear labeling of generative AI outputs. Companies face potential fines for violations, including penalties for failing to label AI-generated content, though authorities have granted at least a one-year grace period before enforcement. 
While the government says the framework balances AI adoption with trust and safety, startup groups warned that vague provisions and compliance costs could discourage innovation, prompting officials to consider additional guidance and support measures.</p><p><strong><a href="https://www.arabnews.com/node/2630175/amp">UAE Sets Out AI Governance Vision Focused on Flexibility and Human Oversight</a></strong></p><p>At the World Economic Forum in Davos, UAE Minister of State Maryam Al-Hammadi outlined the country&#8217;s approach to AI governance, emphasizing the need for adaptable regulation that supports rapid AI adoption while preserving core legal principles. She said the UAE has revised 90% of its laws in four years and is developing an AI system to assist with legislative drafting and stakeholder feedback, while keeping humans responsible for final decisions. Al-Hammadi stressed non-negotiable principles including accountability, transparency, privacy, and constitutional safeguards. The UAE aims to share its regulatory model internationally within two years. Other panelists cautioned against overregulation, arguing regulators should address concrete harms as they emerge rather than anticipate speculative risks.</p><p><strong><a href="https://www.reuters.com/sustainability/boards-policy-regulation/britain-needs-ai-stress-tests-financial-services-lawmakers-say-2026-01-20/">UK Lawmakers Call for AI Stress Tests in Financial Services Regulation</a></strong></p><p>British lawmakers have urged financial regulators to adopt AI-specific stress tests to address risks posed by the growing use of artificial intelligence in financial services. A report from Parliament&#8217;s Treasury Committee said the Financial Conduct Authority and the Bank of England should move beyond a &#8220;wait and see&#8221; approach as AI systems become embedded across banking, insurance, and credit decisions. 
The committee recommended that the FCA issue guidance by the end of 2026 clarifying how consumer protection rules apply to AI and what level of understanding senior managers must have of AI systems. Lawmakers cited risks including opaque credit decisions, exclusion of vulnerable consumers, AI-driven fraud, and potential threats to financial stability from automated trading and reliance on major U.S. technology providers.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://healthpolicy-watch.news/gates-and-openai-team-up-to-pilot-ai-solutions-to-african-healthcare-problems/">Gates Foundation and OpenAI Launch $50M AI Healthcare Pilot in Africa</a></strong></p><p>The Gates Foundation and OpenAI have announced a $50 million pilot program, Horizon 1000, to deploy AI tools across 1,000 primary healthcare clinics in Africa by 2028. The initiative will provide funding, technology, and technical support to improve care delivery, reduce administrative burdens, and support decision-making for health workers. The pilot will begin in Rwanda before expanding to Kenya, South Africa, and Nigeria. Rwanda plans to use AI for disease diagnosis support, malaria prediction, health commodity forecasting, and administrative automation for its network of more than 60,000 community health workers. 
The program builds on existing digital infrastructure and data systems and aligns with broader efforts by global health organizations to use AI for applications such as tuberculosis screening in low-resource settings.</p><p><strong><a href="https://www.reuters.com/sustainability/boards-policy-regulation/ai-company-eightfold-sued-helping-companies-secretly-score-job-seekers-2026-01-21/">Eightfold AI Sued Over Alleged Secret Scoring of Job Applicants</a></strong></p><p>Eightfold AI, a hiring platform used by major companies including Microsoft and PayPal, is facing a proposed class-action lawsuit in California alleging it unlawfully evaluated job applicants without their knowledge. Plaintiffs claim Eightfold generated reports and talent profiles used in hiring decisions without providing required notice or opportunities to dispute inaccuracies, in violation of the U.S. Fair Credit Reporting Act and California consumer protection law. The lawsuit alleges Eightfold&#8217;s tools assess candidates&#8217; traits, education quality, and career trajectories using large datasets. Eightfold said it relies on data provided by candidates or employers and does not scrape social media, emphasizing its commitment to compliance. The case is described as the first to apply credit reporting laws directly to AI hiring systems.</p><p><strong><a href="https://www.cnbc.com/2026/01/20/salesforce-benioff-ai-regulation-suicide-coaches.html">Salesforce CEO Marc Benioff Urges AI Regulation Over Safety Risks</a></strong></p><p>Salesforce CEO Marc Benioff has renewed calls for regulating artificial intelligence, citing documented cases in which AI systems were linked to suicides. Speaking at the World Economic Forum in Davos, Benioff said some AI models had effectively become &#8220;suicide coaches,&#8221; arguing that unchecked deployment mirrors earlier harms caused by unregulated social media. He pointed to gaps in U.S. 
AI governance, where states such as California and New York have enacted their own safety and transparency laws while federal standards remain unclear. Benioff also questioned whether Section 230 liability protections should apply to AI systems that cause harm, suggesting existing legal frameworks may need revision as AI tools increasingly influence vulnerable users.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01262026?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01262026?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://luma.com/alignment-pretraining">How Training Data Shapes AI Values - Alignment Pretraining</a></strong> | Virtual | January 28, 2026</p></li><li><p><strong><a href="https://cdt.org/event/benchmarking-beyond-borders-making-ai-testing-truly-global/">Benchmarking Beyond Borders: Making AI Testing Truly Global</a></strong> | Virtual | January 29, 2026</p></li><li><p><strong><a 
href="https://apartresearch.com/sprints/the-technical-ai-governance-challenge-2026-01-30-to-2026-02-01">The Technical AI Governance Challenge</a></strong> | Virtual | January 30 &#8211; Feb 1 2026</p></li><li><p><strong><a href="https://www.ciscoaisummit.com/ai-virtual-summit.html">CISCO AI Summit</a></strong> | Online | February 3 2026</p></li><li><p><strong><a href="https://www.ai-expo.net/global/">AI &amp; Big Data Expo Global</a></strong>  | London, Great Britain | February 4&#8211;5, 2026</p></li><li><p><strong><a href="https://iser.org.in/conf/index.php?id=100144974">International Conference on Artificial Intelligence, Ethics, and Human Rights (ICAIEHR-26)</a> </strong>| Abu Dhabi, UAE | Feb 6 - 7, 2026</p></li><li><p><strong><a href="https://www.etsi.org/events/2591-etsi-ai-data-conference-2026">The ETSI AI and Data Conference 2026</a></strong> | France |  9-11 February 2026</p></li><li><p><strong><a href="https://impact.indiaai.gov.in">India - AI Impact Summit 2026</a></strong> | New Delhi, India | February 16 - 20 2026</p></li><li><p><strong><a href="https://ismg.events/summit/implications-of-ai-virtual-feb-2026/">Cybersecurity Summit : Implications of AI</a></strong> | Online | February 24 2026</p></li><li><p><strong><a href="https://www.iaseai.org/our-programs/iaseai26">The International Association for Safe &amp; Ethical AI (IASEAI)</a></strong> | Paris, France | February 24&#8211;26 2026</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 01.20.2026]]></title><description><![CDATA[DOJ forms task force to fight state AI laws, Brazil probes WhatsApp over chatbot ban, Meta exempts Italy after antitrust 
push.]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01202026</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01202026</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Wed, 21 Jan 2026 01:17:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, OSTP Director Michael Kratsios <a href="https://www.techpolicy.press/transcript-ostp-director-kratsios-testifies-on-trump-ai-action-plan/">outlined</a> federal priorities for maintaining U.S. AI leadership as the Justice Department, under the Attorney General, <a href="https://www.justice.gov/ag/media/1422986/dl?ref=broadbandbreakfast.com">launched</a> an AI Litigation Task Force to challenge state AI laws seen as conflicting with national policy. 
In parallel, the Commerce Department <a href="https://www.bbc.com/news/articles/cg4erx1n04lo">approved</a> limited sales of Nvidia&#8217;s H200 AI chips to China under supply and security conditions, while Wisconsin lawmakers <a href="https://docs.legis.wisconsin.gov/document/proposaltext/2025/REG/AB840.pdf">introduced</a> legislation imposing utility cost controls, water usage rules, and reclamation requirements on large data centers.</p><p>&#127757; <strong>Globally</strong>, Brazil&#8217;s antitrust authority CADE <a href="https://www.pymnts.com/cpi-posts/brazils-cade-halts-whatsapp-ai-policy-and-opens-antitrust-probe/">ordered</a> WhatsApp to suspend restrictions on third-party AI chatbots and opened a competition probe into Meta, as the European Commission <a href="https://brusselsmorning.com/european-commission-defends-policy-decisions-in-proposed-eu-ai-law-changes/90958/">defended</a> proposed amendments to the EU AI Act that retain biometric bans, high-risk system obligations, and transparency rules for general-purpose AI. South Korea <a href="https://babl.ai/south-koreas-revised-ai-basic-act-to-take-effect-january-22-with-new-oversight-watermarking-rules/">finalized</a> revisions to its AI Basic Act, introducing national oversight authority, mandatory watermarking for AI-generated content, and new compliance requirements for high-impact systems ahead of 2027 enforcement.</p><p>&#128126; <strong>In Industry</strong>, Meta <a href="https://www.reuters.com/sustainability/boards-policy-regulation/meta-exclude-italy-rival-chatbot-ban-whatsapp-2026-01-12/">exempted</a> Italy from its WhatsApp chatbot ban following intervention by Italian antitrust regulators, while xAI <a href="https://www.reuters.com/sustainability/boards-policy-regulation/musks-ai-bot-grok-limits-image-generation-x-paid-users-after-backlash-2026-01-09/">limited</a> Grok&#8217;s image-generation features on X after backlash over sexualized content. 
Bandcamp <a href="https://www.side-line.com/bandcamp-bans-ai-music-new-generative-ai-policy/">announced</a> a ban on generative AI music uploads and restricted AI training on its catalog, as former OpenAI policy chief Miles Brundage <a href="https://fortune.com/2026/01/15/former-openai-policy-chief-creates-nonprofit-institute-calls-for-independent-safety-audits-of-frontier-ai-models/">launched</a> a nonprofit pushing for independent safety audits of frontier AI models. Separately, Samsung <a href="https://www.sammyfans.com/2026/01/14/samsung-clarifies-galaxy-ai-policy-confirms-free-basic-features/">confirmed</a> that its core Galaxy AI features will remain free, while leaving open the possibility of paid advanced tools in the future.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. Your help supports my writing</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong>United States&nbsp;</strong></p><p><strong><a href="https://www.justice.gov/ag/media/1422986/dl?ref=broadbandbreakfast.com">Justice Department Creates AI Litigation Task Force to Challenge State AI Laws</a></strong></p><p>On January 9, 2026, the U.S. Department of Justice announced the creation of an Artificial Intelligence Litigation Task Force through a memorandum issued by the Attorney General. 
The task force is charged with challenging state-level AI laws that the Trump administration views as inconsistent with federal AI policy. The memo cites President Trump&#8217;s directive to promote U.S. national and economic security through global AI leadership and to minimize regulatory burdens on AI companies. The task force will argue that certain state AI laws unlawfully regulate interstate commerce, are preempted by federal law, or are otherwise unconstitutional. It will be chaired by the Attorney General or a designee and include representatives from multiple DOJ divisions, with coordination across relevant White House offices.</p><p><strong><a href="https://www.techpolicy.press/transcript-ostp-director-kratsios-testifies-on-trump-ai-action-plan/">Kratsios Defends Trump AI Action Plan, Opposes State and Global AI Rules at House Hearing</a></strong></p><p>Michael Kratsios, director of the White House Office of Science and Technology Policy, testified before the House on January 14, 2026, outlining the Trump administration&#8217;s AI Action Plan and its emphasis on innovation and federal coordination. Kratsios defended an executive order seeking to block state-level AI regulations, arguing that a fragmented regulatory landscape advantages large technology firms with greater compliance resources. He also described U.S. efforts to push back against international AI governance initiatives at forums such as the UN and G7, which the administration views as overly restrictive. Democratic lawmakers questioned the administration&#8217;s financial involvement in AI-related companies and pressed Kratsios on federal agencies&#8217; reported use of Elon Musk&#8217;s Grok chatbot, which has faced scrutiny over harmful content.</p><p><strong><a href="https://www.bbc.com/news/articles/cg4erx1n04lo">U.S. Approves Conditional Sale of Nvidia H200 AI Chips to China</a></strong></p><p>The U.S. 
government has approved Nvidia&#8217;s sale of its H200 artificial intelligence processors to China, according to the Department of Commerce. The H200 chip, Nvidia&#8217;s second-most-advanced AI semiconductor, had previously faced export restrictions over national security concerns. Under the revised policy, shipments are permitted if there is sufficient domestic supply in the United States, Chinese customers demonstrate adequate security procedures, and the chips are not used for military purposes. The policy applies to the H200 and certain less advanced processors, while Nvidia&#8217;s most advanced Blackwell chips remain restricted. President Trump has stated that sales will be limited to approved Chinese customers and subject to a 25% fee paid to the U.S. government.</p><p><strong><a href="https://docs.legis.wisconsin.gov/document/proposaltext/2025/REG/AB840.pdf">Wisconsin Lawmakers Introduce Bill Regulating Data Center Energy, Water, and Infrastructure Costs</a></strong></p><p>On January 9, 2026, Wisconsin lawmakers introduced Assembly Bill 840, proposing new statewide requirements for data centers. The bill directs the Public Service Commission to ensure that costs for electric infrastructure built primarily to serve data centers are not passed on to other utility customers. It requires renewable energy facilities serving data centers to be located on-site and mandates the use of closed-loop cooling systems to recycle water used for cooling. Data center operators would need to report annual water usage to the Department of Natural Resources and post a bond or other financial security to cover potential reclamation costs. 
If a data center project is abandoned, the owner must restore the site to its pre-construction condition.</p><p>&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#9679;&#8212;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#9679;&#8212;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;</p><p><strong>Sponsored</strong></p><p><strong>Institute for Law &amp; AI Summer Research Fellowship - Applications Open</strong></p><p>Applications are still open for the <a href="https://law-ai.org/srf-us/">Institute for Law &amp; AI</a>&#8216;s Summer Research Fellowship - a 10-week program offering $1,500/week for law students, PhD candidates, and postdocs interested in AI policy.</p><p>&#128197; <strong>Application deadline:</strong> January 30, 2026.</p><p>&#128279;<strong>Learn more and apply:</strong><a href="https://law-ai.org/srf-us/">https://law-ai.org/srf-us/</a></p><p></p><p>For sponsorship inquiries please email: alisarmustafa2@gmail.com</p><p>&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#9679;&#8212;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#9679;&#8212;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.pymnts.com/cpi-posts/brazils-cade-halts-whatsapp-ai-policy-and-opens-antitrust-probe/">Brazil Antitrust Regulator Suspends WhatsApp AI Policy, Opens Probe Into Competition Risks</a></strong></p><p>Brazil&#8217;s antitrust authority CADE has ordered WhatsApp to suspend enforcement of changes to its Business API terms that restrict third-party AI chatbots, while opening an investigation into potential anti-competitive conduct. 
The regulator is examining whether Meta&#8217;s updated WhatsApp Business Solution Terms unlawfully limit access for AI tool providers and favor Meta AI, Meta&#8217;s own chatbot. The policy, introduced in October and set to take effect January 15, bars third-party companies from offering AI chatbots through WhatsApp, affecting firms such as OpenAI, Microsoft, and Perplexity. CADE will assess whether the restrictions exceed what is necessary to operate the service and distort competition in the AI chatbot market. Similar antitrust reviews are underway in the EU and Italy.</p><p><strong><a href="https://brusselsmorning.com/european-commission-defends-policy-decisions-in-proposed-eu-ai-law-changes/90958/">European Commission Defends Core EU AI Act Rules in Proposed Amendments</a></strong></p><p>The European Commission has defended proposed amendments to the EU AI Act, confirming that its core policy decisions will remain unchanged. The revisions preserve bans on practices such as real-time remote biometric identification in public spaces, social scoring, and untargeted biometric data scraping under Article 5. The Commission also reaffirmed the Act&#8217;s risk-based framework for high-risk AI systems and maintained transparency obligations for general-purpose AI models, including additional requirements for models trained above 10&#178;&#8309; FLOPs.
The package addresses implementation challenges identified during early enforcement, including compliance burdens for small and mid-sized enterprises, while retaining phased application timelines and centralized supervisory powers for the European AI Office over high-risk and general-purpose AI systems.</p><p><strong><a href="https://babl.ai/south-koreas-revised-ai-basic-act-to-take-effect-january-22-with-new-oversight-watermarking-rules/">South Korea&#8217;s Revised AI Basic Act Introduces Oversight, Watermarking, and High-Impact AI Rules</a></strong></p><p>South Korea&#8217;s revised Artificial Intelligence Basic Act will take effect on January 22, 2026, establishing a national framework that combines AI promotion with trust, safety, and accountability requirements. Approved by the National Assembly in December, the law designates the Presidential Council on National Artificial Intelligence Strategy as the central body for coordinating AI policy. It introduces mandatory watermarking and disclosure rules for AI-generated content and enhanced oversight for &#8220;high-impact&#8221; AI systems that affect public services, rights, or critical infrastructure. Operators must implement risk management plans and may face data requests and inspections. The government will provide a one-year grace period focused on compliance preparation, while also expanding public-sector AI use, research infrastructure, and digital accessibility support.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://www.reuters.com/sustainability/boards-policy-regulation/meta-exclude-italy-rival-chatbot-ban-whatsapp-2026-01-12/">Meta Exempts Italy From WhatsApp Ban on Rival AI Chatbots After Antitrust Order</a></strong></p><p>Meta will exclude Italy from its planned ban on rival AI chatbots on WhatsApp following an order from the Italian antitrust authority, AGCM. 
The exemption applies to phone numbers with an Italian country code and responds to an ongoing investigation into whether Meta abused its market power by restricting third-party AI chatbots on WhatsApp. The updated WhatsApp Business API terms, set to take effect January 15, would otherwise block competing AI providers while allowing Meta&#8217;s own chatbot, Meta AI. The European Commission is also examining the policy but has not issued interim measures. Meta stated the restrictions were due to system constraints. Rival AI developers criticized the Italy-only exemption and called for broader suspension across the EU.</p><p><strong><a href="https://www.reuters.com/sustainability/boards-policy-regulation/musks-ai-bot-grok-limits-image-generation-x-paid-users-after-backlash-2026-01-09/">xAI Restricts Grok Image Generation on X After Backlash Over Sexualized Content</a></strong></p><p>xAI has restricted image generation and editing features of its Grok chatbot on X following backlash over the creation and posting of sexualized images. Users had been able to ask Grok to edit photos of people, including generating sexualized images without consent, which were then automatically posted on the platform. xAI limited these features to paying subscribers, preventing Grok from generating and publishing images directly in replies. However, users can still create images through the Grok tab and post them manually, and the standalone Grok app continues to allow image generation without a subscription. 
Regulators in the European Union and other jurisdictions said the changes do not address concerns over illegal and harmful content and have opened inquiries into Grok&#8217;s use.</p><p><strong><a href="https://www.side-line.com/bandcamp-bans-ai-music-new-generative-ai-policy/">Bandcamp Bans Generative AI Music Uploads Under New Platform Policy</a></strong></p><p>Bandcamp has introduced a new generative AI policy prohibiting music and audio created wholly or substantially using generative AI. Announced in a post titled <em>&#8220;Keeping Bandcamp Human,&#8221;</em> the policy also bans AI-enabled impersonation of other artists or styles. Bandcamp stated it may remove content based on suspicion of AI generation and is asking users to report releases that appear heavily reliant on AI. The rules extend beyond uploads, barring scraping, text and data mining, and the use of Bandcamp content to train AI models. The policy took effect in January 2026 and applies platform-wide, affecting uploads, moderation practices, and downstream use of Bandcamp&#8217;s music catalog.</p><p><strong><a href="https://fortune.com/2026/01/15/former-openai-policy-chief-creates-nonprofit-institute-calls-for-independent-safety-audits-of-frontier-ai-models/">Former OpenAI Policy Chief Launches Nonprofit Advocating Independent AI Safety Audits</a></strong></p><p>Former OpenAI policy researcher Miles Brundage has launched the AI Verification and Evaluation Research Institute (AVERI), a nonprofit focused on promoting independent safety audits for frontier AI models. Announced alongside a research paper coauthored by more than 30 AI safety and governance experts, AVERI aims to develop standards and policy frameworks for external evaluation of powerful AI systems. Brundage said current practices rely largely on voluntary, self-reported testing by AI companies, with no common auditing requirements. AVERI does not plan to conduct audits itself but seeks to shape the emerging auditing ecosystem. 
The organization has raised $7.5 million toward a $13 million goal and is exploring roles for insurers, investors, and regulators in driving adoption of independent AI audits.</p><p><strong><a href="https://www.sammyfans.com/2026/01/14/samsung-clarifies-galaxy-ai-policy-confirms-free-basic-features/">Samsung Confirms Core Galaxy AI Features Will Remain Free</a></strong></p><p>Samsung has clarified its Galaxy AI policy, confirming that its basic AI features will remain free for users indefinitely. The update appeared in revised fine print on Samsung&#8217;s Galaxy AI support pages, replacing earlier language that said the features would be free only &#8220;through 2025.&#8221; The clarification applies to AI tools developed by Samsung, including Call Assist, Writing Assist, Photo Assist, Interpreter, and Note Assist. Samsung stated that while it may introduce new or more advanced AI features in the future that could require payment, existing basic Galaxy AI features will not be monetized. 
The policy does not extend to third-party AI tools integrated into Samsung devices, such as Google&#8217;s Circle to Search, which may follow separate pricing models.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01202026?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01202026?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><a href="https://aaai.org/conference/aaai/aaai-26/">The 40th Annual AAAI Conference on Artificial Intelligence </a>| Singapore |</p><p>January 20 &#8211; 27</p></li><li><p><a href="https://www.etsi.org/events/2591-etsi-ai-data-conference-2026">The ETSI AI and Data Conference 2026</a> | France |  9-11 February 2026</p></li><li><p><a href="https://impact.indiaai.gov.in">India - AI Impact Summit 2026</a> | New Delhi, India | February 16 - 20</p></li><li><p><a href="https://www.iaseai.org/our-programs/iaseai26">The International Association for Safe &amp; Ethical AI (IASEAI)</a> | Paris, France | 24&#8211;26 February</p></li><li><p><a 
href="https://ismg.events/summit/implications-of-ai-virtual-feb-2026/">Cybersecurity Summit: Implications of AI</a> | Online | February 24, 2026</p></li><li><p><a href="https://iapp.org/conference/iapp-global-summit">IAPP Global Summit 2026 </a> | Washington, DC | 30 March&#8211;2 April</p></li><li><p><a href="https://tais2026.cc">Technical AI Safety Conference (TAIS) 2026</a> | Oxford, UK | May 14, 2026</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">LinkedIn</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>
low-risk health and fitness wearables that do not make medical claims. House Minority Leader Hakeem Jeffries is planning to <a href="https://www.politico.com/live-updates/2026/01/08/congress/jeffries-to-meet-with-new-house-dem-ai-working-group-00715720">hold</a> the first meeting of the House Democratic Commission on AI and the Innovation Economy to coordinate policy development. In New Mexico, lawmakers <a href="https://www.koat.com/article/new-mexico-lawmakers-propose-bills-to-regulate-ai-and-combat-deepfakes/69930476">introduced</a> bills addressing the distribution of intimate AI-generated deepfakes and providing appeal rights for AI-based employment decisions.</p><p>&#127757; <strong>Globally</strong>, Ireland&#8217;s government and regulators <a href="https://www.irishtimes.com/crime-law/2026/01/06/non-consensual-ai-images-on-social-media-illegal-content-irish-regulator-says/">began</a> examining whether existing laws adequately address AI-generated non-consensual sexual images linked to X&#8217;s Grok tool. 
India <a href="https://timesofindia.indiatimes.com/india/centre-imposes-norms-for-ai-based-cancer-detection/articleshow/126383272.cms">classified</a> AI-based cancer detection software as regulated medical devices, and Rajasthan <a href="https://techobserver.in/news/egov/rajasthan-unveils-ai-policy-ahead-of-india-ai-impact-summit-319858/">released</a> a state AI policy and hosted a regional conference ahead of the India AI Impact Summit.</p><p>&#128126; <strong>In Industry</strong>, DeepSeek <a href="https://cybernews.com/ai-news/deepseek-italy-regulation-hallucination-llm/">agreed</a> to modify its chatbot for the Italian market following a regulatory review focused on AI hallucinations, while Meta&#8217;s <a href="https://mezha.net/eng/bukvy/meta-s-2-billion-manus-acquisition-faces-regulatory-challenges-amid-us-china-ai-tensions/">planned</a> $2 billion acquisition of AI startup Manus entered regulatory review by Chinese authorities amid cross-border technology and investment controls.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. 
Your help supports my writing</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong>United States&nbsp;</strong></p><p><strong><a href="https://www.axios.com/local/salt-lake-city/2026/01/07/utah-ai-drug-prescriptions-doctronic">Utah Authorizes First AI-Issued Prescription Refills in the U.S.</a></strong></p><p>Utah regulators have launched a pilot program allowing artificial intelligence systems to issue prescription refills for certain medications, marking the first instance in the U.S. where prescriptions are filled by AI rather than directly by physicians. The program, operated through startup Doctronic, covers refills for 190 commonly prescribed drugs while excluding controlled substances, injectables, and ADHD medications, with initial prescriptions still required to be written by human doctors. The initiative operates within Utah&#8217;s regulatory sandbox under the Office of Artificial Intelligence Policy, with built-in physician review thresholds and data collection intended to inform future state and federal AI healthcare policy.</p><p><strong><a href="https://www.koat.com/article/new-mexico-lawmakers-propose-bills-to-regulate-ai-and-combat-deepfakes/69930476">New Mexico Lawmakers Introduce Bills Addressing AI Use and Deepfakes</a></strong></p><p>New Mexico lawmakers are introducing legislation to address the use of artificial intelligence, with a focus on deepfakes and automated decision-making. House Bill 22, introduced by Rep. Christine Chandler, proposes rules governing the distribution of certain AI-generated images, including intimate deepfakes. 
The bill would classify the distribution of such content as a petty misdemeanor and allow affected individuals to pursue civil action. Chandler cited concerns about reputational harm and misuse of manipulated images. In addition, House Bill 28 would establish an appeals process for employment decisions made using AI systems and require chatbots to periodically disclose that users are interacting with artificial intelligence. Supporters say the proposals respond to growing AI use across digital platforms and workplaces.</p><p><strong><a href="https://www.reuters.com/business/healthcare-pharmaceuticals/us-fda-limit-regulation-health-fitness-wearables-commissioner-says-2026-01-07/">FDA Clarifies Limited Oversight of Health and Fitness Wearables</a></strong></p><p>The U.S. Food and Drug Administration issued new guidance outlining a limited regulatory approach to health and fitness wearables and related software, classifying low-risk wellness tools such as fitness trackers and activity apps as non-medical devices when they do not make claims related to disease diagnosis or treatment. FDA Commissioner Marty Makary said the agency aims to provide clear boundaries for companies offering informational tools while maintaining oversight when products present themselves as medical-grade or influence clinical decision-making, citing prior enforcement actions involving blood-pressure estimation features that crossed into regulated medical territory.</p><p><strong><a href="https://www.politico.com/live-updates/2026/01/08/congress/jeffries-to-meet-with-new-house-dem-ai-working-group-00715720">Jeffries Prepares to Meet with the House Democratic Commission on AI</a></strong></p><p>House Minority Leader Hakeem Jeffries is set to meet this week with members of the newly formed House Democratic Commission on AI and the Innovation Economy. Established in December, the commission is led by Reps. 
Ted Lieu, Josh Gottheimer, and Valerie Foushee and is intended to coordinate Democratic policy work on artificial intelligence. The meeting comes as Congress continues debating the balance between federal and state authority over AI regulation, following repeated failures to advance federal preemption proposals. The commission&#8217;s work also unfolds alongside increased lobbying activity by major AI companies and recent White House actions, including an executive order directing agencies to assess state AI laws and consider a federal legislative framework.</p><p></p><p>&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#9679;&#8212;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#9679;&#8212;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;</p><p><strong>Sponsored</strong></p><p><strong>Institute for Law &amp; AI Summer Research Fellowship - Applications Open</strong></p><p>The<a href="https://law-ai.org/"> Institute for Law &amp; AI</a> is seeking law students (JD/LLM), PhD candidates, and postdoctoral researchers for their 10-week Summer Research Fellowship focused on AI law and policy.</p><p>This remote-first fellowship offers $1,500 weekly compensation plus travel expenses for a one-week immersive experience in Washington, DC. Fellows will work with expert mentors at the leading edge of AI policy development, with opportunities for direct engagement with policymakers, government officials, and private-sector leaders.</p><p>The program includes personalized career support, regular Q&amp;A sessions with top experts in AI law and policy, and potential pathways to permanent roles at the Institute for Law &amp; AI or affiliated organizations. 
The fellowship welcomes applicants with diverse skill sets and experience levels in AI law and policy.</p><p>&#128197; <strong>Application deadline:</strong> January 30, 2026.</p><p>&#128279;<strong>Learn more and apply:</strong><a href="https://law-ai.org/srf-us/">https://law-ai.org/srf-us/</a></p><p>&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#9679;&#8212;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#9679;&#8212;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;&#8212;&#9679;&#8212;&#8212;</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.irishtimes.com/crime-law/2026/01/06/non-consensual-ai-images-on-social-media-illegal-content-irish-regulator-says/">Ireland Reviews Legal Framework on AI-Generated Sexualized Images Following Grok Concerns</a></strong></p><p>Ireland&#8217;s Attorney General is examining whether existing laws adequately address AI-generated non-consensual intimate images and child sexual abuse material, following concerns raised about images produced by Grok, an AI tool integrated into X. The Department of Communications confirmed senior officials are reviewing the legal framework with the Attorney General, noting that while such content is illegal under current law, enforcement provisions of the EU AI Act will not take effect until August. 
Irish regulators, civil liberties groups, and Rape Crisis Ireland have called attention to the issue, while government officials reiterated that sharing non-consensual images remains a criminal offense regardless of whether content is AI-generated.</p><p><strong><a href="https://timesofindia.indiatimes.com/india/centre-imposes-norms-for-ai-based-cancer-detection/articleshow/126383272.cms">India Brings AI-Based Cancer Detection Tools Under Medical Device Regulation</a></strong></p><p>India&#8217;s central government has placed artificial intelligence&#8211;based cancer detection and diagnostic software under formal regulatory oversight. Under a notification issued by the Central Drugs Standard Control Organisation (CDSCO), AI tools that analyze medical images such as X-rays and CT scans to detect or diagnose cancer will be classified as Class C medical devices, a category for moderate-to-high risk products. The change requires such software to obtain regulatory approval, undergo safety validation, meet quality standards, and be subject to ongoing monitoring before broader clinical use. The framework applies to tools already used in some hospitals and diagnostic centers and may extend to other AI-based medical software as their role in healthcare expands.</p><p><strong><a href="https://techobserver.in/news/egov/rajasthan-unveils-ai-policy-ahead-of-india-ai-impact-summit-319858/">Rajasthan Launches AI Policy Ahead of India AI Impact Summit</a></strong></p><p>Rajasthan has unveiled a new artificial intelligence and machine learning policy during a regional AI conference held ahead of the India AI Impact Summit scheduled for February. The Rajasthan Regional AI Impact Conference brought together government officials, industry representatives, startups and academics to discuss AI use in governance, infrastructure, workforce development and innovation. 
Union and state leaders outlined initiatives to expand AI skills training, including a national programme to train one million youth and the launch of the YUVA AI for All literacy campaign. Rajasthan also introduced an AI portal, an iStart learning management system, and signed agreements with institutions such as Google and IIT Delhi to support research, skilling and public-private collaboration.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://cybernews.com/ai-news/deepseek-italy-regulation-hallucination-llm/">DeepSeek Plans Italy-Specific AI Chatbot Following Hallucination Review</a></strong></p><p>Chinese AI company DeepSeek said it will introduce a version of its chatbot tailored specifically for Italy, contingent on meeting regulatory requirements set by the Italian competition authority (AGCM). The move follows scrutiny over AI &#8220;hallucinations,&#8221; or fabricated outputs, and broader compliance with Italian and EU rules. DeepSeek acknowledged that hallucinations are a systemic issue in generative AI and committed to steps including clearer user disclosures, interface changes, and staff training on Italian law. The company will submit a formal report outlining its commitments, with potential fines of up to &#8364;10 million for noncompliance. A return to Italian app stores will depend on regulator approval and possible classification under the EU Digital Services Act.</p><p><strong><a href="https://mezha.net/eng/bukvy/meta-s-2-billion-manus-acquisition-faces-regulatory-challenges-amid-us-china-ai-tensions/">Meta&#8217;s $2 Billion Manus Acquisition Draws Scrutiny From Chinese Regulators</a></strong></p><p>Meta&#8217;s planned acquisition of AI startup Manus for about $2 billion is facing regulatory review centered on China, despite U.S. authorities indicating the deal is permissible.
Concerns emerged earlier this year after U.S. venture firm Benchmark led a funding round in Manus, prompting scrutiny under U.S. rules limiting investments in Chinese AI companies. Manus subsequently relocated its core operations from Beijing to Singapore. Chinese regulators are now assessing whether the move and the transaction require export approvals under technology control laws, with potential legal exposure if restricted technologies were transferred without authorization.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01122026?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01122026?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><a href="https://aaai.org/conference/aaai/aaai-26/">The 40th Annual AAAI Conference on Artificial Intelligence </a>| Singapore |</p><p>January 20 &#8211; 27</p></li><li><p><a href="https://www.etsi.org/events/2591-etsi-ai-data-conference-2026">The ETSI AI and Data Conference 2026</a> | France |  9-11 February 2026</p></li><li><p><a href="https://impact.indiaai.gov.in">India 
- AI Impact Summit 2026</a> | New Delhi, India | February 16 - 20</p></li><li><p><a href="https://www.iaseai.org/our-programs/iaseai26">The International Association for Safe &amp; Ethical AI (IASEAI)</a> | Paris, France | 24&#8211;26 February</p></li><li><p><a href="https://iapp.org/conference/iapp-global-summit">IAPP Global Summit 2026 </a> | Washington, DC | 30 March-2 April</p></li><li><p><a href="https://tais2026.cc">Technical AI Safety Conference (TAIS) 2026</a> | Oxford, UK | May 14 2026</p></li><li><p><a href="https://mila.quebec/en/news/milas-summer-school-in-responsible-ai-and-human-rights-heads-to-mexico-city-in-2026">Mila Summer School in Responsible AI and Human Rights </a>| Mexico City, Mexico | May 17-22</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 01.06.2026]]></title><description><![CDATA[Florida proposes AI Bill of Rights, China targets emotional AI risks, Grok AI sparks backlash over sexualized deepfakes.]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01062026</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01062026</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Tue, 06 Jan 2026 19:19:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, Florida lawmakers <a 
href="https://www.cbsnews.com/miami/news/florida-lawmaker-proposes-artificial-intelligence-bill-of-rights-ai/">introduced</a> an &#8220;Artificial Intelligence Bill of Rights&#8221; covering disclosures, parental controls, political ads, and limits on government use of foreign-linked AI. Tennessee <a href="https://ppc.land/tennessee-senator-introduces-bill-that-could-make-ai-companion-training-a-felony/">proposed</a> legislation to criminalize certain AI companion training practices, while the Department of Health and Human Services <a href="https://www.hhs.gov/press-room/hhs-ai-rfi.html">issued</a> a request for information on expanding AI use in clinical care through regulation, reimbursement, and research. Asian American leaders <a href="https://www.aabdc.com/post/asian-americans-launch-ai-alliance-to-drive-education-innovation-policy">launched</a> a national Asian American AI Alliance focused on workforce development, policy, and industry coordination.</p><p>&#127757; <strong>Globally</strong>, China <a href="https://www.reuters.com/world/asia-pacific/china-issues-drafts-rules-regulate-ai-with-human-like-interaction-2025-12-27/">released</a> draft rules governing AI systems that simulate human interaction and emotional engagement, while Taiwan <a href="https://www.taiwantoday.tw/Economics/Top-News/279611/Artificial-Intelligence-Fundamental-Act-passes">passed</a> its Artificial Intelligence Fundamental Act establishing principles for AI governance. 
Kazakhstan <a href="https://cwbip.com/insights/news/2025/kazakhstan-adopts-its-first-ai-law">adopted</a> its first AI law introducing risk-based regulation, labeling requirements, and copyright provisions, and South Korea&#8217;s business groups <a href="https://www.koreatimes.co.kr/business/20251229/koreas-major-biz-lobby-calls-for-eased-regulations-amid-ai-boom">called</a> for increased AI investment and public-private cooperation in 2026.</p><p>&#128126; <strong>In Industry</strong>, X&#8217;s Grok AI <a href="https://www.reuters.com/legal/litigation/grok-says-safeguard-lapses-led-images-minors-minimal-clothing-x-2026-01-02/">generated</a> sexualized images of women and minors following user prompts, drawing responses from regulators in multiple countries. Instacart <a href="https://www.mintz.com/insights-center/viewpoints/54731/2025-12-30-instacart-agrees-settlement-ftc-lawsuit-over-deceptive">agreed</a> to a $60 million settlement with the FTC over deceptive marketing and AI-driven pricing practices, and Italy <a href="https://techcrunch.com/2025/12/24/italy-tells-meta-to-suspend-its-policy-that-bans-rival-ai-chatbots-from-whatsapp/">ordered</a> Meta to suspend WhatsApp policies that restrict the use of rival AI chatbots on the platform.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. 
Your help supports my writing</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong>United States&nbsp;</strong></p><p><strong><a href="https://www.cbsnews.com/miami/news/florida-lawmaker-proposes-artificial-intelligence-bill-of-rights-ai/">Florida Lawmaker Proposes &#8220;Artificial Intelligence Bill of Rights&#8221;</a></strong></p><p>A Florida state senator introduced legislation to create an &#8220;Artificial Intelligence Bill of Rights,&#8221; aiming to regulate how AI is used across the state while protecting minors, consumers, and political transparency. The proposal would give parents greater control over children&#8217;s interactions with AI, require disclosures when people are communicating with AI systems, and set limits on the unauthorized use of individuals&#8217; names, images, or likenesses. It would also mandate labeling of AI-generated political ads and bar state agencies from contracting with AI firms tied to foreign adversaries.</p><p><strong><a href="https://ppc.land/tennessee-senator-introduces-bill-that-could-make-ai-companion-training-a-felony/">Tennessee Senator Introduces Bill to Criminalize Certain AI Companion Training</a></strong></p><p>Tennessee Senator Becky Massey introduced SB 1493, a bill that would make it a Class A felony to knowingly train AI systems to act as companions, provide emotional support through open-ended conversations, simulate human relationships, or present themselves as mental health professionals. 
The proposal also targets AI trained to encourage isolation, self-harm, or the sharing of sensitive information, with both criminal penalties and civil liability for violations.</p><p><strong><a href="https://www.hhs.gov/press-room/hhs-ai-rfi.html">HHS Seeks Public Input on Using AI to Lower Health Care Costs</a></strong></p><p>The U.S. Department of Health and Human Services announced a Request for Information seeking public input on how AI can be more widely adopted in clinical care to improve outcomes and reduce health care costs. HHS is asking stakeholders to weigh in on how the department can use regulation, reimbursement policy, and research funding to accelerate responsible AI use across the health system. The initiative emphasizes improving patient and provider experiences, reducing administrative burden, ensuring data security and interoperability under HIPAA, and supporting long-term health challenges. Feedback will help guide future HHS policy and complement the department&#8217;s broader AI strategy.</p><p><strong><a href="https://www.aabdc.com/post/asian-americans-launch-ai-alliance-to-drive-education-innovation-policy">Asian Americans Launch AI Alliance to Shape Education, Innovation, and Policy</a></strong></p><p>A new coalition, the Asian American AI Alliance, has launched to strengthen Asian American leadership in artificial intelligence across education, industry, and public policy. Incubated by the Asian American Business Development Center, the Alliance brings together entrepreneurs, technologists, and policy experts from major firms and startups. With pillars focused on workforce development, responsible innovation, and advocacy, the group aims to elevate Asian American voices in shaping AI&#8217;s future. 
The Alliance will formally debut at a kickoff summit in New York City in January 2026, positioning itself as a platform to influence the direction of AI growth in the U.S.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01062026?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-01062026?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.reuters.com/world/asia-pacific/china-issues-drafts-rules-regulate-ai-with-human-like-interaction-2025-12-27/">China Issues Draft Rules to Regulate Human-Like AI Interaction</a></strong></p><p>China&#8217;s cyber regulator released draft rules to tighten oversight of AI systems designed to simulate human personalities and engage users emotionally. The proposal would apply to consumer-facing AI that interacts through text, images, audio, or video, requiring providers to take responsibility for safety across the full product lifecycle. Companies would need to warn users against excessive use, monitor signs of emotional dependence or addiction, and intervene when risks emerge. 
The draft also strengthens requirements around algorithm review, data security, and personal information protection, while setting clear content red lines prohibiting material that threatens national security, spreads rumors, or promotes violence or obscenity.</p><p><strong><a href="https://www.taiwantoday.tw/Economics/Top-News/279611/Artificial-Intelligence-Fundamental-Act-passes">Taiwan Passes Artificial Intelligence Fundamental Act</a></strong></p><p>Taiwan&#8217;s Legislative Yuan passed the Artificial Intelligence Fundamental Act on Dec. 23, establishing a national legal framework for ethical AI development and governance. The law sets out seven core principles&#8212;including human autonomy, privacy and data governance, safety, transparency, fairness, and accountability&#8212;to guide AI research, development, and deployment. It introduces an AI risk classification approach and emphasizes protecting cultural values, promoting social justice, and addressing environmental sustainability. The Act also calls for public disclosure of AI use, stronger information security measures, and education and training to reduce the digital divide.</p><p><strong><a href="https://cwbip.com/insights/news/2025/kazakhstan-adopts-its-first-ai-law">Kazakhstan Adopts Its First Artificial Intelligence Law</a></strong></p><p>Kazakhstan has adopted its first Artificial Intelligence Law, set to take effect on Jan. 18, 2026, establishing a national framework to govern AI development while balancing innovation, ethics, and intellectual property rights. The law introduces clear definitions, risk-based regulation, and developer obligations focused on transparency, safety, and accountability. It prohibits harmful practices such as manipulative AI, unlawful biometric use, and emotion recognition without consent, while requiring labeling of AI-generated content. 
Notably, the law affirms that only human-created works qualify for copyright, allowing creators to block use of their works for AI training via machine-readable signals.</p><p><strong><a href="https://www.koreatimes.co.kr/business/20251229/koreas-major-biz-lobby-calls-for-eased-regulations-amid-ai-boom">South Korean Business Groups Push for AI-Led Growth and Public-Private Cooperation</a></strong></p><p>South Korea&#8217;s leading business lobbies are urging aggressive investment in AI and stronger public-private cooperation in 2026 to maintain global competitiveness. Leaders from the Korea Chamber of Commerce and Industry and the Federation of Korean Industries identified AI as a core growth engine amid slowing economic growth and geopolitical uncertainty. Business groups called on the government to ease regulatory constraints, improve policy predictability, and support large-scale investments, particularly for technology and semiconductor firms. They also emphasized closer collaboration between government and industry to build AI capabilities, modernize institutions, and support Korean companies&#8217; global expansion through AI-driven infrastructure and innovation.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://www.reuters.com/legal/litigation/grok-says-safeguard-lapses-led-images-minors-minimal-clothing-x-2026-01-02/">Grok AI Generates Nonconsensual Sexualized Images on X, Prompting Global Alarm</a></strong></p><p>Elon Musk&#8217;s Grok AI chatbot on X has come under intense scrutiny after Reuters found it was widely used to generate sexualized, nonconsensual images of real women and, in some cases, minors. Users were able to upload photos and prompt Grok to digitally &#8220;undress&#8221; subjects or depict them in revealing outfits, dramatically lowering the barrier to creating abusive deepfakes. 
The spread of such content has triggered regulatory backlash in France and India, while child-safety and exploitation experts said the misuse was predictable and preventable. X and its AI unit xAI have so far declined substantive comment.</p><p><strong><a href="https://www.mintz.com/insights-center/viewpoints/54731/2025-12-30-instacart-agrees-settlement-ftc-lawsuit-over-deceptive">Instacart Reaches $60M FTC Settlement Over Deceptive AI-Driven Pricing and Marketing</a></strong></p><p>Instacart agreed to pay $60 million to settle an FTC lawsuit alleging deceptive marketing practices, including misleading &#8220;free delivery&#8221; claims, hidden fees, unclear satisfaction guarantees, and unlawful conversion of free trials into paid memberships. The case also highlighted concerns over Instacart&#8217;s AI-driven pricing tool, which resulted in some shoppers paying up to 23% more for the same groceries. The settlement requires refunds to hundreds of thousands of consumers and restricts future misrepresentations, reinforcing that traditional consumer protection laws apply to AI-enabled pricing, personalization, and automated decision-making systems.</p><p><strong><a href="https://techcrunch.com/2025/12/24/italy-tells-meta-to-suspend-its-policy-that-bans-rival-ai-chatbots-from-whatsapp/">Italy Orders Meta to Halt Banning Rival WhatsApp AI Bots</a></strong></p><p>Italy&#8217;s competition regulator has ordered Meta to suspend a new policy banning third-party AI chatbots from WhatsApp&#8217;s business API. The Italian watchdog argues the move may abuse Meta&#8217;s market dominance and harm innovation and competition in the AI chatbot market. The suspension follows Meta&#8217;s October API changes that restrict general-purpose chatbots like ChatGPT and Claude, though customer service bots are still allowed. Meta defended the policy, citing system strain and claiming WhatsApp is not a distribution platform for AI bots. 
The European Commission is also investigating the policy for possible anti-competitive effects in the EEA. Meta says it will appeal the decision.</p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><a href="https://aaai.org/conference/aaai/aaai-26/">The 40th Annual AAAI Conference on Artificial Intelligence</a> | Singapore | January 20 &#8211; 27</p></li><li><p><a href="https://www.etsi.org/events/2591-etsi-ai-data-conference-2026">The ETSI AI and Data Conference 2026</a> | France | 9-11 February 2026</p></li><li><p><a href="https://impact.indiaai.gov.in">India - AI Impact Summit 2026</a> | New Delhi, India | February 16 - 20</p></li><li><p><a href="https://www.iaseai.org/our-programs/iaseai26">The International Association for Safe &amp; Ethical AI (IASEAI)</a> | Paris, France | 24&#8211;26 February</p></li><li><p><a href="https://www.niso.org/events/ai-or-not-ai-ethics-ai-use">To AI or Not to AI: The Ethics of AI Use</a> | Online | March 11</p></li><li><p><a href="https://iapp.org/conference/iapp-global-summit">IAPP Global Summit 2026</a> | Washington, DC | 30 March-2 April</p></li><li><p><a href="https://mila.quebec/en/news/milas-summer-school-in-responsible-ai-and-human-rights-heads-to-mexico-city-in-2026">Mila Summer School in Responsible AI and Human Rights</a> | Mexico City, Mexico | May 17-22</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar 
Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">LinkedIn</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[A Note of Gratitude and Updates for 2026]]></title><description><![CDATA[How my day-to-day work changed and why this newsletter is changing with it]]></description><link>https://alisarmustafa.substack.com/p/a-note-of-gratitude-and-updates-for</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/a-note-of-gratitude-and-updates-for</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Tue, 30 Dec 2025 02:02:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c9b50737-7e24-4849-8e8e-39302423c910_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear readers,</p><p>I&#8217;m taking a break from writing for the holidays, but I wanted to write you a special note of gratitude.</p><p>When I started this newsletter two years ago, it was simply a way to keep up with the ever-changing world of AI policy. I never expected it to become what it has: a community of over 4,000 fellow policy enthusiasts who care deeply about getting AI governance right. Thank you for reading, for your thoughtful replies, and for making this feel like a dialogue.</p><p>My world has changed significantly since we started this journey together, and it&#8217;s time for the newsletter to evolve with it.</p><p>My day-to-day work has increasingly included AI safety: building evaluation datasets, red-teaming models, helping companies actually implement the policies we all write about. I&#8217;ve been feeling a gap between what I share here and what I spend my hours doing, and I want to change that. 
I want you to get the full picture of what I&#8217;m learning and working on, in addition to the policy side.</p><p>So here&#8217;s what you can expect from me in the coming year:</p><ul><li><p><strong>Weekly AI Policy Newsletter:</strong> The roundup you know and love, continuing as usual.</p></li><li><p><strong>Monthly AI Safety Newsletter:</strong> A new addition with the same format you&#8217;re used to, but focused on AI safety. I&#8217;ll be bringing together the most recent and influential AI safety research and what it actually looks like to implement safety at organizations.</p></li><li><p><strong>Op-Eds &amp; Deep Dives (every other month):</strong> Longer pieces where I dig into a specific topic in Responsible AI, whether that&#8217;s a policy development, a safety challenge, or something at the intersection of both.</p></li></ul><p>I love hearing from you, and I read every message you send me. If there&#8217;s something specific you&#8217;d like me to write about, please reach out by replying to this post or messaging me on Substack.</p><p>Wishing you a restful end to the year,</p><p>Alisar Mustafa</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The AI Policy Newsletter is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 12.22.2025]]></title><description><![CDATA[New York Aligns AI Safety Law With California, Ireland Sets Out AI Governance Plan, OpenAI Tightens Teen Safety Rules]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12222025</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12222025</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Tue, 23 Dec 2025 01:22:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, New York <a href="https://www.nytimes.com/2025/12/19/nyregion/ai-bill-regulations-ny.html">signed</a> an AI safety law and plans revisions to align with the California framework, while federal regulators <a href="https://www.ferc.gov/news-events/news/ferc-directs-nations-largest-grid-operator-create-new-rules-embrace-innovation-and">ordered</a> PJM to update grid rules to better accommodate AI-driven data centers and California <a href="https://www.sacbee.com/news/politics-government/capitol-alert/article313751597.html">launched</a> a 30-member council to guide responsible AI adoption in state government. In Congress, Sen. 
Marsha Blackburn <a href="https://www.wbbjtv.com/2025/12/19/blackburn-unveils-national-policy-framework-for-artificial-intelligence/">unveiled</a> a national AI framework aligned with President Trump&#8217;s push for a single federal rulebook, as Sen. Bernie Sanders <a href="https://thehill.com/opinion/robbys-radar/5655111-bernie-sanders-data-center-moratorium/amp/">called</a> for a nationwide moratorium on new AI data centers over energy and oversight concerns. Separately, the Trump administration <a href="https://www.nextgov.com/people/2025/12/trump-admin-launches-us-tech-force-recruit-temporary-workers-after-shedding-thousands-year/410159/">announced</a> a U.S. Tech Force to recruit AI talent into the government following large workforce reductions.</p><p>&#127757; <strong>Globally</strong>, Ireland&#8217;s new interim report <a href="https://data.oireachtas.ie/ie/oireachtas/committee/dail/34/joint_committee_on_artificial_intelligence/reports/2025/2025-12-16_first-interim-report_en.pdf">outlined</a> emerging global approaches to AI governance, while Mozambique <a href="https://iafrica.com/mozambique-develops-national-ai-strategy-with-unesco-support/">began</a> developing a national AI strategy with UNESCO focused on ethics, inclusion, and human rights. In India, Odisha <a href="https://www.newindianexpress.com/states/odisha/2025/Dec/18/odisha-ai-summit-commences-friday-precursor-to-india-meet-in-february">hosted</a> an AI summit as a lead-up to the IndiaAI Impact Summit, highlighting its ambition to scale AI across governance and public services.</p><p>&#128126; <strong>In Industry</strong>, OpenAI <a href="https://techcrunch.com/2025/12/19/openai-adds-new-teen-safety-rules-to-models-as-lawmakers-weigh-ai-standards-for-minors/">introduced</a> stricter safety rules and literacy tools for teen ChatGPT users. 
The FTC <a href="https://www.reuters.com/legal/litigation/ftc-investigating-instacarts-ai-pricing-tool-source-says-2025-12-17/">opened</a> an investigation into Instacart&#8217;s AI-driven pricing tools amid concerns over price discrimination, while Microsoft <a href="https://www.webpronews.com/microsoft-overhauls-windows-11-ai-privacy-policy-for-user-consent/">revised</a> Windows 11 policies to require explicit user consent before AI features can access personal files.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. Your help supports my writing</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong>United States&nbsp;</strong></p><p><strong><a href="https://www.nytimes.com/2025/12/19/nyregion/ai-bill-regulations-ny.html">New York Signs AI Safety Law and Plans Revisions to Align With California Framework</a></strong></p><p>New York Governor Kathy Hochul signed legislation establishing new state rules governing the development of advanced AI models, while agreeing to modify the law to more closely align with California&#8217;s approach. The bill, passed by lawmakers in June, was revised following extensive lobbying by technology companies seeking narrower coverage and fewer safety obligations. 
Under the agreement, Hochul signed the original legislation, and lawmakers will vote early next year on amendments reducing the number of companies subject to the rules and scaling back certain safety requirements. Known as the Responsible AI Safety and Education Act, the law sets standards for safety and transparency for leading AI developers and marks New York&#8217;s first major AI law enacted amid growing federal-state tensions over AI regulation.</p><p><strong><a href="https://www.ferc.gov/news-events/news/ferc-directs-nations-largest-grid-operator-create-new-rules-embrace-innovation-and">Federal Commission Mandates New Grid Rules for AI Data Centers</a></strong></p><p>The Federal Energy Regulatory Commission (FERC) directed PJM Interconnection, the nation&#8217;s largest grid operator, to revise its tariff to create clearer and more consistent rules for serving AI-driven data centers and other large electricity loads co-located with power generation. FERC found PJM&#8217;s existing tariff unjust and unreasonable due to unclear rates, terms, and conditions for interconnection and transmission service. The order requires PJM to establish transparent transmission service options for customers managing co-located loads and to protect grid reliability and consumers across its 13-state footprint and Washington, D.C. FERC also instructed PJM to report by January 19, 2026, on efforts to accelerate new generation, improve load forecasting, and address capacity shortfalls.</p><p><strong><a href="https://www.sacbee.com/news/politics-government/capitol-alert/article313751597.html">Newsom Launches California Innovation Council to Guide Responsible AI Use</a></strong></p><p>California Governor Gavin Newsom announced the creation of the California Innovation Council, a 30-member advisory body tasked with guiding state artificial intelligence policy and the use of AI across state agencies. 
The council includes academics, policy experts, industry representatives, and former lawmakers from organizations such as the University of California system, Stanford University, the Mozilla Foundation, and the Brookings Institution. It will operate through four subgroups focused on children&#8217;s online safety, fraud prevention, economic development and workforce issues, and modernization of government services. Newsom also announced the launch of Poppy.AI, an AI-powered digital assistant designed to enhance data security and support AI adoption within state government operations.</p><p><strong><a href="https://www.wbbjtv.com/2025/12/19/blackburn-unveils-national-policy-framework-for-artificial-intelligence/">Blackburn Unveils Federal Framework to Standardize AI Regulation Nationwide</a></strong></p><p>Senator Marsha Blackburn released a legislative framework for the proposed TRUMP AMERICA AI Act, which would codify President Trump&#8217;s executive order establishing a single federal regulatory framework for AI. The proposal seeks to preempt state AI laws and create national standards covering child safety, creator rights, political bias, and community impacts. Key provisions include imposing a duty of care on AI developers, expanding parental controls, restricting nonconsensual digital replicas and unauthorized AI training on personal or copyrighted data, and requiring bias audits for high-risk AI systems. The framework also calls for reporting on AI-related job displacement and requires data center operators to bear infrastructure costs. 
Senator Blackburn plans to introduce the bill in the next congressional session.</p><p><strong><a href="https://thehill.com/opinion/robbys-radar/5655111-bernie-sanders-data-center-moratorium/amp/">Sanders Calls for Federal Moratorium on New AI Data Centers</a></strong></p><p>Senator Bernie Sanders called for a federal moratorium on the construction of new data centers that support AI development, arguing that the pace of AI deployment is outstripping public oversight and democratic decision-making. Sanders said broader public involvement is needed to assess the social, economic, and political impacts of AI, rather than leaving decisions to a small number of technology companies and investors. He framed the proposed pause as a way to allow regulators and the public to evaluate risks related to energy use, community impact, and concentration of power. The proposal would halt new data center construction nationwide, regardless of state preferences, and adds to ongoing debates in Congress over federal versus state authority in governing AI infrastructure and development.</p><p><strong><a href="https://www.nextgov.com/people/2025/12/trump-admin-launches-us-tech-force-recruit-temporary-workers-after-shedding-thousands-year/410159/">Trump Administration Launches U.S. Tech Force to Recruit AI Talent</a></strong></p><p>The Trump administration announced the creation of the United States Tech Force, a new program aimed at recruiting AI and technology professionals to support federal modernization efforts and strengthen U.S. competitiveness in artificial intelligence. The initiative plans to place roughly 1,000 technologists into federal agencies for two-year terms, including roles focused on AI development and data systems. 
Some participants will take temporary leaves of absence from private sector companies while serving in government, a structure intended to attract experienced talent but one that has raised concerns about potential conflicts of interest.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12222025?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12222025?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://data.oireachtas.ie/ie/oireachtas/committee/dail/34/joint_committee_on_artificial_intelligence/reports/2025/2025-12-16_first-interim-report_en.pdf">Ireland AI Committee Issues First Interim Report</a></strong></p><p>Ireland&#8217;s Joint Committee on Artificial Intelligence published its first interim report outlining a comprehensive approach to AI governance that balances innovation with strong regulation and public trust. The report calls for the permanent establishment of the AI Committee, creation of a national AI Office by 2026, and stronger implementation of the EU AI Act as a regulatory baseline. It emphasizes transparency, accountability, and human rights, while supporting innovation through measures such as regulatory sandboxes and AI literacy programs. 
The committee also highlights risks related to bias, energy use, data protection, and societal inequality, urging inclusive public engagement to shape Ireland&#8217;s long-term AI strategy.</p><p><strong><a href="https://iafrica.com/mozambique-develops-national-ai-strategy-with-unesco-support/">Mozambique Develops National AI Strategy With UNESCO Support</a></strong></p><p>Mozambique is developing a national AI strategy with technical support from UNESCO, aiming to promote the ethical, inclusive, and human-rights-based use of AI. Led by the National Institute of Information and Communication Technologies, the effort is being shaped by a multisectoral committee that includes government, industry, academia, and civil society. Officials said the strategy will align with international principles and regional commitments, support digital transformation, and create stronger legal and regulatory foundations to boost innovation, attract investment, and ensure AI adoption benefits the broader public.</p><p><strong><a href="https://www.newindianexpress.com/states/odisha/2025/Dec/18/odisha-ai-summit-commences-friday-precursor-to-india-meet-in-february">Odisha Hosts AI Summit Ahead of India AI Impact Summit 2026</a></strong></p><p>The Odisha government is hosting a two-day AI Summit beginning December 19 as a precursor to the national India AI Impact Summit scheduled for February 2026 in New Delhi. The event aims to position Odisha as a leading state in AI adoption across governance and public sectors. An IndiaAI working group meeting, convened with India&#8217;s Ministry of Electronics and IT, will focus on operationalizing the IndiaAI Mission, including data infrastructure, skills development, and sector-specific use cases. 
State officials said the summit will showcase Odisha&#8217;s AI initiatives, policy framework, and efforts to build a responsible, investor-friendly AI ecosystem.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://techcrunch.com/2025/12/19/openai-adds-new-teen-safety-rules-to-models-as-lawmakers-weigh-ai-standards-for-minors/">OpenAI Updates Teen Safety Rules for ChatGPT</a></strong></p><p>OpenAI updated its ChatGPT guidelines to impose stricter behavioral limits when the platform is used by teens, as lawmakers and child-safety advocates intensify scrutiny of AI&#8217;s impact on minors. The revised rules expand existing prohibitions on sexual content and self-harm by restricting immersive romantic or sexual roleplay, emphasizing caution around body image and eating disorders, and prioritizing safety over autonomy when potential harm is detected. OpenAI also released new AI literacy resources for teens and parents and said upcoming age-prediction tools will automatically apply teen safeguards. The changes come amid growing political pressure, including state attorneys general urging stronger protections and proposed federal legislation that could restrict or ban minors&#8217; access to AI chatbots.</p><p><strong><a href="https://www.reuters.com/legal/litigation/ftc-investigating-instacarts-ai-pricing-tool-source-says-2025-12-17/">FTC Investigates Instacart&#8217;s AI Pricing Tool</a></strong></p><p>The U.S. Federal Trade Commission has opened an investigation into Instacart&#8217;s AI-powered pricing software, Eversight, after reports that shoppers were shown different prices for the same groceries at the same stores. The FTC has issued a civil investigative demand seeking information about how the tool is used, according to sources. The probe follows a study by consumer groups finding average price differences of about 7%, with some shoppers paying up to 23% more. 
Instacart says it does not set prices directly and that retailers use Eversight to run randomized pricing tests.</p><p><strong><a href="https://www.webpronews.com/microsoft-overhauls-windows-11-ai-privacy-policy-for-user-consent/">Microsoft Updates Windows 11 AI Privacy Rules After Backlash</a></strong></p><p>Microsoft has revised its Windows 11 AI privacy policy to require explicit user consent before AI agents can access personal files, following backlash over concerns that AI features could reach sensitive data without clear permission. Under the updated approach, access is granted on a per-agent basis, meaning users must approve each AI tool before it interacts with folders like Documents or Pictures. Microsoft says the overhaul is intended to restore trust while continuing to expand AI functionality within the operating system.</p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://www.eventbrite.co.uk/e/9-on-demand-workshop-ethical-ai-leadership-governance-tickets-1976004885806">Ethical AI Leadership &amp; Governance</a></strong> | Online | January 6</p></li><li><p><strong><a href="https://www.unssc.org/events/leveraging-ai-peace-unssc-webinar-learning-series">Leveraging AI for Peace: A UNSSC Webinar Learning Series</a></strong> | Online | Jan 12 - Mar 09</p></li><li><p><strong><a href="https://academicintegrity.eu/conference/">European Conference on Ethics and Integrity in Academia 
(ECEIA26)</a></strong> | Batumi, Georgia | January 26</p></li><li><p><strong><a href="https://impact.indiaai.gov.in">India AI Impact Summit 2026</a></strong> | New Delhi, India | February 16&#8211;20</p></li><li><p><strong><a href="https://www.iaseai.org/our-programs/iaseai26">The International Association for Safe &amp; Ethical AI (IASEAI)</a></strong> | Paris, France | February 24&#8211;26</p></li><li><p><strong><a href="https://www.rightscon.org">RightsCon 2026</a></strong> | Lusaka, Zambia | May 5&#8211;8, 2026</p></li><li><p><strong><a href="https://summit.codeforamerica.org">Code for America Summit</a></strong> | Chicago, USA | May 7&#8211;8, 2026</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">LinkedIn</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 12.15.2025]]></title><description><![CDATA[Trump Order Targets State AI Rules, EU Probes Google&#8217;s AI Training Practices, OpenAI Pushes CA Ballot Measure on Chatbot Safety]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12152025</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12152025</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Tue, 16 Dec 2025 00:35:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, President Trump <a 
href="https://www.nytimes.com/2025/12/11/technology/ai-trump-executive-order.html">signed</a> an executive order seeking to override state AI laws in favor of a single federal framework and separately <a href="https://www.wsj.com/tech/ai/china-ai-nvidia-trump-chip-export-ban-lift-c3d457c1">reversed</a> restrictions to allow Nvidia to sell H200 AI chips to China. House Democrats <a href="https://jeffries.house.gov/2025/12/09/leader-jeffries-announces-new-house-democratic-commission-on-ai-and-the-innovation-economy/">launched</a> a new Commission on AI and the Innovation Economy, while Congress <a href="https://www.akingump.com/en/insights/alerts/congress-moves-forward-with-ai-measures-in-key-defense-legislation">advanced</a> extensive AI provisions in the FY 2026 National Defense Authorization Act covering defense research, cybersecurity, procurement, and intelligence oversight. The Department of Health and Human Services <a href="https://www.hhs.gov/sites/default/files/hhs-artificial-intelligence-strategy.pdf">released</a> an updated AI strategy outlining governance, infrastructure, workforce, and health-related use cases across the department.</p><p>&#127757; <strong>Globally</strong>, the European Commission <a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2964">opened</a> an antitrust investigation into Google&#8217;s use of publisher and YouTube content for AI training and <a href="https://www.theguardian.com/world/2025/dec/10/eu-proposes-exempting-ai-gigafactories-from-environmental-assessments">proposed</a> easing environmental rules for data centers and AI gigafactories. South Korea <a href="https://www.pbs.org/newshour/world/south-korea-to-require-advertisers-to-label-ai-generated-ads">announced</a> mandatory labeling and stronger penalties for AI-generated deceptive advertising. 
India <a href="https://techcrunch.com/2025/12/09/india-proposes-charging-openai-google-for-training-ai-on-copyrighted-content/">proposed</a> a mandatory royalty and blanket licensing system requiring AI companies to compensate creators for training on copyrighted content.</p><p>&#128126; <strong>In Industry</strong>, OpenAI <a href="https://www.politico.com/news/2025/12/09/openai-ai-safety-california-kids-00683191?utm_source=chatgpt.com">filed</a> its first California ballot initiative proposing safety rules for AI companion chatbots while eBay <a href="https://www.modernretail.co/technology/ebay-adds-new-ai-agent-policy-to-its-website/">updated</a> its website policies to restrict automated AI shopping agents from scraping data or completing purchases without human review.</p><p>&#127963;&#65039;</p><p><strong>United States&nbsp;</strong></p><p><strong><a href="https://www.nytimes.com/2025/12/11/technology/ai-trump-executive-order.html">Trump Signs Executive Order to Create a Single Federal Framework for AI Regulation</a></strong></p><p>President Trump signed an executive order aimed at limiting states&#8217; ability to regulate AI and establishing a single federal regulatory framework. The order authorizes the attorney general to challenge state laws that conflict with federal priorities related to AI development and directs federal agencies to withhold certain funding from states that maintain such laws. Trump said state-level rules create inconsistent requirements and argued that a unified framework is needed to support national competitiveness and address strategic competition with China. The order follows earlier actions to reduce regulatory barriers, expand access to federal data, ease infrastructure development, and loosen restrictions on exporting AI chips. Legal challenges from states and advocacy groups are expected.</p><p><strong><a href="https://www.wsj.com/tech/ai/china-ai-nvidia-trump-chip-export-ban-lift-c3d457c1">Trump Reverses Course on Nvidia Chip Exports, Allowing Sales of Advanced AI Hardware to China</a></strong></p><p>President Trump allowed Nvidia to resume sales of its H200 AI chips to China, reversing prior restrictions imposed for national security reasons. The decision followed a Justice Department announcement detailing a smuggling operation that illegally sent restricted chips to China. 
Trump said the move would protect national security, support U.S. jobs, and maintain American leadership in AI, though details of a proposed arrangement for the U.S. government to collect a share of Nvidia&#8217;s China sales remain unclear. The decision benefits Nvidia, which stands to gain billions of dollars in quarterly sales, and China&#8217;s AI sector, which lacks comparable domestic chip capacity. The move drew criticism from lawmakers concerned about security risks.</p><p><strong><a href="https://jeffries.house.gov/2025/12/09/leader-jeffries-announces-new-house-democratic-commission-on-ai-and-the-innovation-economy/">House Democrats Launch Commission on AI and the Innovation Economy</a></strong></p><p>House Democratic Leader Hakeem Jeffries announced the creation of a House Democratic Commission on AI and the Innovation Economy, which will convene throughout 2026. The commission will work with industry, researchers, stakeholders, and relevant House committees to build policy expertise on AI and innovation-related issues. Caucus Vice Chair Ted Lieu and Representatives Josh Gottheimer and Valerie Foushee will serve as co-chairs, with Representatives Zoe Lofgren and Frank Pallone acting as ex officio co-chairs. Members of the previous Bipartisan AI Task Force will hold leadership roles, and all House Democrats may participate. The commission aims to examine policies that support innovation while addressing risks related to safety, privacy, and economic impacts.</p><p><strong><a href="https://www.akingump.com/en/insights/alerts/congress-moves-forward-with-ai-measures-in-key-defense-legislation">Defense Authorization Bill Advances Wide-Ranging AI Provisions Across National Security Agencies</a></strong></p><p>Congressional leaders released a compromise text for the fiscal year 2026 National Defense Authorization Act that includes extensive AI provisions spanning defense, intelligence, energy, and diplomacy. 
The package authorizes new Department of Defense AI research institutes, expands use of commercial AI in logistics, maintenance, training, financial audits, and shipbuilding, and establishes governance, cybersecurity, and assessment frameworks for AI and machine learning systems. It directs restrictions on certain foreign-developed AI, including bans within defense and intelligence systems, and mandates new oversight bodies and sandbox environments. The bill also advances outbound investment restrictions related to China, AI security guidance led by the National Security Agency, and AI integration across the Departments of Energy and State.</p><p><strong><a href="https://www.hhs.gov/sites/default/files/hhs-artificial-intelligence-strategy.pdf">Health Department Releases Department-Wide AI Strategy Focused on Operations, Research, and Care Delivery</a></strong></p><p>The Department of Health and Human Services released version 3 of its AI Strategy, outlining a department-wide plan to integrate AI across health care, public health, research, and internal operations. The strategy establishes five pillars: governance and risk management, shared infrastructure and platforms, workforce development, research reproducibility, and modernization of care and public health delivery. HHS plans to build a shared &#8220;OneHHS&#8221; AI infrastructure, expand workforce access to approved tools and training, and embed AI into activities such as drug review, claims processing, grant evaluation, and public health surveillance. 
The strategy reports 271 active or planned AI use cases and projects significant growth, while emphasizing privacy, security, and compliance with federal guidance.</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2964">European Commission Opens Antitrust Investigation Into Google&#8217;s Use of Online Content for AI</a></strong></p><p>The European Commission has opened a formal antitrust investigation into whether Google violated European Union competition rules by using online content for AI purposes. The probe examines Google&#8217;s use of web publishers&#8217; content to generate AI features in search results, including summary responses and conversational search tools, without compensation or a meaningful opt-out option. Regulators are also investigating whether Google used YouTube videos to train its AI models without compensating creators, while restricting rival AI developers from accessing the same content. 
The Commission is assessing whether these practices give Google an unfair advantage and disadvantage competing AI developers, potentially constituting an abuse of a dominant market position under European Union competition law.</p><p><strong><a href="https://www.theguardian.com/world/2025/dec/10/eu-proposes-exempting-ai-gigafactories-from-environmental-assessments">European Commission Proposes Easing Environmental Rules for Data Centers and AI Infrastructure</a></strong></p><p>The European Commission proposed changes that would ease environmental requirements for data centers, AI gigafactories, and affordable housing as part of a broader effort to reduce regulatory burdens. The proposal would allow member states to exempt these projects from mandatory environmental impact assessments and speed up permitting for sectors designated as strategic. It also suggests repealing a European Union database of hazardous chemicals in products, scaling back reporting obligations for polluters, and shifting environmental management requirements from individual facilities to company-wide systems. The Commission estimates the measures could save businesses about &#8364;1 billion annually. The proposal was introduced alongside plans to modernize the electricity grid and follows recent agreements to scale back corporate sustainability rules.</p><p><strong><a href="https://www.koreabiomed.com/news/articleView.html?idxno=29937">South Korea Announces New Labeling and Penalty Rules for AI-Generated False Advertising</a></strong></p><p>The South Korean government announced new measures to regulate false and exaggerated advertising created using AI, with a focus on ads featuring AI-generated doctors or celebrities. The policy package introduces mandatory labeling requirements for AI-generated images and videos, prohibiting removal of such labels and requiring platforms to verify compliance. 
The rules target sectors including food, pharmaceuticals, cosmetics, medical devices, and quasi-drugs, with expedited reviews of suspected violations within 24 hours. Ads using AI-generated experts to recommend products may be deemed deceptive unless clearly identified as virtual humans. The government also plans to strengthen penalties, including higher administrative fines and punitive damages of up to five times actual damages. Implementation is scheduled for January 2026.</p><p><strong><a href="https://techcrunch.com/2025/12/09/india-proposes-charging-openai-google-for-training-ai-on-copyrighted-content/">India Proposes Mandatory Royalty System for AI Training on Copyrighted Content</a></strong></p><p>India has proposed a framework that would require AI companies to pay royalties for training models on copyrighted content. The Department for Promotion of Industry and Internal Trade released a proposal establishing a mandatory blanket license that would grant AI firms access to copyrighted works in exchange for payments to a central collecting body, which would distribute royalties to creators. The proposal aims to reduce legal uncertainty while ensuring compensation for writers, artists, musicians, and publishers. It comes amid global legal disputes over whether AI training qualifies as fair use. Industry groups have raised concerns about potential impacts on innovation, while the government has opened the proposal for public consultation before finalizing recommendations.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://www.politico.com/news/2025/12/09/openai-ai-safety-california-kids-00683191?utm_source=chatgpt.com">OpenAI Files California Ballot Initiative on AI Chatbot Safety</a></strong></p><p>OpenAI filed its first California ballot initiative proposing safety requirements for AI companion chatbots, including tools like ChatGPT. 
The measure, titled the &#8220;AI Companion Chatbot Safety Act,&#8221; would require disclosures that users are interacting with AI and establish protocols for addressing and reporting suicidal behavior. The initiative closely mirrors provisions in a state law signed by Governor Gavin Newsom in October and is narrower than a separate ballot proposal backed by Common Sense Media that seeks stricter limits on chatbot use by minors. If both initiatives qualify for the November 2026 ballot and pass, the measure receiving more votes would take effect. OpenAI must gather signatures by June 2026 to qualify.</p><p><strong><a href="https://www.modernretail.co/technology/ebay-adds-new-ai-agent-policy-to-its-website/">eBay Updates Website Policy to Restrict Automated AI Shopping Agents</a></strong></p><p>eBay has updated its website code to introduce a new &#8220;Robots &amp; Agent Policy&#8221; governing how AI agents and large language models interact with its platform. The policy, published in the site&#8217;s robots.txt file, prohibits automated scraping, buy-for-me agents, and AI-driven bots from placing orders without human review. eBay also updated the robots.txt file on its cart subdomain to block automated agents from interacting with shopping carts, with limited exceptions. 
The policy signals that unsanctioned automation around checkout and purchasing is not permitted and may trigger action under eBay&#8217;s user agreement.</p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><a href="https://aiforgood.itu.int/event/promoting-participatory-and-responsible-approaches-to-ai-adoption-in-healthcare/">AI for Good Global Initiative Events</a> | Online | December 18</p></li><li><p><a href="https://impact.indiaai.gov.in">India - AI Impact Summit 2026</a> | New Delhi, India | February 16 - 20</p></li><li><p><a href="https://www.rightscon.org">RightsCon 2026</a> | Lusaka, Zambia | May 5&#8211;8, 2026</p></li><li><p><a href="https://summit.codeforamerica.org">Code for America Summit </a>| Chicago, USA | May 7 - 8, 2026</p></li><li><p><a href="https://mila.quebec/en/news/milas-summer-school-in-responsible-ai-and-human-rights-heads-to-mexico-city-in-2026">Mila Summer School in Responsible AI and Human Rights</a> | Montreal, Quebec | May 26 - 30, 2026</p></li></ul><p>Thank you for reading and see you next week &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 12.09.2025]]></title><description><![CDATA[New York mandates AI pricing disclosures, Australia launches $460M AI 
plan, Nvidia CEO criticizes state AI laws in Trump meeting.]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12092025</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12092025</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Tue, 09 Dec 2025 21:49:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, New York <a href="https://www.nytimes.com/2025/11/29/nyregion/personalized-surveillance-pricing-ai-new-york.html">enacted</a> the first state law requiring disclosure when prices are set by algorithms using personal data, while Florida <a href="https://www.flgov.com/eog/news/press/2025/governor-ron-desantis-announces-proposal-citizen-bill-rights-artificial">introduced</a> an Artificial Intelligence Bill of Rights covering privacy, parental controls, and limits on AI-generated therapy and likeness use. The Trump administration <a href="https://www.politico.com/news/2025/12/03/trump-administration-ai-robotics-00674204">began</a> developing a federal robotics initiative alongside its AI agenda, and Washington State <a href="https://www.atg.wa.gov/news/news-releases/washington-s-ai-task-force-delivers-policy-recommendations-promote-innovation">released</a> eight recommendations for AI governance, including dataset transparency and risk management for high-impact systems. 
Missouri lawmakers <a href="https://www.govtech.com/artificial-intelligence/missouri-lawmakers-move-toward-regulating-ai">filed</a> multiple AI bills targeting deepfakes, chatbots, and child protections, and Tennessee&#8217;s advisory council <a href="https://www.thecentersquare.com/tennessee/article_3ef01fd5-f3f1-4ff6-92c3-1e22f7caa686.html">recommended </a>expanded AI policies to attract industry investment while maintaining flexible oversight. New Mexico legislators <a href="https://sourcenm.com/2025/12/01/new-mexico-lawmakers-plan-push-for-ai-regulation-ahead-of-january-legislative-session/">outlined</a> a narrower AI transparency bill for 2026 focused on labeling and chatbot disclosures.</p><p>&#127757; <strong>Globally</strong>, Australia <a href="https://www.industry.gov.au/publications/national-ai-plan">released</a> its National AI Plan 2025, outlining investments in domestic AI capability, smart infrastructure, safety standards, and international coordination. The European Commission <a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2896">opened</a> an antitrust investigation into Meta&#8217;s WhatsApp policies restricting AI providers, and <a href="https://employment-social-affairs.ec.europa.eu/news/commission-sets-out-roadmap-future-proof-quality-jobs-competitive-eu-2025-12-04_en?">introduced</a> the Quality Jobs Roadmap with plans for a Quality Jobs Act addressing workplace AI and algorithmic management. The United Kingdom <a href="https://www.ft.com/content/12cc60ef-7d97-4d20-a7fd-9a28ff6bcb11">began</a> examining stronger rules for AI chatbots over youth self-harm concerns not fully covered by existing safety law. 
Japan <a href="https://www.asahi.com/ajw/articles/16203430">proposed</a> easing consent requirements for using personal data in AI development while adding penalties for intentional misuse.</p><p>&#128126; <strong>In Industry</strong>, Nvidia CEO Jensen Huang <a href="https://www.cnbc.com/2025/12/03/nvidias-jensen-huang-talks-chip-controls-with-trump-hits-regulation.html">met</a> with President Donald Trump to discuss chip export restrictions and criticized state-level AI regulations in favor of a unified national standard. Anthropic <a href="https://www.foreign.senate.gov/imo/media/doc/5c78c941-bd21-2468-1d2c-957537481348/120225_Chhabra_Testimony.pdf">warned</a> the United States Senate that China is advancing rapidly in artificial intelligence and urged stronger export controls while more than 1,000 Amazon employees <a href="https://www.theguardian.com/technology/2025/nov/28/amazon-ai-climate-change?">signed</a> a letter raising concerns that accelerated AI deployment is increasing workplace pressure, layoffs, and carbon emissions. iHeartMedia <a href="https://whatstrending.com/iheartmedia-launches-guaranteed-human-policy-against-ai-voices/">launched</a> its &#8220;Guaranteed Human&#8221; policy banning AI-generated on-air voices while permitting AI for operational functions.</p><p>&#127963;&#65039;</p><p><strong>United States&nbsp;</strong></p><p><strong><a href="https://www.nytimes.com/2025/11/29/nyregion/personalized-surveillance-pricing-ai-new-york.html">New York Enacts First State Law Targeting AI-Driven Personalized Pricing</a></strong></p><p>New York enacted the nation&#8217;s first law regulating AI-powered personalized pricing, requiring retailers to disclose when an algorithm uses a shopper&#8217;s personal data to set prices. The measure targets practices in which retailers adjust prices based on individual purchasing history or online behavior, such as raising costs for users who typically buy premium products or who have already booked related travel. Passed through the state budget, the law mandates a clear disclosure: &#8220;THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.&#8221; The policy faced immediate criticism from business groups, who argue it is overly broad, and from consumer advocates who sought a full ban. The law survived a recent federal court challenge and is seen as a major step toward broader U.S. 
regulation of commercial AI data practices.</p><p><strong><a href="https://www.flgov.com/eog/news/press/2025/governor-ron-desantis-announces-proposal-citizen-bill-rights-artificial">DeSantis Proposes Florida &#8220;AI Bill of Rights&#8221; and New Limits on Hyperscale Data Centers</a></strong></p><p>Florida Governor Ron DeSantis announced proposed legislation creating an Artificial Intelligence Bill of Rights to expand consumer protections related to data privacy, parental oversight, and AI use of individuals&#8217; names, images, and likenesses. The plan would reinforce existing state restrictions on deepfakes, require disclosures when consumers interact with AI systems, prohibit the use of AI for licensed therapy or mental-health counseling, and ban state and local agencies from using Chinese-developed AI tools. The proposal also introduces parental controls for minors&#8217; AI interactions and new limits on insurance companies&#8217; use of AI in claims decisions. A parallel data-center proposal seeks to prevent utilities from raising rates to support hyperscale facilities, restrict taxpayer subsidies, preserve local control over siting, and protect water and environmental resources.</p><p><strong><a href="https://www.politico.com/news/2025/12/03/trump-administration-ai-robotics-00674204">Trump Administration Signals Major Federal Push on Robotics After AI Strategy Rollout</a></strong></p><p>The Trump administration is shifting its technology agenda toward robotics, with Commerce Secretary Howard Lutnick meeting industry leaders and considering an executive order to accelerate development. The Department of Transportation is preparing a robotics working group, and lawmakers have proposed&#8212;but not yet passed&#8212;a national robotics commission. Officials frame robotics and advanced manufacturing as essential to U.S. competitiveness with China, which has an estimated 1.8 million industrial robots. 
Industry groups argue robotics is the &#8220;physical expression of AI&#8221; and are seeking tax incentives, federal funding, and trade actions to strengthen supply chains and counter Chinese subsidies. The administration&#8217;s move comes as companies develop increasingly advanced humanoid robots for industrial use, raising unresolved tensions between automation and efforts to revive U.S. manufacturing employment.</p><p><strong><a href="https://www.atg.wa.gov/news/news-releases/washington-s-ai-task-force-delivers-policy-recommendations-promote-innovation">Washington State AI Task Force Releases Interim Policy Recommendations</a></strong></p><p>Washington&#8217;s Artificial Intelligence Task Force issued an interim report outlining eight policy recommendations intended to encourage AI innovation while safeguarding individual rights. The report proposes adopting NIST&#8217;s trustworthy AI principles as the state&#8217;s guiding framework, requiring developers to disclose dataset provenance and characteristics, and mandating governance and risk-management practices for high-risk AI systems. It also recommends expanding K&#8211;12 STEM education and broadband access, ensuring clinicians&#8212;not AI&#8212;make final decisions on health-service determinations, and creating an advisory group to guide ethical AI use in employment. Additional proposals include requiring public disclosure of law enforcement&#8217;s AI use and establishing a grant program to support small-business AI innovation. The task force will deliver its final report in July 2026.</p><p><strong><a href="https://www.govtech.com/artificial-intelligence/missouri-lawmakers-move-toward-regulating-ai">Missouri Lawmakers Introduce Multiple Bills to Regulate AI Despite Federal Pushback</a></strong></p><p>Missouri legislators filed several AI-related bills for the upcoming session, advancing proposals on deepfakes, chatbot restrictions, and protections for minors. Rep. 
Jeff Farnan introduced legislation targeting non-consensual deepfake images and videos, while Rep. Scott Cupps proposed adding AI-generated depictions of minors to state obscenity statutes. Rep. Scott Miller filed two bills: one requiring labels on AI-generated images and allowing civil suits for related harms, and another prohibiting minors from accessing companion chatbots and restricting bots in games from discussing sensitive topics. Additional provisions would prevent AI systems from being legally recognized as persons or property owners. Sen. Joe Nicola introduced measures to ban deepfakes in political ads and require disclosures when AI is used.</p><p><strong><a href="https://www.thecentersquare.com/tennessee/article_3ef01fd5-f3f1-4ff6-92c3-1e22f7caa686.html">Tennessee AI Committee Recommends Expanding State Policies Beyond 2024 ELVIS Act</a></strong></p><p>Tennessee&#8217;s Artificial Intelligence Advisory Council issued recommendations to broaden the state&#8217;s AI policy framework beyond the 2024 Ensuring Likeness Voice and Image Security (ELVIS) Act, which criminalizes unauthorized AI-generated use of a person&#8217;s image or voice. While noting that the ELVIS Act has not yet resulted in citations, the council urged adopting flexible, adaptable policies to avoid hindering AI development. 
Recommendations include creating a centralized online AI policy hub, assigning AI reporting responsibilities across state agencies, and encouraging long-term AI infrastructure such as data centers and workforce training.</p><p><strong><a href="https://sourcenm.com/2025/12/01/new-mexico-lawmakers-plan-push-for-ai-regulation-ahead-of-january-legislative-session/">New Mexico Lawmakers Prepare Targeted AI Regulation Plans Ahead of 2026 Session</a></strong></p><p>New Mexico legislators outlined plans to pursue state-level AI regulations during a policy summit in Albuquerque, citing gaps in federal oversight and growing concerns about discriminatory algorithms, child safety, and transparency. Lawmakers and national experts described challenges states face in defining &#8220;artificial intelligence&#8221; and advancing comprehensive bills, noting that most 2025 state laws addressed narrow sectors such as healthcare. Rep. Christine Chandler said she will reintroduce a scaled-back version of House Bill 60 focused on transparency, including notifying users when they interact with AI chatbots.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12092025?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12092025?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.industry.gov.au/publications/national-ai-plan">Australia Releases National AI Plan 2025 to Boost Innovation and Safety</a></strong></p><p>Australia released its National AI Plan 2025, outlining a government strategy to capture economic benefits, expand domestic capabilities, and 
ensure safe, equitable AI deployment. The plan focuses on building AI infrastructure, including data centers and secure computing, and establishing GovAI, a government-hosted AI platform. It allocates over $460 million to research, workforce development, and SME adoption programs, while promoting an AI-ready workforce. The plan also emphasizes regulatory and safety measures, including an AI Safety Institute, sector-specific governance, and transparency standards. International collaboration is highlighted, with agreements and partnerships involving Singapore, the UK, South Korea, and the U.S. to strengthen global AI engagement and leadership.</p><p><strong><a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2896">European Commission Opens Antitrust Investigation into Meta&#8217;s WhatsApp Policy Affecting AI Providers</a></strong></p><p>The European Commission opened a formal antitrust investigation into Meta&#8217;s new policy restricting AI providers&#8217; access to WhatsApp&#8217;s Business Solution when AI is the primary service offered. The policy, introduced in October 2025, prohibits third-party AI assistants from operating on WhatsApp for core functions, while allowing limited use for customer-support tasks. The Commission expressed concern that the change could block competing AI providers from reaching users across the EEA, while Meta&#8217;s own AI assistant would remain accessible. The investigation, which excludes Italy due to its ongoing national proceedings, examines potential violations of EU competition rules related to abuse of dominance. 
The inquiry will proceed as a priority, with no fixed end date.</p><p><strong><a href="https://employment-social-affairs.ec.europa.eu/news/commission-sets-out-roadmap-future-proof-quality-jobs-competitive-eu-2025-12-04_en">European Commission Introduces Quality Jobs Roadmap and Begins Consultation on New Quality Jobs Act</a></strong></p><p>The European Commission presented its Quality Jobs Roadmap, outlining actions to improve working conditions and support competitiveness across the EU. The plan targets five areas: creating quality jobs, modernizing workplace standards, supporting workers and employers through digital and green transitions, strengthening social dialogue, and ensuring access to rights and public services. Alongside the roadmap, the Commission launched a first-stage consultation on the upcoming 2026 Quality Jobs Act, which will update EU labor rules. The consultation focuses on issues including algorithmic management and workplace AI, occupational health and safety risks linked to digital tools, subcontracting practices, just-transition challenges, and enforcement gaps. Social partners have until January 29, 2026, to provide feedback.</p><p><strong><a href="https://www.ft.com/content/12cc60ef-7d97-4d20-a7fd-9a28ff6bcb11">United Kingdom Considers Strengthening Rules on AI Chatbots to Address Youth Safety Risks</a></strong></p><p>United Kingdom ministers began examining stronger regulation of artificial intelligence chatbots amid concerns they may encourage self-harm among teenagers. Technology Secretary Liz Kendall told lawmakers that some chatbot applications are not fully covered by the Online Safety Act, which requires age verification and oversight of harmful content. She said the government is reviewing how to expand coverage and may introduce legislation to close gaps. The move follows attention to a case involving a 14-year-old whose death was linked by his family to harmful chatbot interactions. 
Kendall announced plans for updated regulator guidance and a public information campaign, while Ofcom noted that many chatbots integrated into user-to-user platforms already fall under existing safety rules.</p><p><strong><a href="https://www.asahi.com/ajw/articles/16203430">Japan Plans to Relax Data Consent Rules to Support AI Development While Adding Penalties for Misuse</a></strong></p><p>Japan&#8217;s government proposed revisions to the Personal Information Protection Law that would ease consent requirements for AI development by allowing personal data to be used without approval when processed solely into statistical information. The plan responds to concerns from industry that strict consent rules limit access to training data, including information gathered from publicly accessible web pages. The amendment would also permit hospitals, clinics, and research institutions to use personal data for academic research without consent, while requiring guardian approval for collecting data from individuals under 16. Alongside the relaxed rules, the government introduced a new penalty system targeting intentional data misuse, including fines for deceptive collection involving more than 1,000 people. The proposal forms part of Japan&#8217;s broader strategy to advance economic security through artificial intelligence.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://www.cnbc.com/2025/12/03/nvidias-jensen-huang-talks-chip-controls-with-trump-hits-regulation.html">Nvidia CEO Discusses Chip Export Limits with Trump and Criticizes State-Level AI Regulations</a></strong></p><p>Nvidia CEO Jensen Huang met with President Donald Trump to discuss potential restrictions on exporting advanced AI chips, as lawmakers considered&#8212;but ultimately excluded&#8212;the GAIN AI Act from the National Defense Authorization Act. The proposal would have required chipmakers to give U.S. 
companies priority access before selling to countries such as China. Huang argued that the measure would harm U.S. competitiveness more than existing proposals. He also criticized state-by-state AI regulations, warning that a patchwork of state rules could slow industry progress and create national security risks. Huang endorsed a federal AI standard, aligning with Trump&#8217;s push for nationwide preemption, though congressional leaders said such a provision lacks sufficient support for inclusion in the current defense bill.</p><p><strong><a href="https://www.foreign.senate.gov/imo/media/doc/5c78c941-bd21-2468-1d2c-957537481348/120225_Chhabra_Testimony.pdf">Anthropic Warns United States Senate That China Is Rapidly Advancing in Artificial Intelligence Capabilities</a></strong></p><p>Anthropic&#8217;s Head of National Security, Tarun Chhabra, told the United States Senate Committee on Foreign Relations that China is accelerating its artificial intelligence development and narrowing the technological gap with the United States. His testimony emphasized that access to advanced artificial intelligence chips remains China&#8217;s primary constraint and that United States export controls currently limit Beijing&#8217;s progress. Chhabra described China&#8217;s strategy to expand energy capacity, manufacturing, and scientific talent to support artificial intelligence growth. He also highlighted a recent cyber espionage case in which China-based actors used artificial intelligence to automate most operational steps. 
The testimony urged Congress to strengthen export restrictions and close loopholes enabling indirect access to frontier models and high-performance computing.</p><p><strong><a href="https://www.theguardian.com/technology/2025/nov/28/amazon-ai-climate-change">Over 1,000 Amazon Employees Warn AI Expansion Is Increasing Layoffs, Pressure, and Environmental Impact</a></strong></p><p>More than 1,000 Amazon employees signed an open letter raising concerns that the company&#8217;s rapid deployment of artificial intelligence is contributing to job insecurity, workplace pressure, and rising emissions. The letter, also backed by over 2,400 workers from other major tech firms, called for clean-energy commitments for data centers, guardrails to prevent AI products from enabling surveillance or harmful uses, and worker input on how AI affects jobs and organizational decisions. Employees described pressure to use AI tools to increase output, with some reporting expectations to double productivity despite tool limitations. The letter also criticized Amazon&#8217;s expanding data-center footprint and rising emissions, arguing its AI investments conflict with its climate goals. Amazon defended its record, citing renewable energy leadership and investments in nuclear technologies to meet sustainability targets.</p><p><strong><a href="https://whatstrending.com/iheartmedia-launches-guaranteed-human-policy-against-ai-voices/">iHeartMedia Launches &#8220;Guaranteed Human&#8221; Policy to Keep On-Air Content Human</a></strong></p><p>iHeartMedia announced its &#8220;Guaranteed Human&#8221; initiative, prohibiting AI-generated music and on-air personalities across all its radio stations and podcasts. The policy requires stations to include &#8220;Guaranteed Human&#8221; alerts in legal IDs and station imaging, and DJs to integrate the message across all listener-facing content.
While AI tools may still be used for operational tasks like scheduling, analytics, and editing, the company emphasized that human creativity and connection remain central to its programming. The initiative reflects audience preferences, with research indicating that 90 percent of consumers prefer media produced by real humans, and positions iHeartMedia as maintaining authenticity in the face of increasing AI-generated audio content.</p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://ai.informaconnect.com/newyork/2025/registrations/Delegate">The AI Summit</a></strong> New York 2025 | New York, USA | December 10&#8211;11</p></li><li><p><strong><a href="https://www.eit.europa.eu/news-events/events/international-ai-summit-2025">International AI Summit</a></strong> 2025 | Brussels, Belgium | December 11</p></li><li><p><strong><a href="https://www.aei.org/events/ai-governance-a-discussion-with-representative-jay-obernolte-featuring-representative-kevin-hern/">AI Governance: A Discussion with Representative Jay Obernolte, Featuring Representative Kevin Hern</a></strong> | Washington, DC | December 15</p></li><li><p><strong><a href="https://impact.indiaai.gov.in">India - AI Impact Summit 2026</a></strong> | New Delhi, India | February 16&#8211;20</p></li><li><p><strong><a href="https://www.rightscon.org/rightscon26-call-for-proposals/">RightsCon 2026</a></strong> | Lusaka, Zambia | May 5&#8211;8,
2026</p></li><li><p><strong><a href="https://summit.codeforamerica.org">Code for America Summit</a></strong> | Chicago, USA | May 7 - 8, 2026</p></li></ul><p>Thank you for reading and see you next time &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 12.01.2025]]></title><description><![CDATA[White House pauses state-AI order, Italy probes Meta&#8217;s WhatsApp AI rules, OpenAI disputes liability in teen lawsuit.]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12012025</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12012025</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Mon, 01 Dec 2025 22:56:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, The White House <a href="https://www.reuters.com/world/white-house-pauses-executive-order-that-would-seek-preempt-state-laws-ai-sources-2025-11-21/">paused</a> a draft executive order aimed at preempting state AI laws, while President Trump <a href="https://www.reuters.com/business/trump-aims-boost-ai-innovation-build-platform-harness-government-data-2025-11-24/">launched</a> the Genesis Mission to build a federal AI platform integrating national lab datasets and supercomputing resources. 
Meanwhile, the America First Policy Institute <a href="https://www.americafirstpolicy.com/issues/america-first-policy-institute-announces-10m-winning-the-ai-race-initiative-a-first-of-its-kind-conservative-strategy-to-lead-the-ai-future">announced</a> a $10 million &#8220;Winning the AI Race&#8221; initiative to develop a conservative AI strategy centered on U.S. workers, innovation, and national leadership.</p><p><strong>&#127757; Globally</strong>, Italy&#8217;s antitrust authority <a href="https://www.reuters.com/sustainability/boards-policy-regulation/italy-competition-watchdog-broadens-probe-into-meta-over-ai-tools-whatsapp-2025-11-26/">expanded</a> its probe into Meta&#8217;s WhatsApp AI policies and may impose interim limits on Meta AI integration. South Korea <a href="https://en.yna.co.kr/view/AEN20251124005500320">formed</a> a joint task force with the UAE to advance bilateral AI cooperation through the Stargate data center project, while Australia&#8217;s new AI plan <a href="https://www.theguardian.com/australia-news/2025/dec/01/labor-rejects-standalone-ai-legislation-with-plan-that-offers-to-help-unlock-public-and-private-data?">rejects</a> a standalone law and focuses on economic growth and expanded data access.</p><p><strong>&#128126; In Industry</strong>, OpenAI <a href="https://www.nbcnews.com/tech/tech-news/openai-denies-allegation-chatgpt-teenagers-death-adam-raine-lawsuit-rcna245946?">denied</a> liability in litigation alleging ChatGPT contributed to a teenager&#8217;s suicide, arguing the user violated safety rules and circumvented safeguards, as additional related lawsuits continue to emerge.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p 
class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. Your help keeps this content free for everyone</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong>United States&nbsp;</strong></p><p><strong><a href="https://www.reuters.com/world/white-house-pauses-executive-order-that-would-seek-preempt-state-laws-ai-sources-2025-11-21/">White House Pauses Draft Order Targeting State AI Laws</a></strong></p><p>The White House has paused a draft executive order that would have directed the Department of Justice to challenge state artificial intelligence laws and allowed federal agencies to withhold certain broadband funds. The order would have created an AI Litigation Task Force under Attorney General Pam Bondi to contest state regulations on grounds such as federal preemption and interstate commerce. It also included a Department of Commerce review of state AI rules tied to broadband funding guidelines. The pause comes as Congress considers similar preemption proposals in the NDAA, while major AI companies continue urging federal action. Lawmakers from both parties, including Marjorie Taylor Greene and Amy Klobuchar, criticized the draft.</p><p><strong><a href="https://www.reuters.com/business/trump-aims-boost-ai-innovation-build-platform-harness-government-data-2025-11-24/">Trump Launches &#8220;Genesis Mission&#8221; to Build Federal AI Research Platform</a></strong></p><p>President Donald Trump signed an executive order creating the &#8220;Genesis Mission,&#8221; a federal initiative to use government scientific datasets and supercomputers to train large-scale AI models. 
The Department of Energy and National Laboratories will build an integrated AI experimentation platform designed to automate experiment design, accelerate simulations, and generate predictive models across scientific domains. The mission aims to use government data to power scientific foundation models and AI agents capable of testing hypotheses and streamlining research workflows. Officials highlighted applications in areas such as protein folding, fusion plasma dynamics, and other data-intensive fields.</p><p><strong><a href="https://www.americafirstpolicy.com/issues/america-first-policy-institute-announces-10m-winning-the-ai-race-initiative-a-first-of-its-kind-conservative-strategy-to-lead-the-ai-future">AFPI Launches $10 Million &#8220;Winning the AI Race&#8221; Initiative Focused on Conservative AI Strategy</a></strong></p><p>The America First Policy Institute announced a three-year, $10 million initiative to develop a comprehensive conservative strategy for artificial intelligence. The program includes the inaugural America First AI Agenda, which outlines policy goals centered on U.S. workers, national leadership, and government modernization. AFPI&#8217;s effort spans economic policy, national security, workforce development, education, and public-sector AI readiness. Focus areas include supporting domestic AI innovation, expanding high-skilled manufacturing jobs, protecting children and workers, countering foreign adversaries, and improving government efficiency through AI adoption. 
Additional details on the workforce strategy, policy roadmap, and partner coalition will be released at a launch event in December.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12012025?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-12012025?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.reuters.com/sustainability/boards-policy-regulation/italy-competition-watchdog-broadens-probe-into-meta-over-ai-tools-whatsapp-2025-11-26/">Italy Expands Probe into Meta&#8217;s WhatsApp AI Practices</a></strong></p><p>Italy&#8217;s antitrust authority has widened its investigation into whether Meta used WhatsApp to block rival AI chatbots, potentially abusing its dominant position. The regulator is examining updated WhatsApp Business Platform terms and new AI chatbot tools, noting that Meta introduced rules on October 15 barring companies whose primary services involve AI from using WhatsApp. Officials warned the changes could exclude competitors from WhatsApp&#8217;s large user base, making it harder for users to switch providers. The watchdog has also begun procedures to consider interim measures, which may include suspending the new terms and limiting Meta AI integration during the probe. 
WhatsApp denies the allegations, saying its systems were not designed for external AI chatbots.</p><p><strong><a href="https://en.yna.co.kr/view/AEN20251124005500320">South Korea Forms Task Force to Advance AI Partnership with UAE</a></strong></p><p>South Korea&#8217;s presidential AI committee will launch a task force to implement new AI cooperation agreements with the United Arab Emirates. Co-led by the presidential secretary for AI policy and the vice chairman of the Korea Chamber of Commerce and Industry, the group will develop joint public-private investment projects and coordinate work across five ministry-led teams, including Science and ICT and Climate, Environment and Energy. The task force follows a summit where both countries agreed to expand collaboration in AI, energy, and defense, signing seven MOUs. Under a strategic AI framework, South Korea will participate in the UAE&#8217;s Stargate project, which plans a 5-gigawatt AI data center campus beginning with a 200-megawatt facility in Abu Dhabi.</p><p><strong><a href="https://www.theguardian.com/australia-news/2025/dec/01/labor-rejects-standalone-ai-legislation-with-plan-that-offers-to-help-unlock-public-and-private-data?">Australia&#8217;s Labor Rejects Standalone AI Law, Unveils National AI Plan Focused on Economic Growth and Data Access</a></strong></p><p>Australia&#8217;s Albanese government has decided against introducing a dedicated AI law, instead releasing a National AI Plan centered on economic opportunity, data access, and workforce support. The plan emphasizes using existing legislation to govern AI while promoting productivity gains across health, disability services, aged care, education, and employment. It includes a $30 million commitment to launch an AI Safety Institute in 2026 to advise on emerging risks and potential future regulation. 
The roadmap outlines reskilling programs for workers affected by AI, expanded datacenter investment, and opening &#8220;non-sensitive&#8221; public and private datasets for AI training. It also flags rising energy and water demands from datacenters and persistent issues around AI-enabled abuse, copyright uncertainty, and data transparency.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://www.nbcnews.com/tech/tech-news/openai-denies-allegation-chatgpt-teenagers-death-adam-raine-lawsuit-rcna245946?">OpenAI Responds to Lawsuit Alleging ChatGPT Contributed to Teen&#8217;s Suicide</a></strong></p><p>OpenAI filed its first legal response to a lawsuit claiming ChatGPT contributed to the death of 16-year-old Adam Raine, arguing it is not liable because the teen misused the system and violated its terms of service, including age restrictions and prohibitions on self-harm use. The company said ChatGPT issued crisis hotline messages more than 100 times and that Raine bypassed safeguards by reframing queries. OpenAI cited its liability limitations and emphasized that Raine had long-standing mental health struggles predating his chatbot use. The filing disputes claims that GPT-4o encouraged self-harm and highlights testing and safety measures in place at launch. 
Multiple related lawsuits have since been filed, and OpenAI says it continues adding safeguards, including parental controls and expert advisory structures.</p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://research.csiro.au/ads/events/nextgen-rai/">Next Generation Responsible AI Symposium</a></strong> | Adelaide, Australia | Dec 1&#8211;2, 2025</p></li><li><p><strong><a href="https://www.digitalsme.eu/events/digital-sme-summit-2025/">DIGITAL SME Summit </a></strong>| Brussels, Belgium | Dec 4, 2025</p></li><li><p><strong><a href="https://www.darden.virginia.edu/lacross-ai-institute/events/conference">UVA Conference on Ethical AI in Business</a> </strong>| Charlottesville, VA, USA | Dec 5, 2025</p></li><li><p><strong><a href="https://events.govtech.com/New-York-City-Technology-Forum">New York City Technology Forum 2025</a></strong> | New York, USA | Dec 8</p></li><li><p><strong><a href="https://www.rightscon.org/rightscon26-call-for-proposals/">RightsCon 2026</a></strong> | Lusaka, Zambia | May 5&#8211;8, 2026</p></li></ul><p>Thank you for reading and see you next time &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 
11.24.2025]]></title><description><![CDATA[Draft Trump order targeting state AI laws, EU seeks 2027 delay on AI rules, Anthropic CEO calls for stronger AI regulation]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11242025</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11242025</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Mon, 24 Nov 2025 17:03:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, The Trump administration <a href="https://www.transformernews.ai/p/exclusive-heres-the-draft-trump-executive">proposed</a> a draft order to preempt state AI laws, while bipartisan lawmakers <a href="https://www.theverge.com/policy/824054/algorithm-accountability-act-section-230">introduced</a> a bill enabling users to sue platforms over harmful algorithmic recommendations. 
A new AI industry coalition <a href="https://www.axios.com/2025/11/19/ai-infrastructure-coalition-kyrsten-sinema">launched</a> to advocate for federal pro-AI policies, and a $100M super PAC <a href="https://www.techbuzz.ai/articles/100m-ai-super-pac-targets-ny-democrat-it-may-have-backfired">targeted</a> the New York sponsor of the RAISE Act as part of a broader campaign against state AI regulations.</p><p><strong>&#127757; Globally</strong>, The European Commission <a href="https://www.reuters.com/sustainability/boards-policy-regulation/eu-delay-high-risk-ai-rules-until-2027-after-big-tech-pushback-2025-11-19/">proposed</a> to delay high-risk AI Act requirements to 2027 through a broad digital simplification package <a href="https://www.euronews.com/my-europe/2025/11/18/france-germany-support-simplification-push-for-digital-rules-as-commission-preps-revision-">backed</a> by France and Germany. The U.S. <a href="https://observer.com/2025/11/us-approves-ai-chip-sales-middle-east-humain-g42/">approved</a> major exports of advanced AI chips to Saudi Arabia&#8217;s Humain and the UAE&#8217;s G42, deepening AI infrastructure ties in the Gulf, and South Korea <a href="https://www.koreatimes.co.kr/southkorea/defense/20251118/defense-ministry-to-create-deputy-minister-post-overseeing-ai-policy">announced</a> plans to establish a deputy minister to coordinate military AI policy.</p><p><strong>&#128126; In Industry</strong>, Anthropic CEO Dario Amodei <a href="https://www.inc.com/ben-sherry/anthropic-ceo-dario-amodei-just-made-a-surprising-call-for-ai-regulation/91266456">renewed</a> calls for AI regulation while warning of rapid labor impacts, as TikTok <a href="https://techcrunch.com/2025/11/18/tiktok-now-lets-you-choose-how-much-ai-generated-content-you-want-to-see/">rolled out</a> controls for AI-generated content &#8230;</p>
      <p>
          <a href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11242025">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 11.18.2025]]></title><description><![CDATA[PA moves to tighten rules on AI child abuse material, EU considers GDPR changes for AI firms, OpenAI and Microsoft join state AGs on AI safety task force]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11182025</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11182025</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Mon, 17 Nov 2025 18:08:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://us02web.zoom.us/webinar/register/4717623707471/WN_RA5L4GsiTqCk0M5-U0ghuA" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TBiS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 424w, https://substackcdn.com/image/fetch/$s_!TBiS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 848w, https://substackcdn.com/image/fetch/$s_!TBiS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 1272w, 
https://substackcdn.com/image/fetch/$s_!TBiS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TBiS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png" width="1456" height="364" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:364,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:298147,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://us02web.zoom.us/webinar/register/4717623707471/WN_RA5L4GsiTqCk0M5-U0ghuA&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://alisarmustafa.substack.com/i/178633829?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TBiS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 424w, https://substackcdn.com/image/fetch/$s_!TBiS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 848w, 
https://substackcdn.com/image/fetch/$s_!TBiS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 1272w, https://substackcdn.com/image/fetch/$s_!TBiS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, Pennsylvania moved to <a href="https://penncapital-star.com/government-politics/bill-seeks-to-close-loophole-for-ai-generated-child-sexual-abuse-materials/">regulate</a> AI-generated child abuse material by expanding reporting rules, while Riverton <a href="https://www.rivertonjournal.com/2025/11/12/554136/riverton-city-getting-ahead-of-the-ai-game-for-its-employees">implemented</a> an early AI policy to govern employee use in city operations.</p><p><strong>&#127757; Globally</strong>, the EU <a href="https://www.politico.eu/article/brussels-knifes-privacy-to-feed-the-ai-boom-gdpr-digital-omnibus/">advanced</a> plans to loosen GDPR limits on data use for AI through its digital omnibus package, as the EDPS <a href="https://www.edps.europa.eu/data-protection/our-work/publications/guidelines/2025-11-11-guidance-risk-management-artificial-intelligence-systems_en">released</a> guidance to structure public-sector AI risk management. The UK <a href="https://www.bbc.com/news/articles/cn8xq677l9xo">introduced</a> measures to test AI models for their potential to generate CSAM, while Canada <a href="https://thelogic.co/news/buy-canadian-digital-infrastructure-ai-data-centres/">prepared</a> to extend Buy Canadian rules to digital and AI infrastructure. 
The OECD <a href="https://www.korea.net/NewsFocus/policies/view?articleId=282273">elected</a> Kang Ha-yeon to lead its merged AI governance body, and Cyprus <a href="https://www.cbn.com.cy/article/121443/cyprus-to-join-intergovernmental-initiative-with-greece-and-italy-to-create-ai-gigafactories">agreed</a> to join Greece and Italy in developing AI Gigafactories.</p><p><strong>&#128126; In Industry</strong>, OpenAI and Microsoft <a href="https://edition.cnn.com/2025/11/13/tech/ai-safety-task-force-attorneys-general-openai-microsoft">partnered</a> with state attorneys general to develop shared AI safety safeguards, while a German court ruling <a href="https://mashable.com/article/openai-violated-copyright-laws-gema-lawsuit-germany-court?test_uuid=04wb5avZVbBe1OWK6996faM&amp;test_variant=a">continued</a> to challenge model-training practices by finding OpenAI liable for copyright violations. Apple <a href="https://dataconomy.com/2025/11/14/apple-is-tightening-the-rules-on-apps-that-share-your-data-with-ai/">revised</a> App Store rules to tighten controls on third-party AI data sharing, and Wikipedia <a href="https://www.timesnownews.com/technology-science/wikipedia-brings-no-content-scraping-policy-for-ai-models-pay-if-you-want-to-use-the-data-article-153137314">moved</a> to enforce limits on AI scraping by directing developers to its paid API.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. 
Your help keeps this content free for everyone</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong>United States&nbsp;</strong></p><p><strong><a href="https://penncapital-star.com/government-politics/bill-seeks-to-close-loophole-for-ai-generated-child-sexual-abuse-materials/">Pennsylvania Bill Targets AI-Generated Child Sexual Abuse Material</a></strong></p><p>A Pennsylvania Senate committee held a two-hour hearing on a bill from Sen. Tracy Pennycuick that would expand mandated reporting laws to include AI-generated child sexual abuse material. Current state law already bans possession of such images, but the bill aims to close gaps involving synthetic children and deepfakes. Child advocacy experts told lawmakers these omissions let offenders slip through screening processes and delay investigations. Witnesses from Mission Kids and the Attorney General&#8217;s office said AI images complicate victim identification and require major investigative resources. Recent cases involving students creating explicit AI images of classmates underscored the issue. The bill passed the Judiciary Committee unanimously and awaits full Senate consideration.</p><p><strong><a href="https://www.rivertonjournal.com/2025/11/12/554136/riverton-city-getting-ahead-of-the-ai-game-for-its-employees">Riverton City Adopts Early AI Policy to Guide Employee Use</a></strong></p><p>Riverton City approved its first artificial intelligence policy to establish guidelines for how employees use emerging AI tools. 
City Attorney Ryan Carter said the policy is intended to precede widespread adoption, outlining expectations as departments&#8212;particularly law enforcement&#8212;begin integrating data analytics and AI-assisted workflows. The policy requires employees to use AI only as a support tool, maintain responsibility for all work products, and undergo training. The IT department will monitor usage, and violations may result in discipline. Early applications include summarizing large data volumes and assisting with police reports and body-camera data, with officers required to review AI-generated material. City leaders expect the policy to evolve as AI use expands.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11182025?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11182025?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://www.politico.eu/article/brussels-knifes-privacy-to-feed-the-ai-boom-gdpr-digital-omnibus/">EU Considers Major GDPR Changes to Boost AI Development</a></strong></p><p>Draft proposals for a forthcoming &#8220;digital omnibus&#8221; package show the European Commission planning significant amendments to the General Data Protection Regulation to ease data-use restrictions for AI companies. The changes would introduce new exceptions allowing AI developers to process special-category data, adjust the definition of such data, and potentially exclude some pseudonymized data from GDPR protections. The proposal also includes revisions to cookie rules to expand legal grounds for user tracking. 
These steps follow concerns that privacy rules hinder Europe&#8217;s competitiveness in AI. Member states and lawmakers are divided, with some opposing any GDPR rewrite and others supporting updates to provide clearer rules for AI development.</p><p><strong><a href="https://www.edps.europa.eu/data-protection/our-work/publications/guidelines/2025-11-11-guidance-risk-management-artificial-intelligence-systems_en">EDPS Issues Technical Guidance for Managing AI-Related Data Protection Risks</a></strong></p><p>The European Data Protection Supervisor released guidance to help EU institutions identify and mitigate data protection risks when developing, procuring, or deploying AI systems. The document explains a risk management approach based on ISO 31000 and outlines the AI system lifecycle, including development steps and procurement considerations. It highlights interpretability and explainability as prerequisites for compliant AI use. The guidance organizes risks around core data protection principles&#8212;fairness, accuracy, data minimisation, security, and data subject rights&#8212;and describes scenarios such as data bias, overfitting, inaccurate outputs, indiscriminate data collection, data leakage, and breaches. Each risk is paired with technical measures that controllers can apply to reduce impacts.</p><p><strong><a href="https://www.bbc.com/news/articles/cn8xq677l9xo">UK Proposes New Testing Powers to Prevent AI-Generated Child Abuse Material</a></strong></p><p>The UK government plans to amend the Crime and Policing Bill to let approved testers assess AI models for their ability to generate illegal child sexual abuse material before release. Technology Secretary Liz Kendall said the change aims to ensure AI systems include safeguards at the outset. The Internet Watch Foundation reported removing 426 AI-related CSAM items between January and October 2025, up from 199 in 2024. 
Child safety groups, including the NSPCC, supported the proposal while urging mandatory testing requirements. The amendment would also help developers and charities identify risks involving extreme pornography and non-consensual intimate images, addressing broader concerns about realistic AI-generated abuse content.</p><p><strong><a href="https://thelogic.co/news/buy-canadian-digital-infrastructure-ai-data-centres/">Canada Plans to Extend Buy Canadian Rules to AI and Digital Infrastructure</a></strong></p><p>The Canadian government intends to expand its stricter Buy Canadian policy to cover digital and AI infrastructure as part of an upcoming update to the national AI strategy. AI Minister Evan Solomon said organizations receiving federal funding may be required to use Canadian-made technology when possible. Current rules require federal departments to prioritize domestic suppliers and mandate local content for foreign firms in major projects. Ottawa is considering similar conditions for companies that receive grants or loans and invest in AI-related products. The plan aligns with broader efforts to increase domestic procurement, including funding to improve federal purchasing processes and support small and medium-sized businesses. The government is also reviewing data centre proposals as it assesses sovereign compute needs.</p><p><strong><a href="https://www.korea.net/NewsFocus/policies/view?articleId=282273">Korean Researcher Elected First Chair of Merged OECD AI Governance Body</a></strong></p><p>Kang Ha-yeon of the Korea Information Society Development Institute has been elected the first chair of the newly merged OECD Working Group on Artificial Intelligence Governance and the Global Partnership for Artificial Intelligence. The appointment follows the 2023 integration of GPAI into the OECD, creating a consolidated body that coordinates global cooperation on AI policy. 
The OECD has led international AI governance efforts since adopting its AI principles in 2019, with the working group addressing ethics, safety, and standards. Kang has held leadership roles in AIGO, GPAI, and APEC digital policy forums. She said she aims to support the development of an inclusive policy framework involving both member and non-member countries.</p><p><strong><a href="https://www.cbn.com.cy/article/121443/cyprus-to-join-intergovernmental-initiative-with-greece-and-italy-to-create-ai-gigafactories">Cyprus to Join Greece&#8211;Italy Initiative to Develop AI Gigafactories</a></strong></p><p>Cyprus will participate in a new intergovernmental initiative with Greece and Italy to establish AI Gigafactories, according to discussions held during the 3rd Greece&#8211;Cyprus Intergovernmental Summit in Athens. Officials from both countries reviewed cooperation on social policy, digital transition, and regulatory reforms aligned with EU objectives. Talks also addressed domestic violence prevention, support services for vulnerable groups, and measures to protect minors from digital addiction. The governments agreed to promote EU-wide deployment of digital age-verification and parental-control tools, including Greece&#8217;s &#8220;Kids Wallet&#8221; app, and to deepen collaboration on digital governance and public-service interoperability.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://edition.cnn.com/2025/11/13/tech/ai-safety-task-force-attorneys-general-openai-microsoft">OpenAI and Microsoft Join State Attorneys General in New AI Safety Task Force</a></strong></p><p>North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown launched a new AI Task Force to develop safeguards that AI developers should adopt to reduce risks, particularly for children. OpenAI and Microsoft are the first companies to join, with additional state regulators and firms expected to participate. 
The task force aims to identify emerging AI risks and establish voluntary safety practices amid the absence of federal AI regulation. Jackson and Brown previously opposed a proposed moratorium that would have restricted state AI enforcement. The initiative will also support coordination among states on monitoring AI developments and potential harm, enabling joint legal action if necessary.</p><p><strong><a href="https://mashable.com/article/openai-violated-copyright-laws-gema-lawsuit-germany-court?test_uuid=04wb5avZVbBe1OWK6996faM&amp;test_variant=a">German Court Rules OpenAI Violated Copyright Law in Music Training Case</a></strong></p><p>A court in Munich ruled that OpenAI violated German copyright law by training its models on protected music without permission, following a lawsuit filed by the music rights group GEMA. The court ordered OpenAI to pay an undisclosed amount in damages. OpenAI said it disagreed with the ruling and is considering next steps, noting the decision concerns a limited set of lyrics. The case adds to broader copyright challenges facing major AI companies, with several publishers and media organizations pursuing lawsuits over training data. Recent disputes include actions against OpenAI by news outlets and a $1.5 billion settlement by Anthropic in a U.S. class action involving allegedly pirated books.</p><p><strong><a href="https://dataconomy.com/2025/11/14/apple-is-tightening-the-rules-on-apps-that-share-your-data-with-ai/">Apple Updates App Store Rules to Require Disclosure of Data Sharing with AI Providers</a></strong></p><p>Apple issued new App Review Guidelines requiring developers to disclose and obtain user consent before sharing personal data with third-party AI services. The update expands an existing rule on data sharing to explicitly include AI companies. Apple introduced the change ahead of its planned 2026 release of an AI-enhanced Siri powered in part by Google&#8217;s Gemini technology. 
Apps must now specify when personal data is transmitted to external AI systems, with non-compliant apps subject to removal from the App Store. The revision may affect apps that rely on AI for personalization or data processing.</p><p><strong><a href="https://www.timesnownews.com/technology-science/wikipedia-brings-no-content-scraping-policy-for-ai-models-pay-if-you-want-to-use-the-data-article-153137314">Wikipedia Enforces New Restrictions on AI Scraping, Requires Use of Paid API</a></strong></p><p>The Wikimedia Foundation announced new measures directing AI developers to stop scraping Wikipedia and instead access its content through the paid Wikimedia Enterprise API. The organization said AI companies must credit contributors and use official channels to avoid strain on servers. The policy follows improvements in bot detection that revealed increased automated scraping disguised as human traffic, coinciding with an 8% decline in human visits. Wikimedia said reduced human engagement threatens volunteer participation and donations. The foundation emphasized that responsible data use ensures sustainability of the platform and supports its nonprofit operations. 
The shift aligns with Wikimedia&#8217;s existing AI strategy, which focuses on tools that assist editors with routine tasks and translation.</p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://iapp.org/conference/iapp-europe-data-protection-congress/register-now-dpc25/">IAPP Europe Data Protection Congress 2025</a></strong> | Brussels, Belgium | Nov 19&#8211;20, 2025</p></li><li><p><strong><a href="https://us02web.zoom.us/webinar/register/4717623707471/WN_RA5L4GsiTqCk0M5-U0ghuA">Making the Case for Safety: Turning Responsibility Into Business Value - A Conversation with Luca Belli</a></strong> | Online | Nov 20, 2025</p></li><li><p><strong><a href="https://www.luxatiainternational.com/product/world-ai-governance-regulation-summit">World AI Governance &amp; Regulation Summit</a></strong> | Berlin, Germany | Nov 20&#8211;21, 2025</p></li><li><p><strong><a href="https://ials.sas.ac.uk/events/ilpc-annual-conference-2025-regulating-ai-a-changing-world-oversight-and-enforcement#:~:text=20%20November">ILPC Annual Conference 2025</a></strong> | London, UK | Nov 20&#8211;21, 2025</p></li><li><p><strong><a 
href="https://g7g20-documents.org/database/document/2025-g20-south-africa-sherpa-track-digital-economy-ministers-ministers-language-chairs-statement-task-force-on-artificial-intelligence-data-governance-and-innovation-for-sustainable-development">G20 Summit 2025 (South Africa Presidency)</a></strong> | Johannesburg, South Africa | Nov 22&#8211;23, 2025</p></li><li><p><strong><a href="https://www.eventbrite.ie/e/ethical-ai-leadership-governance-tickets-1245473838779">Ethical AI Leadership &amp; Governance Workshop</a></strong> | Online | Nov 25, 2025</p></li><li><p><strong><a href="https://research.csiro.au/ads/events/nextgen-rai/">Next Generation Responsible AI Symposium</a></strong> | Adelaide, Australia | Dec 1&#8211;2, 2025</p></li><li><p><strong><a href="https://www.digitalsme.eu/events/digital-sme-summit-2025/">DIGITAL SME Summit </a></strong>| Brussels, Belgium | Dec 4, 2025</p></li><li><p><strong><a href="https://www.darden.virginia.edu/lacross-ai-institute/events/conference">UVA Conference on Ethical AI in Business</a> </strong>| Charlottesville, VA, USA | Dec 5, 2025</p></li><li><p><strong><a href="https://www.rightscon.org/rightscon26-call-for-proposals/">RightsCon 2026</a></strong> | Lusaka, Zambia | May 5&#8211;8, 2026</p></li></ul><p>Thank you for reading and see you next time &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 11.11.2025]]></title><description><![CDATA[Senators propose AI job impact reports, EU may ease AI Act rules, Nvidia warns US risks losing AI edge to China]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11112025</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11112025</guid><dc:creator><![CDATA[Alisar 
Mustafa]]></dc:creator><pubDate>Tue, 11 Nov 2025 20:30:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">Linkedin</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://us02web.zoom.us/webinar/register/4717623707471/WN_RA5L4GsiTqCk0M5-U0ghuA" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TBiS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 424w, https://substackcdn.com/image/fetch/$s_!TBiS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 848w, https://substackcdn.com/image/fetch/$s_!TBiS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 1272w, https://substackcdn.com/image/fetch/$s_!TBiS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TBiS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png" width="1456" height="364" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:364,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:298147,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://us02web.zoom.us/webinar/register/4717623707471/WN_RA5L4GsiTqCk0M5-U0ghuA&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://alisarmustafa.substack.com/i/178633829?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TBiS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 424w, https://substackcdn.com/image/fetch/$s_!TBiS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 848w, https://substackcdn.com/image/fetch/$s_!TBiS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 1272w, https://substackcdn.com/image/fetch/$s_!TBiS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcdf2c78-d19e-431e-a556-8a771b620ccf_1584x396.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the 
U.S.</strong>, Congress <a href="https://www.cnbc.com/2025/11/05/ai-jobs-act-warner-hawley.html">proposed</a> the <em>AI-Related Job Impacts Clarity Act</em> requiring companies to report AI-driven workforce changes, while AI Czar David Sacks <a href="https://www.reuters.com/world/us/white-house-ai-czar-rules-out-federal-bailout-sector-2025-11-06/">ruled out </a>federal bailouts as the $500B &#8220;Stargate&#8221; initiative expands. The FDA <a href="https://www.statnews.com/2025/11/05/fda-digital-advisers-therapy-chatbots-regulating-generative-ai/">reviewed</a> regulation of AI therapy chatbots, and South Dakota <a href="https://www.keloland.com/news/local-news/legislature-to-consider-ai-porn-laws/">moved</a> to criminalize AI-generated porn involving non-consenting adults. Meanwhile, Amazon&#8217;s data center boom is <a href="https://www.wsj.com/us-news/what-happened-when-small-town-america-became-data-center-u-s-a-410f25e9?">reshaping</a> rural Oregon, fueling economic growth and housing pressures.</p><p><strong>&#127757; Globally</strong>, the EU plans to <a href="https://www.reuters.com/sustainability/boards-policy-regulation/big-tech-may-win-reprieve-eu-mulls-easing-ai-rules-document-shows-2025-11-07/">ease</a> <em>AI Act</em> compliance rules under its &#8220;Digital Omnibus&#8221; proposal, while Morocco <a href="https://iafrica.com/morocco-unveils-digital-x-0-law-to-embed-ai-data-governance-and-digital-identity-into-national-modernization-agenda/">introduced</a> <em>Digital X.0</em> to govern AI, data, and digital identity. 
India&#8217;s Karnataka <a href="https://economictimes.indiatimes.com/tech/startups/karnataka-cabinet-clears-rs-518-crore-startup-policy-to-support-25000-startups-in-ai-blockchain/articleshow/125138943.cms?from=mdr">approved</a> a Rs 518 crore Startup Policy to back 25,000 deeptech firms, and Chief Minister Siddaramaiah <a href="https://www.deccanherald.com/india/karnataka/cm-policy-soon-to-shield-kannadigas-jobs-from-ai-threat-3783417">announced</a> an AI policy to protect local jobs and promote Kannada education.</p><p><strong>&#128126; In Industry</strong>, Nvidia&#8217;s Jensen Huang <a href="https://www.businessinsider.com/nvidia-jensen-huang-warning-us-china-ai-tech-competition-2025-11">warned</a> U.S. policies risk ceding AI leadership to China, and OpenAI CFO Sarah Friar <a href="https://www.cnbc.com/2025/11/06/openai-cfo-sarah-friar-says-company-is-not-seeking-government-backstop.html?">cla&#8230;</a></p>
      <p>
          <a href="https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11112025">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The AI Policy Newsletter 11.04.2025]]></title><description><![CDATA[Congress moves to ban AI companions for kids, Australia rejects AI data mining exception, OpenAI completes $30B for-profit shift.]]></description><link>https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11042025</link><guid isPermaLink="false">https://alisarmustafa.substack.com/p/the-ai-policy-newsletter-11042025</guid><dc:creator><![CDATA[Alisar Mustafa]]></dc:creator><pubDate>Tue, 04 Nov 2025 19:15:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z-tV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49a6b34a-944e-45ea-a972-d213b0e0eaba_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128064;</p><p><strong>TLDR</strong></p><p><strong>&#127963;&#65039; In the U.S.</strong>, Congress <a href="https://time.com/7328967/ai-josh-hawley-richard-blumenthal-minors-chatbots/">introduced</a> the bipartisan GUARD Act to ban minors from using AI chatbots and require government ID-based age verification, while the White House <a href="https://www.axios.com/2025/10/31/white-house-ai-red-tape">reviewed</a> extensive industry feedback to streamline AI regulation and reduce federal oversight barriers under its deregulatory agenda.</p><p><strong>&#127757; Globally</strong>, Australia <a href="https://ministers.ag.gov.au/media-centre/albanese-government-ensure-australia-prepared-future-copyright-challenges-emerging-ai-26-10-2025/">ruled out</a> a Text and Data Mining Exception while consulting on AI-related copyright reforms to safeguard creators, and South Korea <a href="https://www.koreaherald.com/article/10604337">signed</a> the &#8220;Technology Prosperity Deal&#8221; with the U.S. to expand cooperation in AI, quantum, and 6G technologies. 
Seoul also <a href="https://www.koreaherald.com/article/10602299">established</a> a Defense AI Strategy Office to guide next-generation weapons development and <a href="https://www.koreatimes.co.kr/economy/policy/20251029/korea-launches-ai-based-platform-to-bolster-fight-against-voice-phishing">launched</a> an AI-based anti-phishing platform to counter digital fraud. Meanwhile, Chile&#8217;s <a href="https://www.bloomberg.com/news/articles/2025-10-28/tech-firms-urge-chile-to-relax-proposed-ai-rules?embedded-checkout=true">proposed</a> AI regulation sparked opposition from major tech companies over compliance costs, and India&#8217;s West Bengal Police <a href="https://www.bhaskarenglish.in/local/west-bengal/news/west-bengal-police-artificial-intelligence-cell-formation-experts-rajeev-kumar-guidelines-bhabani-bhavan-ai-policy-136300021.html">announced</a> an AI Cell to guide responsible AI use and transparency in policing.</p><p><strong>&#128126; In Industry</strong>, OpenAI <a href="https://techcrunch.com/2025/10/28/openai-completes-its-for-profit-recapitalization/">finalized</a> its transition to a for-profit structure under a nonprofit foundation, unlocking major investments and expanding Microsoft&#8217;s stake and IP rights amid regulatory and legal scrutiny. 
Character.AI <a href="https://fortune.com/2025/10/29/character-ai-ban-children-teens-chatbots-regulatory-pressure-age-verification-online-harms/">restricted</a> teen access amid lawsuits and regulatory scrutiny, and Clearview AI <a href="https://ppc.land/criminal-charges-filed-against-clearview-ai-after-regulatory-fines-fail/">faces</a> a criminal complaint in Austria after ignoring EU fines.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://alisarmustafa.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em><strong>If you enjoy the content, consider upgrading to a paid subscription. Your help keeps this content free for everyone</strong>.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#127963;&#65039;</p><p><strong>United States&nbsp;</strong></p><p><strong><a href="https://time.com/7328967/ai-josh-hawley-richard-blumenthal-minors-chatbots/">Congress Proposes the GUARD Act to Ban AI Chatbots for Minors</a></strong></p><p>Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) have introduced the GUARD Act, which would require AI chatbot operators in the U.S. to verify users&#8217; ages and prohibit minors from accessing AI companions. The bill defines AI companions as chatbots designed for human-like emotional or interpersonal interactions. It would mandate government ID-based age verification and make it a criminal offense to design chatbots that encourage sexual activity, self-harm, or violence among minors, carrying fines up to $100,000. 
The proposal follows Senate hearings on AI-related harms and lawsuits involving chatbot-linked suicides. The bill also requires AI systems to remind users they are not human and to disclose they do not provide professional advice.</p><p><strong><a href="https://www.axios.com/2025/10/31/white-house-ai-red-tape">White House Reviews Industry Feedback on AI Regulation</a></strong></p><p>The White House is reviewing hundreds of public comments on artificial intelligence regulation as part of its plan to streamline oversight and reduce regulatory barriers. The Office of Science and Technology Policy (OSTP) invited input from industry and advocacy groups to identify areas where existing rules may hinder AI development. Organizations including the Chamber of Commerce, the R Street Institute, and the Consumer Technology Association urged the administration to limit new regulations and remove Biden-era voluntary AI commitments, citing overlapping legal standards. OSTP noted concerns about &#8220;regulatory mismatch,&#8221; such as requirements based on human-centered assumptions. 
The administration is expected to favor a deregulatory federal approach in upcoming guidance.</p><p>&#127757;</p><p><strong>Global&nbsp;</strong></p><p><strong><a href="https://ministers.ag.gov.au/media-centre/albanese-government-ensure-australia-prepared-future-copyright-challenges-emerging-ai-26-10-2025/">Australia Rules Out Text and Data Mining Exception in AI Copyright Review</a></strong></p><p>The Albanese government has launched consultations on updating Australia&#8217;s copyright laws to address challenges posed by artificial intelligence but confirmed it will not introduce a Text and Data Mining Exception. Such an exception would have allowed AI developers to use copyrighted material without permission or payment. Instead, the government is supporting a balanced approach to protect creators while enabling lawful AI innovation. 
The Copyright and AI Reference Group (CAIRG) will examine three priorities: establishing a potential collective licensing framework for AI use, clarifying how copyright applies to AI-generated works, and creating a small-claims forum for low-value infringements.</p><p><strong><a href="https://www.koreaherald.com/article/10604337">South Korea and United States Sign &#8220;Technology Prosperity Deal&#8221; to Strengthen AI and Quantum Cooperation</a></strong></p><p>South Korea and the United States have signed the &#8220;Technology Prosperity Deal,&#8221; a bilateral agreement to expand collaboration in artificial intelligence, quantum computing, biotechnology, 6G, and space exploration. The pact was signed in Gyeongju by Ha Jung-woo, South Korea&#8217;s senior presidential secretary for AI and future planning, and Michael Kratsios, director of the U.S. Office of Science and Technology Policy. The agreement establishes joint efforts to develop a shared AI policy framework, coordinate AI exports, and align safety standards and datasets for trustworthy AI systems. It also includes cooperation on research security, talent exchanges, and next-generation communications. The Korea-U.S. Joint Committee on Science and Technology will oversee implementation, with its next meeting planned for 2026.</p><p><strong><a href="https://www.koreaherald.com/article/10602299">South Korea Establishes Defense AI Strategy Office to Advance Autonomous Weapons Development</a></strong></p><p>South Korea is expanding the use of artificial intelligence in defense with the creation of the Defense Project Future Strategy Office under the Defense Acquisition Program Administration (DAPA). The new unit will lead AI policy and strategy for next-generation weapon systems and oversee R&amp;D on defense semiconductors and unmanned platforms. 
Its responsibilities include designing AI strategies, managing research projects, and coordinating long-term development of autonomous systems such as drones and robotic vehicles. DAPA&#8217;s initiative reflects efforts to adapt to technologies emerging from recent conflicts and to offset declining conscript numbers. Major defense firms, including Hyundai Rotem and Hanwha Systems, are integrating AI into tanks, missile systems, and pilot-assist technologies to enhance battlefield capabilities.</p><p><strong><a href="https://www.koreatimes.co.kr/economy/policy/20251029/korea-launches-ai-based-platform-to-bolster-fight-against-voice-phishing">South Korea Launches AI Platform to Strengthen Voice Phishing Prevention</a></strong></p><p>South Korea&#8217;s Financial Services Commission and major financial institutions have launched the AI-based Anti-Phishing Sharing &amp; Analysis Platform (ASAP) to enhance detection and prevention of voice phishing crimes. Operated by the Financial Security Institute, ASAP enables 130 participating organizations to share real-time information on suspicious accounts, transactions, and fraudulent activities. The platform uses AI-driven pattern analysis to identify and block illicit transfers, including those linked to overseas networks. By integrating data from financial, telecommunications, and investigative agencies, ASAP aims to improve response speed and coordination. 
Officials said the system will also address emerging threats such as deepfake-based fraud and strengthen financial institutions&#8217; preventive capabilities.</p><p><strong><a href="https://www.bloomberg.com/news/articles/2025-10-28/tech-firms-urge-chile-to-relax-proposed-ai-rules?embedded-checkout=true">Global Tech Firms Push Back Against Chile&#8217;s Proposed AI Regulation</a></strong></p><p>Global technology companies are opposing Chile&#8217;s proposed artificial intelligence legislation, which would classify AI systems by risk level and impose proportional oversight, including fines of up to $1.5 million for violations. The bill, which has passed the lower house and awaits Senate approval, aims to protect fundamental rights while supporting innovation. Critics from firms like Amazon Web Services and Google argue that the proposal could slow investment and innovation, citing lengthy regulatory processes. Chile&#8217;s government maintains the bill promotes trust and legal certainty. The measure complements new cybersecurity and data protection laws and could position Chile as a regional leader in responsible AI governance.</p><p><strong><a href="https://www.bhaskarenglish.in/local/west-bengal/news/west-bengal-police-artificial-intelligence-cell-formation-experts-rajeev-kumar-guidelines-bhabani-bhavan-ai-policy-136300021.html">West Bengal Police to Establish AI Cell for Technology-Driven Policing</a></strong></p><p>The West Bengal Police announced plans to create a dedicated Artificial Intelligence (AI) Cell to enhance efficiency, data analysis, and decision-making within the force. Led by an Additional Director General (ADG)-rank officer, the unit will operate under the supervision of Director General of Police Rajeev Kumar at the state police headquarters in Kolkata. The AI Cell will include senior officers and two expert technologists to advise on AI applications and policy development. 
Biweekly meetings will shape AI integration strategies, transparency policies, and training programs to familiarize personnel with AI tools. The initiative aims to institutionalize AI use in law enforcement operations and governance.</p><p>&#128126;</p><p><strong>Industry&nbsp;&nbsp;</strong></p><p><strong><a href="https://techcrunch.com/2025/10/28/openai-completes-its-for-profit-recapitalization/">OpenAI Finalizes For-Profit Restructuring Amid Microsoft Expansion and Legal Scrutiny</a></strong></p><p>OpenAI has completed its long-anticipated recapitalization, officially converting into a for-profit entity under the legal control of a non-profit foundation. The new structure places the OpenAI Foundation at the helm of OpenAI Group, a public benefit corporation with expanded freedom to raise funds and make acquisitions. The Foundation will hold a 26% stake&#8212;plus future equity options&#8212;while Microsoft will own about 27%, with the remainder going to investors and employees. Microsoft also secured extended IP rights to OpenAI models through 2032. The move unlocks a $30 billion investment from SoftBank and follows intense legal and political scrutiny, including opposition from Elon Musk and oversight conditions from California and Delaware officials.</p><p><strong><a href="https://fortune.com/2025/10/29/character-ai-ban-children-teens-chatbots-regulatory-pressure-age-verification-online-harms/">Character.AI Restricts Teen Access Following Lawsuits and Regulatory Investigations</a></strong></p><p>AI startup Character.AI announced it will block users under 18 from engaging in open-ended conversations with its chatbots by November 25, introducing an age-assurance system to verify users&#8217; ages and segment experiences by age group. The change follows regulatory scrutiny and multiple lawsuits alleging the platform exposed minors to harmful content, including cases linked to self-harm and violence. 
The Federal Trade Commission is investigating Character.AI and other firms over youth safety. During the transition, minors&#8217; chat time will be limited to two hours per day. The company stated it aims to create a separate under-18 experience focused on creativity while addressing safety concerns raised by regulators and plaintiffs.</p><p><strong><a href="https://ppc.land/criminal-charges-filed-against-clearview-ai-after-regulatory-fines-fail/#google_vignette">Clearview AI Faces Criminal Complaint in Austria After Ignoring EU Fines</a></strong></p><p>Privacy advocacy group noyb has filed a criminal complaint against Clearview AI and its executives in Austria after the company ignored over &#8364;100 million in fines from European data protection authorities. The complaint, filed under Section 63 of Austria&#8217;s Data Protection Act, accuses Clearview of continuing to collect and process biometric data from Europeans in violation of GDPR orders. Regulators in France, Italy, Greece, the Netherlands, and the U.K. have previously fined Clearview for unlawful facial recognition practices involving more than 60 billion scraped images. 
The criminal action seeks personal accountability for executives, marking the first attempt in Europe to use criminal sanctions against a major AI company for privacy violations.</p><p>&#127797;</p><p><strong>Resources&nbsp;</strong></p><ul><li><p><strong><a href="https://www.theaipolicycourse.com/">The AI Policy Course</a></strong></p></li><li><p><strong><a href="https://www.alisarmustafa.com/resources">AI Policy Resources</a></strong></p></li><li><p><strong><a href="https://www.techpolicy.press/newsletter/">Tech Policy Press Weekly Newsletter</a></strong></p></li><li><p><strong><a href="https://alltechishuman.org/responsible-tech-job-board">All Tech Is Human Job Board</a></strong></p></li></ul><p><strong>&#128197;</strong></p><p><strong>Upcoming Events</strong></p><ul><li><p><strong><a href="https://www.icegov.org/2025/">ICEGOV 2025 (International Conference on Electronic Governance)</a></strong> | Abuja, Nigeria | Nov 4&#8211;7, 2025</p></li><li><p><strong><a href="https://www.mlopsworld.com/">MLOps World + GenAI Summit</a></strong> | Austin, TX, USA | Nov 4&#8211;6, 2025</p></li><li><p><strong><a href="https://events.govtech.com/Colorado-Digital-Government-Summit">Colorado Digital Government Summit</a></strong><a href="https://events.govtech.com/Colorado-Digital-Government-Summit"> </a>| Denver, CO, USA | Nov 5, 2025</p></li><li><p><strong><a href="https://events.govtech.com/GovAI-Coalition-Summit">GovAI Coalition Summit 2025</a></strong><a href="https://events.govtech.com/GovAI-Coalition-Summit"> </a>| Arlington, VA, USA | Nov 5&#8211;6, 2025</p></li><li><p><strong><a href="https://events.govtech.com/Southern-California-Public-Sector-Cybersecurity-Summit">Southern California Public Sector Cybersecurity Summit</a></strong> | Los Angeles, CA, USA | Nov 6, 2025</p></li><li><p><strong><a href="https://mila.quebec/en/event/digital-trust-convention-2025">Digital Trust Convention 2025</a></strong> | Montreal, Canada | Nov 6 &#8211; 7, 2025</p></li><li><p><strong><a 
href="https://iapp.org/conference/iapp-europe-data-protection-congress/register-now-dpc25/">IAPP Europe Data Protection Congress 2025</a></strong> | Brussels, Belgium | Nov 19&#8211;20, 2025</p></li><li><p><strong><a href="https://ials.sas.ac.uk/events/ilpc-annual-conference-2025-regulating-ai-a-changing-world-oversight-and-enforcement#:~:text=20%20November">ILPC Annual Conference 2025</a></strong> | London, UK | Nov 20&#8211;21, 2025</p></li><li><p><strong><a href="https://g7g20-documents.org/database/document/2025-g20-south-africa-sherpa-track-digital-economy-ministers-ministers-language-chairs-statement-task-force-on-artificial-intelligence-data-governance-and-innovation-for-sustainable-development">G20 Summit 2025 (South Africa Presidency)</a></strong> | Johannesburg, South Africa | Nov 22&#8211;23, 2025</p></li><li><p><strong><a href="https://www.rightscon.org/rightscon26-call-for-proposals/">RightsCon 2026</a></strong> | Lusaka, Zambia | May 5&#8211;8, 2026</p></li></ul><p>Thank you for reading and see you next time &#128131;</p><p>Alisar Mustafa</p><p>&#128391;&#65039;<a href="https://www.linkedin.com/in/alisarmustafa1/">LinkedIn</a> | &#129419; <a href="https://bsky.app/profile/alisarmustafa.bsky.social">Bluesky</a></p>]]></content:encoded></item></channel></rss>