AI Safety Laws Are Not (Necessarily) a First Amendment Problem
Whatever their policy merits, safety limitations on AI development generally do not raise First Amendment issues.
Lawfare Daily: OpenAI’s Shutdown of State-Backed Information Operations with Alex Iftimie
Discussing OpenAI's response to state-backed information operations using its AI services.
To Protect Kids Online, Follow the Law
Courts have repeatedly struck down states’ child safety bills. Looking to past cases gives lawmakers a better playbook for future legislation.
Lawfare Daily: Ashley Deeks and Mark Klamberg on AI and National Security
How is the military use of AI being regulated?
The U.S. and China Need an AI Incidents Hotline
Ironically, the two countries can look to the past, not the future, for inspiration on how to mitigate AI-related risk.
Cyber, MacGyver, and the Limits of Covert Power
A review of Lennart Maschmeyer, “Subversion: From Covert Operations to Cyber Conflict” (Oxford University Press, 2024).
TikTok Manipulation Report Is Too Little Too Late
The latest edition of the Seriously Risky Business cybersecurity newsletter, now on Lawfare.
Standards of Care and Safe Harbors in Software Liability: A Primer
Deciphering the Biden administration’s nascent software liability efforts.
Rational Security: The “Cute Little Ears” Edition
This week, Alan Rozenshtein and Scott Anderson sat down with Lawfare all-stars Natalie Orpett, Eugenia Lostri, and Kevin Frazier.
Lawfare Daily: Former FCC Chair Tom Wheeler on AI Regulation
How should policymakers approach AI regulation?
What We Don’t Know About AI and What It Means for Policy
AI’s future cost and the trajectory of its development are currently unknown. Good AI policy will take that into account.
Cloud Un-Cover: CSRB Tells It Like It Is But What Comes Next Is on Us
Lagging policy upholds a status quo in which cloud vendors’ design decisions about how their systems work (and work together) are almost entirely opaque.