Author archives

Lawrence Lessig is the Roy L. Furman Professor of Law and Leadership at Harvard Law School. Prior to returning to Harvard, he taught at Stanford Law School, where he founded the Center for Internet and Society, and at the University of Chicago. He clerked for Judge Richard Posner on the 7th Circuit Court of Appeals and Justice Antonin Scalia on the United States Supreme Court. Lessig is the founder of Equal Citizens and a founding board member of Creative Commons, and serves on the Scientific Board of the AXA Research Fund. A member of the American Academy of Arts and Sciences and the American Philosophical Society, he has received numerous awards, including a Webby, the Free Software Foundation’s Freedom Award, the Scientific American 50 Award, and the Fastcase 50 Award. Once cited by The New Yorker as “the most important thinker on intellectual property in the Internet era,” Lessig has turned his focus from law and technology to “institutional corruption”—relationships which, while legal, weaken public trust in an institution—especially as that affects democracy.

His books are: They Don’t Represent Us: Reclaiming Our Democracy (November 2019), Fidelity & Constraint: How the Supreme Court Has Read the American Constitution (May 2019), America, Compromised (2018), Republic, Lost v2 (2015), The USA is Lesterland (2014), One Way Forward (2012), Republic, Lost: How Money Corrupts Congress—and a Plan to Stop It (2011), Remix: Making Art and Commerce Thrive in the Hybrid Economy (2008), Code v2 (2006), Free Culture (2004), The Future of Ideas (2001), and Code and Other Laws of Cyberspace (1999).

Lessig holds a BA in economics and a BS in management from the University of Pennsylvania, an MA in philosophy from Cambridge University, and a JD from Yale.

How AI could take over elections and undermine democracy

Archon Fung, Professor of Citizenship and Self-Government, Harvard Kennedy School, and Lawrence Lessig, Professor of Law and Leadership, Harvard University, write: “Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?

“Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters. Altman did not elaborate, but he might have had something like this scenario in mind.

“Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election. While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.”

Subjects: AI, Communications Law, Congress, Constitutional Law, KM, Legal Research, Social Media