Author archives

Nathan Sanders is a data scientist focused on creating open technology to help vulnerable communities and all stakeholders participate in the analysis and development of public policy. As a Berkman Klein Center Fellow in 2020-2021, Nathan helped create the Harvard Climate Justice Design Fellowship program and the Massachusetts Platform for Legislative Engagement (MAPLE). Nathan has helped build and lead data science teams in industry at Legendary Pictures, WarnerMedia, and Flagship Pioneering, developing and applying methods in Bayesian inference, natural language processing, computer vision, and deep learning. In the policy domain, he has built open source applications for participatory oversight of environmental regulation in collaboration with the Mystic River Watershed Association in Massachusetts; developed statistical methods for public health analysis, modeling long-term trends in the rate of mass public shootings; and served as a science policy fellow in the Massachusetts Senate and House of Representatives. Nathan is a co-founder of the astrophysical literature digest Astrobites, the multilingual association of graduate student science writing collaboratives ScienceBites, and the international science communication workshop series ComSciCon. He serves on the Board of Directors of the American Institute of Physics and is an Associate Editor of the Harvard Data Science Review. Nathan did his undergraduate work in Physics and Astrophysics at Michigan State University and earned his PhD in Astronomy and Astrophysics from Harvard University.

Can you trust AI? Here’s why you shouldn’t

Security expert Bruce Schneier and data scientist Nathan Sanders argue that people who come to rely on AIs will have to trust them implicitly to navigate daily life. That means they will need to be sure the AIs aren't secretly working for someone else. Across the internet, devices and services that seem to work for you already secretly work against you. Smart TVs spy on you. Phone apps collect and sell your data. Many apps and websites manipulate you through dark patterns: design elements that deliberately mislead, coerce, or deceive users. This is surveillance capitalism, and AI is shaping up to be part of it.

Subjects: AI, Big Data, Civil Liberties, Cyberlaw, Human Rights, Legal Research, Privacy