As a researcher of social media and AI, Prof. Anjana Susarla recognizes the immense transformative potential of generative AI models, but believes these systems also pose risks. In particular, in the context of consumer protection, these models can produce errors, exhibit biases, and violate personal data privacy.
This new bi-monthly column by Sabrina I. Pacifici highlights news, reports, government and industry documents, and academic papers on the subject of AI's fast-paced impact on many facets of the global financial system. The chronological links provided are to the primary sources and, where available, to alternate free versions. Each entry includes the publication name, date published, article title, abstract and tags. Pacifici is also compiling a list of actionable subject matter resources at the end of each column that will be updated regularly.
As a technology ethics educator and researcher, Carey Fiesler has thought about AI systems amplifying harmful biases and stereotypes, students using AI deceptively, privacy concerns, people being fooled by misinformation, and labor exploitation. Fiesler characterizes this not as technical debt but as accruing ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.
Jordan Furlong writes that the legal profession is about to go through what manufacturing already has. In the next few years, legally trained generative AI will replace lawyer labour on a scale we've never seen before. An enormous amount of lawyer activity consists of researching, analyzing, writing, developing arguments, critiquing counter-claims, and drafting responses. A machine has now come along that does most of these things, much faster than lawyers do. Today, the machine needs lawyers to carefully review its efforts; within two years, Furlong doubts it will.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: It's Their Content, You're Just Licensing it; Understanding the NIST Cybersecurity Framework; Here's how Google Maps cracked down on fake contributions last year; and Clearview AI scraped 30 billion images from Facebook and gave them to cops.
Professor Nicole Gillespie and Research Fellows Caitlin Curtis, Javad Pool and Steven Lockey discuss how their new 17-country study involving over 17,000 people reveals how much and in what ways we trust AI in the workplace, how we view the risks and benefits, and what is expected for AI to be trusted. They find that only one in two employees are willing to trust AI at work. Their attitude depends on their role, what country they live in, and what the AI is used for. However, people across the globe are nearly unanimous in their expectations of what needs to be in place for AI to be trusted.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Canceling subscriptions is notoriously difficult. A proposed FTC rule wants to change that; Analysts share 8 ChatGPT security predictions for 2023; It's impossible to review security cameras in the age of breaches and ransomware; and TikTok parent ByteDance owns a bunch of other popular apps. Seems relevant!
In preparation for a presentation about race and academic libraries, Curtis Kendrick, formerly Dean and currently Binghamton University Libraries Faculty and Staff mentor, tried ChatGPT (Jan 9 version) to see what it (they?) had to say. He was curious about how it worked and how accurately it responded to queries. For our consideration, Kendrick offers his analysis of this interaction.
Attorney and legal technologist Nicole Black cautions users that ChatGPT is a great start, but that's all it is. No matter what you're using ChatGPT for, whether for personal or professional reasons, you'll need to have a full understanding of the topic at hand and thoroughly review, edit, and supplement the draft language it provides you.
Disquiet in the archives: archivists make tough calls with far-reaching consequences – they deserve our support
Stuart Kells, Adjunct Professor, College of Arts, Social Sciences and Commerce, La Trobe University explains why for technological, ethical and political reasons, the world’s archivists are suddenly very busy. Advances in digital imaging and communications are feeding an already intense interest in provenance, authorship and material culture. Two recent discoveries – a woman’s name scratched in the margins of an 8th-century manuscript, and John Milton’s annotations in a copy of Shakespeare’s First Folio held in the Free Library of Philadelphia – are examples of how new tools are revealing new evidence, and how distant scholars are making fascinating connections. At the same time, and even more importantly, the holdings of archives, libraries and museums – “memory institutions” – are being scrutinised as the world grapples with legacies of racism, imperialism, slavery and oppression. Some of the holdings speak to heinous episodes and indefensible values. And some of them were flat-out stolen.