The pace of generative AI development (and hype) over the past year has been intense, and difficult even for us experienced librarians, masters of information that we are, to follow. Not only is there a constant stream of new products, but also new academic papers, blog posts, newsletters, and more, from people evaluating, experimenting with, and critiquing those products. With that in mind, Rebecca Fordon shares her favorites, as well as recommendations from her co-bloggers.
Hallucinations in generative AI are not a new topic. If you watch the news at all (or read the front page of the New York Times), you’ve heard of the two New York attorneys who used ChatGPT to create entire fake cases and then submitted them to the court. After that case, which resulted in a media frenzy and (somewhat mild) court sanctions, many attorneys are wary of using generative AI for legal research. But vendors are working to limit hallucinations and increase trust, and some legal tasks are less affected by hallucinations. Law librarian and attorney Rebecca Fordon guides us to an understanding of how and why hallucinations occur, how we can effectively evaluate new products, and how to identify lower-risk uses.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Zoom Contradicts Its Own Policy About Training AI On Your Data; ‘Hypnotized’ ChatGPT, Bard Generate Malicious Code, Bad Advice; SEC charges big banks with doing business through messaging apps without keeping records; and White House announces cybersecurity plan to protect nation’s public schools.
Nicole A. Cooke, Augusta Baker Endowed Chair and a Professor at the School of Library and Information Science at the University of South Carolina, identifies the significant and socially charged work of librarians who are defending the rights of readers and writers in the battles raging across the U.S. over censorship, book challenges and book bans. Cooke states, “As long as there have been book challenges, there have been those who defend intellectual freedom and the right to read freely. Librarians and library workers have long been crucial players in the defense of books and ideas.” At the 2023 annual American Library Association Conference, scholar Ibram X. Kendi praised library professionals and reminded them that “if you’re fighting book bans, if you’re fighting against censorship, then you are a freedom fighter.”
Several polls in the past couple of years (including from Ipsos, YouGov and most recently Savanta on behalf of the King’s College Policy Institute and the BBC) have examined the kinds of conspiratorial beliefs people hold. The findings have led to a lot of concern and discussion, and there are several revealing aspects of these polls. Magda Osman, Principal Research Associate in Basic and Applied Decision Making, Cambridge Judge Business School, is interested in which claims are considered conspiratorial and how these are phrased. But she is also interested in the widespread belief that conspiracy theories are apparently on the rise, thanks to the internet and social media. Is this true, and how concerned should we really be about conspiracy theories?
As a technology ethics educator and researcher, Carey Fiesler has thought about AI systems amplifying harmful biases and stereotypes, students using AI deceptively, privacy concerns, people being fooled by misinformation, and labor exploitation. Fiesler characterizes this not as technical debt but as accruing ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.
In this article, Saikiran Chandha, CEO and founder of SciSpace, discusses the impact of GPT-3 and related models on research, the open questions they raise, and the steps that scholarly publishers can take to protect their interests.
As services like ChatGPT continue to grow in both capabilities and usage – including in education and academia – Professor Stephen Dobson asks whether it is high time for universities to revert to the time-tested oral exam.
In a recent paper, Prof. Chantelle Gray coined the term “algopopulism”: algorithmically aided politics. The political content in our personal feeds does not merely represent the world and politics to us. It creates new, sometimes “alternative,” realities. It changes how we encounter and understand politics, and even how we understand reality itself.