The pace of generative AI development (and hype) over the past year has been intense, and difficult even for us experienced librarians, masters of information that we are, to follow. Not only is there a constant stream of new products, but also new academic papers, blog posts, newsletters, and more, from people evaluating, experimenting with, and critiquing those products. With that in mind, Rebecca Fordon shares her favorites, as well as recommendations from her co-bloggers.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: “Hundreds of millions of individuals’ personally identifiable information” is impacted by the privacy weaknesses, according to the Government Accountability Office; Report: Insider Cybersecurity Threats Have Jumped 40% in 4 Years; iOS 17: iPhone Users Report Worrying Privacy Settings Change After Update; and China Cyber Threat Overview and Advisories.
Hallucinations in generative AI are not a new topic. If you watch the news at all (or read the front page of the New York Times), you’ve heard of the two New York attorneys who used ChatGPT to create entire fake cases and then submitted them to the court. After that case, which resulted in a media frenzy and (somewhat mild) court sanctions, many attorneys are wary of using generative AI for legal research. But vendors are working to limit hallucinations and increase trust, and some legal tasks are less affected by hallucinations than others. Law librarian and attorney Rebecca Fordon guides us to an understanding of how and why hallucinations occur, how we can effectively evaluate new products, and how to identify lower-risk uses.
Jim Calloway, Director of the Oklahoma Bar Association’s Management Assistance Program, and Julie Bays, OBA Practice Management Advisor, help attorneys use technology and other tools to efficiently manage their offices. They recommend that now is a good time to experiment with specific AI-powered tools, and they suggest the best techniques for using them.
Investigative data journalist Jon Keegan and former tech lawyer Linda Woods Hyman teach you how to spot tricks and hidden disclosures within these interminable documents—and even how to claw back some privacy.
As a researcher of social media and AI, Prof. Anjana Susarla recognizes the immensely transformative potential of generative AI models, but believes that these systems pose risks. In particular, in the context of consumer protection, these models can produce errors, exhibit biases and violate personal data privacy.
This new bi-monthly column by Sabrina I. Pacifici highlights news, reports, government and industry documents, and academic papers on the subject of AI’s fast-paced impact on many facets of the global financial system. The chronological links provided are to the primary sources and, where available, include links to alternate free versions. Each entry includes the publication name, date published, article title, abstract and tags. Pacifici is also compiling a list of actionable subject matter resources at the end of each column that will be updated regularly.
As a technology ethics educator and researcher, Carey Fiesler has thought about AI systems amplifying harmful biases and stereotypes, students using AI deceptively, privacy concerns, people being fooled by misinformation, and labor exploitation. Fiesler characterizes this not as technical debt but as accruing ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.
Jordan Furlong writes that the legal profession is about to go through what manufacturing already has. In the next few years, he argues, legally trained generative AI will replace lawyer labour on a scale we’ve never seen before. An enormous amount of lawyer activity consists of researching, analyzing, writing, developing arguments, critiquing counter-claims, and drafting responses. A machine has now come along that does most of these things, much faster than lawyers do. Today, the machine needs lawyers to carefully review its efforts; within two years, Furlong doubts it will.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: It’s Their Content, You’re Just Licensing it; Understanding the NIST Cybersecurity Framework; Here’s how Google Maps cracked down on fake contributions last year; and Clearview AI scraped 30 billion images from Facebook and gave them to cops.