David H. Rothman’s timely, out-of-the-box commentary addresses the growing wave of news outlets abruptly shutting down their websites, laying off staff, and in some cases eliminating access to their archives. Rothman proposes an alternative to the “how do I charge them enough” mindset as a way to stem the tide of closures, an avenue he urges billionaire Jeff Bezos, owner of the Washington Post, to consider: a good-sized trust or corporate equivalent would enable the Washington Post to be run as a sustainable enterprise in the public interest rather than as a mere profit generator.
Noted journalist and scholarly communication observer Richard Poynder explains why he has given up on the open access movement. This email interview was conducted by Rick Anderson.
An interview by Ryan Tate with the New York Times reporter and longtime privacy journalist Kashmir Hill on how investigating Clearview AI helped her appreciate facial recognition—and envision a chaotic future.
Whether speaking with lawyers and law students who haven’t yet tried ChatGPT or collaborating with postdoctoral experts in explainable and legal AI who have 20+ years of machine learning and natural language processing experience, Colin Lachance, legal tech innovator and leader, is no closer to knowing precisely how and when permanent change will arrive. He is, however, unshakeably convinced that the change will be enormous, uneven, disruptive and, in many cases, invisible.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Six highlights from this week: LinkedIn Phishing Scam Exploits Smart Links to Steal Microsoft Accounts; Digital Dystopia – The Danger in Buying What the EdTech Surveillance Industry is Selling; Login.gov to add facial recognition tech; Temporary moratorium on use of facial recognition in NY; The Fake Browser Update Scam Gets a Makeover; and How to Spot and Avoid Zelle Scams in 2023.
Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found, adding new context to the debate over the efficacy of crime prediction software. Geolitica, known as PredPol until a 2021 rebrand, produces software that ingests data from crime incident reports and generates daily predictions on where and when crimes are most likely to occur. Aaron Sankin, investigative reporter, and Surya Mattu, senior data engineer and investigative data journalist, examined 23,631 predictions generated by Geolitica between Feb. 25 and Dec. 18, 2018, for the Plainfield Police Department (PD). Each prediction they analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent: fewer than 100 of the predictions lined up with a crime in the predicted category that was later reported to police.
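The headline figure follows from simple arithmetic on the numbers reported above. A minimal sketch, using 100 as the stated upper bound on matched predictions (the exact matched count is not given in the summary):

```python
# Checking The Markup's reported success rate for Geolitica's predictions.
# Figures from the article: 23,631 predictions analyzed; "fewer than 100"
# lined up with a later-reported crime in the predicted category.
total_predictions = 23_631
matched = 100  # upper bound; the true count was fewer than this

success_rate = matched / total_predictions
print(f"Success rate: {success_rate:.2%}")  # ~0.42%, i.e. under half a percent
```

Even at the upper bound of 100 matches, the rate stays below 0.5%, consistent with the article’s "less than half a percent" characterization.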
Jocelyn Stilwell-Tong, Law Librarian, California Court of Appeal, Sixth Appellate District, has determined that although free online AI tools are useful, the developing products from major legal research platforms show great promise. These paid products control for issues like hallucinations and provide citations supporting their work, so a researcher can confirm the accuracy and context of the materials the AI is pulling from. Issues surrounding data governance (what the company does with your uploaded material and search history) can be controlled by contract, and the legal vendors understand that this is a concern for most legal clients.
The pace of generative AI development (and hype) over the past year has been intense, and difficult even for us experienced librarians, masters of information that we are, to follow. Not only is there a constant stream of new products, but also new academic papers, blog posts, newsletters, and more, from people evaluating, experimenting with, and critiquing those products. With that in mind, Rebecca Fordon shares her favorites, as well as recommendations from her co-bloggers.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Hundreds of millions of individuals’ personally identifiable information is impacted by privacy weaknesses, according to the Government Accountability Office; Report: Insider Cybersecurity Threats Have Jumped 40% in 4 Years; iOS 17: iPhone Users Report Worrying Privacy Settings Change After Update; and China Cyber Threat Overview and Advisories.
Hallucinations in generative AI are not a new topic. If you watch the news at all (or read the front page of the New York Times), you’ve heard of the two New York attorneys who used ChatGPT to create entire fake cases and then submitted them to the court. After that case, which resulted in a media frenzy and (somewhat mild) court sanctions, many attorneys are wary of using generative AI for legal research. But vendors are working to limit hallucinations and increase trust, and some legal tasks are less affected by hallucinations. Law Librarian and attorney Rebecca Fordon guides us to an understanding of how and why hallucinations occur, and how we can effectively evaluate new products and identify lower-risk uses.