Category: Legal Technology

Predictive Policing Software Terrible At Predicting Crimes

Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found, adding new context to the debate over the efficacy of crime prediction software. Geolitica, known as PredPol until a 2021 rebrand, produces software that ingests data from crime incident reports and generates daily predictions on where and when crimes are most likely to occur. Aaron Sankin, Investigative Reporter, and Surya Mattu, Senior Data Engineer and Investigative Data Journalist, examined 23,631 predictions generated by Geolitica between Feb. 25 and Dec. 18, 2018, for the Plainfield Police Department (PD). Each prediction they analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent: fewer than 100 of the predictions lined up with a crime in the predicted category that was also later reported to police.

Subjects: Big Data, Civil Liberties, Criminal Law, Data Mining, Privacy, Spyware, Technology Trends

2023 Developments in Legal AI and the Courts

Jocelyn Stilwell-Tong, Law Librarian, California Court of Appeal, Sixth Appellate District, has determined that although free online AI tools are useful, the products being developed by the major legal research platforms show great promise. These paid products control for issues like hallucinations and provide citations supporting their work, so a researcher can confirm the accuracy and context of the materials the AI is drawing from. Issues surrounding data governance (what the company does with your uploaded material and search history) can be controlled by contract, and the legal vendors understand that this is a concern for most legal clients.

Subjects: AI, KM, Law Librarians, Legal Research, LEXIS, Technology Trends, Westlaw

Pete Recommends – Weekly highlights on cyber security issues, October 7, 2023

Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Delete your digital history from dozens of companies with this app; Need a VPN? Here Are the Ones You Can Officially Trust; H&R Block, Meta, and Google Slapped With RICO Suit; and 3 Chatbot Privacy Risks and Concerns You Should Know About.

Subjects: AI, Congress, Cybercrime, Cybersecurity, Financial System, Firewalls, Legal Research, Legislative, Privacy, United States Law

LLRX September 2023 Issue

Articles and Columns for September 2023: Adding a ‘Group Advisory Layer’ to Your Use of Generative AI Tools Through Structured Prompting: The G-A-L Method – The emergence of Large Language Models (LLMs) in legal research signifies a transformative shift. Dennis Kennedy asks us to imagine a world where expert advice is at your fingertips, instantly …

Subjects: KM

Adding a ‘Group Advisory Layer’ to Your Use of Generative AI Tools Through Structured Prompting: The G-A-L Method

Dennis Kennedy asks us to imagine a world where expert advice is at your fingertips, instantly available, tailored just for you. Think of a tool that’s always ready to give expert advice, without the need for complex coding or tech skills. The Group Advisory Layer Method (G-A-L Method™) revolutionizes decision-making by merging traditional principles of mastermind groups and advisory boards with the cutting-edge capabilities of generative AI. Traditional advisory boards, often hindered by logistics and time constraints, meet their match as the G-A-L Method offers on-demand, diverse, and tailored insights, all without the real-world hassle. It’s like having a virtual team you can chat with any time, made up of tireless AI-created ‘personas’ that act like real people. Instead of juggling schedules or waiting for feedback, you get quick and practical tips from this always-on expert team. The G-A-L Method pioneers dynamic group interactions using personas to give you practical, just-in-time expert advice. What’s more, it makes sure real people (like you) are involved where they add the most value. With the G-A-L Method, you’re not just listening to machines – you’re teaming up with them. This white paper by Dennis Kennedy, well-known legal tech and innovation advisor, law professor, infotech lawyer, professional speaker, author, and podcaster, is an invitation to unlock the untapped potential of these generative AI tools in a practical, structured way to move your efforts forward. Kennedy states that we are poised at the brink of a transformative era where informed decisions can be made rapidly and confidently. The G-A-L Method is more than a technique; it’s a game-changer.
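By way of illustration only, here is a minimal, hypothetical Python sketch of persona-style structured prompting in the general spirit described above; it is not Kennedy’s actual G-A-L Method™, whose prompts and structure are laid out in the white paper. The persona descriptions, the model name, and the advisory_round helper are all assumptions made for the example.

```python
# Hypothetical sketch of persona-style structured prompting with an AI "advisory board".
# Not the G-A-L Method(TM); personas, model name, and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical advisor personas; in practice you would tailor these to your question.
PERSONAS = {
    "Managing Partner": "a pragmatic law firm managing partner focused on risk and cost",
    "Legal Technologist": "a legal technologist who evaluates tools for accuracy and data governance",
    "Client Advocate": "a corporate client who cares about outcomes, clarity, and budget",
}

def advisory_round(question: str) -> dict[str, str]:
    """Put the same question to each persona and collect short, structured answers."""
    answers = {}
    for name, description in PERSONAS.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name; use whatever you have access to
            messages=[
                {"role": "system",
                 "content": f"You are {description}. Answer in 3 concise bullet points."},
                {"role": "user", "content": question},
            ],
        )
        answers[name] = response.choices[0].message.content
    return answers

if __name__ == "__main__":
    question = "Should our firm pilot a generative AI legal research tool this quarter?"
    for persona, advice in advisory_round(question).items():
        print(f"--- {persona} ---\n{advice}\n")
```

The human stays in the loop exactly where the summary says they add the most value: framing the question, weighing the personas’ competing answers, and making the final call.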

Subjects: AI, Education, KM, Legal Research

Keeping Up With Generative AI in the Law

The pace of generative AI development (and hype) over the past year has been intense, and difficult even for us experienced librarians, masters of information that we are, to follow. Not only is there a constant stream of new products, but also new academic papers, blog posts, newsletters, and more, from people evaluating, experimenting with, and critiquing those products. With that in mind, Rebecca Fordon shares her favorites, as well as recommendations from her co-bloggers.

Subjects: AI, Education, KM, Legal Education, Legal Research, Legal Technology, Librarian Resources, Social Media, Technology Trends

Pete Recommends – Weekly highlights on cyber security issues, September 30, 2023

Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Hundreds of millions of individuals’ personally identifiable information is impacted by the privacy weaknesses, according to the Government Accountability Office; Report: Insider Cybersecurity Threats Have Jumped 40% in 4 Years; iOS 17: iPhone Users Report Worrying Privacy Settings Change After Update; and China Cyber Threat Overview and Advisories.

Subjects: Cybercrime, Cybersecurity, E-Commerce, Government Resources, Privacy, Technology Trends

Taylor Swift and the end of the Hollywood writers strike – a tale of two media narratives

Professor Aarushi Bhandari was taken aback when she learned that not a single student had heard that the Writers Guild of America had reached a deal with the Alliance of Motion Picture and Television Producers, or AMPTP, after a nearly 150-day strike. This historic deal includes significant raises, improvements in health care and pension support, and – unique to our times – protections against the use of artificial intelligence to write screenplays. Across online media platforms, the WGA announcement on Sept. 24, 2023, ended up buried under headlines and posts about the celebrity duo of Taylor Swift and Chiefs tight end Travis Kelce. To Bhandari, this disconnect felt like a microcosm of the entire online media ecosystem.

Subjects: Communications, Internet Trends, KM, News Resources, Social Media

The Truth About Hallucinations in Legal Research AI: How to Avoid Them and Trust Your Sources

Hallucinations in generative AI are not a new topic. If you watch the news at all (or read the front page of the New York Times), you’ve heard of the two New York attorneys who used ChatGPT to create entire fake cases and then submitted them to the court. After that case, which resulted in a media frenzy and (somewhat mild) court sanctions, many attorneys are wary of using generative AI for legal research. But vendors are working to limit hallucinations and increase trust, and some legal tasks are less affected by hallucinations. Law Librarian and attorney Rebecca Fordon guides us to an understanding of how and why hallucinations occur and how we can effectively evaluate new products and identify lower-risk uses.

Subjects: AI, Education, KM, Legal Education, Legal Research, Legal Research Training, Search Engines, Technology Trends

Gliding, not searching: Here’s how to reset your view of ChatGPT to steer it to better results

Human factors engineer James Intriligator makes a clear and important distinction for researchers: that unlike a search engine, with static and stored results, ChatGPT never copies, retrieves or looks up information from anywhere. Rather, it generates every word anew. You send it a prompt, and based on its machine-learning training on massive amounts of text, it creates an original answer. Most importantly, each chat retains context during a conversation, meaning that questions asked and answers provided earlier in the conversation will inform responses it generates later. The answers, therefore, are malleable, and the user needs to participate in an iterative process to shape them into something useful.
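To make that concrete, here is a minimal sketch in Python, using the OpenAI chat client as a stand-in for ChatGPT’s own interface (the model name and prompts are illustrative assumptions), of an iterative exchange in which every new prompt is sent along with the full conversation history, so earlier turns shape later answers.

```python
# Minimal sketch of an iterative chat: each request carries the whole running history,
# so earlier questions and answers inform later responses. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [{"role": "system", "content": "You are a concise research assistant."}]

def ask(prompt: str) -> str:
    """Send the new prompt plus the full conversation so far, then store the reply."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder; any chat-capable model works
        messages=history,      # the entire conversation, not just the latest prompt
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Iterative steering: the second prompt only works because the first is in the history.
print(ask("Summarize how generative language models produce answers."))
print(ask("Now shorten that to two sentences aimed at a first-year law student."))
```

Nothing here is retrieved from a stored index of answers; each reply is generated fresh, and the follow-up prompt reshapes the earlier one rather than starting a new search.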

Subjects: AI, KM, Search Engines, Search Strategies