Articles and Columns for September 2023
- Adding a ‘Group Advisory Layer’ to Your Use of Generative AI Tools Through Structured Prompting: The G-A-L Method – The emergence of Large Language Models (LLMs) in legal research signifies a transformative shift. Dennis Kennedy asks us to imagine a world where expert advice is at your fingertips, instantly available and tailored just for you: a tool that is always ready to give expert advice, without the need for complex coding or tech skills. The Group Advisory Layer Method (G-A-L Method™) revolutionizes decision-making by merging the traditional principles of mastermind groups and advisory boards with the cutting-edge capabilities of generative AI. Traditional advisory boards, often hindered by logistics and time constraints, meet their match: the G-A-L Method offers on-demand, diverse, and tailored insights, all without the real-world hassle. It’s like having a virtual team you can chat with at any time, made up of tireless AI-created ‘personas’ that act like real people. Instead of juggling schedules or waiting for feedback, you get quick and practical tips from this always-on expert team. The G-A-L Method pioneers dynamic group interactions, using personas to deliver practical, just-in-time expert advice, and it makes sure real people (like you) are involved where they add the most value. With the G-A-L Method, you’re not just listening to machines – you’re teaming up with them. (A minimal, illustrative sketch of a persona-panel prompt appears after the article list below.) This white paper by Dennis Kennedy, well-known legal tech and innovation advisor, law professor, infotech lawyer, professional speaker, author, and podcaster, is an invitation to unlock the untapped potential of these generative AI tools in a practical, structured way and to move your efforts forward. Kennedy states that we are poised at the brink of a transformative era in which informed decisions can be made rapidly and confidently. The G-A-L Method is more than a technique: it’s a game-changer.
- Keeping Up With Generative AI in the Law – The pace of generative AI development (and hype) over the past year has been intense, and difficult even for us experienced librarians, masters of information that we are, to follow. Not only is there a constant stream of new products, but also new academic papers, blog posts, newsletters, and more, from people evaluating, experimenting with, and critiquing those products. With that in mind, Rebecca Fordon shares her favorites, as well as recommendations from her co-bloggers.
- AI in Banking and Finance – September 30, 2023 – This semi-monthly column by Sabrina I. Pacifici highlights news, government reports, industry white papers and academic papers on the subject of AI’s fast-paced impact on the banking and finance sectors. Four highlights from this week: European Central Bank Is Experimenting With a New Tool: A.I.; UM expert testifies on the dangers of AI in banking; 80% of Large Enterprise Finance Teams Will Use Internal AI Platforms by 2026; and Five Use Cases for CFOs with Generative AI: Q&A with Alex Bant.
- Taylor Swift and the end of the Hollywood writers strike – a tale of two media narratives – Professor Aarushi Bhandari was taken aback when she learned that not a single student had heard that the Writers Guild of America had reached a deal with the Alliance of Motion Picture and Television Producers, or AMPTP, after a nearly 150-day strike. This historic deal includes significant raises, improvements in health care and pension support, and – unique to our times – protections against the use of artificial intelligence to write screenplays. Across online media platforms, the WGA announcement on Sept. 24, 2023, ended up buried under headlines and posts about the celebrity duo of Taylor Swift and Chiefs tight end Travis Kelce. To Bhandari, this disconnect felt like a microcosm of the entire online media ecosystem.
- The Truth About Hallucinations in Legal Research AI: How to Avoid Them and Trust Your Sources – Hallucinations in generative AI are not a new topic. If you watch the news at all (or read the front page of the New York Times), you’ve heard of the two New York attorneys who used ChatGPT to create entirely fake cases and then submitted them to the court. After that case, which resulted in a media frenzy and (somewhat mild) court sanctions, many attorneys are wary of using generative AI for legal research. But vendors are working to limit hallucinations and increase trust, and some legal tasks are less affected by hallucinations than others. Law librarian and attorney Rebecca Fordon guides us to an understanding of how and why hallucinations occur and how we can effectively evaluate new products and identify lower-risk uses.
- The Generations War comes to the law firm – The Greek philosopher Heraclitus taught that “change is the only constant in life.” It is not rhetorical to state that we are living in a time of seismic change. Jordan Furlong frames the challenges and opportunities: it’s not about who’s right, Boomers or Millennials; it’s about the most profound change to the fabric of the legal profession in 40 years, and how we’re going to get through it.
- Gliding, not searching: Here’s how to reset your view of ChatGPT to steer it to better results – Human factors engineer James Intriligator makes a clear and important distinction for researchers: unlike a search engine, with its static, stored results, ChatGPT never copies, retrieves, or looks up information from anywhere. Rather, it generates every word anew. You send it a prompt, and based on its machine-learning training on massive amounts of text, it creates an original answer. Most importantly, each chat retains context during the conversation, meaning that questions asked and answers provided earlier will inform the responses it generates later. The answers, therefore, are malleable, and the user needs to participate in an iterative process to shape them into something useful. (A minimal sketch of this conversational loop, seen at the API level, appears after the article list below.)
- Artificial Intelligence Tools and Tips – Jim Calloway, Director of the Oklahoma Bar Association’s Management Assistance Program, and Julie Bays, OBA Practice Management Advisor, both of whom help attorneys use technology and other tools to manage their offices efficiently, recommend that now is a good time to experiment with specific AI-powered tools, and they suggest the best techniques for using them.
- Can you trust AI? Here’s why you shouldn’t – Security expert Bruce Schneier and data scientist Nathan Sanders believe that people who come to rely on AIs will have to trust them implicitly to navigate daily life. That means they will need to be sure the AIs aren’t secretly working for someone else. Across the internet, devices and services that seem to work for you already secretly work against you. Smart TVs spy on you. Phone apps collect and sell your data. Many apps and websites manipulate you through dark patterns, design elements that deliberately mislead, coerce or deceive website visitors. This is surveillance capitalism, and AI is shaping up to be part of it.
- LLMs Do Not Obviate the Need for UX – Legaltech Hub’s Nicola Shaver discusses why it is time to level-set about advanced AI: it can’t do everything. Or, perhaps more practically, a large language model can’t replace all of the other technology you already have. One of the main reasons for this is the importance of an interface and a built-out user experience (UX) that offers a journey through the system aligned with the way users actually work. There are other reasons a large language model (LLM) won’t replace all of your technology (one of which is that advanced AI simply isn’t necessary for everything), but this article focuses on UX.
- Google Chrome just rolled out a new way to track you and serve ads. Here’s what you need to know – Late last week, Google announced that something called the Privacy Sandbox has been rolled out to a “majority” of Chrome users and will reach 100% of users in the coming months. But what is it, exactly? The new suite of features represents a fundamental shift in how Chrome will track user data for the benefit of advertisers. Erica Mealy explains that, instead of third-party cookies, Chrome can now tap directly into your browsing history to gather information on advertising “topics.” Understanding how it works – and whether you want to opt in or out – is important, since Chrome remains the most widely used browser in the world, with a 63% market share as of May 2023. (A minimal sketch of how a website would query these topics appears after the article list below.)
- AI in Banking and Finance – September 15, 2023 – This semi-monthly column by Sabrina I. Pacifici highlights news, government reports, industry white papers and academic papers on the subject of AI’s fast-paced impact on the banking and finance sectors. Four highlights from this week: AI in the financial industry: Machine learning in banking; Machine Learning Boosts Profits – Banking Giant’s Deep Dive; Banks embracing the AI future need to pay attention to its risks; and the S.E.C.’s Chief Is Worried About A.I.
- Pete Recommends – Weekly highlights on cyber security issues, September 30, 2023 – Four highlights from this week: “Hundreds of millions of individuals’ personally identifiable information” is impacted by the privacy weaknesses, according to the Government Accountability Office; Report: Insider Cybersecurity Threats Have Jumped 40% in 4 Years; iOS 17: iPhone Users Report Worrying Privacy Settings Change After Update; and China Cyber Threat Overview and Advisories.
- Pete Recommends – Weekly highlights on cyber security issues, September 23, 2023 – Four highlights from this week: New Privacy Badger Prevents Google From Mangling More of Your Links and Invading Your Privacy; Microsoft AI team accidentally leaks 38TB of private company data; California legislature passes ‘Delete Act’ to protect consumer data; and Starlink lost over 200 satellites in two months.
- Pete Recommends – Weekly highlights on cyber security issues, September 16, 2023 – Three highlights from this week: Appeals Court Upholds Public.Resource.Org’s Right to Post Public Laws and Regulations Online; Hackers Are Salivating Over Electric Cars; and How Google Assistant and Amazon Alexa Target You With Ads.
- Pete Recommends – Weekly highlights on cyber security issues, September 9, 2023 – Four highlights from this week: Cars Are the Worst Product Category We Have Ever Reviewed for Privacy; Artificial Intelligence’s Use and Rapid Growth Highlight Its Possibilities and Perils; How To Stop Facebook Using Your Personal Data To Train AI; and CBP Tells Airports Its New Facial Recognition Target is 75% of Passengers Leaving the US.
- Pete Recommends – Weekly highlights on cyber security issues, September 2, 2023 – Five highlights from this week: X [Twitter] to collect biometric and employment information from paid users and will use your Twitter data to train Musk’s AI; Hacking campaign bruteforces Cisco VPNs to breach networks; When Apps Go Rogue; NCSC Issues Cyber Warning Over AI Chatbots; and Is it safe to charge my phone at a public charging station?
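For readers who want a concrete feel for the persona-panel idea behind Dennis Kennedy’s G-A-L Method before reading the white paper, here is a minimal, illustrative sketch of a structured “advisory board” prompt sent to a generic chat-completion endpoint. The advisor personas, prompt wording, model name, and OpenAI-style endpoint are all assumptions made for illustration; they are not Kennedy’s published prompts or method.

```typescript
// Illustrative sketch only: ask one model to role-play a small "advisory board"
// of personas, in the spirit of a group advisory layer. The persona list, prompt
// wording, model name, and endpoint are assumptions, not the G-A-L white paper's text.

const ADVISORS = [
  "a risk-averse general counsel",
  "a legal operations director focused on cost",
  "a legal tech product manager",
];

async function askAdvisoryPanel(question: string): Promise<string> {
  const prompt = [
    "Act as a panel of three advisors:",
    ...ADVISORS.map((a, i) => `${i + 1}. ${a}`),
    "Give each advisor's short answer to the question below in turn,",
    "then summarize where they agree and disagree so a human can decide.",
    `Question: ${question}`,
  ].join("\n");

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // assumes a key in the environment
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative model choice
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // the panel's combined advice, for human review
}

askAdvisoryPanel("Should our firm pilot a generative AI research tool this quarter?")
  .then(console.log);
```

As in the article, the model does the drafting while the human reviews where the “advisors” agree and disagree before making the actual decision.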
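James Intriligator’s point about context and iteration in “Gliding, not searching” is easiest to see at the API level: nothing is looked up, and a chat’s “memory” is simply the message history that gets resent with each new prompt. The following sketch assumes the same OpenAI-style endpoint as above; the model name and prompts are illustrative.

```typescript
// Illustrative sketch of the "gliding" loop: every turn resends the whole
// conversation, so earlier questions and answers steer later responses.
// Same assumed OpenAI-style endpoint and illustrative model as the sketch above.

type Msg = { role: "system" | "user" | "assistant"; content: string };

async function chatTurn(history: Msg[], userInput: string): Promise<Msg[]> {
  const messages: Msg[] = [...history, { role: "user", content: userInput }];
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-4o-mini", messages }),
  });
  const data = await res.json();
  const reply: Msg = data.choices[0].message; // generated fresh, not retrieved
  return [...messages, reply]; // carry the full context into the next turn
}

async function main() {
  let history: Msg[] = [
    { role: "system", content: "You are helping a law librarian refine a research memo." },
  ];
  // Iteratively shape the answer instead of treating each prompt as a fresh search.
  history = await chatTurn(history, "Summarize the fair use factors in two sentences.");
  history = await chatTurn(history, "Now rewrite that for a client with no legal background.");
  console.log(history[history.length - 1].content);
}

main();
```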
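For the Privacy Sandbox article, the advertising “topics” Chrome derives from browsing history are exposed to websites through the Topics API. The sketch below shows how a page might feature-detect and call it; exact availability, permissions, and the shape of the returned data depend on the Chrome version and the user’s settings, so treat it as illustrative rather than definitive.

```typescript
// Illustrative sketch: feature-detect Chrome's Topics API and log whatever it returns.
// Meant to run in a page context in a supporting Chrome build; the return shape is
// browser-defined, so it is logged as-is rather than assumed here.

async function logBrowsingTopics(): Promise<void> {
  const doc = document as unknown as {
    browsingTopics?: () => Promise<unknown[]>;
  };
  if (typeof doc.browsingTopics !== "function") {
    console.log("Topics API not available in this browser or context.");
    return;
  }
  try {
    const topics = await doc.browsingTopics();
    console.log("Advertising topics this page could receive:", JSON.stringify(topics));
  } catch (err) {
    console.log("Topics API call was blocked or failed:", err);
  }
}

logBrowsingTopics();
```

Whether the call succeeds, and which topics it returns, is exactly the opt-in/opt-out question the article walks readers through.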
LLRX.com® – the free web journal on law, technology, knowledge discovery and research for Librarians, Lawyers, Researchers, Academics, and Journalists. Founded in 1996.