Category «AI»

Pete Recommends – Weekly highlights on cyber security issues, October 29, 2023

Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis, Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Victims of Deepfakes Are Fighting Back; Without a Trace: How to Take Your Phone Off the Grid; Microsoft Fixes Excel Feature That Forced Scientists to Rename Human Genes; and Flipper Zero can now spam Android, Windows users with Bluetooth alerts.

Subjects: AI, Congress, Cybercrime, Cybersecurity, Legal Research, Privacy

Schrödinger’s AI – Where Everything and Nothing Changes

Whether speaking with lawyers and law students who haven’t gotten around to trying ChatGPT or collaborating with postdoctoral experts in explainable and legal AI who have 20+ years of machine learning and natural language processing experience, Colin Lachance, legal tech innovator and leader, is no closer to understanding in what way and precisely when permanent change will come. He is, however, unshakeably convinced that the change will be enormous, uneven, disruptive and, in many cases, invisible.

Subjects: AI, Competitive Intelligence, Information Management, KM, Legal Education, Legal Profession, Legal Technology, Technology Trends

Why Google, Bing and other search engines’ embrace of generative AI threatens $68 billion SEO industry

Dr. Ravi Sen discusses how Google, Microsoft and others boast that generative artificial intelligence tools like ChatGPT will make searching the internet better than ever for users. For example, rather than having to wade through a sea of URLs, users will be able to just get an answer combed from the entire internet. The rise of AI-fueled search engines also raises concerns, such as the opacity over where information comes from, the potential for “hallucinated” answers and copyright issues. But Sen argues that one other consequence may be the destruction of the US$68 billion search engine optimization industry that companies like Google helped create.

Subjects: AI, AI in Banking and Finance, Economy, Search Engines, Search Strategies

Pete Recommends – Weekly highlights on cyber security issues, October 21, 2023

Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis, Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Six highlights from this week: LinkedIn Phishing Scam Exploits Smart Links to Steal Microsoft Accounts; Digital Dystopia – The Danger in Buying What the EdTech Surveillance Industry is Selling; Login.gov to add facial recognition tech; Temporary moratorium on use of facial recognition in NY; The Fake Browser Update Scam Gets a Makeover; and How to Spot and Avoid Zelle Scams in 2023.

Subjects: AI, Civil Liberties, Cybercrime, Cybersecurity, Education, Government Resources, Privacy, Technology Trends

2023 Developments in Legal AI and the Courts

Jocelyn Stilwell-Tong, Law Librarian, California Court of Appeal, Sixth Appellate District, has determined that although free online AI tools are useful, the developing products from major legal research platforms show great promise. These paid products control for issues like hallucinations and provide citations supporting their work, so a researcher can confirm the accuracy and context of the materials the AI is pulling from. Issues surrounding data governance (what the company does with your uploaded material and search history) can be controlled by contract, and the legal vendors understand that this is a concern for most legal clients.

Subjects: AI, KM, Law Librarians, Legal Research, LEXIS, Technology Trends, Westlaw

Pete Recommends – Weekly highlights on cyber security issues, October 7, 2023

Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis, Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Delete your digital history from dozens of companies with this app; Need a VPN? Here Are the Ones You Can Officially Trust; H&R Block, Meta, and Google Slapped With RICO Suit; and 3 Chatbot Privacy Risks and Concerns You Should Know About.

Subjects: AI, Congress, Cybercrime, Cybersecurity, Financial System, Firewalls, Legal Research, Legislative, Privacy, United States Law

Adding a ‘Group Advisory Layer’ to Your Use of Generative AI Tools Through Structured Prompting: The G-A-L Method

Dennis Kennedy asks us to imagine a world where expert advice is at your fingertips, instantly available, tailored just for you. Think of a tool that’s always ready to give expert advice, without the need for complex coding or tech skills. The Group Advisory Layer Method (G-A-L Method™) revolutionizes decision-making by merging traditional principles of mastermind groups and advisory boards with the cutting-edge capabilities of generative AI. Traditional advisory boards, often hindered by logistics and time constraints, meet their match as the G-A-L Method offers on-demand, diverse, and tailored insights, all without the real-world hassle. It’s like having a virtual team you can chat with any time, made up of tireless AI-created ‘personas’ that act like real people. Instead of juggling schedules or waiting for feedback, you get quick and practical tips from this always-on expert team. The G-A-L Method pioneers dynamic group interactions using personas to give you practical, just-in-time expert advice. What’s more, it makes sure real people (like you) are involved where they add the most value. With the G-A-L Method, you’re not just listening to machines – you’re teaming up with them. This white paper by Dennis Kennedy, well-known legal tech and innovation advisor, law professor, infotech lawyer, professional speaker, author, and podcaster, is an invitation to unlock the untapped potential of these generative AI tools in a practical, structured way to move your efforts forward. Kennedy states that we are poised at the brink of a transformative era where informed decisions can be made rapidly and confidently. The G-A-L Method is more than a technique: it’s a game-changer.
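
As a rough, hypothetical illustration of the persona idea only (not Kennedy’s actual G-A-L Method™ prompts, which are laid out in his white paper), a group-advisory-style prompt could be assembled along the following lines; the advisor personas and the build_advisory_prompt helper are invented for this sketch.

```python
# Hypothetical sketch of a group-advisory structured prompt.
# The personas and helper below are illustrative only; they are not
# Dennis Kennedy's G-A-L Method prompts.

ADVISORS = {
    "Managing Partner": "weigh business risk and client relationships",
    "Legal Technologist": "weigh tooling, data security, and workflow fit",
    "Ethics Counsel": "weigh professional responsibility and confidentiality",
}

def build_advisory_prompt(question: str, advisors: dict) -> str:
    """Compose one prompt asking a generative AI tool to answer in the
    voice of each advisor persona, then synthesize the perspectives."""
    lines = [
        "Act as an advisory board made up of the personas listed below.",
        "For each persona, give a short recommendation from that perspective.",
        "Close with a synthesis that flags where the personas disagree.",
        "",
    ]
    for name, brief in advisors.items():
        lines.append(f"- {name}: {brief}")
    lines += ["", f"Question: {question}"]
    return "\n".join(lines)

if __name__ == "__main__":
    # Paste the printed prompt into the chat tool of your choice,
    # or send it through whatever API client you already use.
    print(build_advisory_prompt(
        "Should our firm pilot a generative AI research tool this quarter?",
        ADVISORS,
    ))
```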

Subjects: AI, Education, KM, Legal Research

Keeping Up With Generative AI in the Law

The pace of generative AI development (and hype) over the past year has been intense, and difficult to follow even for experienced librarians, masters of information though they are. Not only is there a constant stream of new products, but also new academic papers, blog posts, newsletters, and more, from people evaluating, experimenting with, and critiquing those products. With that in mind, Rebecca Fordon shares her favorites, as well as recommendations from her co-bloggers.

Subjects: AI, Education, KM, Legal Education, Legal Research, Legal Technology, Librarian Resources, Social Media, Technology Trends

The Truth About Hallucinations in Legal Research AI: How to Avoid Them and Trust Your Sources

Hallucinations in generative AI are not a new topic. If you watch the news at all (or read the front page of the New York Times), you’ve heard of the two New York attorneys who used ChatGPT to fabricate entire fake cases and then submitted them to the court. After that case, which resulted in a media frenzy and (somewhat mild) court sanctions, many attorneys are wary of using generative AI for legal research. But vendors are working to limit hallucinations and increase trust, and some legal tasks are less affected by hallucinations. Law Librarian and attorney Rebecca Fordon guides us to an understanding of how and why hallucinations occur and how we can effectively evaluate new products and identify lower-risk uses.

Subjects: AI, Education, KM, Legal Education, Legal Research, Legal Research Training, Search Engines, Technology Trends

Gliding, not searching: Here’s how to reset your view of ChatGPT to steer it to better results

Human factors engineer James Intriligator makes a clear and important distinction for researchers: unlike a search engine, with its static, stored results, ChatGPT never copies, retrieves or looks up information from anywhere. Rather, it generates every word anew. You send it a prompt, and based on its machine-learning training on massive amounts of text, it creates an original answer. Most importantly, each chat retains context during a conversation, meaning that questions asked and answers provided earlier in the conversation will inform responses it generates later. The answers, therefore, are malleable, and the user needs to participate in an iterative process to shape them into something useful.
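
As a minimal sketch of the iterative, context-carrying interaction Intriligator describes, the snippet below keeps the whole conversation and sends it with every new prompt, so later answers are shaped by earlier turns. The send_to_model function is a placeholder standing in for whatever chat interface or API client you actually use.

```python
# Minimal sketch of iterative prompting with accumulated context.
# send_to_model is a placeholder, not a real API; the point is that the
# full message history travels with every request.

conversation = []

def send_to_model(messages):
    """Placeholder: forward the accumulated conversation to a chat model
    and return its reply. Swap in a real client call here."""
    return f"[reply informed by {len(messages)} prior messages]"

def ask(prompt):
    conversation.append({"role": "user", "content": prompt})
    reply = send_to_model(conversation)   # earlier turns inform this answer
    conversation.append({"role": "assistant", "content": reply})
    return reply

# Each follow-up refines the earlier answer instead of starting over.
print(ask("Summarize the leading theories of trademark dilution."))
print(ask("Now condense that into three bullet points for a client memo."))
print(ask("Rewrite the bullets in plainer language."))
```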

Subjects: AI, KM, Search Engines, Search Strategies