Day archives: March 30th, 2023

AI tools are generating convincing misinformation. Engaging with them means being on high alert

Lisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University writes: AI tools can help us create content, learn about the world and (perhaps) eliminate the more mundane tasks in life – but they aren’t perfect. They’ve been shown to hallucinate information, use other people’s work without consent, and embed social conventions, including apologies, to gain users’ trust. For example, certain AI chatbots, such as “companion” bots, are often developed with the intent to have empathetic responses. This makes them seem particularly believable.

Despite our awe and wonder, we must be critical consumers of these tools – or risk being misled. Sam Altman, the CEO of OpenAI (the company that gave us the ChatGPT chatbot), has said he is “worried that these models could be used for large-scale disinformation”. As someone who studies how humans use technology to access information, so am I.

Subjects: AI, Communications, Internet Trends, KM

A survey of over 17,000 people indicates only half of us are willing to trust AI at work

Professor Nicole Gillespie and Research Fellows Caitlin Curtis, Javad Pool and Steven Lockey discuss their new 17-country study of over 17,000 people, which reveals how much and in what ways we trust AI in the workplace, how we view its risks and benefits, and what needs to be in place for AI to be trusted. They find that only one in two employees is willing to trust AI at work. People’s attitudes depend on their role, the country they live in, and what the AI is used for. However, people across the globe are nearly unanimous in their expectations of what needs to be in place for AI to be trusted.

Subjects: AI, Information Management, Software, Technology Trends