Lisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University, writes:

AI tools can help us create content, learn about the world and (perhaps) eliminate the more mundane tasks in life – but they aren't perfect. They've been shown to hallucinate information, use other people's work without consent, and embed social conventions, including apologies, to gain users' trust.

For example, certain AI chatbots, such as "companion" bots, are often designed to respond empathetically. This makes them seem particularly believable. Despite our awe and wonder, we must be critical consumers of these tools – or risk being misled.

Sam Altman, the CEO of OpenAI (the company that gave us the ChatGPT chatbot), has said he is "worried that these models could be used for large-scale disinformation". As someone who studies how humans use technology to access information, so am I.