ChatGPT has taken the world by storm. Within two months of its release it reached 100 million active users, making it the fastest-growing consumer application ever launched. Users are attracted to the tool’s advanced capabilities – and concerned by its potential to cause disruption in various sectors. A much less discussed implication is the privacy risk ChatGPT poses to each and every one of us. Just yesterday, Google unveiled its own conversational AI called Bard, and others will surely follow. Technology companies working on AI have well and truly entered an arms race. Uri Gal identifies a significant issue overlooked in the current hype – this technology is fuelled by our personal data.
Hannah Della Bosca, PhD Candidate and Research Assistant at the Sydney Environment Institute, University of Sydney, addresses a distinct form of emerging climate denial. You may have experienced it without even realising. It’s called implicatory denial, and it happens when you consciously recognise climate change as a serious threat yet make no significant changes to your everyday behaviour in response.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our awareness. Four highlights from this week: Have a Conversation (Not a Lecture) About Fraud With Older Adults; List of consumer reporting companies; Cybersecurity High-Risk Series: Challenges in Securing Federal Systems and Information; and NIST debuts long-anticipated AI risk management framework.