How Meta Brings in Millions Off Political Violence
CalMatters and The Markup used Facebook’s own AI model to count the millions of dollars the company makes after violent news events. By Colin Lecher and Tomas Apodaca.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis, Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: Why Multi-factor authentication (MFA) alone won’t protect you in the age of adversarial AI; HHS to crack down on providers blocking access to electronic medical records; Justice Department, Microsoft disrupt Russian intelligence cyber scheme; and Reports: China hacked Verizon and AT&T, may have accessed US wiretap systems.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis, Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Five highlights from this week: FTC Says Social Media Platforms Engage in ‘Vast Surveillance’ of Users; AI voices are officially too realistic; Tor Network Denies Report That ‘Anonymity Is Completely Canceled’; ‘Terrorgram’ Charges Show US Has Had Tools to Crack Down on Far-Right Terrorism All Along; and DuckDuckGo Joins AI Chat, Promises Enhanced Anonymity.
Recently, there has been considerable interest in large language models: machine learning systems that produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output, often called “AI hallucinations.” Dr. Michael Townsen Hicks, Dr. James Humphries and Dr. Joe Slater argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. They distinguish two ways in which the models can be said to be bullshitters, and argue that the models clearly meet at least one of these definitions. They further argue that describing AI misrepresentations as bullshit is both a more useful and a more accurate way of predicting and discussing the behaviour of these systems.
Since the release of ChatGPT in November 2022, the world has seen an incredible surge in investment, development and use of artificial intelligence (AI) applications. According to one estimate, the amount of computational power used for AI is doubling roughly every 100 days. Researchers Gordon Noble and Fiona Berry turn our attention to AI’s environmental impacts, which have been largely overlooked: a single query to an AI-powered chatbot can use up to ten times as much energy as an old-fashioned Google search.
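To put that doubling estimate in perspective, here is a minimal back-of-the-envelope sketch in Python (assuming, purely for illustration, that the 100-day doubling period cited above holds steady for a full year):

```python
# Back-of-the-envelope: if AI compute doubles every 100 days,
# a year contains 365 / 100 = 3.65 doublings, so the implied
# annual multiplier is 2 ** 3.65.
doubling_period_days = 100  # the estimate cited in the article
days_per_year = 365

annual_multiplier = 2 ** (days_per_year / doubling_period_days)
print(f"Implied compute growth per year: ~{annual_multiplier:.1f}x")  # ~12.6x
```

In other words, the estimate implies roughly a twelvefold increase in AI compute each year, which underlines the energy concerns Noble and Berry raise.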
Referencing an article in this month’s Georgetown Law Technology Review: “…traditional AI algorithms normally operate by carrying out a specific function or completing a task using a data set that contains information on how that function or task has previously been done. In other words, traditional AI is able to follow a set of rules, make predictions, or utilize instructions to complete a task; but it is not creating anything new in doing so. Generative AI (GAI) has the ability to create something new, specifically new content.” Marcus P. Zillman’s new resource guide spans subjects including law, economics, education, information technology, and the planning, strategic deployment and use of GAI, as well as best practices and governance.
Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis, Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Five highlights from this week: Want free and anonymous access to AI chatbots? DuckDuckGo’s new tool is for you; Windows Recall is changing in 3 key aspects after user backlash; Harvard, MIT and Wharton research reveals pitfalls of relying on junior staff for AI training; AI in law enforcement is risky, but holds promise; and The NSA’s guide to keeping your phone and yourself safe.
David H. Rothman’s timely, outside-the-box commentary addresses the growing wave of news outlets abruptly shutting down their websites, laying off staff and, in some cases, eliminating access to their archives. Rather than asking “how do I charge them enough” to stem the tide of closures, Rothman proposes an alternative, one he prompts billionaire Jeff Bezos, owner of the Washington Post, to consider: a good-sized trust or corporate equivalent that would enable the Post to be run as a sustainable enterprise in the public interest, rather than as a mere profit generator.
Noted journalist and scholarly communication observer Richard Poynder explains why he has given up on the open access movement. This email interview was conducted by Rick Anderson.
An interview by Ryan Tate with New York Times reporter and longtime privacy journalist Kashmir Hill on how investigating Clearview AI helped her appreciate facial recognition and envision a chaotic future.