Pete Recommends – Weekly highlights on cyber security issues, April 22, 2023

Subject: p.s. Twitter forces all links …
Source: Fedi.Tips

p.s. Twitter forces all links to go through its own link shortener. Most people probably don’t realise this, because the links just look like normal links.You can see this on Firefox on a computer. Hovering the mouse over a link shows its true URL in the bottom left corner.

On Twitter, all links are actually addresses, so Twitter will be able to track people clicking on them.

On Mastodon, links are exactly what they appear to be.

See the screenshots for a comparison.

You don’t need to use link shorteners on Mastodon!

All links on Mastodon count as 23 characters towards your post limit, no matter how long the links actually are.

Link shortener services like, etc track users who click on their links. This is really bad for privacy. If you use link shorteners on Mastodon, people may assume you are just doing it for tracking purposes.

More info at

Subject: New ChatGPT4.0 Concerns: A Market for Stolen Premium Accounts
Source: Check Point Software blog

Since December 2022, Check Point Research (CPR) has raised concerns about ChatGPT’s implications for cyber security. Now, CPR also warns that there is an increase in the trade of stolen ChatGPT Premium accounts, which enable cyber criminals to get around OpenAI’s geofencing restrictions and get unlimited access to ChatGPT.The market of account takeovers (ATOs), stolen accounts to different online services, is one of the most flourishing markets in the hacking underground and in the dark web. Traditionally this market’s focus was on stolen financial services accounts (banks, online payment systems, etc.), social media, online dating websites, emails, and more.

Since March 2023, CPR sees an increase in discussion and trade of stolen ChatGPT accounts, with a focus on Premium accounts:

Category SECURITY:

Category RESEARCH:


Subject: NIST wants to mitigate smart home telehealth cyber security risks
Source: GCN

The COVID-19 pandemic proliferated the use of smart speakers and other internet of things technologies for telehealth purposes, however, using smart speakers to share sensitive personal health information for telehealth purposes could pose a cybersecurity and privacy risk, which the government is trying to address, according to a notice filed in the Federal Register on Monday.The National Institute of Standards and Technology is looking for comments and products to help it mitigate cybersecurity risks in telehealth smart home integration as part of the National Cybersecurity Center of Excellence project addressing this issue. Consumers are utilizing their own commercial devices and incorporating them into a health delivery organization’s telehealth solution, as a result these organizations may have difficulty identifying and addressing cybersecurity risks because they are not in control of these items.

The NCCoE project aims to build a reference architecture utilizing the NIST Risk Management Framework, NIST Cybersecurity Framework and the NIST Privacy Framework to help find ys to address these issues.

Subject: ‘Shut it off immediately’: The health industry responds to data privacy crackdown

A series of federal data privacy crackdowns is complicating how health care companies market their services online.The Federal Trade Commission has led the way in the new enforcement push, fining telehealth companies for violating their customers’ privacy and barring them from doing so in the future. The director of HHS’ Office for Civil Rights said her staff has launched its own investigation, calling online health data collection “problematic” and “widespread.” The agency also recently sought to update health data privacy protections to bar providers and insurers from releasing information about a patient seeking or obtaining a legal abortion.

HHS’ Office for Civil Rights surprised insurers and health care providers in December when it issued a bulletin expanding its definition of personally identifiable health information and restricting the use of certain marketing technology.

The office warned that entities covered by HIPAA aren’t allowed to wantonly disclose HIPAA-protected data to vendors or use tracking technology that would cause “impermissible” disclosures of protected health information.

That protected data can include email addresses, IP addresses, or geographic location information that can be tied to an individual, under HHS’ 22-year-old HIPAA privacy rule.

“Firms that think they can cash in on consumers’ health data because HIPAA doesn’t apply should think again,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “Our recent actions against GoodRx and BetterHelp make clear that we are prepared to use every tool to protect Americans’ health privacy, and hold accountable those who abuse it.”

Filed under:

Subject: EFF on the UN Cybercrime Treaty
Source: Schneier on Security blog>cybercrime-treaty.html

EFF has a good explainer on the problems with the new UN Cybercrime Treaty, currently being negotiated in Vienna.The draft treaty has the potential to rewrite criminal laws around the world, possibly adding over 30 criminal offenses and new expansive police powers for both domestic and international criminal investigations…

Tags: cybercrime, laws, treaties, UN

See EFF’s Deeplinks Blog:

Subject: Hijacked AI assistants can now hack your data
Source: The Hill[h/t Sabrina Pacifici]

In February, a team of cybersecurity researchers successfully cajoled a popular AI assistant into trying to extract sensitive data from unsuspecting users by convincing it to adopt a “data pirate” persona. The AI’s “ahoy’s” and “matey’s” in pursuit of personal details were humorous, but the implications for the future of cybersecurity are not: The researchers have provided proof of concept for a future of rogue hacking AIs.Building on OpenAI’s viral launch of ChatGPT, a range of companies are now empowering their AI assistants with new abilities to browse the internet and interact with online services. But potential users of these powerful new aides need to carefully weigh how they balance the benefits of cutting-edge AI agents with the fact that they can be made to turn on their users with relative ease.

The researchers’ attack — dubbed “indirect prompt injection” — exploits a significant vulnerability in these AI systems. Though usually highly capable, these models can occasionally exhibit gullibility, irrationality and an inability to recognize their own limits. That, mixed with their programming to eagerly follow instructions, means that certain cleverly worded commands can “convince” systems such as ChatGPT to override their built-in safeguards.

prompt injection now poses a serious cyberecurity risk — exploiting weaknesses in AI systems’ intelligence, rather than traditional software code.

Early adopters of powerful new AI tools should recognize that they are subjects of a large-scale experiment with a new kind of cyberattack. AI’s new capabilities may be alluring, but the more power one gives to AI assistants, the more vulnerable one is to attack. Organizations, corporations, and governmental departments with security concerns would be wise to disallow their personnel from using such AI assistants, at least until the risks are better known.


See also:

Subject: AI Incident Database
Source: AID

AID: “Intelligent systems are currently prone to unforeseen and often dangerous failures when they are deployed to the real world. Much like the transportation sector before it (e.g., FAA and FARS) and more recently computer systems, intelligent systems require a repository of problems experienced in the real world so that future researchers and developers may mitigate or avoid repeated bad outcomes. What is an Incident? The initial set of more than 1,000 incident reports have been intentionally broad in nature. Current examples include,…


Abstracted from beSpacific
Copyright © 2023 beSpacific, All rights reserved.

Subject: Mullvad VPN was subject to a search warrant. Customer data not compromised
Source: Mullvad VPN blog

On April 18 at least six police officers from the National Operations Department (NOA) of the Swedish Police visited the Mullvad VPN office in Gothenburg with a search warrant.
They intended to seize computers with customer data.In line with our policies such customer data did not exist. We argued they had no reason to expect to find what they were looking for and any seizures would therefore be illegal under Swedish law. After demonstrating that this is indeed how our service works and them consulting the prosecutor they left without taking anything and without any customer information.
Posted in: AI, Computer Security, Cybercrime, Cybersecurity, Healthcare, Privacy