Pete Recommends – Weekly highlights on cyber security issues, July 6, 2024

Subject: Generative AI is new attack vector endangering enterprises, says CrowdStrike CTO
Source: ZDNET

Cybersecurity researchers have been warning for quite a while now that generative artificial intelligence (GenAI) programs are vulnerable to a vast array of attacks, from specially crafted prompts that can break guardrails to data leaks that can reveal sensitive information. The deeper the research goes, the more experts are finding out just how much GenAI is a wide-open risk, especially to enterprise users with extremely sensitive and valuable data.

Also: Generative AI can easily be made malicious despite guardrails, say scholars

“This is a new attack vector that opens up a new attack surface,” said Elia Zaitsev, chief technology officer of cyber-security vendor CrowdStrike, in an interview with ZDNET.

“I see with generative AI a lot of people just rushing to use this technology, and they’re bypassing the normal controls and methods” of secure computing, said Zaitsev.

“In many ways, you can think of generative AI technology as a new operating system, or a new programming language,” said Zaitsev. “A lot of people don’t have expertise with what the pros and cons are, and how to use it correctly, how to secure it correctly.”

The threat, however, is broader than a poorly designed application. The same problem of centralizing a bunch of valuable information exists with all large language model (LLM) technology, said Zaitsev.

“I call it naked LLMs,” he said, referring to large language models. “If I train a bunch of sensitive information, put it in a large language model, and then make that large language model directly accessible to an end user, then prompt injection attacks can be used where you can get it to basically dump out all the training information, including information that’s sensitive.”

… the CEO of data storage vendor Pure Storage, Charlie Giancarlo, remarked that LLMs are “not ready for enterprise infrastructure yet.”

Cybersecurity experts call such malware-less code “living off the land,” said Zaitsev, using vulnerabilities inherent in a software program by design. “You’re not bringing in anything external, you’re just taking advantage of what’s built into the operating system.”

Among the techniques to mitigate risk are validating a user’s prompt before it goes to an LLM, and then validating the response before it is sent back to the user.

“You don’t allow users to pass prompts that haven’t been inspected, directly into the LLM,” said Zaitsev.
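The inspect-before-and-after pattern Zaitsev describes can be sketched in a few lines. This is a minimal illustration, not CrowdStrike's implementation: the pattern lists, function names, and blocking messages are all illustrative assumptions, and a real deployment would use trained classifiers or a policy engine rather than a static regex blocklist.

```python
import re

# Hypothetical patterns for illustration only; real systems need far
# more robust detection than a static blocklist.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal .*(system prompt|training data)", re.IGNORECASE),
]
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like strings in output
]

def validate_prompt(prompt: str) -> bool:
    """Inspect the user's prompt before it reaches the LLM."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)

def validate_response(response: str) -> bool:
    """Inspect the LLM's response before it goes back to the user."""
    return not any(p.search(response) for p in SENSITIVE_PATTERNS)

def guarded_query(prompt: str, llm) -> str:
    """Wrap an LLM call so nothing uninspected passes in or out."""
    if not validate_prompt(prompt):
        return "Request blocked: prompt failed inspection."
    response = llm(prompt)
    if not validate_response(response):
        return "Response withheld: possible sensitive data."
    return response
```

The point of the wrapper is that the "naked LLM" is never directly exposed: every prompt and every completion passes through a checkpoint the application controls.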


Subject: AI-powered scams and what you can do about them
Source: TechCrunch

[h/t Sabrina] AI is here to help, whether you’re drafting an email, making some concept art, or running a scam on vulnerable folks by making them think you’re a friend or relative in distress. AI is so versatile! But since some people would rather not be scammed, let’s talk a little about what to watch out for.

The last few years have seen a huge uptick not just in the quality of generated media, from text to audio to images and video, but also in how cheaply and easily that media can be created. The same type of tool that helps a concept artist cook up some fantasy monsters or spaceships, or lets a non-native speaker improve their business English, can be put to malicious use as well.

Don’t expect the Terminator to knock on your door and sell you on a Ponzi scheme — these are the same old scams we’ve been facing for years, but with a generative AI twist that makes them easier, cheaper, or more convincing.

This is by no means a complete list, just a few of the most obvious tricks that AI can supercharge. We’ll be sure to add new ones as they appear in the wild, or any additional steps you can take to protect yourself.


Subject: Commission sends preliminary findings to Meta over its “Pay or Consent” model for breach of the Digital Markets Act
Source: Press corner | European Commission

Today, the Commission has informed Meta of its preliminary findings that its “pay or consent” advertising model fails to comply with the Digital Markets Act (DMA). In the Commission’s preliminary view, this binary choice forces users to consent to the combination of their personal data and fails to provide them with a less personalised but equivalent version of Meta’s social networks.

Preliminary findings on Meta’s “pay or consent” model

Online platforms often collect personal data across their own and third-party services to provide online advertising services. Due to their significant position in digital markets, gatekeepers have been able to impose terms of service on their large user base, allowing them to collect vast amounts of personal data. This has given them potential advantages compared to competitors who do not have access to such a vast amount of data, thereby raising high barriers to providing online advertising services and social network services.

Under Article 5(2) of the DMA, gatekeepers must seek users’ consent for combining their personal data between designated core platform services and other services, and if a user refuses such consent, they should have access to a less personalised but equivalent alternative. Gatekeepers cannot make the use of the service or certain functionalities conditional on users’ consent.

In response to regulatory changes in the EU, Meta introduced in November 2023 a binary “pay or consent” offer whereby EU users of Facebook and Instagram have to choose between: (i) subscribing for a monthly fee to an ads-free version of these social networks, or (ii) accessing free of charge a version of these social networks with personalised ads.

See also:

Subject: Microsoft tells more customers their emails have been stolen
Source: The Register

“We are continuing notifications to customers who corresponded with Microsoft corporate email accounts that were exfiltrated by the Midnight Blizzard threat actor, and we are providing the customers the email correspondence that was accessed by this actor,” a Microsoft spokesperson told Bloomberg. “This is increased detail for customers who have already been notified and also includes new notifications.”

Along with Russia, Microsoft was also compromised by state actors from China not long ago, and that issue similarly led to the theft of emails and other data belonging to senior US government officials.

Both incidents have led experts to call Microsoft a threat to US national security, and Microsoft President Brad Smith to issue a less-than-reassuring mea culpa to Congress. All the while, the US government has actually invested more in its Microsoft kit.

Bloomberg reported that emails sent to affected Microsoft customers include a link to a secure environment where they can review the messages Microsoft identified as having been compromised. But even that might not have been the most security-conscious way to notify folks: several recipients thought they were being phished.

[+ other incidents … ]


Subject: Mass General Brigham fires 2 employees after data breach
Source: Becker’s Health IT

Mass General Brigham has terminated two employees for allowing another individual to perform their duties, resulting in a breach of some patients’ protected health information. On April 4, Somerville, Mass.-based Mass General Brigham found out that some patients’ personal information might have been seen by an unauthorized person. This prompted the health system to launch an investigation into the matter.

On May 28, Mass General finished its investigation and found out that two employees may have let someone else do part of their jobs and see patient information between Feb. 26 and April 4, according to a HIPAA notice posted on the health system’s website.

Subject: Do you still need to pay for antivirus software in 2024?
Source: ZDNET

Landlines. Checkbooks. AM radio. Let’s add third-party antivirus software to the list of things you can stop using. Last month, the United States Department of Commerce announced a ban on Kaspersky software. As of September 29, ZDNET’s Lance Whitney reported, Kaspersky will no longer be able to provide antivirus signature updates and code updates for the banned products to customers in the United States.

When I read that news, I was as shocked as anyone. Did someone accidentally press a button that transported us back to 1999? People still pay for third-party antivirus software?

Apparently people do, but good luck finding reliable information on the market for antivirus software in 2024. Most of the data I was able to uncover came courtesy of the developers of said software, which is not the most reliable source.

Antivirus software by the numbers:

And as for Windows? Well, Microsoft Defender Antivirus, which is included with every Windows PC, routinely aces the tests from third-party labs that are set up to measure the effectiveness of security software. The leveling-up process started about seven years ago, and the Microsoft solution has regularly scored between 99% and 100% since then, making it every bit as effective as third-party rivals, free or paid.

And even that result understates the case.

On average, a modern antivirus app blocks 99.2% of the very few incoming threats that get past the other layers of protection. And even then, your own instincts (“Don’t click that link!”) are also effective. This is why the modern, fully patched consumer PC isn’t really a target of the criminal gangs responsible for modern malware.


Subject: Driving licenses and other official documents leaked by authentication service used by Uber, TikTok, X, and more
Source: Malwarebytes [a/v anti-malware company]

A company that helps to authenticate users for big brands had a set of administration credentials exposed online for over a year, potentially allowing access to user identity documents such as driving licenses. As more and more legislation emerges requiring websites and platforms—like gambling services, social networks, and porn sites—to verify their users’ age, demand for authentication companies offering that service rises.

You may never have heard of the Israel-based authentication company AU10TIX, but you will certainly recognize some of its major customers, like Uber, TikTok, X, Fiverr, Coinbase, LinkedIn, and Saxo Bank.

AU10TIX checks users’ identities via the upload of a photo of an official document.

A researcher found that AU10TIX had left the credentials exposed, providing 404 Media with screenshots and data to demonstrate the findings. The credentials led to a logging platform containing data about people who had uploaded documents to prove their identity.


Subject: Deepfake attacks will cost $40 billion by 2027
Source: VentureBeat

Now one of the fastest-growing forms of adversarial AI, deepfake-related losses are expected to soar from $12.3 billion in 2023 to $40 billion by 2027, growing at an astounding 32% compound annual growth rate. Deloitte sees deepfakes proliferating in the years ahead, with banking and financial services being a primary target. Deepfakes typify the cutting edge of adversarial AI attacks, achieving a 3,000% increase last year alone. It’s projected that deepfake incidents will go up by 50% to 60% in 2024, with 140,000-150,000 cases globally predicted this year.

The latest generation of generative AI apps, tools and platforms provides attackers with what they need to create deepfake videos, impersonated voices, and fraudulent documents quickly and at a very low cost. Pindrop’s 2024 Voice Intelligence and Security Report estimates that deepfake fraud aimed at contact centers is costing an estimated $5 billion annually. Their report underscores how severe a threat deepfake technology is to banking and financial services.

Bloomberg reported last year that “there is already an entire cottage industry on the dark web that sells scamming software from $20 to thousands of dollars.” A recent infographic based on Sumsub’s Identity Fraud Report 2023 provides a global view of the rapid growth of AI-powered fraud.

Enterprises aren’t prepared for deepfakes and adversarial AI. Adversarial AI creates new attack vectors no one sees coming and a more complex, nuanced threatscape that prioritizes identity-driven attacks.


Subject: YouTube Lets Users Flag AI Videos That Look & Sound Like Them

YouTube is fighting back against the relentless rise of deepfake videos – one of the more concerning current AI trends – by giving its users a clearer way to report such content. In updating its privacy guidelines, YouTube sets out the steps it requires people to follow in situations where “AI-generated or other synthetic content that looks or sounds like you” has been discovered.

The guidelines – which the company says are in place to protect users and address potential privacy concerns – apply to people the world over, regardless of the privacy laws in their own country.

What Content Can be Reported? The YouTube Privacy Guidelines set out the circumstances under which privacy violation notices can be raised, with the factors it will consider when evaluating complaints and defining who is able to raise the claim. It says that you can request a video to be removed from the platform if “someone has used AI to alter or create synthetic content that looks or sounds like you”. It then qualifies that criterion, saying that “the content should depict a realistic altered or synthetic version of your likeness”.

How to Report a Privacy Violation. If you think that an AI-generated video on YouTube violates these guidelines, then it can be reported by taking the following steps:

  • Head to the platform’s Privacy Complaint Process page
  • Click through the screening questions. You’ll be urged to consider whether the video is also a case of harassment, to contact the uploader directly first, and to review the platform’s Community Guidelines, and you’ll be warned against “abusing the privacy process”
  • Click the “Report altered or synthetic content” button
  • Complete and submit the complaint form, including your personal details, information about the offending video, and providing any relevant supporting evidence

Who Can Report a Privacy Violation? It’s worth noting that, except in specified circumstances, only ‘first-party’ claims can be made. You can’t raise a complaint on behalf of somebody else who has had their likeness exploited.

“To be considered uniquely identifiable, there must be enough information in the video that allows others to recognize you. Note that just because you can identify yourself within the video, it does not mean you’re uniquely identifiable to others. A first name without additional context or a fleeting image, for example, would not likely qualify as uniquely identifiable.” – YouTube Privacy Guidelines

Subject: What Is Skeleton Key? AI Jailbreak Technique Explained

ChatGPT and other AI models are at risk from a new jailbreak technique that could “produce ordinarily forbidden behaviors.”

Of the many criticisms levelled at artificial intelligence models, one of the most emotive is the idea that the technology’s power may be undermined and manipulated by bad actors, whether for malign uses or mere sport.

One way they do this is through “jailbreaks” — defined by our A-to-Z glossary of AI terms as “a form of hacking with the goal of bypassing the ethical safeguards of AI models.”

Now Microsoft has revealed a newly discovered jailbreak technique — called Skeleton Key — that has been found to be effective on some of the world’s most popular AI chatbots, including OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude.

Guardrails vs Jailbreaks…

What Is Skeleton Key? Mark Russinovich, Chief Technology Officer of Microsoft Azure, has written a blog post to explain what Skeleton Key is and what is being done to mitigate its harmful potential. He explains that Skeleton Key is a jailbreak attack that uses a multi-turn strategy to get the AI model to ignore its own guardrails. It’s the technique’s “full bypass abilities” that prompted the Skeleton Key analogy…

Microsoft’s testing used the Skeleton Key technique to gather otherwise unavailable information in a diverse range of categories, including explosives, bioweapons, political content, self-harm, racism, drugs, graphic sex, and violence.

Subject: US soldiers will get electronic warfare backpacks later this year
Source: Nextgov/FCW

The system is a portable combination of a jammer and scanner. The Army will buy hundreds of portable electronic warfare attack and scanner systems, the service said Monday—a type of tool used frequently by both Ukraine and Russia.

The service will spend nearly $100 million to equip, train, and field the system, dubbed the Terrestrial Layer System–Brigade Combat Team Manpack, according to an Army press statement released Monday. The Manpack is designed by Mastodon Design, a subsidiary of defense contractor CACI.

The system is “on track to be the first dismounted electromagnetic attack/electromagnetic support program of record for the Army,” said a spokesperson for the Army’s program executive office for electronic warfare and cyber.

Russia and Ukraine have both made electronic warfare a key tactic in the ongoing war. Units use signals collection to locate and target enemy units, as well as to identify when their own forces are under drone surveillance. Both sides also regularly launch electromagnetic attacks, including against precision weapons and the guidance systems of the thousands of drones that fill Ukrainian airspace.


Subject: AI trains on kids’ photos even when parents use strict privacy settings
Source: Ars Technica

Ars Technica: “Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators—even when platforms prohibit scraping and families use strict privacy settings. Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids that were linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from[…]

Abstracted from beSpacific
Copyright © 2024 beSpacific, All rights reserved.

Subject: US car dealerships are recovering from massive cyberattack: 3 things you should know
Source: ZDNET

A massive cyberattack caused chaos for US car dealerships, and it’s still affecting both dealers and customers. According to BleepingComputer, a notorious extortion organization called the BlackSuit ransomware gang carried out a cyberattack on CDK Global on June 19. BlackSuit has conducted a number of high-profile attacks in the past several years, mostly against health care companies. As of July 3, CDK says that “substantially all” dealers are back online, but the impacts are ongoing…

How dealership customers are impacted. For dealership customers, this attack and subsequent disruption of business means several things…


Subject: US government revoked eight Huawei export licenses in 2024
Source: Android Headlines

In May, the US Department of Commerce said it had revoked “certain licenses” for exports to Huawei. It turns out the Joe Biden-led US government has canceled as many as eight licenses so far in 2024. This is the Biden administration’s latest effort to cripple the Chinese tech titan.

The US government revokes more licenses to contain Huawei’s growth

Huawei is fighting a battle like no other major tech company. The Chinese firm that once threatened to overtake Samsung as the world’s largest smartphone company has been reduced to a shadow of itself by US sanctions. This happened after the US government placed it on the Entity List in 2019. The Donald Trump administration labeled Huawei a national security threat over its potential ties with the Chinese government.

This effectively blocked the firm’s access to the latest smartphone technologies made in or originating from the US. It couldn’t source advanced chips, Google services and apps, and other components and equipment to make powerful new 5G phones. The US government allowed some American companies to do business with Huawei by obtaining special licenses. However, those licenses still came with several restrictions.

The US government seems to be trying to block Huawei’s technological advancements in chipmaking. Huawei recently launched powerful new phones with chips made by Chinese firm SMIC, which helped its smartphone sales grow 64% year on year in the first six weeks of 2024. It remains to be seen whether the new moves derail that progress. Sooner or later, though, Huawei may come out of this mess.

Subject: How to Stop ChatGPT Training On Your Data
Source: Want to use ChatGPT, but not comfortable with it collecting and training on your information? We show you how to stop it.

There’s no doubt about it, ChatGPT is a useful AI tool, whether you’re looking to create a snappy poem for someone’s birthday card or totally reinvent your workout routine with an AI-curated plan. However, it’s also a hungry beast, and will merrily gobble up any data you feed it. The reason that ChatGPT (and all AI chatbots) craves data is that the data is used to train the platform to provide better answers in the future. While it’s good that this will lead to continual improvement of the system, you might feel slightly squeamish about your information being used in this way.

Here’s how you can tell ChatGPT not to train on your data, as well as some ways to use the tool anonymously.

Subject: Tips to Make Facebook and Instagram Fun Again
Source: Consumer Reports

Social media sites like Facebook and Instagram used to be low-key places to share photos, news, and more. But in recent years they’ve become trickier for many people—venues where scams, annoying ads, and unknown friend requests live alongside real friends’ posts. Everyone seems to have a bad experience to share.

For instance, Patti Regehr, a retired teacher in northern California, heard from puzzled Facebook friends that they were getting new friend requests from her. The reason? Someone had stolen Regehr’s photo, created a bogus FB page, and started impersonating her.

In this article:
More on Social Media
Posted in: AI, Communication Skills, Cybersecurity, Economy, Email Security, Financial System, Healthcare, Privacy, Social Media, Travel