Pete Recommends – Weekly highlights on cyber security issues, March 31, 2024

Subject: Recent ‘MFA Bombing’ Attacks Targeting Apple Users
Source: Krebs on Security

Several Apple customers recently reported being targeted in elaborate phishing attacks that involve what appears to be a bug in Apple’s password reset feature. In this scenario, a target’s Apple devices are forced to display dozens of system-level prompts that prevent the devices from being used until the recipient responds “Allow” or “Don’t Allow” to each prompt. Assuming the user manages not to fat-finger the wrong button on the umpteenth password reset request, the scammers will then call the victim while spoofing Apple support in the caller ID, saying the user’s account is under attack and that Apple support needs to “verify” a one-time code.

“All of my devices started blowing up, my watch, laptop and phone,” Patel told KrebsOnSecurity. “It was like this system notification from Apple to approve [a reset of the account password], but I couldn’t do anything else with my phone. I had to go through and decline like 100-plus notifications.”

But the attackers in this campaign had an ace up their sleeves: Patel said after denying all of the password reset prompts from Apple, he received a call on his iPhone that said it was from Apple Support (the number displayed was 1-800-275-2273, Apple’s real customer support line).

Patel said the goal of the voice phishers is to trigger an Apple ID reset code to be sent to the user’s device, which is a text message that includes a one-time password. If the user supplies that one-time code, the attackers can then reset the password on the account and lock the user out. They can also then remotely wipe all of the user’s Apple devices.

“I said I would call them back and hung up,” Chris said, demonstrating the proper response to such unbidden solicitations. “When I called back to the real Apple, they couldn’t say whether anyone had been in a support call with me just then. They just said Apple states very clearly that it will never initiate outbound calls to customers — unless the customer requests to be contacted.”

Subject: Serious Security Flaws Found in Video Doorbells
Source: Consumer Reports

Update: This article was updated on March 15, 2024, to reflect information provided by Eken after publication regarding FCC IDs and other topics. The company had not responded to queries before publication. New information appears below in italics. The article was originally published on Feb. 29, 2024.

Blair was able to capture those images because he and fellow test engineer David Della Rocca had found serious security flaws in this doorbell, along with others sold under different brands but apparently made by the same manufacturer. The doorbells also lack a visible ID issued by the Federal Communications Commission (FCC) that’s required by the agency’s regulations, making them illegal to distribute in the U.S. (The doorbell manufacturer, Eken, did not respond to queries before publication, but it contacted CR after publication and stated that new packaging with the ID would be available in about a month.)

Danger at the Door – Blair and Della Rocca discovered the problems while evaluating a number of video doorbells for our regular ratings program. They were sold under two brand names, Eken and Tuck.

The security issues are serious. People who face threats from a stalker or an estranged abusive partner are sometimes spied on through their phones, online platforms, and connected smart home devices. The vulnerabilities CR found could allow a dangerous person to take control of the video doorbell on their target’s home, watching when they and their family members come and go.

“The fact that they aren’t using encryption is egregious,” says Beau Woods, a digital security researcher with the cybersecurity advocacy group I Am The Cavalry. “It indicates there may be a whole host of bad practices.”

Anyone who can physically access one of the doorbells can take over the device—no tools or fancy hacking skills needed. Let’s imagine that an abusive ex-boyfriend wants to watch the comings and goings of his former partner and her children. He’d simply need to create an account on the Aiwit smartphone app, then go to his target’s home and hold down the doorbell button to put it into pairing mode. He could then connect the doorbell to a WiFi hotspot and take control of the device.

Subject: Survey: Few states have ‘established’ privacy program
Source: Route Fifty

As states race to protect Americans’ data, the number of chief privacy officers in state government has increased. Still, a majority reported in a recent survey that they are still building their programs. The U.S. currently has no national data privacy law; in its absence, an executive order and pending legislation in Congress would punish data brokers for transferring Americans’ personal information to foreign rivals like China.

Congress’ inability to pass national data privacy legislation, even as it has held numerous hearings on the topic, has left states to fill in the gaps. And to date, 22 have done just that, with 15 enacting comprehensive protections for their residents and the rest addressing targeted issues such as protecting biometric identifiers and health data.

A new survey from the National Association of State Chief Information Officers shows just how far along many of these efforts are and how much work is still ahead as states race to protect Americans’ personal information and stay competitive with other countries that have comprehensive national data privacy regulations in place.

To start, the survey identified how many states have created a chief privacy officer role, or tasked someone with protecting privacy.

Only four states specified what privacy framework they use: Three said they follow the National Institute of Standards and Technology’s Privacy Framework, while the remaining one said they follow privacy-by-design principles.

Those that have implemented a framework say they have established, trained and certified a point of contact at every agency; conducted privacy impact assessments; developed rules, policies, statements and guidance; conducted broader training; and implemented data-sharing programs, mapping and governance.


Subject: Use Consumer Reports’ Security Planner to Stay Safer Online
Source: Consumer Reports – a free tool that gives you a customized plan to stay safer in the digital world

In October 2020, Consumer Reports launched a free tool called CR Security Planner that can help. We’ve since released an upgrade to make it even easier for you to use Security Planner to learn how to safeguard your online accounts, devices, and identity.

To use Security Planner, simply go to the Security Planner website and answer a few simple questions about the types of devices you own and your biggest security concerns. Then, you’ll receive an individualized action plan.

Security Planner was originally developed and maintained by the Citizen Lab, an interdisciplinary research group based at the Munk School of Global Affairs & Public Policy at the University of Toronto. The first version was released in December 2017, with the support of independent security experts and organizations, including Consumer Reports and the Cyberlaw Clinic at Harvard Law School’s Berkman Klein Center for Internet & Society.

“With Security Planner, our aim from the beginning was to create an accessible, expertly reviewed, and regularly updated guide to help users take immediate steps to improve their digital hygiene,” says Ron Deibert, director of the Citizen Lab. “With Security Planner in the hands of Consumer Reports, we can rest assured that it reaches the widest possible audience and is shepherded by an independent organization with expertise and integrity.”

You don’t need to be a Consumer Reports member to use Security Planner, nor do you need to input any data that identifies you. Security Planner builds on Consumer Reports’ core mission of working with consumers to create fairness, safety, and transparency in the marketplace.

Subject: 3 Ways AI Could Transform Your Insurance Policy
Source: NerdWallet

Your insurance company may know more about you than you realize. The technology that saturates today’s world — smart-home devices, drone images, fitness trackers, social media posts and telematics programs that monitor your driving habits — can help insurers piece together a detailed picture of your behavior. Your permission isn’t always required. Many facts about your house, car and neighborhood are public records. Data brokers also gather and sell details about your activity, like which stores you visit, what you click online and the whereabouts of your mobile phone.

For a human, all that data is too much to process. But the ability of artificial intelligence to interpret data could upend the process of buying an insurance policy and filing a claim. As insurers face questions about fairness and privacy, some people may find it harder to get coverage. Others will benefit from cheaper rates, quicker applications and easier claims. [Unlike credit report laws and procedures, how do you correct wrong “AI” information? And how do you know that it is wrong? /pmw1]

Subject: These Digital Kiosks Snatch Your Phone’s Data When You Walk By
Source: Gizmodo

Digital kiosks from Soofa seem harmless, giving you bits of information alongside some ads. However, these kiosks popping up throughout the United States collect your phone’s information and location data whenever you walk near them, then sell that data to local governments and advertisers, NBC Boston first reported Monday.

“At Soofa, we developed the first pedestrian impressions sensor that measures accurate foot traffic in real-time,” says a page on the company’s website. “Soofa advertisers can check their analytics dashboard anytime to see how their campaigns are tracking towards impressions goals.”

Local civil rights groups are trying to stop the sale of location data. The Massachusetts American Civil Liberties Union warns these datasets can often end up in the wrong hands, and could help identify when people are nearing sensitive locations, such as abortion clinics or protests.

In an email to Gizmodo, a Soofa spokesperson said the company does not share data with any third parties and only offers the analytics dashboard to the organization that bought the kiosk. The company also claims that your MAC address is anonymized by the time it reaches advertisers and local governments.
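The article does not describe how Soofa actually anonymizes MAC addresses. One common approach in foot-traffic analytics is to replace each MAC with a salted cryptographic hash, so the same device maps to a stable pseudonym without exposing the raw address. The sketch below is purely illustrative of that general technique, not Soofa’s implementation; the function name and salt handling are assumptions.

```python
import hashlib
import secrets

def anonymize_mac(mac: str, salt: bytes) -> str:
    """Return a salted SHA-256 digest of a MAC address.

    Illustrative only: salted hashing is one common way analytics
    vendors pseudonymize device identifiers; the article does not
    say what method Soofa uses.
    """
    # Normalize formatting so "AA-BB-CC-..." and "aa:bb:cc:..."
    # hash to the same pseudonym.
    normalized = mac.lower().replace("-", ":")
    return hashlib.sha256(salt + normalized.encode("ascii")).hexdigest()

# A per-deployment random salt. Without a secret salt, the MAC
# address space is small enough that plain hashes can be reversed
# by brute force -- a weakness privacy researchers often point out.
salt = secrets.token_bytes(16)
token = anonymize_mac("AA-BB-CC-DD-EE-FF", salt)
print(token[:16])  # truncated pseudonym for display
```

Note that even salted hashing is only pseudonymization: anyone holding the salt can still link repeat visits by the same device, which is exactly the linkability that civil liberties groups worry about.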

Posted in: AI, Cybercrime, Cyberlaw, Cybersecurity, Privacy