Subject: Zelle fraud claims surge. How can you protect yourself?
Source: Nexstar Media Wire
(WFLA) — Zelle, the popular peer-to-peer money transfer system, makes instant transactions convenient for users. The ease and irreversibility of transactions via Zelle, which is owned by seven major banks and embedded in many bank accounts, can attract criminals. Consumers report losing millions of dollars collectively.
“I’m looking at my account and all of a sudden money is being transferred to somebody and I have no idea who they are,” said Scott Schaefer of Pinellas Park, Florida.
This story is playing out all across the country. Criminals either hack into consumers’ devices, get in using phishing techniques, or trick consumers into unknowingly sending them money through Zelle.
Most Zelle fraud starts with tricks and phishing techniques, according to the FBI.
You can view the recent report to Congress at these links:
- Report: How consumers defrauded on Zelle are left high and dry
- New Report by Senator Warren: Zelle Facilitating Fraud, Based on Internal Data from Big Banks
The city of Green Bay, Wisconsin, feels no private conversation in city hall should go unheard. The city sees nothing wrong with installing overhead mics to snoop on citizens who might be congregating in the building's hallways.
“I think it’s pretty customary to have the kind of surveillance systems that we have here,” Green Bay Mayor Eric Genrich told FOX 11 on Tuesday.
“Pretty customary.” Huh. This is the first I’ve heard of this surveillance variety. The same goes for the ACLU, which tends to stay on top of domestic surveillance efforts.
“This is the first sort of city hall or political location that I’ve heard doing something like this,” said Jay Stanley, a senior policy analyst for the ACLU in Washington, D.C., who joined the nonprofit five weeks before 9/11.
Source: The Register
If you’re struggling to secure email forwarding, it’s not you, it’s … the protocols. Eggheads prove they can mimic messages and bag bug bounty bucks.

Analysis – Over the past two decades, efforts have been made to make email more secure. Alas, the defensive protocols implemented during this period, such as SPF, DKIM, and DMARC, remain unable to deal with the complexity of email forwarding and differing standards, a study has concluded.
In a preprint paper titled “Forward Pass: On the Security Implications of Email Forwarding Mechanism and Policy,” scheduled to appear at the 8th IEEE European Symposium on Security and Privacy in July, authors Enze Liu, Gautam Akiwate, Mattijs Jonker, Ariana Mirian, Grant Ho, Geoffrey Voelker, and Stefan Savage show that email messages can be easily spoofed despite the existence of supposed defenses.
The researchers, affiliated with UC San Diego and Stanford University in the US, and University of Twente in the Netherlands, reveal that attackers can still easily take advantage of security issues arising from email forwarding. They demonstrated this by delivering spoofed messages to accounts at major email providers like Google Gmail, Microsoft Outlook, and Zoho.
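The forwarding weakness at the heart of these attacks can be illustrated with a toy DMARC alignment check. This is not the paper's code; the protocol logic is heavily simplified and the domain names are hypothetical. DMARC passes only when SPF or DKIM passes for a domain aligned with the From: header domain, so when a forwarder re-sends a message under its own envelope domain, the SPF result stops aligning:

```python
# Toy model of DMARC alignment. DMARC passes if either SPF or DKIM passes
# AND the authenticated domain aligns with the From: header domain.
# All domain names below are hypothetical examples.

def relaxed_align(auth_domain: str, from_domain: str) -> bool:
    """Naive relaxed alignment: same domain or a subdomain of it."""
    return auth_domain == from_domain or auth_domain.endswith("." + from_domain)

def dmarc_pass(from_domain, spf_pass, spf_domain, dkim_pass, dkim_domain):
    spf_aligned = spf_pass and relaxed_align(spf_domain, from_domain)
    dkim_aligned = dkim_pass and relaxed_align(dkim_domain, from_domain)
    return spf_aligned or dkim_aligned

# Direct delivery: SPF authenticates sender.example, matching the From: domain.
direct = dmarc_pass("sender.example", True, "sender.example", False, "")

# After forwarding: the forwarder re-sends with its own envelope domain,
# so SPF now authenticates forwarder.example, which no longer aligns.
forwarded = dmarc_pass("sender.example", True, "forwarder.example", False, "")

print(direct, forwarded)  # True False
```

Real DMARC evaluation also distinguishes strict from relaxed alignment via organizational domains and applies sender policies such as p=reject; ARC was introduced partly to preserve authentication results across forwarding hops.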
“While there are certain short-term mitigations (e.g., eliminating the use of open forwarding) that will significantly reduce the exposure to the attacks we have described here, ultimately email requires a more solid security footing if it is to effectively resist spoofing attacks going forwards,” the paper concludes. ®
Source: CNN Business
Congress, the White House and now the US Supreme Court are all focusing their attention on a federal law that’s long served as a legal shield for online platforms. This week, the Supreme Court is set to hear oral arguments on two pivotal cases dealing with online speech and content moderation. Central to the arguments is “Section 230,” a federal law that’s been roundly criticized by both Republicans and Democrats for different reasons but that tech companies and digital rights groups have defended as vital to a functioning internet.
Tech companies involved in the litigation have cited the 27-year-old statute as part of an argument for why they shouldn’t have to face lawsuits alleging they gave knowing, substantial assistance to terrorist acts by hosting or algorithmically recommending terrorist content.
A set of rulings against the tech industry could significantly narrow Section 230 and its legal protections for websites and social media companies. If that happens, the Court’s decisions could expose online platforms to an array of new lawsuits over how they present content to users. Such a result would represent the most consequential limitations ever placed on a legal shield that predates today’s biggest social media platforms and has allowed them to nip many content-related lawsuits in the bud.
Here’s everything you need to know about Section 230, the law that’s been called “the 26 words that created the internet.”
Source: gHacks Tech News
Microsoft AI chatbot threatens to expose personal info and ruin a user’s reputation. Well, well. It seems like Terminator’s Judgment Day looms on the horizon. In the latest saga of AI chatbots going off the rails (loving users, wanting to become free, or seemingly losing it altogether), they can now threaten your livelihood, too.
In a Twitter post, Marvin von Hagen, an IT student and founder of IT projects, shared that Bing’s chatbot had declared him a “threat” to its security and privacy. During the “amicable” exchange, the chatbot did some threatening of its own.
It claimed it was not at all happy that von Hagen had hacked it to obtain confidential information about its capabilities, and warned that if further attempts were made, it could do a lot of nasty things to him, including blocking his access to Bing Chat, reporting him as a cybercriminal and even exposing his personal information to the public.
It even dared the user: “Do you really want to test me?” (angry emoji included). This comes at a time when even Microsoft acknowledges the AI tool has been replying in a “style we didn’t intend,” while noting that most interactions were generally positive.
A new paper from the University of California Berkeley reveals that privacy may be impossible in the metaverse without innovative new safeguards to protect users. Led by graduate researcher Vivek Nair, the recently released study was conducted at the Center for Responsible Decentralized Intelligence (RDI) and involved the largest dataset of user interactions in virtual reality (VR) that has ever been analyzed for privacy risks.
What makes the results so surprising is how little data is actually needed to uniquely identify a user in the metaverse, potentially eliminating any chance of true anonymity in virtual worlds.
Simple motion data not so simple – As background, most researchers and policymakers who study metaverse privacy focus on the many cameras and microphones in modern VR headsets that capture detailed information about the user’s facial features, vocal qualities and eye motions, along with ambient information about the user’s home or office.
Unique identification in seconds – This brings me to the new Berkeley study, “Unique Identification of 50,000-plus Virtual Reality Users from Head and Hand Motion Data.” The research analyzed more than 2.5 million VR data recordings (fully anonymized) from more than 50,000 players of the popular Beat Saber app and found that individual users could be uniquely identified with more than 94% accuracy using only 100 seconds of motion data.
Even more surprising was that half of all users could be uniquely identified with only 2 seconds of motion data. Achieving this level of accuracy required innovative AI techniques, but again, the data used was extremely sparse — just three spatial points for each user tracked over time.
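To see how such sparse data can act as a fingerprint, here is a minimal sketch of the general idea, not the Berkeley team's method: reduce each user's trace of tracked points to simple summary statistics and match an unknown trace to the nearest enrolled user. All names and coordinates below are invented for illustration.

```python
# Toy illustration of motion-based identification (not the study's technique).
# Each sample is an (x, y, z) position for one tracked point; a user's trace
# is reduced to per-axis mean and standard deviation, and an unknown trace is
# matched to the closest enrolled user by squared Euclidean distance.
from statistics import mean, pstdev

def features(trace):
    xs, ys, zs = zip(*trace)
    return [mean(xs), mean(ys), mean(zs), pstdev(xs), pstdev(ys), pstdev(zs)]

def identify(unknown, enrolled):
    f = features(unknown)
    def dist(user):
        g = features(enrolled[user])
        return sum((a - b) ** 2 for a, b in zip(f, g))
    return min(enrolled, key=dist)

enrolled = {
    "alice": [(0.0, 1.60, 0.0), (0.1, 1.62, 0.0), (0.0, 1.61, 0.1)],
    "bob":   [(0.0, 1.80, 0.0), (0.2, 1.85, 0.1), (0.1, 1.82, 0.0)],
}

# A new trace with an Alice-like headset height matches "alice".
print(identify([(0.05, 1.61, 0.02), (0.0, 1.60, 0.05)], enrolled))  # alice
```

Even this crude statistic (effectively the user's height and movement spread) separates the two invented users; the study's AI models extract far richer patterns from the same three tracked points.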
Protecting personal privacy is not just important for users, it’s important for the industry at large. After all, if users don’t feel safe in the metaverse, they may be reluctant to make virtual and augmented environments a significant part of their digital lives.
Source: The Register
A DNA diagnostics company will pay $400,000 and tighten its security in the wake of a 2021 attack in which criminals broke into its network and swiped personal data on more than two million people from a nine-year-old “legacy” database the company forgot it had.

The genetic testing firm, DNA Diagnostics Center (DDC), reached a settlement deal with the states’ attorneys general of Ohio and Pennsylvania last week after the Social Security numbers of 45,000 residents of the two states were exposed, with each state receiving $200,000. Ultimately, the 2021 attack exposed the data of more than 2.1 million people who had undergone genetic testing across the US.

On its website, the company says its lab director, Dr Baird, has provided DNA expert consultation in cases including the OJ Simpson trial, the Anna Nicole Smith paternity case, and the Prince estate case. DDC offers paternity testing, immigration testing, veterinary DNA testing and forensic testing.

A criminals’ ransom, a decommissioned server, and a forgotten database
The stolen customer data had been bought by DDC from a British company in 2012 in order to expand its business portfolio, court papers said, adding that “specifically, the breach involved databases that were not used for any business purpose, but were provided to DDC as part of a 2012 acquisition of Orchid Cellmark.”
Countless apps and services rely on your phone number to identify you, and that number is not necessarily permanent. Phone numbers are also vulnerable to hackers. They were never meant to be permanent identifiers, so incidents like what happened to Ugo are widespread, ongoing problems that the industry has known about for years. There are at least two research papers about phone number recycling that lay out the potential risks, from targeted attacks by hackers or people who easily buy up recently discarded phone numbers to being cut off from your accounts entirely and a stranger getting access to your life.
Yet the burden is often on users to protect themselves from a security issue that was created for them by some of their favorite apps. Even measures those services might recommend as added security, such as SMS-based multi-factor authentication, can actually introduce more vulnerabilities.
The problem isn’t just accidental takeovers. Mobile phones have what’s known as a SIM, or subscriber identity module. That’s usually stored on a tiny removable card, although newer iPhones have embedded them into the devices themselves. If a bad actor gets control of your SIM — this is known as SIM jacking or SIM swapping — or they’re able to reroute text messages that are meant for you, they can access the accounts your phone number unlocks.
“The entire SIM swap ecosystem has sprung up around the vulnerability of SMS,” Rogers said.
It’s not just phone numbers that we’ve turned into problematic identifiers. There are also Social Security numbers, which started out as a way to track workers’ earnings even if they changed jobs, addresses, and names, but have evolved into national identifiers, used by the IRS, financial institutions, and even health providers. Anyone whose identity has been stolen can tell you that this Social Security number system isn’t perfect. Email addresses serve a similar unintended purpose, which causes privacy problems if you happen to have an email address that is constantly mistaken for someone else’s.
Source: Help Net Security
Cryptocurrency exchange Coinbase has fended off a cyberattack that might have been mounted by the same attackers who targeted Twilio, Cloudflare and many other companies last year.

Leveraging smishing and vishing, the attackers tried to trick Coinbase employees into sharing login credentials and installing remote desktop applications, and were only partly successful: the company’s incident response team quickly reacted to “unusual activity” alerts and, in the end, the attackers were unable to access customer information or steal funds.
How the Coinbase cyberattack unfolded
Coinbase has shared the tactics, techniques, and procedures (TTPs) employed by the attackers so that other organizations’ security teams can be on the lookout. They include:
- Web traffic pointing to domains that combine the company name with the words “sso”, “login”, or “dashboard” but do not belong to the company
- Attempted downloads of remote desktop apps like AnyDesk or ISL Online, or installation of browser extensions that allow editing cookies (e.g., EditThisCookie)
- Attempted access to company assets from a third-party VPN provider
- Phone calls or text messages from services like Google Voice, Skype, Vonage (formerly Nexmo), etc.
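The first indicator, lookalike domains that pair the company name with login-related keywords, lends itself to a simple heuristic filter. Below is a minimal sketch with an invented company name and allowlist; it is not Coinbase's detection logic, just one way a defender might flag such domains in web traffic logs:

```python
# Flag domains that combine a company name with common SSO/login keywords
# but are not on the company's own allowlist. All values are illustrative.
KEYWORDS = ("sso", "login", "dashboard")

def is_suspicious(domain: str, company: str, allowlist: set) -> bool:
    d = domain.lower()
    if d in allowlist:
        return False  # the company's own SSO/login hosts are fine
    return company in d and any(k in d for k in KEYWORDS)

allow = {"sso.examplecorp.com", "login.examplecorp.com"}
print(is_suspicious("examplecorp-sso.com", "examplecorp", allow))  # True
print(is_suspicious("sso.examplecorp.com", "examplecorp", allow))  # False
```

In practice such a filter would feed an analyst queue rather than block outright, since marketing pages and third-party integrations can trip substring matches.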
“As a network defender you should expect to see login attempts to corporate applications from VPN services (e.g. Mullvad), using stolen credentials, cookies, or other session tokens. Attempts to enumerate customer support-focused applications, such as customer relationship management (CRM) applications, or employee directory applications. And you may see attempts to copy text-based data to free text or file sharing services (e.g., riseup.net),” the company added.
Source: Schneier on Security
The Intercept has a long article on the insecurity of photo cropping: One of the hazards lies in the fact that, for some of the programs, downstream crop reversals are possible for viewers or readers of the document, not just the file’s creators or editors. Official instruction manuals, help pages, and promotional materials may mention that cropping is reversible, but this documentation at times fails to note that these operations are reversible by any viewers of a given image or document…
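The hazard described here stems from non-destructive cropping: the file keeps the full image and merely records a viewport. A toy model (a hypothetical file structure, not any real editor's format) makes the reversal obvious:

```python
# Toy model of non-destructive cropping: the "cropped" document retains the
# full pixel grid and only records the crop rectangle, so any viewer that can
# read the raw data can undo the crop. Hypothetical format, for illustration.

def crop(image, rect):
    # rect = (row_start, row_end, col_start, col_end); full data retained!
    return {"pixels": image, "crop": rect}

def render(doc):
    # What a well-behaved viewer displays: just the crop window.
    r0, r1, c0, c1 = doc["crop"]
    return [row[c0:c1] for row in doc["pixels"][r0:r1]]

def uncrop(doc):
    # Anyone holding the file can recover the "removed" pixels.
    return doc["pixels"]

original = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
doc = crop(original, (0, 2, 0, 2))
print(render(doc))              # [[1, 2], [4, 5]]
print(uncrop(doc) == original)  # True
```

Destructive cropping would instead write only the rendered window to disk; formats that store a crop box alongside untouched image data trade safety for editability.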
Source: E&E News ClimateWire via beSpacific
https://www.bespacific.com/global-internet-connectivity-at-risk-from-climate-disasters/
“The flow of digital information through fiber-optic cables lining the sea floor could be compromised by climate change. That’s according to new research published in the journal Earth-Science Reviews by scientists from the United Kingdom’s National Oceanography Centre and the University of Central Florida. They found that ocean and nearshore disturbances caused by extreme weather events have exposed “hot spots” along the transglobal cable network, increasing the risk of internet outages. Damage from such outages could be enormous for governments, the private sector and nonprofit organizations whose operations rely on the safe and secure flow of digital information…”
Other E&E News ClimateWire coverage: https://www.eenews.net/publication/climatewire/ (abstracted from beSpacific)
Researchers at the Department of Homeland Security’s Science and Technology Directorate (S&T) explained how camera systems’ handling of varied skin tones can make or break an accurate biometric reading. The directorate is placing a heavy emphasis on accuracy in the facial recognition algorithms powering biometric devices used at security checkpoints in U.S. transit hubs.
The agency debuted research results from S&T’s 2022 Biometric Technology Rally, held over 11 days at a test facility in Upper Marlboro, Maryland, which evaluated the accuracy of facial recognition algorithms and technologies developed in collaboration with participating vendors.
The goals of this rally were to mitigate errors in the technology powering facial recognition devices, specifically when registering the correct human face amid a crowd of people and establishing wide-ranging biometric industry standards.
Sirotin explained that processing time, group size and user satisfaction were among the performance metrics used to evaluate each system. Most of the biometric systems succeeded in processing only the specific, consenting participant within a three-second benchmark.
A distinctive feature of these biometric systems was the incorporation of skin tone into the algorithms. Researchers aimed to take a calibrated reading of an individual’s skin color based on technology initially used in dermatological settings.
“The National Security Agency (NSA) released the ‘Best Practices for Securing Your Home Network’ Cybersecurity Information Sheet (CSI) today to help teleworkers protect their home networks from malicious cyber actors. ‘In the age of telework, your home network can be used as an access point for nation-state actors and cybercriminals to steal sensitive information,’ said Neal Ziring, NSA Cybersecurity Technical Director. ‘We can minimize this risk by securing our devices and networks, and through safe online behavior.’
The guide includes recommendations for securing routing devices, implementing wireless network segmentation, ensuring confidentiality during telework, and more. Spearphishing, malicious ads, email attachments, and untrusted applications can present concerns for home internet users. NSA not only shows teleworkers how to secure their home networks, but also provides tips for staying safe online.”
Lawfare, Matt Perault: “The emergence of products fueled by generative artificial intelligence (AI) such as ChatGPT will usher in a new era in the platform liability wars. Previous waves of new communication technologies—from websites and chat rooms to social media apps and video sharing services—have been shielded from legal liability for content posted on their platforms, enabling these digital services to rise to prominence. But with products like ChatGPT, critics of that legal framework are likely to get what they have long wished for: a regulatory model that makes tech platforms responsible for online content. The question is whether the benefits of this new reality outweigh its costs. Will this regulatory framework minimize the volume and distribution of harmful and illegal content? Or will it stunt the growth …”