Subject: New York to crack down on hospital cybersecurity
Source: Becker’s Healthcare – Health IT
New York is planning to tighten regulation of hospital cybersecurity practices, according to draft rules reviewed by The Wall Street Journal.
The regulations will require hospitals to develop incident response plans, adopt secure software design practices for in-house applications and install security technologies such as multifactor authentication, among other preventive measures.
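Multifactor authentication of the kind the draft rules contemplate typically pairs a password with a one-time code from an authenticator app. As a rough illustration only (nothing from the draft rules themselves), a time-based one-time password (TOTP, RFC 6238) can be generated with just the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 flavor)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 -> "287082"
SECRET = base64.b32encode(b"12345678901234567890").decode()
```

Because both sides derive the code from a shared secret and the current time, a stolen password alone is not enough to log in.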
Copyright © 2023 Becker’s Healthcare.
Ads for fake versions of Google’s generative AI tool, Bard, are showing up on Facebook to steal the social media accounts of U.S. small businesses, according to a lawsuit Google filed Monday. The phony Facebook ads ask users to download Bard, but the AI doesn’t need to be downloaded – it’s an entirely web-based product. Users who took the bait instead downloaded malware that stole their social media credentials and compromised their accounts. Google’s lawsuit aims to disable any current domains related to the scheme and bar the alleged fraudsters, located in Vietnam and India, from setting up any more. It is considered the first lawsuit to protect users of a major tech company’s flagship AI product, Google’s general counsel Halimah DeLaine Prado told the Wall Street Journal Monday.
In a separate lawsuit also filed on Monday, Google sued a group of bad actors who abused copyright law to wrongly remove over 100,000 businesses’ websites, costing them millions of dollars and thousands of hours in lost employee time.
Source: heise online
Abstracted from beSpacific
Copyright © 2023 beSpacific, All rights reserved.
Zelle, a payments app owned by seven of America’s largest banks, has begun refunding victims of imposter fraud, which cost Americans $2.6 billion in losses across the industry last year, according to an emailed statement from the company. Early Warning Services, Zelle’s network operator, says it will process refunds for scams dating back to June, a notable shift in position by American banks.
Zelle has always reimbursed certain victims, such as when a hacker breaks into a user’s account and steals money directly. The new reimbursement policy announced Monday, however, covers a different category of fraud: imposter scams, in which users are duped into sending money to a fraudulent Zelle account posing as someone else. America’s largest banks have long tried to escape responsibility on this front.
According to Unciphered, a cryptocurrency recovery company, an untold number of crypto wallets were designed with baked-in flaws that leave a backdoor in the code that hackers could easily break open. Encrypted software systems like crypto wallets rely on random number generators, but the company found that a significant number of wallets were built on open-source software whose output isn’t nearly random enough. These vulnerable wallets draw their keys from a space of only a few thousand possibilities rather than the astronomically large space a sound random source provides, making them susceptible to brute-force attacks.
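Unciphered hasn’t published the affected code, but the general class of flaw is easy to sketch. In this deliberately simplified, hypothetical example (not the wallets’ actual code), a wallet seeds a non-cryptographic PRNG with a value from a tiny range, so an attacker can recover the key by enumerating every possible seed:

```python
import hashlib
import random
import secrets

def weak_wallet_key(seed):
    # Hypothetical flawed wallet: seeds Python's non-cryptographic
    # Mersenne Twister PRNG with a value drawn from a tiny range
    rng = random.Random(seed)
    return hashlib.sha256(rng.randbytes(32)).hexdigest()

# The attacker only has to try a few thousand candidate seeds
target = weak_wallet_key(1337)
recovered_seed = next(s for s in range(10_000) if weak_wallet_key(s) == target)

# A sound wallet draws key material from the OS CSPRNG instead,
# where enumeration of the key space is computationally infeasible
strong_key = hashlib.sha256(secrets.token_bytes(32)).hexdigest()
```

The brute-force loop above finishes in a fraction of a second; against a key derived from `secrets.token_bytes(32)`, the same attack would take longer than the age of the universe.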
Social Links, a surveillance company that had thousands of accounts banned after Meta accused it of mass-scraping Facebook and Instagram, is now using ChatGPT to make sense of data its software grabs from social media. Most people use ChatGPT to answer simple queries, draft emails, or produce useful (and useless) code. But spyware companies are now exploring how to use it and other emerging AI tools to surveil people on social media.
In a presentation at the Milipol homeland security conference in Paris on Tuesday, online surveillance company Social Links demonstrated ChatGPT performing “sentiment analysis,” in which the AI assesses the mood of social media users or highlights commonly discussed topics among a group. That can then help predict whether online activity will spill over into physical violence and require law enforcement action.
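The article doesn’t detail Social Links’ pipeline, but conceptually, sentiment analysis reduces each post to a polarity score and then aggregates the results into a group-level mood. A deliberately toy, lexicon-based sketch of that idea (hypothetical word lists standing in for the model’s judgment, not Social Links’ actual method):

```python
from collections import Counter

# Hypothetical hand-made lexicons; real systems use an LLM or a
# trained classifier rather than fixed word lists
POSITIVE = {"great", "love", "calm", "peaceful"}
NEGATIVE = {"hate", "angry", "riot", "attack"}

def sentiment(post):
    """Label one post by counting positive vs. negative lexicon hits."""
    words = post.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def group_mood(posts):
    # Aggregate per-post labels into the group's dominant mood
    return Counter(sentiment(p) for p in posts).most_common(1)[0][0]
```

Even this toy version illustrates the civil-liberties concern raised later in the piece: a crude, biased scoring function applied at scale can mislabel a community’s mood and trigger real-world consequences.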
That’s a problem not just because this kind of technological eavesdropping could amplify inaccuracies or biases. It could also chill online discourse because everyone feels “that they’re being watched, not necessarily by humans, but by AI agents that have the ability to report things to humans who can bring consequences down on your head,” Stanley added.
ChatGPT maker OpenAI didn’t respond to requests for comment. Its usage policy says it does not allow “activity that violates people’s privacy,” including “tracking or monitoring an individual without their consent.”
He warned, however, that law enforcement must be transparent with its use of AI because of its reliability and bias issues. “There is never going to be a way of making AI unbiased,” he said, noting, as have others, that technologies programmed by humans reflect human fallibility.
Source: Ars Technica via Schneier on Security
Interesting article about a surprisingly common vulnerability: programmers leaving authentication credentials and other secrets in publicly accessible software code: Researchers from security firm GitGuardian this week reported finding almost 4,000 unique secrets stashed inside a total of 450,000 projects submitted to PyPI, the official code repository for the Python programming language. Nearly 3,000 projects contained at least one unique secret. Many secrets were leaked more than once, bringing the total number of exposed secrets to almost 57,000.
The credentials exposed provided access to a range of resources, including Microsoft Active Directory servers that provision and manage accounts in enterprise networks, OAuth servers allowing single sign-on, SSH servers, and third-party services for customer communications and cryptocurrencies.
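Secret scanners like GitGuardian’s work largely by matching known token formats against published code, often followed by live validity checks with the issuing service. A minimal, hypothetical sketch using a handful of well-known credential patterns (a tiny subset; production scanners ship hundreds of detectors):

```python
import re

# A few well-known token shapes; hypothetical subset for illustration
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat":        re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text):
    """Return the sorted names of every secret pattern found in the text."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))
```

Running a check like this in CI before publishing a package would catch the most common leaks described in the report before they ever reach PyPI.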