Pete Recommends – Weekly highlights on cyber security issues February 9, 2019

Subject: FBI teaches rental car employees how to spot potential attackers in new video
Source: CNNPolitics
https://www.cnn.com/2019/02/01/politics/fbi-dhs-rental-car-video/index.html

Washington (CNN) – The FBI and Department of Homeland Security are telling rental car employees that they play an “integral role in helping to identify and protect our nation against terrorism” in a video released this week. In the 11-minute instructional production, actors portray an aspiring attacker and a rental car clerk, who prevents the plot after picking up on a number of key signs that the FBI calls “tripwires.” The video is the latest overture from law enforcement to the vehicle rental industry as agencies seek to prevent vehicle ramming attacks, which have become an increasingly common tool for terrorists in the United States and around the world.


Subject: New Report From Future of Privacy Forum (FPF): IoT Devices Should Deal with Privacy Impacts for People with Disabilities
Source: The Future of Privacy Forum via LJ infoDOCKET
https://www.infodocket.com/2019/02/02/new-report-from-future-of-privacy-forum-fpf-iot-devices-should-deal-with-privacy-impacts-for-people-with-disabilities/

From the Future of Privacy Forum: The Future of Privacy Forum today [Jan. 31] released The Internet of Things (IoT) and People with Disabilities: Exploring the Benefits, Challenges, and Privacy Tensions. This paper explores the nuances of privacy considerations for people with disabilities using IoT services and provides recommendations to address those considerations, which can include transparency, individual control, respect for context, focused collection, and security.

[Clip]

FPF recommends that companies and policymakers follow these recommendations to improve the experiences of people with disabilities when they use IoT-enabled devices and to respect their privacy:

FPF RSS site feed: https://fpf.org/feed/

FPF Tags: disability inclusion, Internet of Things, privacy by design

example TAG feed: https://fpf.org/tag/internet-of-things/feed/


Subject: How to Turn Off Smart TV Snooping Features
Source: Consumer Reports
https://www.consumerreports.org/privacy/how-to-turn-off-smart-tv-snooping-features/

Smart TVs collect data about what you watch with a technology called ACR (automatic content recognition). Here’s how to turn it off…


Subject: The Best Websites to Find out If You’ve Been Hacked
Source: Digital Trends
https://www.digitaltrends.com/computing/best-websites-for-finding-out-if-youve-been-hacked/

The best defense is to find out as soon as possible whether your security has been compromised so you can take personal action, which is where data breach detection websites come into play. These sites allow you to securely search through the latest hacked data to see if any of your sensitive information is at risk. We’ve rounded up four of the best for you to use based on your security needs, so let’s dive in.


Subject: As Police Go Increasingly High-tech, Are Freedoms Being Compromised?
Source: Digital Trends
https://www.digitaltrends.com/cool-tech/future-police-technology-2019/

Futurists and tech companies often use the idea of freedom to promote products, but as technology gets ever more complicated and spreads into every facet of life, it provides authorities with ever more tools and opportunities to observe the populace and potentially infringe on those freedoms.

Law enforcement agencies, in particular, are rapidly incorporating cutting-edge tech into their workflow, and while some of these gadgets may make it easier to catch criminals, they’re also raising concerns about the erosion of privacy and the seeming ubiquity of surveillance.

Perhaps nowhere is the dichotomy between security and intrusiveness more apparent than in facial-recognition software.

Various RSS feeds from Digital Trends:
https://www.digitaltrends.com/rss-home/


Subject: How your health information is sold and turned into ‘risk scores’
Source: POLITICO
https://www.politico.com/story/2019/02/03/health-risk-scores-opioid-abuse-1139978

Information used to gauge opioid overdose risk is unregulated and used without patient consent. Companies are starting to sell “risk scores” to doctors, insurers and hospitals to identify patients at risk of opioid addiction or overdose, without patient consent and with little regulation of the kinds of personal information used to create the scores. While the data collection is aimed at helping doctors make more informed decisions on prescribing opioids, it could also lead to blacklisting of some patients and keep them from getting the drugs they need, according to patient advocates.

There’s no guarantee of the accuracy of the algorithms and “really no protection” against their use, said Sharona Hoffman, a professor of bioethics at Case Western Reserve University. Overestimating risk might lead health systems to focus their energy on the wrong patients; a low risk score might cause a patient to fall through the cracks.

No law prohibits collecting such data or using it in the exam room. Congress hasn’t taken up the issue of intrusive big data collection in health care. It’s an area where technology is moving too fast for government and society to keep up.

According to addiction experts, however, predicting who’s at risk is an inexact science. Past substance abuse is about the only clear red flag when a doctor is considering prescribing opioid painkillers.

Congress has shown some interest in data privacy; a series of hearings last year looked into thefts of data or suspect data sharing processes by big companies like Facebook. But it hasn’t really delved into the myriad health care and health privacy implications of data crunching.

Research into opioid risk factors is nascent. The University of Pittsburgh was awarded an NIH grant last year to determine whether computer programs incorporating Medicaid claims and clinical data are more accurate than ones based on claims alone.

Milliman won an FDA innovation challenge to create an artificial intelligence-based algorithm that predicts whether patients will receive an opioid use disorder diagnosis in the next six months. The company offers to provide a list of high-risk patients to payers, who can hand the relevant information to clinicians.

[NB — If a patient becomes addicted even though the AI or risk score predicted otherwise, who is liable? /pmw1]


Subject: Is your VPN secure?
Source: The Conversation
http://theconversation.com/is-your-vpn-secure-109130

About a quarter of internet users use a virtual private network, a software setup that creates a secure, encrypted data connection between their own computer and another one elsewhere on the internet. Many people use them to protect their privacy when using Wi-Fi hotspots, or to connect securely to workplace networks while traveling. Other users are concerned about surveillance from governments and internet providers.

Many VPN companies promise to use strong encryption to secure data, and say they protect users’ privacy by not storing records of where people access the service or what they do while connected. If everything worked the way it was supposed to, someone snooping on the person’s computer would not see all their internet activity – just an unintelligible connection to that one computer. Any companies, governments or hackers spying on overall internet traffic could still spot a computer transmitting sensitive information or browsing Facebook at the office – but would think that activity was happening on a different computer than the one the person is really using.

However, most people – including VPN customers – don’t have the skills to double-check that they’re getting what they paid for. A group of researchers I was part of do have those skills, and our examination of the services provided by 200 VPN companies found that many of them mislead customers about key aspects of their user protections.


Subject: Scammer groups are exploiting Gmail ‘dot accounts’ for online fraud
Source: ZDNet
https://www.zdnet.com/article/scammer-groups-are-exploiting-gmail-dot-accounts-for-online-fraud/

Cyber-criminal groups are exploiting a Gmail feature to file for fraudulent unemployment benefits, file fake tax returns, and bypass trial periods for online services. The trick is an old one, relying on Gmail’s “dot accounts”: Gmail ignores dot characters inside Gmail usernames, regardless of their placement. For example, Google treats johndoe@gmail.com, john.doe@gmail.com, and j.o.h.n.d.o.e@gmail.com as the same Gmail address.
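The dot-ignoring rule described above is why sites that treat each dotted spelling as a distinct account can be abused for duplicate signups. A minimal sketch (the addresses and function name are illustrative, not from the report) of how a service could canonicalize Gmail addresses before checking for duplicates:

```python
def canonical_gmail(address: str) -> str:
    """Return a canonical form of an email address for duplicate checks.

    Gmail delivers mail for every dotted variant of a username to the
    same inbox, so dots in the local part are stripped for Gmail domains.
    Case is also folded, since Gmail addresses are case-insensitive.
    """
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local}@{domain}"


# All dotted variants collapse to a single canonical address:
variants = ["john.doe@gmail.com", "j.o.h.n.doe@gmail.com", "johndoe@gmail.com"]
print({canonical_gmail(v) for v in variants})  # one entry, not three
```

A signup system that stores the canonical form would see all 56 “dotted” variations mentioned in the report as one account rather than 56.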

In a report published today, the team at email security firm Agari says it saw criminal groups use dotted Gmail addresses in many more places throughout last year.

In an example included in their report, Agari said it saw one group in particular use 56 “dotted” variations of a Gmail address to:


Subject: No more robocalls: How to block unwanted calls from iPhone, Android
Source: The Kim Komando Show via USA Today
https://www.usatoday.com/story/tech/columnist/komando/2019/02/07/no-more-robocalls-how-block-unwanted-calls-iphone-android/2778059002/

[I use #7, so far.  YMMV /pmw1]


Subject: Microsoft Security Lead Outlines the Perils of Still Using Internet Explorer
Source: Digital Trends
https://www.digitaltrends.com/computing/microsoft-security-lead-outlines-perils-internet-explorer/

If you’re still using Internet Explorer as your default browser and don’t know where to turn, be sure to check out our best web browsers for 2019 with options such as Google Chrome, Firefox, Opera, Microsoft Edge, and Vivaldi.


Subject: Google Should Force Better Security on Nest Users
Source: Gizmodo
https://gizmodo.com/google-should-make-two-factor-authentication-the-defaul-1832409728

Nest sent an email to all its customers on Wednesday morning warning them to better secure their accounts: enable two-factor authentication, pick strong passwords, and be alert. The message from Nest is that customers have repeatedly messed up by reusing weak passwords and not setting up multi-factor authentication.

The question is, does responsibility to secure something as important as a home camera fall on customers or Google itself? Should one of the richest and most technically advanced companies on Earth ship a product that produces a live stream of video inside your home with security at the level of your Spotify account?

Posted in: Cybercrime, E-Commerce, Email Security, Health, Privacy