Pete Recommends – Weekly highlights on cyber security issues, August 11, 2018

Subject: U.S. lawmakers pass bill forcing tech companies to disclose foreign source code review after previous …
Source: Reuters via Silicon Valley Business Journal

U.S. lawmakers this week approved a bill that would require all U.S. software companies to report foreign source code reviews — a sometimes controversial condition of doing business in countries like China and Russia.

The new legislation was rolled into the Pentagon’s spending bill, approved in an 87-10 vote in the Senate on Wednesday, Reuters reports.

Subject: As Russians hack the US grid, a look at what’s needed to protect it
Source: The Conversation

The U.S. electricity grid is hard to defend because of its enormous size and heavy dependency on digital communication and computerized control software. The number of potential targets is growing as “internet of things” devices, such as smart meters, solar arrays and household batteries, connect to smart grid systems.

As grid security researchers, we believe that current security standards mandated by federal regulations provide sufficient protection against observed threats. But recent incidents demonstrate the ongoing challenge of ensuring everyone follows the guidelines, which themselves must change over time to keep up with technological shifts.

The threat is real: In late 2015 and again in 2016, Russian hackers shut down parts of Ukraine’s power grid. In March 2018, federal officials warned that Russians had penetrated the computers of multiple U.S. electric utilities and were able to gain access to critical control systems. Four months later, the Wall Street Journal reported that the hackers’ access had included privileges that were sufficient to cause power outages.


Subject: Verizon Didn’t Bother to Write a Privacy Policy for its ‘Privacy Protecting’ VPN
Source: Motherboard

Verizon’s ‘Safe Wi-Fi’ says it “protects your privacy and blocks ad-tracking,” but its current privacy policy is a placeholder that says the exact opposite.

Verizon is rolling out a new Virtual Private Network service called Safe Wi-Fi it developed in conjunction with McAfee. According to Verizon, the $4 per month service “protects your privacy and blocks ad tracking, creating a secure Wi-Fi connection anywhere in the world.”

Besides Verizon’s long history of allowing third parties to monetize its customers’ web-browsing habits, there’s another reason you probably shouldn’t trust this product to “protect your privacy:” Verizon didn’t bother to write a privacy policy for it before releasing it to the public.

Subject: Holding law enforcement accountable for electronic surveillance
Source: MIT Computer Science & Artificial Intelligence Lab (CSAIL) via EurekAlert! Science News

When the FBI filed a court order in 2016 commanding Apple to unlock the San Bernardino shooter’s iPhone, the news made headlines across the globe. Yet every day there are tens of thousands of other court orders asking tech companies to turn over Americans’ private data. Many of these orders never see the light of day, leaving a whole privacy-sensitive aspect of government power immune to judicial oversight and lacking in public accountability.

To protect the integrity of ongoing investigations, these data requests require some secrecy: companies usually aren’t allowed to inform individual users that they’re being investigated, and the court orders themselves are also temporarily hidden from the public.

In many cases, though, charges never actually materialize, and the sealed orders usually end up forgotten by the courts that issue them, resulting in a severe accountability deficit.

To address this issue, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Internet Policy Research Initiative (IPRI) have proposed a new cryptographic system to improve the accountability of government surveillance while still maintaining enough confidentiality for the police to do their jobs.
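The article doesn’t detail the construction, but a standard building block for systems like this is the cryptographic commitment: the court publishes a hash-based commitment to each sealed order when it is issued, so the public can count orders and later verify unsealed disclosures without ever seeing sealed contents. A minimal sketch of that idea (illustrative only, not the CSAIL design; all names are hypothetical):

```python
import hashlib
import secrets

def commit(order_text: str) -> tuple[bytes, bytes]:
    """Commit to a sealed order: publish the digest, keep the nonce secret."""
    nonce = secrets.token_bytes(32)  # random blinding so the order can't be guessed
    digest = hashlib.sha256(nonce + order_text.encode()).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, order_text: str) -> bool:
    """Once the order is unsealed, anyone can check it matches the commitment."""
    return hashlib.sha256(nonce + order_text.encode()).digest() == digest

# The court publishes `digest` at issuance and reveals (nonce, text) at unsealing.
digest, nonce = commit("order: produce subscriber records")
assert verify(digest, nonce, "order: produce subscriber records")
assert not verify(digest, nonce, "a different order")
```

Because the digest reveals nothing about the order while binding the court to its contents, it supports exactly the trade-off the researchers describe: confidentiality during the investigation, accountability afterward.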




Subject: Health records ‘put at risk by security bugs’
Source: BBC News

Health records of almost 100 million patients worldwide were put at risk by security issues with a popular patient management system, researchers say.

Almost 30 bugs were found in the OpenEMR system by a cyber-security group called Project Insecurity.

OpenEMR is one of the world’s most widely used patient and practice management systems.

OpenEMR said it was “thankful” for Project Insecurity’s work and had now patched many of the bugs it had exposed.

Bug hunters

Cyber-security experts from Project Insecurity found various problems in their investigation of OpenEMR.


Subject: DARPA is racing against time to develop a tool that can spot ‘deepfakes’
Source: fedscoop

Back in April, Buzzfeed published a video of President Barack Obama saying some things that Obama simply never said. With that video, “deepfakes” — the practice of using artificial intelligence to map someone’s head (or in this case, mouth) onto another body — suddenly burst into the public consciousness.

Buzzfeed revealed that the process is pretty easy — the Obama video was created in about 56 hours using a free application called FakeApp. In an era rife with information manipulation big and small, this is scary.

The good news, though, is that there are researchers out there looking at the other side of the equation too — trying to figure out how to use AI to spot these inauthentic images more quickly and effectively than a human can. And the Defense Advanced Research Projects Agency (DARPA) is leading the charge.

For example, one simple method for spotting deepfakes, developed by Siwei Lyu, a professor at the University at Albany, State University of New York, is to look at the way the face blinks. Because deepfakes are trained on still images, the resulting video is often of a face that doesn’t blink, or blinks in an unnatural way.
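Lyu’s actual detector is a neural network trained on eye regions, but the underlying signal is easy to illustrate: measure how open the eyes are in each frame, count blinks, and flag clips whose blink rate falls far below the human norm of roughly 15–20 blinks per minute. A toy sketch on precomputed per-frame openness values (function names and thresholds here are illustrative assumptions, not Lyu’s method):

```python
def count_blinks(openness: list[float], closed_thresh: float = 0.2) -> int:
    """Count blinks as open-to-closed transitions across frames.

    `openness` holds one eye-openness value per frame
    (high = eyes open, near zero = eyes closed).
    """
    blinks = 0
    was_closed = False
    for value in openness:
        closed = value < closed_thresh
        if closed and not was_closed:
            blinks += 1  # falling edge: the eyes just closed
        was_closed = closed
    return blinks

def looks_suspicious(openness: list[float], fps: float,
                     min_blinks_per_min: float = 5.0) -> bool:
    """Flag a clip whose blink rate is implausibly low for a real face."""
    minutes = len(openness) / fps / 60.0
    return count_blinks(openness) / minutes < min_blinks_per_min

# A real face blinks every few seconds; an early deepfake may never blink.
real = ([0.3] * 90 + [0.1] * 3) * 10  # ~one blink every 3 s at 30 fps
fake = [0.3] * 930                    # same length, no blinks at all
print(looks_suspicious(real, fps=30))  # prints False
print(looks_suspicious(fake, fps=30))  # prints True
```

In practice the per-frame openness would come from a facial-landmark tracker, and, as the article notes, deepfake generators have since learned to fake blinking, which is why DARPA is funding detectors that look at many signals at once.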

DARPA’s Media Forensics (MediFor) program is just one of the agency’s many ongoing AI projects. The agency also has initiatives around “explainable AI” — the idea that as artificial intelligence takes over more roles, it will need to be able to explain to a human user how it reached its conclusions — as well as using big data and machine learning to understand group biases, and more.



Subject: As the IoT grows, so do the risks
Source: FCW

The advent of the Internet of Things brings with it a host of benefits to the public sector: greater efficiency in operations, expanded machine-to-machine communications, instantaneous access to new data points and much more.

In addition to the high-tech advances, however, widespread IoT adoption brings about a new concern of a comparatively low-tech variety for government entities: the impact on logical and physical security.

Doors, locks, cameras, sensors — nearly all of these commonplace physical security controls are now networked in some way, which introduces additional complexity and additional risk. For federal IT executives, this gives rise to two critical security considerations: the role of physical security controls in the overall IT landscape, and the increased need for testing as a result of the growing logical and physical attack surface.

New devices, new risks


Subject: Millions of Android Devices Are Vulnerable Out of the Box
Source: WIRED

Security meltdowns on your smartphone are often self-inflicted: You clicked the wrong link, or installed the wrong app. But for millions of Android devices, the vulnerabilities have been baked in ahead of time, deep in the firmware, just waiting to be exploited. Who put them there? Some combination of the manufacturer that made it, and the carrier that sold it to you.

That’s the key finding of new analysis from mobile security firm Kryptowire, which details troubling bugs preloaded into 10 devices sold across the major US carriers. Kryptowire CEO Angelos Stavrou and director of research Ryan Johnson will present their research, funded by the Department of Homeland Security, at the Black Hat security conference Friday.

The potential outcomes of the vulnerabilities range in severity, from being able to lock someone out of their device to gaining surreptitious access to its microphone and other functions. They all share one common trait, though: They didn’t have to be there.



Subject: Carnegie Mellon University creates cyber intelligence survey for the Office of the Director of National Intelligence
Source: Pittsburgh Business Times

Carnegie Mellon University announced Thursday that the Emerging Technology Center at the CMU Software Engineering Institute has created a Cyber Intelligence Tradecraft Survey, according to a news release.

CMU created the survey on behalf of the Office of the Director of National Intelligence in an effort to further study cyber intelligence. The survey will conclude by 2019 and aims to explain how the organizations surveyed, which will include U.S. companies, deal with cyber intelligence activity, challenges and best practices.

“As an intellectual discipline, cyber intelligence is still in its relative infancy, which makes it especially important to identify and share best practices,” said Jim Richberg, ODNI’s national intelligence manager for cyber, in a prepared statement. “The insight we gain from this study will improve our ability to produce and share actionable cyber intelligence in both government and the private sector.”


Posted in: AI, Cybercrime, Cybersecurity, Energy, Privacy