Pete Recommends – Weekly highlights on cyber security issues July 22 2018

Subject: Five Ways Digital Assistants Pose Security Threats in Home, Office
Source: eWeek

Voice-activated digital assistants in the home—Echo, Cortana, Alexa and Siri—open up a host of new types of vulnerabilities, from issuing commands that aren’t audible to humans to exploiting the accessibility settings activated by digital assistants.

Voice-activated digital assistants—from the Amazon Echo that sits on your counter to Cortana on your Windows systems and Siri on Apple’s iPhones—are intended to connect users to services through an easy-to-use voice interface. However, voice assistants are making cyber-attackers’ jobs easier as well.

At the Black Hat conference later this month, for example, four researchers will show how Cortana can be used to bypass the security on locked Windows PCs and other devices. While the group is exploiting a specific vulnerability—dubbed “Open Sesame”—the issues with voice assistants are deeper, said Tal Be’ery, an independent researcher and part of the team.

“Voice interfaces can be a good idea, but it is not relevant to all devices and all actions,” he said. “Enabling everything the PC does, and going through a voice interface on a corporate environment—this is not a very smart architecture decision.”

Here are five ways that voice assistants can be used to attack.

NB other eWeek SECURITY articles:

Subject: U.S. Secret Service Releases Operational Guide for Preventing Targeted School Violence
Source: Homeland Security PR

Release Date: July 13, 2018 – On July 12, 2018, the United States Secret Service National Threat Assessment Center released another tool in support of the effort to end the prevalence of targeted violence affecting the Nation, the world, and most importantly – our schools. Enhancing School Safety Using a Threat Assessment Model: An Operational Guide for Preventing Targeted School Violence was developed to provide fundamental direction on how to prevent incidents of targeted school violence. The guide provides schools and communities with a framework to identify students of concern, assess their risk for engaging in violence, and identify intervention strategies to mitigate that risk.

Topics: Academic Engagement, Critical Infrastructure Security

Keywords: Active Shooter, schools

NB RSS feeds for other HS PR:

Subject: Government leads the way in crowdsourced security
Source: GCN

To strengthen the defenses and resilience of IT systems, organizations increasingly are turning to ethical hackers and running bug bounty programs that offer rewards for uncovered vulnerabilities. While security researchers can earn big payouts from the likes of Google, Microsoft and other tech companies, they’ve also identified plenty of issues with public-sector websites, and government officials have seen the value of cybersecurity testing that pays only for results. Government use of bug-bounty programs has increased at a year-over-year rate of 125 percent, according to a new report from HackerOne, the company that runs the platform for hosting bug bounty competitions. That makes government the leading industry sector for adoption of crowdsourced security.

HackerOne’s 2018 Hacker-Powered Security Report examined data from 78,275 security vulnerability reports collected from more than 1,000 bug bounty and vulnerability disclosure programs it runs around the world.

“Individuals that act in good faith to identify and report potential vulnerabilities should not be legally exposed,” said HackerOne CEO Marten Mickos, who criticized the CFAA for having “vague wording that has not kept pace with the proliferation of the internet.” Both DOD and GSA have developed such vulnerability disclosure policies, and the Department of Justice issued a framework in July 2017 to help agencies design their own policies.

NB list of GCN RSS feed:

Subject: Robocalls from telemarketers, debt collectors and top businesses are at record highs — and could soon get worse
Source: The Washington Post

Robo-calls ravaged Americans’ smartphones in record numbers last month. But some of the nation’s top businesses — from credit card companies and student lenders to retailers and car dealers — are still urging the Trump administration to make it easier for them to dial and text mobile devices en masse.

For many smartphone owners, there’s rarely a day that they don’t receive an unanticipated call from an unrecognized number, some sporting an area code that’s suspiciously similar to their own. In June, robo-calls rang an estimated 4 billion times, according to data published Thursday by YouMail, a call-blocking app. A quarter of the calls sought to steal financial information or ensnare people in other serious scams.

But major U.S. corporations such as Capital One, Navient and Sirius XM tap that same auto-dialing technology to tout their products or nudge consumers to pay their late bills. Their lobbying blitz to ward off tougher new rules has frustrated public-interest advocates, who say the floodgates soon could be open for businesses to pester consumers with calls and texts that they don’t want — while leaving people with fewer options to stop the onslaught.

What actually counts as a robo-call — and how the federal government plans to regulate it — is the subject of political and legal disputes. Under a 1991 law passed before the arrival of iPhones and Androids, the FCC imposed tough restrictions on any technology that randomly generated and dialed phone numbers. Those using auto-dialers also had to obtain a customer’s explicit permission to contact them.

A key voice for the industry, the Student Loan Servicing Alliance, stressed that lenders have “no interest in and get no benefit from calling the wrong person,” said Winfield Crigler, the executive director of the group.

[Ed. Note: unless that otherwise enabled them to contact the “right” person, in which case it does benefit them. If opt-out and D.N.C. were effective, the issues would most likely be moot(ed). Not mentioned was the technical load on the infrastructure. /pmw1]

Subject: Surveillance and Legal Research Providers: What You Need to Know
Source: Medium via LLRX [w/ permission]

Legal research companies are selling surveillance data and services to law enforcement agencies including ICE. Their participation in government surveillance raises ethical questions about privacy, confidentiality and financial support: How private is your search history when your legal research vendors also sell surveillance data? Are you funding products that sell your patrons’ and clients’ data to ICE and other law enforcement agencies? Historically, librarians have protected people from unwanted surveillance and safeguarded intellectual freedom. How do librarians uphold their privacy and intellectual freedom standards when they rely on surveillance companies for their research resources?

Thomson Reuters, RELX, and ICE surveillance

Since September 11, 2001, permissive surveillance laws and improving data technology have created a huge market for big data policing products. Thomson Reuters and Reed Elsevier (now branded as RELX), the companies that own Westlaw and Lexis, are competing for contracts to supply troves of personal data and search technology to the government. Both companies have expanded their product lines to take advantage of lucrative surveillance opportunities. Since 2017, Thomson Reuters and RELX have bid on contracts to help ICE track hundreds of thousands of immigrants and target them for arrest.

ICE surveillance data may be used to target noncriminal residents for denaturalization and to locate and arrest people at schools, at courthouses, in hospitals, at work, and at their homes. Some companies are refusing to work with companies that build ICE surveillance systems, as ICE’s surveillance and enforcement practices raise numerous ethical and legal concerns. Lawyers, as well as the ABA, have called ICE’s practices unconstitutional and unethical.

Posted in: Civil Liberties, Law Librarians, LEXIS, Online Legal Research Services, Privacy, Social Media, Westlaw (each topic has its own RSS feed within LLRX), e.g.,

Subject: The dangers of ‘deep fakes’
Source: GCN

False and doctored media can be used for misinformation campaigns, and advanced technologies like artificial intelligence and machine learning will only make them easier to create and more difficult to detect.

Deep fakes are images or videos that combine and superimpose different audio and visual sources to create an entirely new (and fake) video that can fool even digital forensic and image analysis experts. They only need to appear credible for a short window of time in order to impact an election, Sen. Marco Rubio (R-Fla.) warned at a recent Atlantic Council event.

“One thing the Russians have done in other countries in the past is, they’ve put out incomplete information, altered information and or fake information, and if it’s done strategically, it could impact the outcome of an [election],” Rubio said. “Imagine producing a video that has me or Sen. [Mark] Warner [D-Va., who also spoke at the event] saying something we never said on the eve of an election. By the time I prove that video is fake — even though it looks real — it’s too late.”

The technology is far from flawless, and in many cases a careful observer can still spot evidence of video inconsistencies or manipulation. But as Chris Meserole and Alina Polyakova noted in a May 2018 article for the Brookings Institution, “bigger data, better algorithms and custom hardware” will soon make such false videos appear frighteningly real.

Researchers at the National Institute of Standards and Technology and the Defense Advanced Research Projects Agency have been working to develop technology that can detect deep fakes.

Subject: Facebook must adhere to German Holocaust denial laws, says Berlin
Source: Reuters via Yahoo Finance

BERLIN (Reuters) – Facebook must stick to German laws which ban Holocaust denial, the Justice Ministry in Berlin said on Thursday after Mark Zuckerberg caused outrage by saying his platform should not delete such comments. Zuckerberg’s remarks have fueled further criticism of Facebook after governments and rights groups have attacked it for not doing enough to stem hate speech. In the interview with tech blog Recode Zuckerberg said he was Jewish and personally found it offensive to deny the Holocaust but he did not think Facebook should delete people’s views. Officials in Germany, which has enforced a law imposing fines of up to 50 million euros ($58 million) on social media sites that fail to remove hateful messages promptly, made it clear that Holocaust denial was a punishable crime.

Josef Schuster, the head of the Central Council of Jews in Germany, said the study empirically proved that online anti-Semitism was increasing and becoming more aggressive. “Because words will eventually be followed by deeds. Online anti-Semitism is not virtual, but a real threat,” Schuster said.

Subject: Cybersecurity: Data, Statistics, and Glossaries

This report describes data and statistics from government, industry, and information technology (IT) security firms regarding the current state of cybersecurity threats in the United States and internationally. These include incident estimates, costs, and annual reports on data security breaches, identity thefts, cybercrimes, malware, and network security.

Much is written on this topic, and this CRS report directs the reader to authoritative sources that address many of the most prominent issues. The annotated descriptions of these sources are listed in reverse chronological order, with an emphasis on material published in the last several years. Included are resources and studies from government agencies (federal, state, local, and international), think tanks, academic institutions, news organizations, and other sources.

Posted in: Cybercrime, Cybersecurity, Ethics, Government Resources, Legal Research, Privacy, Social Media