Subject: The FTC Is Rewriting the Rules of the Internet
Gizmodo sat down with Samuel Levine, the Director of the FTC’s Bureau of Consumer Protection, for an extended interview on how the FTC envisions its groundbreaking attack on privacy problems, its plans for the future, and its effort to build a new regulatory environment that protects consumers without stifling a rapidly shifting tech landscape.
Source: The Hill
A pair of Democratic senators penned a letter to Amazon executives Friday, concerned that the platform’s new healthcare service is putting users’ private data at risk. “Amazon Clinic customers deserve to fully understand why Amazon is collecting their health care data and what the company is doing with it,” Sens. Peter Welch (D-Vt.) and Elizabeth Warren (D-Mass.) wrote.
Citing a recent investigation by The Washington Post, the senators drew attention to the fact that users must sign away the rights to significant amounts of private health data in order to use the company’s services.
Amazon Clinic launched in late 2022 and pledges easy, affordable access to medical professionals who can write prescriptions for fees as low as $30, much cheaper than traditional healthcare.
Welch and Warren formally asked Amazon to fully explain what data is collected by Amazon Clinic, what the company does with that data, and whether it is shared with any third parties for any purpose.
The promised AI revolution has arrived. OpenAI’s ChatGPT set a new record for the fastest-growing user base, and the wave of generative AI has extended to other platforms, creating a massive shift in the technology world. It’s also dramatically changing the threat landscape — and we’re starting to see some of these risks come to fruition.
Attackers are using AI to improve phishing and fraud. Meta’s 65-billion-parameter language model was leaked, which will undoubtedly lead to new and improved phishing attacks. We see new prompt injection attacks on a daily basis.
Users are often putting business-sensitive data into AI/ML-based services, leaving security teams scrambling to support and control the use of these services. For example, Samsung engineers put proprietary code into ChatGPT to get help debugging it, leaking sensitive data. A survey by Fishbowl showed that 68% of people who are using ChatGPT for work aren’t telling their bosses about it.
Source: Wired (free link)
“Vehicles from Toyota, Honda, Ford, and more can collect huge volumes of data. Here’s what the companies can access. … In May, US-based automotive firm Privacy4Cars released a new tool, dubbed the Vehicle Privacy Report, that reveals how much information on your car can be hoovered up. Much like Apple and Google’s privacy labels for apps—which show how Facebook might use your camera, or how Uber might use your location data—the tool indicates what vehicle manufacturers can know. …”
Abstracted from beSpacific
Source: The Register
Speaking at an event hosted by think tank Hudson Institute, Fick said 30 years ago democratic nations felt they had an “unassailable global advantage in telecoms” thanks to the strength of outfits like Ericsson, Nokia, Samsung, Motorola, Bell Labs, Alcatel and Lucent.
But he feels those titans became complacent, governments stopped watching the tech develop, and “I don’t think we appreciated or acted on the reality that these technologies were going to be central to our geopolitical standing.”
But China noticed. And it “executed a deliberate strategy of IP theft and government subsidies.”
The ambassador described China’s actions in the telecoms industry as “a playbook” and warned the nation will “run it in cloud computing, they will run it in AI, they will run it in every core strategic technology area that matters.”
White House reportedly has an eye on Chinese clouds
The Biden administration is reportedly considering action against China’s big clouds. The New York Times yesterday reported officials “have discussed whether they can set tighter rules for the Chinese companies when they operate in the United States, as well as ways to counter the companies’ growth abroad.”
June 23 (UPI) — Sens. Ed Markey and Gary Peters on Friday called on the comptroller general of the U.S. Government Accountability Office to conduct a detailed technology assessment of the potential harms of generative artificial intelligence and how to mitigate them.
The letter said the ability of generative AI to mimic voice, music, text, product design, and other content can harm society, exploit communities, and endanger humanity if it goes unchecked.
“Although generative AI holds the promise of many benefits, it is already causing significant harm,” stated a letter from Markey, D-Mass., and Peters, D-Mich., to Comptroller General Gene Dodaro, head of the Government Accountability Office.
They said generative AI can cause harm to data workers, and that its output has the potential to directly harm vulnerable communities or to cause widespread injury, death, or even human extinction.
On Wednesday, Senate Majority Leader Chuck Schumer, D-N.Y., outlined a new effort to regulate artificial intelligence. Schumer called on legislators to support his plan, the SAFE Innovation Framework (Security, Accountability, Foundations, Explainability). The plan seeks to counter the potential job loss, national security, and misinformation risks that AI brings.