Subject: Artificial Intelligence: Key Practices to Help Ensure Accountability in Federal Use
Source: U.S. GAO
What GAO Found: “Artificial intelligence (AI) is evolving at a rapid pace and the federal government cannot afford to be reactive to its complexities, risks, and societal consequences. Federal guidance has focused on ensuring AI is responsible, equitable, traceable, reliable, and governable. Third-party assessments and audits are important to achieving these goals. However, a critical mass of workforce expertise is needed to enable federal agencies to accelerate the delivery and adoption of AI.

Participants in an October 2021 roundtable convened by GAO discussed agencies’ needs for digital services staff, the types of work that a more technical workforce could execute in areas such as artificial intelligence, and challenges associated with current hiring methods. They noted such staff would require a variety of digital and government-related skills. Participants also discussed challenges associated with existing policies, infrastructure, laws, and regulations that may hinder agency recruitment and retention of digital services staff.

During a September 2020 Comptroller General Forum on AI, experts discussed approaches to ensure federal workers have the skills and expertise needed for AI implementation. Experts also discussed how principles and frameworks on the use of AI can be operationalized into practices for managers and supervisors of these systems, as well as third-party assessors. Following the forum, GAO developed an AI Accountability Framework of key practices to help ensure responsible AI use by federal agencies and other entities involved in AI systems. The Framework is organized around four complementary principles: governance, data, performance, and monitoring.”
- Science and Technology
- Artificial intelligence
- Best practices
- Federal workforce
- Federal agencies
- Federal hiring
- Health care
- Government programs
- High-risk issues
Source: Krebs on Security
Countless smartphones seized in arrests and searches by police forces across the United States are being auctioned online without first having the data on them erased, a practice that can lead to crime victims being re-victimized, a new study found. In response, the largest online marketplace for items seized in U.S. law enforcement investigations says it now ensures that all phones sold through its platform will be data-wiped prior to auction.

Researchers at the University of Maryland last year purchased 228 smartphones sold “as-is” from PropertyRoom.com, which bills itself as the largest auction house for police departments in the United States. Of the phones they won at auction (at an average of $18 per phone), the researchers found 49 had no PIN or passcode; they were able to guess an additional 11 of the PINs by trying the 40 most popular PINs and swipe patterns.
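The researchers' guessing step amounts to trying a short list of popular PINs against each phone. A minimal Python sketch of that idea, using made-up data (the PIN list and seized-phone PINs below are hypothetical stand-ins, not the study's top-40 list):

```python
# Hypothetical illustration of the study's PIN-guessing step: only the
# most popular PINs are tried against each device. All data here is invented.
TOP_PINS = ["1234", "0000", "1111", "1212", "7777"]  # stand-in for a top-40 list

def guessable(phone_pins, top_pins):
    """Return the PINs of phones that would fall to a popular-PIN guess."""
    return [pin for pin in phone_pins if pin in top_pins]

seized = ["1234", "8462", "0000", "5931"]  # example PINs on auctioned phones
print(guessable(seized, TOP_PINS))         # -> ['1234', '0000']
```

The study's broader point is that even this trivial strategy unlocked 11 of the auctioned devices.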
The tech giant says old, disused accounts pose a security threat. Google is putting inactive users on notice: the company says it will soon start deleting accounts that have gone two years or longer without a login or other demonstration of engagement. Stored content on Gmail, Workspace, YouTube, and Photos will all be on the chopping block—along with the associated Google accounts themselves—under the new policy, as outlined in a company blog post published Tuesday.
Though it can be easy to feel that the internet is forever, especially when something you’d rather not have posted makes it online, it’s really not. The internet has proven time and time again to be a rather ephemeral archive. Websites go offline. Repositories of content self-immolate. Servers get wiped. And Google can put an expiration date on whatever it wants.
[link rot; don’t forget https://takeout.google.com/ – /pmw1]
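The two-year inactivity rule described above is simple to state precisely. A minimal sketch, with example dates (the 730-day cutoff is an approximation of "two years"; Google's exact criteria are not public):

```python
# Sketch of the stated policy: an account with no login or other engagement
# for two years or longer becomes eligible for deletion. Dates are examples.
from datetime import datetime, timedelta

TWO_YEARS = timedelta(days=730)  # assumption: two years ~ 730 days

def eligible_for_deletion(last_activity: datetime, now: datetime) -> bool:
    """True if the account has been inactive for at least two years."""
    return now - last_activity >= TWO_YEARS

now = datetime(2023, 5, 16)
print(eligible_for_deletion(datetime(2021, 1, 3), now))  # dormant -> True
print(eligible_for_deletion(datetime(2023, 4, 1), now))  # recent  -> False
```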
A new report outlines how the generative AI tool can quickly build scripts to thwart attackers and identify security vulnerabilities, but stresses that secure and responsible use of the evolving technology is essential.

Agencies can use ChatGPT to quickly build security code, adding new ammunition to their cybersecurity arsenal, according to a new report.
“What we found with ChatGPT was it had such a quick ability to build code that you may be able to use in your own systems at a rate that was … faster than your average employee would be able to develop something,” said Sean Heide, research technical director at the Cloud Security Alliance and an author of the “Security Implications of ChatGPT” report that CSA released April 23.
Download the 54-page PDF: https://cloudsecurityalliance.org/download/artifacts/security-implications-of-chatgpt/
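As an illustration of the kind of short defensive script the CSA report says such a tool can draft quickly, here is a hypothetical example (the log lines, format, and IP addresses are invented, not from the report): tallying failed SSH login attempts per source IP.

```python
# Illustrative only: a small defensive script of the sort the report describes.
# Log lines below are fabricated examples in a common sshd format.
import re
from collections import Counter

FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def failed_logins_by_ip(lines):
    """Tally failed SSH login attempts per source IP address."""
    hits = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

log = [
    "May 16 10:01:02 host sshd[1]: Failed password for root from 203.0.113.7 port 22",
    "May 16 10:01:05 host sshd[1]: Failed password for admin from 203.0.113.7 port 22",
    "May 16 10:02:11 host sshd[1]: Accepted password for alice from 198.51.100.2 port 22",
]
print(failed_logins_by_ip(log))  # Counter({'203.0.113.7': 2})
```

The report's caveat applies here too: any generated code should be reviewed before it touches production systems.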
Subject: Cloud Security: Selected Agencies Need to Fully Implement Key Practices
Source: U.S. GAO
Fast Facts – Cloud services—on-demand access to shared resources such as networks, servers, and data storage—can help federal agencies deliver better IT services for less money. But without effective security measures, these services can make agencies vulnerable to risks such as cyberattacks.

We looked at how four agencies implemented key cloud security practices—like having a plan to respond to incidents. While the agencies implemented some of the security practices, none of them fully implemented all of the practices for their systems.
We made 35 recommendations to the agencies to fully implement key cloud security practices.
Topics: Information Security
Subject: Digital Privacy Legislation is Civil Rights Legislation
Source: EFF Deeplinks blog
Apple has long used end-to-end encryption for some of the information on your iPhone, like passwords or health data, but the company neglected to offer a way to better protect other crucial data, including iCloud backups, until recently. This came after years of a hard-fought battle to push Apple to encrypt backups and drop its plans for client-side scanning. With Advanced Data Protection, that additional security is now an option, but you have to turn it on yourself. This is a big win for user privacy, and sets a new bar for the safety of cloud device backups.

Apple introduced Advanced Data Protection in the United States in December 2022, and released it globally in January 2023. (No list of countries is currently available, but Apple confirmed to EFF that it’s available globally.) The idea is simple: you can now enable end-to-end encryption of data that was previously only encrypted in transit and on Apple’s servers, meaning that Apple itself could access the data. In other words, you can now control the encryption keys and Apple will not be able to access any of this data. It also means Apple will not be able to help you regain access to most information on your account. The full list of data categories is available on Apple’s site, but the most notable include the iCloud backup (which includes the backup of Messages), iCloud Drive, photos, notes, reminders, and more.
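The key property described above, that the provider stores only ciphertext and never holds the key, can be sketched in a few lines. This is a conceptual toy (PBKDF2 key derivation plus a hash-based XOR keystream), not Apple's actual cryptography; real end-to-end systems use vetted AEAD ciphers:

```python
# Conceptual sketch of the end-to-end model: the key is derived on the user's
# device and never leaves it, so the server stores only ciphertext and salt.
# Toy cipher for illustration only -- do not use this construction in practice.
import hashlib
import secrets

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    """Derive a 32-byte key on the device from a user secret."""
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

def xor_stream(key: bytes, data: bytes) -> bytes:
    """XOR data with a hash-derived keystream; symmetric, so it also decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

salt = secrets.token_bytes(16)              # stored alongside the ciphertext
key = derive_key(b"device passcode", salt)  # exists only on the device
ciphertext = xor_stream(key, b"iCloud backup contents")
# The server sees only `ciphertext` and `salt`; recovery requires the key,
# which is also why the provider cannot restore access if the key is lost.
print(xor_stream(key, ciphertext))          # -> b'iCloud backup contents'
```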
Source: Verge and Reuters via Newser
Apple is content to offer OpenAI’s ChatGPT to iPhone users, but not to its own employees. The company is restricting workers from using external artificial intelligence tools like ChatGPT and Microsoft-owned GitHub’s Copilot, which automates the writing of software code, for fear that confidential data entered into the programs could be leaked, the Wall Street Journal reports. ChatGPT stores user interactions, which are used to train the AI model. Back in March, OpenAI disclosed a bug that exposed elements of users’ chat history. ChatGPT has since added an “incognito mode” that lets users turn off chat history, per Reuters. But “even with this setting enabled, OpenAI still retains conversations for 30 days with the option to review them ‘for abuse’ before deleting them permanently,” reports the Verge….
Other ChatGPT articles: https://www.newser.com/tag/77126/1/chatgpt.html
If you’re a fan of ChatGPT and its capabilities, you’re probably curious if there is a mobile app you can download for on-the-go chatbot conversations. Well — there is, but for now, it’s only available in the U.S. on iOS.
Also: OpenAI dropped a free ChatGPT app for iPhones. Does it live up to the hype?
But some people have found it difficult to locate OpenAI’s official ChatGPT app in the App Store, leaving sketchy and scammy apps more visible to download. Most App Store apps claiming to use the OpenAI technology behind ChatGPT aren’t legit, and your personal information could be at risk.
Also: How to use ChatGPT in your browser with the right extensions
Scammy apps will ask for unnecessary information and permissions, load malware onto your device after installation, or trick you into paying a lot of money for a useless subscription. Here are some tips to avoid downloading an app with malicious intent.
Source: Android Headlines
Large companies like Meta, Google, and Microsoft just cannot stay out of trouble with the government. Google has just settled a lawsuit claiming that the company profited from using location tracking without its users’ consent. The company had to shell out almost 40 million dollars to settle, according to Android Authority. Asking large companies whose revenue depends largely on ads to stay out of trouble with the government is like asking a fish to walk on land. These companies do whatever they can to soak up as much revenue as possible. In the case of Google, that means a lot: in 2021, ad revenue made up about 80% of the company’s yearly income.
Google settled a large location tracking lawsuit – Security and privacy are some of the biggest topics in tech today. And no one likes knowing that their location is being tracked by mega-corporations. This is what prompted Washington State to sue Google over its use of users’ location data.
RSS feed: https://www.androidheadlines.com/feed