Using Netflix as an example and referencing a number of articles touting the company’s expert use of data analytics and algorithms, marketing savant Jason Voiovich argues that data helps inform content decisions but does not alone drive them. Data is one asset among many – but humans decide what counts in the analysis. As data analytics increasingly drive corporate decision-making in all sectors, the lessons Voiovich highlights are critical to effective, accurate and responsible business practices.
It is helpful to classify documents or other content items to make them easier to find later. Searching the full text alone can retrieve inaccurate results or miss relevant documents that use different words from those entered into a search box. A document or content management system may include features for tagging, keywords, categories, indexing, etc. Taxonomist Heather Hedden explains the differences among these elements to facilitate the implementation of more effective knowledge and content management.
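The point about full-text search missing documents that use different vocabulary can be sketched in a few lines of Python. The documents, text, and tags below are hypothetical examples, not drawn from Hedden's article; the sketch simply shows a controlled tag retrieving a document that a literal keyword match would miss.

```python
# Hypothetical mini-collection: two documents about the same concept
# ("contracts") that use different words in their text, plus an unrelated one.
docs = [
    {"id": 1, "text": "Draft employment agreement for review", "tags": {"contracts"}},
    {"id": 2, "text": "Signed vendor contract, fully executed", "tags": {"contracts"}},
    {"id": 3, "text": "Quarterly budget projections", "tags": {"finance"}},
]

def full_text_search(docs, term):
    """Naive full-text match: finds only the literal word."""
    return [d["id"] for d in docs if term.lower() in d["text"].lower()]

def tag_search(docs, tag):
    """Tag match: finds every document indexed under the concept."""
    return [d["id"] for d in docs if tag in d["tags"]]

print(full_text_search(docs, "contract"))  # [2] -- misses the "agreement" document
print(tag_search(docs, "contracts"))       # [1, 2] -- both contract documents
```

The tag search retrieves both contract documents because a human (or a taxonomy-driven process) assigned the concept, while the literal text match misses the document that says "agreement" instead of "contract".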
Nicole Black documents best practices for transitioning your firm to a paperless environment, and includes an infographic on how to train your staff on the ins and outs of working with PDFs.
Marcus Zillman’s guide highlights multifaceted browser alternatives to the mainstream search tools that researchers may regularly use by default. There are many reliable yet underutilized applications that facilitate access to and discovery of subject matter specific documents and sources. The free applications included here also offer collaboration tools and resources to build and manage repositories, employ data visualization, manage metadata, create citations and bibliographies, and support document discovery and data relationship analysis.
Privacy and security issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and security, often without our situational awareness. Four highlights from this week: Google to roll out auto-delete controls for location history and activity data; Rights groups challenge warrantless cellphone searches at U.S. border; U.S. cyber spies unmasked many more American identities in 2018; and Spies, Lies, and Algorithms.
Taxonomist Heather Hedden compares and contrasts the work of creating a taxonomy with that of creating a knowledge model, which also involves inputs of people and content but places more emphasis on stakeholder/user input. As Hedden says, “content contains information, but people contain knowledge, so knowledge modeling requires the input of various people, with the input gathered in a comprehensive and systematic way.” This article clearly identifies more facets of the role of knowledge management within organizations in many sectors.
Pete Weiss is the author of Pete Recommends – Weekly highlights on cyber security. He is a strong advocate of RSS to keep pace with rapidly changing news, research and technology, to name but a few subjects. Using Sabrina Pacifici’s blog, beSpacific, as an example, Weiss offers more than a dozen regularly updated subject matter specific feeds that you should consider adding to your research portfolio.
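For readers new to RSS, reading a feed programmatically takes only a few lines of standard-library Python. The feed XML below is a made-up stand-in, not an actual beSpacific feed; in practice you would fetch a real feed URL with urllib.request and pass the response bytes to ElementTree.

```python
# Minimal sketch of parsing an RSS 2.0 feed with the Python standard library.
# The XML here is hypothetical sample data standing in for a real feed.
import xml.etree.ElementTree as ET

rss = """<rss version="2.0"><channel>
  <title>Example research feed</title>
  <item><title>New privacy ruling</title><link>https://example.com/a</link></item>
  <item><title>Search tool roundup</title><link>https://example.com/b</link></item>
</channel></rss>"""

root = ET.fromstring(rss)
# Each <item> in an RSS channel carries a headline and a link.
items = [(i.findtext("title"), i.findtext("link")) for i in root.iter("item")]
for title, link in items:
    print(f"{title}: {link}")
```

A feed reader does essentially this on a schedule for every subscribed feed, which is why RSS scales so well for tracking many fast-moving sources at once.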
Academic Law Library Director Jamie J. Baker discusses the requirements for scholarly research journal content in the context of the global push-back against publisher pricing increases that are beyond the acceptable thresholds of organizational funding and budgets.
Former CPA, writer and teacher Ken Boyd provides readers with an explanation of tax fraud that is clearly presented, instructive and relevant to the ongoing Mueller investigation. Boyd uses the extensive New York Times investigative report of November 2018 that documented a history of tax fraud allegedly committed by Donald Trump, his father and siblings, as the foundation for his lesson on various types of tax fraud. The allegations documented by the Times are under review by the New York State Department of Taxation and Finance.
How big is the Deep Web? It is estimated to comprise 7,500 terabytes – although an exact size is not known, and the figures vary widely on this question. The magnitude, complexity and siloed nature of the Deep Web pose a challenge for researchers. You cannot turn to one specific guide or one search engine to effectively access the vast range of information, data, files and communications that comprise it. The ubiquitous search engines index, manage and deliver results from the Surface Web. These search results include links, data, information, reports, news, subject matter content and a large volume of advertising that is optimized to increase traffic to specific sites and support marketing and revenue focused objectives. The Deep Web, on the other hand – often misconstrued as a repository of dark and disreputable information [Note – it is not the Dark Web] – has grown tremendously beyond that characterization to include significant content on a wide range of subject matters covering a broad swath of files and formats, databases, pay-walled content, as well as communications and web traffic that is not otherwise accessible through the Surface Web. This comprehensive multifaceted guide by Marcus Zillman provides you with an abundance of resources to learn about the Deep Web, search it, apply appropriate privacy protections, and maximize your time and efforts to conduct effective and actionable research within it.