The Motivation Behind Manipulating Data and Information Toward a Desired Outcome

Some recent headlines have reported disturbing news about respected and respectable scholars falsifying data or simply ignoring what their data actually show in scholarly papers. This is one more reason for the skepticism many of us feel toward the misinformation flooding our inboxes and newsfeeds, compelling each of us to exercise our critical thinking skills. And the examples we’re referring to aren’t even the result of AI. They come from human error, strong bias at play, or manipulative intent for one purpose or another.

This leads us to another topic in our continuing explorations of human motivation. Why do we lie? Why do we cheat? We covered these topics expansively in our book, The Truth About Transformation, in terms of behavior in the workplace. Based on the recent news, we’re going to take a deeper dive.

There are the cases of deception involving public anti-heroes Elizabeth Holmes of Theranos and Sam Bankman-Fried of FTX, the younger poster children for egregious fraud. But there are also reports about Marc Tessier-Lavigne, who stepped down last month as president of Stanford University, and, in the irony of all ironies, Harvard Business School professor Francesca Gino, who stands accused of cheating in her published studies of why people cheat.

Pretty Big Lies

We’re going to start with a few big lies, breaches, and deliberate misleads that provide context for the smaller partners in crime. These notorious headliners illustrate the implications and impacts of misinformation. Thanks to Business.com for the investigative research into these cases.

  • Equifax

“Equifax is one of the three major credit bureaus in the US, and in 2017 it was involved in a data breach that affected 143 million consumers. Hackers were able to access personal information like credit card numbers, Social Security numbers, home addresses and even driver’s licenses. The breach happened because the company failed to implement sufficient cybersecurity measures. The breach itself was bad enough, but the company deliberately misled customers and withheld information. It also later came out that additional data breaches occurred, but customers weren’t informed. As a result, Equifax had to pay a minimum of $575 million as a settlement with the Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), and 50 states and territories. CEO Richard Smith was ousted three weeks after the data breach was revealed.”

  • Theranos

“Theranos was a consumer healthcare technology startup founded by Elizabeth Holmes in 2003. The company was once valued at $10 billion, thanks to its supposed breakthrough medical technology. It later came out, however, that this breakthrough technology never existed, and that Holmes and company president Ramesh Balwani lied about their invention. The Securities and Exchange Commission (SEC) charged Holmes and Balwani with massive fraud for deceiving investors about the company’s performance. In January 2022, Holmes was found guilty on four of the 11 fraud charges against her. In November, she was sentenced to more than 11 years in prison.”

  • Boeing

“The Boeing 737 MAX was grounded from March 2019 to December 2020 after two crashes that took the lives of 346 people. It was later determined by safety regulators that the accidents were caused by flight control design flaws, and the company initially stated that a fix would be ready within a few weeks. Weeks turned into 21 months before the MAX was finally cleared to fly again in all worldwide markets. The bad publicity negatively impacted Boeing and the aviation industry. Boeing CEO Dennis Muilenburg had to testify before Congress and later resigned in December 2019. The company was charged with fraud for hiding information from safety regulators.”

  • Facebook and Cambridge Analytica

“In March 2018, it was revealed that a firm called Global Science Research had harvested data from 87 million Facebook users without their consent. It happened because a previous version of Facebook’s privacy policy allowed apps to access big data and personal information about users and their friends. The data were later sold to Cambridge Analytica and utilized to create targeted ads for the 2016 presidential election. The fallout was significant, with CEO Mark Zuckerberg called to testify before Congress. Facebook was later fined $5 billion by the FTC for privacy violations. Cambridge Analytica ended up filing for Chapter 7 bankruptcy.”

  • Wells Fargo

“From 2002 to 2016, Wells Fargo executives were pressuring employees to cross-sell products and meet impossibly high sales goals. To reach these high quotas, employees resorted to creating millions of fake accounts for customers without their knowledge or approval. For instance, if a customer opened a checking account at the bank, a Wells employee might also secretly open a credit card under that same customer’s name. The fake accounts caused the company’s short-term profits to soar to over $2 million until the misdeeds were revealed in 2016.” The bank had to pay $3 billion in fines to the SEC and the Department of Justice, and the scandal continues to impact Wells Fargo’s business as current and prospective customers have little if any trust in the bank.

  • Uber

“The FTC alleged that Uber misled its rideshare drivers about their expected earnings and vehicle financing through its Vehicle Solutions Program. The government’s suit against Uber maintained that to recruit new drivers, Uber exaggerated statements about expected earnings, claiming that the average income for drivers was $74,000 in San Francisco and $90,000 in New York, but in reality, less than 10% of drivers in those cities earned that kind of income. The median incomes were actually $53,000 in San Francisco and $61,000 in New York. Uber also claimed that drivers could buy a car through the company for just $140 per week or lease a vehicle for only $119 per week. However, the FTC found that the median weekly purchase and lease payments could in fact exceed $200. In 2017, the company agreed to pay $20 million to settle the FTC’s charges.”

Statistical Deception

“There are three kinds of lies: Lies, Damned Lies, and Statistics,” a statement that has been attributed to Mark Twain, who himself attributed it to British Prime Minister Benjamin Disraeli, who might never have said it in the first place. (Highpoint Associates)

So, not to restate the obvious, but human thought and input are the prompts for data analysis. Machines deliver the output. Humans interpret the information (correctly or incorrectly), potentially with a spin toward their own objectives, and seek to distill it into “intelligence.”

It is the interpretation that often trips us up. Who provides oversight of the data scientists, researchers, and academics? Who ensures that work is reviewed before publication, that there is a sound process for organizational decisions, and that what the data really demonstrate is read objectively?

Interpretation can be, and often is, biased. Data can simply be misreported or manipulated, with the output and outcomes constructed to prove the author’s or creator’s intended purpose. Unfortunately, the conclusions and findings can rest on faulty assumptions and information. Remember, none of us knows everything. Our minds often fill the gaps with assumptions, faulty conclusions, and connections between dots that have no real interrelationship, dependence, or influence.

Any decision maker seeking to leverage data and information must become adept at reviewing that data and understanding what is real, inflated, a rabbit hole, aligned to the wrong conclusion, or straight-out false.

On the academic front, three scientists (Joe Simmons, Leif Nelson, and Uri Simonsohn) hunt down published studies built on faulty or fraudulent data, reporting their findings in their blog, Data Colada. According to The Wall Street Journal, “They use tips, number crunching and gut instincts to uncover deception. Over the past decade, they have come to their own finding: Numbers don’t lie but people do. Simmons, a professor at the Wharton School, states, ‘Once you see the pattern across many different papers, it becomes like a one-in-quadrillion chance that there’s some benign explanation.’ Simmons and his two colleagues are among a growing number of scientists in various fields around the world who moonlight as data detectives, sifting through studies published in scholarly journals for evidence of fraud.”

And this problem is not just an issue of academic integrity. The Journal adds, “The hunt for misleading studies is more than academic. Flawed social science research can lead to faulty corporate decisions about consumer behavior or misguided government rules and policies. Errant medical research risks harm to patients. Researchers in all fields can waste years and millions of dollars in grants trying to advance what turn out to be fraudulent findings.”

There are also situations of bending the truth, omitting information, or structuring the outcomes to satisfy a real or perceived audience. For example, in a highly polarized society, conclusions about climate change, or even the way they are framed, can incite readers and reviewers. Opinion often wins out over a focus on substantive findings. Or a publisher’s bias may slant the way findings are reported. And more frequent still is the temptation to make headlines to gain attention and … sales.

If misinformation is intentional, the hope is not to be questioned or discovered. Consider a simple example of an organizational report citing the number of customers it has across all its business lines. Stakeholder and market pressure to demonstrate growth may motivate leaders to cherry-pick conclusions from the data to fulfill that goal. Data may be manipulated to prove a point, but once the numbers become public, they become the baseline and comparison reference for every future analysis and report. In short, the initial manipulation sets a bar that can lock in ongoing manipulation built on false numbers. Everyone becomes afraid to call out the original manipulation because the ramifications are extreme, requiring a reset and a lot of explanation. Trust and credibility are eroded, and significant brand damage can be done in the process.
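To make that customer-count example concrete, here is a minimal, hypothetical sketch in Python (the business lines and customer IDs are invented for illustration): simply adding up each line’s customer count double-counts anyone who uses more than one line, so the headline figure is inflated even though no single number is technically false, and every later report ends up benchmarked against it.

```python
# Hypothetical illustration of an inflated "total customers" figure.
# Business lines and customer IDs are invented for the example.
customers_by_line = {
    "retail":    {"C001", "C002", "C003", "C004"},
    "lending":   {"C002", "C003", "C005"},
    "insurance": {"C003", "C004", "C005", "C006"},
}

# The flattering number: sum each line's count, double-counting overlaps.
summed_total = sum(len(ids) for ids in customers_by_line.values())

# The honest number: count each distinct customer exactly once.
unique_total = len(set().union(*customers_by_line.values()))

print(f"Reported 'total customers' (summed per line): {summed_total}")  # 11
print(f"Actual distinct customers:                    {unique_total}")  # 6
```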

So, it’s great there is some level of peer-to-peer self-policing for academic fraud. But where’s the oversight for business misinformation, even down to the granular level within an organization?

What’s in a Lie?

But first, let’s take a step sideways. Why do people lie in the first place? Psychologist Paul Ekman writes there are several core reasons people lie. Let’s take a look, augmented with our own business-related context.

  • To avoid being punished. A simple example: the shortsighted thinking that distorting sales projections will put the team in a better light. Good luck.
  • To obtain a reward not otherwise readily obtainable. For new-hire candidates, claiming work experience they don’t have during a job interview to increase the chances of getting the job.
  • To protect another person from being punished. A loyal colleague covers for a coworker to protect the reputation of the team.
  • To win the admiration of others. Telling lies to increase your popularity can range from “little white lies” that embellish a story to creating an entirely new persona.
  • To avoid embarrassment. Misleading investors, misrepresenting facts to the Board, covering up a toxic work environment: all lies that are inevitably revealed, often in an all-too-public forum.
  • To exercise power over others. Herein lies the crux of data misinformation: control the information, control the narrative, control the power. We’ve all seen it when someone presents misleading information to clients to make the sales proposition look better, or when peers around the table bend the truth to maintain their position and territory.

And what are the most popular misconceptions and assumptions about lying? According to Chris Westfall: no one will ever find out; no one will get hurt; the ends justify the means; and “You Can’t Handle the Truth!”

Breaches of Data Integrity

Why would people falsify data? Beyond sheer ineptitude, people may be driven by unconscious or subconscious bias, or even confirmation bias. Statistician Gary Drevitch writes that there are three common ways a researcher can, even unwittingly, misrepresent findings with statistics; the second and third are illustrated in the sketch after the list.

  1. “Amplifying the importance of statistical significance without providing sufficient detail about the process by which statistical significance was determined is one way that a researcher might (even if unwittingly) misrepresent things with statistics. In short, just because a finding is statistically significant does not make it true.
  2. “Capitalizing on Type-I Error. Without getting too in the weeds, Type-I Error exists when a researcher finds that one result is statistically significant but, in fact, the reality is that the findings do not generalize to the broader population of interest. Sometimes a researcher might (again, unwittingly) present a statistically significant finding as a big deal, even if it is, in fact, simply spurious and the result of the researcher conducting too many analytical tests.
  3. “Failing to report on effect size information. Sometimes, for various reasons connected with the hypothesis-testing process, a finding might come out as statistically significant, but it might correspond to a small, and perhaps irrelevant, effect. When a researcher omits information on effect size when presenting statistics, he or she may be over-amplifying the importance of the result.”
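To make the second and third points concrete, here is a minimal, hypothetical simulation in Python (the sample sizes and number of tests are invented, and it uses a generic two-sample t-test rather than any specific study’s method): when many comparisons are run on pure noise, a handful come out “statistically significant” by chance alone, and each of those chance findings carries only a small, meaningless effect size.

```python
# Hypothetical simulation: many tests on pure noise produce a few
# "significant" results (Type-I errors) with small, spurious effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

n_tests = 100       # independent comparisons a researcher might run
sample_size = 50    # observations per group
alpha = 0.05        # conventional significance threshold

false_positives = []
for i in range(n_tests):
    # Both groups come from the SAME distribution, so any "effect" is noise.
    group_a = rng.normal(loc=0.0, scale=1.0, size=sample_size)
    group_b = rng.normal(loc=0.0, scale=1.0, size=sample_size)
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        # Cohen's d: standardized effect size of the spurious "finding"
        pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
        cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd
        false_positives.append((i, p_value, cohens_d))

print(f"{len(false_positives)} of {n_tests} tests were 'significant' at p < {alpha},")
print("even though no real effect exists; reporting only these would mislead.")
for i, p, d in false_positives:
    print(f"  test {i:3d}: p = {p:.3f}, Cohen's d = {d:+.2f}")
```

Rerun it with a different random seed and the specific “significant” tests change, which is the tell: chance findings do not replicate.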

When reviewing, releasing, or relying on statistics-based reports, leaders need to use critical thinking and reasoning skills to evaluate the results. So, while there are many nuances and equations in advanced statistics, it can largely be reduced to one simple question: does the statistical finding matter?
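As a hedged sketch of that “does it matter?” question (again with invented numbers, not a real dataset): given a large enough sample, even a trivially small true difference clears the conventional p < 0.05 bar, so statistical significance on its own cannot tell a leader whether a finding is worth acting on.

```python
# Hypothetical sketch: a huge sample makes a negligible difference
# "statistically significant," while the effect size shows it barely matters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)

n = 200_000          # very large sample per group
tiny_effect = 0.02   # true difference of 0.02 standard deviations

control = rng.normal(loc=0.0, scale=1.0, size=n)
treated = rng.normal(loc=tiny_effect, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(treated, control)
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p-value:   {p_value:.2e}  (looks very 'significant')")
print(f"Cohen's d: {cohens_d:.3f}  (a negligible effect in practical terms)")
```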

The Truth Is Out There

We leave you with the fundamental metaphysical question: what is the truth? That’s one for the philosophers, but simply speaking, a true belief reflects the way the world objectively is, and a false belief reflects the way we want the world to be. Life isn’t so simple, though, and identifying an observable, objective truth is becoming impossible with the internet megaphone presenting personal beliefs as truth. So we are trapped in a global conversation about what’s true, judged by its alignment with our own experience. A “true” belief confirms our own understanding and worldview. Truth, therefore, is in flux and flow; it is personal and subjective.

As we navigate this brave new world of truths, half-truths, lies, misinformation, misinterpretation, and ultimate confusion, it would be ideal to have a sense of objective truth based on reliable data that everyone agreed on. But we don’t think that’s going to happen anytime soon. What we can do for our organizations is to not take everything for granted. If we could reach a consensus on the truth as part of the collective consciousness, we just might have a shot at moving forward collectively, with purpose and intention. Yes, people lie. But they don’t have to.
