Face Scanning and the Freedom To “Be Stupid In Public”: A Conversation with Kashmir Hill

The longtime privacy journalist on how investigating Clearview AI helped her appreciate facial recognition—and envision a chaotic future

Hello, friends,

As we move further into autumn, with the leaves starting to turn and sweaters and scarves coming out where I am in New York City, I want to take you back to last holiday season, when a group of lawyers received a not-so-festive surprise at Radio City Music Hall and Madison Square Garden.

In December 2022, three days before Christmas, New York Times reporter Kashmir Hill, along with her colleague Corey Kilgannon, showed how MSG Entertainment, which owns Radio City, the Garden, and other venues, had created an “attorney exclusion list” for lawyers and law firms suing the company.

With facial recognition tools, MSG could instantly detect when any of the lawyers on the list visited one of their venues. One attorney was pulled aside while trying to chaperone her 9-year-old daughter’s Girl Scout troop to the “Christmas Spectacular” at Radio City, the Times said. Others were turned away from Rangers and Knicks games and a Mariah Carey concert.

The lawyers had strong words for MSG—“It’s a dystopian, shocking act of repression,” one told the Times—but, as always, the profession did its real talking in court, with suits filed in New York State Supreme Court and in the U.S. District Court for the Southern District of New York.

Hill, who began covering digital privacy nearly a decade and a half ago, kept reporting on facial recognition as it spread across multiple industries in the U.S. and Britain. She showed how stores, including supermarkets, have used facial recognition to monitor and eject alleged shoplifters, how police forces have used it to arrest people based on false face matches, and how increasingly wary tech giants have begun pumping the brakes on their use of the tools.

Last month, Hill released a gripping and disturbing book, Your Face Belongs to Us, about Clearview AI, a startup whose aggressive use of facial recognition has made it a key purveyor of the technology to law enforcement and other government agencies around the world. For the book, Hill drew on her extensive reporting on the company, starting with her January 2020 exposé “The Secretive Company That Might End Privacy as We Know It,” which revealed the company’s existence, founders, capabilities, and police client base, generating immediate concern among civil and digital rights groups and government watchdogs.

I recently spoke with Hill, whom I’ve known since 2009, when she was writing about privacy at Forbes and I was covering Silicon Valley scandals as Gawker’s Valleywag columnist. We talked about Clearview’s messy origin story; how her own thinking on facial recognition evolved in the course of covering the company and writing the book; how Clearview has changed the world, including tech and law enforcement; possible ways to address the problems created by facial recognition; and much more. You can find our conversation below, edited for brevity and clarity.

Ryan Tate: I expected this to be a book about technology, but instead I was immediately reading about people who were not hugely technically proficient. Did it surprise you that looking into Clearview AI led you to interview the sort of people who might post on 4chan?

Kashmir Hill: Yeah, definitely. When I first heard about Clearview AI, I just assumed that there was some mastermind involved in the company that allowed them to do what Facebook and Google and even the government hadn’t been able to do: build this crazy tool that looks through the whole internet for a face. I was surprised to never quite find the technological mastermind.

Instead, it was a different story, essentially that this technology had become accessible enough for marginal characters to create a very powerful tool. The barrier to entry had lowered so much. It’s kind of like my tagline now, that what Clearview did was not a technological breakthrough, it was an ethical one. They were just willing to do what others hadn’t been willing to do.

With so much of this technology now, advances in AI that are really widely accessible, it will be what the marginal characters are willing to do that will create the new lines in the sand. It’s not just the big tech giants that wield these powers anymore.
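Hill’s point about the lowered barrier to entry is easy to see in code. As a rough, hypothetical sketch (this is not Clearview’s system, and the filenames are placeholders), the widely available open-source face_recognition Python library can compare two faces in a handful of lines:

```python
# A minimal sketch of off-the-shelf face matching using the open-source
# face_recognition library (pip install face_recognition). Illustrative
# only: this is not Clearview's pipeline, and the filenames are placeholders.
import face_recognition

# Load a reference photo and a photo to test against it.
known_image = face_recognition.load_image_file("known_person.jpg")
unknown_image = face_recognition.load_image_file("unknown_person.jpg")

# Convert the first face found in each photo into a 128-dimensional embedding.
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Compare the embeddings; smaller distances mean more similar faces.
is_match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
print(f"Match: {is_match}, distance: {distance:.3f}")
```

What Clearview added was not cleverer math but scale: scraping billions of photos from across the internet and indexing their embeddings so any face could be searched against all of them.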

Tate: You have this fascinating chapter in the book where you go into this tactical police center in Miami. I felt like you alternated between showing how invasive this technology could be and almost lamenting how bad some of the surveillance technology was. At one point in the chapter, it’s almost like you’re marveling at this high-resolution camera on top of a hotel that can zoom in and see people really closely. Then there are these other cameras they have access to that are totally grainy—a crime happens and they don’t capture anything they could run an algorithm against.

In reporting this book, did you ever feel like you were put in the shoes of the users or advocates of facial recognition, in a way that gave you insights into why they’re interested in it?

Hill: Yeah, talking to officers, especially one officer from the Department of Homeland Security who works on all these child crime cases, and hearing about those cases where they find these images of abuse, like on an account in another country, and they have no idea who this person is. Sometimes they can tell it’s in the U.S. because of the electrical outlets, but they have no idea who this child is, who this abuser is. They could be anyone in the country.

And I relate a case where they run the abuser’s face and they get a lead to this guy in Las Vegas, and they end up going to his Facebook account, seeing photos of the child. That was the first case that the Department of Homeland Security used Clearview in, and it led them to get a subscription. I see the power of a use case like that.

It was funny, when I was working on the first Clearview story, I was really pregnant, and I would get on the subway to ride from Brooklyn, where I lived at the time, to the office in Manhattan. Sometimes no one would get up for me and let me sit down on the subway. I just remember thinking, “I wish I had Clearview.” I want to know who these people are who aren’t willing to stand for a pregnant lady.

I can see the appeal of tools like this. And I think they can be useful. But I also don’t want to live in a world with no anonymity, where we’re subject to this all the time, because I do think it would be very chilling.

Tate: Do you believe that whatever legislation comes along for facial recognition should have an exception that would allow facial recognition on people not yielding their seats to pregnant people on the subway? [laughs]

Hill: That’s going to be the worst. It’s going to be like, “This guy was manspreading,” and it’s going to have his name attached to it, and there’s going to be a whole cycle of abuse on social media.

When I was working on this book, I thought a lot about this “vast web of vengeance” story I did. It’s about a serial defamer who would go after people she had grudges against—and anyone related to them and their colleagues. She was defaming hundreds of people online for a slight that happened at a firm she worked at in the ’90s. I just think about someone like that who carries a grudge, who’s kind of got a vicious streak, having a tool like Clearview AI or PimEyes, and you bump into her on the subway and she takes your photo and writes horrible things about you online for years to come—and you have no idea where you even encountered her.

I can imagine those kinds of scenarios where brief slights in the real world carry over, because all of a sudden we’re not strangers anymore. Or it could make the world more accountable: you don’t slight anyone anymore, because who knows what happens after that.

Tate: Is there a moment in the Clearview story that you’re surprised hasn’t resonated more?

Hill: The one thing that surprised me was the time Clearview AI went to the attorneys general’s meeting at Gillette Stadium during the Rolling Stones show and was showing all the attorneys general what they had done. They were like, “That’s creepy” or “That’s weird.” There was no more formal reaction to what they’d just been shown. I was surprised that none of those attorneys general launched investigations into the company after seeing it on display, especially because it made them so uncomfortable. [Hill wrote that the event, for Democratic attorneys general, was in a private box at the stadium. It took place six months before Hill’s exposé on Clearview.]

I do feel like something that’s hard with these kinds of cutting-edge technologies is that sometimes people see them, and they think the technology already existed. They don’t realize what they’re looking at, and how new it is or how groundbreaking it is.

I heard the same thing from lawyers when they were getting banned from Madison Square Garden. It was happening for months before the media reported on it. I was like, “Why didn’t you tell anybody this was happening?” They were like, “Oh, I just thought this was a thing that happens in the world.” They didn’t realize that it was such a shocking use of the technology.

I think sometimes people are looking at the future and they don’t realize it.

Tate: Would you pin that inability to see the future when it’s in front of you on government employees or attorneys, or do you think that’s happening to all of us?

Hill: I think it’s happening to all of us, this belief that all of technology is so powerful and so good. Just all these kinds of assumptions that smartphones are listening to us—“they must be, because the ads I’m getting are so targeted.” Just the belief that what you’ve seen in science fiction movies is real. I think so many of these companies are basically trying to make dystopian depictions of the future real, and maybe that’s part of it.

But I find there’s real cognitive dissonance between the belief in how powerful the technology is and the understanding of how poorly it can work, and also that it can work really well. I really like the Miami chapter for that. You think that law enforcement is so powerful—that they have these eyes everywhere, they can hear everything that happens. But when you’re in the control room, you see, actually, how blurry their vision is and how limited. I think it’s on all of us to try to keep both of those things in our minds.

Tate: The racial inequity problems with this technology are prominent early in the book, but later you write about how “the window of time for that criticism to be effective is closing as top developers have focused on addressing the problems of biased algorithms.” Can you say more about that?

Hill: I think there’s a racial inequity issue in terms of who it will be used on, particularly in policing. Even as the problems have been addressed in terms of the training data and making sure it’s trained on more diverse faces and getting rid of what they call “differential performance” or bias, we’re still seeing—in every single wrongful arrest we know of—that the person is Black.

So, I think there’s clearly still racial problems there. Part of it is just Black people are more subject to policing tools than anyone else. So, they’re suffering the harms of it when it goes wrong.

Tate: Is there momentum behind systemic remedies around facial recognition, like legislation? I was struck by what you wrote about how we could have a world where there are speed cameras everywhere that automatically send speeding tickets to people, and we seem to have chosen not to do that. Is there a world where facial recognition goes into the trash can in a similar way? Or do you think it’s just too useful?

Hill: It’s funny, I was talking to a facial recognition vendor whose company is based in the U.K., and he’s like, “Why is the U.S. so opposed to real-time facial recognition? It really makes you safer.” The U.K. really likes that, and they have been resisting how we use it here, where you use it retroactively to identify criminal suspects. So, there are some cultural differences in how it’s playing out.

There are a lot of technologies that we have constrained, from speed cameras to recording devices. All of our conversations could be recorded by surveillance cameras or on the wires. It would be very easy to just keep records of everything that happens. We have, as a society, resisted that because it’s so chilling to think that every moment can be recorded and that we could have this time machine where you can trace and track everything we’ve ever done.

I don’t think we want that. I also don’t think we want perfect enforcement of the law, because people like to jaywalk and they like to speed. And they like to get drunk and be stupid in public sometimes. They want to fondle their first date at a Beetlejuice show. [laughs] I think people want a little bit of anonymity and the freedom to make bad decisions, you know, within reason.

I do think that the appeal of facial recognition technology to solve horrible crimes is very real and is a reason why activists who want it completely banned are probably not going to see that happen.

Tate: Are there other interesting ways we might constrain this technology that have emerged? Are there ideas you think are particularly promising in that area that might get some momentum?

Hill: I think constraining the commercial use of it has been powerful, like we’ve seen in Illinois, where you’re not supposed to use people’s biometric information, including their faceprints, without consent. Because of that law, facial recognition is just not widely deployed there.

My favorite example is MSG Entertainment, which originally installed it for security threats and then, in the past year, used it to keep lawyers out of its New York City venues, like Madison Square Garden, the Beacon Theatre, and Radio City Music Hall. But it also has a theater in Chicago, and it doesn’t use facial recognition technology there, because the Illinois law prevents it. That’s a law that works. It’s a way to make sure that the technology is only used in a way that benefits you—and not in a way that penalizes you.

In terms of police use, Massachusetts passed a law that creates rules for how police are allowed to use facial recognition technology, such as requiring them to get a warrant before running a search. Detroit is a really interesting place: they’ve had three known cases of bad face matches that have led to arrests, so I think the city is really thinking about this. They want to keep using the tool, and they’re trying to use it responsibly, only for serious, violent crimes.

Tate: One of Clearview’s founders, toward the end of the book, mentions background recognition as a potential new feature, to the point where, if we see a brick in a wall, we can determine the age of the brick or know that it’s used in a particular neighborhood of London. What other new technologies or approaches might lie in Clearview’s future?

Hill: I don’t know if it would be Clearview, but I’ve been thinking a lot about voice search. You could imagine a Clearview AI that started gathering all the audio that’s been recorded and link[s] it to individuals, so that you can upload a few seconds of somebody’s voice and find anything they’ve ever recorded or said.
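The building blocks for the voice scenario Hill imagines arguably exist already. As a hypothetical sketch (the library choice and filenames are my assumptions, not anything Clearview has built), the open-source Resemblyzer package can turn short clips into speaker embeddings and compare them:

```python
# A hypothetical sketch of voice matching with existing open-source tools,
# using Resemblyzer (pip install resemblyzer). The filenames are
# placeholders; this is not a Clearview product or plan.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

# Turn each short clip into a fixed-length speaker embedding.
query = encoder.embed_utterance(preprocess_wav("few_seconds_of_voice.wav"))
candidate = encoder.embed_utterance(preprocess_wav("archived_recording.wav"))

# The embeddings are L2-normalized, so their dot product is the cosine
# similarity; values near 1.0 suggest the same speaker.
similarity = float(np.dot(query, candidate))
print(f"Speaker similarity: {similarity:.3f}")
```

As with faces, the hard part of the scenario is collecting and indexing the audio at scale, not the matching itself.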

The one thing that kept coming up with activists is: if we say it’s okay for Clearview to gather everyone’s photos and create this database, what stops a company from starting to build a genetic database, whether by buying clippings from hairstylists or going out on garbage collection day and collecting samples? Or what Charles Johnson says he’s doing—going to funeral homes and buying genetic material from corpses to create a genetic database that you then sell access to, whether to the police or to whoever might possibly want that.

There are so many ways you could reorganize the information on the internet and in the real world around these markers of who we are—many of which are quite dystopian.

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
