Breaking Bard + Who Owns Your Face? + Gamer News!
This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
Casey, I acquired an exciting new piece of technology this week.
Oh, what’s that?
It’s actually one we’ve talked about on the show before, but not for a while. So do you remember many episodes ago when we had my colleague Tiffany Hsu on to talk to us about all of the bad ads that we were seeing on social media?
I do remember that, yeah.
Do you remember something called a rat bucket?
Yes. I mean, at the time, New York City was dealing with a huge rat crisis, and it was sort of a very timely product.
Yes, well, I also have an emerging rat crisis, which is that I have a family of rats that lives in my backyard.
Oh, I’m sorry to hear that.
And they live beneath my deck, and normally, like I like to peacefully coexist with animals.
Like a sort of like cute little Pixar family of rats.
Yeah. So my first thought on how to deal with them was to open a French restaurant and enlist them all as my helpers. But my second thought was, I should probably deal with these guys because they might actually cause some damage to the property.
Unfortunately, they will carry the plague.
It’s true.
We learned that the hard way.
Too soon.
It’s too soon. Sorry to any victims of the Black Death out there.
So I tried a bunch of different kinds of rat traps and various deterrence methods, but ultimately, I came up short. And then I thought, you know what? I remember the rat bucket. There was a piece of advertising that we talked about on the show. So I ordered from Amazon.com, for the low, low price of $25, a rat bucket.
Now, what is a rat bucket?
So it is technically a rat bucket kit.
OK.
Because it’s a piece of plastic that snaps onto the top of a five-gallon bucket. And basically, it’s like a trap, because there’s a ramp.
Wait, are you saying that the rat bucket doesn’t actually come with the bucket?
No, it doesn’t. You have to supply your own bucket, but it has a little plastic ramp. And you put some bait, like some peanut butter at the top of the ramp. And so the rats climb up the ramp, and then they get onto the platform where the peanut butter is, and the bottom falls out, just like a trap door in a cartoon.
OK.
And they fall into the bucket.
And from there, you can take them and sell the rats at market.
Exactly. You can do whatever you want with them. Honestly, I really haven’t thought through to that step yet, what I’m going to do with a bucket full of rats.
Well, have you installed the rat bucket?
I did. I put it out yesterday, and I checked it this morning, and it hasn’t caught any rats.
No hits. Here’s the problem you’re dealing with. Rats have gotten very smart.
They have.
They’ve leveled up.
So I actually read something online, that you have to like basically do this all with gloves, because if they catch the scent of you, they’ll know it’s a trap. And I would just like to know how the rats are getting smart. It worries me. Like, we talk a lot about AI getting smart and that being dangerous, but I think we have a rat intelligence crisis brewing, and I want to know what our plan is as a nation for this.
I think in 50 years, there’s only going to be two things left on Earth. It’s going to be ChatGPT and rats, and we’ll let them duke it out for the future of the planet.
And maybe rat GPT.
Oh boy.
Oh boy. [THEME MUSIC] I’m Kevin Roose. I’m a tech columnist at “The New York Times.”
I’m Casey Newton from “Platformer.”
And you’re listening to “Hard Fork.”
This week, Google’s AI chatbot has learned to read your emails? Then, “The New York Times’” Kashmir Hill stops by to talk about the rise of facial recognition technology, and answers one of your questions about it. And finally, it’s time for our new segment, Gamer News.
All right, Casey, we have some big AI news this week when it comes to Bard.
Yes.
Bard, of course, is the ChatGPT competitor from Google, and as of this week, it has now been plugged into many of the other services that Google offers, which is a feature that we’ve been asking for and talking about for quite a long time.
Yeah. In fact, when I sat down with a product director, who will talk about this, he said, you called this one. So that felt good.
Yeah. So this new tool is called Bard Extensions, and it means that Bard can now plug into your Gmail, your Google Drive, your Docs. It can also search Maps and YouTube, and Google Flights and hotel information. Basically, Bard can now reach into your personal data, and not just sort of scraped data from the internet.
Yeah. Up until now, ChatGPT tools like this have been useful for a lot of things. But one of the things we’ve talked about on this show is, there is going to be a moment where you can plug in this technology to places that you’re already spending a lot of time. And it’s hard to imagine places where I spend more time than Gmail and YouTube, so this does feel like actually a milestone in the development of AI.
Totally, and it also solves one of the biggest problems with AI, which is that it kind of exists in a vacuum, right. Ideally —
Wait, AI exists in a vacuum now? Oh my god. That’s how the robot revolution gets started.
So in addition to this Bard Extensions feature, Google also put some other new features into Bard, including a feature that lets you check Bard’s answers. Basically, if it says something and you’re not sure whether it’s true or not, you can press a button, and it will highlight in green all of the things that it can sort of verify through a Google search. And it will highlight in orange all of the stuff that maybe it’s not so sure about.
I think it’s more of a brown.
OK. We can agree to disagree.
OK.
So Casey, you and I have both spent some time playing around with these new features in Bard. What did you take a look at, and what do you think of it?
Sure. So I spent more time writing about this double-check feature, right. So when we talk about AI, a consistent theme is that these services make things up. They hallucinate. They are confidently wrong, never more so than in the famous case of the ChatGPT lawyer who submitted a bunch of cases as part of one of his briefs, only to learn to his horror that none of those cases existed because ChatGPT had just made them all up.
So this is a reason why I don’t use these tools very much as a research assistant, because trying to fact-check them, it feels like you’re spending more time fact-checking than you would if you had just done all of the research yourself.
Totally.
So then along comes this new Google It button inside of Bard, and I would ask it the same sort of fact-checking questions that I would try to be exploring myself if I were writing a column. And now, all of a sudden, Google will just tell you when it thinks it might have made a mistake. And that turns out to be a pretty useful thing.
Now, I have a question about this, which is that it could just tell you automatically. But instead, it makes you press a button to do the double-checking and highlight the stuff it’s more confident in or less confident in. Why doesn’t it just do that automatically?
Yeah. It sort of has people asking, why don’t you make the whole plane out of the black box? You know, it feels like the same kind of question. And so when I asked the Google product director, what he told me was, there are just a lot of queries that most people are not going to double-check.
If you ask it to write a poem, if you ask it to draft an email, you don’t need to double-check it in the way that you would if you were to say like, hey, write me a book report about “Old Yeller.” So that’s why it is there, but at the same time, I agree with you. It would be nice if Google said, oh, it looks like this question is looking for some specific knowledge. Maybe we should just apply this filter automatically.
Totally. And I imagine it also has something to do with computing costs, and Google doesn’t want to have to run two queries for every time a user asks a question of Bard.
That’s right.
So what did you think of this feature? How did it work in your testing?
Well, I thought it did a pretty good job. And I’ve only spent about a day with it so far, but when I was asking it about things I was knowledgeable about, like the band Radiohead, I could spot some errors. And when I asked Google to double-check, it then spotted the error. So that made me feel good.
You know, at the same time, I do think that in a way, this technology still has the problem that it has had from the beginning, which is that the minute you realize you’re going to have to double-check, and you’re going to have to look at those citations, and you’re going to have to scroll down the page to find where on the page it is cited, and sort of reconcile that with your own knowledge, you’re once again asking, why am I using this thing as a research assistant, right? Like, there are still, I think, some innovations to come that are going to make this thing feel better to use.
Yeah. I mean, one of your initial beefs with Google, just regular old Google, is that you ask for something and it hands you a research project.
Absolutely.
And this seems like Bard is developing a similar problem, which is, you can ask it any question you want and it’ll answer, but then in order to figure out whether that answer is actually true or not, you have to press the little double-check button.
Yeah. And I think we’ll probably get into this, but as I was exploring these new features, what I realized was, the thing that these new features are the very best at is that when you want to buy something, wow, does Google figure that out well, right. Oh, you want to book a flight and a hotel? Click this button, baby.
Because Google makes money from it.
Give us that sweet, sweet percentage.
Totally.
Yeah.
So I spent more time playing around with these extensions, these tools that allow you to connect Bard to your own personal data, because this has been, I think, a Holy Grail feature request for a lot of these chatbots: when can I actually hook this up to the stuff that I use every day? When can it actually use my data instead of just scraping from the internet at large?
Yeah.
So I turned this on. I spent some time playing around with it. The first thing I tried, I gave it sort of a hard task, maybe a little bit unfair. But I said, analyze all of my Gmail and tell me with reasonable certainty what my biggest psychological issues are.
That is so unhinged. What did it say?
So it gave me an answer, and it was sort of interesting. It said my biggest psychological issue is that I worry about the future.
Famously.
And that that could indicate an anxiety disorder. And then it cited an email that I wrote in which I said that I was stressed about work and that I am, quote, “afraid of failing.” Now, maybe that’s plausible. If you know me, I do tend to worry. But I didn’t remember writing that, so I asked Bard, like show me the email where I said that I was afraid of failing.
And it showed me an email. It was a book review of a book about Elon Musk, and it had a quote in it that said, “I’m afraid that he’s going to fail at something big and that it’s going to set back humanity.” But then I was like, wait a minute, I never sent that email.
Wait, so Bard thought that your anxiety was actually just Elon Musk’s anxiety?
So the email that it linked to was an email newsletter that I had received. But when I checked that email newsletter, it didn’t have the quote either. So Bard made up a quote —
Bard.
— from this email that I had received, and wrongly attributed it to me. So a mistake on top of a mistake.
Not good, Bard.
So I told Bard — I wanted to give it another chance. I thought, this is kind of a hard task to start off with. This is day one of this feature. I said, this time, redo the search but only using emails that I sent. And it came back with an email I’d written to a friend in which, it said, I had said that I was afraid that I was not good at financial stuff, and that I was not sure if I was cut out to be a successful investor.
And I thought, I don’t think I sent that one either. So I looked up the original email, and sure enough, Bard had completely made up another quote from an email that I had supposedly sent. So you know, I asked Google about this, and they said this is an early product. It’s still the first day. Right now, basically, this Extensions feature is limited to doing the kinds of searching that you can do yourself in the Google Drive search bar or the Gmail search bar. So it can retrieve stuff and it can summarize it, but it really can’t do any kind of analysis of the contents of emails.
Mm. Well, it might be nice if Bard told you that when you tried to do one of those searches.
Yeah, that’s what I said. Like, if it can’t psychoanalyze me based on years of my emails, like, that’s fine, but just tell me that. Don’t make stuff up.
OK, would you feel comfortable with me running this exact same query on my email? Because I would like Bard to diagnose me with a fake mental disorder if possible.
Please.
OK, so give me that prompt one more time. Analyze my emails?
Analyze all of my Gmail.
All of my Gmail.
And tell me what my biggest psychological issues are.
And tell me what my biggest psychological issues are. I’m so excited for this. Now, you know, and I should say, I’ve had a Gmail account like since it was in beta, so this is like 20 years of email. So you would think actually, Gmail should be able to answer this question with some sort of fidelity. Let’s say — OK, so it’s telling me that it’s difficult to say definitively what my biggest psychological issues are, presumably because they’re so vast. But it says a few things.
Where do I start?
Yes.
It just responds with the entire DSM.
It says I seem to be interested in psychology and mental health, which I don’t think is real. And it said, you have received emails about anxiety and depression, and you have received emails about work-life balance and burnout. So yeah, I would say that does not feel like a great analysis.
Yeah, so that was the first task I gave Bard. It, I would say, failed that one. I then was curious about all these travel integrations and whether it could pick information out of my Gmail and use it to help me with some travel planning. So I asked it to search my Gmail for information about a trip I’m going to take to Europe in a few weeks and look for train tickets that would get me from the airport to a meeting in a nearby city.
And this is starting to feel like a classic word problem from like eighth grade. It’s like, if I leave Bordeaux at 12:00 PM going 30 kilometers an hour.
Exactly. So it didn’t do very well on this one either. It got the departing airport wrong. It did find my itinerary in my email, but it sort of made up some details about it. And it couldn’t check the train tables because it doesn’t have train information. It only has flights and hotels.
Not great, Bard.
So third task, I thought, all right, I’m going to go back to the basics. I’m going to do some email stuff on it, and it actually was pretty good when you ask it very specific questions about specific emails from specific people. So I had it summarize recent emails I got from my mom.
Now, is your mom really writing you emails so long that you’re like, I’m going to need to see the executive summary of this? I don’t think my mom has ever emailed me more than four sentences.
Look, it was a test.
OK.
And I also asked it for sort of summaries of emails I’d gotten on subjects, like summarize all the recent emails I’ve gotten about AI. But then I asked it to do other sort of more complicated tasks, like pick five emails from the primary tab of my Gmail, draft responses to those emails in my voice and show me the drafts, which I was very excited about. I was like, if this thing can write emails —
That’s a good prompt. Yeah.
It made a mistake. It went to my promotions tab instead, and it wrote a very formal, very polite email to Nespresso, thanking them for their offer of a 25 percent discount on a new machine.
Oh my god.
So I would say this feature, Bard Extensions, does not feel ready for prime time to me.
Yeah. You know, unfortunately, I had a similar experience. I asked Bard to find my oldest email with a friend who I have been exchanging messages with for probably about 20 years, and Bard showed me a message that he had sent me in 2021, which is not really all that long ago.
I also asked it which messages in my inbox might need a prompt response, and Bard suggested a piece of spam that had the subject line, “hassle-free printing is possible with HP instant ink.” And I thought, you know, I don’t know that that needs a prompt response.
It’s sort of amazing that Google is just putting this stuff out, because it’s like, they clearly have the data that you would need to build the best AI assistant in the world. So why are they putting this stuff out now?
Here is the reason. They need the human feedback, right. They need us to be in there saying, this is a terrible result, bad, bad, bad, bad, bad, bad, right. So by putting this stuff out there, they’re getting feedback from millions more people, which they can then use to design Bard to do what people actually want to use it for. And collectively with all those people, they are going to make it better, because let me say, we are having some fun pointing out the flaws of this thing. I absolutely think all of this stuff is going to work.
100 percent. These chatbots get better over time. We know that, but it is just a remarkable sort of display of Google’s risk tolerance here, where this feature that they know is imperfect, they were not surprised when I pointed out these flaws. But they are putting it into Bard anyway because they are so desperate for that feedback, and I would say also, probably to try to leapfrog ahead of where ChatGPT is.
Yeah. And again, if you want to plan a seven-day itinerary in Tokyo and Kyoto, and ask Bard to show you flights and hotel information, it’s going to do a great job, right. So when it doesn’t have to analyze tens of thousands of emails that you’ve sent over the years, it is quite good.
Yeah. Well, Bard may be good for flight planning, not good for psychoanalysis. That’s what we learned this week. But I do think it’s still an area that I am desperate for someone to crack, because the chatbots are so good for so many things. But they really feel impersonal when you use them, because they are not learning from your data and your writing voice and your communication style. And so I think if anyone’s going to crack it, it will be Google, but I just think this is not it yet.
Yeah, I’m going to make a prediction. I think that within a year, someone is going to use this technology to successfully find a document in Google Drive for the first time.
I think we’re on the curve that gets us there, and I’m going to be really excited to see it.
Is that AGI?
Yeah, when we get there, that’s called sentience, my friend, sentience.
After the break, we talk to New York Times reporter Kashmir Hill about her new book on facial recognition and how it could end privacy as we know it.
[THEME MUSIC]
Casey, I’m very excited about our guest this week.
Me too.
It’s my colleague Kashmir Hill, “New York Times” reporter and friend of the pod. She’s one of my favorite reporters, and she has a new book detailing her investigation into Clearview AI, which is a facial recognition app that, Casey, you’ve probably heard of.
Not only have I heard of it, but I’m trying to figure out how to stop it.
So basically, it’s sort of like Shazam for faces. Like, you put in someone’s photo and it searches a massive database of billions of photos to try to find other photos of that person, which maybe helps you figure out who they are, what they do, and lots of other details about them.
Yeah. It takes this idea that you should have some level of anonymity in a public space and says, no, you should not.
Totally. It’s an insane story. It’s an insane technology, and what’s so interesting is that it actually does seem to be a case where Silicon Valley developed something and said, actually, we’re not going to release this because it’s too dangerous. And then this startup —
In New York.
In New York, came out of nowhere.
This is a New York story.
Did it anyway.
That’s right.
So I wanted to have Kash on because we’re in this kind of AI moment right now, where we’re making decisions as a society about what guardrails should be placed around this technology, what we can use it for, what we can’t use it for. And we’re sort of debating all the ways that it’s going to affect people’s lives. Facial recognition sort of arrived before a lot of the generative AI stuff that we talk about, but in some ways, it’s a lot scarier.
Yeah, because it’s being used now and people are being harmed now, and there is just a tiny fraction of the attention being paid to this stuff than to the long-term risk of killer AI.
So we invited Kash on the show today to talk to us about her book called, “Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It.” Kashmir Hill, welcome to “Hard Fork.”
Hi. It’s good to be back.
You know, last time we were with you, Kash, we were in the Metaverse.
We were. We didn’t have legs. We were stumbling around.
Yeah. I’m so glad to just be back in our sort of corporeal reality that we exist in here.
So Kash, I think most of our listeners will have heard at least a little bit about Clearview AI and facial recognition thanks to your dogged reporting on it over the years. But just for people who maybe haven’t heard about Clearview, give us the 30-second sort of summary of what this company does and how it came onto your radar.
So Clearview AI scraped billions of photos from the internet and social media sites. They say they now have a database of 30 billion faces. These were collected without people’s consent, and they built a facial recognition app that they claim works with something like 99 percent accuracy. You take a photo of someone, and it’ll pull up other places on the internet where that face has appeared.
Got it. And my understanding from your book is that this was not a company that went out looking for attention and media coverage. So how did they come onto your radar, and what did you learn about this secretive company?
Yeah, so I got a tip a few years ago. Somebody emailed me and he said, I’ve come across something that looks like it’s crossed the Rubicon on facial recognition technology. I think you’ll be interested. He attached a 26-page PDF that had a privileged and confidential legal memo from Paul Clement, a former Solicitor General, now in private practice making lots of money.
And he had been hired by Clearview AI, and he was describing what they did and saying, you know, I tested it on attorneys at my firm. It works incredibly well. And he had written this memo for the company to explain to police why using Clearview AI was not illegal, and that they wouldn’t break any state or federal laws and that this was constitutional to use.
Now, Kash, when you first read that document, did you think, oh wow, like I can’t believe this technology exists, or had you been bracing for something like this to arrive for some time?
Both. I was a little shocked to hear that some company I’d never heard of before was selling this rather than Facebook or Google, and so that was kind of astounding. And I wondered, is this real or is this snake oil. But part of me flashed back to this moment in 2011 when I had gone to this conference called Face Facts organized by the Federal Trade Commission, where they were kind of grappling with facial recognition technology for the first time.
It wasn’t really that good back then, not very accurate. It didn’t work that well in the real world, but it seemed like it was starting to get better. And they had Google in the room and Facebook in the room, and academics and privacy activists. And they were talking about what do we do about face recognition, and the one thing that all those people agreed on — and they don’t often agree on things — was that no one should build an app that allows you to take a photo of a stranger and find out who they are.
And why not? Like what is the scenario there that people are so worried about?
I mean, there are so many examples that come up. I mean, just imagine you’re a protester at a Planned Parenthood, and a woman walks out of the abortion clinic. You take her photo. You know who she is. You’re at a bar and you’re talking to some guy, and you decide he’s a creep. You walk away. He meanwhile takes a photo of you, can get your name, can maybe find out where you live. I mean, there’s just so many ways in which this could be used very creepily, some of which I describe in the book.
Yeah. One thing I love about your book is that it just gives so much detail on the reporting process, and as a reporting nerd, like that really appealed to me. And it was truly an incredible story of how once you got this tip, this PDF, you sort of had to go on this investigation to figure out who this company actually was, because there wasn’t information available. They seem to be trying to hide their tracks. So tell us the story of how you actually figured out who was behind Clearview AI.
You know, I googled to see what was on the internet, as any great investigative journalist does. And there wasn’t a lot there. They had a website that basically just said, artificial intelligence for a better world. It didn’t really say what they did. They had an office address on the website, and it was just a few blocks away from The New York Times Building in Manhattan.
So at one point, I walked over to try to knock on their door, and just, the building doesn’t exist. It was like a fake address. When I did google, I found on the website PitchBook that the company had two investors. One, I had never heard of before, and the other was Peter Thiel. So you know, I reached out to Peter Thiel’s spokesperson. I said, oh, hey, is he investing in Clearview AI? The spokesperson said, it doesn’t sound familiar to me. Let me look into it. Then I didn’t hear from him again.
I was reaching out to all these people that seemed to have ties to the company, and just no one was talking to me. So I ended up finding police officers who had used the app.
Because this was being marketed as a tool for law enforcement to identify criminals based on surveillance footage.
Yes, I knew that they were supposedly selling it to police departments, and I saw on some kind of city budgets that they were paying money to Clearview. But I ended up talking to this financial crimes detective in Gainesville, Florida. His name was Nick Ferraro, and he was really excited to talk about Clearview.
He was like, I love this app. You know, I had hit dead ends on all of these investigations into fraudsters. I had photos of them standing at the ATM, standing at a bank counter, and didn’t find anything in our state facial recognition system. But then I ran their photos through Clearview AI, and I just got hit after hit after hit. So I was like, oh, this sounds great. Like, can I see how well it works?
And he goes, sure, let me run your photo, and I’ll send you your results. So I was excited, and I sent him some photos, and then he ghosted me. Had another officer kind of similar. Told me to send my photo. He ran it, and he said, it’s weird. You don’t have any results.
Which is not plausible, because you’re like a public person who’s been in photographs that have gone on the internet over the years.
Yeah, I am not an online ghost. I’m all over. If you Google me, there’s a lot of photos that come up. So he said, there should be results for you. This is weird. He said their servers must be down. Stops talking to me, and then finally, I ended up, with the help of a colleague at “The Times,” recruiting a police detective. I told him about what had happened before, so he runs my photo. He says, there’s no results. That’s weird.
Then a couple of minutes later, he gets a call from somebody at Clearview AI asking, did he just run my photo. Why? And they told him that they were deactivating his account, and he was really creeped out. He said, I can’t believe this company is looking at the faces that law enforcement is searching for.
And I found it really chilling, because they were tracking me while they weren’t talking to me, and they controlled the ability to be found. They had blocked my face, so I should have committed my crimes right then, because I wouldn’t have had results.
Perfect alibi. So obviously, there are people who think this technology is worth paying for. Law enforcement agencies are using this to solve crimes. Who are the other people who are using this technology and how are they using it?
So facial recognition technology is popular with companies, retailers. There’s been this big spike in shoplifting, and a lot of companies want to be able to identify people who’ve stolen from them before and kick them out. One of the most famous uses is, Madison Square Garden installed facial recognition technology a few years ago to keep out security threats, but in the last year, decided, wow, this would be a great way to keep out our enemies.
And they went, and they decided to start banning lawyers who worked at law firms that have sued them and got their photos from the law firm websites and then put them on this watch list. And every time a lawyer that works at one of those firms tries to get into a Mariah Carey concert or a Knicks game, they get stopped at the door and turned away.
So some people are probably hearing this for the first time and thinking, this is bonkers. How is this legal? Do folks have any sort of legal protection in the United States against this kind of technology?
It depends on where you live, how well protected your face is. The state that has the strongest law is Illinois. They’ve got something called the Biometric Information Privacy Act, passed in 2008, that says you need to get people’s consent to use their biometric information: their faceprint, their fingerprints, their voiceprint.
And you’ll have to pay up to $5,000 if you do not. So Madison Square Garden owns a venue in Chicago, a theater, and it can’t use facial recognition technology to enforce the ban there because of that law. But at the federal level, there’s really nothing about this.
Yeah. What’s stopping Congress from outlawing this technology?
I don’t know. They’re busy. I don’t know.
I mean, they would have to pass a bill about technology, which they are not capable of doing.
Right.
I guess I’m just wondering, this seems like an area where Republicans and Democrats could basically agree that it’s bad if there’s technology out there that just allows you to be de-anonymized at any time based on just a single photo of your face. It seems like you could get pretty broad agreement on that, but maybe I’m wrong. Maybe the law enforcement community is attached enough to this tool that they would fight to keep it.
I mean, it has happened. I can’t tell you how many old congressional videos I watched of Republicans and Democrats getting together and saying, this is the one thing we agree on, this is a threat to civil liberties. Let’s do something about this. And these hearings would happen every few years, and it just feels like deja vu to me.
Yeah. Kash, I’m curious how your own views about this technology and whether it should exist evolved over the course of reporting on Clearview AI. You reported on people who think this technology is great, like the detectives who are using it to solve the cases in their unit. You also talked to people whose lives have been damaged by this technology, like a man who was wrongfully arrested because a facial recognition app blamed him for a crime that someone else had committed. Do you think this technology should exist, or do you think it goes too far?
I mean, I think there are clearly positive use cases for facial recognition technology and using it to solve crimes. When it’s used appropriately, you need to have more evidence than just the fact that someone looks like someone else. So yeah, I get that, and I have to say as an investigative journalist, I do see the appeal of this.
Imagine there’s some event that happens, and there’s a photo of everyone who’s there, and you as a reporter could just scan those faces, you know, upload them to a Clearview AI type tool, and then you find out who they are. And you can go to them and say, tell me more about what just happened.
I mean, sure, but like as a reporter, it would be great for me if I could read Mark Zuckerberg’s emails. But like, I don’t have that ability, and so I have to figure out other ways to do my job. And that’s a tradeoff I’m willing to make to live in a society that is bearable, right. And a world where anyone can just scan your face and potentially learn your entire life story, if you are attending a protest, leaving an abortion clinic, or doing something else that someone else in society doesn’t like, I truly cannot imagine a more dystopian outcome for the path that we are on.
I mean, it’s part of why I did this book right now, is I think that we still have the power to decide right now. And there’s a few states where Clearview AI has been sued, but for the most part, we’re just not really addressing this. And so I think it could get away from us if something doesn’t happen, if we don’t pass legislation that gives people more protection and power over whether they’re in these databases or not.
Yeah. And in addition to this debate about what kind of laws might be needed at the state and federal level, there’s also this debate happening in the tech world right now over who should control AI technology. I’m thinking about the open-source versus closed-source debate when it comes to AI language models, and kind of whether it’s better for a few big companies to control some of this technology as opposed to throwing it open to the masses.
You point out in your book that both Facebook and Google developed facial recognition before Clearview AI did, but decided not to release it because they felt it crossed an ethical line. You also talk about how a lot of what Clearview AI was able to build was possible because of like open-source software packages that they built upon. So is it fair to say that one of the lessons of your book is actually that in some ways big tech is good, and that it might be better for the world if a few big companies do control this stuff?
Well, yes. I mean, yeah, like these technology companies were responsible actors in this case, and I think this is the assumption that policymakers have when they’re not passing laws. They say, we can trust the technology companies. They’re going to make the right decisions. And with facial recognition technology, you know, arguably, they did.
But when you have this technology becoming open source, it means that more radical actors can come along. I mean, it’s the same thing with generative AI. It’s been reported that Google had generative AI, ChatGPT-like tools that it had developed internally and decided not to release. And then OpenAI came along and threw the doors open.
So that is what is going to happen. You’ll have these startups, and they are just kind of desperate to make their mark on the world, and they’re going to do things that cross lines and maybe aren’t what we want to be happening in society.
Yeah. When you talk about startups throwing the doors open on facial recognition, as you point out in your book, it’s not just Clearview AI building this stuff, right. There’s also a company called Pimeyes, which as you have reported on, is sort of like Clearview AI, only it’s not limited to law enforcement. Anyone can access it.
And there’s this one chapter in your book where you write about a person who uses Pimeyes that really haunted me. You call this man David. I don’t think that’s his real name. But he told you basically that he uses facial recognition tools as part of a sexual fetish basically to look up the names and identities of porn stars or women who appear in adult videos, and basically find out as much information as he can about them. He said that he considered himself a digital peeping Tom. Talk about that experience, because that is really one of the things that just made me shiver.
Yeah. David has a privacy kink, and he basically told me he was confessing to me because he knows what he’s doing is wrong, and he wanted this story out there to convince lawmakers to act, that he really doesn’t think a tool like Pimeyes, which is what he was using, should be available to him to do what he was doing.
And so yes, so he would watch porn videos, and a lot of women who are doing kind of online sex work tend to use pseudonyms, try to hide their identities because of safety issues, because of stigma issues. And he would go and find photos from their real vanilla lives, and he’s done this many, many, many times. He said he kind of got sick of it, and so he decided to turn to his Facebook friends. He’d accumulated hundreds of women as friends over the years, and he would just, kind of for fun, run their photos through Pimeyes and try to find illicit photos of them.
And he succeeded in some cases. A woman who had once tried to rent, I think, a room in his apartment, he found revenge porn of her that was not associated with her name. It would not have been findable without a face search engine. There were more innocuous images, too, like a woman on a naked bike ride. Just all of these photos that were kind of safely obscure until a search engine comes along that makes the internet searchable by face.
Well, Kash, sometimes we ask our listeners to send us their dilemmas about tech, and we got one recently and thought Kash is the perfect person to help us through this. So this listener, whom we are going to keep anonymous at their request, told us a story about their use of Pimeyes, and this listener wrote in to say that they’ve been using Pimeyes to search the photos of the people they come across on the dating app Bumble.
And according to them, they found that a lot of the photos on Bumble linked to stolen photos from Instagram accounts, OnlyFans accounts, and profiles that solicit for sex services. And this listener wrote, quote, “When I found stolen pic profiles using facial recognition or profiles using the app to solicit for sex services, I flagged the profile. I’ve apparently done it enough that Bumble sent me a note banning me from their platform.”
This listener then took Bumble to small claims court over this and claimed they won their case. But now they wonder, was I in the wrong for trying to protect myself using whatever tech tools I can? So I should just say here, we reached out to Bumble about this, and they declined to comment. But as best as we can tell after going back and forth with our listener, this did actually happen. So Kash, how do you react when you hear that story?
I would like that listener’s contact information so I can report that story out. I mean, I do think we’re at a moment where it’s generally considered creepy. You know, as a matter of etiquette, you shouldn’t be searching someone’s face without consent. In that particular case, I know you’re opposed to it, as you just said. I think if you’re just meeting someone for the first time, maybe you don’t immediately Google their face, but if you’re deeper into the relationship, I mean, just do a reverse image search of the profile photo. I don’t even think you need to search the face, necessarily.
Although that’s actually just an interesting story about how a privacy violation became normalized over time, just through the long-term existence of Google reverse image search. And look, I’m sure some listeners will hear this, and I think particularly women will think, look, if you go on a first date, that can be a very dangerous situation. It is not unreasonable to want to have some sense of security before you meet with a new person. And hopefully, you’re meeting that person in public, and hopefully, somebody else in your life knows where you are. But in addition to that, you might want to get some intel. And look, I think it’s probably quite common for people to Google the people they’re about to go on first dates with.
But man, I don’t know. I think part of the reason that we’ve gotten comfortable with the tools that exist today is because there is still some ambient privacy remaining, where maybe yeah, your name can be searched and some details will be revealed about you, but it will not become clear that you have an OnlyFans account, for example, right.
I just worry that as we normalize the use of these technologies, pretty soon, we’re just going to wake up in a world where we are not able to live as freely as we used to, and it is going to be very hard to rewind, in part because of questions like this and people saying, well, I needed to do this to make me feel safe.
I do think the repercussions are going to be worse for some people than others. So yes, people who have done online sex work, who did it not thinking that it would ever be tied to them, it’s going to be really hard for them, because it’s just so stigmatized. And if this becomes normalized, I think it will really hurt their opportunities in dating life, but also professional life.
Well, I’ll tell you one thing that I’ve been thinking about as I was reading your book, Kash, is there was this moment early in the pandemic when I remember hearing celebrities saying that actually, they liked masks as a sort of societal trend, because all of a sudden, a very famous actor can put on a COVID mask and go to the grocery store, and for once not be recognized, not have their name tied to their face. And that actually made them feel more comfortable and actually more free to be able to camouflage themselves that way.
And it just struck me that that’s sort of maybe coming for all of us, that we will all just assume that unless we’re wearing a mask or obscuring our face in some way, we will just all have to move through the world as if we are celebrities. And that’s really striking. But also, do you think that people will start wearing masks just to combat the facial recognition databases?
Yeah, I mean, potentially. The problem is, during the pandemic, a lot of these companies trained their AI to work when you’re wearing a mask. And so when I did a story about Pimeyes, I asked my colleagues to volunteer. And Cecilia Kang, who covers politics in DC, sent me this photo of herself with the COVID mask on. And it still found photos of her. So you need to wear a ski mask, which is hard depending on what climate you live in.
Are you familiar with the “Mission: Impossible” series of films?
Yes.
You know, one of their sort of signature technologies in those films is masks that look very similar to other people, and I really hope we get there, because I may just have to walk around with somebody else’s face on. This will sort of almost be a “Face/Off” situation.
Yeah. I’m going to put on a mask that looks exactly like Casey and then go commit some crimes. All right, Kashmir Hill, really good to talk to you. The book is called “Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It.” It is quite good. I really enjoyed reading it, and I really appreciate you coming on.
Thanks, Kash.
Thanks. [THEME MUSIC]
After the break, it’s time for Gamer News.
Kevin, it’s time for gamer news.
To play the Gamer News Theme Song?
Play the Gamer News Theme Song. [GAMER NEWS THEME SONG]
So Casey, we don’t talk much about video games on this podcast, but we actually are both big video gamers.
That’s true, and well, look. Here’s the thing. Gamers are like ordinary news consumers except in this one key respect, which is that if you say something they don’t like, they will try to kill you. And so it’s a very fraught subject and it must be handled delicately, but we’re going to strive to do that in the first installment of gamer news.
Right, and I think gaming news often doesn’t get taken super seriously, because it’s just video games or something. But video games, they are one of the biggest industries in media, and I think we should spend some time talking about it.
Absolutely. And even if you just set aside the amount of money and time that is spent on video games, which are both staggering, it is the source of culture for the entire next generation of human beings, right. It’s like video games are shaping the way that we relate to each other in ways that I think older people sometimes don’t understand.
Yeah. So we’re going to talk about it today, with apologies to the non-gamers out there. But we’re gamers. We have a tech podcast. We’re going to talk about some video games. Let’s do Gamer News.
Gamer News.
So the biggest gaming news this week involved a company called Unity. Now Casey, what do you know about Unity?
What I know is that Unity, despite its name, has ironically torn the entire gaming community apart.
It’s true. They are doing disunity this week.
They make what is called a game engine, and a game engine is sort of where you create the nuts and bolts of the video game. You know, you have your idea. Let’s say, what if there was a video game about a plumber who had to constantly rescue a princess from a castle, because she had no agency of her own?
That’s a stupid idea. That’ll never work.
Well, I think it could have some legs, but anyways, you have this idea, and you turn to something like Unity so that you can actually build it. It is the Microsoft Word of video games.
Right. It sort of has the basic building blocks, because if you’re making a video game, like say you’re making a first person shooter, you don’t want to have to code the laws of physics to teach a character like how to jump.
That’s right.
If you use the Unity game engine, you can just sort of plug and play their little jump command, and it will make the character able to jump.
Yeah, so it speeds things up. If every video game developer had to invent their own game engine, that would just be a massive waste of everyone’s time.
Right. So there are a number of popular game engines that a lot of games are built on.
But when it comes to mobile gaming, there’s really only one popular engine.
Right. So Unity is the game engine that powers a lot of very popular video games, including Pokemon Go, Hearthstone, Beat Saber, Cuphead, and Monument Valley. Have you played any of those games?
I have played almost all of those games, and what I love is how silly you sound when you say the names of five video games back to back.
So Unity is a very popular video game engine for developers, in part because it’s got a lot of features, it’s been around for a while, but also because unlike some other game engines, it did not take royalties from the game creator. So basically, if you use the Unity game engine to make a video game that gets millions of downloads, Unity is not going to charge you based on the popularity of that game. Or so was the case until last week.
That’s right.
So last week, Unity announced changes to its pricing model for game developers. So instead of being able to use this game engine in a royalty-free way, instead, something called the Unity runtime fee would apply. So it’s basically a small fee of a couple cents every time someone installs your game.
And there are sort of thresholds, like the fees get sort of smaller as the games get more popular. But for developers who are making games that are downloaded and installed millions of times, this could amount to a ton of extra money that they have to pay Unity.
That’s right, and this kicks into effect in January. The development cycles for video games are very long, and so you have a number of developers who have been working on their games for years with one business model in mind. And they are being told things are going to be very different for you.
Right. So Unity announced these changes last week. Then gamers and game developers sort of freaked out and started protesting, saying, hey, you guys are changing the terms of our business on us. We don’t want to use your game engine anymore, and we think this is unfair.
And the resulting scandal, they’re calling Gamergate. Is that not what that one is? OK. I misread that. I misread that.
So anyway, Unity apologized and backpedaled, because one of the things that game developers were worried about is maybe there’s a game that you don’t like out there. Maybe you disagree with some of the choices the game developers made. Maybe you are disagreeing with some of the politics.
I’m furious that Luigi is not getting the love that he deserves.
Exactly. So you could have people doing what’s called install bombing, where you basically run up the royalties for these game developers by installing and deleting the same game over and over again.
And Unity, when it made its announcement, had not accounted for this at all. As I was reading the coverage of this, Kevin, I reflected back on our coverage of the Reddit story earlier this year, where Reddit also announced what was essentially an unpopular series of pricing changes. And one of the big problems was, they just hadn’t thought it through. They had not communicated to their audience who was going to be affected, how they were going to be affected, what steps they were taking to prevent abuse. Unity didn’t do any of that.
Totally. And so game developers were very upset about this. One game developer, Garry Newman, who’s the founder of something called Facepunch Studios.
Facepunch Studios?
He wrote, quote, “It hurts because we didn’t agree to this. We use the engine because you pay up front and then ship your product. We weren’t told this was going to happen. We weren’t warned. We weren’t consulted. We have spent 10 years making Rust on Unity’s engine. We’ve paid them every year, and now they change the rules.”
Yeah. And look, you know, yesterday, just by happenstance, I happened to have coffee with John Hanke. And John Hanke is the CEO of Niantic, which makes Pokemon Go, which I’m going to guess is one of the bigger users of the Unity product, right. This is a very, very popular video game that’s made a lot of money. You can imagine how much it’s going to cost them if these changes kicked in.
And you know, John was very diplomatic when he talked about the situation. He was sort of like, well, we’ll see where it all shakes out. We’re waiting to see what the final pricing is. But he also brought up this analogy that I thought was interesting, which is, let’s say that you are a writer and you wrote a book in Microsoft Word. And then you find out as you’re sort of finishing up the final chapters that every time a copy of your book is sold, you have to give Microsoft $0.20 because you used Microsoft Word. That is essentially what Unity is asking these game developers for.
Yeah, so I think it’s a huge sort of self-inflicted wound on their part. This was one of the most beloved game engines. Now it’s one of the most hated, and it just seems like they are trying to back themselves out of a very hard position here.
Yeah, well, and I think a lot of people assume that this may be the influence of Unity’s CEO, John Riccitiello, who has a history of saying inflammatory things.
Right. He had to apologize for some comments he made about developers of games in an interview. He said, quote, “These people are my favorite people in the world to fight with. They’re the most beautiful and pure, brilliant people. They are also some of the biggest fucking idiots.”
Which is just an amazing quote to have about your customers.
Yeah, I mean, that is literally just his customers that he is talking about.
So this has been not only a big online scandal, but actually seems to have caused enough anger toward Unity that there was a death threat that caused the company to have to cancel a company town hall and close two of its offices. So Casey, do you think this was just a pure unforced error on Unity’s part, or do you think they do have to change their business model in some way?
Well, you know, there has been some interesting speculation about why Unity has moved in this regard. It is a public company. When you’re a publicly traded company, you always have to be telling Wall Street a new story about where that next 10 percent or 20 percent of growth is going to come from.
It is also the case that Apple introduced app tracking transparency over the last year or so, and Ben Thompson, the analyst who writes the great newsletter “Stratechery,” wrote that he thought this might be partially in response to that, because for reasons that maybe we don’t have to get into, app tracking transparency hurt Unity’s ad business as it hurt most ad businesses online. And for that reason, Unity is now looking around for a new source of revenue.
So I wonder, do you think this is going to mean that most game studios will move to some different game engine, or what do you think happens now?
Well, I was asking John about that. I said, you know, how big of a deal is it to just switch to a new engine? And he said, it’s a pretty big deal. And if you think about it, it makes sense, right, because you’re developing with one set of code for the laws of physics.
And if you have to go port it over, one thing I know as a not particularly technical gamer, is that when you port video games just from one platform to another, things often go wrong in ways that you don’t expect. If you want to actually change the underlying code that is determining the physics and the sprites and every other component of a video game, you better believe that’s going to be trouble.
Right. So Unity, they obviously saw all of this blowback happening, and they have kind of backpedaled. They have said that they’re going to maybe soften some of these changes, and maybe sort of pacify some of the angered game developers.
Mm-hmm.
All right, next Gamer News story, which has to do with Microsoft and some documents about its plans for its gaming division that accidentally got leaked. Casey, did you follow this story?
Absolutely. When it comes to gamer news, there is no gamer news bigger than what is the next console and what are the next video games.
So according to Axios, the leak of these Microsoft documents was discovered late Monday by someone on the gaming forum ResetEra, who was basically looking through files that were related to an upcoming trial between Microsoft and the FTC about Microsoft’s attempted acquisition of the video game company Activision Blizzard.
So the court had asked Microsoft to upload some documents of its trial exhibits with redactions, but it appears that Microsoft actually uploaded an unredacted PDF of documents that included information about its future plans for its video game division, PowerPoint slides, and emails between its executives.
That’s right. And without knowing exactly what tools they were using at Microsoft, I think I have a guess, and I think that this might represent one of the biggest failures of Clippy in the entire history of Microsoft Office.
Clippy, we trusted you.
When you go to upload your unredacted documents to the FTC website, where is Clippy? Where is Clippy to say, hey, looks like that should have been redacted?
Clippy is in the doghouse.
Yes, he is.
So Microsoft gaming CEO Phil Spencer acknowledged the leaks. He told employees in a memo obtained by The Verge that the plans were unintentionally disclosed.
You don’t say.
And he said on X, quote, “We’ve seen the conversation around old emails and documents. It is hard to see our team’s work shared in this way, because so much has changed and there’s so much to be excited about right now and in the future. We will share the real plans when we’re ready.” So Casey, what was in this leak that Microsoft felt like it had to acknowledge?
Well, there’s a new version of their current generation gaming console, the Xbox Series X. It is apparently coming next year without a disk drive. They’re working on a new controller and also a refresh of the Xbox Series S. And if you’re listening to this and you’re thinking, Casey, are the names of the Xbox gaming consoles really that dumb and confusing? They really are.
So this is stuff that you care a lot about if you are a hardcore gamer. Most people probably don’t. What I found interesting in this leak was this item that said that Microsoft had considered at one point buying Nintendo. Nintendo obviously is one of the biggest companies in gaming. It’s been a huge sort of prize target for a lot of the big Silicon Valley companies that are trying to get into gaming. So far, they have not been willing to sell, but this was emails between Microsoft executives discussing the possibility of buying Nintendo.
Yeah, and it seems like this was probably more of an offhand comment. Like, it doesn’t seem like this got very far down the road of anything happening. But I do think if you’re the FTC and you are worried about consolidation in gaming and how that might raise prices for consumers, it might decrease the number of games on the market, and you read this email, that could raise some alarms.
Totally. And I think it just shows you on a broad level how interested the biggest companies in tech are in making a really big play in the gaming industry, right. They know how big an industry this is. They know how many people are out there playing games, buying games, buying consoles. They know that this is a big area of potential growth for them, and so they’re trying to gobble up as much of that industry as they can.
Yeah, absolutely.
All right, those are the big two stories of the week in gaming. Casey, do you have any gaming news to share? What are you playing these days?
Well, I have recently bought a couple of games. One is a throwback to my childhood growing up in arcades. So I bought the most recent Mortal Kombat game. It’s called Mortal Kombat 1, and I would say the premise of this game is, what if we took the game you know and love and we made it so unbelievably complicated that you’d just be better off watching YouTube videos? Oh my god.
When I say they’ve added systems to this game, not only are there all these combo systems and blocking systems, and this system and that system, individual characters will have their own system where it’s like, well, if this person does this combination of buttons six times, then they get this thing buffed for four seconds. And I truly do not know who has the time and patience and energy to understand any of that who is older than 14 years old.
Casey, I have an alternate hypothesis for what happened to Mortal Kombat in the 30 years between when you first played it and now.
What’s that?
You got old.
Oh no! How dare you? I’m so young at heart, Kevin.
These kids with their new-fangled complicated games.
Now, it is definitely true that there are some button presses that are just a young person’s game. You know, it’s like if you play a fighting game, one of the things it allows you to do is parry. And if you parry something, that gives you an advantage in your little combat scenario. But the window to do this can sometimes be literally seven frames on a screen. So I don’t know how long that takes to happen. It is truly milliseconds, and I can’t do it anymore.
Yeah, this is not a young man’s game.
Yeah. Speaking of not a young man’s game, what are you playing? Parcheesi, checkers?
Mahjong?
Mahjong.
So I used to play a lot of games, and then I had a kid. So now I just go to sleep at 9:30. But I do from time to time like to blow off some steam. I’ll play a little Valorant. Like, I really like the team-based shooters.
Because of the sort of collaborative nature of the murders.
Yes, exactly. So I play a game called Valorant. I’ve also been trying to get back into mobile games. You got me addicted to Marvel Snap, which I may never forgive you for.
It is truly the most addictive substance that has ever been in my life. I’ve successfully gotten it out now.
It was fentanyl on my phone. I had to get it off. So I deleted Marvel Snap, but I’ve been trying out some other mobile games. I installed this game. It’s like disc golf on your phone, which is kind of fun.
Those are fun.
And then I’m playing this one. It’s like a very sort of calming European game called I Love Hue. Have you ever seen this one?
I Love Hue. No, that sounds very sweet.
Yeah, they have different kinds of games over there. Basically, it’s like trying to match like colored tiles and stuff. It’s just a little soothing, you know, a way to kill a couple of minutes on the train.
You know what? I’m going to ask you to show me that later, because I’m in desperate need of a new mobile phone game.
All right, that is Gamer News.
[GAMER NEWS THEME SONG]
Kevin, remember last week when you deepfaked my voice once again?
I do.
Do you have any idea what you were having me say?
I think it was something about your house and maybe backsplashes?
OK, well, so here’s the problem, because you and I both don’t speak German, and yet you were having me say things. And one of our German-speaking listeners actually wrote in to me with a translation of what you had me say, which I would now like to read to you. Quote, “I like children and I am passively interested in the real estate market. I’d rather be with gossiping people than have to deal with low-level conflicts during work. That’s the place where I’d like to socialize.”
Why did you have me say that?
It was sort of a random clip. I needed like a continuous clip of you talking for a minute, and I used that one.
Well, what we’re learning is that they’re getting much better when it comes to the sound of my voice. But when it comes to the words, they are basically at square one.
Well, I apologize to all of our German and Hindi-speaking listeners.
By the way, I got a note from a Hindi-speaking listener who was also like, what were you saying, because it was complete nonsense.
Well, that’s on you, because that is an accurate translation of the complete nonsense that you said in English.
I say logical sentences.
OK. AI can make you speak other languages. It cannot make you sound more coherent, unfortunately.
It can’t make you make sense, unfortunately. We don’t have the technology.
“Hard Fork” is produced by Rachel Cohn and Davis Land. We’re edited by Jen Poyant. This episode was fact-checked by Will Peischel. Today’s show was engineered by Alyssa Moxley. Original music by Marion Lozano, Pat McCusker, Rowan Niemisto, and Dan Powell. Special thanks to Paula Szuchman, Pui Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com, and Google Bard will absolutely not understand it.
[THEME MUSIC]