David Rind
00:00:00
Welcome back to One Thing, I'm David Rind, and she used to work for an AI powerhouse. Now, she's sounding the alarm.
Zoë Hitzig
00:00:07
I don't think it's enough for them to ask us to trust them, because we've seen how that's gone in the past.
David Rind
00:00:13
Stick around.
Man: Claude AI Ad
00:00:17
How do I communicate better with my mom?
David Rind
00:00:20
You may have seen this commercial during the Super Bowl a couple of weeks ago.
Therapist : Claude AI Ad
00:00:23
Great question. Improved communication with your mom can bring you closer. Here are some techniques you can try.
David Rind
00:00:30
A man is on a couch talking to a therapist figure, clearly meant to be a stand-in for an AI chatbot. But after some broad yet practical advice, the therapist gets specific.
Therapist : Claude AI Ad
00:00:44
Or if the relationship can't be fixed, find emotional connection with other older women on Golden Encounters, the mature dating site that connects sensitive cubs with roaring cougars.
David Rind
00:00:57
And as the ad ends, what? The text on the screen reads: "Ads are coming to AI, but not to Claude." This is just the latest move in a long-simmering beef between two of the world's largest artificial intelligence companies, OpenAI and Anthropic, which is the maker of Claude. There's a little backstory here. OpenAI recently announced they'd be testing advertising in certain versions of ChatGPT. So Anthropic responded with this Super Bowl ad. OpenAI CEO Sam Altman clapped back, calling the ads dishonest and deceptive and saying Anthropic serves an expensive product to rich people. This is way more than just a social media spat. There are real questions about the future of AI technology, who should govern it, and who should get access to it. Experts do have concerns about what happens to user safety and privacy when advertising and engagement start ruling the day. And over the last few weeks, more and more of these calls of concern have been coming from people who actually work on these AI products.
Jake Tapper
00:01:58
A CEO who says that we are not taking warnings about artificial intelligence seriously enough.
Boris Sanchez
00:02:05
The AI chief at Microsoft told the Financial Times, quote, white-collar work, most of the tasks will be fully automated by an AI within the next 12 to 18 months.
Dana Bash
00:02:15
Then you had the head of Anthropic's Safeguards team publicly resign, saying that they're gonna go write poetry, saying the world is in peril.
David Rind
00:02:25
Now we've heard these kinds of apocalyptic warnings before, not only about AI, but with basically every new technology throughout human history. Some people once thought the telephone attracted evil spirits, for example. So what's different about this AI panic, and how could it impact you and your work?
Zoë Hitzig
00:02:44
I'm Zoë Hitzig. I'm an economist, and I work in the AI industry trying to figure out the economic and social impacts of AI.
David Rind
00:02:54
Zoë Hitzig used to work at OpenAI as a researcher, but earlier this month, she wrote an op-ed in the New York Times about why she was quitting. In it, she said the values she once admired in the company were being eroded, the very values that drove her to work there in the first place in May of 2024.
Zoë Hitzig
00:03:11
What I was really inspired by was that the company was so ambitious, and at the time that I joined, they were ambitious not just in their vision for what the future could look like in terms of what technologies would be available, but also they were really ambitious about coming up with new ways to make society benefit from the technology.
David Rind
00:03:39
So you felt that they weren't just making this stuff to make stuff or to make money, but that it could actually help people.
Zoë Hitzig
00:03:45
Yeah, exactly. And what excited me and what made me think that it was important to leave my quiet role in academia was that I really felt that there was a small window of time where many, many decisions would be made. And each one of those decisions, even if those felt small in the moment, would have massive lasting consequences on the future of this technology. Yeah, I thought that the company was ready to kind of learn from the past and think ahead.
David Rind
00:04:20
Well, so when did you start to realize that you may not personally be aligned with the way things were headed? Was there something in particular that you were like, ugh, I don't know about this?
Zoë Hitzig
00:04:33
Well, one thing I can say here is that pretty early on after I joined, a friend of mine asked me if I had any red lines, like if the company crossed that red line, it would mean that I would have to leave. And interestingly, this was very early on in my time there. And I immediately responded to this friend, you know, if the company starts selling advertising without serious governance around it and serious data protections that I can trust, I'll probably leave. And it kept kind of gnawing at me over time. And I kept coming back to it. And as the ads conversations started in the company, what I saw was that maybe this advertising issue was not necessarily a red line in the sense that it would mean that the company had done something really bad and icky that I was disgusted by, but rather that the company selling ads without proper governance would be a signal to me that the kind of work that I hoped the company would do, the kind of creative, big-picture thinking about the social impacts of the technology and what to do about it, that that kind of thinking was no longer valued.
David Rind
00:05:57
So you aren't necessarily opposed to ads as a concept, but you wanted to make sure that they were being rolled out responsibly, with some sense of safety, and you don't have a sense that that is what is happening right now.
Zoë Hitzig
00:06:13
Exactly. I think that there are many things to worry about here. I think that advertising touches on some real issues where like, the lessons from social media are directly relevant. Like, of course, we're going to make tons of mistakes in building and deploying AI. But at least I hoped we could learn from the lessons of social media. And the lessons of social media are that when you maximize engagement and create incentives to keep people on a platform without knowing the effects of that engagement, you can get really dark outcomes.
David Rind
00:06:53
Yeah, I mean, the title of your New York Times essay was, "OpenAI Is Making the Mistakes Facebook Made. I Quit." So I mean, when you look at a company like Facebook, and now Meta, like, what mistakes do you think they made that OpenAI might be replicating here?
Zoë Hitzig
00:07:11
Well, first of all, I didn't choose the title, but.
David Rind
00:07:14
Fair point.
Zoë Hitzig
00:07:15
What I am really worried about is that advertising creates a huge economic engine that creates incentives for the company to keep people on the platform. In the case of social media, optimizing for engagement, you know, in this world where keeping people on a platform directly translates into advertising dollars, that translated into optimizing for virality. And it turned into outcomes like extreme political polarization, it turned into fake news sparking real protests, it turned into disorders for teenagers and other kinds of mental health effects. So what I worry about is that we may be making the same mistake here if we strap on an advertising engine that creates incentives for engagement before we've understood the possible social and psychological effects of maximizing for engagement.
David Rind
00:08:22
Yeah, I guess it gets to the idea of whether the product itself poses some kind of risk by keeping people on for extended periods of time. Do you think that a product like ChatGPT is inherently dangerous, and that by incentivizing engagement, keeping people on for longer, is a real risk?
Zoë Hitzig
00:08:43
I don't think it's inherently dangerous. I mean, I think some of the people who spend tons of time on ChatGPT are doing enormously wholesome things. They're learning new languages, they're learning about new fields of study. At the same time, what I worry about is that we're beginning to see signs that there are more dangerous kinds of dependence.
David Rind
00:09:10
We should say I reached out to OpenAI about all this, and they pointed me to their policies on ads. They say the ads won't dictate the answers ChatGPT provides. The company also says it won't sell user data or conversations to advertisers, and that users can turn off ad personalization based on their chats. They don't plan to advertise in conversations around regulated topics like health, mental health, or politics. There are also some age restrictions, and they've stressed that they're going to stick to these principles forever. So it does seem like they have thought about some of this stuff in some kind of way. Are those guardrails not enough in your mind?
Zoë Hitzig
00:09:53
I don't know how we should be expected to trust them. I don't know how we should be expected to trust that they'll stick to a set of principles that they are creating billions of dollars in incentives to override. I'd also point to Facebook, which made a lot of the same promises in its early days. And over time, those promises, of course, eroded. So I would be very happy if OpenAI ended up sticking to those principles, and I sincerely hope they do. And part of my interest in bringing attention to this issue is that I hope they will do a better job and create real governance structures that force them to stick to those principles. But I don't think it's enough for them to ask us to trust them, because we've seen how that's gone in the past.
David Rind
00:10:53
We gotta take a quick break. When we come back, will these chatbots soon become part of your monthly budget? Zoe and I are gonna talk about the angst over access. Stick around. In your essay, you also kind of point to this idea of just how much these products may be excluding certain groups of people from using them. Like OpenAI, for example, has a couple of tiers of ChatGPT use: $8 a month for what they call a Go subscription, and that will have ads in it. If you go up the ladder, you get up to $20 a month for Plus, and Pro is $200 a month. So you say we still have time to do better than exclusion through high subscription fees, like I just mentioned, or manipulation via advertising. You seem to say those are kind of the two ways this could go. How much time are we talking about?
Zoë Hitzig
00:11:51
Great question. And yeah, I think my interest in drawing a lot of attention to this issue right now is that I think it's urgent. And I don't think that it's valuable to trivialize this question. For example, some people point to Anthropic and say, look, they're doing it right, they're the good guys. And I say, unfortunately, I just don't think they've faced the same kind of problems that OpenAI has. Like, the basic fact is that AI costs money to run. It's not free; it is not a zero-marginal-cost technology.
David Rind
00:12:27
The complete opposite of free. It's exorbitantly expensive.
Zoë Hitzig
00:12:31
It's quite expensive. And I think that what we need to reckon with as a society, what the industry needs to reckon with, is that if we want to be building something that is truly transformative, that has the potential to come up with all kinds of new innovations, to transform the economy, to cure cancer, then we need to talk seriously about how we're going to distribute that and how we're going to make sure that everyone has access.
David Rind
00:12:59
There's been a bunch of other voices who have kind of spoken up as they've left some of these companies, like you did. The head of Anthropic's safeguards research team announced he was leaving. He said the world is in peril. The CEO of HyperWrite, Matt Shumer, wrote this really long post that went viral about how AI has already made some tech jobs obsolete, and that they're coming for non-tech jobs much faster than we realize. There's a lot of, like, doomsday talk out there. Where do you fall on that idea of just how much of a catastrophe this could be when you talk about the change?
Zoë Hitzig
00:13:33
Well, I'm trying to be an optimist, and I think that a lot of that doomsaying comes from just the extraordinary pace of change. I believe that as humans, we're simply not wired to deal with changes that are happening this quickly and that have the potential to really rewire so much of society and how we do things. And so a natural response is, oh no, this will change everything, and therefore it will be a disaster. But again, to go back to my kind of hope and idealism when I started at OpenAI, I still think that we have time to figure out how to be just as creative in thinking through the social consequences and the kinds of institutions that we want as we are in thinking about the technology. And, you know, one basic point that I'll point to is that both OpenAI and Anthropic are public benefit companies. And it is kind of unprecedented to have major technology companies of this size, of these valuations, have an explicit mandate that has to do with something other than maximizing profit. So I think there are reasons to be optimistic that we can put pressure on these companies to act in the interests of everyone.
David Rind
00:15:01
Right, but if you look at the way that the companies kind of attract attention on Wall Street and the way these valuations have been driven up, that seems to go right in the opposite way of the public interest model that may have been in place at the start.
Zoë Hitzig
00:15:16
Yeah, unfortunately, I have to concede that you're right on that point.
David Rind
00:15:21
Well, so, I mean, if these companies can't figure out a way to kind of go forward in a way that you see as a little more equitable, a little bit more safe, what is the end result? Like, what keeps you up at night about where this could go? Because I'm cognizant that we're talking during a week where Mark Zuckerberg is set to testify in this landmark social media addiction trial, where he will face grieving families who allege that Meta and YouTube intentionally designed addictive features that harmed their children's mental health. The companies deny this. But is that where this could be headed, where we're seeing family after family saying, my kid's life was wrecked by this product that these leaders did not rein in?
Zoë Hitzig
00:16:05
Yeah, I don't know the specifics of what it will look like, but what I think is so dangerous here is optimizing for engagement in a way that keeps people on the platforms before we understand the social and psychological consequences. And I don't know exactly what those social and psychological consequences will be. It may be a little bit different than social media. It may be similar to social media in other ways. But I think it's really dangerous to be gambling with people's lives.
David Rind
00:16:41
I mean, yeah, we have seen very extreme cases of, you know, families who say their kids were led to self-harm, you know, other very serious cases like that. Like you say, it's hard to know, when you spread it across a whole population, what that looks like, but we have some people who say this is already hurting.
Zoë Hitzig
00:17:00
Exactly. And I just want to stress that the scale is enormous, you know, 800 million weekly active users on ChatGPT. I think there's something like 700 million monthly users on Gemini. I don't know Anthropic's numbers, but there's just a huge portion of the global population turning to these tools that are young. These are technologies that we don't understand yet. And my view is that we need to understand the effects they have on people before we try to get people to use them more and more and more.
David Rind
00:17:40
So why not try to affect some of that change from the inside? Could you not have made a difference kind of speaking up about this stuff from within OpenAI? How do you see that?
Zoë Hitzig
00:17:52
I personally reached my limit.
David Rind
00:17:59
Well, Zoë Hitzig, thanks very much for the conversation. I really appreciate it.
Zoë Hitzig
00:18:03
Thanks so much. Take care.
David Rind
00:18:08
A few things we've got to say before we go. In response to the head of Anthropic's safeguards research team leaving the company, Anthropic told CNN that it was grateful for Mrinank Sharma's work advancing AI safety research, but noted that he was not head of safety, nor was he in charge of broader safeguards at the company. When it comes to that major social media addiction trial I mentioned earlier, a Meta spokesperson said, quote, we strongly disagree with the allegations in the lawsuit and are confident the evidence will show our longstanding commitment to supporting young people. It pointed to safety features like teen accounts, default privacy settings, and content restrictions for users under 18. A YouTube spokesperson told CNN the lawsuit's claims are simply not true and that providing young people with a safer, healthier experience has always been core to our work. Meanwhile, last year, OpenAI announced a slew of safeguards in ChatGPT to cut down on the model's sycophantic tendencies and to better support people in emotional distress. The company said it updated the default model to better recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. It also added reminders for users to take breaks, and new parental controls for younger users. That's it for us today. If you like the show, I have a favor to ask: just leave a rating and a review wherever you listen; it helps other people find the show. On Apple Podcasts, I noticed a listener named Danielle wrote us a really nice review. She said I could read her the back of a cereal box and she'd be happy. I don't know about all that, but we really appreciate you listening, Danielle, and everyone else for tuning in. We'll be back on Sunday. I'll talk to you then.