The AI series: AI and Surveillance Capitalism | Studio B: Unscripted, Al Jazeera English, Feb 2024. Featuring Meredith Whittaker and Camille Francois, hosted by Maria Ressa.
Maria: Artificial intelligence, AI, is already transforming our societies and economies, creating jobs and growth. But are we ready for the dark side? How it's consolidating power, weakening nation-states, corrupting our information ecosystem, and destroying democracy. My name is Maria Ressa. And in this episode of Studio B on Artificial Intelligence, we'll be hearing from two inspiring women, courageous whistleblowers in their own right, working to make AI fairer, safer, more responsible. Camille Francois is a researcher experienced in combating disinformation and digital harms. Today, she's helping lead French President Macron's landmark initiative on AI and democracy. Meredith Whittaker blew the whistle inside the industry about AI's largely unchecked harms and led the Google walkout in 2018. Nowadays, she's president of Signal, the secure messaging app. So how do we protect ourselves from mass disinformation and distinguish between what's real and fake online? Is AI embedding surveillance into our lives, creating our new dystopia? And how do we make big tech accountable? Camille: Meredith, I'm so happy to be here with you tonight. So you're the president of Signal, my favorite way to send messages these days. And you're an AI scholar in your own right. You co-founded an AI research organization called AI Now, which you continue to advise. And I actually had the great pleasure of sharing an office with you when we were both colleagues at Google, which is 10 years ago. MW Shh! Two years ago. Cam Back then, you were already interested in machine learning and its impact on society. MW Yeah. I mean, I remember these formative conversations, talking with you, talking with, let's say, a concerned whisper network about what is going on. Why is this -- at that point unproven, still unproven -- technology being infiltrated into so many products and services at Google? Why is everyone being incentivized to develop machine learning? What actually is this? And why are we trusting such significant determinations about our lives and our institutions to systems that are trained on data that we know to be faulty, that we know to be biased, that we know not to reflect the contexts or criteria in which these systems are used? So that was formative. Cam The beginning of what, at the time, I think we called the machine learning fairness conversation across the company. MW Well, that was, I think it was around 2014. And so I think there's sort of a-- Cam It was 10 years ago. MW It was-- we're looking great. I think that is an important date, because we can zoom out on the conversation on artificial intelligence, which sort of touches everything and nothing at once in our current context. And we can actually recognize that artificial intelligence as a term is over 70 years old. So then we need to confront the question: OK, so why now, why in the last 10 years is it so hyped? Cam But it is a pivotal moment for AI right now. Wouldn't you say that? MW Well, we're certainly being told that. And there's certainly a lot of resources, a lot of attention, a lot of investment that is riding on this being a pivotal moment. But again, what happened in 2012, right? In 2012, it was shown by a number of researchers that techniques developed in the late 1980s could do new things when they were matched with huge amounts of data and huge amounts of computational power.
And so why is that important? Well, huge amounts of data and huge server infrastructures, these massive computers with sort of new and more powerful chips, are the resources that are concentrated in the hands of a handful of large companies based largely in the US and China, and that are sort of the product of this surveillance advertising business model that was shaped in the '90s and grew through the 2010s. And then in 2012, there was a recognition that we can do more than sell advertisers access to demographic profiles for personalized advertising on your Gmail, on your Facebook, wherever you encounter it. We can use these same resources to train AI models, to infiltrate new markets, to make claims about intelligence and computational sophistication that can give us more power over your lives and institutions. So I think we really need to have a political economic view to look at the business models, who wins and who loses, and then look at who's telling us this is a pivotal moment. Cam So that's interesting, because the way that our current moment is also being framed is around this rupture. This is sort of the moment that leads to the generation of technology in AI that we call generative AI. And I think that what you're saying is, it's not particularly helpful to see this as a rupture, and it's helpful to see the continuity of the development of AI. MW Well, it's helpful to investors who are recovering from losses on the metaverse, who are recovering from losses on Web 3, to see this as a transformative moment. There's a lot riding on this, but it's not necessarily true. You can make a lot of money by people believing it's true long enough that you get an IPO or an acquisition. It doesn't need to actually be true. But we didn't start talking about this in 2017. We hadn't been inundated with claims about AI mimicking or surpassing humans in the same way we are now. ChatGPT wasn't everywhere. When did we start talking about this? Cam I started playing with GPT, I would say, around 2018. But I had a-- MW Well, you have always been ahead of the curve. Cam I think I just like playing with those models. I remember playing with GPT-2 and trying to think about what this means for our society if everybody starts having access to the means to create synthetic text. And at that time, there were not a lot of users of these generative AI models for text. And so I was playing with this idea that just as we had deepfakes, we were going to have readfakes -- a whole ocean of synthetic text that was going to take over all these online spaces. And I thought it would be fun to have the AI write an op-ed about the consequences of readfakes, which it did. It generated this metaphor, which I thought was really interesting, that synthetic text was going to be the 'gray goo' of the internet. It was suddenly going to creep everywhere -- a sort of science fiction idea that it was going to really ruin the internet as we knew it. And I think that was 2019, yeah. MW Mimicking self-awareness. Cam Yeah. Something along these lines. MW Of course, you and some practitioners and people who were deep in the field have been playing with these different techniques for a while. But it wasn't until Microsoft started spending millions of dollars a month to stand up and to create and deploy ChatGPT that we started talking about it. And we need to recognize that it costs hundreds of thousands of dollars a day to run this, that it's actually extremely computationally expensive.
And so these are not technologies of the many, right? We are the subjects of AI. We are not the users of AI in most cases. Cam I think this is also why it has, for some of us, felt like a pivotal moment. Because back when it was very much still research projects or conversations between practitioners, we kind of had the luxury to ask ourselves, well, what does it mean, for instance, for disinformation that now everybody can produce synthetic text? Or what does it mean when we know that there are biases and stereotypes that are embedded in this machine? How would we go about thinking about the impact on society? And I think that changed the scale of the urgency of those questions when suddenly everybody has access to these technologies and they're suddenly being deployed really quickly in society. MW There's been a community of scholars whose work sort of preceded a lot of these advancements, or Microsoft's decision to just deploy a text generator with no connection to truth onto the public. Those decisions were not made because they were reviewing the scholarship, reviewing your work, Camille, or recognizing other social consequences. Those decisions were made because every quarter, they need to report to their board positive predictions or results around profit and growth. And so we have these powerful technologies being ultimately controlled by a handful of companies that will always put those objective functions, to use a machine learning term, first. How do we leverage power? Well, there's a classic answer to that, and that is workers banding together to use their collective leverage to push their employers. Cam I love it. I find it very French of you. MW Yeah. Well, I was almost not here because of a Eurostar strike, so I hope they win. Cam Yes. Let's talk a little bit about those harms that you're saying a lot of people have been talking about, that have been documented. Let's explain, for instance, what do we mean when we say there's coded bias in these algorithms? MW These machine learning systems, in this bigger-is-better paradigm which we've been in since 2012, rely on data collected and created about our present and our past. This data, in the context of text generation, comes from things like 4chan, Reddit, YouTube comments, Wikipedia, and of course that data reflects our scarred past and present, which is discriminatory, which has been and is racist and misogynist, which sees different people as deserving different treatment. And so of course that data is going to be reflected in the outputs of this system. And now the danger here is that we buy the hype and that we say this is the product of a sentient and intelligent machine that is giving us objective truth, and that is just where that person fits in society. They have the genes of a janitor, right? And so I think when we see the rise in this eugenic thinking, when we see this blind faith in machines, we really need to recognize what exactly that is naturalizing. Cam And you're right, of course, to talk about how so many of these stereotypes are inherited from training data taken from online sources that are not diverse or well-informed. And this has been an issue with machine learning for a long time, even before it was trained on those large online sources.
But those technologies, no matter if it's text, no matter if it's facial analysis – and it was already the case when we were doing image recognition – have had these issues of encoded biases and of spitting out stereotypes. Of course, when we talk about generative AI, one of the things that I find really important to highlight is-- so I'm an optimist. MW I know it. Cam And I think that sometimes folks have had too much faith in the idea that with bigger models and more data, we were going to head in generally the right direction, slowly getting better at tackling those biases, those discriminatory impacts, those embedded stereotypes. And we now recently have research that says this is actually not what happens. Abeba Birhane and two of her colleagues just published a wonderful paper that shows that, actually, when you get to bigger models that are trained on more data, those racial biases and those stereotypes get worse. You get more of them. And here you can sort of see that we're losing the race, underinvesting in how we tackle and mitigate and understand those sociotechnical impacts of these technologies, because we're just scaling them too fast. We're not catching up with the problems that we know exist. MW More of the problem doesn't solve the problem. Cam That's right. MW So I think I never understood the basis for an assertion that, well, we have a little bit of trash and that makes a trashy model, but let's pour a bunch more trash on there because that's going to clean it up. There's a magical thinking and, I think, a real, almost emotional desire by a lot of the true believers to avoid the fact that maybe some of these problems are intractable. Maybe we can't create a data set that's unbiased, because of course data is always reflecting the perspective of its creators, and that is always biased. Cam How do we change this paradigm? How do we make sure that the people who work on making technology safer, more fair, more responsible -- that their efforts can also be accelerated and their voices can be centered in the way we talk about AI? MW I mean, I don't think that's a technical problem. That is a problem of the incentives that are driving the tech industry, which are not social benefit. You and I know we got in the way of these people a lot. It was not always appreciated. And I always loved your willingness to ask those questions anyway. But I was pushed out of Google for asking these questions, for loudly asking these questions, for organizing around these questions. So there is a point at which, when you're talking billions of dollars versus a liveable future, we have a system that is choosing billions of dollars repeatedly, repeatedly, repeatedly. And in the context of a system that is now giving this kind of authority, surveillance, and social control capabilities to a handful of actors, I think that's an alarm worth raising pretty broadly. Cam Can regulation and governments play a role in re-establishing a little bit of balance in that system? MW Yeah, of course, if they're willing. Labor organizing can help that. Social movements can help that. Regulation can help that. But regulation is an empty signifier till we fill it with specifics. Cam All right. Let's pause here. Let's do a little Q&A, and then we'll talk about how to dismantle those business models and how we build the future we want. Q: OK. My question is-- you very briefly touched upon that.
But when the systems that we encounter every day in this digitalization, and where we're going, are at their very basis based on private interest, how can we really, truly create meaningful change? MW I think fundamentally, that's a question about capitalism, not a question about technology. And so how do we change that hamster wheel that is sort of-- in order to avail ourselves of the resources needed to survive, we do waged work. Our waged work contributes to structures that we may not agree with. And so I've come up with-- there are social movements. I participated in labor organizing after being a kind of in-house public intellectual and expert, thinking that that was a theory of change. I'm now a tech executive trying to do tech another way that is not profit driven. That's another theory of change. And I think of the Industrial Workers of the World, a union in the US that had a phrase, a slogan that stuck with me, and that's "Dig where you stand." So it's like, what is your role? Who do you know? What can you do with the knowledge and context you have? And I think that's a question to ask yourself every day. But it's not a question somebody else can answer. Cam I agree with that. And I will say I struggle with that question too, because in my fields of practice, both security and trust and safety, the fires that are in the building are the things that you have to attend to immediately. You also want to think about the future beyond the immediate. But it's also important sometimes to put out those fires, because it can be that your election is at risk. It can be that kids are at risk online. It can be that terrible societal impacts are unfolding in front of your eyes. And one of the things that I've been focused on recently is how can we make sure that, across the industry, the folks who are focused on putting out those fires are better equipped so that we don't have to reinvent the wheel every time? So making sure that we have the rigorous frameworks, the tools, and that all of this is easily accessible, open source, and that people are properly equipped to do this work, so that we can also invent alternative futures while we take care of the immediate harms -- that's something that's been very top of mind. Let's take another question. Q: Do you think that the very public and accelerated move towards an AI-facilitated workforce has the potential to hold a mirror up to some of the absurdities of the capitalist system in its current state? MW I want to point to an example that will perhaps illustrate my views on this, which is the WGA strike in the US. And the WGA, the Writers Guild of America, is a well-established and fairly powerful union that represents writers in Hollywood -- so the TV shows and the films you see -- and they struck for a long time over the role of AI in the workplace. So they were saying, you, studio executive, are not going to sign a contract with Microsoft for GPT and introduce that into our labor process in a way that justifies your changing our titles, your firing us as full-time workers and hiring us back as precarious contractors, your reducing our wages, and your ultimately degrading the role of our work. So I think-- and they won some pretty serious concessions in that strike. But what that episode showed is that we're not actually talking about AI replacing workers in a lot of cases.
There's research recently published that estimates 100 million people -- 100 million people -- are currently employed or have been employed recently in the task of cleaning and curating the data required to train these systems, which is extraordinarily labor-intensive. Then you have to do a very serious calibration process, because these things are trained on 4chan and Reddit and some of the most obscene and disturbing content on the internet, which is not often filtered out. And so human beings are the buffer for that. They see this content. They have to say, no, that's not right; no, that's not right. And then you have to have something that Camille is very familiar with, kind of the cleanup crew, the content moderators, the people who deal with the fact that these systems often say the wrong thing. So there's a huge amount of labor that actually powers what we call 'intelligence' in these systems. So we should be aware of that dynamic, know that AI is not replacing workers so much as it is displacing work, and that it is a tool of employers and governments; those in power are the ones with the resources to decide where it's used. And it will very likely be used on us in ways that we need to push back against. One more question. Q: So I work in people analytics at a big company, and this is something I struggle with on a daily basis. So forget AI and just think about more basic algorithmic hiring, for example. So in a process like this, when you already have something where a human being is going to be subject to tremendous biases, could something like a very well-regulated technology or an algorithm actually help? Cam It's a great question. And I think we can start by saying it's a fair question. People have biases. Why does it matter if we also have machines with biases? There are so many reasons for that. And the first one is we don't know just yet how those biases manifest. We have a hard time measuring them. And so we can think about it in two questions. The first one is, what is the right set of areas in which we can deploy AI, knowing that it's imperfect? The second set of questions is, in which ways are you able to detect and mitigate these potential biases? One approach that's popular, although that too has become a bit of a meme, is this idea of red teaming. So you would say, this is a system that I want to use. This is the context in which I will use it. I am going to trigger all the bad scenarios that I don't want to happen, so that I can understand if they are about to manifest. And I can understand if I'm able to mitigate them. If you're able to answer all of these questions, then yes, by all means, go deploy technologies in areas that you think are not going to hurt people at scale, with methods that you have tested and can rely on to ensure that you have awareness of these potential shortcomings and are able to mitigate them. But that's often not the case. And it's often not the case in people analytics, for sure. Yeah. MW I would agree with that. I would also say humans can be held accountable. They can justify their decisions. And what Camille presents, I think, is a very good answer. But it's also a counterfactual to the world we live in. Whoever is doing the contracting for the vendors is being sold a pitch by some company that's probably reskinning an Amazon or Google API to make claims about detecting inner competence -- claims that, again, have no scientific justification.
And so I think you could build a system that helps sort through resumes, but that requires an ecosystem of good-faith actors putting that use case above their self-interest, and in many cases that is simply not the world we live in as a rule. Cam I am encouraged by the fact that there have been some cases of people being held to account. So I'll take the example of proctoring software. Throughout the pandemic, a lot of universities and education institutions turned to AI in order to have students take exams at home while being surveilled or monitored. And there were very clear places where this type of software had simply not been tested. And so what happened is students started sharing, organizing, and saying, my face is not being recognized by the software; I'm getting a bad grade because it thinks that I've cheated; I think that this is a violation of my rights. So again, you can still ask the right questions. And when the right questions aren't asked and the right measures aren't put in place, we see those movements towards accountability. [PART TWO] MW Well, hello. Cam Hello again, Meredith. MW Wonderful to be here, Camille. Cam Wonderful to be here with you. Last time we chatted, we talked about risks in AI. Why are some people worried about AI taking over the world and destroying humanity? What is the thing that we call existential risk? Where is this coming from? How do you feel about that? MW Oh, wow. Well, existential risk is a thrilling story at a libidinal and emotional level. It's very activating to think of doom and conflict and these sorts of great power scenarios. And it's kind of catnip to a lot of powerful men. The idea that AI -- this scraped data, big compute, big models -- is going to somehow find the escape velocity to become sentient and superhuman. And we had better hold on, because we either need to control that powerful, powerful, powerful AI or we're going to be superseded by it. There's no evidence that existential risk is going to happen. I think there's a lot of questions around, like, why now? Why did it catch on so powerfully? And I think a part of the answer to that question-- I know, Camille, you think about this as well-- is that while there are some true believers for whom this is very meaningful, and I don't want to take that away from them, this is also an extraordinarily good advertisement for these technologies. Because what military, what government, what multinational doesn't want access to this hyper, hyper, hyper powerful AI? Doesn't want to be the one who's sort of controlling it, doesn't want to imagine themselves at the helm of the Death Star? And this advertisement also serves to distract from the fact that these systems continue to be discriminatory, and that discriminatory capacity continues to accelerate. The fact that these systems are used by the powerful on those with less power in ways that often obscure accountability for harmful decisions, the fact that we're talking about a technology that is built on the basis of concentrated surveillance power like the world has never seen. But we can erase all of that by being like, look over there, the Terminator is coming. Cam You talked about it. It's not exactly a new idea. Nick Bostrom wrote Superintelligence, now 10 years ago; it's a book that sort of focuses on that idea that AI will accelerate to a point where it can no longer be controlled by humans and will pose an existential risk.
The fact that today this concept dominates some of our conversation on safety is meaningful and worrisome though, because we're at that pivotal moment where we have governments, for instance, for the first time saying, hey, we would like to organize and to discuss what safety means in the context of AI. And so we have governments coming to the table. We saw it with the AI safety summit. We saw a series of first declarations, first regulations. There's the White House executive order in the US, the Hiroshima process coming out of the G7. And so there is this urgency to define: what is it that we're worried about, that we want our elected representatives to protect us from and to focus on when we talk about the safety of AI? So I think you're right. It doesn't mean that everybody should be laser-focused on avoiding Terminator scenarios. It also means that we need to focus on the very immediate harms to society -- the biases, the discrimination, the surveillance implications, which we haven't talked about just yet. I see your surveillance eyes. MW Oh, yeah. Cam Yes. Should we get there? MW I was smelted in the furnace of concerns over surveillance and privacy. And we were around Google -- I think it was 2014 or so when we met -- but that was the post-Snowden era. So we came out of the '90s in the US with a regulatory framework that had no guardrails on private surveillance. So a private company could surveil anything. And they could surveil it in the name of advertising. And so after the '90s and this sort of permissionless surveillance, you see a lot of very cozy partnerships between the US and other governments and these private surveillance actors -- getting data from them in certain ways, brokering relationships, convincing them to create backdoors in their systems. And this is documented in the Snowden archives, which, of course, happened in 2013. So this was a shock to the system. Cam Take a moment to define what a backdoor is. MW A backdoor is a generally intentional flaw in a secure system that allows a third party access to contents or communications. So if we're using an encrypted system -- say, you and I are texting, and we think that is secure -- but in fact, the code is allowing a government or a third party to access that and to surveil our communication. So a backdoor is sort of the colloquial term for a flaw in the system that allows that kind of access. Cam I think here the critical security concept is this idea that you can't have a backdoor that's only for the good guys. And so if there's a hole in your system, there's a hole in your system. I think that's why we care so much about strong end-to-end encryption and making sure that when we say a system is secure, it's secure for everybody and from everybody. Yeah. MW It either works for everyone -- and that means I can't see it, that means the UK government can't see it, that means Putin can't see it, that means XYZ hackers can't see it -- or it's broken, and we can all see it. Cam So you were saying 2013 was a big moment of reckoning in Silicon Valley over privacy and those concepts of surveillance. MW And that was kind of the world I lived in, right? Watching this privatized surveillance apparatus at Google that had been justified on, hey, we have a duty to our customers and we're just giving people more useful ads and more useful services. But Snowden kind of broke that open, right? And since then, there's been a kind of uneasy situation where encryption has been added to some things.
But the pipeline of data and data collection and data creation continues, because, again, monetizing surveillance is the economic engine of the tech industry. And so again, what happened in 2012? There was a recognition that this surveillance data could also be used to train and tune AI, and that these AI systems were incredibly good at conducting surveillance. So think about facial recognition. Think about productivity monitoring. I think that we have to read AI as almost a surveillance derivative, right? It pulls from this surveillance business model. It requires the surveillance data and the infrastructures that are constructed to process and store this data. And it produces more data and sort of heightens the surveillance ecosystem that we all live in. Cam You know, it's also what I've observed working on disinformation and on troll farms in 2017. I was doing some field work before that, around 2015 or so, working with those journalists and human rights activists, including Maria, who were so often targeted by governments. Their phones were being hacked. We were very concerned about making sure that they had secure software. We could secure their phones, secure their computers. They were very much under heavy surveillance. I remember they were the first ones to say, hey, there's something a bit off that's happening on social media. And we think it's harmful. We think it's violence. And we think it's related to the hacking. We should take it seriously. And we should try to uncover what's really going on. And we should really apply the same rigor and tools that we had in our work on cybersecurity and say, we can analyze this. We can do forensics. We might even be able to attribute it. If we see networks of fake accounts that are deployed against a journalist or a human rights activist with the sole purpose of silencing them, threatening them, we might be able to hold a few people accountable in this process. We were, of course, sort of slow to do that as an industry. And that created the sort of great reckoning of 2017. What it took for Silicon Valley to care about that was really the US presidential election of 2016 and the fact that Russia was able to use what we now call troll farms, networks of fake accounts, to run a campaign against that presidential election. And what followed after was a full year of technology executives having to go to Congress and justify why they had missed it. So that, I think, was also similarly a really pivotal moment where some new foundations were established: OK, maybe we now live in a world where, as a society, we feel that technology companies have a responsibility to protect democracies, too. And we feel technology companies have a responsibility to tackle disinformation and to think about how their technologies can be abused to manipulate elections. That is also something that's coming up for us in AI in a really interesting way. MW But what I am concerned about, in addition to those very real, very pernicious problems that happen when you mass-scale a global information and social platform, again incentivized for sort of clicks and engagement and profits and surveillance and advertising, is that the solution space, in my view, seems not to go far enough. So you have something like the UK's Online Safety Act, which is this massive omnibus bill that was catalyzed through these very real concerns. What do we do about these problems? But they rarely look at that business model.
And they take as a given these mass social platforms. And then the solutions often look a lot like extending surveillance and control to governments, expanding the surveillance apparatus of large tech companies to government-chosen NGOs or government actors who will then have a hand in determining what is acceptable speech, what is acceptable content, but not actually looking at how we attack the surveillance business model that is at the heart of this engine. And so this is very real for me. And we're both based in the US. But we now have books being banned in certain states. We have reproductive health care, or health care in general, unavailable to many people in states where reproductive health care has been criminalized. So I really worry about these problems with platforms, about the way they exacerbate hate and allow trolling and disinformation. I also really worry about the solution space when that is handing a key to governments that would lock up a woman and her daughter for accessing health care, that would ban books, and that across the world are trending toward the authoritarian. Cam Absolutely. And so what we need, I think, is also a diversity of these platforms -- platforms that are not tied to these surveillance capitalism business models, platforms that can put security and privacy first, that can operate in a public interest. And I think that's what we're doing with Signal. And I want to talk a little bit about that. MW Which means redeeming ourselves from the sins of the '90s. And doing it differently. Cam Redeeming yourselves from the sins of the '90s? MW Speaking of religion, right? Cam This is not a fashion commentary, right? MW This is-- well, it's fashionable commentary now. And it's a crypto war commentary. Cam It is a crypto war commentary. So let's talk a little bit about what's happening with Signal. I was very excited to see that you published a piece about how much it takes to run Signal. And you said it costs $50 million a year to actually operate this technology globally. Why did you do that? And how are you using $50 million a year to make Signal work at scale? MW Well, we did that in part because we are a nonprofit -- a rare nonprofit, not a fake nonprofit like OpenAI, an actual nonprofit that operates in a tech space, again, dominated by this business model. So we, one, wanted to be accountable to the people who rely on Signal, the tens and tens of millions of people across the globe who use this as critical infrastructure, who donate to keep us running. And we wanted to offer a cross-section of just how expensive it is to develop and maintain highly available global communications infrastructure, these sort of free products and services, and thus shed light on how profitable this industry is and how significant the monetization of surveillance is as a revenue generator. We're a nonprofit because the engine of profit is invading privacy. And our function, our sole focus, is creating a truly private communication app where we don't have the data. You don't have the data. The cops or Facebook or anyone else doesn't have the data, because it's only available to you. But then the question is, OK, without the data to create the revenue to cover $50 million a year-- and by the way, $50 million is very cheap-- how are we going to guarantee privacy while supporting what it takes to actually produce an app that works for everyone? And I think that question is way, way bigger than Signal.
And I think it's one we need to be asking of every company out here. Where's the money? Cam And that is a nice nod to how we started this conversation, which is making sure, too, that the money goes to tackling those very risks to safety, to moderation, to privacy -- making sure that the investments are also keeping in line with detecting and managing those socio-technical harms. That is a good segue for us to take a few questions, either on the infrastructure or on inventing new resourcing models. Q: So we spoke a lot about corporations, governments, their roles. I think it's almost embarrassingly easy how individuals-- everybody in this room-- hit the Consent button when you want to read a thing on a web page, and we lose all of that data. How do you go from being in the minority of individuals who recognize they need to protect their own data, versus the easy access to information and giving that data up? MW I don't think this is a matter of individual choice or individual blame. We can't function in this world without using these services, right? We have to do this to get a job, to function in the workplace, to go to school, to have a robust social life in a world where so many of our public spaces and ways of communicating with each other have been hollowed out by these platforms. I actually think it can be really dangerous to make this an issue of individual will or intellect or consciousness. I think what we're talking about is a deeper collective issue where our lives are shaped by the necessity to use these systems and where, like, look, Facebook creates ghost profiles for people who are not on Facebook in order to fill in your social graph. Data tells you something about the people who aren't represented in the data the same way it tells you about the people who are. Cam I'm encouraged by new frameworks that are emerging that are maybe helping us think a little bit more collectively about our data. And so, for instance, in the United States, a lot of people are working around this idea of data trusts and this idea that you have data rights. And you can also work with organizations who may represent your data rights, making it easier for people to collectively say, yes, I will entrust a trusted nonprofit to make sure that I can exercise my rights. And this entity, for instance, can also be collectively bargaining to make sure that, again, collective rights are being represented. I think that we're heading towards new frameworks, new governance mechanisms, new regulations, where we think a little bit more collectively about our data. MW Let's take another question. Q: So far, our conversation -- or your conversation -- has been pretty US-centric, and rightfully so. But what do you think about what is essentially an AI arms race between the US and China and what it means for the relationship between the two countries, as well as the impact on the rest of the world? MW There are very valid concerns about the potential for the misuse of this technology. I'm not going to dismiss those. But for me, this is an economic arms race: which pole, the US or China, is going to engage as much of the world as possible as AI client states, provide the infrastructure, provide the APIs, provide the affordances, so that they can both extract data and revenue from various countries and maintain control through these companies. So I think there's a lot more to say about that framework. When we talk about a race, we really need to be asking, where are we racing to?
Is this a race to the bottom between two poles of an economic surveillance state that are exercising massive social control over the rest of the world? And is that a race we want to win? Cam When I think about governments rushing to make those investments and us talking about those arms races, of course, I also think about the fact that we have little agreement on what are the legitimate ways to deploy AI in military contexts, in conflict contexts. How does AI shape the laws of war? MW Gaza. I mean, Gaza -- we have investigative reporting that targeting is being done by AI, that there's a massive AI apparatus, and that we're witnessing significant, unspeakable civilian casualties, even in that context. Cam I think these are very important questions. We are, unfortunately, in a world where we have multiple wars and conflicts, and seeing governments accelerate to both build and deploy those types of new technologies in conflict contexts must give us pause and make us ask: what are the rules of the road for the deployment of these technologies in these contexts, too? So when we talk about arms races, this is first where my mind goes, for sure. So we'll take one last question. Q: There are information vacuums right now, which are good breeding grounds for disinformation. And what do we do when you, Meredith, push back on the Online Safety Act surveillance powers in that? So when you see government and big tech censors fusing their powers, is the solution to break up big tech? MW I, like you, am very concerned about the metastasis of surveillance and censorship powers, again, in the hands of governments and corporations that don't always reflect community norms or the social benefit or the interests of the marginalized, et cetera, et cetera, et cetera. So I don't have the 'one weird trick' that would solve it. But I think it's going to require social movements, because again, you're looking at sort of entrenched power and a kind of government that is willing to weaponize the language of accountability and the language of reducing big tech harm in contexts where that expands the big tech model or the authority of governments. But what we haven't seen are sort of bold regulations. There is not political will to use the regulatory framework. So I think there needs to be much more demand. And I think about it almost as a kind of dignified stance. We don't want to live in this world. And we should have the imagination and, I think, the deep optimism that is willing to recognize a world in trouble, in danger, in terrible peril, that isn't looking away from that with Pollyanna eyes, and that is then demanding changes with a clear map of just how bad it is. Cam That's a very elegant way to say that we should and we are able to invent alternative futures, alternative models, and then to say, indeed, this is not how we want to live with technology. It's not being a Luddite to say these are not models that should continue. Let's invent alternative futures that are more rights-preserving, that are better for society, that are better for the planet too. There are also huge climate implications in everything that you just said around surveillance capitalism that we don't talk about nearly enough. MW Yeah. I mean, there's a history of computation that actually traces it back to plantation management techniques that were used to discipline, control, and surveil enslaved African people as part of the transatlantic slave trade.
And I've written on this: the history of computation as taking templates from those labor control mechanisms at the birth of industrialization. What paradigms were they reflecting and refracting? And no, that doesn't mean we throw them away. That means we're mindful, and, in a punk rock spirit, we just demand more of them. Cam I love that. I think that this is a perfect ending. Let's embrace that punk rock spirit. Let's demand more. Let's invent better futures. Thank you so much for this conversation together. MW Thank you, Camille. [APPLAUSE] Maria: In the four episodes of this special series on AI, we've gone beyond the headlines and hype, spoken to scientists and industry leaders working to align profit motives with safety, and examined the coded bias that is already impacting our world. When real life is now stranger than fiction, we need to step back, look at the history of AI, how it's impacting economies around the world, how it is affecting violence and warfare, and what steps we can take now to make AI safe and ethical for us all.