
In this Wiener Conference Call, Dean Jeremy Weinstein discusses emerging technology and its intersections with democracy and higher education, including:

  • How technology may help solve our everyday problems, but could come at the cost of undermining our social goals
  • The consequences of rolling out AI to the world almost overnight
  • How technology can generate negative externalities without proper guardrails
  • Why, although algorithms are powerful and efficient, we should be concerned about fairness, discrimination, agency, and oversight
  • Building an ethical muscle and an ethic of technology among the next generation at Harvard Kennedy School

Wiener Conference Calls feature Harvard Kennedy School faculty members sharing their expertise and responding to callers’ questions. We are grateful to the Malcolm Hewitt Wiener Foundation for supporting these calls, and for Malcolm Wiener’s role in proposing and supporting this series as well as the Wiener Center for Social Policy at Harvard Kennedy School.

- [Narrator] Welcome to the Wiener Conference Call series, featuring leading experts from Harvard Kennedy School who answer questions from alumni and friends on public policy and current events.

- Today we're very fortunate to be joined by Jeremy Weinstein, who is Dean and Don K. Price Professor of Public Policy at Harvard Kennedy School. I'm also extremely proud to note that Jeremy is an alumnus of the school. He received his PhD here in 2003. His academic expertise spans topics including migration, democracy and the rule of law, and political violence. Beyond his scholarly work, his experience includes senior roles in the US government: he served as both director for development and democracy on the National Security Council and as deputy to the US Ambassador to the United Nations during the Obama administration. Jeremy joined the Kennedy School in July from Stanford, where he led the Stanford Impact Labs and the Immigration Policy Lab. In recent years, Jeremy has been working on the intersection of technology and public policy, including co-authoring the book "System Error: Where Big Tech Went Wrong and How We Can Reboot." His thoughts on how to balance the benefits of new technologies with the challenges they pose will be the focus of today's call. I wanna note that Jeremy is speaking in his personal capacity as a scholar who's worked on these issues and not on behalf of Harvard University. We're very fortunate that he's here to share his expertise with the Kennedy School's alumni and friends. Jeremy, over to you.

- Ariadne, thanks so much for that kind introduction, and I'm thrilled to join with all of you in this Wiener conference call, my inaugural participation in the Wiener conference call. I want to thank the Wiener family for their generous support of this program. I'm gonna share a few slides. Let me get them up on the screen; gimme a thumbs up if you can see them. Terrific, we're all set. Today I'm gonna share some thoughts and reflections drawing on the book that Ariadne described and really, you know, almost eight years of teaching and researching at Stanford, in Silicon Valley, thinking about the intersection of technological change and democratic politics. And I think this is important to share in part because I'm five months into a new role as dean at the Kennedy School, you know, a world-class and leading institution in the study of public policy and government. And I think it's apparent to everyone on the call that thinking about the appropriate role of our political institutions in governing in this moment of technological change is an issue that naturally should be at the forefront of my mind as dean. So I'll start with a bit of framing and take you through remarks for about 25 or 30 minutes and then open it up to questions.

So here's "System Error: Where Big Tech Went Wrong and How We Can Reboot". And if you read my bio, you might ask yourself the question, why is this scholar of foreign policy, international relations, and political and economic development in Africa writing a book about technology policy? To answer that question, I really need to begin with my time in the US government, and in particular my role as a deputy to the US Ambassador to the United Nations. In that role, I sat on something called the deputies committee, which in the US government is the principal foreign policymaking body at the White House. It meets multiple times a day across a broad range of issues, and it's where the deputy cabinet secretaries come together to talk about the most important foreign policy issues of our time. It was in the context of my service, you know, as deputy UN ambassador and on the deputies committee, that I really began to realize an extraordinary challenge that we confronted at the most senior levels of the US government, which was a gap between senior policymakers' technical understanding of, you know, the frontiers of new technologies, and the challenges that we confronted in the policy landscape in a society that was being transformed by technological change in real time. Sometimes this played out in very specific debates, like debates around end-to-end encrypted software and how to balance the innovation potential of our economic engines in Silicon Valley with our concerns and needs around public safety. Sometimes the challenges related to cyber threats, and ultimately how, as the US government, we should be in a position not only to protect our critical infrastructure, but also to incentivize the private sector in the United States to take the steps required to protect itself from the kinds of vulnerabilities that the technological age represents.

When I came back to Stanford in 2016, the Obama administration had come to an end. I arrived back at Stanford, and this was a campus that, while I had been gone, had been truly transformed by this second-generation moment of technological change. Not the early computer moment, but the internet and social media and mobile applications moment.
Computer science was the largest major among men, among women, among domestic students, among international students. You know, there was so much enthusiasm for technology's potential, and the question for a public policy person or a social scientist was where and how to contribute, drawing on the kinds of insights and perspectives that I had as a policymaker. My first instinct was that the issues I had thought about at the policymaking table, questions like what values are at stake for us as a democratic society, how we referee values that are in competition with one another, and how we solve for a set of social goals, were relevant to computer scientists, not just to social scientists. And so I went around campus, and I needed some partners, and I found partners in Rob Reich, a preeminent political philosopher, and Mehran Sahami, the most popular computer science professor at Stanford. And you can thank him, because he's the one who invented spam filtering technology. So if you think it works at all to save you from some of the junk mail you don't wanna see in your inbox, that was Mehran Sahami's computer science dissertation. The three of us got together and began to think about, on a campus that was consumed with the possibilities of new technologies, what it would mean to help our students build the muscles and the muscle memory to bring an ethical lens to the decisions that they make about the technologies that they design and deploy into the world; to give them an understanding of how to think about measuring the social consequences of the technologies that they build; and to give them a framework for thinking about the appropriate role of technology companies vis-a-vis political institutions that may solve for some of the broader social problems. We began to teach computer scientists in the classroom, regularly teaching classes of 250 to 300 computer science majors. In the evenings we taught professional technologists, giving them an opportunity to engage with these issues outside of their professional roles and responsibilities. And when we were all locked at home in the context of the pandemic, we took the opportunity to write this book to share some of our perspectives with a broader audience.

What I hope to do today in my short set of remarks is share with you an arc of this book, beginning with where we find ourselves now in 2024. The book came out in 2021, and then in paperback in 2022. I want to give you a sense of how I think about these issues in this present moment, to offer you a perspective on why I think we confront some of the challenges that we do and what kinds of historical drivers put us in the position of now navigating some of the real tensions between innovation and the social harms of technology, and then to offer you a framework for thinking about the way forward.

So let me begin with where we find ourselves now. And I really wanna start with a story. Stories are often illuminating and engaging. And so I want to tell you a story about Joshua Browder, who arrived at Stanford as an undergraduate. Like many inspired undergraduates, he felt an urgency to make an impact in the world, and the impact that he wanted to make was by designing and launching a startup. Like many founders, he was motivated by a personal pain point, and his pain point was that he really disliked parking tickets.
Parking tickets, he felt, were a tremendous annoyance. As a high school student growing up in England, he must have gotten a lot of parking tickets. I don't know whether he was late to school on a regular basis, but he found himself accumulating unpaid bills from parking tickets. And he had an intuition that by using the tools of computer science and machine learning, he could help people efficiently get out of parking tickets. What this meant was using the fact that you can now contest parking tickets online: you have to fill out a set of forms, you have to make a set of claims. And if he could fill out those forms in an automated way and learn over time which counterclaims are the most effective to make, you could actually get people out of parking tickets in a systematic way. He called this company DoNotPay, and after his freshman year, he went out for a seed round of venture capital, raised a significant sum of money, and ultimately dropped out of Stanford to launch the company DoNotPay.

Now why do I start with this story? I start with this story because on the one hand, parking tickets are annoying, so are speeding tickets, so are all sorts of other fines and constraints that we may face in society. And so helping people get out of parking tickets may in fact be a noble cause. On the other hand, we have parking tickets for a reason, right? And, in fact, we have parking tickets for many reasons. One reason that we have parking tickets is we often reserve spaces near buildings for people who are differently abled, to enable them to access that physical space far more easily. And so we give people parking tickets if they park in a space without permission. Sometimes we have parking tickets because if you live in a place where you get snow and ice, you need to clear the street to enable people to progress and not to have the sewage system backed up by debris. So people need to park on one side of the street and then they need to switch to the other side of the street. I remember this from living in Washington, DC. So you get a parking ticket if you don't move your car to enable street cleaning to happen. Sometimes we use parking tickets because we actually wanna reduce congestion in city centers. So you limit the number of spaces that are available to people for parking in order to reduce emissions, in order to reduce congestion, social goals that we might wanna solve for. And then interestingly, in the case of the UK, fines for parking tickets also are used to support the updating of road and other physical infrastructure.

I mention all of those things because Josh Browder and the startup DoNotPay were not solving for our social goals, right? They were solving for a personal pain point that Josh Browder and other people experience. And it reveals some of the challenges of technological change, right? The opportunity to solve for a problem that causes annoying fines for lots of different people may in fact undermine some of the social goals that our regulatory architecture, our rule of law, has set in place, as the parking ticket example reveals. Now, if that feels a little bit cutesy and micro for you, I think part of what you need to understand about this ambitious agenda that Joshua Browder has is that it goes beyond parking tickets. DoNotPay is a preview of Joshua Browder's broader interest in transforming the way the legal profession works: replacing human lawyers with robot lawyers, with AI-driven lawyers.
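The mechanics described above, automating the appeal forms and learning over time which counterclaims succeed, can be sketched minimally as follows. The appeal templates, the epsilon-greedy selection rule, and every function name below are hypothetical illustrations; DoNotPay's actual system is not public.

```python
import random
from collections import defaultdict

# Hypothetical appeal templates a ticket-contesting tool might choose among.
TEMPLATES = [
    "The signage at the location was obscured or missing.",
    "The meter was broken and would not accept payment.",
    "The citation lists the wrong location; the vehicle was parked legally.",
]

attempts = defaultdict(int)   # times each template has been filed
successes = defaultdict(int)  # times a filing led to a dismissal

def choose_template(epsilon: float = 0.1) -> str:
    """Epsilon-greedy choice: usually reuse the best-performing template,
    occasionally try another one to keep learning."""
    if random.random() < epsilon or not attempts:
        return random.choice(TEMPLATES)
    return max(TEMPLATES,
               key=lambda t: successes[t] / attempts[t] if attempts[t] else 0.0)

def record_outcome(template: str, dismissed: bool) -> None:
    """Update the success statistics once the city responds to the appeal."""
    attempts[template] += 1
    if dismissed:
        successes[template] += 1

# Usage: for each new ticket, pick a template, file the appeal,
# then record whether the ticket was dismissed.
template = choose_template()
record_outcome(template, dismissed=True)
```

The only point of the sketch is that each filed appeal yields feedback, so the system can steer later tickets toward whichever counterclaim has historically worked best.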
Now, my wife's a lawyer, and I very much appreciate the value of the legal profession, but I also understand from her own experiences that there are important legal functions that absolutely would benefit from the use of artificial intelligence tools. And we see lots of firms engaged in those practices now, in particular things like document review, which is what first-year and second-year associates spend an awful lot of time focused on. But I do think, as we think about Josh Browder's broader ambition, we need to grapple with what's at stake when we remove human beings and human judgment from something as fundamental as the rule of law and the administration of justice in the United States. So from parking tickets to a transformation of the rule of law, thinking both about the benefits of new technologies, but also their costs.

Now, bringing us fast forward to the present moment, we know that we live in a moment of large language models. The expression of large language models in most of your lives is ChatGPT or Claude, or other large language models that have probably been a great deal of fun for you to play with, to see what these powerful tools enable us to do by virtue of learning from the mass of historical data and knowledge to predict the next word, which is effectively what they do, and to generate letters and memos and summaries of literature and sonnets that you want to provide as a holiday card to someone. It's an absolutely extraordinary technology that's been developed, and it's a preview of both advances in artificial intelligence and what some call artificial general intelligence as the next technological frontier. Now, this is a new technology that gets rolled out like so many others. First, it's rolled out in a kind of experimental way: let's see how people interact with it. OpenAI, when they rolled out ChatGPT, didn't expect it to take off like wildfire. They really were positioning themselves to kind of learn about its use cases and some of the risks that it posed to society. What happened in fact was that it generated massive enthusiasm overnight. And in an effort to take advantage of that enthusiasm, OpenAI scaled up its compute capacity to deliver this product that people were so enthusiastic about to large numbers of people almost immediately.

What happens when such a technology is rolled out to the world almost overnight? Well, I'm a parent of two teenage boys, and so one of the first things that happens is that Silicon Valley rushes to commercialize these technological advances and tries to solve for pain points that we know are important, especially for young people. Like, do I want to read Shakespeare tonight and write that term paper for tomorrow? Because if a tool can offer flawless grammar and effortless writing at a moment's notice, all of a sudden this really difficult thing that all of us had to struggle with in school, the ability to read and interpret, to express our thoughts, to communicate them in compelling ways, all of a sudden is something that can be solved for with a new technology. Overnight, this becomes a challenge for hundreds of thousands and millions of teachers who, without any advance warning, are now teaching in a classroom being transformed by technology, without free time, without additional resources, without that same innovation potential being focused on how to help people use this new technology in responsible ways. Instead, they have to figure out how to do it in the classroom in real time, at that moment.
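To make the "predict the next word" description above concrete, here is a deliberately tiny sketch that learns next-word frequencies from a toy corpus. Real models such as ChatGPT are neural networks trained on vastly more text, but the underlying objective, learning a distribution over the next word given the words so far, is the idea being gestured at.

```python
from collections import Counter, defaultdict

# Toy "language model": count, for each word, which words tend to follow it.
corpus = (
    "to be or not to be that is the question "
    "to err is human to forgive divine"
).split()

next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` seen in the corpus."""
    followers = next_word_counts.get(word)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(predict_next("to"))  # -> "be", the most common word after "to" in this corpus
```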
Now, we may see ChatGPT and other large language models become the calculator of the future, but if they do, we need to get there in a responsible way that helps us adapt and evolve our educational infrastructure to take advantage of these new technologies and to realize their benefits. That's a micro example of a consequence of this new technology. At the more macro level, if you're spending any time in Silicon Valley, you've probably heard the expression p(doom). And if you remember back to your economics or statistics class, the p in this case means probability. This is the probability of doom. So one set of consequences of large language models plays out in our educational environment and in our classrooms. But at a much more macro level, those who are behind these technologies are interested in far more significant societal consequences, which go by the shorthand doom, right? This is the existential risk that these new technologies might pose to society, the kind of runaway potential, separate from human oversight and accountability, that these new technologies, because of their capability, may simply outpace the ability of human beings to control them. So it's then no surprise, given that power that's being unleashed, that someone like Sam Altman, the CEO of OpenAI, will show up at Congress, will show up in multiple capitals around the world, and say, we need to guard against the potential risks of these new technologies. And this is the moment at which we find ourselves, where governments are being asked to step up in important ways, to think with industry about the appropriate regulatory guardrails.

And where are those regulatory discussions happening? Well, the bottom right-hand picture is Washington, DC. Partisan gridlock has really stood in the way of meaningful regulation of the tech industry in Washington for decades. While most governments around the world have already adopted some form of national privacy law to deal with the amount of personal information that large technology companies now have about us as individuals, the United States is one of the only major countries that doesn't have a national privacy law. So Washington, DC simply hasn't been an important regulatory capital. The important regulatory capitals have really been Brussels in the lower left, Beijing in the upper right, and Sacramento, California in the upper left, right? These are the environments in which we see most of the proactive forms of thinking about regulatory oversight unfolding.

Before I turn back to the historical origins of this moment, what can we expect of this next administration? I think it's hard to say. On the one hand, the incoming vice president, JD Vance, has been an enthusiast for raising and addressing concerns about the concentrated economic power of a number of large technology companies. In fact, he's been an enthusiastic fan of Lina Khan, one of President Biden's chief antitrust enforcers. On the other hand, there's tremendous enthusiasm for the Trump administration that comes from a set of Silicon Valley venture capitalists. One of those venture capitalists is Marc Andreessen. Marc Andreessen is the author of a manifesto, which I'll speak about in a moment, that communicates enthusiasm about the potential of unfettered innovation to advance our society and to improve the human condition. Where we end up in this conversation is very hard to say at the present moment.
The Trump administration appointed a new AI czar recently, but again, it's hard to know, between appointments at the White House, appointments at the key regulatory agencies, and appointments at the Department of Justice, how it is that the Trump administration will pursue its technology policies going forward.

So let me turn from the present moment really to the book, and the book offers a story with historical perspective about how we find ourselves in this present moment, benefiting in extraordinary ways from technological advances that have changed how we work, how we live, how we relate to one another, but also grappling with a set of social consequences that from many people's perspectives are so damaging that they demand a response and potentially government action. To think about this moment, you really need to begin in some sense with the founding optimism and disruptive potential of the computer science age. This is an excerpt of the Techno-Optimist Manifesto. It was written by the venture capitalist Marc Andreessen. He was the founder of Netscape earlier in his career, one of the real pioneers of the World Wide Web and the internet moment. And he felt a couple of years ago the need to re-express in a public way what it is that the potential of technological change offers to all of us in society. You can see it in his language: our civilization was built on technology, our civilization is built on technology, we can advance to a far superior way of living and of being, it's time to raise the technology flag, it's time to be techno-optimists. The view here is that scientific advance, paired with the power of venture capital and the market, is what enables these step-change improvements in the human condition. And when we raise concerns about the potential social harms, when we think about constraints on this innovation potential, from this perspective those concerns stand in the way of the transformations of the human condition that technology makes possible.

Of course, this is one view; it's one view in an active conversation about how to handle technological change. Another view, and the view that we articulate in the book, is that the technological changes that make possible these tremendous improvements in our lives are like so many other market activities: they solve for, you know, a specific problem. They generate products that we consume, but they may generate what economists call negative externalities, right? Ways in which the actions of market-driven and profit-motivated firms produce benefits, but also generate harms that often you need to solve for in important ways through some sort of government oversight or regulation. The classic example of an externality, of course, is pollution. A manufacturing process powers all of the things that we consume, but it may in the process generate byproducts, byproducts in the water, byproducts in the air. And ultimately companies don't price those negative externalities into the price of what you consume. And as a result, you need to realign their incentives, and you can do so through taxes, you can do so through regulation and oversight, to make sure that we benefit from those products but also breathe clean air and drink clean water. That dynamic of market-driven innovation and change, but also a set of negative externalities, is not unique to the manufacturing industry. It's not unique to any industry at all. And in fact, it's part and parcel of this moment of computer science-driven technological change as well.
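The pollution example lends itself to a small worked illustration of the tax logic described above. All numbers below are invented for the sketch: a firm maximizing only its private profit overproduces relative to the social optimum, and a per-unit tax equal to the external cost per unit realigns its incentive.

```python
# Invented numbers for illustration only.
PRICE = 10.0         # revenue per unit produced
COST_SLOPE = 0.5     # private production cost: 0.5 * COST_SLOPE * q**2
EXTERNAL_COST = 4.0  # pollution damage per unit, borne by society, not the firm

def private_profit(q: float, tax: float = 0.0) -> float:
    return PRICE * q - 0.5 * COST_SLOPE * q**2 - tax * q

def social_welfare(q: float) -> float:
    return private_profit(q) - EXTERNAL_COST * q

def best_quantity(objective) -> float:
    # Grid search over output levels 0.0, 0.1, ..., 39.9.
    return max((q / 10 for q in range(400)), key=objective)

print(best_quantity(private_profit))   # ~20.0 units: the firm ignores the harm
print(best_quantity(social_welfare))   # ~12.0 units: what society would prefer
print(best_quantity(lambda q: private_profit(q, tax=EXTERNAL_COST)))  # ~12.0: the tax realigns the firm
```

The analogy drawn above is that technology's social harms play the role of the pollution term: unless regulation or other guardrails make those costs show up in firms' own incentives, optimizing for private returns will tend to ignore them.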
A set of technologists designing products, leading companies, fueling these companies with venture capital, who wanna optimize technology's benefits, including for their own commercial returns. And what we would expect is for regulators, and regulator is just another word for our political institutions, our democratic institutions, to play a role in minimizing technology's negative externalities. But the problem of the current moment is that we really haven't built the muscle memory to think about the appropriate development and use of guardrails around new technologies. We're in a moment where for more than two decades we've been optimizing for technology's benefits without, in a meaningful way, developing regulatory guardrails to address technology's harms.

So why has that happened? The book argues that it has happened really for three key reasons. The first reason is that technology companies and those who oversee them and finance them largely come from the field of engineering, and the field of engineering and computer science is built around a mindset that we call the optimization mindset. In its mathematical representations, in its algorithmic representations, in the machine learning models that are built, optimization means something very specific: that you're gonna choose one end goal that you're solving for, and you're gonna design the most clean and efficient strategy for producing that end goal. And with the compute power and the innovation that we have in our own technologies, we can solve for these end goals more efficiently than at any point in human history. But the optimization mindset introduces a set of challenges. On the one hand, you might be solving efficiently for something that might not be very good for the world as a whole. We call that the problem of bad ends, bad goals, or bad objectives. We can think about the development of new weapons systems as systems that may optimize for the mass scale of human destruction, right? Nuclear weapons probably fit in that category. They solved a near-term problem in World War II, but ultimately we spent generations building guardrails around them. And we have to ask ourselves, was that the right technology to build in the world? Was that the right goal to optimize for?

A second challenge is the problem of finding measurable proxies for good goals. Facebook's goal in its mission statement is to connect people. That's an extraordinarily valuable and worthy goal in society. The measurable proxy for which Facebook has at times optimized is time on the Facebook platform. And there's a distance between what you're optimizing for, that measurable proxy, and the meaningful human connection that Facebook aspires to in its mission statement. But perhaps the most consequential tension that comes up with the optimization mindset is the problem of multiple and conflicting valuable goals. So take a technology like Signal or WhatsApp, which many of you probably have on your phone. It solves in its design for privacy. Privacy is a hugely important value in society. But if we only solve for privacy, our ability to solve for other values that we care about, say protecting children from harm or protecting society from coordinated criminal or terrorist activity, is constrained; these are values that we're trading off when such technologies get introduced into the world.
And that may be the right trade-off to make, but it's not clear that that trade-off should be made in the boardroom of a technology company versus in a broader conversation in our politics about what kind of society we wanna live in. So issue number one that gets us into the current moment is the problem of the optimization mindset. You take that mindset and then you pair it with a financing model. And that financing model is venture capital. And venture capital is an extraordinary financing model because it helped to overcome the challenges of legacy financial institutions by being able to funnel small amounts of capital to aspiring entrepreneurs and give them the flexibility to pursue product-market fit without having to worry so much about generating revenue quickly. But the challenge of the business model for venture capital is that it depends in important ways on achieving market dominance as quickly as possible in a small number of companies that realize outsized returns for your whole portfolio. So you're gonna spread a lot of money widely to lots of different startup founders, and as soon as you find something that hits, you wanna lock in that market dominance, you wanna realize those network effects and those scale effects. You wanna make it difficult for anyone else to compete in that landscape because