You and AI – the politics of AI

good evening everybody and a very very warm welcome to the Royal Society and to this which is the fourth in this discussion series called You and AI my name is Chris Bishop I’m the laboratory director of Microsoft Research in Cambridge my great pleasure to be the host for this evening’s presentation which I know will be very stimulating and very engaging as I’m sure you all know the Royal Society has played a part in some of the most fundamental some of the most significant and some of the most life-changing discoveries in scientific history and Royal Society scientists today continue to make outstanding contributions to science in many important and diverse research areas in April 2017 the Royal Society launched its landmark report on machine learning this called for action in a number of key areas over the next five to ten years to create an environment of careful stewardship that can help ensure the benefits of this technology are felt across society it also called for an informed public debate about the development of machine learning and how its benefits are distributed the You and AI lecture series builds upon the legacy of that report it aims to help open up a conversation with leading experts about how AI affects people’s daily lives its implications and how companies organizations and public institutions should respond now in the inaugural lecture in this series back in April we heard from DeepMind founder Demis Hassabis about the capabilities and the frontiers of AI following this we welcomed four world-leading authorities from four distinct but very different specialisms to provide an insight into how they currently work with AI and its potential for the future then last month Professor Cynthia Dwork of Harvard University and Microsoft Research opened up the conversation about the fairness and ethics of AI and that’s a topic that we’ll continue to explore this evening this series which is generously supported by DeepMind will help develop a public 
conversation about machine learning and AI what they are how these technologies work and the ways in which they may affect our lives in the future so tonight it’s an enormous pleasure to welcome Professor Kate Crawford Kate is a widely published researcher academic and author who has spent the last decade studying the social impacts of large-scale data machine learning and artificial intelligence I’m very proud to say that she’s a colleague of mine at Microsoft Research she’s based in our New York laboratory she’s also a distinguished research professor at New York University and a visiting professor at the MIT Media Lab in 2016 she co-chaired the Obama White House symposium on the social and economic implications of AI she’s published in academic journals such as Nature New Media and Society and Information Communication and Society and has written for the New York Times Harper’s Magazine and the Washington Post she has also advised policymakers in the European Union the United Nations the Federal Trade Commission and the city of New York she’s also the co-founder and co-director of the AI Now Institute at New York University which is a leading university institute dedicated to researching the social implications of artificial intelligence and related technologies in what is very much an interdisciplinary context so please welcome Professor Kate Crawford good evening everyone it is such an honor to be here I’d like to say thank you to Chris for that generous introduction and thank you to the Royal Society for inviting me to be here tonight and a particular thank you to all of you for turning up it’s great to see so many welcoming faces here in the room and in the overflow room hello to you as well so as Chris said I’ve really been researching the social implications of AI and related technologies for a long while now but this decade has been a particularly momentous one it seems to me that we are essentially at a crucial inflection point where we’re starting to see all of 
these technologies move further and further into everyday life and it’s basically a commonplace now to

say that AI is touching healthcare and criminal justice and education and finance but what is less commonly observed is that that is happening at the exact same time as the political world is taking a sharp turn to the authoritarian side and the combination of these two forces I think represents some very difficult issues for the field so tonight I want to talk about the relationship between politics power and artificial intelligence how power works in society how it’s centralized and how AI is amplifying some of those forces I also want to talk about the role of science because of course we’re here at the Royal Society in both creating and critiquing this type of power and what we can do to address some of these darker dimensions but let me begin with a story so back in February this year researchers from the universities of Southern California UCLA and Lincoln Nebraska presented a paper about predictive policing they claimed that they developed a system that could automatically classify crimes and in particular whether or not a crime was gang-related and they did this by using a whole dataset harvested from the LAPD particularly the data on gang members now as some of you know this data is notoriously skewed and it’s riddled with errors in the CalGang database for example there are 23% of people who’ve been shown to have no relationship to gangs at all and in fact there are dozens of babies listed in CalGang as active gang members interesting yeah very busy furthermore most people who were added to that database have never been charged and once you’re on it there’s no way to remove your name yet the researchers used this list as their definitive ground truth for training their predictive gang member detection system so people in the audience when this paper was being presented started to clue in that this was a bit of a problem and so people were sort of raising their hands and saying don’t you think there are some dangerous biases in that data that might 
propagate into your model one of the authors just stopped and he replied well I’m just an engineer and it’s a curious phrase just an engineer because it recurs time and time again in the history of the 20th century and I want to suggest that we’ve reached a point in AI development where this separation from responsibilities has become the norm but that we can no longer afford it because AI has now moved from being a purely theoretical discipline to one that is being applied at scale companies like Axon in the US for example right now are claiming that they’re using their AI lab to predict future criminal activity and to use data from police body-worn cameras which right now are everywhere in the US to tell if somebody has a criminal record immediately and then to detect the emotions on their face and to take action based on that so we’re at a time in history where our technical papers are very quickly being built into systems that don’t just reinforce long-term structural bias but they’re automating it in ways that are both invisible and unaccountable now frankly this shouldn’t be a particularly controversial thing to say one of the earliest pioneers in AI wrote decades ago that we ran this risk of being so seduced by the potential of AI that we would essentially forget or ignore its deep political ramifications this is Joseph Weizenbaum he was the man who invented Eliza back in the 1960s this was the first chatbot ever and it was also one of the first programs capable of essentially attempting the Turing test now Eliza was actually very simple she was just using pattern matching and scripts to try and convince people that they were having a conversation with her some of you might actually have engaged with Eliza put up your hand if you’ve ever sort of chatted yes several people okay you remember Eliza and the amazing magic trick of Eliza was that people thought that she really understood them they were absolutely convinced that this was an intelligent machine 
well this frightened Weizenbaum he saw that as a type of what he called powerful delusional thinking about the power of AI his concern above all was that AI systems would allow engineers to put distance between themselves and the true human cost of the systems they designed so for example a war planner would look at the loss of human life as merely a series of probabilities so early on Weizenbaum saw this over-reliance on technical accuracy at the

expense of social implications well we’ve had 60 years and over of AI development and we still haven’t learned that lesson the field has worshipped at the altar of the technical prioritizing it over the social and the ethical and frankly this is a driving reason why I and my colleagues established the AI Now Institute at NYU its prime directive is to study the social and political implications first rather than always prioritizing the technical so what I want to convince you all tonight is that these fundamental social challenges are profound and important and urgent and we need to address them with the same commitment and rigor as improving algorithms and optimizing technical systems so my talk tonight is going to address four key themes with some you know additional physical humor that we can add in first of all I’m going to talk about this thing called AI and what it means then I’m going to talk about bias how it gets into systems and why there are no easy tech fixes and then I’m going to talk to you about the big picture the politics of classification itself and the political work that AI systems are doing right now so first of all what is this thing called AI well it’s interesting how often this term is used and how few people actually want to define what it is of course AI is a very hyped term right now but it has a very long history it goes all the way back to the Dartmouth conference in 1956 but I want to suggest tonight that AI is three things it’s technical approaches it’s social practices but it’s also industrial infrastructure so the technical approaches of AI have changed dramatically with the decades from symbolic logic and expert systems towards what is currently at the height of fashion broadly known as machine learning now even machine learning itself is a constellation of technical approaches like deep learning and imitation learning and reinforcement learning but most simply put we can think of the current state of the art of AI as 
a grab-bag of techniques from statistics and gradient-based optimization algorithms the fact that we use the word intelligence inside artificial intelligence I think is something of a trap in fact I think I agree with Weizenbaum on this that sometimes we mistakenly assume that these systems are actually in any way similar to human intelligence machine learning approaches right now to be honest are very very far from intelligent but they are good at detecting patterns clustering optimization and making predictions across vast datasets so secondly AI is also social practices who works on these systems who decides what problems we should prioritize and how humans will be classified that powerfully shapes the type of AI systems that we have today just as much as algorithms do it also matters who’s in the room sad to say the AI field is extremely demographically skewed right now the top five tech companies in the world are overwhelmingly male dominated and they come from similar socioeconomic and education backgrounds in computer science and engineering and this creates somewhat of a monoculture and that shapes the kinds of problems that AI is choosing to address and which populations ultimately are best served by those tools so thirdly AI is also a massive industrial infrastructure this here is an exploded map of a single Amazon Echo you might have one in your living room for example this was created by Vladan Joler and I and we made this to essentially just visualize what it takes for you to clap your hands and say Alexa what’s the weather what happens of course begins all the way over on the left from the smelting of rare earth minerals and then the refinement and production that takes them all the way into the actual building of these single units but then we also need to think about the large-scale data centers in places like the American Midwest that are being used to analyze your voice figure out what the command is try and send you back the relevant piece 
of information that one moment of convenience is calling into being a truly planetary computational network now the resources to build and maintain a network like this are gargantuan and vanishingly few companies can really do AI at scale like this so even when you hear that there’s this thriving AI startup culture you should know that they’re all in the main hosting their systems and buying their compute cycles from the same small cluster of around half a dozen companies so AI is a profoundly concentrated industrial infrastructure so at each one of these layers the technical the social and the infrastructural AI is rearranging power and it’s about configuring who can do

what with what and how knowledge itself works and I think this makes AI an inherently political tool and essentially that means we all need to give it more ethical weight one of the real moments where the field has been forced to contend with its social implications has been this bias debate so I’m gonna turn to that now essentially it was around oh gosh six to seven years ago now that I first started researching and writing papers about these issues of bias and skews in big data do you remember big data yeah yeah well obviously it was interesting actually Demis in his talk noted that big data was actually more of a problem than a solution and he’s absolutely right it also had a series of very serious skews and biases built into it and it has become the raw material of AI but it’s brought all of that baggage with it so you’ve likely seen lots of articles like this recently all about how bias and discrimination is being discovered in AI systems everything from women being less likely to be shown high paying job ads through to Amazon Prime not delivering to poor neighborhoods to racial disparities in both policing and education AI systems and in a way this shouldn’t really surprise us because machine learning works by discriminating it detects how your patterns are different to mine how my geolocation is different to yours the differences in our social networks and then uses these particularities if you will to make predictions about how to influence us from what it can try and sell us what products we might be interested in to how much your health insurance will cost to who should get out of jail all of these systems already exist and they’re trained on data that informs them on how to make those discriminations but often that data is very skewed for example if you do an image search on CEO this is what you get yeah notice a few patterns emerging there it’s a lot of white guys in suits and what’s interesting here of course is that these images reflect the 
images that have been gathered from the web and also from stock photography which is a major thing here and then AI systems reflect them back to us and then people click on the top results and we start to see this powerful feedback loop actually when we ran this test about six months ago in the lab we were really curious who the first female CEO would be any thoughts who do you think it would be that would be a reasonable suggestion yes in actual fact CEO Barbie yes she was the first not a great look but actually this isn’t a new problem back in 1972 there was a Playboy centerfold by the name of Lenna a group of computer scientists brought a copy of Playboy into the lab that day and decided to use her centerfold image to test their system so remember when I was saying that AI is also about social practices and who’s in the room and what their interests are yeah yeah that tells you something she has now become the single most used image in computer science history you will find her everywhere from machine learning papers to all sorts of image processing systems now very few people actually know that this sort of foundational image of computer science is actually a frame of porn she is in fact Eve in the garden of AI and in some ways she makes me think that we also have a type of original sin in our technical systems that we don’t think enough about the social context of the data that we use and the tools that we build and this can come back to haunt us so how does this just keep happening well to give you a sense of that I want to open the hood on AI systems and give you a sense of how they’re trained so for example this is a benchmark training dataset which is used in machine vision it’s called Labeled Faces in the Wild and training sets like these are ultimately how we teach AI systems to see the world we harvest millions of images and then train a system to recognize patterns in other words they’re seeing human culture through the lens of our past 
now Labeled Faces in the Wild has around 13,000 images but it also has some notable skews it’s around 78% male and 84% white so those are the people for whom a system trained on this will work the best now have a guess who the most represented face is anyone be brave who would it be oh yes you know fantastic we have an expert in the room and George W Bush only makes sense when you know that these images came from a prior training set called Faces in the Wild which was generated by scraping Yahoo News between

2002 and 2004 so it’s no wonder that W is everywhere because as we know and this week painfully so presidents get a disproportionate amount of news attention my apologies but it’s also a reminder that datasets reflect our social hierarchies and our structures of power let’s look at another benchmark training dataset this is AVA and it’s made of hundreds of movie clips and it was designed so that machines could really detect human actions and human gestures like picking up a glass or sitting in a chair and they chose movies because it was believed that this would represent lifelike activity well if you browse through a category like playing with children for example all you will see is women all women apparently men you never play with children not a thing but don’t worry you are in there in the kicking a person category yeah that’s where you’ll find all the dudes and so ultimately en masse this could concern us the cultural lessons that are being learned here may not be the ones that are going to best serve us and I think it’s making the most stereotypical versions of human culture the basis of the systems that are training our future and this is ultimately creating a standard training set for life and it’s a deeply normative vision and of course these bias problems don’t just plague images it’s in text as well the word embedding models that are used in machine learning in everything from image captioning to automatic translation are having lots of problems as several research papers have shown including some of our colleagues words like genius and tactical are associated with male and terms like homemaker and crafts and modeling are associated with female and those biases also turn up in sentiment analysis when Google’s natural language API was released if you typed in words like Jew or gay it was associated with negative sentiment but if you typed in phrases like I’m straight or even white power you would get positive sentiment and the scale of this problem is now 
being really acknowledged by leaders in industry in just the last year we’ve had Mustafa Suleyman one of the founders of DeepMind who’s in the room the CEO of Microsoft Satya Nadella and also John Giannandrea at Apple have all called this a core problem of the field so we’re starting to see a research field crop up on fairness and machine learning I love this chart by Moritz Hardt because you can see that it’s gone from just a few of us thinking about this issue to suddenly this moment in 2016 where it’s we’ve got a real problem and this surge of interest is of course completely justified because machine learning systems are starting to impact millions of people every day so now we have all these computer scientists working on fairness and bias issues so can’t they just solve this problem with new algorithms and better data well I want to suggest to you that thinking about bias in this narrow way is not going to be enough in fact as someone who has been working in this space for quite some time I’m going to be really honest with you tonight I’m worried that some of these narrow technical approaches could be making this problem a lot worse machine learning researchers are sort of misapplying terms like fairness and discrimination and inclusion basically just to refer to math to the statistical performance of a particular predictive model so in recent research that I’ve been publishing with my colleagues Solon Barocas Hanna Wallach and Aaron Shapiro we’ve basically looked at all of the computer science papers to see what the dominant techniques are for addressing bias so these are three of the big ones scrubbing to neutral is very common the idea here is that you just delete the biased associations and this is certainly what happened with Google’s sentiment tool as soon as the news stories broke that these somewhat shocking phrases were receiving these kinds of sentiment analyses they immediately deleted those associations and now you’ll see them all 
appearing as neutral but that doesn’t solve the problem given that many less obvious gender and race biases still remain and there’s an even bigger question we should ask here whose idea of neutrality is at work here do we assume neutral is just the world as it is today does that seem neutral to us or do we try to account for the fact that we have a very long and complex history that has actually brought us to this place where many populations have experienced centuries of discrimination this problem also comes up with the CEO example that I shared with you if you look now you’ll see that the two big search engines have actually tweaked the results and you’ll see a sprinkling of more diverse faces but how do you decide who should be represented there do you go to demographics do you say okay what

percentage of female CEOs do we have okay it’s shocking but let’s just stick with that or do you try to say we’ve got a problem here let’s try and tweak it so that we have the version of society that we would like to see knowing that these images produce their own feedback loops well whatever you decide as you think about it right now know that those decisions are ultimately political decisions but they’re rarely acknowledged as that in the industry here’s another example there have been a whole lot of papers recently trying to develop a fairer risk assessment algorithm system for the criminal justice system in the US now the technical community in the main has just accepted that predictive risk scoring is a good way to reform criminal justice and they’ve been focusing on these quite narrow computational approaches to make them more fair so that populations in different races will be treated the same but different groups of course are not policed or charged in the same way or even treated equally by judges so treating everyone the same in these systems can actually produce this problem which is that parity is not justice and what’s worse machine learning approaches are dependent on metrics so the sort of subjective issues that a judge might bring to bear all drop out of the system and finally there’s no due process defendants can’t query these systems in the way that they would other forms of evidence so the net result is that some of these ML fairness algorithms can actually still be very unfair now frankly at this point in time my real concern is that it also shuts down public debate about how these systems work in the sense that people are told oh this is purely a technical intervention it’s objective and it’s neutral and then somehow it’s seen as outside of the political this is another version of the just an engineer problem hey it’s just engineering it’s not politics for example a computer scientist said to me recently I don’t really understand anything 
about criminal justice but I know I can make this algorithm more efficient the question is when we’re working in a racially disparate system what are we optimizing for and might we be optimizing forms of injustice another really important paper recently was from PhD candidate Joy Buolamwini and one of my colleagues Timnit Gebru and they tested three face recognition systems and the results were stark all of the systems performed significantly better on men and they all performed worst on dark-skinned women so how was the study used how did people respond well all of the tech companies said we can fix this we can address it by widening the dataset that we’re using so they did that and they’ve made considerable improvements but you might be getting closer to parity in the detection of faces but it’s done by harvesting more and more images of people from minority groups these are the same groups of course who are the most exposed to surveillance to deportation and to over-policing so we just made those people easier to track and easier to surveil so I think there’s a kind of confusion that’s emerging here equal surveillance is standing in for equality and we’re essentially working with these tools to make them better even when we know that they’re actually causing disproportionate harm to marginalized groups and I’m not sure if you saw this it was a very big deal in the US recently but the CEO of the facial recognition company Kairos said that he would no longer sell facial recognition to law enforcement because he said and these are his words it opens the door to gross misconduct by the morally corrupt pretty strong words from a CEO but right now we don’t really have any regulation against these tools so as Cynthia Dwork said earlier in this series fairness is a very hard problem and at the moment the attempts to eradicate bias are very narrow and what’s worse we’re seeing the creation of a subfield of computer science where people are only 
trained to think about algorithmic accuracy and performance but they’re also having to intervene in our most sensitive social institutions personally I think we need to increase the safeguards around things like facial recognition predictive policing and all of these forms of criminal risk assessment because frankly optimizing unjust systems can cause more harm we also need to look at this broader context of how a tool is being used and by whom the little heuristic that I suggest to people is this when you’re working on a system does your technical approach put more power into the hands of the powerful if so you might be deepening the problems of inequality so right now we have this dominant approach to bias which is that it’s a bug and not a feature and we can just remove it but what if there’s actually something deeper going on classification and clustering are at the heart of what it means to be doing machine learning today whether it’s identifying a person’s gender or race or assessing their resume or giving them a risk score what if these things by the very nature of classifying a person are introducing a

deeper harm than just bias so to understand the danger of this I think we have to go back into the history of scientific classification and I want to take you back to this guy in the 17th century he was in fact one of the founders of this institution the Royal Society this is John Wilkins he was an English scientist and clergyman and he was fascinated with classification he in fact developed a whole new language that he said could classify the universe into 40 categories in some ways this actually reminds me of the early approaches of symbolic logic in AI although weirdly no one seems to have claimed the system from 1668 as being an AI system but I think there’s something in it a century later the focus of classification had zoomed in from the universe to the human face we start to see the emergence of these pseudosciences of phrenology and physiognomy as ways to classify people based on everything from their face to their skull to their nose shape now physical characteristics were being used to classify people on everything from their intelligence to their criminality of course the most damning assessments were reserved for women and people of color it was indeed a classificatory system that really served the people who were already in power but large-scale human classification doesn’t really accelerate until 1880 when a young man who was working at the US Census Bureau called Herman Hollerith was watching a train conductor stamp tickets on a train and he thought hmm I could design a punch card system for tracking human characteristics and I’m sorry but this is where I have to break Godwin’s law and mention the Nazis because of course it was the German government who bought Hollerith machines en masse from IBM in the 1930s and they used them extensively to do massive racial registries of the population marking out everybody who was Jewish or Roma and other ethnic groups frankly they couldn’t have done what they did without the scaling power of 
the Hollerith machines to classify us into the sort of us and them categories but now we’re starting to see physiognomy and phrenology get a rerun in AI research in 2016 some of you might have seen this paper Wu and Zhang published in a machine learning conference and they claimed that they could predict if somebody was going to be a criminal based on nothing more than a photo of their face they claimed that this system was free of bias even though it was just trained on government images of criminals which is itself obviously a very skewed set this again seems to me much more like encoding bias in a way that’s serving vested interests imagine if an autocrat decided to use a system like this to simply just arrest people based on a photo of their face with no accountability or due process which brings me to one of the most troubling examples Michal Kosinski the researcher whose work was behind Cambridge Analytica’s early methods released a paper around 18 months ago colloquially known as the AI gaydar paper he essentially trained a neural net to detect facial features that would allow you to predict someone’s sexual orientation just from a photo now this paper I would suggest has deep methodological and ethical problems particularly when you consider that being gay is still criminalized in over 78 countries some of which apply the death penalty but I’d actually like to make a deeper point tonight machine learning as a field is making a conceptual error when we try to classify somebody’s race or gender or sexual identity based on their face it’s this confusion of categories to treat these sort of fluid relational social ways of being as though they are fixed objects like a cat or a chair and I think it can pose real classificatory harms now Kosinski justified this paper by saying that it’s really important to show people that this can be done with off-the-shelf machine learning tools but I would pose to you the opposite suggestion that we have an ethical 
obligation to not do things that are scientifically questionable that could cause serious harm and that could further marginalize groups just because it can be engineered doesn’t mean that it should be so these claims of predicting whether somebody is gay or a criminal are being made at this time of rising political populism and I think there are many people who would like to deploy unaccountable systems of power and control as the leading British intellectual Stuart Hall once said systems of classification are themselves objects of power so machine learning I think now has a fundamental issue that we’re classifying people based on their race and identity and character and their criminality just from their face that to me seems like repeating the errors of history of phrenology and physiognomy and then putting those tools into the hands of the powerful we have an ethical obligation to learn the lessons of the past
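[editor's note: the audit method discussed above, measuring a classifier's error rate separately for each demographic group rather than as a single aggregate score, can be sketched in a few lines of Python; the group labels and numbers below are invented for illustration and are not the actual benchmark results from the study]

```python
# Minimal sketch of a per-group error audit, in the spirit of the face
# recognition study described above. All data here is invented for
# illustration -- it is not the real benchmark.

def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    return sum(p == a for p, a in pairs) / len(pairs)

def audit_by_group(predictions):
    """predictions: dict mapping group name -> list of (predicted, actual)."""
    return {group: accuracy(pairs) for group, pairs in predictions.items()}

# Hypothetical results for a gender classifier, split by group.
predictions = {
    "lighter-skinned men":  [("m", "m")] * 95 + [("f", "m")] * 5,
    "darker-skinned women": [("f", "f")] * 70 + [("m", "f")] * 30,
}

report = audit_by_group(predictions)
gap = max(report.values()) - min(report.values())
print(report)  # per-group accuracy
print(gap)     # the disparity a single aggregate score would hide
```

[a single pooled accuracy over both groups would report roughly the average and mask the gap, which is why disaggregated reporting is the core of this kind of audit]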

so what is the political context for AI today? Well, of course, if we just take the narrow sense of the political as governmental politics, we've got plenty of evidence about how technical systems are changing democracy, be it from the outside, with companies like Cambridge Analytica and the others like them, to being built into the business of government itself. For example, ICE, the Immigration and Customs Enforcement agency in the US, has been using an algorithmic risk classification system at the border to determine whether an immigrant should be released. Now, last year ICE decided to change this system so that it recommends detention in 100 percent of cases; it is a predictive tool that only has one answer. It has also been reported that ICE agents went into those databases and manually deleted the numbers that identified which children belonged to which parents, just as they were separating families into the detention camps, which reminds us that even a set of well-meaning tools can be manipulated to terrible ends. And now ICE has commissioned something like this: a new machine learning platform to handle no less than ten thousand users simultaneously accessing the records of millions of people. So how does fairness in machine learning work here? Optimizing an algorithm for parity, or making sure that dark-skinned people are as easily recognized as white-skinned people, is not going to help us. Rather, I think we're facing some fundamental questions of human rights and due process and accountability. What rights, for example, would an asylum seeker have against a system like this? Well, this basically makes me think of one of the great philosophers of power, Hannah Arendt. I had the great privilege of visiting her personal archives recently, which included all of her handwritten manuscripts, and you can see all of the scrolls with her notes and edits; this is an example of it right here. And in these pages she observes that the questions that most haunted her generation after the Nuremberg
trials were these: how did it happen, how could it have happened, and how did we let it happen. She observed that the only thing that held back the totalitarians of the 20th century was that their technology sucked; they had things like lie detectors that were slow and basically useless most of the time. And then, back in 1968, she eerily predicts something very, very similar to the policing and border control systems that I've shown you tonight. She wrote this: the police dream that a gigantic map on the office wall should suffice at any moment to establish who is related to whom and in what degree of intimacy; this dream is not unrealizable, although its technical execution is difficult. If this map really did exist, not even memory would stand in the way of the totalitarian claim to domination; such a map might make it possible to obliterate people without any traces, as if they had never existed at all. Frankly, I can't think of a more evocative description of technology-driven deportation and disappearance. So what are we going to do about this? Well, I think AI may present some very real social challenges, but scientific communities like the one in this room have faced these challenges before: nuclear energy, genetic engineering, the list goes on. I think research has a real role to play here, and I think one of the biggest challenges of the next decade is going to be these social implications of AI. We also need to contend with something that I think another speaker in this series, Joseph Stiglitz, will talk about: that AI has the potential to rapidly accelerate wealth inequality; it gives very powerful and actionable information to the very few. So if AI is going to play this role in our social systems, I think it needs to be guided by research that will consider both the positive and the negative dimensions and how they're being unevenly distributed. And this is the reason why we founded the AI Now Institute, along with Meredith Whittaker, as a center to focus on these social
implications, but we're doing it in an interdisciplinary way. We're collaborating with computer scientists and engineers and lawyers and people across multiple fields, because I think that's what it's going to take to really understand how these systems work. Secondly, I think ethics is a really important part of what's needed here, and we've certainly seen several of the AI companies release ethics codes recently. And to circle back to Joseph Weizenbaum: he said that the principle is that one's responsibilities should be commensurate with the scale and the effects of one's actions. That means that, in his words, engineers have far greater responsibility than ordinary individuals, because their work extends not just to millions of people today but also to future generations. So I think we should applaud the people who are currently developing stronger ethical guidelines

for AI, but I also want to caution tonight that ethics can very easily be captured by a set of well-meaning but ultimately ineffective phrases unless they're secured to clear and publicly accountable forms of governance. Ultimately, without public oversight, real governance, and ongoing monitoring, AI ethics codes I think risk becoming a mask that is worn over the face of unaccountable power. So we need rigorous forms of industry and research ethics that are combined with real accountability. But I'd like to end tonight on the biggest thing that I think we could all do together, because right now I think we're plagued with a sense of technical inevitability: that sense that AI is basically unstoppable and all we can do is tweak the edges. We can try to remove some bias here and there, we can create better privacy guidelines, or we can write ethics codes, but these, I think, are always partial responses and very incomplete. What if we looked at this from the other direction and actually reversed this arrow, and asked what kind of world do we want, and how can technologies serve that vision rather than driving it? I think that's the society-wide conversation that we really need to have, because the decisions that we're making today are going to matter. It's worth remembering that back in the 1940s, when everything looked hopeless during the war, there was an engineer and comptroller general in occupied France by the name of René Carmille, and he did a very curious thing. He was responsible for the Hollerith machines in France, and for making sure that they were categorizing the population, but he decided to sabotage them. He reprogrammed them so that they could never punch information into the eleventh column of the punch card, which is where people would record Jewish identity, and in that one action he saved countless thousands of lives. This is basically the opposite of saying I'm just an engineer; this is using engineering power to put ethics into action. So if all AI is political, then
how it is engineered is an issue that's going to concern us all, and I think this is the critical time for the broader scientific community, and the community at large, to remember these lessons of power and classification from the last century, because as AI systems play an even bigger role in our day-to-day lives and in shaping our world, we want to make sure that it's a world we want to live in. Thank you. Well, thank you very much, Kate; as expected, that was a superb, extremely thought-provoking talk. I know there are going to be lots of questions this evening; we have a packed room, and downstairs we have an overflow room which I know is also packed to capacity, and we're going to try and take questions from the overflow room via a text system that we hope will work this evening. Before I do open it up, I'll take the liberty of asking the first question, really picking up on your final point there. I think most people would agree that we live in a world today that is itself far from perfect and rife with bias, and clearly there are dangers, as you've highlighted, that the power of machine learning could take those biases, amplify them, and take them to a very dark place. What about the more optimistic scenario, the power of technology to remove bias and not just avoid the catastrophe but actually take us to a very good place? Can you comment on that? Absolutely. I mean, I think many of the examples that I used tonight really come from this early field of fairness in machine learning, but I'm going to be honest about my concern: it's very real that if we keep this conversation just within the technical community, we are not talking to the right people. We're certainly not including an interdisciplinary space, but more importantly, we're not including the people who are being affected by these systems. And so I actually think we have a much greater responsibility to look at how systems work, look at their
structural history, and then think about what they should look like in future. There's a really good example of this in the US at the moment. There's this huge debate around how do we reform the criminal justice system, and there's been this weird sort of slippage in the argument where people say, oh look, we've got predictive risk algorithms, let's just use those and that will solve the issue of cash bail, for example. All of the other research, going back 30 years, has shown that some really simple things would actually address some of these questions. For example, for people who don't turn up to their bail hearings, there have been really good studies showing that if you give people transport, or if you give them child care, they will be able to show up. But instead we've skipped these more complex social responses to go straight to the technocratic, and I think that can be a real mistake, because in the end what you've done is

you’ve said to people well the criminal justice system will be fixed by technology and then people feel quite offered by that that’s like oh well I I can’t see how that works certainly the judge can’t see how it works certainly the defendants can’t see how it works so I think we come to a sort of an an impasse very quickly if we just say oh all of these problems of bias let’s let the computer scientists figure it out okay I think they’re all society really should be commended here for the not only this lecture series but actually many many activities which with public engagement and particularly in an AI machine learning where it’s being very very proactive so I’m delighted to see this public debate engagement let’s open it up there to questions what we thought we would do is maybe sort of take a couple a couple at once so we’ll start with one there and hi thanks very much I work at a big tech company called thought works and we’ve been trying to have this conversation and we’re calling our little campaign that building an equitable tech future my question is what do you think tech companies can be doing to engage with these issues should we be trying to speak to folks outside the tech community like the people most impacted which is what we’re trying to do we’re trying to like create some sort of ethical framework and the conversation but what can tech companies be doing to proactively engage with this please thank you great question well let’s just say it’s been a really interesting year for tech companies hasn’t it I mean we’ve seen some extraordinary shifts happening in terms of people inside tech company it’s really rising up and saying we’re not going to work on tools that we think cause forms of social harm I mean at Google there was a petition that over 4,000 engineers signed to say that they didn’t want to work on an object recognition system for drone killing for example I mean this is this is new we’re sort of really seeing I think a very very new set of 
conversations emerge within the companies, and that's certainly where these new ethics codes are coming from. But I don't think that's enough. I think we do have to include the public more broadly, and I think we do have to think about what those governance mechanisms are, because it's very easy just to have a set of ethical principles that sits somewhere in a drawer and has no real integration into how a system is being used or deployed, or even measured. I mean, tech companies are very, very large; from the image that I showed you today, they're sprawling, and sometimes the right hand doesn't know what the left hand is doing. So I think we have to think much more deeply about what those governance mechanisms will be, and I'm actually quite inspired to think about how we do that, and certainly I know that ThoughtWorks is one of the spaces where these conversations are happening, so thank you for doing that. Who's next? Yes, the front right; oh, just wait for the microphone, just so we can capture the question. You made some wonderful points about the active gang members and how the LAPD keeps a record of these people; you mentioned that 23 percent of it is incorrect, that 23 percent of people have not been shown to have a clear association, right, and that it included babies. Now, the 23 percent I can understand: given the inefficiency and the lack of professionalism in an organization like the LAPD, that does not surprise me. What does surprise me is the fact that they had poor babies in it. Now, I'm not a technical person, I'm an accountant, but I do understand that the first bit of data you will have about any person is the date of birth, and in some cases you might get a date of birth even for a baby. Yeah, and it's not just a few babies; they had 42 babies when this data set was actually audited. So it shows you how easily errors can propagate, and certainly in the case of gang databases the techniques have been very much based on social
network so it’s really who do you know and so people are guilty by association and it is kind of extraordinary that you’re guilty now by genetics and I think in that case it’s probably just a straight up error but either way it reminds me that it’s this combination of biases from an institution over time yes also just with straight up technical errors sometimes with bad system design and we that we basically implement a system that we really don’t understand how it’s working and this is happening a time and time again I mean my interest is in the next phase in machine learning how do we start thinking about things like pre-release trials how do we think downstream to think about the long term effects before you’re testing it on a live population I know this is something that some of the companies are really thinking about now because at the moment it’s it’s really much more of an experimental mindset and I think we can do better than that

so, you alluded to, but didn't name, the Asilomar and Pugwash movements: Asilomar was about DNA splicing, which was supposed to lead to everyone dying of cancer in a week, and Pugwash was the nuclear physicists refusing to work on weapons. But those came from the community themselves, very early on, so how is it we are failing in the community of tech people? Those were tech people too; what is wrong with us? Yeah, that's a really good question. I actually don't think there is anything wrong particularly with what's happening with the technical community, but I do think we've been insulated from these discussions about the social implications and ethics of these tools, partly because the shift in the numbers of people being affected has really just happened in the last, depending how you count, six to ten years. This is a very, very recent turn. I mean, obviously we've had the Asilomar principles on ethics, and in fact the ACM, the leading association for computer science, is about to release their ethical principles this week; the last time they updated them was 1992, which was before the web as we know it today, so you can imagine how much has changed in terms of thinking about the ethics of computer science. So I think there's some catching up to do, but I want us to go a step further than that; I want us to go a step beyond just coming up with principles. I think we have to think about the cultures, the social practices, inside these companies. We really haven't been fostering caution and thinking about social implications; it's been the opposite, it's been move fast and break things, right? And that, I think, has also been a real cultural aggravating factor that keeps ethics at bay, and I think we really have to do better now. Right, so we have a question from the overflow room. Yes, this question comes from downstairs; actually you've got three questions, so I'm going to give you all of them, OK,
you can answer all three at one time. The first one is: what can academics and researchers in industry do to strengthen the research capacity of low-income countries around the world? Second: there's no current regulation and we need more safeguards; how specifically can or should this be regulated, by states, by businesses, by somebody else? And finally: do we need a version of Asimov's Three Laws of Robotics? Good to see Asimov getting another run. Those are three really good questions. I think the first one, around how we actually start to reverse this concentration of power towards the wealthiest and most powerful countries, is going to be a really big one, because at the moment, of course, there are very few countries really developing AI systems. We could talk about the US, we could talk about China, we could talk a little bit about the UK, we could talk a little bit about what's happening in Russia, but ultimately the rest of the world is being treated as a client state. So the geopolitics of the development of AI is going to be key, and how we start to include people from countries that don't have the resources to be developing these planetary infrastructures is going to be seriously important. I know, again, there are some early programs to try and think about this, but certainly we're seeing some concern from the EU in terms of what's happening; we've seen France and Spain both release AI reports as ways to try and keep talent in their own countries, and we're really seeing the zombie return of the nation-state as a way to try and regulate these forces of AI, which is a way to respond to the second question. And then finally, I think we're going to need a lot more than just three principles. Certainly, as Chris knows, in all sorts of domains like healthcare we almost have to think in silos: what do you need in the context of health, what do you need in the context of criminal justice, what do you need in the context
of education, what do you need in the context of finance? All of these domains currently have forms of regulation, but they haven't really kept up with what's going on now, so how do we really work in those domains to make them safe, secure, and ethical? That's more the way that I think we can start to address those issues. Hi, hi there, excuse me. I work for a company called Methods, and we do a lot of digital transformation work, primarily with government, so we're kind of government and tech nerds, and we do a lot of work looking at how to enhance government services by accessing different technology capabilities. Thinking specifically about AI capabilities, my question is: where is the line where government needs to build its own AI capabilities versus buying them off the shelf, or something in the middle? Oh, this is a really tough one; it puts me right in the weeds, thank you. Look, the reason why this is really interesting right now is

that there is a huge race for talent. I'm not sure if you've all been following the stories, but basically engineering talent right now is commanding vast, basketball-player-level salaries, or soccer-player salaries really, and it's an extraordinary rush, to the degree that we're now really seeing pressure on universities and professors, with their students being hired away. We've really got a problem now in terms of this giant gravitational force that's pulling all of the talent into the big five, or five to seven depending how you count. So governments have a real problem. The first problem is how do you pay people when they're being signed away for these giant checks. The second problem is how do you create and foster a research environment where people can do their best work, and this is something where I think governments really can offer something different, and perhaps that could also be a new space for diversity; they could actually teach industry a trick or two there, which would be really helpful. But I'm not going to be overly optimistic with you: I think that's going to be a really difficult problem, and you're moving against an enormous amount of industry energy which is focused on soaking up all of that talent. Oh, you're next. Hi, thank you so much for that. My name is Lisa, and I'm working as an associate lecturer at Goldsmiths College. There's a lot of talk about engaging the public and getting them more engaged in these debates, and getting more of the public's opinion involved in the creation and deployment of AI, et cetera, but do you know of any successful public programs that have done this really well? I know Mozilla is doing some really good things with some of the work that they're doing, and the art fund that they've set up, but do you know of anything else that's happening, not necessarily in the US or the UK, but anywhere? Actually, yes, I do. I mean, I actually think this is an
area where I’m really excited and optimistic as well I mean at the AI now Institute we sort of collaborate directly with the ACLU they were one of our founding partners and the n-double-a-cp Legal Defense Fund to re and also the Leadership Council who’ve been doing sort of extraordinary work thinking about sort of civil rights civil liberties in the u.s. context and that is very much about getting the voices of affected communities into the room to think about how research should work but that’s research and the other question is how does industry do that and frankly industry just doesn’t have the history of doing a lot of that sort of work so I think there’s an enormous amount of responsibility that’s being put on both sort of that NGO sector but also on research institutes to try and do that work and I think we really have to lead by example because there isn’t a great set of a great set of stories you can immediately turn to but there are a lot of whatever say more ethnographic studies that are trying to do that work so I think it can be done but as you would know I mean that sort of community work is slow it’s not quick and it shouldn’t be people just coming in and extracting information and disappearing it has to come from the communities themselves so how those relationships build up over time I think has to be done in a very sort of authentic and long-term way but that’s how we’ll get there we’re very fast running out of time but I did promise somebody here who would have a question so I think yours will need to be the final question so no pressure I wondering you mentioned the question of how do we decide what the neutrality point is I was wondering what your opinion on that is and how we might go about answering that question oh yes I love that you’re ending on again a super tricky one my my feeling around this question is that at the moment we aren’t having enough conversations about what neutrality looks like it’s just something that’s happening and the 
tweaking of vast technical systems that are, for each one of you, sort of personalized to you, so you don't even really see how the whole beast is working; you're just seeing part of it. And the more that we move into deep learning techniques, even engineers don't understand how a system is producing the results that it's producing, so there are some real issues here around knowledge of how things work. What that means is that when we talk about neutrality, or erasing bias, I think we've got a real problem on our hands, in that we don't even have the skills to have the conversation, to say, well, actually, that's a political decision, to remove these things or to add these things in. So that's really part of my interest tonight: to start reframing this conversation away from AI being purely a set of technical, neutral decisions to being a profoundly political and social set of decisions. And the only responsible way, I think, to really start doing that at scale is first of all to start making it clear how those decisions are being made, share that with the public, make it part of the discussion, but also really look to

the work where people are saying, oh, if you change it in these ways, the following things will happen. There's a sort of disconnect right now between the social researchers, shall we say, from the human sciences, and the technical sciences. So there are various things we can do, but again I think this is going to be one of the hardest and most controversial problems. Yes, this is so interesting and engaging; there was a hand up here that was very keen, so we can take just one extra question. Thank you so much, Professor Crawford, for making the link between power structures and classifications. You suggested that John Wilkins actually started the trend. Oh no, no, no, we can take the trend all the way back to Aristotle; I just like John because, you know, this is his house, we're in the house of Wilkins. Fantastic. So Chris Dixon of Andreessen Horowitz had written an article called How Aristotle Created the Computer, in which he talks about logic and some of the challenges within logic and language structures, and subsequent to that Stephen Cave of Cambridge University wrote the article A Dark History of Intelligence, yes, great, in which he talks about essentially Aristotle's dualisms being non-value-neutral, in other words assigning male or female, intelligent or stupid. Now, if we fast forward to the present day, we actually see the implementation of those dualistic mechanisms in swipe left, swipe right, in thumbs-up, thumbs-down, and also in tools such as implicit association tests. So my question would be: what's the best way to encourage engineers to pretty much, you know, overhaul that entire dualistic mechanism in all of UX? All right, we've got five minutes, let's do it together. Well, let me tell you a good story first, and then we'll talk about the complexity. There have been changes to these dualistic systems: back in 2013, I think it was, on Facebook you could only be one of two genders, you could be male or you could be female; two
years later, after the Indian High Court decision that everybody has a right to choose their gender identity, they had 72 categories. So you might say that's a pretty extraordinary shift towards exactly what you're talking about, this move away from binarization. However, it's still a pretty arbitrary number, and you could have done a lot of different things: instead of coming up with a drop-down menu of 72, you could come up with a free text field, you could have no gender at all. These design decisions, these UX decisions, actually have profound ongoing impacts on populations. What I'm more worried about now, though, is these systems that are automatically doing classification, and I just came from a conference where papers were being given saying, oh look, we can easily classify your race by the width of your nose bridge and the size of your lips, and I'm like, wow, this takes us right back to physiognomy; this is exactly what was being done, and the next step, of course, in physiognomy was saying, you have large lips and curly hair, you're probably less intelligent. So those physiognomic and phrenological histories are deeply connected to how we do this kind of classification, and I'm really worried that what we're seeing right now is this rise of scientific racism in Silicon Valley, in the actual companies that are designing these systems; we're seeing the return of this type of, I believe they're calling it, race realism. That terrifies me, and I think that's why tonight I really wanted to say we need to talk about this, we need to talk about these studies that are trying to automatically predict criminality or gayness or race. Race is itself a social and cultural construct; we're acting as though it's a clear biological reality, and it's simply not, and we have 60 years of critical race studies to show us that, but it's not reaching the machine learning community. So again, this is this
crucial need for interdisciplinary conversations, and I think some very real public conversations about how we feel about being classified by these systems. But thank you for the question. Thank you very much, Kate; that was a tremendous, not only extremely stimulating and thought-provoking lecture, but one clearly scratching the surface of an incredibly important domain. I refer to myself as an optimist and enthusiast around these technologies; one of the areas I'm very interested in is the application of

machine learning in healthcare, and I do believe that the potential in healthcare and many other domains to bring societal benefit is absolutely enormous. One of the dangers that I see for machine learning and for AI is that if we don't have a properly informed and rational public debate around these very important, very thorny issues, we may end up in a world rather like we ended up with genetically modified foods, making decisions on a basis other than rational and informed debate, and I think that carries a very real risk that we'll throw the baby out with the bathwater and fail to achieve the amazing societal benefits this technology can bring. So I think events such as this evening are incredibly important in having that debate and getting these very difficult, very thorny issues out into the public so we can all participate, because you're right: it's not a problem for just engineers or researchers, it's for everybody to participate in. So I'm really delighted that you could join us this evening. A very big thank you to you, and a very big thank you to everybody who's participated and joined us this evening; an event like this is completely meaningless without you, and the fact that you've taken time to come here and join in the discussion is absolutely central. So a big thank you to all of you here, to all of you in the very busy overflow room downstairs, and to anybody watching online or who will watch the recording. The next event in this series will take place here in the Royal Society on the 11th of September, when we'll welcome the renowned economist and Nobel Prize winner Professor Joseph Stiglitz, who is going to talk about AI in the workplace. I do encourage you all to come and participate in that event as well; I'm sure it will also be extremely stimulating. But finally, let me finish by inviting all of you to join me in thanking Kate for a superb lecture