Artificial Intelligence: From Social Good to Ambient Intelligence (Google I/O'19)

[MUSIC PLAYING] YOSSI MATIAS: [INAUDIBLE] artificial intelligence and how it impacts everything from social good to ambient intelligence.

Let me start, actually, with a personal reflection, a recent experience I had in the mountains of Rwanda. I had the privilege to spend an hour with a family of gorillas in their natural habitat, and it was fascinating to see how they were interacting with each other. Clearly, they have their own language. And by the way, they didn't pay much attention to my presence there, as you can see.

But this reminded me of a famous gorilla called Koko. Koko actually learned a language: she learned a vocabulary of over 1,000 signs and understood some 2,000 words of spoken English. It took her 46 years to learn it, and she became quite a celebrity; she even made the cover of "National Geographic." This brings up an interesting question. Before Koko, many people would not have believed that a gorilla could learn to communicate. So what else can we teach to speak? How about computers? Can we teach computers to speak? Can we have a conversation with computers? How far can we take artificial intelligence in that direction? And if we do, what are the ramifications? While gorillas already have their own special language, computers we actually need to teach from scratch. Artificial intelligence is bringing some hope in that direction.

But before going deeper into that, I'd like to talk about another aspect of artificial intelligence. We already see it in many products in our lives. Here is one nice example that I like: we all take quite a few pictures with our phones, and if I'd like to find a photo in my ever-growing gallery, today all I need to do is search, say, for "birds in sunset," and it brings those pictures to me. These are actually from my own photo gallery, without my ever labeling them. In a way, I don't pay much attention to it, which is part of the magic of technology: when we stop paying attention, when it just works. We take it for granted, but it used to be magic that we can identify birds and sunsets without ever labeling them, without training on them ourselves.

This has become possible thanks to advances in machine learning and AI, most notably in deep learning and deep neural networks. Very simplistically, an image is given as input to a deep network; the pixels activate so-called neurons, which are essentially computer programs inspired by how we believe some brain functions work, and those activations propagate through the layers until, at the end, we get a classification, whether it's a dog or a cat. The network is built by training it on many examples, in a way similar to how we train ourselves to recognize a new object.
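To make this concrete, here is a minimal sketch, in TensorFlow (the library that comes up again later in this talk), of the kind of network just described: pixels go in, activations propagate through layers, and a class score comes out after training on labeled examples. This is only an illustration of the idea, nothing like a production model.

```python
# A minimal sketch of the idea described above: pixels in, activations
# propagate through layers, a classification ("cat" vs. "dog") out.
# Trained on many labeled examples. Sizes here are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. cat vs. dog
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(images, labels, epochs=10)  # train on many labeled examples
```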
So obviously, this technology has made huge strides over the last decade, and I'm sure you've heard about it many times recently. The beauty of it is that, while it can be used for many fun things, such as searching images in our photo albums, it can also solve some pretty serious problems.

One inspiring example is what it can do in health. Many of you have heard about diabetic retinopathy. This is a condition that may affect people with diabetes and, if not treated, can lead to loss of vision, sometimes to blindness. The good news is that doctors can diagnose it by inspecting an image of the retina. The bad news is that there's a huge shortage of eye doctors, which puts millions of people worldwide at risk of losing their sight. And the inspiring example here is research done a few years ago showing that one could train machine learning models to help with diagnosis, matching what experts can do and therefore complementing them. With that, all you need to do is bring a person to a scanning device, take a picture, have the system help out with the diagnosis, and save that person's eyesight. This is an image taken from [INAUDIBLE], a clinic in India, where a pilot is already running. So what started as research became a reality and is now actually deployed in places such as India and Thailand, which is very inspiring to us.
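As a rough illustration of the general pattern behind work like this (not the actual published pipeline), a common approach is to fine-tune a pretrained image backbone on graded retina photographs. The dataset, grading scale, and training details below are placeholders.

```python
# A hedged sketch of the general pattern: fine-tune a pretrained image
# backbone on graded retinal fundus photographs. Architecture choice,
# the 5-grade severity scale, and the dataset are illustrative.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False, pooling="avg",
                                         input_shape=(299, 299, 3))
head = tf.keras.layers.Dense(5, activation="softmax")  # 5 severity grades
model = tf.keras.Sequential([base, head])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(fundus_images, grades, ...)  # hypothetical graded dataset
```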
There was no information available on the internet. It wasn't until I called the mayor's office that I got hold of some actionable information. I then asked my Search team to put this information online for anybody searching for the same things I was searching for. This was actually one of our early launches of actionable crisis-response information, and later on we used it for dozens of other such situations. We have invested a lot in crisis response over the years, and I should mention that I was not alone in doing this.

What we see is that whenever there's a crisis, people turn to Google to search for information. During floods, hurricanes, and explosions, they look for information not only because they'd like to learn what's going on but because they'd like to take action; they'd like to know what they should be doing about it. To that end, we built a team and launched, a couple of years ago, a product within Search called SOS Alerts. Every time there's a big crisis, we try to pull together and provide the best information we can, so as to help people with questions such as: What's going on? Where is it? What should I be doing? How can I help? Since then, unfortunately, there have been a lot of crises out there: over 250 activations worldwide, and tens of thousands of public alerts coming from various governmental agencies directly to people's phones, with a total of over 2 billion views. In many cases, we hear about how helpful it is.

However, we also learned in the process that there are certain crises where we cannot provide a lot of help. In fact, one of the most devastating kinds of natural disaster is floods. Floods affect up to 230 million people per year globally and are believed to be responsible for over 6,000 fatalities per year. Some of those could perhaps be avoided if people had enough information, which typically they just don't have. And we see examples that when you do have information about floods, people can take action.

For example, here are testimonials from places where we did have information about floods and people could take action. And just last week we saw a very good example of how the governments of India and Bangladesh took action during Cyclone Fani. Because there was a pretty good prediction of where the cyclone was heading, they could act; based on years of preparation and great execution, they managed to mobilize many people and get them out of the danger zone.

But there are many other floods where this is not happening. The reason is that, for example, in the monsoon season, rivers start to rise, and quite often we know that there's going to be a flood somewhere, but we're not sure exactly where. And the difference between knowing generally and knowing exactly makes all the difference in the world. What we see here is a situation where the accuracy is not good: you know that a flood may be coming, but if you don't know exactly where it's going to happen, people are not going to take action, as we learned, and governments cannot take action, because there's too much area to cover. It's just not practical.

So the question is: can we get higher-accuracy predictions? As I'm going to discuss, this turns out to be possible. However, two years ago, if you'd asked me whether it's possible to provide this kind of flood forecasting, I would have said, I don't know, because there's a lot that goes into computing flood predictions. After some exploration, we figured out that we could actually do something about it, and we made some progress on flood forecasting.

The way we treat it is by first building very accurate digital elevation models of the terrain of the places we're trying to predict. This alone is a very interesting and important computational problem: taking aerial imagery, using machine learning to reconstruct the terrain, and then using machine learning to recalibrate it based on historical data. Once we have that, we can run thousands, sometimes hundreds of thousands, of simulations using physical, hydraulic models, where we try to anticipate where the water is going to go. To do that, we need readings of the water levels on which to base our computations, and here we have a partnership with the Indian government to gather exactly this data. And of course, all of this needs to be done per place, because the soil may differ, and we need to match it against past experience. So there's a lot going on in order to do that.
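To give a feel for why accurate elevation data matters, here is a drastically simplified, hypothetical "bathtub" inundation sketch: given an elevation grid and a forecast water level at the river, mark every connected cell that would sit below the water surface. The real system runs physics-based hydraulic simulations, which this does not attempt.

```python
# A toy "bathtub" inundation model: flood-fill all cells connected to the
# river whose elevation is below the forecast water surface. Illustrates
# why the digital elevation model (DEM) must be accurate; real hydraulic
# models simulate actual flow dynamics.
import numpy as np
from collections import deque

def inundation_map(dem, river_cells, water_surface):
    """dem: 2-D elevation grid; river_cells: seed (row, col) cells;
    water_surface: forecast water elevation, same units as dem."""
    flooded = np.zeros(dem.shape, dtype=bool)
    queue = deque(river_cells)
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
            continue  # off the grid
        if flooded[r, c] or dem[r, c] > water_surface:
            continue  # already marked, or high ground
        flooded[r, c] = True
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return flooded

dem = np.array([[3.0, 2.5, 2.0, 4.0],
                [3.5, 2.0, 1.5, 4.2],
                [4.0, 2.2, 1.0, 4.5]])
print(inundation_map(dem, river_cells=[(2, 2)], water_surface=2.1))
```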
what’s going on And they could even go to Map to go deeper and navigate themselves So this is very encouraging– to see how exploration of something that we don’t even know possible turned out to be something that is feasible And we’re very excited about the collaboration also with the government But I’d like to also point out another instance of how AI can be used by everybody And this is another very inspiring example These are two high school students, Sanjana and Aditya, who actually took machine learning models from TensorFlow And they built up small devices, using some parts they put themselves And they put them in various areas of their forest in California to learn And they trained the models to identify when there is a risk of fire because of deadwood and other parameters And by being able to alert about the risk, they could actually send notifications to Cal Fire, so they can prioritize going to check these areas and, hopefully, prevent some fires And the fact that you can have two high schoolers take, off the shelf, libraries and just use them and solve an important problem, I think, is a great opportunity that we have today And it’s just increasing So if we think about these two examples,

So if we think about these two examples, they are good illustrations of why we started the Google AI for Social Good program, where we look, on the one hand, at how to focus our research and engineering around these societal problems and, on the other hand, at how to build and support an ecosystem, with the realization that we obviously cannot solve all problems ourselves. In many cases, we're not even aware of the problems that could be solved. So the opportunity for everybody to pitch in, identify problems, and bring their own expertise is, I think, a great one.

Along with the program, we announced the Impact Challenge, where we invited everybody to apply for grants and support. We just announced the 20 winners, who come from all over the world and touch many areas: agriculture, emergency services, other societal issues, some of them suicide prevention. So we see a lot of variety. It was also great to see that we had over 2,600 applications from 119 countries, 40% of them without any background in machine learning. The hope is that for many of them, getting some mentorship and exposure will help accelerate developments in areas where machine learning can actually help. Indeed, in addition to the monetary funds, we're going to kick off support through a program we have called Launchpad, which is about mentorship of startups.

The notion of working with the community is not new. In fact, the Launchpad program has been running for a few years now, and we are focusing on the Launchpad Accelerator, which provides mentorship on machine learning and AI to startups all over the world. We've had over 200 startups go through that program, and thousands of startups have gone through Launchpad mentorship in general. Launchpad is a global program; in fact, it started experimentally in our office in Israel, and since then it has been running in Israel, in Africa, and in many other countries.

Speaking of Africa, it's great to see talent everywhere and to encourage it. I was inspired, visiting there last year, when we inaugurated a program called the African Master's in Machine Intelligence, and getting to talk with some of the students who represent the next generation of leaders. Wherever I go in the world, you can see that the talent, the entrepreneurship, the opportunity is everywhere, which is really exciting.

Let me also point out that when we think about collaboration and community, it's not only big companies, startups, and NGOs. Sometimes there's an opportunity for even broader collaboration. I'll highlight the collaboration we're having with the World Bank, the United Nations, and additional tech companies to try to address the problem of famine. It turns out that even today famine is a big problem worldwide; it can impact millions of people. And one of the single most important things we can do to help is to identify famine early enough, not wait months after it happened to take action. So we have a collaboration where we're trying to help develop machine learning models that can identify the indicators of food insecurity and thereby enable the World Bank and other organizations to act earlier, to divert resources before it becomes a real crisis.
So the opportunity to have collaboration at a large, global scale, by everybody, for social good, incorporating AI to do it, is, I think, pretty significant.

One last note on this before I switch gears: even on floods, where we're focusing a lot and have a sizable team, we understand that we can only tackle a small subset of the problem, and there are many parts of it where we don't necessarily have the expertise. So just recently we brought together some 80 researchers and people working for governments and other organizations to discuss how we could combine expertise in hydrology, in physics, and in machine learning, and see how, together, we can tackle these problems and make more progress.

And with that, I'd like to switch gears.

So if AI here is about how to solve societal problems and really impact people's lives, sometimes saving those lives, saving their sight, making them safer, there is another domain where I find that AI is becoming increasingly significant in our lives. I'd like to talk a little about conversational AI, about how we can have conversations with computers the way we have them among ourselves.

Going back to the questions raised by [INAUDIBLE], so many of us grew up on the aspirational, magical experience of Star Trek. Could we ever speak with a computer in a natural way, like we do with a person: just ask questions, get answers, get advice? We wouldn't need to learn a new methodology; we already have the interface, because that's what we do. In fact, this goes back to Alan Turing, a founder of computer science. He was already asking whether people can talk to computers much in the same way they talk to each other, famously known as the Turing Test. Just think about it: if you could have this conversation, it would obviously make technology accessible to many more people, because all you need is to ask a question or ask for something to get done.

This is obviously a hard problem, but we've seen some progress in recent years, much of which we don't pay much attention to. Today, many of us, when we want to search for information, just ask our phones. Many just talk to their Assistant or their phone to set an alarm, and it should just work, right? There are hundreds or thousands of different ways in which you might ask to set your alarm, and you expect all of them to just work. And of course, there are many other situations in which we'd like to use conversational technology.

The reason we're making this progress is speech recognition, which is becoming better and better, and speech synthesis, the ability to take text and read it back to the user. And there are additional ingredients of a conversation where we have made progress. For example, Smart Reply: given a message, we can make a guess about what the answer may be, just based on experience, which is sometimes surprising; some people are offended by the fact that they're a little predictable. Smart Compose is another example. In fact, I believe this technology was at some point an April Fools' joke, and today it just works, and it's pretty magical when it does.
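As a toy illustration of the Smart Reply idea just mentioned (the production system uses learned models, not word overlap), ranking a fixed set of candidate responses against an incoming message might look like this:

```python
# A toy illustration of the Smart Reply idea: rank a fixed candidate set
# against an incoming message. Naive word overlap stands in for a learned
# model; the candidates and contexts are hypothetical.
CONTEXT = {  # messages each canned reply tends to answer
    "Sounds good!": "want to grab lunch tomorrow",
    "Sorry, I can't make it.": "can you join the meeting today",
    "Thanks, see you then!": "confirming our appointment at noon",
}

def suggest(message, k=1):
    words = set(message.lower().split())
    scored = [(len(words & set(ctx.split())), reply)
              for reply, ctx in CONTEXT.items()]
    return [reply for _, reply in sorted(scored, reverse=True)[:k]]

print(suggest("can you join our meeting today"))  # -> ["Sorry, I can't make it."]
```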
These are two ingredients, but I'm also very excited about the technology that can actually speak text, text-to-speech synthesis, which has become very natural. For example, could we just listen to pages?

In fact, we launched last year a version of the Google application called Google Go, which works in India, Brazil, and other places where people sometimes have lower-end phones or constrained internet connections. We launched it with the ability to read out any web page you see in the browser; you can just listen to it.

[AUDIO PLAYBACK] – Up to 15% of slots from the new runway would be dedicated to improving domestic connections, and the government hoped that the increased competition with existing routes would give greater choice to passengers [END PLAYBACK]

So that's an example of just listening to a page in your Google Go application, and it also highlights the words as it reads. Obviously it's convenient, but for many people it turns out to be quite important, because they find it difficult to read the text, whether because of literacy issues or because it's in a foreign language. And we just announced using the same technology to solve related problems, such as reading out text that you see around you. You're on the go, on the street, and you'd like to understand what a sign says. Today, all you need, again with Google Go, is to point your phone's camera at it and hear it.

[VIDEO PLAYBACK] – Information for cardholders– all customers using old, proprietary, magnetic stripe cards should be advised [END PLAYBACK]

And the beauty of these technologies is that we can now start to put them together and connect them in various shapes and forms. For example, Translate used to be science fiction, but today we just expect it to work: you open a web page in a different language, you hit Translate, and you see it. It's pretty good; it still has room for improvement, but you get the sense of it. So what if we just bring them together?

[VIDEO PLAYBACK] – [SPEAKING SPANISH] [END PLAYBACK]

Just by putting them together, suddenly we have reduced barriers. Suddenly, a person who could otherwise not read the sign, or who for some reason cannot hear what's going on there, has access.
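The composition itself can be sketched as a few lines of plumbing. The stubs below stand in for the real OCR, translation, and speech synthesis components, which are of course far more involved; the point is that each piece exists, and chaining them removes a barrier.

```python
# A sketch of the composition described above. Each stub stands in for a
# real component (camera OCR, translation, speech synthesis); the values
# returned here are canned placeholders.
def recognize_text(image):
    # stub for OCR, i.e. reading text out of a camera image
    return "Información para titulares de tarjetas"

def translate(text, target="en"):
    # stub for a translation service
    return "Information for cardholders"

def speak(text):
    # stub for text-to-speech synthesis
    print(f"[speaking] {text}")

def read_sign_aloud(image, language="en"):
    speak(translate(recognize_text(image), target=language))

read_sign_aloud(image=None)  # -> [speaking] Information for cardholders
```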

Now, another inspiring project was developed by Dimitri and Chen. Dimitri is deaf, and he worked on taking speech recognition technology and making it work in what we call Live Transcribe: you can follow a live conversation by seeing it transcribed on your phone. This has been going on for some time now.

But when you think about these technologies, speech recognition and text-to-speech, there's more we can do. If you cannot hear, today you can just tap a button and get live captions, which we just announced. Think about people who cannot hear: now, everything your phone plays becomes available to them.

[VIDEO PLAYBACK] – Do you like the blueberries? – Yeah – Blueberries? Delicious A couple more Mm [INAUDIBLE] [END PLAYBACK]

So you can apply it to essentially anything your phone would play out loud. For some people, this makes the difference between understanding what's going on and not. For many, or for all of us, it's about the situations where you just don't want the sound, or you want to read along as well. The way I think about it is that this is really reimagining what the Mute button is. Mute is not about "I don't want to know what's being said"; it's about "I don't want any noise to be made." So you can cross modalities: you can move hearing from the modality of audio to the modality of the screen. And if you think about removing barriers, and the fact that with today's technology we can move a conversation fluently between modalities, or have it in both, from audio to text and from text to audio, the opportunities are plenty.

And we have more examples, right? Going back to Dimitri: he can read what other people are saying, because we have speech recognition that works pretty well for those who speak with common accents and pronunciations. But when he speaks, because he has a heavy accent, having become deaf at a young age, our speech recognition systems don't really understand him very well. You've seen it before.

[VIDEO PLAYBACK] [END PLAYBACK]

So the promise of "just talk to your phone and get stuff done" doesn't quite work for him, and that doesn't fit our commitment to make Google work for everyone and to make technology work for everyone. So we embarked on the project Euphonia, which started out with us asking: can we help ALS patients, whose speech deteriorates, to be understood?
And we trained, and we built technology that can take personalized training of speech recognition models. Dimitri actually took these models, worked with our engineers, and put in a lot of time to train on thousands of sentences. The result was actually not what we expected: the error rate he's getting is on par with what we get for regular speech recognition. Of course, this is still at the research stage, and we need to find ways not to rely on those thousands of training sentences. But here's an example of what we get today.

[VIDEO PLAYBACK] [END PLAYBACK]

So this is pretty transformative, of course. And we're getting some very positive feedback from the ALS patients we're working with about the ability to express themselves in a way that can now be understood. I'm really excited about this technology. And as it often goes with accessibility, when you develop for the hard cases, it turns out you can make progress on many more cases than you expected. For example, the team wrote a research paper showing that the same technology used for those extreme cases can bring a tremendous improvement on known data sets with accents.

The way this works is essentially by building models and training data sets based on visual spectrograms, and then training on them. And we need plenty of them in order to build it in a personalized way. These are the different kinds of fingerprints, if you will, the diverse spectrograms that you see here.
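A minimal sketch of those "fingerprints": turning raw audio into a log-magnitude spectrogram, the kind of visual representation speech models are commonly trained on. The frame and hop sizes below are just illustrative defaults for 16 kHz audio.

```python
# Turn raw audio samples into a log-magnitude spectrogram, the "visual
# fingerprint" of speech mentioned above. Frame length 400 and hop 160
# correspond to 25 ms / 10 ms windows at 16 kHz (illustrative defaults).
import numpy as np

def log_spectrogram(audio, frame_len=400, hop=160):
    """audio: 1-D float array of samples -> array of (frames, freq_bins)."""
    window = np.hanning(frame_len)
    frames = [audio[i:i + frame_len] * window
              for i in range(0, len(audio) - frame_len, hop)]
    spectra = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log(spectra + 1e-6)

audio = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s test tone
print(log_spectrogram(audio).shape)  # (frames, frequency bins)
```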

So all of this works well when we talk to our phones and our assistants; these are the problems we typically face. But I actually want to go back a little to an older technology: the phone. And I appreciate that some of you, perhaps, are not familiar with this device. This is a phone, as it used to be at some point in the past.

It turns out that, even though much of our activity is with our assistants and our phones, there are plenty of cases where people still need to pick up the phone and have a conversation. For example, even though you'd expect that a lot of what we do today is online reservations and so on, 60% of businesses in the US that rely on reservations do not have an online reservation setup. If you really want to talk to them or do something with them, you need to pick up the phone. And picking up the phone is sometimes irritating, and for some people it's impossible, whether because of circumstances, accessibility, or other reasons.

To that end, we announced Duplex last year, which enables you to have a phone call made on your behalf by the Assistant to make a reservation, and to do it in a pretty natural way. Indeed, today this is already working in 44 states in the US, and we get great feedback from users and from businesses.

What enables Duplex to do the job is really a combination of many technologies, having to do with analysis of the audio source, speech recognition, text-to-speech, and, of course, deep learning networks built for the task, because we also need to understand the intent of the conversation. All of this is possible because we focused on very specific domains, so we could build those models to work at sufficiently high quality. Along with a lot of machinery around it, this touches real-time supervised training and more.

And since our aspiration was to have a very natural conversation, we also added those particular speech disfluencies that are part of communication. They are part of the conversation; they are part of how we communicate. They're part of how we say no in a soft, polite way; they're how we acknowledge while waiting for the other side to talk. There's actually a field in linguistics called pragmatics, which looks into some of these speech disfluencies. All of this, put together, enables a conversation that is natural and achieves the goal.
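As a toy of the disfluency idea only: in the real system these choices are learned and context-dependent, whereas this sketch sprinkles them in at random.

```python
# A toy illustration of deliberately inserting disfluencies to make
# synthesized speech sound more natural. The real system's placements are
# learned and context-dependent, not random like this.
import random

def add_disfluencies(text, rate=0.15):
    out = []
    for word in text.split():
        if random.random() < rate:
            out.append(random.choice(["um,", "mm-hmm,", "uh,"]))
        out.append(word)
    return " ".join(out)

print(add_disfluencies("I would like to book a table for four at 7 PM"))
```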
There are many other ways in which natural conversation could help us, and I really encourage everybody to apply their imagination to what can be done. In fact, some time ago I was having a conversation with my wife about conversational AI. She pointed out an opportunity; she actually challenged me: well, if you can do conversational AI, why can't you solve this problem for me? Every time the phone rings from an unknown number, she has a dilemma. Perhaps it's a sick child or a sick parent; but if she answers, it's likely going to be somebody trying to sell her a cruise or new insurance, which is pretty annoying. I'm sure we've all experienced that. So could we use conversational AI to help with that, to take some of the burden from us and give us back control of our time and our attention?

So indeed, fast forward: a few months ago we announced a feature called Call Screen. Call Screen uses conversational AI in a basic way to answer the phone on your behalf, if you'd like, for unknown numbers, and to help you figure out who's calling by asking the other side, showing you the result in real time by transcribing it for you, and letting you ask some additional questions and eventually pick up the phone, decline, or report spam. This is facilitated by a combination of all these technologies: speech recognition and text-to-speech synthesis. And everything runs on device, so it can work offline, and it's totally private. The fact that it can run on device is really one of the significant advancements we've seen, and you've heard about it as well: we can do things that are instant, isolated, and yours, without anything leaving your device.

Obviously, this is a highly popular feature, because everybody can relate to it. It was interesting to hear from people not only how happy they are to never have to speak with telemarketers again; we also heard from a few people that, because of it, they took a call they would otherwise have ignored, and it turned out to be hugely important. So think about it as a way to help us get back control of our time and of our attention.
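Put together, the screening flow can be outlined like this, with stubs standing in for the on-device speech recognition and synthesis. This is a hypothetical outline, not the actual implementation.

```python
# A hypothetical outline of a call-screening flow: the assistant answers,
# asks who is calling, streams a transcript to the user, and the user
# picks an action. ASR/TTS stubs stand in for the on-device models.
def tts(text):
    # stub: on-device speech synthesis, played to the caller
    print(f"assistant says: {text}")

def asr(audio_stream):
    # stub: on-device speech recognition of the caller's reply
    return "Hi, this is the pharmacy calling about your prescription."

def screen_call(audio_stream):
    tts("Hi, the person you're calling is using a screening service. "
        "What's the call about?")
    transcript = asr(audio_stream)
    print(f"on screen: {transcript}")
    return input("[accept / decline / report spam] > ")

# screen_call(incoming_audio)  # in the real feature this stays on device
```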

And of course, this can go a long way as well. One direction it can go comes, again, from an engineer on my team, who came to me one day and said: my heart is really with accessibility, and I'd like to take Call Screen and extend it, by adding additional technologies, so that people who are deaf could have a full phone conversation. To do that, you need to be synchronous; you need to be able to type in real time. But then we can use all those features we mentioned earlier, Smart Compose and Smart Reply. This is still at the research stage, but here is an example.

[VIDEO PLAYBACK] [PHONE CHIMING] – Hi, this is Nicole's assistive chat She'll see what you say And her responses will be read back to you– starting now – Hi, Nicole, it's Jamie How are you? – Hey, Jamie I'm good And you? – Great– are we still on for your 1:00 PM haircut tomorrow? – Sorry– can you do 3:00 PM? – Uh– yes I can do 3:00 PM We have a lot to catch up on I want to hear all about your trip – Perfect– thumbs up – Great, see you tomorrow Bye [END PLAYBACK]

Think about everything that comes together to facilitate this: speech recognition, text-to-speech, guessing what you'll type and helping you along. And think about the ramifications. It means that one person can have a regular phone conversation while the other person doesn't need to talk or hear, perhaps because they cannot talk or hear, perhaps because it's not convenient, perhaps because they're of a new generation that doesn't like to talk on the phone and is just used to chatting. This is cross-modality. Perhaps you're in a meeting or on a flight and you'd like to take a phone call some other way. And think about integrating, in the future, say, translation: the opportunity to have a conversation with somebody in a different language, totally seamless, totally ambient, in a way we don't even pay attention to, just reducing barriers and letting people communicate better. These are pretty exciting opportunities, I think.

These are examples where we solve a problem, sometimes for one person, or because we're excited about it, and then it turns out to solve a bigger problem. But reflecting on many of these examples, many were situations we did not necessarily anticipate. They came out of a single person's passion, or out of exploration. In the case of flood forecasting, we didn't know it was solvable. Even when starting Call Screen and Live Relay, it wasn't clear we could put the technologies together and get to the right level. Until we start, we don't even know what we can do. I was reminded of that on a recent vacation in Spain: on a morning run, I spontaneously decided to take a detour and found amazing bays and landscapes that I would otherwise have missed. I think research and technology have this nature: quite often, we need this exploration, we need to try, and that's some of the magic that happens.

Now, reflecting on where we are with conversational AI, I think we're in very exciting times. Think about all these technologies and everything that could happen if they reach even better accuracy, to the point where we don't even pay attention to them. Think about the barriers that can be reduced, the fact that every person can ask a question and get stuff done in a seamless, ambient way.
The magic is that when the technology just works, it becomes totally ambient. That's what I like to think of as ambient intelligence. And another reflection: if we think back, today we have all these technologies and products that just a few years ago were aspirational. We didn't even know if we could build them; we didn't know that today we could have these conversations, hear back, and have all this stuff just working for us. Which raises the interesting question: what will the future entail? It's very difficult to predict. But even though I don't think we're going to see a "Beam me up, Scotty" kind of technology anytime soon, I think one can expect that, for many other technologies, if you can dream them, you can probably build them.

Thank you very much.

[MUSIC PLAYING]