Introducing the Awareness API, an easy way to make your apps context-aware – Google I/O 2016

BHAVIK SINGH: My name is Bhavik Singh. I'm a product manager on the Awareness API.

MAURICE CHU: And my name is Maurice Chu. I'm the [? engineer ?] for Awareness.

BHAVIK SINGH: And today, we're going to be introducing a new API that makes it super easy to make your applications more context-aware. But first, I want us to be on the same page about why we're talking about this in the first place.

If you think back to the keynote just a few hours ago, Sundar talked about how mobile has taken off like a rocket ship. What this has done is pretty fundamentally change the way that we, as users, use our mobile phones, our wearables, and even our embedded IoT devices. In the desktop world, I would open up a computer and have these long, prolonged sessions with multiple pages of content. I would input my intention into the computer with a keyboard and a highly precise mouse. But with devices that are much smaller and that come with us everywhere we go, we now glance at them on the bus, quickly, to check the calendar while walking to a meeting, and maybe even to play a game while bored at an I/O talk, or something like that.

So we, as developers, understand these changes, and we've actually made a lot of updates to our applications to accommodate them. New user interface patterns have emerged, like material design, that are much lighter and simpler than their desktop counterparts. We have new ways of inputting intention into our devices, like voice, stickers, and emoji, that allow users to express themselves without the need for a large physical keyboard.

But we at Google believe that we can actually do more. Today, we're still treating these phones like tiny computers. In fact, they're quite different from computers, because they're jam-packed with sensors. And with powerful algorithms, these sensors allow our phones to be aware of the context around them. In turn, the phone can tell applications where the user is, what they're doing, and what's around them. You can use this information to build more assistive and aware applications that can help users in their day-to-day lives.

So to dive a little deeper, I want to give you a few examples of what a more aware and assistive world would look like. We all have a morning routine, and for me, personally, it involves waking up to an alarm. Usually, at this point, I'm incredibly groggy, because I slept way too late last night. And I'm pissed off with my phone, because I don't have my first meeting until four hours from now. So why am I awake? Your alarm can be more aware of these signals. It can understand when you went to sleep and when your first meeting is, and adjust the time that it wakes you up.

So at this point, I'm having a pretty decent morning, and the next big thing for me to do is to get ready for the day. Usually, this involves reaching for my phone to check a weather app or go to some weather service. And now I've become distracted by this swarm of notifications. I have a Snapchat I need to look at, and my day has become pretty distracted. What if my phone found a nearby TV through Chromecast and projected the day's weather onto it?
So now, as I walk from my bed to my closet, with one easy glance I can see the day's weather and know that I've got to grab a rain jacket.

Finally, at this point, I'm pretty relaxed. I'm sitting at my breakfast table having cereal. And while I'm doing that, an assistant application is looking at driving and location patterns across thousands of users and realizes that there's a lot of traffic. So it wakes up a nearby speaker, maybe my Google Home device, and says, hey, Bhavik, you should leave within the next five minutes so you're not late for your first meeting. This morning, which is usually this chaotic storm of applications, and services, and questions, is now this elegant, easy experience, just because a few applications were more aware.

Let's walk through another example. We in the Bay Area love our parks, and I personally love running through Golden Gate Park. So very often, on a weekend morning, I'll wake up, strap on my brand new Android Wear device, and head out the door into the California sun. Half an hour into the run, I'm like, oh, shit, I forgot to actually start tracking my run. What if your wearable was smarter, and it automatically detected that you were running and launched the fitness app for you? It could then track your distance, your cadence, and your heart rate, so that you can automatically keep track of those fitness goals and finally achieve those New Year's resolutions.

Now, music is a big part of my run.

What if, as soon as I plugged in my headphones, I got a notification from my favorite music application saying, here is the best sunny running playlist for today? With one tap, I can listen to the right music to keep my feet moving.

Finally, if you know me at all, you'll figure out that I get distracted very, very easily. So while I'm on my run, I see this cute little dog, and I've got to take a photo. So I reach for my phone, and the launcher application knows that Bhavik often takes photos when he's outside, so it puts the camera app right in the center of my screen. And when I take this photo, it tags it with not only the location, but also the weather and the activity. So later on, when I search for photos that I took while running, it can find that tag and quickly show me this dog again. I'm happy now.

Finally, I want to talk through one more example. Driving makes it really, really difficult to multitask. What if we lived in a world where aware applications made it easier? As soon as I got into the car and started driving, my Bluetooth turned on and connected to the car speakers, and my favorite navigation app opened up and launched into driving mode. The user performs one very simple action, which is starting to drive, and their entire situation, scenario, and applications are set up perfectly for that journey. Along the way, I get a notification on my phone that says, hey, you're actually near a pharmacy, and you need to pick up medications. Be sure to do this, because the store's actually still open, and you're going to drive right by it.

These sorts of aware experiences can really help your users, but they can also help you, as developers. Tasks like launching the Maps application while the user's driving, or setting an alarm, are tasks that users perform every single day. They're highly critical for them. And if your applications are aware, you can streamline these critical tasks. In turn, your app becomes a part of their habits, which can increase your retention. Suggesting the right playlist when you're running, or reminding a user to buy medication, is suggesting important actions to your users, ones that they might even have forgotten. If those actions are tailored, users are more likely to click on your notifications, which can increase your click-through rate on the very important actions that you care about. Finally, when you tag a photo with weather and make it searchable later, or you show me the rest of my day on a nearby TV, that's a moment of delight for me as a user. It's something that makes me want to rate your app 5 stars in the store, or tell my friends about it. And these sorts of moments of delight can drive more users to your application.

So whether you're a music streaming service, a health and fitness tracker, a local recommendation app, an alarm or driving app, or really any other sort of application, you can use the power of aware experiences to help your users and also hit your goals as developers.

Now, at this point in the talk, you must be thinking, wow, this is all really cool. I want to add it to my roadmap, but this stuff is really difficult to build. That's where we come in. We, as a team, build Android awareness and location APIs. What we do is bridge the physical world, where your users live, work, and play, and the digital world, where your applications and Android are. We focus on this so that you can focus on building more aware and assistive, contextual applications. To dive deeper, we think about the signals that we provide in three main buckets.
The first one is where you are, or location. Location is pretty fundamental to the human experience, second only maybe to time, and we fulfill this need with three APIs. The first is called the Fused Location Provider, and it provides highly accurate latitude and longitude information by combining signals across a variety of sensors. On top of Fused Location, we have a geofencing API that lets you specify a latitude, a longitude, and a radius, which together define a fence. When a user walks into that fence, your app can wake up and suggest or perform an action. If you want to learn more about these APIs, visit our talk at 6:00 PM today, called Making Android Sensors and Location Work For You. Don't worry if you didn't catch that; there's a link at the end with all the talks that I mention.
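[EDITOR'S NOTE: No code is shown in the transcript for the geofencing API described above. The following is a minimal sketch of the pre-Awareness pattern using the Play services classes of that era; the coordinates, radius, request ID, class names, and PendingIntent wiring are illustrative assumptions, not code from the talk.]

```java
import android.app.PendingIntent;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.Geofence;
import com.google.android.gms.location.GeofencingRequest;
import com.google.android.gms.location.LocationServices;

public class StoreGeofence {

    // Registers a 100 m circular fence around a point of interest.
    // Assumes a connected GoogleApiClient, ACCESS_FINE_LOCATION granted, and
    // a PendingIntent wired to a receiver or service you have declared.
    static void addStoreGeofence(GoogleApiClient client, PendingIntent pendingIntent) {
        Geofence storeFence = new Geofence.Builder()
                .setRequestId("store")  // hypothetical identifier for this fence
                .setCircularRegion(37.7749, -122.4194, 100)  // lat, lng, radius in meters
                .setTransitionTypes(Geofence.GEOFENCE_TRANSITION_ENTER)
                .setExpirationDuration(Geofence.NEVER_EXPIRE)
                .build();

        GeofencingRequest request = new GeofencingRequest.Builder()
                .addGeofence(storeFence)
                .build();

        // The PendingIntent fires on entry, so the app does not need to keep
        // running just to notice the transition.
        LocationServices.GeofencingApi.addGeofences(client, request, pendingIntent);
    }
}
```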

On top of lat/long, we also have semantic location. What semantic location means is that I never say I'm at this latitude and longitude; I say, hey, I'm at the Starbucks, or the coffee shop. And maybe your app wants to get photos of a Starbucks, or get the open hours of a place. If you're interested in that type of information, you should check out the Places API and learn more in our talk tomorrow, Understand Your Place in the World. We're very happy that a lot of developers enjoy using our location APIs, and many of you in the crowd today might also have been using them in your own applications. Within Google, Google Maps is the obvious example. It uses not only the Fused Location Provider, but also the Places API, to help people find their place in the world and navigate it.

The second big bucket of signals that we think about is what you're doing. Our phones today have these tiny physical sensors that tell the phone where it is with respect to gravity, how fast it's moving, and what its orientation is in the real world. We build a layer of intelligence on top of these sensors to provide you, as a developer, semantic information about what the user is doing. The first API for this is called Activity Recognition, and it can tell you if a user is running, walking, biking, or driving. And we're excited to introduce new types, like push-ups, sit-ups, and squats, this year. On top of Activity Recognition, we have a very powerful fitness platform that allows applications to read and write fitness data. So if you're interested in data like nutrition, how much a person has run, or what their weight is, be sure to go to our talk tomorrow about Android Wear and fitness. Finally, underlying all these APIs is our core Android Sensors platform, and it gives you access to the raw data that lets you build powerful games, activity recognition, and fitness experiences. If you want to learn more about the Sensors API, visit our talk later today at 6:00 PM, Making Android Sensors and Location Work For You. Now, a lot of applications, thousands of them, are using activity recognition, but one of my favorites is Google Fit. It detects how much I've walked, run, or biked every day, and combines that with more advanced information so that I can keep track of my fitness goals and get in shape.
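[EDITOR'S NOTE: For the Activity Recognition API mentioned above, here is a minimal sketch of how an app of that era might request updates. The 30-second interval, class name, and PendingIntent wiring are illustrative assumptions, not code from the talk.]

```java
import android.app.PendingIntent;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.ActivityRecognition;

public class ActivityUpdates {

    // Asks for activity updates (walking, running, in-vehicle, ...) roughly
    // every 30 seconds. Assumes a connected GoogleApiClient built with
    // ActivityRecognition.API and a PendingIntent pointing at a declared
    // service or receiver that reads ActivityRecognitionResult from the intent.
    static void start(GoogleApiClient client, PendingIntent pendingIntent) {
        ActivityRecognition.ActivityRecognitionApi.requestActivityUpdates(
                client,
                30_000,          // detection interval in milliseconds
                pendingIntent);
    }
}
```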
Finally, the last bucket of signals that we think about is something that's up and coming and incredibly exciting, and it's called "what's around you." In today's world, we have more and more devices, phones, beacons, TVs, and they all need to talk to each other. The Nearby suite of APIs helps you, as a developer, do this. Its Messages API allows you to send messages between devices, and its Connections API allows you to maintain persistent connections between devices, which is useful for things like multiplayer gaming. That team is also very excited to announce a new API later today at their talk, Nearby: Proximity Within and Without Apps, so be sure to check that out. Chromecast is one of my favorite applications that uses the Nearby API, and they use it to power this amazing feature called guest mode. Guest mode means that when you come to my house, you can actually use my Chromecast device, even if you're not on the same Wi-Fi network, just because you're physically close to it. That's awesome.

So that's it, really. We have nine APIs that tell you where you are, what you're doing, and what's around you. Now, in building these APIs and the products that use them, we've learned a lot about what signals are important for people and where problems can arise. So Maurice is going to talk a little bit about those problems and some solutions we have for you.

MAURICE CHU: Great. Thanks, Bhavik. [APPLAUSE] All right. So we have nine APIs. These help you sense, basically, where the user is, what they're doing, and what's around them. And in my opinion, the real power of these signals is not what you can do with them individually, but what you can do when you combine them. That way, you can get a holistic view of what the user's context really is.

Now, we actually went back and tried to put these together, and we ran into some issues. What we saw is that these APIs really look something more like this: individual puzzle pieces that don't quite fit well together. So let me explain exactly what I mean by that.

All right, so let's go back to the example of getting a reminder when you're driving near a store. To implement this, we have a geofence API that allows you to set a region around the store to detect that the user is in there, and we also have the Activity Recognition API that can detect whether the user is actually in a vehicle or not. Now, it's easy enough to just invoke both of these APIs, get your callbacks, and then try to combine those signals together. But at this point, we haven't given you any tools or utilities to actually put these signals together, so you're kind of on your own.

Now you might say, OK, that's not too big of a deal, right? Just have a couple of callbacks and put them together. No big deal. But the big issue here is actually system health. And what do I mean by system health? System health is everything about how well the phone is functioning, and the two major factors for mobile devices are the battery and the RAM. Battery is kind of obvious: if you start using too much of it, it'll drain, and at that point you have a phone that doesn't work, which is not good for the user. The second one is a little more subtle. In terms of RAM usage, if there are too many things running on the device, this can cause CPU thrashing. The issue with that is that the phone starts to get sluggish, and that also leads to a pretty poor user experience.

OK, so why should you care about this? The thing is, if the user suspects that it's your app causing the battery drain, or your app causing the phone to become sluggish, they may, in the worst case, uninstall your app. And that's probably the worst-case scenario here. The irony of the situation is that you've been trying your best to target that very specific situation when the user is driving near the store, and hence make your app very relevant to them. But if you don't do it right, you may end up actually causing a worse user experience.

All right, so let me dig a little deeper into system health. In this case, like I said, probably the first way you could implement this is to call into the geofence API and the Activity Recognition API, get the callbacks, and combine them together. Sure, no problem. But now that you have two signals you're hooking into, you actually have some other options to do things in a more optimized way. Another way to do it is to turn on the geofence API first and make sure that the user actually is near the store. In that case, and only in that case, do you turn on activity recognition to determine that the user is in the vehicle. And of course, there's an alternative: you can go the opposite direction. Call into the Activity Recognition API first, and then, when you detect that the user is driving, call into the geofence API.
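[EDITOR'S NOTE: To make the second ordering concrete, here is a hedged sketch of the "geofence first, then activity recognition" chaining Maurice describes. The receiver name and the helper methods standing in for client wiring are hypothetical; this is not code from the talk.]

```java
import android.app.PendingIntent;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.ActivityRecognition;
import com.google.android.gms.location.Geofence;
import com.google.android.gms.location.GeofencingEvent;

// Sketch of the second strategy: keep only the geofence registered, and turn
// on activity recognition lazily, once the user actually enters the region.
public class StoreEntryReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        GeofencingEvent event = GeofencingEvent.fromIntent(intent);
        if (event == null || event.hasError()) {
            return;
        }
        if (event.getGeofenceTransition() == Geofence.GEOFENCE_TRANSITION_ENTER) {
            // Only now start the (relatively more expensive) activity detection.
            // This glosses over GoogleApiClient lifecycle: assume the client is
            // connected and the activity PendingIntent is set up elsewhere.
            ActivityRecognition.ActivityRecognitionApi.requestActivityUpdates(
                    apiClient(), 30_000, activityPendingIntent());
        }
    }

    // Placeholders standing in for app-level wiring (hypothetical helpers).
    private GoogleApiClient apiClient() { throw new UnsupportedOperationException(); }
    private PendingIntent activityPendingIntent() { throw new UnsupportedOperationException(); }
}
```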
Now, the big question is, which is best for system health? Which one is going to drain the battery the least? Which one is going to cause the least amount of CPU thrashing? All right, so it's actually a trick question. There are many, many factors that go into determining which is going to be better than the other, things like the sampling rate that you choose for each of these. And then there are lots of implementation details that you may not be aware of, things like how many times the radio wakes up, how much time the CPU is using, and, all the way down to the hardware, how much power the sensor is drawing.

So anyway, if you have two, you may say, OK, that's not too bad. You can do some work to tune it and make sure that system health is good. But really, the power is, again, in combining all of these signals. So imagine scaling up to all of them. At this point, you're facing a pretty tough problem. There's a lot more code to handle, and if you're really going to try to optimize system health, you're now talking about exponentially more combinations you have to consider in order to do this well. The other issue is that the more APIs you hook into, the more your app will actually wake up. This causes some pretty bad memory pressure, and in the end it could cause a sluggish phone. And again, we don't want that.

So the issues today are: in order to hook into these APIs, you have to learn multiple of them, one for each type of context signal, and there are subtleties you have to learn about, things like how to choose a sampling rate, or things like priority levels, which are pretty subtle to figure out how to use properly. There's also no support for combining these signals together, so you have to write that code yourself. And furthermore, even after you've done all that, you may end up with battery drain and sluggishness that can be pretty difficult to solve.

So our challenge was to figure out, can we make these individual puzzle pieces fit together into a whole puzzle? Our goals were: is it possible to arrange our APIs so that it's very easy to combine them, so that you can really target those specific situations? And at the same time, with these issues around system health, is there something we can do to help you? Well, I'm very excited today to announce that we have a solution for this problem, and we call it the Awareness API. [APPLAUSE] Thank you.

All right. The Awareness API is a unified sensing platform enabling apps to be aware of all aspects of a user's context, while managing system health for you. We've designed it so that you can engage your users in very targeted, very specific contextual conditions. It'll be available shortly after I/O as a Google Play services API. For now, let me give you a preview of what we have to offer. For our first release, we'll be offering seven different context types right off the bat. These will help you answer questions like where the user is, via lat/long location as well as the semantic notion of location that we call Places.

They also help answer what's around you, things like whether we can detect nearby beacons, just so you have an idea of what's there, and we have some code to help you combine and tie in some of these other conditions as well. They answer questions like what the user is doing, via the activities. We also found that there are some interesting device states, things like whether the headphones are plugged in, which say something about how the user is using the device. And finally, there are ambient conditions in the environment, things like weather, which actually do have an effect on a user's behavior, like today being extremely hot and muggy outside.

All right. Now, the biggest challenge we had to face was, how do we simplify these nine APIs into something that's much easier to use and that also allows you to combine the signals together? The way we approached this problem was to think about the common usage patterns of how app developers want to use these APIs. What are the ways you want to actually use and access these context signals? And we came up with two that cover a good, broad range.

The first one is called the Fence API, and this is a callback-style API. The idea is that you register a listener with a specific set of conditions that you want, that listener gets called back, and then you can react accordingly. Now, the word "fence" may seem a little mysterious, but it comes from geofencing, where the idea was to set up a geofence and then the software would detect whether the user is inside it. We realized this is actually a generalizable concept. We don't have to do just fences on location, but fences on all types of user state: things like whether the user is walking, whether the headphones are plugged in, and whether it's hot and sunny outside. All of these we can consider a fence, and that's what the Fence API gives you.

All right, so let me give you a concrete example of how we can use this Fence API to accomplish one of the scenarios we talked about earlier. This one had two parts to it. The first was that the user gets in the car, and your device goes, amazingly, into this in-car mode. The second half was that, as you're driving near the pharmacy, you actually get a reminder to pick up medication.

So with the Fence API, the first thing to do is determine the condition that you want to detect. The first one was to detect whether the user has started driving. In this case, it's very easy. We have built in a set of primitive fences based on the context types, and we have the detected activity fence. You merely specify, OK, I want to know when the user is starting to be in a vehicle, and this condition is true when you first get into the vehicle. Simple.

OK, now the other condition is a little more involved: I want to be driving near the store while it's open. In this case, we'll start off at the bottom. The things you need are, first, a kind of geofence, or location fence, around the store; that's what the first line is. The second line shows a condition to detect when the user is actually in the vehicle itself, and that's what the DetectedActivityFence is. And for the last one, we probably don't want to show notifications when the store is not open, so what we can do is create a time fence that is only true between those open hours. In this case, the example shows it between 10:00 AM and 6:00 PM.
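[EDITOR'S NOTE: The slide with the fence-building lines ("that first line", "the second line") is not reproduced in this transcript. The following is a hedged sketch of what those primitive fences could look like, based on the Awareness API as previewed; the store coordinates, radius, dwell time, and time zone are placeholder assumptions, and LocationFence requires the location permission.]

```java
import java.util.TimeZone;

import com.google.android.gms.awareness.fence.AwarenessFence;
import com.google.android.gms.awareness.fence.DetectedActivityFence;
import com.google.android.gms.awareness.fence.LocationFence;
import com.google.android.gms.awareness.fence.TimeFence;

public class PrimitiveFences {

    // Fence 1: true at the moment the user starts being in a vehicle.
    static AwarenessFence startDriving() {
        return DetectedActivityFence.starting(DetectedActivityFence.IN_VEHICLE);
    }

    // Fence 2: true while the user is within 1 km of the store
    // (placeholder coordinates, zero dwell time).
    static AwarenessFence nearStore() {
        return LocationFence.in(37.7749, -122.4194, 1000, 0);
    }

    // Fence 3: true while the user is currently in a vehicle.
    static AwarenessFence inVehicle() {
        return DetectedActivityFence.during(DetectedActivityFence.IN_VEHICLE);
    }

    // Fence 4: true only between 10:00 AM and 6:00 PM local time each day.
    static AwarenessFence openHours() {
        return TimeFence.inDailyInterval(
                TimeZone.getDefault(),
                10L * 60 * 60 * 1000,   // 10:00 AM as millis since midnight
                18L * 60 * 60 * 1000);  // 6:00 PM
    }
}
```

Each method returns an AwarenessFence, which, as Maurice explains next, is just a Boolean condition; that is what makes these fences combinable.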
Now, one thing to think about: what is a fence here? Well, it's actually a Boolean condition; it takes a value of true or false. And that gives us our key to how to combine these things. Once you have Boolean conditions, you can combine them with the Boolean operators AND, OR, and NOT. So at this point, for this specific example, let's actually do that. We want to combine these, and the AND operator is what's appropriate here. Now we have our full condition, which is true when the user is in the area around the store, the user is driving, and it's open hours.

OK, so now you have your two fences, and we need to register them with the Awareness API. You create your fence update request, add your fences, and then just call updateFences. And voila, your fences are registered. A couple of things to note. We understand that you'll probably want to key off of multiple conditions of the user, so our API is set up so you can add multiple fences. In doing so, you also need to give us a key; that's what the first strings, startDriving and drivingNearStore, are, so that you know which of the fences is actually calling back to you. The other thing to note is that the pending intents can all go to the same callback mechanism, which helps simplify your code for handling all the callbacks. And the last point is that you pass in a PendingIntent, and the nice thing there is that your app doesn't even have to be running. We'll be computing these conditions for you and give you the callback at the right time. This is how we're helping with system health: your app can stay completely out of the way of the system, and yet you can react when these conditions happen. [APPLAUSE] Ah, thank you.

OK, so now let's finish this off and write the callback. In this case, we show receiving the callbacks via a broadcast receiver. You get your data through the intent, and we have a utility function to extract that state into what we call a FenceState.
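[EDITOR'S NOTE: A hedged sketch of the registration and callback code walked through above and in the next paragraph, reusing the primitive fences from the previous sketch. The class names, PendingIntent setup, and manifest wiring are assumptions; only the fence keys (startDriving, drivingNearStore) and the API calls (AwarenessFence.and, FenceUpdateRequest, updateFences, FenceState.extract) come from the talk.]

```java
import android.app.PendingIntent;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

import com.google.android.gms.awareness.Awareness;
import com.google.android.gms.awareness.fence.AwarenessFence;
import com.google.android.gms.awareness.fence.FenceState;
import com.google.android.gms.awareness.fence.FenceUpdateRequest;
import com.google.android.gms.common.api.GoogleApiClient;

public class DrivingReminderFences {

    // Registers both fences. Assumes the GoogleApiClient was built with
    // Awareness.API and is already connected.
    static void registerFences(Context context, GoogleApiClient client) {
        // Boolean combination: near the store AND in a vehicle AND open hours.
        AwarenessFence drivingNearStore = AwarenessFence.and(
                PrimitiveFences.nearStore(),
                PrimitiveFences.inVehicle(),
                PrimitiveFences.openHours());

        // Both fences deliver to the same PendingIntent / BroadcastReceiver.
        PendingIntent fenceIntent = PendingIntent.getBroadcast(
                context, 0, new Intent(context, FenceReceiver.class), 0);

        FenceUpdateRequest request = new FenceUpdateRequest.Builder()
                .addFence("startDriving", PrimitiveFences.startDriving(), fenceIntent)
                .addFence("drivingNearStore", drivingNearStore, fenceIntent)
                .build();

        Awareness.FenceApi.updateFences(client, request);
    }

    // Declared in AndroidManifest.xml; the system can invoke it even when
    // the app process is not running.
    public static class FenceReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            FenceState state = FenceState.extract(intent);
            if (state.getCurrentState() != FenceState.TRUE) {
                return;
            }
            if ("startDriving".equals(state.getFenceKey())) {
                // e.g. switch the device into in-car / navigation mode
            } else if ("drivingNearStore".equals(state.getFenceKey())) {
                // e.g. post the "pick up your medication" reminder
            }
        }
    }
}
```

Delivering through a PendingIntent is what lets the app stay idle until a fence flips to true, which is the system-health point Maurice makes above.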

Now, for the first condition, if the key that you passed in, in this case startDriving, is true, then based on the state of the fence you can show the Maps app in the in-car mode. And for the second one, you can key off of drivingNearStore, check the state of the fence, and then show the reminder. So this is our Fence API, and it allows you to react when the user is in very specific contextual conditions that you specify.

All right, so let's talk about the Snapshot API now. This is a polling-style API, and the idea is that your app, while it's running, can just ask what the current values of these different types of context are, things like the location, the activity, the weather, et cetera. Let's go back to the scenario. At this point, Bhavik saw this cute little dog and wanted to snap a picture of it, and on top of that, he wanted to tag it with the current semantic location and the weather. This is really easy. We have an API called the Snapshot API; you just call two methods, one to get the places and one to get the weather, pull out the data, and then you can tag your photo and share it with the world. One thing we did is add caching underneath, so that you don't have to think too much about the cost of calling into these APIs.
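[EDITOR'S NOTE: A hedged sketch of the two Snapshot calls just described, fetching the weather and the likely places and reading them in result callbacks. The class and method wrapping is illustrative; it assumes a connected GoogleApiClient built with Awareness.API and the location permission already granted.]

```java
import com.google.android.gms.awareness.Awareness;
import com.google.android.gms.awareness.snapshot.PlacesResult;
import com.google.android.gms.awareness.snapshot.WeatherResult;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;

public class PhotoContextTagger {

    // Polls the current context while the app is in the foreground.
    static void tagPhotoWithContext(GoogleApiClient client) {
        Awareness.SnapshotApi.getWeather(client)
                .setResultCallback(new ResultCallback<WeatherResult>() {
                    @Override
                    public void onResult(WeatherResult result) {
                        if (result.getStatus().isSuccess()) {
                            // e.g. tag the photo using result.getWeather()
                        }
                    }
                });

        Awareness.SnapshotApi.getPlaces(client)
                .setResultCallback(new ResultCallback<PlacesResult>() {
                    @Override
                    public void onResult(PlacesResult result) {
                        if (result.getStatus().isSuccess()) {
                            // e.g. tag the photo with the most likely place
                            // from result.getPlaceLikelihoods()
                        }
                    }
                });
    }
}
```

Because of the caching Maurice mentions, repeated snapshot calls like these are intended to be cheap to make while the app is in use.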
OK, so to summarize, the Awareness API will be releasing with seven different context types and two simple APIs. We've designed it so that, in the future, we can add more context types without increasing the complexity it takes for you to incorporate the new signals as we go forward. And of course, the other part is that we try to handle system health for you, so that you don't have to worry about it. By simplifying this for you, you can focus your efforts on building a great experience for your users.

All right, so let me take a step back and talk a little bit about where I feel the Awareness API fits in the grander scheme of things. Personally, when I first got hold of a smartphone, the thing that really amazed and excited me was the fact that it had a bunch of sensors. And just having sensors was not enough; it was the fact that people actually carried these phones with them everywhere they went. Once you do that, there's a real opportunity for the phone to really know who you are, what you care about, what your intentions are, et cetera. And if the phone can know that, and your apps can know that, then I think we can build the kinds of magical experiences that have basically never existed before, and have a new kind of relationship with computing. So that was kind of the moonshot, and that's what we've been working towards. The Awareness API is a step in that direction. What we've done is take these separate, individual sensing capabilities and put them all together into a unified platform. Whether that's a big step or a small step, time will tell, but it is a step to simplify things so that you can build better experiences.

Now, this is incredibly powerful, and we'll be putting it into your hands soon. But to reach this moonshot, it's not just about the technical capabilities. There are other things of concern, and one of those things is, of course, privacy. So we must be respectful of the user's privacy. The real challenge here is to build those experiences that really simplify and delight users in ways they have never felt before, and as far as I can tell, in order to get to that moonshot, I don't see any path to success that doesn't include respecting the user's privacy. I'm sure you'll agree as well. [SCATTERED APPLAUSE] Yeah, thank you.

Part of addressing privacy is what we can do on our side. The Awareness API is built with a permission model, using Android's permission model. For each type of context, we protect it with one of these Android permissions, so that we can ensure the user has given consent for your app to access that signal. Most of these are pretty intuitive, but things like weather, for example, do require the ACCESS_FINE_LOCATION permission, and the reason is that we're giving you the weather at the user's current location.

OK, so that's what we've done. But of course, addressing privacy doesn't end there. It really has to be end-to-end, and that includes the kind of experience you're building in your app. The two basic principles we follow for addressing privacy are transparency and control. Transparency is about letting the user know what personal information we're using about them, as well as what we're using it for. The second half is, of course, control: we have to give users the ability to actually activate or deactivate these features. Just to give you a quick example, if I plug in my headphones and my favorite music app automatically starts playing music, that's fantastic, but only if it was transparent to me that that was going to happen, and I have the option to turn it off if I don't want it.
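[EDITOR'S NOTE: Since weather, as noted above, is gated on ACCESS_FINE_LOCATION, apps on Android 6.0+ also need a runtime permission check before taking a weather snapshot. This is the generic Android permission pattern, not something specific to the Awareness API; the helper class and request code below are hypothetical.]

```java
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;

public class WeatherPermissionHelper {

    // Hypothetical request code used to match the permission result callback.
    static final int REQUEST_FINE_LOCATION = 42;

    // Weather snapshots are tied to the user's current location, so the
    // fine location permission must be granted first on Android 6.0+.
    static boolean ensureLocationPermission(Activity activity) {
        if (ContextCompat.checkSelfPermission(
                activity, Manifest.permission.ACCESS_FINE_LOCATION)
                == PackageManager.PERMISSION_GRANTED) {
            return true;  // safe to call Awareness.SnapshotApi.getWeather(...)
        }
        ActivityCompat.requestPermissions(
                activity,
                new String[] {Manifest.permission.ACCESS_FINE_LOCATION},
                REQUEST_FINE_LOCATION);
        return false;  // caller should retry after onRequestPermissionsResult
    }
}
```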

OK. I think one of the immediate uses of Awareness will be to post notifications to the user, and we do ask that you try to be as specific and as targeted as possible. That way, you'll reach the user at the most relevant moments, when they actually want to take action on the notification you send them. One thing to note, though: as an Android [INAUDIBLE], users can disable notifications for your app, and the notification shade is a shared resource across all apps, so you need to give them a reason to keep your notifications on. And finally, we've done our best to be as efficient as possible to address system health, but please be mindful about which fences you register, and make sure you weigh them against the user experience enhancement you're providing.

To conclude, the Awareness API is coming soon, and I very much look forward to seeing what you end up building with it. At this point, let me turn it over to Bhavik, and he'll tell us what our partners have been up to with this. [APPLAUSE]

BHAVIK SINGH: Thanks, Maurice. Cool. So as Maurice has shown you, we have a new API that makes it super easy for your applications to be aware. We let this out into the wild a little early and let a few partners play with it, and I'd love to show you what they've been able to do.

Trulia is an online real estate service, and one of the big parts of that service is helping their users, potential home buyers, find and visit open houses. Something they've struggled with in the past is, when should I send these users a notification to remind them to visit an open house? Sure, I could do it when they're near the area where the house is, but what if they're driving through it, or it's a rainy day and they're just not feeling it? With the Awareness API's fence feature, they've been able to create highly tailored notifications. You will only get a notification for an open house if you're in the right location, the weather is nice, and you're walking, not driving or running through the area. They're very excited to see how these more tailored notifications will increase click-throughs on this very important action for them.

One of my favorite photo editing applications is Aviary. It's a powerful editing tool that lets you take and edit photos to really capture a moment. One of the big features they have is a stream where you can see photos that other people have taken, to get some inspiration. A thing they realized is that the way you take and edit a photo depends a lot on your context. The way I capture a rainy day in Seattle is going to be very different from the way I capture a sunny day in Yosemite, or maybe a sweltering day, like I/O.
And so what they've been able to do with the Awareness API is use Snapshot to understand what your place is, what your semantic location is, and what the weather is, to show you photos that could inspire you to capture and edit that perfect moment.

Finally, music is really near and dear to my heart, and Superplayer Music is a music streaming application that is very popular in Latin America. They have this amazing assistant bot feature where you can ask it for recommendations, and it will return recommendations to you. What they've been able to do with Awareness, and plan to do, is merge that functionality with context signals, so that when I've just finished running and I'm looking for something to cool down to while stretching, when I get to the gym, or maybe when I'm about to go on a long journey, they can suggest the right music for the moment.

Those are not the only partners we work with, and we're very lucky to have been involved with a wide variety of applications. In the health and fitness space, Runkeeper is thinking about tagging its running posts with weather. We have local applications, like Trulia and Zillow, that help people find the things they need and the homes they want around them. Grubhub is also thinking about how to integrate weather into their features, and so is Kekanto. We've got photo editing applications, like Aviary, which I showed you, but also PicsArt, and even OS-level functionality, like Nova Launcher, which is thinking about completely rewriting its launcher to be more context-aware and show you the right apps at the right time, or Zedge, which is going to allow customization of ringtones and wallpapers based on context.

With these partners, we're just getting started. We have nine APIs across where you are, what you're doing, and what's around you, and a brand new API that we're launching today, called the Awareness API, that merges all of this information in a battery- and system-health-friendly way. I'm very, very excited to see what all of you are going to do with it. If you're interested, sign up for our preview at developers.google.com/awareness so that we can notify you when the API comes out and give you an early look.

If you're interested in diving deeper into any of the other APIs I talked about today, visit g.co/AwarenessIO to see a full list of our other talks and open hours. Thanks. Maurice and I will be around, just outside, for questions afterward. [MUSIC PLAYING]