Best Practices for Privacy and Security in Compute Engine (Cloud Next '18)

[THEME MUSIC PLAYING] SOPHIA YANG: Welcome, everyone Thank you so much for being here today Especially thank you because this is the last session of the last day It’s a three-day conference, so pretty long– a lot of work And we are very excited to see you still here today And that’s the right choice Because, actually, when we first got the schedule, they said we’re the very last one We actually asked, can we get moved to the day before, et cetera, so more people might attend? But they said, no, you can’t We can’t because your talk is too important And the talks are ordered by increasing importance So we’re the last one But you’ll be happy to find out that it’s important that you made it, right? We’ll make sure you learn a lot of important things You’ll find it very helpful so that you can end your conference on the strongest note My name is Sophia Yang, product manager from the Compute Engine team Joining me today are three gentlemen We have Cache, a developer from the Cloud Security team And we have Sirui, also a product manager from the Compute Engine team And we have Nick from Two Sigma, our customer, joining us to tell the story of how Two Sigma is utilizing our tools to build their most secure environment So privacy and security have always been the top priority at Google We’re going to help you build the most secure environment And that’s why every year we continuously invest in building new features, new tools to help you achieve that goal And it’s worth mentioning, not only are we building this new environment for you, we’re also using it ourselves Many products and applications from Google are also running on Google Cloud And today, we’re distilling our secret sauce, our best practices, our perspective into six simple steps to walk you through, step by step, how to build the most secure environment And when talking about getting started, before we share any best practices, let’s first look at how people mostly get
started with the cloud We find that no matter whether you are a Fortune 500 company with thousands of employees or just an individual developer, often when you get started with Google Cloud, you want to first start by trying out the new, cool features, testing out the performance And at that phase, access management is the last thing you want to worry about And we understand that That is why we make the “getting started” stage super simple and easy for you You come to Google Cloud, you create a project, you automatically are the owner of the project, and you have full access within the project You don’t have to worry too much about who can access the resources because your project is created as an isolated environment, so no one else can access your resources within the project All this is trying to minimize your time to finish your proof of concept, to finish your prototype, to make sure you can totally fall in love with Google Cloud And then when you’re done with your prototype, what do you do next? You’re ready to move your production workloads, right, your development and testing environments onto us And that is when we should start talking about how to build the most secure environment And that takes us to step one, create an organization So why is creating an organization so important?
Because afterwards, it’s much easier to apply organization-wide policy if you have an organization And in talking about building an organization, the first thing you need to do is to set up your organizational identity So let’s have Cache here tell us more about how to do that CACHE MCWHERTER: Thanks, Sophia The first thing that you need to do to get any kind of security for your cloud is to essentially root your trust in the identities in your system Without securing your identities and the identities that can access your resources, there really is no hope in securing the rest of your cloud And that means that you need an identity store, you need an identity provider, and you need policies that you can apply to those identities Here at Google, we have solutions to get you started no matter what kind of company you are There are two general paths by which you can establish your identity store at Google The first is to use Google as your identity provider itself And I’m sure that many of you do this already Google does this internally We use our products internally all the time It is a fully-fledged identity system with groups, and users, and APIs to manage them, security controls, and so forth, and so on And it’s also conveniently very easy to set up for small customers of Google Cloud If you’re an established shop, and you have an existing identity system, such as Active Directory, or an LDAP system, you can use Google Cloud Directory Sync

to synchronize those identities and groups to Google and use them and apply them in policies in the Google Cloud product The next thing, after you establish your identity store at Google Cloud, is that you need to secure those identities somehow And fortunately at Google, we’ve been working on securing clouds for many, many years now, first with our internal infrastructure but now also with our external cloud products We’ve been dealing with attacks on our internal resources and our customers’ resources from state actors, non-state actors You name it, we’ve probably seen it And we have a lot of researchers and security engineers just tracking all of the instances that we can find of things maybe the rest of the world hasn’t even discovered yet And we then also work on trying to mitigate those attacks with new tooling and features that sometimes we apply internally but then sometimes we give to you to apply to your own resources And I’m going to give you a list of a few of the things that I think you should be turning on basically out of the box to ensure that your cloud is secure The first thing I want to talk about is two-factor authentication and security keys We have been using these internally for a couple of years now And they have been super effective at eliminating all sorts of account attacks and phishing attacks The basic idea is that your developers get a security key And this is a hard-to-phish security device that has to be present every time your users or developers log in And so what that means is that if someone can steal my password or trick me into answering questions, an attacker can’t actually log in unless they’re in possession of the security device as well To configure your systems to use this, you can go to the Google Cloud Admin console, and you can go to the security settings and turn on the security keys This is so important We’ve actually just announced our own security key products, which
should be coming to market at some point in the future The second thing, after you’ve secured your accounts themselves, is to secure the applications connected to your identities I’m sure all of you have seen the news with various incidents where, basically, if a third-party application can connect to your users and get their data from a cloud provider, it’s unclear what that third party might do with the data It’s one thing when your social media data is being collected by a third-party company It’s another thing entirely when it’s your corporate resources that are being stolen And basically, in these situations, an attacker just needs to trick one developer in your company into installing an application on their phone or in their browser, and all of a sudden, access to your corporate resources could be granted So we highly recommend that you go to the Google admin console and turn off all connected applications for all scopes on all APIs that are important to you, in particular for the Google Cloud Platform, maybe others And then at some point, if you need to actually have developers delegate access to applications, either first-party applications or applications from vendors you trust, you can vet those applications, and you can use the trusted app configuration tools to whitelist individual apps to have access to individual services Now, I think I know what most of you are now thinking You’re probably thinking, this is great, Cache, but the thing that really keeps me awake at night are the attackers working from their undersea lairs trying to get access to my cloud I want to tell you that I, too, worry about these kinds of situations And we just announced the other day our new set of security features around protecting access to your cloud resources, what we call secure contexts, or access contexts And the basic idea is that I can configure in my Cloud Console the set of networks or VPNs that
should have access to my cloud resources and essentially mitigate and prevent accesses to my resources from outside of those networks And the good thing about this is not only does it help mitigate against attacks from deep sea, undersea lairs, it also helps protect from attackers in volcano fortresses, or even moon bases And the final thing I want to tell you about organizations in Google Cloud is that they give you an incredibly powerful

tool to organize and manage your resources in a way that makes sense to your business In particular, we find that organizations are often broken up into individual product areas and sometimes with different types of deployments For instance, internally at Google, we often speak about dev-level environments, staging-level environments, and production-level environments And you can actually model these things in folders and different projects and apply policies hierarchically through the policy hierarchy, essentially, so that you can set a policy high in the policy tree and have it apply to all resources underneath Or for instance, if something makes sense only for a specific product area, you can apply a policy to only the resources within that product area And with that, I’d like to turn it over to Sophia to teach us about step two SOPHIA YANG: All right Thank you so much, Cache So now, great You have set up your organization, you have set up all your organizational identities, or synced your identities from on-prem So now you realize you have thousands of employees on the cloud So how can you manage their access? How do you make sure the right person can perform their job, like your developers being able to go ahead and deploy code in your codebase? Also, you don’t give everyone a superpower role, right?
And that is when you need to set up your identity and access management, or as most people call it, IAM So IAM in a nutshell is really about giving you the control to set who can perform which actions on what resources So in an actual example, I can say I want to grant Cache the instance admin role on my test project That’s going to allow Cache to manipulate my instances in the test project, right– create them, et cetera I can also grant, say, Sirui the storage admin role in the same project And Sirui can then access the data that I stored in a GCS bucket Yet, Sirui wouldn’t be able to manipulate my VM instances because he’s not an instance admin The same way, Cache wouldn’t be able to access the GCS bucket unless they coordinate, right? But that’s the good thing IAM allows you: fine-grained control of who can do what So now you probably realize, wow, I have thousands of employees The policy sounds like it will look very complicated Don’t panic No matter how complicated the result might look, the steps you take to set up IAM are always the same The policy language is always the same The steps you will follow are: first, create an IAM policy, and then set the policy on a target resource So let’s first look at what an IAM policy looks like Here is a JSON example As you can tell, a policy contains a list of bindings And within each binding, you define members and roles And in this example, all these members are assigned the same image user role So let’s take a deeper look at the members, which is the circled part We allow three types of identities at Google Cloud One is users, which are the human users, two, service accounts, which are the identities of VMs and services, so they are the representation for robots, pretty much, and three, groups, obviously a grouping of multiple identities And you can put both human users and service accounts into one group And lastly, we have the roles The roles,
or IAM roles, which are really collections of permissions So let’s take a deeper look at what roles can be At Google, we support three types of IAM roles The first two types are primitive roles and predefined roles There are only three primitive roles– owner, editor, and viewer They each have very wide access and very wide sets of permissions Remember the “getting started” stage we mentioned? Well, they are meant to get you started quickly But they’re really not meant for production for exactly the same reason They are too coarse They each grant too wide a set of permissions So what should you use in a production system? Predefined roles, or sometimes called curated roles What are they? They are really fine-grained roles that Google defined for you that target specific use cases For example, the instance admin role most likely maps to your developer functionality And the network admin role, as the name sounds, you’ll probably give to your network admins to help you set up the networks, firewalls, VPNs before a developer comes to deploy code And our best practice there is always use the least-privileged role So do not grant any additional permissions unless needed That’s for your security purposes And in order to better implement this least-privilege principle, we also have a third type of IAM role, which is custom IAM roles

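The policy model described above– bindings that map members to roles, roles that are just named collections of permissions, and a custom role derived by trimming a predefined one– can be sketched in a few lines of Python. This is a conceptual sketch only: the role names, permission names, and email addresses are hypothetical illustrative subsets, not the real Google Cloud lists, and real IAM also expands group membership, which this sketch does not.

```python
# Conceptual sketch of the IAM model: roles are permission sets, a policy is
# a list of bindings mapping members to roles. All names are hypothetical.

# Roles as permission sets (tiny illustrative subsets, not the real lists)
ROLES = {
    "roles/compute.instanceAdmin": {
        "compute.instances.create",
        "compute.instances.delete",
        "compute.instances.addAccessConfig",  # external-IP related
    },
    "roles/storage.admin": {"storage.buckets.get", "storage.objects.get"},
}

# A custom role: instance admin minus the external-IP permission
ROLES["roles/custom.developerAdmin"] = (
    ROLES["roles/compute.instanceAdmin"]
    - {"compute.instances.addAccessConfig"}
)

# A policy: bindings of members to roles
policy = {
    "bindings": [
        {"role": "roles/custom.developerAdmin",
         "members": ["group:developers@example.com"]},
        {"role": "roles/storage.admin",
         "members": ["user:sirui@example.com"]},
    ]
}

def has_permission(policy, member, permission):
    """Does any binding grant `member` a role containing `permission`?"""
    return any(
        member in b["members"] and permission in ROLES[b["role"]]
        for b in policy["bindings"]
    )
```

Under this model, the developers group can create instances but not attach external IPs, which is exactly the least-privilege trimming described above.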
We made this offering publicly available earlier this year As the name suggests, it allows you to customize what permissions you want to precisely put into your role There are two typical use cases where you need a custom IAM role One is– you might realize our predefined roles are still too limited Some customers are saying, I sometimes have to assign two or three predefined roles to each identity for them to do their job And at scale, that’s a little harder to manage Many people, like SREs, used to have both the log viewer role and the monitoring viewer role, as in this example So you can create your own custom role to combine these two roles together And then going forward, you can just assign this particular custom role Another use case is the opposite Some customers find our predefined roles are still too wide, or they don’t exactly meet their business needs A popular example would be the instance admin role They want to give it to their developers They want everything in it except the external IP usage, because of exfiltration and infiltration concerns So they don’t want the developers to be able to create a VM with an external IP They can just remove that particular permission from the existing role and create their own developer admin role And next, since we learned about IAM policies, roles, and identities, let’s talk about a few best practices for how to implement those in the most secure way The first recommendation we give to you is always assign permissions and access to groups, not to individual users So why is that?
I know it’s very natural to think, oh, I have roles we should assign to those users directly But think about it– your employees can move around within a company If you’re giving them roles and permissions directly, after a while, you might realize it’s harder to track and understand what permissions an employee currently still needs Maybe he was in a different group and needed something else, but now he doesn’t After a while, it’s very hard to analyze, so you couldn’t achieve the least-privilege principle, right? By contrast, if you assign permissions directly to groups and then add or remove users from groups– say an employee leaves a certain group, they get removed from that group, and they automatically lose all the permissions they had before It’s much easier and cleaner to manage access for employees At the same time, service accounts do not change roles and move around as humans do So we recommend you directly set the policies on service accounts So that’s the first principle and also best practice, grant access to groups Here’s another one that we have seen a lot of success with from our customers, which is the break-glass style By the way, if you are not familiar with this term, think about the fire extinguisher on the wall, right?
It is protected by the glass so that if there is a fire, you can access it by breaking the glass But that’s a very invasive action It takes a lot of additional caution to do it And by doing so, you’re preventing a lot of accidental errors So one way to implement this break-glass style is to assign human users nothing but the project IAM admin role All this particular role allows is setting permissions, setting policies That’s it So if you give your on-call this particular role, they wouldn’t be able to just go ahead and stop your VMs in the production system, because they can’t They don’t have the permission But, as we said, it’s called break-glass So if an emergency arises, if there is an urgent situation and they do have to go ahead and manipulate a VM instance, they can use their project IAM admin role to grant themselves the instance admin role And from there, they can go ahead and manipulate the VM instance This extra layer of caution and explicit action can help you prevent a lot of accidental actions And by the way, all these policy changes are in the audit log, so you can go back and trace them And with that, I’m sure many of you already have questions and want to know more about service accounts So let’s invite Cache back to tell us more about service accounts CACHE MCWHERTER: Thanks As most of you know, service accounts are quite fun toys to play with They tirelessly execute all of our actions for us when we’re not there at the keyboard They run our jobs, they run our services, they never get paid, and yet they’re always there for us But sometimes they’re hard to understand And so I’d like to first help you get to know what the service accounts that we offer look like, and feel like, and how to differentiate them and make sense of them as you encounter them This is a common cause of confusion and errors

that we see when we talk to our external customers This is the IAM policy page that you see in the Google Cloud developer console This is the landing page when you get to IAM As you can see, this is an IAM policy This is the policy that grants access to this project and all the resources contained within As Sophia was indicating earlier, a policy is essentially just a mapping of sets of permissions to a set of identities So for instance, this one grants me owner on this project This is a service account You can tend to identify them by the fact that they are email addresses The name here indicates what it is It’s a service account that was created by default by the system to get me started using the cloud It has editor access, which is fairly powerful And clearly this is not a production project, right? Everything that we told you before about securing your projects was not done here, and so you can imagine what that might look like Here, you’ve granted this user project IAM admin– well, you actually granted him project IAM admin through his presence on a group called Team David The service account at the top has been granted Cloud Datastore user because all it is doing is accessing Datastore, for instance And so this makes sure that if that job is compromised, the impact and blast radius is limited to the Datastore and not, for instance, my VMs or other things These other service accounts here in the middle are a common cause of confusion for our customers They represent the identities of services activated and essentially purchased from Google I didn’t create them, I don’t own them, and I don’t manage them They’re owned and managed by Google and protected for you They’re kept securely under lock and key for only our production services to access They’ve been granted particular API access to your resources in this project And they specifically grant access to only the types of data and operations that are in line with
the business requirements of those services And we basically review every one of those grants before we allow them to be granted to your projects to make sure that they actually make logical sense and meet your expectations If I wanted to look at the service accounts that I own and I manage, I can flip to the service accounts tab And here, you can see that I have two service accounts– one, the default one that we told you was created earlier by the system to get me started But I also created another one called My Web FE, which I am using to run my web front end There are two ways that you can use these service accounts The first way is to use what we call Google-managed credentials, which essentially means that you don’t have to worry about any of the key management or the rotation story We keep the service accounts secure for you It’s a tireless task, but somebody’s got to do it They don’t pay me enough to do that, but we make sure that the keys always get rotated for you And they’re kept secure The basic idea here is that I can ask a service, like Google Compute, or Google App Engine, or Google Cloud Functions, to use a service account Here I’m asking, for instance, the Compute service to create an instance bound to my front-end service account And what happens is an actAs check– a permission called actAs will be checked to make sure that I have permission to use the service account and bind it to a job And then once I’ve been granted access, that job will run continuously without me being there as that service account And that VM, the code inside the VM, can go and access data commensurate with the permissions that have been granted to the service account The other way of using service accounts is to download a service account key And this is a less secure, we think, way of managing and using your service accounts, because you have to take responsibility for making sure that key doesn’t leak And you have to
take responsibility for rotating that key to keep it secure It’s predominantly there for you to use when connecting your on-premise systems to Google Cloud Platform For instance, if you needed to do a database backup, or a computer backup, or something like this to Cloud Storage, you could use a service account key to do that Many of you are Google Compute customers, I’m sure And you’ve seen in the user interfaces, and the APIs, and the command lines that, when you create VMs, we often ask you for a set of access control scopes We call these VM scopes, essentially They’re a legacy feature of Compute from the days before Google IAM came on the scene

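The interplay between these legacy scopes and IAM can be sketched as a simple AND: a call from a VM succeeds only if the token's OAuth scope covers the API and the VM's service account holds the IAM permission. The sketch below is conceptual, not the actual authorization code; the permission strings are hypothetical, though `cloud-platform` is the real broad scope name.

```python
# Conceptual sketch: access from a VM is gated by BOTH the token's OAuth
# scopes (the legacy, coarse-grained gate) and the service account's IAM
# permissions (the fine-grained gate).

CLOUD_PLATFORM = "https://www.googleapis.com/auth/cloud-platform"

def call_allowed(token_scopes, sa_permissions, api_scope, permission):
    """A call needs a covering scope AND the IAM permission."""
    scope_ok = CLOUD_PLATFORM in token_scopes or api_scope in token_scopes
    return scope_ok and permission in sa_permissions

# The recommended pattern: broad scope on the VM, tight IAM on the service
# account, so IAM alone decides what the VM can actually do.
broad_scopes = {CLOUD_PLATFORM}
tight_iam = {"datastore.entities.get"}
```

With the broad `cloud-platform` scope, the scope check always passes and IAM becomes the single control to reason about, which is why narrowing scopes for access control adds confusion without adding much security.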
They caused some confusion for our customers And so I’d like to clarify some of the usage here They used to be used for some coarse-grained access control, but they’re not really good for that these days So we highly recommend that, in general, when you see these things, you should close your eyes and not use them for access control Rely on IAM for access control And instead, give the VM a fairly high-level scope, such as Google Cloud Platform or full access, depending on the user interface you’re using And if you need to call other APIs, like Calendar, or Maps, or something, you can grant those scopes as well And I think Sophia has something more to tell you about managing your resources SOPHIA YANG: Yep All right Thank you, Cache Yeah So now we have got our employees imported here into Google Cloud and a set of IAM policies on them, and also set up additional IAM permissions on the service accounts So now what’s left to do? You are going to come here to create VM instances, right, creating disks and images How do you manage the access to those, to secure those resources? And that’s what this section is going to be talking about So let’s have a quick review of what Cache showed earlier about our policy hierarchies At Google, we offer four levels of hierarchy– organization, folders, projects, and the resources You can set up permissions, policies at each level, and any level down below will automatically inherit them So for example, if I want to give an architect the image user role for all of team A, I can go ahead and set that policy on the folder level, team A.
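That folder-level grant and its inheritance can be sketched as walking up the ancestor chain and taking the union of the bindings found along the way. This is a conceptual sketch; the resource names and email address are hypothetical.

```python
# Conceptual sketch of policy inheritance down the resource hierarchy:
# the effective bindings on a resource are everything set on the resource
# itself plus everything set on its ancestors. Names are hypothetical.

PARENTS = {
    "projects/project-a": "folders/team-a",
    "projects/project-2": "folders/team-a",
    "folders/team-a": "organizations/example-org",
}

POLICIES = {
    # The grant from the example: image user for the architect, on the folder
    "folders/team-a": [("roles/compute.imageUser", "user:architect@example.com")],
}

def effective_bindings(resource):
    """Union of bindings on the resource and all of its ancestors."""
    bindings = []
    node = resource
    while node is not None:
        bindings.extend(POLICIES.get(node, []))
        node = PARENTS.get(node)
    return bindings
```

Both projects under the folder pick up the grant without any project-level policy, which is what makes folder- and organization-level policies such a low-overhead way to manage access.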
Then this architect will automatically get the image user role for project A, project two, et cetera– any projects under the folder So this gives you a lot of flexibility But sometimes you might want the opposite You actually run into the situation where you only want to give access to a particular resource instead of granting it at a higher level So what would you do there? You can set the policies on the individual resources, like VMs, disks But many people who are familiar with Compute Engine IAM might ask me, I didn’t know that you were able to do so I thought the lowest level you can set a policy at is the project level And you were right, until this week We’re actually very excited to let you know that we are going to support resource-level IAM in beta very soon for you guys [APPLAUSE] Highly anticipated feature, it seems like Great And what it allows you to do is exactly that– set IAM policies on an individual resource, like VMs, disks This enables a whole new set of use cases For example, a common use case is, within an organization, people often like to share images with other developers or different users Yet, without resource-level IAM, they have to grant the image user role for the whole project or above, right? But sometimes you don’t want to give them access to all the images, just a particular image Before resource-level IAM, what you had to do was create a separate project, and only put that particular image in that project, and then grant policies at the project level That’s suboptimal and a lot of management overhead, right? So with resource-level IAM, you can simply grant the IAM policy on the particular image In this example here, I want to grant the beta test group access to the particular beta image And with just one simple line of code, you can achieve that now So here is a real-world example of sharing images, as we were talking about in this scenario, right?
So now you can create one single image project and then put your production images together with your beta testing images in the same project, and only grant the developers access to the production images, to make sure everyone uses the most supported and verified version, while giving the beta testers access to the beta testing images So now you probably start thinking, yeah, I can set IAM policies on each beta testing image for the beta testers But if I have many beta testing images, I have to set the policy on each individual image every time Wouldn’t that be a lot of work to do, right? And we have thought about that for you So if you want another way to manage access to your resources at scale, you will be happy to find out about another beta feature that’s coming soon to you, which is the name-prefix IAM condition What this allows you to do is exactly that– manage access to your resources with a condition based on a name prefix So let’s backtrack a little bit to talk about what conditional IAM is first As I mentioned earlier, IAM allows you to define who can do what on what resources, right? And now conditional IAM gives you additional power, which is adding a condition to the IAM access So we wouldn’t grant access unless the condition is also met In this name prefix condition, we

would say, for example, you can grant beta testers the image user role on a project, but only if the image name starts with beta That way you can achieve what I just mentioned By setting one policy, the beta testing group will have access to all the images with names starting with beta And that’s even including the images that you have yet to create As long as you create them under the beta underscore name, all your beta testers will have access to that image Another common use case customers ask for, where you can utilize this name prefix condition, is to build a so-called developer playground Many companies, when they have new developers join, like to put them all in the same project to test things out, to build their prototypes, because creating individual projects for each one is a lot of overhead Yet, they don’t want every developer to accidentally stomp on other people’s resources So setting up the name prefix condition based on each developer’s email address solves the situation, so that each developer will still only have access to the resources starting with their name, and yet they can still enjoy, within the same project, the convenience and also the high network performance More excitingly, you will find out conditional IAM not only allows name-prefix-based conditions We have two more types of conditions we support, which are the access level and the date-time condition The access level utilizes the security context that Cache mentioned earlier– that we will look at the attributes coming from the request, and then decide For example, an instance admin can manipulate an instance, right, but you can say any action to manipulate an instance in a production project needs to come in from my corp network That way, even if that instance admin is sitting at home, trying to do some work, the request will be rejected, giving you another layer of security and protection The date-time condition,
just as the name suggests, allows you to put a timer around your condition so that you can say, grant powerful access to your on-call SREs but only during their on-call hours Before this, you had to grant access and then remove access manually at the end of their on-call time But now you can just simply set up a time-based condition And it can be recurring as well So we can help you manage that All right So we talked a lot about IAM, conditional IAM, resource-level IAM to help you better manage your resources Let’s talk about another dimension of how to manage access for your organization, which is the famous organization policy Organization policies are meant to work side by side with IAM policies But there’s a key difference, which is that for org policies, the enforcement will be there regardless of IAM access So for example, the first example shows the no-external-IP org policy You can set that on your project, folder, or organization level, saying, in my organization, for example, I don’t allow VMs to be created with an external IP Then even the person that’s supposed to have that IAM permission, like compute admins, wouldn’t be able to do so So this helps prevent a lot of human errors Again, it’s an organization policy enforced at whatever level you set it And similar to IAM policies, any level below will automatically inherit it And Compute Engine will have a bunch of very useful org policies that you can read about, like trusted images, disable serial-port access, et cetera There are, in particular, two more organization policies that we’d like to call out And we really think you should know about them And we highly recommend you enable them The first one is the domain-restricted sharing organization policy What this allows you to do is restrict resources from being shared outside the organization We have seen mistakes before from a customer where, for example, their developer would go home, and they’re trying
to do some work, but they accidentally shared an image containing their IP to their own Gmail account, and there was an information leak there With this organization policy, you can ensure that resources won’t be shared anywhere outside your organization The next one, Disable Service Account Key Creation, ties back to what Cache mentioned earlier We recommend you let Google manage the keys instead of using user-generated credentials And if you enable this org policy, it will prevent exactly those user-generated keys, so that you won’t have long-lived keys that developers hold, which might expose additional risk to your organization and your environment And we have a bunch more organization policies we can introduce for you You can read about them online, probably better than what I can say about each of them But I think, more interestingly, you
will be interested to find out how a real customer, like Two Sigma, sets up and utilizes these IAM policies and organization policies to build their environment So we’re happy to invite Nick, the head of Cloud Security at Two Sigma, to tell us your story [APPLAUSE] Welcome NICK ARVANITIS: Thank you, Sophia Yeah, we’re also very excited about IAM conditions and conditional roles My name’s Nick I’ll talk a little bit about our business at Two Sigma to put our threat model into context We’re a technology-driven, data-driven, quantitative investment manager Really, what that means is we use computers to analyze a ton of data sets and try to make market decisions based off that We’re obviously very worried about external attackers, though we have a very small external profile So although we’re worried about volcano fortresses, we’re also worried about our internal users And that’s not to say that we don’t trust our people– there’s some extent to which we don’t, but we do trust them But if you follow security and compromises, you know that sophisticated attackers are targeting people with legitimate access to data And we have some friction whereby we need our people to have access to the data and the resources they need in order to do their jobs, but we also need to mitigate the risk of mistakes or intentional compromises So we want to protect our IP And one of the big use cases we have for Compute Engine is really as an extension of our data center We run a lot of trading simulations and model tests that use tons of compute And that’s not exposed to the internet at all It’s a hybrid-type environment So, as Sophia mentioned, we don’t want external IPs in that environment Even though there’s no internet access, mistakes can happen And we believe very strongly in defense in depth You don’t want to be one error, one failure of a control, or one compromise away from being on the front page of the papers or, in our case,
potentially out of business So we apply organization policies as a really cheap and easy-to-implement control with really powerful implications, like no external IPs Something else that’s really interesting to us is a really useful feature, the ability to connect to a compute instance over the serial console to troubleshoot it if you have a network outage Our issue is that that doesn’t take into account any of our firewall policies or network configurations So we hate that, and we turn it off You can tackle this, again, via IAM But one of our use cases, again, is enabling velocity for our teams My partner team that runs this environment would really hate it if I didn’t allow them the ability to configure metadata on VM instances, which is where you can turn this on or off They need that for bootstrapping So a safe compromise is to use this organization policy to enforce that control and then be able to be more lenient on IAM Again, we really want to be restrictive about what images we use We have a lot of custom configuration that goes into those So, as suggested in the best practices, we have a single project where our trusted images are built, and then we use an org policy to configure our environments so that only images from that project can be booted up and used The flip side of that is also obviously really important We, too, have a developer sandbox, like everybody else And we don’t want our images that are baked with our IP and, in other cases, sensitive configuration details to be used at all in that environment that’s wide open and connected to the internet So we use another org policy for that And finally, as Sophia mentioned, domain-restricted sharing really is a no-brainer for us It’s super easy to implement And these things really provide us a lot of assurance Again, beyond these best practices, this is really defense in depth We rely very strongly on using custom IAM roles only We tailor our roles based on least privilege And we use a whole host of other services
for our security posture But these offerings and the ones that have been discussed today are really essential for us in terms of enabling our business without adding unnecessary friction, either for our security team or for our client teams So with that, I’ll hand it over to Sirui to wrap up with the final section on how to secure your environment [APPLAUSE] SIRUI SUN: Thank you, Nick So hi, everyone My name is Sirui I’m another product manager on the GCE team And actually, first of all, I want to say thank you so much, Nick, for talking with us today It’s been an absolute blast and a privilege to be able to work with you guys to really secure your incredibly valuable IP So thank you again Let’s hear another round of applause for Nick and Two Sigma [APPLAUSE] Great So I’m here to take us home and talk about the final step in our six steps of best practices And I’m here to talk to you guys about platform transparency and data integrity So you may be wondering, right, what does that mean? So to boil it down, here at Google Cloud,
your trust is our top priority, right? We understand that many of you, many of our customers, really rely on Google Cloud to run your businesses, right? And we know that wouldn’t be possible without deep trust in how we administer our platform So the way I think about this is, when we talk about platform transparency here, what we really mean is, what are the tools and what is the information that we’re giving you, as our customers, to really build that trust in how we’re administering our platform, right? And as we’ll see, that takes a few different forms The first thing I want to talk about is how your data is encrypted, right? And so before I go any further, I just want to say that, at Google Cloud, regardless of how you store your data with us, we are going to be encrypting your data at rest, full stop, right? And in the spirit of transparency– thank you, Cache– in the spirit of transparency, you can go online If you Google, for example, Google Cloud Security white paper, we have a lot of white papers and other resources for y’all so that you can read about our security best practices in general, how we do that encryption at rest, things like that So I would highly encourage all of you, if you are interested, to go take a look at that following this presentation What I’m talking about here are the options that we give you all in GCE, in Google Compute Engine, for how we actually do that encryption at rest, right? And so here you see we have three different options I’ll go left to right And they fall on a spectrum of how much control we give you versus how automated the encryption actually is So we’ll start with the one on the left, which is encryption by default, right?
So if you go into Google Cloud Compute Engine, you spin up a disk, an image, or a snapshot, and you just keep all the default parameters, we will encrypt that for you automatically at rest, no questions asked, using our world-class encryption methodologies, right? So you don’t have to worry about it You just go ahead and spin it up, encrypted at rest Now, we know that some of you, as our customers, may work in industries like banking, financial services, or consulting, where you’re expected or legally obligated to have more rigor or more control in how your data is protected at rest, right? And so as a result of that, we give you some more options to give you more control over how your data is actually encrypted So I’ll talk about that middle one next And this is a thing that we call customer-managed encryption keys And so here what you do is create a key in another GCP service called Cloud Key Management Service So you create that key And then when you create a disk, an image, or a snapshot in GCE, we ask you, hey, which one of these keys do you want to use to actually protect this content, right? And so here you are deferring the creation and the storage of the keys to GCP, but you’re still in full control of the lifecycle of that key You can revoke access You can disable the key You can delete it, et cetera, right? And then finally, on the very right there, we have another option called customer-supplied encryption keys And so in this case, what happens is we just ask you for the key Google Cloud is actually deferring the creation and storage of the key to you, the customer, right? So this gives you full control When you create the disk, for example, we ask you for the key We obviously don’t store it, but we use it to encrypt your content And the next time that disk is used, say at VM startup, we go ahead and ask you for that key again, right?
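As a concrete sketch of the customer-supplied path: a Compute Engine CSEK is a 256-bit key, base64-encoded, which you can hand to `gcloud compute` in a JSON key file via `--csek-key-file` The project, zone, and disk names in the URI below are placeholders for illustration:

```python
import base64
import json
import os

# A customer-supplied encryption key (CSEK) is 256 bits, base64-encoded.
raw_key = os.urandom(32)

# JSON key-file format accepted by `gcloud compute --csek-key-file`;
# the disk resource URI is a placeholder.
key_file = [{
    "uri": ("https://www.googleapis.com/compute/v1/projects/"
            "my-project/zones/us-central1-a/disks/my-disk"),
    "key": base64.b64encode(raw_key).decode("ascii"),
    "key-type": "raw",
}]

with open("csek.json", "w") as f:
    json.dump(key_file, f, indent=2)
```

Remember that Google does not store this key: lose it and the disk contents are unrecoverable, which is exactly the control some regulated customers want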
So this is full control And this is really popular, again, in some of those verticals like banking, OK? Switching gears a little– actually, just kidding Continuing on, like most features in GCE, these controls are available to you in the UI, in the command line, and in the API So here I’m going to show you, just for simplicity’s sake, what these three options look like in the UI So here, this is the VM creation page When you create a VM, we ask you to create a boot disk, of course And you can select either the automatic option on the far left, where, no questions asked, we just go ahead and do it for you In the middle, you’ll see the option to specify a key in Key Management Service So that dropdown shows the set of keys you have there And then on the far right, that’s the customer-supplied option, where we ask you what the key is You input it in that box and you go ahead So the takeaway here really is that, however you want to manage how your data is encrypted at rest, we make it really easy for you to get started and get going with that So switching gears a little bit, another question that we get asked at Google Cloud a lot is, well, hey, you guys are administering our data How much access do you have to it, right? Are you reading it constantly? So we know a lot of you may have sensitive data in our cloud that you trust us with We know many of you may also have auditors who ask the same question, right, as part of doing their job And so here, the first thing I want to start with is, at Google Cloud, right, we do not access your data for any reason other than to fulfill our contractual obligations to you, OK?
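The three options shown in the UI also map onto the Compute Engine API: the disk resource carries a `diskEncryptionKey` field that is either absent (default encryption), a `kmsKeyName` (CMEK), or a `rawKey` (CSEK) A minimal sketch, with placeholder project and key names:

```python
import base64
import os

def disk_body(name, kms_key=None, raw_key=None):
    """Build a Compute Engine disks.insert request body for each option.

    No key argument -> Google-default encryption; kms_key -> a
    customer-managed (CMEK) key name; raw_key -> customer-supplied
    (CSEK) key bytes. All resource names here are placeholders.
    """
    body = {"name": name, "sizeGb": "10"}
    if kms_key:
        body["diskEncryptionKey"] = {"kmsKeyName": kms_key}
    elif raw_key:
        body["diskEncryptionKey"] = {
            "rawKey": base64.b64encode(raw_key).decode("ascii")}
    return body

default_disk = disk_body("disk-default")
cmek_disk = disk_body(
    "disk-cmek",
    kms_key="projects/my-project/locations/global/"
            "keyRings/my-ring/cryptoKeys/my-key")
csek_disk = disk_body("disk-csek", raw_key=os.urandom(32))
```

The same spectrum applies: the less you put in the request, the more key management you defer to Google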

So that means whenever there’s a data access event, right, whenever someone goes to access your data, it has to have a very strong business justification And it has to be in line with that particular statement They are fulfilling our contractual obligations to you You can go online again You can Google access transparency, and you can find the well-defined set of sub-reasons for why we access your data And it turns out by far the most common one, I think over 95% of these access events, happens because our customers, you guys, asked us to access your data as part of, say, a support case, right? But on top of that, you don’t have to take our word for it, right? Whenever Google accesses your data, we’ll create an audit log for you in near real time, right, so that you can hold us accountable to that statement there, right? So let’s take a quick look at what one of those audit logs looks like The font’s probably a little small, so I’ll just walk through this audit log for those of you maybe in the back who can’t see So the first thing you’ll see is this is just a standard audit log But it’s created when an access event happens, when a Google access event happens And the first thing you’ll see highlighted in yellow there is we give you a reason why we access your data It aligns to one of the sub-reasons for fulfilling our contractual obligations to you And so here you see this is actually a customer-initiated support request And we even give you the case number so that you can follow up and understand why exactly we’re accessing your data Here, likely it’s because you asked us to And then in green, you see we give you a lot of information around what exactly Google did with your data during this access event, right? So which method did we call? Which resource did we call it on? How long did it take? What project did it happen in, et cetera, right?
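Because these are ordinary log entries, you can process them programmatically, for example to pull out every Google access tied to a support case The entry below is only illustrative: the field names and reason codes are approximations of the Access Transparency schema, so check the current documentation before relying on them:

```python
def support_case_accesses(entries):
    """Collect (case detail, method) pairs for customer-initiated
    support access events from a list of audit log entries."""
    results = []
    for entry in entries:
        payload = entry.get("jsonPayload", {})
        for reason in payload.get("reason", []):
            if reason.get("type") != "CUSTOMER_INITIATED_SUPPORT":
                continue
            for access in payload.get("accesses", []):
                results.append((reason.get("detail"),
                                access.get("methodName")))
    return results

# Illustrative entry only; field names approximate the real schema.
sample_entry = {
    "logName": ("projects/my-project/logs/"
                "cloudaudit.googleapis.com%2Faccess_transparency"),
    "jsonPayload": {
        "reason": [{"type": "CUSTOMER_INITIATED_SUPPORT",
                    "detail": "Case number: 12345678"}],
        "accesses": [{"methodName": "GoogleInternal.Read",
                      "resourceName": ("projects/my-project/zones/"
                                       "us-central1-a/disks/my-disk")}],
    },
}

print(support_case_accesses([sample_entry]))
```

A scan like this is one way to hold Google to the contractual-obligations statement: any access event without a recognizable justification is something to follow up on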
And so this is really aligning with our goal, with our principle, of being really transparent in how we administer our platform and giving you, our customers, the tools to hold us accountable for that, right? OK, so that brings us to the end Just to quickly summarize, right, we talked through six steps of best practices in Compute Engine I see a lot of you are taking pictures I just want to remind you all that this, like all talks at Google Cloud Next, is going to be posted to YouTube So you can go watch that, drill into one of the steps that was particularly interesting to you, et cetera, right? And the final thing I want to leave you all with is this, right? Google Cloud, like the rest of Google, takes security and privacy incredibly, incredibly seriously, right? So we’re constantly improving And we’re really, really receptive to feedback, right? So please– we’ve worked with Two Sigma We want that to be the rule, not the exception We’re really open to working with you all We want to make sure that what we’re doing aligns with your needs And we want to make sure we address any gaps, right? So leave us feedback Go to User Voice Go to the Developer Console Leave us feedback about this or really any other features It really does help influence our backlog and our decisions, right? So with that, thank you all very much I hope you have a good rest of your time here [THEME MUSIC PLAYING]