Understanding color (Google I/O '17)

SPEAKER 1: Good morning, everyone, and welcome to this talk about color. So, how many of you think they understand color? Raise your hand. Nobody? OK, I guess that’s why you’re here. I gave a similar talk last year at 360|AnDev, and there was a lot of math and a lot of equations, and a lot of worried developers asked me if there’s going to be math in this presentation. There’s going to be no math, no equations. Instead, there’s going to be physics.

So let’s talk about color. The first question we have to ask ourselves is: what is color? It sounds like a deep question; the answer is not so deep. Here’s a definition that applies to most of us as human beings, not as developers, just as regular people: color is a visual perception that can be described by hue, brightness, and colorfulness. Sometimes colorfulness is also called saturation. If you’ve ever used the HSL or HSB colors in the Android APIs, you may be familiar with that kind of definition. What’s really important to understand is that color is just a perception of our brain. It’s not a real thing, and we’re going to see why that is.

So obviously, we see colors with our eyes, and if any of you see color through other means, I would love to talk to you. To understand how we perceive color, we first have to go back, probably to high school or college, and try to understand what light is made of. I’m sure most of you know that light is made of photons, but there’s a duality to light: it can be a wave or a particle. We’re going to look at the wave nature of light first. Light is an electromagnetic wave, and our eyes are just receptors for these electromagnetic waves.

Here’s the electromagnetic spectrum; it’s not to scale. From the shortest wavelengths on the left to the longest wavelengths on the right, we have, in order: the gamma rays, the X-rays, the ultraviolet, the visible spectrum (the tiny little bit in the middle full of colors), then the infrared, the microwaves, and then the radio waves. The part that interests us is that tiny bit in the middle, what we call the visible spectrum. It goes from about 400 nanometers to 700 nanometers, and these are the only wavelengths that we can see.

Why does this matter? Our eyes, like I said, are receptors, and they are actually made of millions of small receptors called cones. Almost everybody has three types of cones that can detect different parts of the spectrum. You see the spectrum here at the bottom; it has all the wavelengths of colors that we can see. And this diagram shows, in three different colors, the sensitivity of each type of cone that we have in our eyes. They are called short, medium, and long. Sometimes they are called blue, green, and red, but it’s not technically accurate to call them that, so instead we’re going to call them short, medium, and long. They are named that way because the short ones help us see in the short wavelengths, so ultraviolets, violets, and blues; the medium ones help us see the greenish colors; and the long ones help us see green, yellow, orange, and a little bit of red. And you can see there’s a lot of overlap between the medium and the long receptors. So now, what is light?
So light is a distribution of several wavelengths. This is what we call the spectral power distribution of a light bulb, the kind of light bulb that you can find in any house. It’s an orange-ish light bulb, and you can see the amount of energy it outputs in different parts of the visible spectrum. You can see that this type of light bulb outputs most of its energy in the red and orange part of the spectrum, and that’s why we perceive it that way.

But what happens when the light from that light bulb hits our eyes? We saw that we have those receptors with different sensitivities to different parts of the spectrum. So we just multiply the distribution of the incoming light with the sensitivity of each of our cones, and the result is what we perceive. When we multiply both, we get this: when you look at one of those light bulbs, you see almost nothing in the blues, you see a little bit of green, and more orange and red, and our brain interprets that as an orange color. The way we actually interpret the output of the cones is a little complicated, and we don’t have time to go into the details here; you can look up how it works on Wikipedia if you are interested. So here’s exactly what happens when we look at a light source: there are wavelengths emitted by the light source, they hit our eyes, they get multiplied by the sensitivities of the cones, and that’s what we perceive.

But most of the time, you’re not looking directly at a light. You’re looking at the different objects that surround us. So where do those objects get their color from? Every object also has what’s called a spectral power distribution: a description of the wavelengths that the object can reflect. We call that the reflectance.
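To make that multiplication concrete, here is a minimal sketch in plain Java, with entirely made-up sample values, of how one cone’s stimulus could be computed from a light’s spectral power distribution and an object’s reflectance (covered next); nothing here is a real API, just the arithmetic:

```java
// Hypothetical illustration of "perception as multiplication": sample a
// light's spectral power distribution, an object's reflectance, and one
// cone's sensitivity at a few wavelengths, then sum their product.
public class ConeResponse {
    public static void main(String[] args) {
        // Samples at 450, 550, and 650 nanometers (blue, green, red-ish)
        double[] lightSpd    = {0.2, 0.5, 1.0}; // orange-ish bulb: mostly red
        double[] reflectance = {0.9, 0.9, 0.9}; // near-white object
        double[] sensitivity = {0.0, 0.4, 1.0}; // "long" cone sensitivity

        double stimulus = 0;
        for (int i = 0; i < lightSpd.length; i++) {
            // Light bounces off the object, then excites the cone
            stimulus += lightSpd[i] * reflectance[i] * sensitivity[i];
        }
        System.out.println("Long cone stimulus: " + stimulus);
    }
}
```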

Some of the wavelengths will be absorbed. For instance, a black object absorbs almost everything that’s incoming, and a white object reflects pretty much all the wavelengths. So, to perceive the color of an object, what happens is we have light come in, it bounces off the object, and some of it is reflected; we just multiply the two distributions. Then it hits our eyes, and we get the final perceived color.

What’s very interesting here is that because it’s a multiplication, many combinations of light and reflectance can yield the same perceived color in our brain. A very simple example: if you have an orange object lit by a white light, you’re going to see orange, but if you have a white object lit by an orange light, you’re also going to see orange. Our brain does a lot of post-processing to help us figure out what is the color of the light and what is the color of the object. A few years ago, there was that famous example of the dress: some people were seeing it in black and blue, some people were seeing it in white and gold. That’s exactly what was happening: some of us were interpreting the result differently, and nobody was wrong. It’s just that without the context, you can’t really know for sure. So we can swap the distributions of the light source and the object, and we’re going to see the same result.

That leads us to something called the chromaticity diagram. It was standardized in 1931 by the Commission Internationale de l’Éclairage, the CIE. It’s a French commission, apparently. It represents all the colors that we can perceive as human beings. On that horseshoe shape, the outside edge is called the spectral locus. It represents all the pure spectral colors; the spectrum of colors that we saw in the previous diagrams actually bends around that edge all along. And everything in between, all the colors that we can actually see, is a mix of all those different wavelengths. So it makes this weird shape; it’s not a rectangle or anything. And there are a lot of colors that live outside of that shape. We call them the imaginary colors, because we cannot see them. No matter how hard we try, you will not be able to see those colors. There are multiple optical illusions that you can find online that will help you find those colors; it’s actually really weird, and not everybody can perceive them. There’s one, for instance, that shows a blue rectangle to one of your eyes and a yellow rectangle to the other, and the way I see it is this really weird color that you can’t really describe, that keeps changing from blue to yellow but never settles on one of those two colors.

The visible spectrum is actually a little bit more complicated than that. What you’re seeing is a 2D slice; there’s a z-axis coming towards you, and that is the brightness. The large footprint you see at the bottom is the dark colors. It’s there because our eyes are better at seeing dark colors than they are at seeing light colors.

We’ve seen what color is for us as human beings. But I think everybody in the audience is a developer, so the real question is: what is color for us as developers? What does it mean for your application?
So color really is an [INAUDIBLE] scheme for brain sensations. The formal definition is that it’s a tuple of numbers, a list of numbers, defined within a color model and associated with what we call a color space. We’re going to look at some examples to help you understand that. Here are some of the color models that you may be familiar with. RGB is one of the obvious ones; I’m sure everybody has used it. CMYK, if you have ever printed a picture or a book or something, you might have dealt with as well. And there are others, for instance one called L*a*b, and one called XYZ; there are many others. What’s interesting is that the color model defines how many numbers we need to define a color. You’re used to RGB with three values, but CMYK requires four values, for instance.

The one that we’re interested in today is the RGB color model. It uses a triplet of values, hence the name. I’m sure most of you, or all of you, are familiar with the hexadecimal notation. There are many ways of representing the tuple of numbers, and this is one of them. It’s pretty popular, especially on the web, and you have probably used it a lot on Android when you want to pass a color directly to one of the APIs. This is a pinkish color. And this is the same color represented as a triplet of 8-bit unsigned integers. You are also most likely familiar with it; this is something we use a lot on Android, when you set a color on a Paint or when you extract the red component of a color. And this is another notation, actually the one I prefer. It uses floats, so the values are between 0 and 1. And it’s interesting because it’s more versatile.

You can use it to represent [INAUDIBLE] colors, and we’ll see that Android, in O, actually makes use of the float notation to have negative colors and colors that go beyond 1.

So the big question is: once we have those numbers, what colors do they actually represent? I told you this is a pink, but you’ve seen that spectrum of colors. There are many, many pinks; there’s actually an infinity of pinks. So which one of those pinks does this represent?

To reproduce colors, all of our displays use additive light. We use red, green, and blue lights in our TVs, our phones, our computers, our old CRT monitors, and we just mix those different lights. So the numbers that you just saw, the RGB triplets, are (it might sound obvious) an intensity for each one of those lights. Let’s say we pick three lights: a red light, a green light, and a blue light. They’re not perfect spectral lights, just arbitrary lights found somewhere in the visible spectrum. Together they form a triangle, and when you have an RGB color in your application, you can only represent one of the colors within that triangle. You cannot represent colors from the entire visible spectrum. That triangle is what we call a color space, and there’s an infinity of them: depending on the three lights you choose, you can create any of an infinity of color spaces.

Here’s, for instance, one of the widest color spaces we could create. The problem is that with a triangle, with just three lights, we cannot encompass the whole visible spectrum; we have to choose a smaller slice. This one in particular is called the [? IW ?] wide gamut RGB color space. There’s no device that I know of that can capture or recreate this color space. And if we wanted to create an RGB color space that contains all the visible colors, we’d have to create lights that live in the imaginary space, lights that we cannot perceive, because we’ve seen that our eyes cannot perceive anything outside of that horseshoe shape.

Color spaces are more complicated than that, though. A color space is actually defined by three components. The first ones are called the primaries; there are three of them. Then we need a white point. And then we need something called transfer functions.

This might look like a complicated diagram, but what I did is put the visible spectrum on the left and overlay three triangles that represent three common color spaces used for still images. The smallest one in the middle, the blue triangle, is the one we call sRGB. I’m sure you’ve heard that term before. sRGB stands for Standard RGB. It was designed in the ’90s, and it roughly matches the reproduction capabilities of the CRT monitors of back then. To this day, it’s still used everywhere; it’s pretty much the universal color space, the only color space that you can count on. There are other color spaces. There is Adobe RGB, for instance, the orange triangle. Because it’s bigger than the sRGB color space, we say that it’s a wide color space, or that it’s wide-gamut; the triangle formed by the three vertices is what we call a gamut. And then there’s ProPhoto RGB, which you can see extends beyond the visible spectrum. This is not a color space that we use to actually represent colors on screen; it is what we call a working color space. For instance, when I take a picture with my camera, my camera is set to Adobe RGB, so it captures everything in the color space that you see on screen.
Then when I import my photo into Lightroom, Adobe Lightroom internally works in ProPhoto RGB. It won’t be able to recreate on screen all the colors that it’s working with, but the idea is to have as much precision as possible. So you can ignore this kind of color space for your needs in Android applications.

I said a color space has three primaries, and the primaries really are the coordinates of each of the three vertices of the triangle in that chromaticity diagram, in the visible spectrum. They identify, when you say you want a color that’s red equals 1, green equals 0, blue equals 0, what red we’re talking about. And if you look here on the screen, sRGB and Adobe RGB, when you say red equals 1, have the exact same red, but ProPhoto RGB has a different red. So it’s two different reds for us, and they [INAUDIBLE] for computers; we’ll take a look at that. Then we also need a white point, and the white point is the same idea: it gives us the coordinates of white in the color space. We’ll get back to that.

I also mentioned transfer functions. Transfer functions are a little bit complicated; that’s where a lot of math comes into play. You’ve probably heard about them under the name gamma, so if you’ve heard about gamma correction, that’s actually transfer functions. I gave a talk last year about transfer functions, and I don’t have time to talk about them today; I’m going to show you a link at the end of this talk. If you’re interested, you should definitely go look at that talk. There are a lot of things about transfer functions that can impact your applications.

So I’ve been talking about color spaces, but why do we care so much about them? The problem is that every device out there has a different color space. For instance, you can have a phone with an LCD screen that’s going to be close to sRGB; a phone with an OLED screen that’s going to be closer to a color space called P3, or to the Adobe RGB color space that we just saw; and something else again for your laptop, your computer, or your TV. And things get even worse: even if you have two phones of the same model from the same manufacturer, both supposed to be, let’s say, P3, there are variations in the manufacturing process. They won’t be the exact same P3, so the colors won’t be exactly the same on those two devices.

I’ll show you an example of what happens. Let’s imagine that we have content that we created for sRGB, so it’s in the sRGB color space. We designed it at home on a computer that shows us sRGB colors; that’s the triangle you see. Then we take that content as is, we don’t do anything to it, and we show it on a display that’s Adobe RGB. What we’re doing is taking those RGB triplets, those values we had, and reinterpreting them in a different color space. So suddenly the green that you had in sRGB, that green equals 1, is a different green. It’s going to look completely different to your user who’s using an Adobe RGB screen.

So here are concrete examples. This is a photo I took. I took it in Adobe RGB, I processed it in ProPhoto RGB, and I converted it properly, on my calibrated monitor, to sRGB. This is what it should look like.
Actually, this is not what it should look like; it’s what it should look like on my laptop. The screens here have a color space, I have no idea what it is, but it’s definitely not sRGB, and this is really wrong. What matters, though, is the difference between the next photos. So let’s say that this is proper sRGB; this is how I wanted you to see the picture. Now, if I display the picture as is on a different screen, say a P3 screen, it is going to look like this. If I go back and forth, you can see there’s a difference in contrast, and some of the colors are a little bit more saturated. So already my photo does not look the way I intended. Then, if I displayed it on a ProPhoto RGB display, if such a thing existed, it would look like this: supersaturated, really garish. I don’t know about you, but I really don’t like this picture.

And that’s what happens when you only affect the primaries. You can have similar issues when you affect the white point. The next slide here is the same sRGB photo; we kept the primaries, but I changed the white point to something bluish. In my photo, well, you know, it’s underwater, so it kind of makes sense for it to be blue, but it is not the way I wanted it to look. So again, if you take multiple Android devices and you put them side by side, chances are that some screens will appear yellow to you, and some will appear blue or green, and that’s because of the white point: we have different white points across the different displays. And once again, even with the same model of device from the same manufacturer, there are going to be white point variations.

When this happens, we say that colors are unmanaged. This is what Android has been doing since Android 1.0, and I hate it, absolutely hate it. It’s horrible. Your designers will spend hours and hours slaving away on their computers, creating a beautiful UI; then you test it on a phone and it’s completely different, and you test it on another phone and it looks completely different again. So you might be wondering: how can we have our design look exactly the way we want?

The solution is something called color management, and it’s new in Android O. The idea is that every color that we want to display needs to be associated with a color space. We need to know what the original intent of the design was, so we need that information. Then, through the magic of a lot of math and matrices (it’s actually more complicated than just a matrix, but that’s most of it), we do a controlled conversion to the destination color space. So when we manufacture a device, we use special instruments that measure the capabilities of the display. We measure the primaries and the white point of the device, we create a destination color space, and then we can convert your original content to that destination color space and preserve the colors that you wanted.

So this is something we’re introducing in Android O, and there are two parts to it. There’s first color management proper, and then there’s something called wide color gamut rendering; we’re going to take a look at both. But first, before you worry about Android, you need to make sure that you or your designers are using color spaces properly in your design applications. There are two things you can do with pictures: you can assign a color space, or you can convert to a color space.
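As a rough sketch of the difference, using the Android O ColorSpace APIs that come up later in this talk (the component values here are arbitrary): assigning keeps the numbers and merely reinterprets them, while converting changes the numbers so the perceived color stays the same.

```java
import android.graphics.Color;
import android.graphics.ColorSpace;

public class AssignVsConvert {
    public static void demo() {
        ColorSpace srgb = ColorSpace.get(ColorSpace.Named.SRGB);
        ColorSpace proPhoto = ColorSpace.get(ColorSpace.Named.PRO_PHOTO_RGB);

        float r = 0.8f, g = 0.6f, b = 0.7f; // some sRGB pink

        // "Assign": keep the same values and just claim they are ProPhoto RGB.
        // The numbers are unchanged, so the perceived color shifts.
        Color assigned = Color.valueOf(r, g, b, 1f, proPhoto);

        // "Convert" (or "match"): transform the values so the perceived color
        // stays the same; every component changes, but the color looks identical.
        float[] converted = ColorSpace.connect(srgb, proPhoto).transform(r, g, b);
    }
}
```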

So I’m going to switch to a demo, but first I just want to show you something. Earlier I said that this chromaticity diagram we had, the horseshoe shape, is a slice, and so are the gamuts, those triangles. This is sRGB in 3D. It’s interesting because I mentioned that our eyes see more in the dark, and you can see it in that diagram: the brighter the colors, the fewer hues we can perceive.

For the demo, I’m using a tool called Affinity Photo. And this is a picture I took, same process: I took it in Adobe RGB (I was using RAW files from my camera, from my DSLR), I used ProPhoto RGB to do my work in Lightroom, and I created this sRGB version of the picture. If we zoom in, pretty much any good design tool will show you, somewhere, the color space of your image. So here we know it’s an sRGB image. On my calibrated display, it looks fine.

Like I said, you can assign a color space, often called an ICC profile, or you can convert. If you assign, what you’re saying is: keep the colors the way they are, keep the same values, I just want to move them to a different color space. So let’s say we move to the ProPhoto color space; now it looks like this. This is exactly what Android does, and it’s not the right way of doing it. Sometimes it’s what you want, because you know what you’re doing, but very often it isn’t. So instead, what you want to do is convert the ICC profile; it’s sometimes called match. We pick ProPhoto, and when I click, you’ll see no difference, and that is what we wanted. But what happened is that every single RGB value has changed: it just looks the same, we just have different values stored inside the image.

To give you an idea of what happens to your Android design, this is a screenshot I took of my Pixel (2016) running Android N.
On Android, we pretty much assume that all the content is sRGB. So when I opened this file in the tool, I was warned that there was no color space, and the tool assumed sRGB. Now, when you display the screenshot on a [INAUDIBLE] display, because we don’t manage color, what happens on that Pixel (2016) is this. I hope you can see the difference: if you look at the icons at the bottom right, you can see that everything becomes more saturated. So that’s what you designed, and that’s what you see on an actual phone.

All right. So when you don’t know what the color space is, just assume it’s sRGB; it’s the only assumption you can make. This is why, for instance, all my photos are sRGB: when I put them online on the web, I have no idea what display you’re going to use, I have no idea what device you’re going to use, I don’t know if your app is going to be color managed or not, and sRGB is your safest bet. It won’t always be correct, but it’s your safest bet.

When you use a design tool and you have a color picker, if there’s no information about the color space anywhere in the tool, you are most likely using either sRGB or what we call the native gamut, which is whatever the screen can do. For instance, this is what Apple Keynote does: when you pick a color, you pick the color directly for your display, not for another display. If your application is color managed and your document has a color space, you’re picking a color in that color space. Some color pickers are more advanced. This is, for instance, macOS Sierra: if you click that little gear in the second tab, you can choose the color space you want to use for the color picker. Actually, when I was working on these slides, I picked colors for the slides, and then I was creating diagrams in a different tool and tried to match the colors, and I forgot that I had to change the picker to sRGB, so it took me five minutes to understand why my colors were different. So even when you know about color spaces and color management, it can sometimes be confusing.

Another tool you can use, on macOS for instance (other platforms have similar tools), is the Digital Color Meter. It’s in the /Applications/Utilities folder. It lets you pick any color on the screen, and using the drop-down, you can choose the color space you want for that color. So you can look at the color in the native gamut of the display, or you can use sRGB, or P3, or whatever.

On Android, if you have a recent version of Android and a Pixel device, for instance, you can go to the developer options and turn on the sRGB mode. What it does is apply color correction, color management, to the display, to make sure that all the colors are interpreted and reproduced as sRGB. You might be surprised at first; a lot of people complain that the colors look washed out. They are actually more accurate; it’s just a matter of habit.

All right, so now some code. On Android, what you’ve been using so far is what we call a ColorInt. It’s just an int, and it contains alpha, red, green, and blue.
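As a reminder, a minimal sketch of that layout; the hex value is just an arbitrary example:

```java
import android.graphics.Color;

public class ColorIntExample {
    public static void demo() {
        // A color int packs ARGB into 32 bits: 0xAARRGGBB
        int color = 0xFFE91E63; // an arbitrary opaque pink

        int alpha = Color.alpha(color); // 0xFF
        int red   = Color.red(color);   // 0xE9
        int green = Color.green(color); // 0x1E
        int blue  = Color.blue(color);  // 0x63
    }
}
```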

And the only assumption we can make, because we don’t know what color space you are using, is that it’s sRGB. So it’s pretty simple. Now, in O, we’re introducing a crazy new API. We’ve had the Color class for 10 years, but only as static methods. Now you can create instances of the Color class; you can actually use Color the way it was meant to be used. You just call valueOf(), you give it your RGB values, and that gives you an instance of the Color class in sRGB. Note that I didn’t specify the color space, so the assumption is that it’s sRGB. You can also specify the color space: here we also call valueOf(), we pass the alpha (the order is RGBA with floats), and then we say that we want that color to be in the Adobe RGB color space. What this allows us to do is work with colors and convert them from one color space to another. So I have this Adobe RGB color, and if I call convert(), I can convert it to Display P3, for instance. Someone’s happy.

Something else we are introducing: color ints are really useful because they’re easy to manipulate, they’re small, they’re easy to store. The Color class, on the other hand, is very generic; it represents colors using float components and a reference to a color space, so it can be a pretty heavy object. So now we have something we call the color long. It’s the same idea as the color int, we use a primitive type to store a color, but we also store the color space of that color. It’s similar to valueOf(); you just call pack() instead, and you get a long. Here’s the format of the long: we have 16 bits for the red channel, 16 bits for the green channel, 16 bits for the blue channel, 10 bits for the alpha channel, and then six bits that identify one of the built-in color spaces. Those 16-bit values use something called half floats: floats that use only 16 bits instead of 32. There’s a new API in O called android.util.Half that lets you manipulate half floats. Anybody who does HDR or advanced rendering in OpenGL or Vulkan might find that API useful. If you use the Color class, you don’t have to worry too much about it; there’s a ton of utility methods on Color that will use the Half API on your behalf.
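Here is a minimal sketch of those Color calls; the component values are arbitrary:

```java
import android.graphics.Color;
import android.graphics.ColorSpace;

public class ColorExamples {
    public static void demo() {
        // No color space given, so sRGB is assumed
        Color pink = Color.valueOf(0.9f, 0.6f, 0.7f);

        // Same values, explicitly in Adobe RGB (the order is R, G, B, A with floats)
        Color adobePink = Color.valueOf(0.9f, 0.6f, 0.7f, 1.0f,
                ColorSpace.get(ColorSpace.Named.ADOBE_RGB));

        // Controlled conversion to Display P3
        Color p3Pink = adobePink.convert(ColorSpace.get(ColorSpace.Named.DISPLAY_P3));

        // A color long: a primitive that also remembers its color space
        long packed = Color.pack(0.9f, 0.6f, 0.7f, 1.0f,
                ColorSpace.get(ColorSpace.Named.ADOBE_RGB));
        Color unpacked = Color.valueOf(packed);
    }
}
```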
We also have the ColorSpace class. It’s well documented, and it’s pretty easy to use: you just call get(), and you can retrieve one of the common color spaces that we provide. You can also create your own color spaces if you want. Two methods on ColorSpace are interesting. There is isWideGamut(), which tells you if the gamut is wider than sRGB’s. And there is getModel(), which tells you the color model, and therefore how many components are in the color space: RGB, CMYK, Lab, stuff like that. If the model is RGB, you can cast the color space to ColorSpace.Rgb, which gives you access to more APIs. You can query the primaries, you can query the white point, and you have access to the transfer functions. This is also very well documented.

If you want to do conversions between color spaces, it’s pretty simple. You call connect(), and you give us the source color space and the destination color space. The reason you need to call connect() is that we need to make sure both color spaces use the same white point; when they have different white points, we do a little bit of math internally to effectively make them use the same one. Then you can call the transform() method, give us RGB values, and we give you back the converted RGB values.

You can also change the white point of a color space by calling adapt(), ColorSpace.adapt(). Here, for instance, we do what I did in one of my examples, the photo of the fish that was very blue: we took the sRGB color space and changed the white point from something called D65 to something called D50. White points are usually defined as a color temperature: the perceived color of a black body when you heat it to that temperature. Here, D50 is about 5,000 degrees Kelvin, and D65, which is a very common white point in color spaces for our monitors, is 6,504 degrees Kelvin.
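A short sketch of those ColorSpace calls; the values passed to transform() are arbitrary:

```java
import android.graphics.ColorSpace;

public class ColorSpaceExamples {
    public static void demo() {
        ColorSpace p3 = ColorSpace.get(ColorSpace.Named.DISPLAY_P3);
        boolean wide = p3.isWideGamut();        // wider gamut than sRGB?
        ColorSpace.Model model = p3.getModel(); // RGB in this case

        if (model == ColorSpace.Model.RGB) {
            ColorSpace.Rgb rgb = (ColorSpace.Rgb) p3;
            float[] primaries  = rgb.getPrimaries();  // xy coordinates of R, G, B
            float[] whitePoint = rgb.getWhitePoint(); // xy coordinates of white
        }

        // Conversion: connect() reconciles the white points,
        // transform() converts the actual values
        ColorSpace.Connector connector = ColorSpace.connect(
                ColorSpace.get(ColorSpace.Named.ADOBE_RGB),
                ColorSpace.get(ColorSpace.Named.SRGB));
        float[] srgbValues = connector.transform(0.9f, 0.6f, 0.7f);

        // Chromatic adaptation: sRGB with a D50 white point instead of D65
        ColorSpace adapted = ColorSpace.adapt(
                ColorSpace.get(ColorSpace.Named.SRGB), ColorSpace.ILLUMINANT_D50);
    }
}
```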

Bitmaps, as you know, support embedded color spaces; they are called ICC profiles. Until now, we would just ignore them completely on Android. Now, if you use BitmapFactory to decode a bitmap, you can call getColorSpace() on the bitmap, and we will tell you what it is. Most likely, for most of the images you load, and presumably for all the resources that come inside your APK, the answer is going to be sRGB. You can call isSrgb() on the color space to check what kind of color space it is. And one thing that’s very important: all the bitmaps on Android are always in the RGB color model. We might expand on that in the future, but we don’t let you use L*a*b or CMYK or XYZ or any of that; they are always RGB. So right now, when you call getColorSpace() on a bitmap, you can always cast the result to ColorSpace.Rgb, but that might not be future proof, so just make sure by checking the color model.

You can do more interesting things with BitmapFactory. On BitmapFactory.Options, we have this horribly named field called inJustDecodeBounds. It was originally created to let you query the dimensions of an image without having to decode all of its pixels, so it’s really quick and gives you an idea of how big the image is going to be. Over the years we’ve kind of abused this field, so it also tells you the configuration of the bitmap (you know, is it ARGB_8888, is it RGB_565?), and now it also tells you what the color space is. So if you want to know the color space of a bitmap ahead of time, you ask us to give you just the bounds, just the dimensions: you call your decode method on BitmapFactory, you pass your options, and there’s a field called outColorSpace that tells you the color space of the bitmap. So if you want to make sure a bitmap is in the right color space before loading it, you can do this. And if it’s not in the right color space, you can use another field in Options, called inPreferredColorSpace, to tell us what you want the color space of the decoded bitmap to be. In this example, for instance, we query the color space of the bitmap and see that it’s Adobe RGB. I don’t want Adobe RGB, I want sRGB, so I can use inPreferredColorSpace to force the system to convert it at load time, and then just call decode.

We are also introducing 16-bit bitmaps in Android O, bitmaps that use 16 bits per channel. Those bitmaps are always, always in a color space called linear extended sRGB, and we’re going to take a look at what that is. So when you have a 16-bit bitmap, don’t try to convert its color space; we’re not going to let you, at least for now.

This is how you create a wide gamut bitmap. You just call createBitmap(): you specify the width, the height, the configuration, a boolean that tells us whether there’s alpha in the bitmap, and then you give us your color space. Pretty simple.

Bitmap, as an API, lets you read and write pixels inside the bitmap. Because getPixel() and setPixel() use color ints, they have to use sRGB. So when you call getPixel() on a bitmap that is not sRGB, we do a conversion for you. Same thing with setPixel(): we expect sRGB, so if your color is in another color space, you have to do the conversion yourself first. Now, if you use this other API called copyPixelsToBuffer(), it gives you access to the raw data of the bitmap, in the native color space of the bitmap, left completely untouched. You have to be a little bit careful there because of the new configuration for 16-bit bitmaps: if you load a 16-bit PNG, the data in that byte buffer is going to be half floats, not color ints. And that’s where you might want to take a look at android.util.Half.
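A sketch of that decode flow; the helper names, the file path handling, the dimensions, and the Display P3 choice for the wide gamut bitmap are all assumptions for illustration:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ColorSpace;

public class BitmapDecodeExamples {
    // Hypothetical helper: decode a file, forcing a conversion to sRGB if needed
    public static Bitmap decodeAsSrgb(String path) {
        // First pass: ask for just the bounds (dimensions, config, color space)
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeFile(path, options);
        ColorSpace decoded = options.outColorSpace;

        // Second pass: if the file is not sRGB, convert it at load time
        options.inJustDecodeBounds = false;
        if (decoded != null && !decoded.isSrgb()) {
            options.inPreferredColorSpace = ColorSpace.get(ColorSpace.Named.SRGB);
        }
        return BitmapFactory.decodeFile(path, options);
    }

    // Hypothetical helper: create a wide gamut bitmap in Display P3
    public static Bitmap createWideGamutBitmap() {
        return Bitmap.createBitmap(1920, 1080, Bitmap.Config.ARGB_8888,
                true /* hasAlpha */, ColorSpace.get(ColorSpace.Named.DISPLAY_P3));
    }
}
```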
All right, so now, what happens when you draw bitmaps on the screen? Until now, what we did was take a bitmap, assume it’s sRGB, and send it to the screen. And if the screen is not sRGB, too bad: the colors are going to be completely wrong. This is still what we do by default on Android O. Now, if we have a bitmap that we know is not sRGB, because it has a color space associated with it, we do an sRGB conversion on your behalf in the rendering pipeline. It can be a little bit expensive, so you should avoid non-sRGB bitmaps as much as you can if you don’t need them, but at least the colors are going to be correct on the display.

Now, if you render a bitmap into another bitmap (you create a destination bitmap, you create a canvas for it, and you draw into that canvas), we convert from whatever the source color space is to whatever the destination color space is. So if you want to convert a bitmap from one color space to another, not at load time, this is the way you do it: you just create the destination bitmap, you create the canvas, and you draw. Pretty simple.
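A sketch of that conversion-by-drawing approach; the helper name and the sRGB destination are assumptions:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.ColorSpace;

public class BitmapConvertExample {
    // Convert a bitmap to sRGB by drawing it into a new sRGB bitmap
    public static Bitmap convertToSrgb(Bitmap source) {
        Bitmap destination = Bitmap.createBitmap(
                source.getWidth(), source.getHeight(), Bitmap.Config.ARGB_8888,
                source.hasAlpha(), ColorSpace.get(ColorSpace.Named.SRGB));
        Canvas canvas = new Canvas(destination);
        // The color space conversion happens during this draw call
        canvas.drawBitmap(source, 0f, 0f, null);
        return destination;
    }
}
```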

So we make all these assumptions about sRGB, but like I said earlier, many of the displays on our phones have wide gamuts; they can show more colors than sRGB. The problem is that if we take your sRGB content, and we don’t stretch it to the entire gamut of the display anymore but keep it in that little triangle, we have all those unused colors that we’re not taking advantage of. So in Android O, we’re adding this new API, and it’s super complicated: it’s just one attribute that you add to your manifest, per activity. You have to tell us that you want to render using those extra colors, that you want the wide color gamut mode.

The way it works is as follows. If you are on a device that does not support that mode (not all devices will support it), your window is going to use the RGBA 8888 format. That’s what we’ve been using for 10 years, so there’s nothing new there. You have your sRGB content, which is drawn directly on screen. If you have non-sRGB content, we convert it to sRGB and send everything to the screen. Colors may be wrong, but that’s just because the device does not support the new wide color gamut rendering. On a device that does support wide color gamut rendering, we allocate a much larger window: we use 16 bits per channel, so it doubles the size of your window in memory, and it doubles the bandwidth requirements. It is an expensive thing, so if you don’t really need wide color gamut rendering, think twice before enabling it. If we have sRGB content, we send it directly to the display as well. And if we have non-sRGB content, we convert it to a color space called extended sRGB.

What is extended sRGB? It’s a kind of weird color space. It’s a really big color space, way bigger than the visible spectrum, much, much bigger. Some of the values are negative and some are greater than 1; they go all the way to 7.5. What’s interesting about that color space is that all the values between 0 and 1 match the sRGB color space exactly. That enables us to take your existing content, all your sRGB content, basically everything you have in your app today, and draw it directly; we don’t need to do any conversion, because it matches sRGB. So we only pay the cost of a conversion when we’re drawing non-sRGB content. The expense is that we need 16 bits per channel, because we need a lot of precision and range to encode this extended sRGB space, but it’s much better for you. It’s much simpler: you just have one attribute in your application. Interestingly, because we use 16 bits per channel for extended sRGB, we can, or will be able to, maybe in the future, render in HDR directly. So we could have HDR user interfaces.

We also have a new resource qualifier called widecg, for wide color gamut. You can create layouts or strings or [INAUDIBLE] that are specific to a display that supports the wide color gamut rendering mode. And we have a few APIs that you can use to query whether the device supports wide gamut. If you have a Resources instance, you can grab the Configuration, and the Configuration will tell you if you have a wide gamut display. If you have a View, you can call getDisplay() to get a Display object, and you can ask the display whether or not it’s wide color gamut.
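A sketch of those two queries, assuming they run inside an Activity with a View that is attached to a window:

```java
import android.app.Activity;
import android.view.Display;
import android.view.View;

public class WideGamutChecks {
    public static boolean isWideGamut(Activity activity, View view) {
        // From the configuration: does the current screen support wide gamut?
        boolean fromConfig = activity.getResources()
                .getConfiguration().isScreenWideColorGamut();

        // From the display the view is attached to (null before it is attached)
        Display display = view.getDisplay();
        boolean fromDisplay = display != null && display.isWideColorGamut();

        return fromConfig || fromDisplay;
    }
}
```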
So the main conclusion of all this is: don’t panic. I put a bunch of hearts on the slide to make you feel better about all this. I know it’s complicated. And no matter how hard you try, and I don’t want to sound disheartening, your colors are going to be wrong somewhere for someone. They’re wrong for me on that screen, and I know a thing or two about color spaces and color management. You’re not in control of everything: you’re not in control of the final display, and you’re not in control of all the software that intervenes in an application’s rendering pipeline. So don’t worry too much about it. Do your best. Make sure that your designers work on calibrated displays, make sure they work in sRGB, and make sure the images contain color spaces. And you should be OK, or at least mostly OK.

If you want to learn more about transfer functions, there’s the talk I gave last year with Chet. If you go to that URL and jump to minute 29, that’s where the color part of the talk starts. The first half is about animations; it’s also super interesting. You can also look at the documentation for ColorSpace.Rgb; there’s a lot of detail and information about transfer functions and what they mean for you. If you want to learn even more about color, I gave a talk at [INAUDIBLE] US a couple of months ago where I talked about banding and dithering: what is banding, how can you fight it, and what do we do in the platform to fight it on your behalf. If you’re interested, go to that URL; it starts at minute 36. And I think that’s it. I only have one minute left, so I don’t have time for questions, but I’ll be at the Android sandbox if you want to talk more about color and color management. Thank you.