Keeping Your Cool in the Data Center

Good afternoon, everyone. Welcome to the fall semester of the i4Energy Center seminar series. I'm sorry to say that starting this semester we won't have enough money to pay for lunch, but there are cookies, so instead of feeding your stomachs we're going to feed your minds with stimulating seminars. And we have a terrific speaker to start off the series today, Dr. Clifford Federspiel. He's the founder, president, and chief technology officer of Vigilent, which he founded in 2004, building on his pioneering research in dynamic cooling technology and wireless networking, to address the emerging need to treat cooling and energy consumption as a managed resource. Dr. Federspiel began his career in R&D at Johnson Controls after receiving his bachelor's in mechanical engineering from the California Polytechnic State University in San Luis Obispo and his master's and PhD from MIT. In 1998 he joined UC Berkeley, where he was a specialist at the Center for Environmental Design Research and the Electronics Research Laboratory, and I was fortunate enough to work with Cliff around 2003 and 2004. So let's welcome Cliff.

Thank you, and thanks to all of you for coming. It's good to see some friendly faces in the audience, and welcome to all of you whom I don't know yet. Vigilent is a company that focuses on cooling management in mission-critical facilities, so I thought I'd talk to you about some of the problems, issues, and interesting technical challenges there, and give you a little bit of an idea of what we do to solve some of them and the results that derive from that.

So, mission-critical cooling. Data centers are a good example, although they aren't the only type of mission-critical facility. We focus not just on data centers, where you have racks of servers and things like that, but also on telecom central offices, where you have a very similar situation. In either case you have electronic equipment that produces a lot of heat, and the heat has to get out or bad things happen. And when bad things happen in these kinds of facilities, they're really bad.

Pointer? Okay. So these are the racks here that have servers and switches and routers and things in them, and this is one of the cooling units in this particular data center. In this case it's just a great big box with a cooling coil in it that has cold water piped to it from a central chiller plant, but sometimes these boxes have compressorized cooling in them, and there are other ways to get the heat out of these data centers. But this scenario, a floor-standing box on a raised floor with perforated tiles delivering cooling to the racks, or to the bays if it's a telecom office, is very common in these kinds of facilities.

So what does it mean when it's mission critical? What most people expect of mission-critical is that it's basically always up, and when it isn't up it's a really big problem; I'll talk a little more about that in a minute. This is typically the most expensive asset that a company owns. Even a small data center costs tens of millions of dollars to build, big ones run to hundreds of millions, and that iCloud data center that Apple's building is about a billion-dollar facility. So it's a really expensive asset, it's really important, and it has to work. The maintenance costs in these facilities are also much higher than in a building like this one, because the system is never down.
You have to perform maintenance in this very high-uptime, mission-critical environment while everything's up and in full production, so the costs of doing that are much higher than they are in an admin-type building. And these buildings consume a lot of energy: even a small one can have a multi-million-dollar utility bill. All of that creates a lot of challenges for managing and operating these facilities.

So, just to give you an idea, here are some actual cases where things have gone wrong in data centers. For most of these I can't tell you who it was or anything like that, but here's an example: a six-minute outage due to a chiller plant failure, and $14 million in lost revenue and penalties from six minutes of downtime. Another situation: 15 minutes, $25 million. That's a pretty big penalty, and that's a scary thing if you're an operator of one of these facilities.

Because of the energy consumption of these data centers, there's a lot of effort these days in building high-efficiency data centers of the sort that Google, Facebook, and Yahoo are building. A lot of these don't use compressors at all; they bring in outside air to cool, and if they need more cooling they'll use spray cooling, basically glorified swamp coolers, to cool the air and get the heat out. Here's an example from one of these. On a hot day, when the system was supposed to be bringing in hot but dry air, the outdoor air dampers closed and the humidifiers went to a hundred percent, because they had a bug in their BMS programming. This resulted in condensation forming in the power supplies, and servers shutting down spontaneously. Obviously a catastrophic outage for that facility.

So those are whole-data-center failures, but individual air conditioner unit failures can also result in outages. Here's an example where they had a single air conditioner failure. They had sensors in the data center that would read temperature and send an alarm message to somebody to let them know it's getting too hot, but because only one unit failed, those sensors were reading about 80 degrees and didn't trip the alarm. It did, however, cause the NetApp storage to get exposed to temperatures over 100 degrees Fahrenheit and shut down. Those NetApp storage arrays are big, and they're really important to the operation of one of these data centers. So even though this was a single air conditioner unit failure, it was an all-hands event: everybody who runs that data center was there fixing that problem at three o'clock in the morning, and that is just not a fun event if you're running one of these facilities.

So if you're an operator, what do you do? What they typically do is, first of all, buy a lot more cooling than they need and run all of it all the time. The way I think of this policy is "always on and always cold": they just want to be blowing a lot of air that's freezing cold, so that even if something does fail, it's so cold that by the time it starts to warm up to a temperature that's uncomfortable for the equipment, they've had time to react. And then, a lot of the time, surprisingly, they have fairly limited instrumentation. In the example I gave you, the sensors were in the wrong places; they just didn't invest in enough instrumentation to catch the small failure that ended up tripping them up. And they know they're exposed to these vulnerabilities, so they're going home at night praying that they're not going to get that three-o'clock-in-the-morning call.

On buying more capacity than needed, here's a visual example. This is a State of California data center, and this view is actually just 180 degrees from that photo I showed you before. They just haven't filled this space in yet, and yet they have all these air conditioners.
Before we got in there, all of them were running all the time. In fact, they had tried to manually shut some of them off, because they knew they didn't need it all; these guys aren't idiots, they know it doesn't all need to be on all the time. But when they did shut units off manually, they'd get a shift in load, something would change, and they'd get an incident at three o'clock in the morning. So they just said: forget it, it's all on, all the time; it's got to be that way.

Now, on the energy side, this is the scenario. This is from a report that was done for the EPA by folks up at Lawrence Berkeley National Lab, and it shows data center energy consumption in billions of kilowatt-hours. The blue is the servers, storage, things like that; the white is cooling and power distribution. You can see it just escalating: 2000, 2005, 2010. This has gotten so acute in some parts of the country that, for example, in Northern Virginia, where basically the commercial internet started, ten percent of all the electricity that the utility there, Dominion Power, generates goes to data centers, and nearly two-thirds of the load growth they expect over the next five years is going to come from data centers. The other interesting little statistic I found on the internet is that if the data center industry itself were a country, it'd be the fifth-largest consumer of electricity in the world.

So this rapid growth, and the fact that the load is just so big, is creating a huge amount of pressure on people to deal with it. In the US we have new codes and standards that are forcing at least new construction of mission-critical facilities to use less power: Title 24 has provisions coming down the pike, as does ASHRAE Standard 90.1, which ties into federal standards. In other parts of the world, not so much here in the US, they have carbon taxes: some of the Scandinavian countries in the EU, the UK has one, and Australia is thinking about one. All of these basically increase the cost of power and drive more pressure to do better.

Now, Japan doesn't have incentives the way we do here; there are no rebates in Japan, historically anyway, and they don't have carbon taxes. I think they basically rely on two things. One is that electricity is really expensive in Japan, so that alone drives people to be conservative in their use of it. The other is a social mindset that, I think, causes people to work together to do the right thing. But of course, as all of you know, they had a massive earthquake and tsunami that took out the Fukushima nuclear plant, and as a result the Japanese people have lost confidence in nuclear power, and all 54 nuclear reactors are now offline. That's created a gigantic energy crisis in Japan, where the government is now going to big businesses and saying: we'd really like you to reduce your consumption by twenty-five percent. Just like that; they just want it to go down by twenty-five percent. And when summertime comes around in Japan (it was really hot this summer there), the buildings just turn off the air conditioning and hand out little fans. You're in a conference room in Japan and everybody's fanning themselves, because it's 85 degrees in the conference rooms; they can't afford to use air conditioning just to keep people comfortable. So they're making really big efforts to solve that crisis, in particular in big loads like data centers.

One of the ways that people deal with efficiency in the data center is by design. I mentioned already the Facebook, Google, and Yahoo type data centers that use free cooling: outside air. It's sometimes cool in the middle of the night; why are you running compressors to make cold air when there's cold air right outside? That kind of thing is working its way into codes and standards. Some of those designs get rid of the compressors and the chillers entirely, and for the hot summer days they use things like spray cooling. And then there's variable speed. The data center of 10 or 15 years ago had no variable speed at all; everything was just on, running full speed, moving lots of air and lots of cooling fluid. Variable speed is becoming much more prevalent, and the Japanese are really, really good at this. Because of the earthquake risk in Japan, they don't like having water in their data centers, so they don't want big chilled-water plants piping water around and they don't want spray cooling. So they build air conditioners with compressors in the units, just like your home air conditioner.
Except it's all variable speed: every component, every motor, the compressor motors, all of it. As a result, they're able to deliver twice the efficiency of conventional air conditioning of the same sort that you find in other parts of the world. So that's efficiency by design.

But that doesn't necessarily get you an efficient data center, because you can destroy those savings through bad operation. Well, I wouldn't say bad, but through operating practices that waste energy and interfere with a design that was intended to be efficient. "Always on, always cold" is clearly not an efficient way to operate your data center: you don't need the full capacity of your cooling equipment running if you don't have the full capacity of the load, and most facilities don't. This works two ways. If you're making a very cold data center, you're transferring heat into the floor, the walls, the ceiling, all the stuff that's outside the data center, and in a lightly loaded data center that's a bigger fraction of the total than in a heavily loaded one. In a heavily loaded data center there's just so much heat generated inside that the little bit of parasitic loss to the outside is kind of negligible.

But by being able to raise the temperature of the data center, you can improve the efficiency of the air conditioning equipment, so the always-cold strategy really destroys the efficiency of the equipment. Designs are also usually a fixed thing at the beginning of time: you design your data center on paper, it looks great, it's got a low PUE, which is an efficiency metric people use for data centers. But then when you start operating it, everything changes, and it's no longer like it was when they designed it. You have a different load, a different layout, just different things, and all of those changes may mean the design isn't quite right anymore.

The other thing, and this is what I'm really going to focus on in this talk, is that even in these high-efficiency data centers, a challenge that really eats up efficiency is the instability of the controls. This is from one of those high-efficiency data centers; this is straight off the internet, they publish a whole bunch about what's going on in this one. They have air handlers operating in parallel, delivering air to the same place, strongly interacting with each other, and, no surprise if you know much about controls, they fight with one another. In other words, it's unstable.

The normal, almost universal, control strategy, though, is simply return-air temperature control of air conditioners of the sort I showed you in the photos earlier. It works like this. You saw that blue box in the earlier photo; it has a temperature sensor on top that senses the temperature of the air being drawn into it. This is a downflow unit: it draws air from above, down through the unit, through the filters, into a cooling coil below, and then discharges it into the raised floor so it gets dispersed throughout the data center. That sensor is connected to controls down here on this panel, where you can set what temperature you wish you could get at that point, and there's just a simple feedback loop. It's a completely decentralized control strategy, often not even networked for monitoring purposes, and the reason for that is it makes it very easy to just plonk these units down wherever somebody thinks more cooling is needed, without having to really think about the control operation of the facility.

I showed you some pictures already; you can see there are many of these units. This layout is pretty typical for a mid-sized data center: these are the rows of racks, laid out maybe a dozen across, and these are the air conditioners around the perimeter. This particular facility has 22 of them, and everything is arranged nicely into cold aisles and hot aisles: the blue is the cold aisle, the not-blue is the hot aisle, and they're literally right next to each other. Most of these data centers are like those photos I showed you: completely open plan. Air follows the path of least resistance, so you end up with strong interaction between these units. Consultants and operators observe this, and you hear talk at conferences about fighting behavior and instability.
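To make that setup concrete, here is a minimal sketch of the decentralized return-air loop just described: one independent controller per unit, each watching only its own return sensor. The class name, gains, and setpoints are illustrative assumptions, not any vendor's actual firmware.

```python
# A minimal sketch (illustrative, not any vendor's firmware) of the
# decentralized return-air control described above: each unit runs its
# own feedback loop on its own return-air sensor, blind to its neighbors.

class ReturnAirCRAC:
    """One floor-standing cooling unit with a simple local feedback loop."""

    def __init__(self, setpoint_f: float = 72.0, gain: float = 0.1):
        self.setpoint_f = setpoint_f   # return-air temperature setpoint
        self.gain = gain               # how aggressively to chase the error
        self.cooling_cmd = 0.0         # chilled-water valve / compressor, 0..1

    def step(self, return_air_f: float) -> float:
        # Positive error: return air warmer than setpoint, so add cooling.
        error = return_air_f - self.setpoint_f
        # Accumulate the command in proportion to the error (crude integral
        # action), clamped to the actuator's physical range.
        self.cooling_cmd = min(1.0, max(0.0, self.cooling_cmd + self.gain * error))
        return self.cooling_cmd

# On an open floor, unit A's cold discharge can reach unit B's return
# sensor, so B backs off while A ramps up: the "fighting" in the talk.
crac_a, crac_b = ReturnAirCRAC(), ReturnAirCRAC()
cmd_a = crac_a.step(return_air_f=76.0)  # A sees warm air, ramps up
cmd_b = crac_b.step(return_air_f=68.0)  # B sees A's cold air, backs off
```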
So we've looked at whether anybody has done a real technical job at the stability analysis of this kind of system, and amazingly we find none of it. I can't find a single paper at a technical conference, from, say, a place like Berkeley or a place like LBNL, where somebody has actually taken a look at this and figured out how it works, why it works the way it does, and what might be better. So we've done a little bit of that ourselves, and I'll show you what it reveals.

This idea of pairing the return temperature with the chilled-water valve, or the compressor, or whatever the mechanism for cooling in that unit is, is a pairing you can analyze with techniques commonly used in process control, where you want decentralized control but need to figure out the best way to combine a sensor with an actuator. The technique, called the relative gain array, was developed back in the 1960s by a guy named Bristol to solve exactly this problem: which sensor should I pair with which actuator in my process so that I get the best control? In our case, though, the pairing combinations are set; we don't get a choice, it's already been dictated by the equipment manufacturer, so the diagonal of the relative gain array is the only thing that really matters. Normally you compute the array and then make selections about which input and which output you want to combine together.

The way you interpret the array is this. If you get values less than zero on any of the diagonals, it means the system is going to be unstable in one way or another. If a value is zero, it implies that actuator has no influence on that sensor. If it's 0.5, it implies the interactions are about as strong as that loop's own feedback. If it's one, it implies there's no interaction; that's what you'd like to have, nice and easy to control. And if it's greater than one, it means the interactions decrease the loop gain, so as it gets greater and greater, that loop gradually gets harder and harder to control.

This is a graphic for that data center I showed you with the 22 air conditioners: a visual of the relative gain array. This teal color is 0, blues are less than 0, and the diagonal, like I said, is what matters. Here you can see we have a value less than 0 there, and some close-to-zeros here and here. If you look at the numerical values, the minimum diagonal is less than zero, it's minus 0.3, and four of them are less than 0.5. This system, under return-air control, is unstable, and if you have a monitoring system you can just look and see that in fact that's the case.

You get lots of different behaviors from this kind of scenario. You get fighting behavior, where one unit is trying to overcome another, and when they fight, some of them drive others off: one unit cools, its cold air gets sensed by another unit's temperature sensor, and that causes the second one to shut off; it stays off until the data center gets so hot that the cold air itself warms up enough to bring it back on. Other units get stuck at one hundred percent, because when some get stuck off there's not enough capacity remaining to regulate the temperature. And then often they just oscillate furiously; they thrash and wear out the equipment like mad, and the operators are left wondering why compressors that are supposed to last 15 years are wearing out every year.

We've done this kind of stability analysis at other data centers, and we find qualitatively the same thing: this return-air control strategy does not work; it's fundamentally flawed. And what's even more amazing is that we've done audits of hundreds of data centers, and I could count on my fingers the number that don't use return-air control. What this means is that all the telecom data centers supporting your mobile phone, all the financial data centers supporting the banks, all the government data centers supporting the processes of government, all the data centers that support the internet: all of that is susceptible to this vulnerability, and yet, as far as I can tell, there's been no research on it. I find that really remarkable, and I think it'd actually be a great opportunity for some folks at Cal, maybe a graduate student, to take a look at this and see what could be done.
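For the curious, the relative gain array itself is nearly a one-liner. Here is a sketch using a made-up two-unit gain matrix in which each unit's cold air hits its neighbor's return sensor harder than its own, which reproduces the negative diagonal just described. (The minus 0.3 figure above came from the real 22-unit facility; these numbers are invented for illustration.)

```python
import numpy as np

def relative_gain_array(K: np.ndarray) -> np.ndarray:
    # Bristol's RGA: elementwise product of K and the transpose of its inverse.
    return K * np.linalg.inv(K).T

# Steady-state gains: K[i, j] = effect of unit j's cooling command on the
# return-air temperature seen by unit i (negative: more cooling, cooler air).
# Here each unit's cold air reaches its neighbor's sensor more strongly
# than its own, the open-floor coupling described in the talk.
K = np.array([[-1.0, -1.5],
              [-2.0, -1.0]])

rga = relative_gain_array(K)
print(rga)           # [[-0.5  1.5]
                     #  [ 1.5 -0.5]]
# The pairing is fixed by the manufacturer (each unit controls on its own
# return sensor), so only the diagonal matters: -0.5 < 0 means unstable.
print(np.diag(rga))
```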
So the need here, as I see it, is smarter controls. You need controls that can take advantage of the kind of multivariable analysis that goes into something as simple as calculating a relative gain array. I think there's also a lot of benefit to having a system that's predictive: if you have that multivariable capability, you have a transfer function, and you have the analytics built in to figure out whether or not the system is unstable, you can use it for predictive purposes. And then there's something I haven't touched on as much: in a data center, the IT equipment, the servers and all of that, has an equipment life of about three years, while the cooling equipment has a life of about 15 years. That means five equipment refreshes on the IT side for every one refresh on the cooling side. The stuff is constantly coming and going: the racks come and go, and the amount of heat they generate, the way they're operated, the way their internal controls work, all of that changes fully five times over the life of the cooling equipment. So, to avoid having to recommission data centers continuously, you have to have software that can essentially reconfigure and recommission itself. And then it has to be simple for operators and installers.

It has to be easy for them to use, it has to give them access to the information, and they have to be able to understand what it's doing and why, particularly during the installation phase. If it's a retrofit, it has to be installed in a live production data center, and anything construction-related that you do in a live production data center is extremely risky. So you want installation to be quick and non-disruptive, and you want to minimize the need for people to make any kind of decision, because in every kind of mission-critical environment, whether it's data centers or aircraft flying across the country, it's overwhelmingly human error that causes things to fail. The number one cause of outages in data centers is somebody inadvertently hitting the emergency power off switch, which is there for safety purposes in case somebody's getting electrocuted. It's a great big red button on the wall, and if you hit it, all the power gets cut. Well, guess what: it gets hit from time to time. I've heard people refer to it as the career-changing button. So you really want to minimize the need for humans to do things, particularly during the installation of a system like this.

So let me show you what we do. We have a system with some of these features. The software itself is predictive. What we have is, well, a transfer function, for those of you who know what that is, but we refer to it as an influence model, and what it tells you in this case is the influence of this air conditioner on all the sensors. Here we had almost 50 temperature sensors on this 10,000-square-foot floor, and the blue means that this unit will cool these spots on the floor, these three server clusters. You can actually see a little pink right here: the model is predicting that when this unit gets activated, supposedly to cool, it will actually heat up some parts of the data center a little. We see that all the time. Air basically follows the path of least resistance, and when you turn this unit on, while you may pressurize areas near it, you can just as well depressurize other places; the amount of air coming out of the floor can go down, the flow pattern can change, you can get air sneaking from the hot aisle into the cold aisle. It's very hard to predict ahead of time, but it's very repeatable: turn this unit on or off and more or less the same thing happens over and over. Same for this one, and you can see a very different pattern here: this air conditioner up here, number three, also affects this server cluster, but it has a big influence in this higher-density area in the lower right.

Now, this demonstrates the ability to learn. What happened in this data center is that about six weeks after we put the system in, the customer went and put in containment in this high-density part of the floor, so that the hot air discharged from these servers gets captured in that hot aisle and routed up over the ceiling and back down into four of the twelve air conditioners on the floor. The idea is to not mix hot and cold air, for efficiency purposes, and that in fact did work. Air conditioner number three is one of those four units.
You can imagine that if you put in containment like that, it has a huge effect on the airflow pattern in the data center. The software just automatically relearned, by itself, that the old influence isn't the influence anymore; this is it now, almost entirely in the now-contained higher-density area for this air conditioner. This is kind of interesting, because it exposes how this predictive capability can reveal things about redundancy: this server cluster between these two units used to have redundant cooling, and now it doesn't. There are ten other units on this floor, so it's probably not a problem, but it shows the other kinds of things you can do with a predictive system.

The way we deploy the whole system is illustrated by this diagram. We put little sensor modules on the tops of racks, with probes that measure rack top and bottom temperatures. These are usually wireless sensors; they get routed through a wireless gateway with an IP uplink that the software can access. And we can do this bidirectionally: we can get sensor data in, and we can send commands out, so we can do control over the air, which again allows a very quick, clean, simple installation, even of the control actuations, and that's really important in a mission-critical environment. Because it's wireless, we can instrument these kinds of things in about 30 minutes.
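Stepping back to the influence model for a moment: the talk doesn't disclose how Vigilent learns it, but one simple way to get the flavor is a least-squares fit of sensor temperatures against cooling-unit commands, refit on recent data so it can "relearn" after a change like containment. Everything below (the function name, shapes, and synthetic data) is an illustrative assumption, not the product's algorithm.

```python
# A sketch of what an "influence model" could look like: a learned linear
# map from each cooling unit's command to each rack-top sensor's temperature.
import numpy as np

def fit_influence(ac_states: np.ndarray, sensor_temps: np.ndarray) -> np.ndarray:
    """Least-squares fit: temps ~ baseline + influence @ ac_states.

    ac_states:    (n_samples, n_units)   0/1 or 0..1 cooling commands
    sensor_temps: (n_samples, n_sensors) rack-inlet temperatures
    Returns an (n_sensors, n_units) influence matrix. A negative entry means
    "turning this unit on cools this sensor"; a positive entry is the
    counterintuitive heating effect mentioned in the talk.
    """
    # Add a constant column for each sensor's baseline temperature.
    X = np.hstack([ac_states, np.ones((ac_states.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, sensor_temps, rcond=None)
    return coef[:-1].T  # drop the baseline row: shape (n_sensors, n_units)

# Synthetic demo: 12 units, 48 sensors, random on/off histories.
rng = np.random.default_rng(0)
states = rng.integers(0, 2, size=(500, 12)).astype(float)
true_infl = -2.0 * rng.random((48, 12))            # cooling-only "truth"
temps = 80.0 + states @ true_infl.T + 0.2 * rng.standard_normal((500, 48))
est = fit_influence(states, temps)
print("max fit error:", np.abs(est - true_infl).max())
```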

If you have a BMS and you need to integrate with it, that's done through open protocols that our system supports, and if things are already networked together, say networked to the air conditioners, then we can use that path instead of the wireless path. What it looks like is this: you can see the modules up here, you can see the probes coming down in front, and this is a photo of what the wireless gateway looks like.

The effect of having a system that lets us rapidly deploy many sensors is that it gives us, and our customers, a lot of visibility into what's going on in the data center. They're not left with a handful of temperature monitoring points to alarm on; now they can have hundreds of temperature sensors all over the place, put in just the right spots, so that they're measuring the temperature of the cooling fluid for the servers, which is the air being drawn in at the front of those racks. The first thing the software does is stabilize those unstable local controls. Then it dynamically right-sizes all that capacity: it figures out, in a gross way, what should be on, what should be off, what should be turned up or down. And then it optimizes the distribution: maybe you don't need so much cooling over there and you need more over here; it figures that out, and it does it in a way that uses the least amount of power.

Here are some examples that show the impact of doing this. This is a data center in Japan; you don't really need to be able to read any of the text, I'll just tell you what the graphs show. This one, this one, and this one show compressor speed in hertz as a function of time, over a number of days, before and after the system software was turned on. Before, you can see just a massive amount of compressor thrashing: on and off, going all the way from completely off up to the minimum, thrashing around between the minimum and the maximum, and a similar kind of thing over here; lots of thrashing, a little less here, this one obviously isn't completely shutting off. After the software comes on, you can see that it shuts this unit off completely, it brings this one up to a high level and leaves it there, and this one corrects over a few days and then finally settles out to a fairly stable level.

This shows temperatures on one of the racks, before and after. Before: huge, high-frequency temperature swings that are really destructive for IT equipment; all the mechanical components inside a server are adversely affected by the expansion and contraction of temperature cycles like this. Afterwards: nice and steady. And that lets us raise the temperature while the peak temperatures go no higher than they were before. So we get savings not only from stopping all the thrashing (there's a ton of energy wasted in all those starts and stops and cycles) but also because we can raise the average temperature of the data center considerably, increasing the efficiency of the equipment and reducing the losses to the rest of the facility.
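Going back to the "optimize the distribution" step for a second: with a linear influence model in hand, choosing the cheapest mix of cooling that keeps every rack-inlet sensor below its limit can be posed as a small linear program. This is only a sketch of the idea under that linearity assumption; the formulation and all the numbers are mine, not Vigilent's optimizer.

```python
# Sketch: pick cooling commands u (0..1 per unit) minimizing total power
# while keeping predicted rack-inlet temperatures below a limit.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_units, n_sensors = 4, 6
influence = -rng.uniform(3.0, 6.0, size=(n_sensors, n_units))  # deg F per unit of command
baseline = rng.uniform(85.0, 90.0, size=n_sensors)  # temps with all cooling off
limit = 80.5                                        # max rack-inlet temp, deg F
power = np.array([5.0, 5.0, 7.5, 7.5])              # kW per unit at full command

# minimize power @ u   subject to   baseline + influence @ u <= limit
res = linprog(c=power, A_ub=influence, b_ub=limit - baseline,
              bounds=[(0.0, 1.0)] * n_units)
print("commands:", np.round(res.x, 2), " total kW:", round(float(power @ res.x), 1))
```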
So here's an energy benefit. This is back at that State of California data center I was showing you, not the Japanese data center, and this is the before: just the power for moving air around the data center. These little squiggles here are some tests that were being done on the system. Turn the software on, and in a matter of hours it's down to less than half of what it was before, and it stays there. So, a huge energy benefit.

But wear and tear is a really big deal too. Like I mentioned, it drives maintenance costs up and it causes failures, sometimes failures that really matter, even if only a single piece of equipment is involved. This is yet another data center, in North America, with compressorized cooling in the air conditioners. Instead of variable speed, each unit has two compressors that are staged: when it gets a little too hot, one comes on; when it gets even hotter, the second comes on. Sometimes there are intermediate stages, with different mechanisms for getting a kind of half capacity out of one compressor within the unit. What this chart shows, for this particular data center, is the number of compressor cycles per day across the entire room. This is the baseline period, and this is after the system's turned on, and you can see that after a number of days it has figured out how to cut the number of compressor cycles to forty percent of what it was before.

So what causes compressors to fail from cycling? First of all, they're sitting on spring suspensions that are intended to reduce vibration, and the springs fatigue. Every one of these compressor cycles is a jolt to those springs, and after enough cycles they break.

And it's not like you can just replace the springs; you have to go in and replace the entire compressor, because the springs are integral to it. The windings in the electric motors that drive these compressors also get a shock, a mechanical shock, every time you turn one on or off: turning it on produces a surge of current, and that stresses the windings. After a while you get a little bit of mechanical friction, so every cycle does a little bit of wear, a little bit of wear, and sooner or later the windings short-circuit and the compressor fails. The cycling rate basically just scales the failure: if a compressor is designed for two hundred thousand cycles on average, then cycling at five thousand per day gets you to failure two and a half times faster than cycling at two thousand.

But that's not the end of the story. If you cycle a lot, if you cycle enough, you end up with what's called short cycling, and short cycling has two problems. One is that if you short-cycle a lot, you can end up sucking liquid refrigerant into the compressor; most compressors can't withstand that, and it causes catastrophic failure. The other problem is that the oil that lubricates the compressor is carried around by the refrigerant, and if you short-cycle, the refrigerant never reaches a stable equilibrium in which the oil can move completely around the circuit and lubricate the compressor. So short cycles effectively cause the compressor to be operated without lubrication, and that obviously wears it out a lot faster. So if you had two hundred thousand cycles of design life on your compressor and you cycle it very fast, you don't get two hundred thousand, you get a hundred thousand or fifty thousand, and you're burning through them faster, so instead of failing in, say, fifteen years, it fails in a year.

So, the benefits of being able to solve these stability problems. First, a big reduction in energy consumption: we typically see that in one of these data centers that's overprovisioned and really thrashing around, we can get about a forty percent reduction, even in the ones with hyper-efficient air conditioning, and that's because of the system effect. When equipment manufacturers design cooling equipment, they design it to be optimal in a test room: they set it up to simulate a hot summer day, run it, see what the COP is, and say, look at that, it's twice as good as anybody else's, a super-efficient system. But if it's out in the field thrashing around, sometimes surging, doing all the things I just showed you, that can completely erode the efficiency you'd get from it, and that system effect is what allows us to get those percentage savings. Second, you get improved stability from less cycling, less on/off, less up and down; that extends equipment life, reduces maintenance, and makes the operations and maintenance staff's jobs a lot easier. But it also mitigates the risk of failure. I mentioned before that one way instability manifests itself is that one air conditioner produces cold air that shuts off another one, and if it shuts it off in a powerful way, the data center temperature might have to rise 10, 15, 20 degrees before it's hot enough at the discharge of the running unit to allow the stopped one to turn back on.
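The cycle-life arithmetic in that story is worth writing down, since two effects multiply: cycling faster spends the cycle budget sooner, and short cycling shrinks the budget itself. A back-of-the-envelope sketch using the talk's round numbers (the time unit of the cycling rate cancels in the ratios):

```python
# Back-of-the-envelope cycle-life arithmetic from the talk. The rate's
# time unit cancels in the ratios, so only the round numbers matter.

def time_to_failure(design_cycles: float, cycle_rate: float) -> float:
    """Time to failure, in whatever unit cycle_rate is expressed per."""
    return design_cycles / cycle_rate

# Cycling 5,000 vs 2,000 per unit time: 2.5x faster to failure.
print(time_to_failure(200_000, 2_000) / time_to_failure(200_000, 5_000))  # 2.5

# Short cycling also cuts the design life itself (liquid slugging, oil
# never circulating), say from 200,000 cycles down to 50,000. Combined:
print(time_to_failure(200_000, 2_000) / time_to_failure(50_000, 5_000))   # 10.0
```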
So by mitigating that risk, and by providing sensors in the right places so operators can actually see what's going on in their data center with enough fidelity to recognize problems, the operators get to sleep at night. And that's a really big value, because these guys literally live in fear.

So, just to summarize. We talked about this instability, and what I assert is that this instability, particularly this problem with return-air control, is a vulnerability for basically the entire internet and all of its infrastructure. Every time you go to any website, there's this kind of cooling equipment in the background allowing that site to work, and all of it is unstable. That instability wastes energy, reduces equipment life, and increases the risk that you're not going to be able to get those YouTube videos you want when you want them. And solutions exist: I talked about one that we have, but there are others out there, and there are other ways of solving this problem. I think it's a big problem and it merits a lot of consideration. So let me open the floor to some questions. I'll start with you, Lindy.

Thanks for an interesting talk. One of the questions I have is: how were you able to start doing pilot projects of your technology when these are such mission-critical facilities? How did you convince them, and how did you do small-scale and then eventually larger-scale pilots?

Yeah, good question. There are a few components to that. First of all, we didn't start off in mission-critical; we started off with an application for buildings like this one, and we got a system built with which we could show it worked and delivered the energy savings. All of the nuts and bolts were the same; some of the application software was a little different. So we had some case studies (granted, not in mission-critical) that could help customers get comfortable. We also partnered with bigger organizations who, for various reasons, had a need for a system like ours, and we worked with them to line their customers up and help them help us convince those customers that this was the right thing to do. And finally, we took advantage of rebates in California to essentially deliver some of the first systems at no cost to the customer. So all of you who pay for electricity in California essentially paid for us to get some of those first sites going.

Question: what happens if there is a bona fide power outage? Do most of these data centers have backup generators? And do some of the more critical pieces of infrastructure in this country, like finance or aeronautics, have duplicate data centers in different parts of the country?

Good question. It depends, but I would say the most critical ones have some combination of all of that. A Tier 3 or higher data center will have a UPS, so the IT equipment isn't running on raw utility power, and the UPS can usually keep the IT equipment running for 10 to 30 minutes before its batteries run out. They'll have backup generators too, and the cooling equipment is usually on the backup generators. The generators take anywhere from about three to six minutes to start, so with 10 to 30 minutes of backup power for the IT equipment, presumably they have enough time to get the generators started and the cooling back online, and keep the system going even through a utility loss of power. Now, some of them do have backup data centers, so if they have a problem in one they can move all their operations over to another, but that's really expensive. The other thing is that some of the big monolithic apps, like Google's, are distributed: an entire data center can fail and you might not even notice, because the app is designed to be completely parallelized. If you try to get some information from Google and the first place the request wants to go isn't available, it can just go to another place. But most financial services apps and things like that are not built that way, so if they need that extra level of protection, they need a backup data center they can immediately switch over to.

Hi, great talk. Data centers really are a huge problem, and I especially liked the analysis you did on stability. My question is: I would imagine this is a hot area that a lot of people are looking into.
As you mentioned, there are other solutions, so could you give a quick rundown of how your solution compares with or differs from some of the other efforts out there?

So, other ways that people are addressing at least the energy aspects of the problem are sometimes with monitoring systems, monitoring systems that might have some analytics built in and that provide recommendations on what to do. They give you not only a data dump but maybe recommendations like: we can tell that the efficiency of these units over here is starting to drop off, and you should go do something about it. There's also a lot of efficiency-by-design work: going in and retrofitting, providing more efficient operation through more efficient equipment in the data center. I think there's quite a bit less on stabilizing the data center, because, like I said, there's a kind of non-technical understanding of the problem among the consulting-engineering community in this data center infrastructure space.

But there hasn't been a really clear technical analysis of what the problem is and of ways to fix it, so that area, I think, is not really being addressed at all, as far as I can tell.

In your model, it seems to me that you're basically using temperature to calibrate and test. Are there other variables, like airflow and pressure, that would be helpful if you added them to both your physical measurements and your model?

Yeah, good question. The usual things people care about on the power and cooling side of the data center are: temperature, number one; humidity; sometimes pressure measurements at various places, maybe the underfloor pressure, or pressure drops across filters to see if they're loading up; and power. We do all those things, we monitor all of them. The primary thing we focus on controlling is temperature, because for the most part that's the variable that, if it gets out of control, and it can get out of control quickly, will cause outages.

Hi. You claimed forty percent energy savings, and that was because the previous system was constant airflow. But what if the previous system already has variable speed drives? How much energy would you be able to save then?

Yeah, good question. I showed you some examples, like that Japanese case, from a data center with those all-variable-speed Japanese air conditioners: variable-speed compressors, variable-speed evaporator fans, variable-speed condenser fans, the whole thing. It's twice as efficient as an air conditioner of the same sort you'd find here in the US, and yet we've been able to get those same percentage numbers, because of the system effect. Now, if the whole system is already more efficient because the cycle efficiency is better, the absolute kilowatt-hours you save might not be the same, but percentage-wise the savings opportunity is still very high, still in that range.

Getting my exercise. Hey, Cliff, it's nice to see your good work, and huge savings, thank you. My question, and maybe it's not a question to many of you, but it is to me: you mentioned a few times that when the temperature is raised, the efficiency really rises. I'd like to understand the physics of this, because we're doing the same thing in office buildings, where the equipment, or the capacity of the chillers, is oversized. Why, when the temperature is raised, are the efficiencies that much higher?

So let me explain it in terms of air conditioners with compressorized cooling. You've got the compressor; you've got an evaporator, where the heat goes in, and the evaporator is cold; and you've got the condenser, where the heat comes out, say up on the roof or outside the building, where you're rejecting the heat to the atmosphere. If the data center is cold, the temperature of the evaporator is going to be lower, and then the temperature difference between the evaporator and the condenser will be greater. Now, the temperature difference between these two heat exchangers is directly related to the pressure difference, because you have two-phase flow. You've got this high temperature outside in the condenser and a lower temperature in the evaporator; if the data center is cold, both the temperature and the pressure differences are higher, and that pressure difference is what the compressor has to work against to move the fluid around the cycle. So if you can raise the evaporator temperature by raising the temperature of the whole room, the pressure difference the compressor works against goes down, and it goes down in a nonlinear way, where a little bit of difference can make a big difference. That's the mechanism for getting the efficiency up on those air conditioners.
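A rough way to see the numbers behind that answer is the ideal (Carnot) COP, which depends only on the evaporator and condenser temperatures; a real vapor-compression cycle is worse, but it scales the same way. The temperatures below are illustrative.

```python
# Ideal (Carnot) COP as a stand-in for the real cycle: raising the room,
# and therefore the evaporator, shrinks the lift the compressor works
# against. Temperatures here are illustrative.

def carnot_cop(t_evap_f: float, t_cond_f: float) -> float:
    # Convert to absolute temperature (Rankine) for the Carnot bound.
    t_evap_r, t_cond_r = t_evap_f + 459.67, t_cond_f + 459.67
    return t_evap_r / (t_cond_r - t_evap_r)

print(round(carnot_cop(45.0, 110.0), 1))  # cold room:     ~7.8
print(round(carnot_cop(55.0, 110.0), 1))  # room +10 F:    ~9.4, about 20% better
# The gain is nonlinear: COP ~ 1/(T_cond - T_evap), so each degree of
# lift you remove buys more than the last one did.
```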
Is it possible, since I know a lot of high-tech companies are using solar and fuel cell sources of energy, to use those to energize the backup? Or do those have to be base-loaded somehow, so they can't, not theoretically but economically, be used for that?

Well, I know some of these high-tech companies, like Google, are using a lot of solar at some of their facilities.

But these days, if you're building a new data center, the low end of its power consumption is something like 10 megawatts, and to put enough solar on the roof or out in the parking lot to deliver 10 megawatts is not possible given solar technology as it is today. Maybe there's some opportunity to help keep batteries charged or something like that, but given the size of these facilities and the low power density of solar, I don't think it's possible for solar to power them completely.

A two-part question. First, on data centers: LBNL and EPRI and others have been working on DC-powered data centers, since servers are digital devices, and they're showing ten percent energy savings and ten percent cooling savings. I wonder if you've seen any uptake of that in the marketplace or in designs?

So, we see DC power in telecom facilities, because they've had it forever; since you mention both, that's where it lives. I don't know of any place where it's gone into recent construction, especially outside telco, where people aren't really familiar with it. Even the telco guys, for their non-telco data centers (the ones that are different from the equipment managing phone lines), often aren't using DC power, because it costs more.

Okay, thanks. The second part: knowing your beginnings in the buildings side of this, I'm wondering if you could comment on how you're seeing the market for data centers versus buildings, and how that's playing out for Vigilent.

That's a good question. Like I said, we started with an application for buildings, and we still have it, and we still have customers interested in it who are paying for it. But it's harder to build a growth company in that area, I think, because it's not as important. If times get tough or budgets get tight, you can ignore your admin building, you can not invest in it, and it will be okay; but you cannot ignore your 300-million-dollar data center and expect things to be okay. Companies have to keep investing in their mission-critical infrastructure, but they don't have to keep investing in the non-mission-critical kind. Eventually, I think, the buildings market will probably be much bigger for us, because there's simply a lot more of it, but in terms of nice-to-have versus must-have, the data centers are more of a must-have and the admin buildings are more of a nice-to-have.

You probably hinted at that at the beginning, with the fans in the meeting room in Japan versus their data center cooling. Exactly.

Okay, I think that's all for the questions. A great talk, Cliff; thank you very much. Thank you.