NREL Panel Discussion on Energy Efficiency and Renewables

Alright, thank you everybody, come on in and sit down and let's get going again. Listen, it's a real pleasure to be moderating this panel. We have the home team here; you can't come to Boulder and not hear from NREL, the National Renewable Energy Laboratory, and we're just so pleased that Steve Hammond and a couple of his colleagues are here today. We thought it would be a terrific idea to have them participate and give us an update on some of the real cutting-edge computational work that's underway here. As a number of you know, NREL has one of the newer supercomputing centers in the DOE system, and Steve Hammond, who will be speaking first, is the head of that supercomputing center. Also joining him is Mike Sprague, a senior scientist in the center whose research interests include computational mechanics of fluids and structures and numerical methods for modeling; and Vladan Stevanovic, a senior scientist in the materials science center here at NREL, who is also an assistant professor in the Metallurgical and Materials Engineering Department at the Colorado School of Mines. So we have a full Colorado contingent here, and we look forward to hearing from you about what's going on. Steve, do you want to kick it off?

Thank you, Susie. I want to thank the organizers for inviting us to speak; it's a pleasure going to a meeting and having it be a home game. I think it's 15 minutes from my garage to the parking lot here, which is closer than getting to the lab. When I spoke at this meeting a year ago in Seattle, I talked about our data center, our energy efficiency, and what we were doing, so today I want to elaborate a bit on how we operate the center, our role, and where we sit in the DOE complex. I'm going to give a broad brush of the types of applications we're supporting, and then I'll turn it over to Mike and Vlad: Mike will talk about research in modeling the flow physics of whole wind plants, which is a capability type of problem, and Vlad will cover materials science, which is more of an ensemble, capacity kind of application. Let's see if this goes.

So I'm guessing most of you know the structure of the Department of Energy as it pertains to the upper left portion of the org chart. There's NNSA on the far left; then the Office of Science and Energy under Under Secretary Lynn Orr. The first box under there is the Office of Science; go two boxes below that, highlighted in red (I'm not sure how well you can see the eye chart), and that's the Office of Energy Efficiency and Renewable Energy. There's also Environmental Management, so there are a number of energy offices under Lynn Orr. Our primary sponsor is the Office of Energy Efficiency and Renewable Energy (EERE), a sister office, if you will, or companion office to the Office of Science. EERE has 11 offices within it, much like the Office of Science has Basic Energy Sciences, Biological and Environmental Research, Fusion, High Energy Physics, and Nuclear Physics. The EERE offices span three directorates: renewable power, energy efficiency, and sustainable transportation. That includes solar, geothermal, and wind and water power; on the efficiency side it's homes, buildings, advanced manufacturing, and federal energy management, which looks at improving the efficiency of buildings within the government complex; and, as you'd imagine, in transportation there's vehicles, fuels, and hydrogen and fuel cells.

When we look at the DOE portfolio of computing, most of the focus at this meeting to date has been at the tip of the spear, the leading-edge systems driven by the Office of Science and NNSA. There are also lots of institutional computing systems sitting in there, and I think it's been mentioned that there's a sort of missing middle when you look at HPC resources for the energy offices, something that's maybe ten percent of a leadership-class system. And it's not just hardware: what EERE is looking to do is establish computational science, modeling, and simulation as a means for advancing their mission, just as the Office of Science and NNSA have done for theirs over the past
decades. So we're relatively new, but the focus is on developing a culture of high-performance computing and modeling and simulation for the energy offices, and on filling a gap that's not being met at the leadership-class systems. It's not just hardware; there's domain expertise involved that isn't necessarily indigenous to the other labs and offices. Our mandate is that we at NREL provide the primary high-performance computing facility for the Office of Energy Efficiency and Renewable Energy. The system we have, a 1.2-petaflop system, is the largest system in the world dedicated to renewable energy and energy efficiency. We are EERE's only laboratory; compared to NNSA and the other parts of DOE we're relatively new to large-scale computing for advancing the mission, and we're in the process of working with EERE leadership on defining a long-term HPC strategy and stewardship plan. Our objectives are to meet the computational needs of EERE-funded projects independent of where that work is conducted. EERE actually has projects at most of the labs in the complex as well as at universities, and they provide support to industry, so whether that work is at NREL or at the other labs, we're positioned to support it. We're looking to advance scientific discovery and have impact, to develop and nurture a culture of computational science across the breadth of the EERE mission, and to establish productive and efficient facilities to support that computational capability.

If you were at the meeting last year, you heard about our high-performance computing data center. We have a brand-new 182,000-square-foot research facility; the data center sits in the middle, indicated with the orange arrow. We have 10,000 square feet of usable, uninterrupted floor space (that's our raised floor); it's a 10-megawatt-capable facility, and it's LEED Platinum. Over the last year and a half our running-average PUE has been below 1.06, so we're very efficient. We use evaporative cooling only, taking advantage of our dry, cool nights. We were able to build this at a lower cost than if we had built a less efficient facility using mechanical chillers, and our operating expenses are much lower relative to other data centers. We've driven direct, component-level liquid cooling: our cooling supply is 75-degree-Fahrenheit water and we get 110-degree-Fahrenheit water back, so we do waste-heat capture and reuse; it's the primary heat source for our whole laboratory building. We have a few fans, but mostly it's pumps, and pumps are much more efficient for cooling than lots of fans. We've adopted a holistic view of the data center, from chips to bricks, integrating our compute capability into the data center and the data center into the campus; over the summer we were actually exporting heat from our building to other places on campus. Part of our data center mission is advancing energy efficiency. We're looking ahead toward much larger systems, and if you're not paying attention to efficiency, your operating costs can exceed the capital costs of acquiring your systems. If you're looking at an exascale system drawing 20 megawatts, you're going to pay probably a million dollars per megawatt-year to operate it, and if you're not efficient in powering and cooling it, the millions of dollars add up very significantly. So beyond power usage effectiveness, we're also looking at water usage, advancing liquid cooling and energy efficiency, reducing the carbon footprint of the computing enterprise, advancing waste-heat capture and reuse, and energy management: demand response and load shifting, and how to do that within a quality-of-service agreement with our applications.

We go through an annual allocation process; we're in the midst of reviewing and allocating our system for the FY16 year. It's modeled somewhat after the ALCC process conducted by the Office of Science, but it's driven by EERE programmatic milestones, so we're ensuring our resources are aligned with the mission. When people request time, we want to make sure it's aligned with projects that are funded within the offices of energy efficiency and
renewable energy, aligned with milestones that are part of EERE solicitations, and within the EERE mission space. As for our current compute capability: our system is called Peregrine, a 1.2-petaflop-peak machine with 1,440 nodes, mostly Intel Xeon plus some Xeon-with-Phi nodes. We're in the process of adding another petaflop: 1,152 additional dual-socket Xeon nodes are arriving Monday, for a 2.2-petaflop-peak system that will be available for use starting November first. We have a Mellanox InfiniBand interconnect, a three-petabyte Lustre file system on DDN hardware, and a seven-petabyte Sun/Oracle StorageTek tape library (the old StorageTek facility was just across US 36 here). We also have an Insight Center for immersive, interactive scientific visualization and data analytics to go with this. If you wonder where our projects are: during fiscal year '15 we're supporting 68 projects. The big three, making up 75 percent of our workload, are wind and water power modeling and simulation, bioenergy applications, and solar energy. We also do work in buildings efficiency, energy systems and grid modernization modeling, some computational science projects, and vehicles. We have a very mixed workload: some traditional capability jobs, large jobs that take a good fraction of the system, but also a lot of jobs doing parametric studies. I would say our workloads differ from typical Office of Science workloads in that they're engineering and optimization. I was joking with an Office of Science program manager, who said, geez, I don't envy you guys; we publish papers, and, pardon my French, your shit's got to work. Right? When we're looking at wind turbines and PV systems, it's not an existence proof, it's a for-all: they've got to sit in the field and work for 20 years, optimizing the materials in use over the lifetime of the system in various weather and conditions.
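Steve's efficiency and cost figures lend themselves to a quick back-of-envelope check. Here is a minimal sketch using the numbers quoted above (a PUE of 1.06, roughly $1M per megawatt-year, a 20 MW exascale-class IT load); treating the $1M-per-megawatt-year figure as applying to total facility power (IT load times PUE) is my assumption, and the comparison PUE of 1.8 is an illustrative "typical data center" value, not a number from the talk.

```python
# Back-of-envelope operating-cost check on the data-center numbers above.
# Assumption: the ~$1M per megawatt-year applies to total facility power,
# i.e. IT load * PUE. The PUE of 1.8 is an illustrative conventional value.

def annual_cost_musd(it_load_mw, pue, cost_per_mw_year_musd=1.0):
    """Total facility power = IT load * PUE; cost scales with total power."""
    return it_load_mw * pue * cost_per_mw_year_musd

# A 20 MW (IT load) exascale-class system:
efficient = annual_cost_musd(20, 1.06)   # NREL-style efficiency
typical   = annual_cost_musd(20, 1.8)    # illustrative conventional facility
print(f"PUE 1.06: ${efficient:.1f}M/yr, PUE 1.8: ${typical:.1f}M/yr, "
      f"difference: ${typical - efficient:.1f}M/yr")
```

At these rates the PUE difference alone is worth on the order of $15M a year for a 20 MW system, which is the point about operating costs rivaling capital costs.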
So there's a lot of engineering optimization, and finding materials that are earth-abundant yet give the desired optical and electrical properties is an important facet; we do a lot of computational screening and guided engineering. These are workloads that aren't necessarily the tightly coupled PDE solves of traditional Office of Science workloads. We do a lot of resource assessment (I'll say a bit more about that): looking at where the wind resources are, and where the optimal solar resources are across diurnal cycles, different seasons, and interannual variability. We also have jobs that use small node counts but run in the hundreds, need a large memory footprint, and need to run for several days at a shot. And we have tight coupling with industry: we have on the order of 340 cooperative research and development agreements, so we work with proprietary designs and data that we can't necessarily just ship off to a cloud resource somewhere.

Taking a look across the spectrum of modeling and simulation that we do: Mike is going to do a deep dive on this, but shifting from modeling individual blades and turbines to the complex flow physics of whole wind plants is one area we've identified as a clear exascale application, and Mike will speak to that in depth. I mentioned resource assessment: we're responsible for providing assessments for the wind industry and making the data available. We reanalyze weather data and provide 80-meter tower data across all of the continental US at five-minute intervals, at roughly one- or two-kilometer resolution, so that people interested in deploying wind turbines can know where they should go, and we do that over ten years. This was in collaboration with 3TIER, and the full data set, 500 terabytes, is made available to the wind industry. In fuels, we do a lot of molecular dynamics work looking at how enzymes can break down cellulose, the woody parts of plants: how do we break down the
cell walls? Think of it like a hardwood floor: how do you peel up the planks of that floor with the enzymes and break it down to simple sugars that can then be fermented to make ethanol? The molecular dynamics runs use many thousands of cores, and they're done as umbrella-sampling ensembles; I'm sure if we let them they would take over our whole system. They're probably our equivalent of the QCD folks. In the vehicles area there's a lot of interest in lithium-ion batteries integrated into vehicle systems, and in how lithium-ion batteries degrade over frequent charge and discharge cycles as the materials change. Large-scale compute resources are letting our folks look at broader temporal and spatial scales: how these materials evolve over time, how the grains shift and take on different physical characteristics. You may have seen some of the spectacular videos of lithium-ion batteries spontaneously short-circuiting and catching fire; it would be catastrophic if that happened in a vehicle. A traditional strength of the lab is electronic structure calculations for photovoltaic materials, both in predicting the energy levels of point defects and in looking at crystalline structures for improved polycrystalline materials. With the supercell sizes we can now reach, the modeling and simulation is in close agreement with what we observe experimentally, so larger systems have been a great benefit in the search for improved PV materials. Probably our fastest-growing area is grid modernization and energy systems integration. We use PLEXOS, a unit-commitment code; we analyzed the entire Eastern Interconnect of the US and looked at various scenarios of higher penetration of wind and PV resources into the grid, and, under certain scenarios, how much storage would be needed to maintain grid stability if we were to shift to, say, thirty, forty, or fifty percent renewables in our grid interconnect. One of the last applications I wanted to highlight is geothermal well drilling. There's a small company just outside of Denver that we work with; they're modeling enhanced geothermal well drilling. They use ANSYS, and they used to do all of their analysis on workstations; they've been working with us for the past year or so to study higher pressures and temperatures, and it's been a great turnaround for them as they look at geothermal wells and where they can apply oil-and-gas drilling techniques to geothermal energy recovery. One of the more satisfying things for a new center is that we've received some relatively high-profile accolades, both for our efficiency and the Editors' Choice Award for achievement in the application of HPC to renewable energy; we received two R&D 100 awards for our approach to energy-efficient computing, in collaboration with HP and Intel, and a DOE Sustainability Award. So I'm going to stop here and turn it over to Mike for a deep dive on wind modeling, and then to Vlad for materials and solar.

Okay, thanks Steve. As Steve said, I'm going to talk about the modeling work we've been doing, and that DOE more broadly has been doing, around wind plant modeling. I'm going to put Matt Churchfield's name on here: he works at NREL as well, but up at the National Wind Technology Center, and I'm going to be showing some of his results.
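A quick aside on Steve's grid-integration example: the storage-sizing question (how much storage to ride through renewable shortfalls) can be illustrated with a toy energy-balance calculation. Everything below is invented (a flat 100 MW demand, an idealized 12-hour solar block); a real study, like the PLEXOS runs Steve described, models unit commitment, transmission, and reserves across the whole interconnect.

```python
# Toy illustration of storage sizing: size an ideal (lossless, power-
# unconstrained) store as the maximum drawdown of the cumulative
# renewable-minus-demand energy balance, in one-hour steps.

def storage_needed_mwh(demand_mw, renewable_mw):
    balance = 0.0   # cumulative surplus (MWh)
    peak = 0.0      # highest surplus seen so far
    need = 0.0      # deepest drawdown below that peak
    for d, r in zip(demand_mw, renewable_mw):
        balance += r - d
        peak = max(peak, balance)
        need = max(need, peak - balance)
    return need

# Invented day: flat 100 MW demand; solar delivering 200 MW for 12 daylight
# hours, so daily renewable energy exactly matches daily demand.
demand = [100.0] * 24
solar = [200.0 if 6 <= h < 18 else 0.0 for h in range(24)]
print(storage_needed_mwh(demand, solar))  # 600.0 MWh: six hours of demand
```

The max-drawdown formulation only answers "how big must the tank be"; it ignores power limits, losses, and reliability criteria, which dominate real sizing studies.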

So a lot of the cool things I can show today are his work. I work in the Computational Science Center, but I do a lot of numerical methods for PDEs and a lot of computational mechanics, so I work with the folks up at the wind site quite a bit. Today I want to motivate the problem and talk about the real opportunities and real challenges we face in trying to do the wind plant problem (we talk about wind plant simulation as a really good exascale problem), talk about what DOE is doing to support this, and talk a little about where we are with the state of the art in wind plant simulation.

So where does DOE fit in the whole wind energy spectrum? If you go out there, you're going to see a lot of really big, multi-megawatt wind turbines. I'd say most people would say the industry has really figured it out when it comes to putting up a big turbine in the middle of a field with no other turbines around it: they feel pretty confident about it, they can predict what it's going to do pretty well, and they have a fair amount of confidence in how long it's going to last. So I don't know that there's a lot we can offer there that they don't already know. But if you're talking about wide-scale deployment, and you're trying to make wind power competitive with fossil fuels without subsidies, that means much greater penetration: large wind plants, tens to hundreds or maybe more wind turbines all in the same vicinity. A turbine that lives in a wind plant faces quite a few challenges that a turbine out in a field by itself does not, and that really comes down to the fluid dynamics going on in that wind plant. There are a few numbers up here; oops, I'm new to this, Vladan, am I going the right way? Yes, there we go. If you think about scaling up the single turbine in a field by itself and you say, well, I'm going to have a hundred of these, you're not going to get a hundred-times scale-up. You're going to see about twenty percent losses, basically due to wake interactions; in complex terrain it's going to be greater than that, thirty percent or maybe more. We're not going to be able to get rid of that entirely, but we can reduce it. So the wind plant inherently faces a reduction in power output, but we can reduce and optimize that. Turbine failure rates are also significantly higher in a wind farm; again, we believe this has to do with the wakes. You see fatigue loading that translates into more frequent bearing failures, gearbox failures, and so forth. And because the industry is not well equipped to optimize and design an entire wind plant (the tools are not there yet), there's a huge uncertainty when you design a plant and say, okay, this is what our power production is going to look like. Then there's the expression I love to use here: the cost of money. They want to finance these things, they go to the financiers with a huge amount of uncertainty, and the people doing the financing look at that and don't like it, so the cost of money is high. If the wind industry had the tools to reduce and quantify that uncertainty, that would mean a reduction in the cost of energy.

Okay, so, opportunities: where does HPC, where does wind plant modeling, come into this? Again, this is really about reducing the cost of energy and addressing the things I was just talking about. The flow dynamics in a wind plant are really poorly understood (I'll talk about that more in a bit), and it's an extremely complicated system; my point is that if we understood it better, we'd be able to design better plants. And I mentioned the tools the industry has right now for designing wind plants: I'd say they're pretty lacking. They're quite low-fidelity, they're not validated for wind plants, and they present a huge amount of uncertainty, which then translates into how well they
can accommodate design standards. What I'm really interested in on this bullet is: how can we take the results from validated high-fidelity simulation and translate them into better models for industry? Control systems are an interesting one, because any improvement in control systems can be applied to wind plants today. Some of these other advances won't help a wind plant that's already been built; it's not like we can move the turbines around. But control-system improvements can have an immediate benefit on the production of an existing plant. And of course there's reduced uncertainty in predicted plant performance; that comes back to new plants, lower cost of money, and thereby a reduced cost of energy. This picture is one of the wind plant simulations Matt Churchfield did; I'll say more about the state of the art in a little bit.

Okay, so hopefully I've motivated it well enough: there are a lot of opportunities that high-performance computing can bring to the wind problem. For those of you familiar with fluid dynamics, a cascade of scales makes this a very challenging problem: there's a huge span of flow scales in the wind problem, which this diagram tries to illustrate. Right at the blade surface you have extremely complicated boundary-layer physics, which plays a big role in how the blade performs, which of course plays a role in how the turbine performs. You have the blade boundary-layer scale, the turbine scale, and the array scale; and while we may be focused on simulations that encompass the entire wind plant, if you don't have the right boundary conditions for that wind-plant box, it's perhaps not a very valuable thing. So you've got to bring in at least weather-scale data in some capacity to drive your wind-plant box, and if you're really thinking long term, who knows, maybe you go out to the climate scale. This goes from, I think we listed, 10 to the minus 5 meters of grid-cell spacing if you're going to capture the boundary layer correctly, all the way out to climate scales, depending on where you want to go. The strength of the two-way coupling varies across these scales, but you can see there's a huge span of flow scales we need to deal with.

Now look at the very complicated flow physics our models have to capture within the wind plant itself; let me call out a few of these. I just talked about the boundary conditions on your wind-plant box: modeling the atmospheric boundary layer to a fidelity appropriate for a wind plant is not an easy thing in itself. There's a lot of physics going on in there: atmospheric stability, these little plumes off the ground you can see, and low-level jets. Today's turbines are so big that the top part of the rotor goes right up into that low-level jet, which can cause quite a big shear profile across the rotor diameter. And of course you have the complicated wakes coming off the turbines, and possibly complex terrain; hopefully you're getting the gist that this is a really complicated system we have to solve within our wind-plant box.

So, the basic problem overview and some of the things we use to solve it: it's a multiscale, multiphysics problem, and there's no single model or code that is going to span the whole spectrum we need to solve. If you look at the individual components, mathematical and computational models exist for each of them, and some examples are down here. This one is a mesoscale simulation, a numerical weather prediction run; that's the WRF code, Weather Research and Forecasting, from John Michalakes. This is a single-turbine simulation; again, a regime we feel pretty good about, though there's still tons to be done; that's an OpenFOAM run that Matt Churchfield did. And down at the structural level, this is a finite element model, I think an ANSYS finite element model that Sandia has made. And just to point
out: these blades now are huge, 50 meters in length, and as they get bigger they need to become less dense. You can't just scale up an existing turbine blade; it would be way too heavy. So they become very flexible: if you go look at a large turbine today in a decent amount of wind, you'll see the blades flexing dramatically, and they twist as they flex; there's aeroelastic bend-twist coupling going on. These are pretty complicated mechanical systems, so you need to couple all of these models together, and we need rigorous, efficient, and scalable coupling of those models.

Now some resource requirements: what are we looking at for simulations today, and where do we want to be? The simulation of Matt's I showed before used what are called actuator lines: basically a line force that exists in the CFD grid, so the grid can stay fixed while the line force moves through the domain. It's a greatly simplifying approach that makes these simulations accessible. But even so (and Matt Churchfield went through these number calculations), say we have a 49-turbine wind plant in flat terrain, covering 25 square kilometers and going a kilometer up into the atmosphere, and you only want to simulate 30 minutes: that's going to take about two weeks on a leadership-class machine. And there are a lot of approximations in this approach; there are a lot of questions about how predictive these actuator-line simulations are and how good the CFD is at this kind of grid resolution. Looking forward, we want to resolve the surface of the blades to remove as many of those approximations as we can. For future simulations with resolved turbines, we're talking billions of grid points, a million time steps, billions of CPU hours; if you look at the numbers for simulations that resolve down to the blade geometry, we're very much talking an exascale-system problem. What I will say about exascale is that the wind plant problem is very well suited, because you can see it as a weak-scaling problem. The blade-resolved single-turbine problem in one box is a petascale problem, so if we want to do a hundred or a thousand turbines, it's a weak scaling-up of those boxes. It fits the desired goal of exascale computing, doing bigger problems, and it's a very nice capability problem.

I wanted to say a few words about how the DOE Office of Energy Efficiency and Renewable Energy is looking at this and what they want to do with it. There's the Atmosphere to Electrons (A2e) initiative, and over the last year Steve and I and a lot of other people have been very involved with planning it and trying to get it off the ground. This is pretty exciting for me, because it really is a shift for that office: they're trying to use high-fidelity modeling to understand the fundamental physics of whole wind plants. It's a programmatic shift from looking at single turbines to the whole wind plant, and high-fidelity modeling is at the core of it. And they're doing it right, in my mind: they're supporting an experimental campaign that is validation-directed. In the planning meetings we've had modelers and experimentalists working together to design the experiments. There's also the DAP, the data analysis portal, that's going to be at PNNL, and the data, both experimental and simulation, is meant to be publicly available on the DAP. We've held two well-attended workshops for this program since January, with about 60 or 70 people at each: one on creating the modeling and simulation environment to address the wind plant problem, and another on wind plant modeling, where we got the experimentalists and the modelers together to design the experimental campaign and the models that will be used in that mod-sim environment. And for those of you who are into CFD, if you haven't heard of it, we just had an ASCR-led workshop on Turbulent Flow Simulation at the
Exascale. That workshop report is coming together now; the website will have all the plenary talks and so forth, so if you're into CFD you might want to take a look at what came out of it. I'll just make a few closing remarks, and then I'm going to show a movie. I'm confident that HPC offers us a pathway to reducing the cost of wind energy. On capability versus capacity computing: I find this topic quite interesting. CFD has traditionally been the poster child for capability computing, but coming out of the turbulent flow workshop last month (and those were all DOE application experts as well as hardcore computing people), there was a consistent call for capacity computing at the exascale for CFD. This comes down to uncertainty quantification and sensitivity analysis: in the exascale context, what people were promoting was not just single tightly coupled simulations a thousand petascale systems' worth in size, but what they would call loosely coupled optimization or loosely coupled ensemble simulations. I was interested to hear such a consistent call for the importance of ensemble calculations in that context, which creates a mismatch with ASCR, for example, which is very much interested in capability computing; I'm happy to talk more about that one offline. We're still looking at some real challenges in this research area. Coupling mesoscale numerical weather prediction with our wind farm simulations is still a pretty big unknown; coupling turbulent flow across scales like that is not a straightforward thing. And uncertainty quantification and sensitivity analysis: we're talking huge simulations here, so what does that look like, is that even possible?
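The loosely coupled ensemble pattern described here is easy to sketch: each member is an independent plant evaluation, so the members are embarrassingly parallel. Below is a toy version, with all parameters invented, using a Jensen/Park-style wake deficit and a cube-law power proxy for a single row of turbines. It's nothing like the OpenFOAM actuator-line runs discussed in this talk, but it shows the shape of an ensemble UQ study over an uncertain inflow.

```python
import random

def waked_speed(u, x, a=1/3, k=0.05, r0=50.0):
    """Jensen/Park-style wake: speed a distance x (m) behind a turbine of
    rotor radius r0, induction factor a, and wake-decay constant k."""
    return u * (1.0 - 2.0 * a / (1.0 + k * x / r0) ** 2)

def plant_power(u_inf, n_turbines=5, spacing=500.0):
    """Toy plant: one row, each turbine waked only by its nearest upstream
    neighbor; 'power' is just proportional to wind speed cubed."""
    total, u = 0.0, u_inf
    for _ in range(n_turbines):
        total += u ** 3
        u = waked_speed(u, spacing)  # inflow seen by the next turbine down
    return total

# Ensemble: sample an uncertain inflow speed, run each member independently
# (in practice each member would be its own modest-sized batch job).
random.seed(0)
members = [plant_power(random.gauss(8.0, 1.0)) for _ in range(1000)]
mean = sum(members) / len(members)
spread = (sum((m - mean) ** 2 for m in members) / len(members)) ** 0.5
print(f"plant power proxy: mean={mean:.0f}, std={spread:.0f}")
```

The point of the pattern is that the ensemble's aggregate cost can be exascale-class while each member stays at a scale today's machines already handle.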
Verification and validation; coupling computation with physical data; data analysis: these are topics that fit broadly in the exascale environment. Then there's multi-fidelity model management, and very much the question of how HPC influences the lower-fidelity models that industry is using.

So my colleague Matt Churchfield put together this little presentation. It's a demonstration of how HPC can be used to influence an existing wind plant through, for example, a control-system modification. Control systems have traditionally been very turbine-centric: I'm an individual turbine in a wind farm and I want to optimize my own power production. That's a flawed, egocentric view; you should really be thinking about the whole. In the example Matt's showing, we have one, two, three, four, five wind turbines, and in this case the wind is going right down the turbine line, so in a turbine-centric point of view each turbine would face the prevailing wind direction. The idea is to detune, if you will, each turbine: give it a yaw angle off its preferred angle, and then look at whole-plant performance instead. Let's see if that works... I don't know if it's playing... there we go. Okay, you can see the yaw misalignment: he's twisting each turbine off the main direction. I'll stop it right there so we can take a look. On the left, all the turbines are aligned (this is an OpenFOAM simulation with those actuator lines I was talking about), and each turbine is living in the wake of all the turbines in front of it. Over here the turbines are yawed, so each one is not optimally pointed (you'll see it more in a moment), and you can see the wake kind of goes off to the side. So basically, by taking the yaw some number of degrees off optimal, you've pushed the wake to the side. There are two plots on

the right here this is the total power produced but I’m in the two scenarios these are aligned turbines these are with the wake redirection and there’s a small percentage here that is consistently better for the for the non optimal individual turbine configuration the bottom plot is the energy produced by the wind plant as a function of time so there’s a couple things here right a small performance increase like this can translate to a huge change in the overall energy capture okay this can make a huge difference in the cost of energy as produced from the plant and you can see it’s a pretty consistent over production there so an example of how if you just change a control system environment you can make a significant change in the overall power production of the wind plant I guess I’m going to stop right there and I’ll turn it over to bladon okay well while this thing is setting up I haven’t tried it before so hopefully it will work hello everyone as you can see my name is once the language and I’m coming from two institutions Colorado School of Mines and then world that are like four miles apart and my work is more on the fundamental science side then the applied perceives remark previous remark I’m one of one of those guys that publish papers and but I hope to be doing something that is actually the same time relevant and then Steve invited me to talk here about materials by design which basically is it stands i mean it’s it’s a hot were phrase but it what it stands for it’s basically can we design materials like we design cars can be insane assemble atoms in a certain way knowing the way and then be able to predict properties and actually realize those materials in reality and then when i first heard this kind of definition of mod materials by design is i said okay so what I mean since the Bronze Age humanity was doing the same thing I mean bronze was in alloy that was fabricated for a certain purpose and then for a certain amount of time it it was useful and it 
still is so what is different now then then in the Bronze Age is that we now can model using high-performance computing and using quantum mechanics but the interactions between the atoms the interactions between electrons and protons and so forth and basically predict materials properties and hpc is a really important part in that so okay motivation is what this light shows is the history of materials discovery in 20th century as a function of time and if you look here every material that you probably use in your everyday life took from the first paper published on that material until the deployment about 20 plus years so the idea is can we use a high-performance computing and our knowledge of quantum mechanics to speed up this process and recognizing this goal the White House launched the materials genome initiative in 2011 2012 with the main goal to do materials design twice as fast at half of the cost so we want to be better faster cheaper everything and materials the genome initiative specifically calls for tight integration between computations and experiment and data so what is the role and I’m the computational guy in the end up to some extent data guy so what is the role in this of the author of the computations and data and I see three kind of roles first we want to have predictive approaches to actually because of course the reality is more complex than we can model it ever so the question is how can we approximate that reality so to be relevant so I’m sitting in the place where I’m trying to develop predictive models that can be used on high performance computing machines that can actually produce some some results
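To give a feel for what that kind of predictive screening looks like as a computing pattern, here is a toy sketch in Python. The `toy_energy` function is a made-up stand-in for a real first-principles code (an actual DFT run takes hours on tens to hundreds of cores); the names and numbers are illustrative assumptions, and only the many-small-independent-jobs shape of the workload is the point.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one first-principles total-energy calculation.
# In production each of these would be an independent batch job on its own
# set of cores; here it is a cheap analytic toy so the script runs anywhere.
def toy_energy(structure_id: int) -> float:
    return 0.01 * (structure_id - 17) ** 2 - 3.0  # made-up energy in eV

def screen(candidate_structures):
    # The high-throughput pattern: many small independent calculations
    # in parallel, then a reduction to the lowest-energy candidate.
    with ThreadPoolExecutor(max_workers=8) as pool:
        energies = list(pool.map(toy_energy, candidate_structures))
    e_min, best = min(zip(energies, candidate_structures))
    return best, e_min

if __name__ == "__main__":
    # 40 candidate crystal structures for one hypothetical composition
    best, e_min = screen(range(40))
    print(f"lowest-energy structure: {best} at {e_min:.2f} eV")
```

In the real workflow the thread pool would be a batch scheduler dispatching thousands of independent jobs, and the reduction would feed a phase-stability comparison against competing compositions; the shape of the computation, many small jobs rather than one giant one, is the same.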

So there is that part, in data generation. Then the Materials Genome Initiative calls specifically for data sharing and for building knowledge from the data, and that applies to both experimental and computational data.

A couple of words about MGI work at NREL and the Colorado School of Mines. We had, and still have, a number of programs, and I'm listing them here. The first one is the Center for Inverse Design, a previous Energy Frontier Research Center funded by the US Department of Energy; it stopped existing last summer, but then we recompeted and won the new EFRC, which is called the Center for the Next Generation of Materials by Design. These are large, 20-million-ish-dollar efforts over four to five years, involving multiple institutions, with NREL as the lead. There is also NREL MatDB, which is the database listing all the results from these efforts, hosted by NREL HPC, and we also have a program called the TE Design Lab, on thermoelectrics, funded by the National Science Foundation. There's a slide on people, which I will not go through; we have collaborations with MIT, Harvard, and the other institutions involved in these relatively large programs.

What I want to show you here is an example of what I actually do, and why the HPC resource is very important in doing it. What I'm doing can be called capacity computing: I'm doing many, many relatively small calculations. When I say relatively small, I mean a couple of nodes, meaning 50 to 200 cores for each calculation, each taking on the order of four to ten hours; but the number of calculations is in the hundreds of thousands, summing up to tens of millions of CPU hours per project. The example that I'm showing here is the prediction of entirely new materials. Why would we want to do this? If you look at current materials databases, we know about a hundred thousand or so materials that exist, that have been synthesized. But if you look at the periodic table and apply some freshman chemistry rules, you arrive at billions of possible combinations, and the one thing that we can do on a computer much faster than in the lab is test those combinations and see whether those materials would be stable in reality, that is, whether they would be synthesizable or not. This is one example of the things we are doing at NREL.

In order to do that... what I'm showing here is a ladder diagram of energies. Suppose a chemist comes to me with an idea for a material, A2BX4 with some chemical elements, and asks: can you predict whether this will be stable at all? Of course we are talking about solids, so we are talking about crystalline materials with well-defined crystal structures. In order to answer that question, you need to know what the energy of that material is, so you need to sum up all the interactions that happen inside, and once you know the energy, you need to compare it with the energies of all other possible combinations of the same elements. If it turns out to be the lowest in energy, then you can predict it to be stable. This is a slightly simplified picture; I'm trying to make the case rather than go into the details. But that A2BX4 can potentially exist in numerous different crystal structures, and this is where one side of the problem lies: how do we know in which crystal structure it will exist? For the known materials we know, but for the completely unknown things we don't, and this is the problem that we call structure prediction. Every structure might have a different energy, and we are interested in finding the lowest-energy crystal structure for that particular composition.

What do we do in the calculations? We are basically solving the quantum mechanical Schrödinger equation, which is the analogue of Newton's equations in classical mechanics. When you have a crystal structure, which is represented here by this grid where the ions, or the nuclei of the atoms, sit at every grid point, then you have nuclei at the grid points, and the electrons, represented by these smileys, wander around, confined in the space of the crystal. What you want to solve for is basically the dynamics of the electrons, represented by this relatively complicated equation. I mean, it's much less complicated than Navier-Stokes, but it's still pretty complicated, because you need to solve it for very many electrons at the same time, and they are coupled: they interact with the nuclei, but they also interact among themselves, and this is what makes the problem very complex. This problem has not been solved exactly, but useful approximations have been developed, and the Nobel Prize in Chemistry in 1998 was given to these two guys, Walter Kohn and John Pople: Walter Kohn for the development of density functional theory, which underlies what are called first-principles or ab initio calculations in the high-performance computing community, and John Pople for the first useful implementation of these methods in an actual code.

Okay, so what are the typical inputs and outputs of our calculations? What I input are the positions of every atom in a crystal structure. Because a crystal is a periodic system, you need to define that only for a single unit cell, which is then periodically repeated: I put atoms with some coordinates into the box, and then let the code output the total energy of the system, the electronic structure, and so forth. So the input actually assumes a certain crystal structure, and going back to the problem I started with: how do we know which crystal structure to assume? The most trivial solution, the one I will be showing you today (there are many more complex methods to determine this), is the following. If I'm interested in the A2BX4 chemical formula, I can go into the databases of the hundred thousand known materials, look at materials having the same chemical formula, see in how many different crystal structures they crystallize, and use that ensemble of crystal structures as my test set. If I can perform calculations on every one of those, I can approximate the lowest-energy structure. This is trivial in principle, but in practice it is much less trivial, because if you go into the databases, for A2BX4 there are about forty different crystal structures that known materials assume. Again, you are basing your results on the existing knowledge, which is an approximation, but still, there are about forty different crystal structures. Then, if you are interested in about 400 of these A2BX4 combinations, and I didn't even talk about details like magnetic structures and so on, you easily climb to hundreds of thousands of independent calculations, each of which requires a couple of tens of cores, from 20 to 200 cores, lasting on the order of 10 hours. So we are easily talking about 10 million CPU hours, and this is a conceptually relatively simple thing. We did this for the A2BX4 system a couple of years ago, and we have done it for many more systems since; it took us about 10 million CPU hours, done on NREL HPC, and I need to say that the NREL machine was really instrumental in achieving this. Of course, then there is the whole business of not launching all these jobs by hand: there has been some software development on how to automate all of that, specifically for first-principles, DFT calculations, and to facilitate the extraction of the data, the data analysis, and that kind of stuff. I will not be talking about that, but that was also part of the work where NREL HPC helped us a lot.

Then, what I'm showing here is the relevance of these kinds of things: a comparison with experiment on a material that is known, so not a new material, as a kind of test of how all this business works. The material is a known manganese oxide mineral with that kind of 2-1-4 chemical formula. Once you do these kinds of calculations, you can apply some additional physical models and construct these kinds of phase diagrams of pO2, the oxygen partial pressure, versus temperature, which are the typical handles that experimentalists have in the lab: the temperature and the pressure. The green line, or the green region, is the region of stability of this material. This is an output of our calculations, and as you can see, the experimental synthesis really happened inside that green region,
which kind of provides confidence that what we are doing is actually correct. And this is not the only example; we have many, many more of these. This is another system, Co2ZnO4, where again the green region is the region of the material that we want, and the other regions are regions where other phases exist. Apologies for the colors; I was trying to avoid all the national flags. So this is the bulk synthesis, and this is what my colleagues at NREL did in thin-film chambers, growing this material, and as you can see the agreement is good. Then, as I said at the start, this was also used to predict entirely new materials. Colleagues of mine used the approach that we developed together, and while I was focusing on A2BX4 materials, they were focusing on ABX materials. They made predictions of about 50 completely new, never-before-synthesized materials, and then experimentalists from Northwestern University, our colleagues, managed to synthesize nearly 30 of those in the crystal structures predicted by the theory. This was published in Nature Chemistry a month or two ago, and I believe it is an illustrative example of why we need HPC.

So I will wrap up now, basically repeating everything I have said so far: modern materials science cannot be done without computers, and we need significant computing resources. For the work that I do, a thousand petascale computers would be more useful than one exascale computer. When I was starting my PhD, sometime in 2005-2006, in Switzerland, a big computer then was a thousand cores or so, much smaller than what we have today. It was obvious that much larger machines were coming, and the whole field of ab initio, first-principles calculations was wondering: where shall we go? Shall we pursue linear-scaling methods and really do large-scale computing with large system sizes, or shall we go for doing many more calculations at the same time, in parallel? The field kind of split, and linear scaling is not yet there, while what we call high-throughput computing in our field (we don't call it capacity computing) apparently is taking over, at least to my knowledge. And it's relevant for experimentalists: experimentalists, I think, have finally found a use for theory, and that is really a result of the combination of theory plus high-performance computing. I will stop here. Thank you very much.

You can just keep that up. Thank you guys very much; very interesting. As the moderator, I'm going to take the liberty of being the first to ask some questions, so I make sure I get mine in, if that's okay. I actually have a number of them, but Steve, I wanted to start with you about the center. I was intrigued by one of your early slides, where you had the typical pyramid, and you said that the Office of Science supercomputing centers are really focused at the tip of the spear, but that NREL is focused a little bit more on the missing middle, if I understood it. So I kind of have two questions. One is: can those more capacity-type problems be offloaded onto a cloud, and do you see that happening? And then, if that is part of the NREL workload, how do you see it shifting in light of all the discussions we've had here today on exascale? We've talked about the tip of the spear a lot, and yet you're talking about a center that focuses a lot on the middle range.

Yeah, that's an excellent question. So there are cloud-like applications that are certainly done at our facility, and sometimes they involve proprietary information, or work with an industrial partner that would prefer that it be protected within NREL's firewall. So while some of those calculations could be done in a cloud-like environment, they would prefer they not be done in the cloud. So that's one thing that's happening. The other one: when I look at the role we play, it's not just about computing. There has been a conscious investment by EERE in expertise, people who work closely with the domain specialists on the modeling and simulation that is important to these specific domains, to get them started at all. And part of the notion is not to duplicate what's done by the leadership-class facilities, but to get folks started and then graduate them: as the field matures, as the problems mature, as they gain comfort and as the models they use advance, to graduate those users to the leadership-class facilities that the Office of Science is providing.

Do those users have to have EERE funding, or do they just have to have a project that is aligned with the EERE mission area?

There's priority given to EERE-funded projects, but it's not a requirement.

Okay, great. Do we have other questions? I'm sorry, Vlad, did you want to say anything about cloud computing?

Well, I'm not really a specialist in cloud computing, I guess. The things that I'm doing still involve using on the order of 50 to 100 cores, which have to be connected by a fast network, and we use FFTs quite a lot in what we do. I'm not sure how that could be implemented in a cloud. If there is a cloud that provides those kinds of capabilities, where you can have a cluster of a hundred cores somewhere, then probably yes, but I'm not aware of one.

Do we have any other questions from anyone out in the audience? Great, right here, and then over here.

To Vlad: this is more a point than a question. John Pople got the Nobel Prize for computational methods in theoretical chemistry generally, not just DFT; wave-function methods as well.

Okay. Hi everybody. Quick question: I saw a slide in which you said that the behavior of a single turbine is quite different compared to the behavior of a collection of turbines in a wind farm, and I imagine part of the implication was that it can lead to larger failure rates of blades when they are in a farm. So my question is: does the lab consult with the people who are operating and building farms, and are we in a position to simulate a collection of 50 to 200 wind turbines within a given location? Where is the state of the art in that? That's question one. And question two: is any work going on on vertical turbines, as opposed to these ultra-long 50-meter blades?

Let's see. Let me just clarify: the failure rates are mainly seen in the gearboxes and bearings; I don't know so much about blade failure rates. This is where I'm not a domain expert, although I'm growing into one; Navier-Stokes is my domain, more so than actual blade failure modes. But you are seeing significant failure rates.

So let me ask you this: if you're seeing it in gearboxes, is it tied to a greater density of turbines?

Right. When you put a turbine in a wind farm, as you might have seen, every turbine is effectively living in the wake of another turbine, and that brings much more oscillatory forcing to those systems, so the whole system sees a lot more forced oscillations than a single turbine would. That's basically what I meant by that. Now, your first question, about working with industry: basically, it's not good, right? Regarding the models that industry is using to design wind plants right now, there are very few people doing the kind of simulations I was showing when designing wind plants, and the high-fidelity simulations at
best we’ll only be a point check right they’ll basically come up with the design they can basically do a point check because the simulations are so expensive so they use much lower fidelity models than the design of the wind plants linearized type models for designing the wind plants and then when they’re built they often underperform right because they’re models were not good enough so so the state of the art and actually affecting how the wind plants are designed hpc is beginning to help but you can do a lot more both right i mean the calculations are so expensive to do if you’re going to the state of the art calculations like i was trying to get it there petascale type simulations and you’re going to be able to get a 30 minutes simulated time over a couple of weeks on mirror all right so that’s a great limitation using it in design that kind of calculation in my mind its purpose is really an understanding the system and affecting changes in the lower models that are actually used to design the wind plans oh yes yes that for example the dt you guys seem to always be in some number at the nWTC and germany and you know we give a lot of a european collaboration going on for this problem yeah yes we so wind turbine data is it is and the failure rates are sort of closely held company secrets if you will right so if probably the if you go maybe a mile south and a couple miles west that that’s the national wind technology center so there’s a handful of utility-scale turbines that are so when Mike talked about NWTC it’s just I think it’s just over the hill you just can’t see them but they’re there so we formed a gearbox collaborative we collected gearbox failure data from a handful and I don’t know the exact number i think it’s order a dozen owner operators of wind farms and try to find root cause analysis of operating conditions and failure modes to go back and then update both the operating conditions or the the control system to control sequences and to help the 
industry improve to reduce the mean time to failure and some of that came back to where pull our turbines are living in significant turbulent regimes due to the wakes in the interaction of the up wind turbines how can we steer the wakes by adjusting to pitch in the yaw the blades so that so it would the video that Mike showed was showing the steering of the wakes which is some of the results of that taking data failure rates so it has a multiplicative effect both its reducing the cost of the energy by improving the energy capture plus it’s putting the turbines in a more stable environment so they’re not you know they’re not going through the top of the up wind turbines which again put stress on the gearbox and then dancer your second question about vertical accidents justice turbine the energy capture potential of a turban whether it’s vertical or horizontal is proportional of the swept area right and you’ve got to get your turban into the right wind regime so you actually get then sell up the turbines we have at at nWTC then sell the gearbox hits up about 80 meters and the you’re into the bottom of the planetary boundary layer where you get very stable night flows if you wanted to be able to take advantage of that with a vertical turban or vertical axis turbine sort of spins like this as opposed illegal they have to be much larger systems so your energy capture potential / installation is much smaller unless you start packing them like trees and then then they have crater impacts on each other so the state of the art right now really is three blade horizontal access turbines and if you go offshore they’re just massive ten megawatt systems are being deployed I know we have a question in the back here but well we’re still on this topic I had one just so we don’t lose a threat it’s kind of a related to what Vijay was saying Mike you’re doing this modeling now of farms to understand the vorticity between the different individual turbines and how they’re having an 
impact but if this is a sensitive from a company perspective who funds you to do this is it all done on like a work for others from a company and its proprietary or does this data then somehow get back out to industry so that industry then can use it to make adjustments and how the farms are set up the or is that question too sensitive that’s fine that’s fine let me um let me try Steve before I hand it off um right

so everything we’re doing is open source and I mean we do have a work for others part in our portfolio but everything that I’m talking about is all open source and that’s from the the data that is Garrett gathered in the experiments the simulations that we’re doing to the software that the Department of Energy is trying to develop it is all meant to be open source and then four and then for my partna sore for our part in this i mean we want to deal with open source systems to demonstrate the capability we want those capabilities than influence what the act the industry itself is using right so so we’re trying to make the come up with the ground truth simulations and the state of the art and the computational methods for solving systems and we want the industry to take those things and to push the the process and then for our validation for example we have a three turban wind farm down that’s being put up in the Swift facility in Texas that Sandhya is putting up so part of our experimental validation of these systems is to work with that open source system so so the gearbox reliability cooperative was funded by the Department of Energy win program okay to fund lab resources staff time and reach and sort of capabilities in computing to anonymize the data so that each company can see its data and everybody else’s but you couldn’t you knew which was yours but you couldn’t identify anybody else’s directly so it was supported the companies came because I realized that they didn’t have enough data all in their own to be able to find the whole problem to find that needle in the haystack but collectively they could not saudia we supported the the gathering of the tribes to get together n roll was the trusted holder of the data without sharing whose data was which associated with which particular installation and then then it was shared they all could see the data but they didn’t know whose was which cool very good not i think we had questions in the back someone else how to 
mike yeah so actually Susie’s stole my question but but I kind of have a follow-on so are there issues around intellectual property and do have a kind of coordinated intellectual property agreement that your industry folk sign I don’t know what the intellectual property arrangement was with the gearbox reliability but for typical collaboration where there’s a if it’s a fun zine research and development agreement if it’s a funds in kryta then the company owns it’s the pre-existing IP joint IP is governed by whatever terms are negotiated with in the crate agreement so you’re citing create agreements even if no real money is flowing the front again these so there was there was the arrangement I don’t know what the arrangement was done with the gearbox reliability cooperative that was set up but I know the intellectual property will depend on the terms of this specific crate agreement yeah okay session over here maybe just follow one more a little bit for that for one of the things we’re doing right now is we’re working with Siemens and that did start under a crater and Siemens has been great on this one because we’re basically doing a validation of our turban models with the big 2.3 megawatts Siemens turban that’s just as Steve said over the hill from here and we’ll be publishing that data we just need to mask the data in the sense to to make it nonspecific so just another example of what we’re doing with the industry there just yeah just just a comment the the ECI the doee CI project can and likely will support applications that need ensembles as opposed to you know one big honkin simulation point is is we’re seeing lots of need for sensitivity optimization uncertainty quantification so there there’s you know those sorts of workflows absolutely can and will be supported do you worry about taking too much energy energy out of the the motion of the airmen if you stop at all it’s not going to rain in Kansas Kansas anymore and they’re gonna be irritating it’s a fun 
question yeah we are it’s um so this is um if you have the wind plants ettrick view you can look at your weather forcing as a one-way forcing if you’re a farm that is sitting behind a very large wind plant then you are very much interested in the two-way coupling aspect so part of this mesoscale microscale coupling initiative that i spoke briefly about it is interested very much in the one-way coupling of

forcing on the wind farms, but there is also great interest in the two-way coupling: what does a wind farm do to local weather patterns? Julie Lundquist is the name that comes to mind, up at CU Boulder; she is very much looking at that from a measurement point of view, with field measurements of changes in precipitation, in particular in the vicinity of large wind farms. If you think of the wind turbines in a wind farm as being almost a small mountain, because the farm puts a drag on the bottom of the planetary boundary layer, it's going to slow it down, and you will alter everything from precipitation cycles on down. So if you know what those effects are, that could be to your advantage: you can use it strategically, not just "holy cow, what are we going to do." Knowing how close you can space them and what the impacts are matters; you certainly don't want turbines in Kansas to turn Iowa into a dust bowl. One more interesting point on that: what if you build a wind farm, and then someone builds a wind farm right in front of you?
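That upwind-neighbor question can be made concrete with a back-of-the-envelope wake calculation. The sketch below uses the classic Jensen (Park) top-hat wake model, a far simpler tool than the large-eddy simulations discussed in the talk; the rotor diameter, thrust coefficient, and wake decay constant here are illustrative assumptions, not values from NREL's actual models.

```python
import math

# Minimal Jensen (Park) wake sketch: a hypothetical illustration of why an
# upwind turbine (or a whole upwind farm) costs its downwind neighbor power.
def waked_speed(u_inf, x, D=126.0, Ct=0.8, k=0.05):
    """Wind speed a distance x (m) downstream of a turbine with rotor
    diameter D (m), thrust coefficient Ct, and wake decay constant k."""
    deficit = (1.0 - math.sqrt(1.0 - Ct)) / (1.0 + 2.0 * k * x / D) ** 2
    return u_inf * (1.0 - deficit)

def relative_power(u_waked, u_inf):
    # Below rated wind speed, power scales with the cube of wind speed.
    return (u_waked / u_inf) ** 3

u_inf = 8.0                            # m/s free-stream wind (assumed)
u7D = waked_speed(u_inf, 7 * 126.0)    # 7 rotor diameters downstream
print(f"waked speed: {u7D:.2f} m/s, "
      f"relative power: {relative_power(u7D, u_inf):.0%}")
```

With these illustrative numbers, a turbine sitting seven rotor diameters behind its neighbor sees roughly a 19% velocity deficit, and because power goes with the cube of wind speed, it produces only about half the power of an unwaked machine, which is exactly why wake steering and plant-level control are worth pursuing.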