Joe Stein – Building & Deploying Applications to Apache Mesos

All right, so welcome everybody. We're going to be talking about building and deploying applications to Apache Mesos. Before we get started, let me just kind of get a good feel of the room: how many people are already using Mesos in production? All right, I'll take that. How many people have no clue what Mesos is whatsoever? All right, that's good. How many people, I guess, are somewhere in between, maybe looking to use it, building applications? All right, fantastic, cool.

So first, quick about myself. My name is Joe Stein, nice to meet all of you. Developer and technologist by trade. About a year and a half ago I started a professional services company focused on big data open source solutions, so we actually build out a lot of these types of solutions on Mesos and Kafka, Hadoop, Cassandra, really kind of fitting in between the vendors, you know, the DataStaxes and Clouderas of the world, and working with organizations to train, develop, and architect software solutions on open source technology. I'm an Apache Kafka committer and PMC member. Who here knows Kafka? Yeah, that's what I'm talking about, awesome, great, cool. I also do some blogs and podcasts, so if anyone's interested in Hadoop stuff, I've had a blog and podcast for about five years talking about Hadoop, with some really great content in there; if you're interested in Hadoop, I'd say definitely check it out. And I posted my slides to my Twitter, so if you want to grab them now you can, and I'll post them to the meetup page so you can grab them afterwards as well.

All right, so for this talk we're going to start with a quick intro to Mesos, assuming that most of you or all of you have no clue what Mesos is. We're going to cover just enough information so that the rest of the talk will be relevant, okay? There's lots of good information online, so I'll breeze through it, but if you have any questions, stop me; I don't want you spending the rest of the talk kind of lost, not really understanding
where we are and what we're doing. Then I'm going to talk about Marathon; Marathon is one way that you can run your applications on Mesos. I'll talk about Aurora, which is another way you can run your applications on Mesos, and then I'll talk about custom frameworks. Custom frameworks allow you to basically natively run your applications on Mesos. So there are a lot of different ways that you can bring your use cases to Mesos; there are some rules of thumb around when you want to do one versus the other, and at the end of the day you may just end up doing them all, so we'll kind of talk about all of those.

All right, so basically this is what the talk is about in a nutshell: getting everything that you have running on Mesos. So for the rest of the next 60 or 90 minutes, or whatever it is, just kind of keep this in your mind: you should be thinking that everything you have should be running on Mesos. And if you have the idea of, well, you know, Redis shouldn't run on Mesos? Yeah, it should. Everything should be running on Mesos, and we'll talk about how to do that and why to do it. If you have any questions, I'll field them during the talk, after the talk, whatever works best for you.

Okay, so the origin. Before we talk about even what Mesos is, I like to talk about the origins of a technology, and the origins of Mesos really come from Google. Fundamentally, what we're going to talk about is how Google runs their infrastructure. It's the same way that Google put out the MapReduce paper and this whole thing like Hadoop came to fruition, but it's a little different with Mesos. With Mesos, no one really knew about Borg; Borg is the codename for the datacenter operating system that Google has. Eventually they changed it, called it Omega, and then wrote a paper about it. But then Ben Hindman and a bunch of other folks from Berkeley, they had
this idea, and they talked about it, and the Google people thought it was pretty cool and gave them funding at Berkeley, and, you know, here we are today talking about it. So there are some great technical papers and some great videos to watch for really understanding where all this came from. It wasn't just some "hey, this is a good idea"; it actually has very practical implications and real-world scenarios behind it.

All right, so let's talk about life without Mesos, because for me the best way to understand the benefits of Mesos is talking about how you're all living today with your infrastructure. So this is what you do: you've got static partitioning. You've got db1 and db2, you've got web1, web2, web3; maybe you've got 100 web servers, and you're still calling them web servers. You may have some Hadoop servers, some database nodes; you've got all these different servers where you're basically saying, this machine is for this server. You may actually even go out of your way to buy special hardware specifically for that server. And what you do is you say, take a naive rack, say, and assign two thirds of this rack to the database and the other one third to the web servers, and that's all fine and well. The thing is that static partitioning is really bad.

Okay, with static partitioning you're basically trying to deploy applications and hoping that the applications can utilize all the compute resources that are on those machines. So if you've got ten machines, each with eight cores, you're hoping that each of your applications can use all eight cores. Some of them can; some of them are single-core applications. Some applications are very memory intensive: caching servers, where you need 96 gigs of RAM on those machines and they use, like, one core, so you're wasting seven, but you've used up all the RAM. And in another part of your system you've got lots of storage space, because you've got a lot of data that you want to store. So there's a huge imbalance, and not just across time (things at 3 a.m. are different than at 3 p.m.); even at any one point in time, your utilization curves and your applications look different. That's what static partitioning is, and it's what people live with today. It doesn't scale: all of a sudden you need more middle-tier servers or database servers or Hadoop nodes, and you're basically taking them away from the web applications, and you're doing that physically; you're actually consciously saying, I'm going to change up these servers and now run other processes on them. And any time you have a failure, you're basically going to have downtime. You may be in a situation, with Hadoop nodes, say, or even your web application, where you can take down a rack and your application is still live and running, and that's great, but what you've done, whether you know it or not, is over-provision. You've over-provisioned your hardware resources; you've bought twice as much as you need, just so that when the failure happens you can still take the load and everything's okay. That's what static partitioning is.
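To put rough numbers on that argument, here's a tiny back-of-the-envelope sketch in Python. The machine size and the workload "shapes" are figures I've made up for illustration, not numbers from the talk:

```python
# Back-of-the-envelope sketch of the static-partitioning waste argument:
# complementary workload "shapes" waste hardware when each gets a dedicated
# machine, but pack cleanly when the machines are pooled. Toy numbers only.

MACHINE = {"cpus": 8, "mem_gb": 96}

cache = {"cpus": 1, "mem_gb": 80}   # memory-hungry, nearly idle CPU
web   = {"cpus": 6, "mem_gb": 8}    # CPU-hungry, barely any RAM

# Static partitioning: one dedicated machine per workload.
static_machines = 2
static_idle_cpus = static_machines * MACHINE["cpus"] - (cache["cpus"] + web["cpus"])
static_idle_mem  = static_machines * MACHINE["mem_gb"] - (cache["mem_gb"] + web["mem_gb"])
print(f"static partitioning leaves {static_idle_cpus} cores "
      f"and {static_idle_mem} GB idle")   # 9 cores, 104 GB

# Pooled: a scheduler is free to co-locate them, and together the two
# workloads still fit inside a single machine's envelope.
fits_together = (cache["cpus"] + web["cpus"] <= MACHINE["cpus"]
                 and cache["mem_gb"] + web["mem_gb"] <= MACHINE["mem_gb"])
print(f"fit on one pooled machine: {fits_together}")   # True: 1 machine, not 2
```

The point is only the arithmetic: the imbalance exists per machine, so pooling complementary workloads reclaims the waste.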
So is everyone kind of on the same page here, as far as where we are today as a society, how we do compute and how we handle problems? This is what we do. Okay, so that was a really naive example; this is a little bit more realistic, where we don't have ten servers, we have hundreds of servers. Some people have 10,000 servers, with all sorts of different servers, and when you're in a static partitioning world you're actually saying, oh, rack 3, server 7, that's a web server; you're actually having to go to specific machines and figure out what they are. What we're talking about here is abstracting the hardware layer from the software layer of the computer. So whether you have a hundred computers or 10,000 computers, it doesn't matter anymore: start to think of those computers with a layer on top of them, which we're going to call an operating system. The same way we have an operating system on your PC or your Mac that abstracts the local hardware resources from the software application, that's what we're talking about here: a layer of code that runs across all your servers and treats all the servers like one big computer. We're talking about an operating system, a kernel for your data center: one operating system that allows you to deploy applications anywhere in your data center, to any of your machines, without really caring where they are or what configuration they have, and to consume the compute resources for where and how you want to deploy your applications. It's a really, really cool solution, and it works. So that's what Mesos is: it's basically the kernel for your data center. It's an operating system, but as everyone knows, an operating system doesn't really do a lot for you; all it does is abstract the hardware from the software. So, quick
facts on Apache Mesos: it's scalable to 10,000 nodes; it's been shown to do that, and this is a really old slide, like a year old; I think it's four or five times that now, at least for what people will speak to. It's fault-tolerant: slaves can now get restarted and upgraded without your tasks failing. I don't know if anyone's used Twitter before, but remember, back in the day, that whale that would come up on the page? Remember the whale? Mesos is part of why the Twitter fail whale doesn't happen anymore; Twitter runs their entire business on Mesos, and so does OpenTable and a bunch of other organizations. It's become a much larger project over the years. It's got support for Docker containers; I'm sure everyone knows and loves Docker, and we can talk about Docker and why, when you run it in production, you want to think twice, or at least understand what you're getting into. It's okay to run Docker in production with Mesos, you just have to understand the trade-offs, and we'll talk about that at some point; if I forget, someone remind me and I'll bring it up. It can be done, you've just got to understand the little dark alleyway that you shouldn't walk into. There's native isolation, so without Docker you still get all the isolation benefits with Mesos. Mesos integrates with cgroups; cgroups was originally

contributed by Google to the Linux kernel, and what cgroups does is basically isolate your CPU and your RAM and all these different resources and processes at the kernel level, so it's all the abstraction that you get from a hypervisor without all the software layers. There's multi-resource scheduling, which we'll talk about: memory, CPU, disk; you can put pretty much anything you want in there, which is kind of cool. It's very fine-grained resource allocation, so if anyone has a question like, "we use YARN, how is this different than YARN?": it's really different than YARN. It's very fine-grained resources at the kernel level; it's not an application-level system, it's a very low-level system to build other compute systems on top of. There are lots of different APIs, Java, Python, and C++, for developing new applications on top of it, which is pretty much what we'll be talking about. And then there's a web UI for viewing cluster state. Now, this might sound like, oh, there's a web UI, who cares, but when you've got 10,000 tasks running on a thousand machines, it's really nice to be able to go to a web UI, click on a task, go into the actual directory where it's running, and look at the log files or whatever information is there. It's pretty helpful and very handy.

Okay, so this is kind of the high-level architecture of Mesos. Mesos uses ZooKeeper, surprise, and it uses ZooKeeper to basically handle state for master failover. The master is kind of the allocator and the controller part of the system; it's figuring out where all the different resources are on the slaves, and then, with the frameworks, where those resources should get scheduled. If it fails, ZooKeeper is just used for leader election, so that a new master can come up, all the state is maintained, everything works, and everyone's happy. And then there are slaves. Okay, so masters
and slaves are not a failover relationship; slaves are actually where all of your tasks run. So if you have 10,000 computers, you probably have five that are masters and 9,995 that are slaves, actually giving all the compute resources to the one big computer. And this is how it works: in Mesos you really can't do anything without a framework. It's not a tool for anyone but engineers; you can't give this to your analysts, essentially. You can't just say, "hey, I want to run a Spark job on it"; well, Spark has a framework that runs on Mesos. You need these frameworks that run on Mesos, and what happens with a framework is basically this. You have a slave, and the slave has resources: CPU, RAM, disk, whatever resources it might have. It says to the master, "hey master, I've got four CPUs and four gigs of RAM." The master doesn't care; the master just hands it out to the frameworks and says, "hey framework, here's four CPUs and four gigs of RAM, are you interested?" The framework can say, "nope, not interested," or it can hold on to the offer and wait for other resources, and then say, "yes, I want to take these resources that you've given me, and I want to launch a task." So resources are flowing through here: resources go to the master, the framework sees the resources, the framework decides it now has enough resources to launch the task. Because if you're launching, let's say, three Kafka brokers, you need three different computers to each give you, say, eight cores and 24 gigs of RAM, and unless you get all of those, you don't want to launch what you're trying to do. So it's not just one-to-one; it can be an aggregate. The framework says, "yes, master, please go ahead and schedule these resources," and the master says, "no problem, I've got it," and it will go and launch, in what we call the
executor, the processes for the task. Okay, so you've got Mesos at the core, and then you have this concept of a framework; it's really a paradigm more than anything else. A framework is really a scheduler and an executor, and this is a really important part, so if anyone is confused or has any questions, this is definitely the time to ask, because you won't understand anything else after this; the rest of the talk is about this. So you've got this framework, and on one side you have the scheduler, and the scheduler is deciding, based on the resources that are coming in, which tasks it should be launching. You're a Spark job, you're a Kafka broker, you're a Redis cluster, your Riak or Cassandra, whatever it is: your scheduler is looking at these resources and making the decision, "yes, I now want to launch this." The executor is what actually does the launching. The executor is code running on the slave, started by the Mesos slave; the executor is a process that, once started, then launches all the other processes and tasks required by the scheduler. A really important concept here: the framework. Mesos core, framework on one side,

the scheduler figuring out what's going on, and then its buddy the executor, who gets launched and then handles all of the execution of what has to happen based on the scheduler's decisions: starting a Kafka broker, starting up Riak, starting up Redis, whatever you might be starting up. And when you start these things up, there's coordination involved; you can't just start a Redis server and it just runs. There's all sorts of coordination that happens in a distributed system, and that all happens in the underlying protocol buffer transfers that exist in here, which we'll talk about and get to. Any questions about schedulers and executors or anything else?

[Audience question about whether schedulers support priorities and queuing, like classic schedulers do, and about how multiple schedulers interact.]

Okay, so the question is basically: do schedulers have any notion of priority, and how does that also work with multiple schedulers? Two questions, I guess. So we'll talk about one scheduler that has the notion of priority and preemption and how that works. Schedulers are native to frameworks, so you can build a scheduler to do whatever you want, but there are schedulers that do handle things like priority and preemption, and I'll talk about that as we go, so hold that thought. As far as multiple schedulers go, there's some contention right now between the Omega paper that Google put out and doing true fair scheduling across multiple frameworks. If you have a scenario where you have a Spark scheduler and a non-Spark scheduler, the Spark scheduler would most likely starve the other schedulers, just in how that framework is written, and there are a lot of ways that you can control that; I'm going to talk about some of the ways to control starvation. But, I want to say it's this next version, 0.22,
of Mesos, and if it's not 0.22 then 0.23, where there's a new feature called modules that will actually allow you to plug in a new allocator for the Mesos master. The allocator is responsible for giving out the offers to the schedulers and figuring out that fairness. So the answer to your question is: yes, it's a problem today if you don't understand the controls that you have to put in place to manage it, and moving forward, different organizations are building different allocators to handle better fair scheduling across the frameworks as well. So there's a good story today for handling that, and a better one in the future, but that's still coming; and we'll talk about roles and how all that works as we go. Any other questions on where we are, schedulers, executors?

[Audience question: is there any major advantage to having the schedulers be reactive, in that sense, rather than actively polling for resources?]

That's a good question. From a scale perspective, the way this was built is really to be able to handle tens of thousands of computers; it's really focused on making sure this can scale to a very large number of machines. Knowing the people who wrote the software, I'm sure they thought about that. I mean, if you think about it, if you have ten frameworks and they're all trying to poll and grab resources, versus things like reservations (and we'll talk about attributes and resources and reservations, and especially some of the new features now, dynamic reservations, where you know that one framework already asked for something and you want to keep giving it back to that framework because it asked for it before), all that intelligence being in the master, scale aside, has a lot of benefit, because you have that one controlling system.

All right, so let's talk a little bit about some Mesos features. That's kind of Mesos in a
nutshell, but we're going to talk about some Mesos features which are important to understand, both for the rest of the talk and just in general. The first two of them are resources and attributes. Okay, these are two really critical parts of the Mesos software. Attributes are really nothing more than strings. You can basically go and litter your slaves with attributes and set different information that might be appropriate to you. You might have some servers that have solid-state drives; you might have some servers that are AS/400s, whatever, it doesn't matter. The point is that you have different characteristics for things like hardware; maybe you even have some machines that are for certain customers, where you want to physically partition the data

from one client to another. It doesn't matter: they're just strings. So what this allows you to do is basically litter the cluster and the slaves with your strings as appropriate, set these attributes, and then do stuff with them, and we'll talk about the "do stuff with them" part in a little bit. But this is an important feature in Mesos, that you've got these attributes, these strings, out there: whether your servers are in rack 1 or rack 2, the attributes can be whatever you want. And then you have resources, and resources are where it gets really exciting. Resources are basically scalars, and everything can be looked at as a scalar; you can do sets and some other variations too. So you've got CPU, you've got RAM, maybe you've got some disk drives, maybe you have some other resource that you want Mesos to handle. Resources are something that the Mesos master is going to deduct from and control as people ask for them: if there are ten cores and someone asks for two, there are only eight left. The Mesos master is going to control the resources, making sure that the total sum of resources is managed by the master and divvied out appropriately. Attributes, again, are just strings, just labels, but resources get accounted for. So out of the box you get CPU, memory, and ports, just given to you, and you can make anything a resource you want: if you have ten JBOD disks, each of them could be a resource, and when you deploy, each one of those disks can get owned by an application, and you don't have to worry about it. Here are some examples, and this is where it gets a little nifty. So you have resources here up on the top: you can actually specify how many resources this slave is going to manage. Maybe you have a machine that is running
a couple of hypervisors and it's only 50% utilized. Well, that's fine: you can give the Mesos slave the other 50%. If it's a 24-core machine, you can start the Mesos slave up with 12 cores; you can set the resources to be cpus:12. Or maybe you want to set attributes on it: you actually run in a data center, so you want to set attributes like dc=1, floor=2, aisle=6, rack=a, server=15, or whatever you have. Or you run in AWS, so you want to set us-east-1 and your instance ID. You can really set your attributes to whatever you want; maybe your Docker version, because if you don't set your Docker version, then you don't know what version of Docker you're launching into, and when you upgrade Docker, everything on the machine dies because the Docker daemon dies. So yeah, that's definitely something to look out for. And that's it: you've got resources and attributes, and that's what Mesos is basically controlling.

And then you have these things called roles. This actually gets to your question before about how to segregate these pieces: Mesos will actually allow you to assign roles. You could say that these eight cores are going to go to production and two of these will go to staging, or these twelve cores go to Hadoop and the other 24 cores go to Kafka. So you have the ability to separate by role. Maybe you have different kinds of hard drives, and you want your Hadoop workloads on your JBOD drives and your MySQL on your SSD drives, but you've got all this overhead of CPU and RAM that's just not being used, and you want to give that to Spark. Okay, so you create a role called analytics, you give whatever available CPU exists to analytics, and Spark will just go and eat up every single available resource on that role for your cluster
and do its computations on it. So roles are important, because right now, as of today, you could have one scheduler starve your cluster: one scheduler could sit there and be like, "hi, I've got all the resources, I'm not using them, but I don't care, I'm not giving them to anybody else." It's totally possible, and it will happen, especially if you write a framework yourself; you've got to think about these things. So roles are a really good way today to help with that. Moving forward, there are some new features coming out in Mesos, which I'll talk about, that mean you won't have to do this, but still, today, it's something you have to think about. Okay, so before I get to Marathon, any Mesos questions? We're going to start

jumping into building stuff.

[Audience question about where the integration with cgroups happens.]

Yeah, so the question is: at what point does the integration with cgroups happen? It happens at the slave level, when it launches a task. When you configure a slave, when you install Mesos, cgroups is not configured; it's POSIX, so you don't get any isolation. You actually have to go and turn on cgroups. Once you turn on cgroups, then when a slave launches a task, that task is within a cgroup, based on the CPU and RAM that were specified for that one task. Within that cgroup, every single other process (and we'll talk about this also; you can open up 75,000 processes if you want within that task) will all be within that one cgroup, and it will all be governed that way. So for things like CPU the limits are soft limits, so you can go over: if you're given two CPUs, you're the only one running on the machine, and you use four, okay, fine, no one's going to mind. But if you're given ten gigs of RAM and you use eleven, you're just going to get the carpet pulled out from underneath you; you're going to get frozen, that's it, your task is dead, you're done. So for things like memory there's a hard upper bound, which is great, and for things like CPU it's all good; we're all just friends, okay.

All right, so let's talk about Marathon. Marathon is basically like the init.d for the operating system; it's a way to start applications on this operating system, and the operating system just happens to be your data center, but it's still a way to start things. It originally came out from Airbnb, so it's got lots of good production history, and it's one of the applications that Mesosphere has gotten behind: as Cloudera is to Hadoop, Mesosphere is to Mesos. Like the analogies on the SATs, right? I don't think they do those anymore; that whole section is gone. Maybe that shows my age. But Marathon's cool, and we're going to talk about Marathon in a little bit. Marathon
is really great when you know that you and your team are the ones who are touching your cluster. Okay, so that's important: Marathon is a great tool, but with it you have the keys to the kingdom. You can do anything, good, bad, and indifferent, and you can do some amazing things with Marathon. I'll talk a little bit later about what happens if you have users you don't want to give the keys to the kingdom to, and we'll get into that.

All right, so Marathon has some really cool features that we're going to talk about, and we're going to talk about how to deploy things to Marathon and how it works. I really want to make sure that everyone leaves here thinking, "I can go back, take my application, and get it running on Mesos tonight." So Marathon has this feature called constraints. We talked about attributes and resources; well, who cares about those unless you can do something with them? Marathon has constraints: basically, it takes the attribute fields and allows you to put business logic around them. One piece of that business logic is unique. You can say, deploy these ten tasks, ten Rails apps let's say, and when you deploy all ten Rails apps, you want to make sure that each of them is on a different host, because that's how you need to run your Rails app, whatever, it doesn't matter. So you could say hostname unique, and what will happen is that Mesos will not schedule any of those tasks on the same host. Or maybe you want to do unique rack: you want to deploy three applications and make sure they're all in different racks; you can do that. So there's the concept of unique, where you have an attribute, which is just a string, and then you know what you're doing with that string and creating uniqueness around it. There's also cluster, and cluster is sometimes very useful. Let's say you have very low latency
requirements, where, let's say, you're doing RTB ad bidding and you need to have your bidder right next to your ad server, always on the same rack; you want as little fiber between them as you can get. So what you can do is cluster, and say, okay, deploy this to the same rack, and it will cluster all the applications onto whatever the attribute is. You could have the attribute be datacenter 1 and cluster on dc1, and everything will just fall into dc1, or an AZ, you know, us-east-1; whatever the string is, it doesn't really matter. The important thing here is that we've got this cluster primitive that we can basically use inside of our application to make sure that our software is getting to the right part of the hardware. Because remember, we no longer know about slaves; the machines that are underneath, we don't know about them anymore, but they're still important to our application.
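The unique and cluster semantics just described can be sketched as a toy evaluator. This is my own illustration of the logic, not Marathon's code; Marathon itself expresses these as JSON arrays like ["hostname", "UNIQUE"] or ["rack", "CLUSTER", "rack-1"] in the app definition:

```python
# Toy evaluator for the UNIQUE and CLUSTER constraint semantics described
# above. A constraint is [field, operator, optional value]; `running_attrs`
# holds the attribute dicts of tasks already placed for this app.

def satisfies(constraint, offer_attrs, running_attrs):
    """Would an offer carrying `offer_attrs` satisfy `constraint`?"""
    field, op = constraint[0], constraint[1]
    value = offer_attrs.get(field)
    if op == "UNIQUE":
        # No already-running task may sit on the same attribute value.
        return all(t.get(field) != value for t in running_attrs)
    if op == "CLUSTER":
        # Pin to an explicit value if given; otherwise pin to wherever the
        # first task landed, so everything ends up co-located.
        target = constraint[2] if len(constraint) > 2 else (
            running_attrs[0].get(field) if running_attrs else value)
        return value == target
    raise ValueError(f"unsupported operator: {op}")

running = [{"hostname": "slave1", "rack": "rack-1"}]
print(satisfies(["hostname", "UNIQUE"], {"hostname": "slave1"}, running))     # False
print(satisfies(["hostname", "UNIQUE"], {"hostname": "slave2"}, running))     # True
print(satisfies(["rack", "CLUSTER", "rack-1"], {"rack": "rack-1"}, running))  # True
```

The same shape extends naturally to the group-by and like/unlike operators discussed next.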

And then you have group by, and group by is a really useful one. Group by is basically going to say: go through all the possibilities continuously and make sure that I'm evenly distributed across them. So let's say you want to deploy something like HDFS to Mesos. You've got three racks and you want 10 data nodes deployed; well, you don't want all 10 data nodes in one rack, that's silly. You want to go 1, 2, 3, 4, 5, 6; you want to kind of group by. So group by allows you to evenly distribute the resources and the tasks based on whatever label you've set, essentially. And then there's like, which is pretty obvious: you can say, hey, I've got SSD drives, and I need SSD drives for my application, so you want to make sure that when this application deploys, it will go to a label like SSD and otherwise won't run there, because if you don't have SSD drives, your application doesn't run. So you want to make sure the SSD label is something your application will match on. Or maybe your application will die if there are SSDs underneath, or maybe you need something that's not JBOD; it doesn't matter what it is, as long as it's not JBOD. So you also have the negative of that: you can stop things from going to certain places if need be. Okay, any questions on constraints or anything like that?

[Audience question, partly inaudible.]

So that would be more like cluster than it would be group by: if you want to take two applications and force them onto the same hostname, then you would cluster by hostname, and then you don't care; it would just happen under the hood, magically. Yeah, okay. So, running things on Marathon. I find, having talked to people about this, both on teams on a daily basis and at events like this, that it takes a little while for people to really grok what you have to do to make things work. But it's really easy; it's really, really, like, stupid simple. You've just got to kind of
All right, so let's walk through this. This is my pseudo-bash script, so make believe this is bash, okay? It's bash-ish. What we're going to do is create a variable called TAG (don't worry about it, it's just a sample, it's not important right now), and then some app, which is called xyz, and then we create an ID for this, which is just going to be $TAG-$APP. Then we create a command, and the command is basically ./yourscript $HOST $PORT0 $PORT1. And then what we're going to do is build a JSON and POST it to Marathon. In that JSON we've basically got the id; we set the cmd, which is that command; we set the number of cpus, and the number of CPUs is going to be 0.1. That's what's nice about Mesos: if you only need 0.1 of a CPU, because that's all Nagios or whatever needs, then you can give that app just 0.1 CPU. You set your mem, because you need to set your memory, then your number of instances, and then you set some uris for what's going to get downloaded. And then you can set any environment variable you want: env here is essentially going to do an export of your environment variables for you. So if you've got whatever environment variables you need to set, you can pass them in here and Marathon will set them for you; anywhere you'd have an export in bash, this does it for you, which is nice. And then you POST this to Marathon.

So let's hook this together. This .tgz, this tarball, is basically going to hold your script, the thing you're going to execute. And what Mesos does for you is, when it gets that tarball (or zip, or whatever it is), it's going to download
it to what it calls the sandbox. The sandbox is just a directory on the slave, and Mesos is going to unzip it for you. So think of it this way: Mesos will put your script in a place where it has unpacked your tarball, and let you execute whatever you want. And we set ports here to 0 0; what that means is "dynamically assign me two ports." So if your script here is, say, a Play application or a Rails app or whatever it is, this is really all you have to do:
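That bash-ish flow, sketched in Python (the /v2/apps endpoint and the id/cmd/cpus/mem/instances/uris/env fields follow Marathon's app-definition API as I know it; the hostnames, the script name, and the tarball URL are placeholders):

```python
import json
from urllib import request

# Mirrors the talk's pseudo-bash; "dev", "xyz", and the URLs are placeholders.
tag, app = "dev", "xyz"
payload = {
    "id": f"{tag}-{app}",
    # Marathon exports $HOST / $PORT0 / $PORT1 into the task's environment
    "cmd": "./yourscript $HOST $PORT0 $PORT1",
    "cpus": 0.1,          # fractional CPUs are fine on Mesos
    "mem": 256,
    "instances": 1,
    "ports": [0, 0],      # 0 means "dynamically assign me a port"
    "uris": ["http://example.com/yourapp.tgz"],  # fetched and unpacked into the sandbox
    "env": {"JAVA_OPTS": "-Xmx256m"},            # exported for you before cmd runs
}

req = request.Request(
    "http://marathon.example.com:8080/v2/apps",   # hypothetical Marathon endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# request.urlopen(req)  # uncomment to actually POST against a real Marathon
print(payload["id"])
```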

Zip it up, give it to Marathon, and Mesos will download it for you, and then you execute it as if you were executing it at the command line manually; but instead of you doing it manually, Mesos is going to do it for you. And it's going to pass in $HOST, because you don't know which slave it's on, right? You have no clue which slave it's on, but you may need to know that information. And then it's also going to pass in the ports: here we dynamically asked for two ports, so it'll pass in two ports. Once your script launches, you can do anything you want at that point. You can go take a template file, like Tomcat's server.xml, swap in the ports you were assigned, start listening on those ports, and start accepting web requests. Anything you could do at the command line by hand is basically what you're typing in here, and Mesos is automating it, scheduling it, keeping it fault tolerant; if it fails, it'll make it run somewhere else. It's a really powerful system with a very simple, easy interface on it.

[Audience question] Probably not the best idea to do BitTorrent, but there are other ways to do it; I'll talk about that in a sec. So the question was about where the artifacts come from. Mesos supports, I want to say, three different URIs: it could be HTTP based, which means you can host it anywhere you can host HTTP; it could be S3 based; or HDFS. So as long as you can put the file on S3, HDFS, or HTTP, then out of the box Mesos will pull it down for you. If you wanted to, say, integrate BitTorrent, then you'd have to build your own custom framework, and we'll talk about that; it would be cool, but it would have to be custom. Any URI like that works, and the same thing goes for
Docker containers: when you deploy Docker containers, the URI is the image, and where it pulls from could be a private repo or what have you. So this is your script, and remember, with your script you're basically launching your code, and when you launch your code it's important to realize you're launching somewhere; you don't know where it is, but you've been told the host, which is great, and you know what ports you have to listen on. Most times that's enough: if you're a web application, that's great. If you're a database server, or a Kafka broker, or Riak, or Cassandra, or some other system, you might need some other logic. You may need to figure out who the other tasks running in your cluster are and get their information: you might have to contact ZooKeeper, or you might have to call Marathon to get that information. So as you start working with Mesos, you're going to be developing, quote-unquote, "your script" more as your own layer within your infrastructure, doing things like service discovery, really having a wrapper around your applications. But at the heart of it, it's really just the command line. Marathon doesn't actually have an executor; remember we talked about every framework having a scheduler and an executor? Marathon's not like that. Marathon is just a scheduler; the executor is actually Mesos. Mesos comes out of the box with what's called the command executor, which is literally nothing more than sh -c: it just launches whatever you've given it inside a cgroup and passes you parameters. But that's okay, because you're an application, and once you've got your parameters you can be Python or Ruby or Go or bash or whatever; you can do anything you want in here. A very, very powerful paradigm.
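As a sketch of that "call Marathon" style of service discovery (GET /v2/apps/&lt;app-id&gt;/tasks is Marathon's task-listing endpoint; the canned response below stands in for the real HTTP call, and the hostnames are made up):

```python
import json

# Canned response shaped like Marathon's GET /v2/apps/<app-id>/tasks payload:
# one entry per running task, with the host it landed on and its assigned ports.
sample = json.loads("""
{"tasks": [
  {"id": "t1", "host": "slave-1.example.com", "ports": [31000, 31001]},
  {"id": "t2", "host": "slave-2.example.com", "ports": [31417, 31418]}
]}
""")

def peers(tasks_response, port_index=0):
    """host:port pairs for every task of the app (e.g. to seed a cluster)."""
    return [f'{t["host"]}:{t["ports"][port_index]}' for t in tasks_response["tasks"]]

print(peers(sample))  # ['slave-1.example.com:31000', 'slave-2.example.com:31417']
```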
All right, so let's talk a little bit about data, because web applications are great, but at the end of the day this becomes really awesome when you can run applications like Kafka, where you actually want data to be stored there, and you don't want it to disappear, and you want to be able to get to it later. So currently, to store and persist data on Marathon, you have to write it outside the sandbox. That's just the reality, and it's okay to do; maybe some people will yell at you, but everybody does it. And when you do that, since you've got this tag and app, since you've already created that structure of tag and app, then when you write your data to local disk, say /var/lib/data, you just need to organize it by tag and app.
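A sketch of that layout (data_dir is a hypothetical helper; /var/lib/data is just the example path from the talk):

```python
from pathlib import Path

def data_dir(base: str, tag: str, app: str) -> Path:
    """Per-tag, per-app data directory, so co-located instances don't collide."""
    return Path(base) / tag / app

# a dev broker and a prod broker on the same slave write to different trees
assert data_dir("/var/lib/data", "dev", "kafka") != data_dir("/var/lib/data", "prod", "kafka")
print(data_dir("/var/lib/data", "dev", "kafka"))
```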

That way, whether you launch a dev tag, or a prod tag, or a QA tag, or an analytics tag, you can still have all these persistent stores launched on the same slave, and none of them will bump into each other. These are really simple concepts once you start to understand them, but that's how you build systems on Mesos. Moving forward, this hack is not going to be required: there are some really cool features coming, like dynamic reservations, where if you request a resource you can say it's a static resource, and the master will not give that resource to anybody else and will keep offering it back to you as a framework. So if you launched Kafka brokers there and said "this is a static resource," that would be your resource now: if it dies and crashes, nobody else is going to get that offer, and when you start up again you will get that offer and can start your broker again, and when you do, the data will be local. There's also some cool stuff about bind-mounting sandboxes and some other features that make persistent data way better. But there's nothing stopping you from doing it today; people do this today, it works great, and I continually recommend it. Any questions about Marathon? All right.

So let's talk about Aurora. Aurora is completely different from Marathon, and Aurora also goes to some of the questions you had before about priority and preemption. Aurora really came out of (and this is part my opinion, part what Bill Farner has said) people who used to work at Google. I've asked every single person I've ever met who's worked at Google, "what do you miss most from Google?" (Besides Borg; everyone always says Borg, that's the meetup answer.) But really, people always say, "oh, I miss Borg." And when Bill Farner started working for Twitter, he
really missed Borg. He missed Borg so much that he basically decided to write a scheduler on top of Mesos that fundamentally gave Twitter the ability to do what Google did in their data center. So Aurora is, I guess, the closest thing you can get to Google with Mesos, and it's a really cool open source project: it's an Apache project, written in Java. Aurora takes the concept of a scheduler very differently. Aurora basically assumes it's the only scheduler running. Aurora says, "I'm going to own your whole cluster." It doesn't care about all that role stuff I talked about; Aurora is like, "I will own your cluster," and that's okay, because if you have an organization where you want to give compute resources out to analysts, or marketing people, or whoever, and you don't want to give them the keys to the kingdom, then something like Aurora might be better for you. With Aurora you have a very different architecture, and we're going to talk about that, and then about some of the benefits, for when it's not just your DevOps team using Mesos, when you actually have a team of analysts who want to run things on your compute resources.

So, Aurora is both a scheduler and an executor, but they call the executor Thermos (Thermos, Mesosphere, thermosphere, whatever; I didn't name it). Aurora is the scheduler, you've got Mesos in the middle starting tasks, and what Thermos does is open up processes inside of your task. If anyone here has done Marathon development: from my perspective, I've definitely gotten to the point, many a time, where it's like I've just written Thermos for Marathon. There's a lot that happens inside this executor that gives you a lot of business
logic and power, and that's great for your organization. So here's the Job object, and I pulled this out because it really highlights the features Aurora gives you. It separates things by environment and role, so you don't have to willy-nilly hack up this concept of segregation of tasks; it does it as part of the actual scheduler. You can have an environment called dev, or production, or what have you, and you'd have different roles, and then you can have different quotas. So you could, as a user, be given a quota. You're an intern? All right, you get 10,000 CPUs, cool, fantastic. So I can go in as an intern and do whatever I want with my 10,000 CPUs, in whatever environment I'm allowed to

do it in, and I'm good to go. But what Aurora does is it has this concept called production, and it's a really cool thing, production, because sometimes things go wrong in production, and all of a sudden things that are not production just need to stop. All those resources of what wasn't production need to be shut down, and the production system that went down needs to be started back up automatically. Aurora does all of that for you automatically; it calls this preemption. Very, very cool. So that intern who launched the 10,000-CPU Spark job? When a production issue happens, all of that just gets killed, and those resources get taken back by the cluster, owned by Aurora, for production. And then it has all the other stuff you'd expect, the same stuff as Marathon: instances, constraints, CPUs, health checks.

So let's talk about, oh yeah, we'll talk about this first and then we'll talk about writing code for Aurora, because you might be thinking, "why wouldn't we just use Aurora all the time?" Well, there's always a catch. So here's the life cycle in Aurora. It's very much like Marathon, except Marathon doesn't have this concept of preempting: Aurora does a very tight-knit job on not just starting and running tasks, but also killing, preempting, and restarting tasks. That's a very unique aspect of the Aurora scheduler, and I mean, this is how Twitter runs, Foursquare runs like this, a lot of people use Aurora at this point. It's a nifty application.

So, this is hello world in Aurora. Aurora had to decide on a DSL for how you write jobs (in Marathon you write jobs as REST API posts), and what they did, instead of inventing a DSL, is they made it just Python. So if you've ever written Python before, then that's what this is.
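For reference, the slide's hello world looks roughly like this (reconstructed from the Aurora getting-started docs; Process, Resources, Task, and Job are Aurora DSL objects that exist inside a .aurora file, so this is not standalone Python, and the cluster and role names are examples):

```python
# hello_world.aurora: Aurora's DSL is Python with some predefined objects.
hello = Process(
    name = 'hello',
    cmdline = 'echo hello world')          # the command line to run

task = Task(
    processes = [hello],                   # which processes make up the task
    resources = Resources(cpu = 1.0, ram = 128*MB, disk = 128*MB))

jobs = [Job(
    cluster = 'devcluster',                # which cluster to run on
    environment = 'devel',                 # e.g. devel, staging, prod
    role = 'www-data',
    name = 'hello_world',
    task = task)]
```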
All of these are pretty much custom objects, things like Job; remember, I showed you the Job object. There are some specific objects you tie into when you're inside the container, and otherwise you can pretty much write and do whatever you want. So this is a really naive hello world: you've got your Process, and the command line is just going to echo "hello world"; you set your Resources; you say which processes make up the task and link those together; you set up your Job, where you say what cluster you want, what role you want, the task, and what the job is called; and then it's just going to find a place in the cluster and run your little echo hello world.

The reality of building on Aurora is that you really have to build inside the Aurora framework. To run on Aurora, you have to run on Aurora: if you're going to write some Go application or a Ruby application, it's not going to work very well. Aurora is a great application, especially if you have dozens or hundreds of people who actually need compute resources from the Mesos cluster; then Aurora is fantastic. If you're setting up a Mesos cluster and using it just for your operations, and your team is the one managing it, then you should use Marathon. And in either case you should think about writing your own custom framework, which we'll talk about in a minute.

All right, so this is what it looks like to get Kafka running on Aurora. I thought this was a good example, if for nothing else, just to understand both the power and the trade-off of this type of system. Here you set up a profile with everything you're setting: locations of where to download stuff (Mesos always needs to download the executor from somewhere and run it; it's not just magically going to appear there, so you've always got some URI it downloads from), plus a bunch of other important fields, like the version and some
other fields. Here we create a structure called the process; this particular integration will actually pull the binaries from HDFS directly, that's what that process on the bottom is doing. And it's a lot to look at, but it works, and that's the cool thing: when you're building software on Aurora, it does work. There's a framework, and you have to work within the framework and within the container. It's very different from Marathon, where you can focus on your applications and build them, but where it's you who controls the access. Here, anyone can go in and get compute

resources and launch something. I'm not going to go through all of this in detail; it's all online if people are interested in running Kafka on Aurora, you can do it. So here's the process: this registry script is required to match things up and do some service discovery (you kind of have to build and own your own service discovery at that layer), and then there are all the Kafka settings you want to set when the Kafka broker starts. And then here you're actually getting to the point where you're running it: this is the actual command line, the Kafka startup class you're going to execute. And then you've got your task, and it's got resources, but you have different resources for staging than you do for production, so you set up different profiles for staging and production, and it's all within this one script. This one job is handling everything, in one script. It's a lot. So that's it for Aurora.

And just because we talked about Kafka and Mesos: if you want to run Kafka on Mesos, you shouldn't use Marathon or Aurora; you should use the new framework that we've been building and working on, and we're going to talk about custom frameworks now. It's a good segue into custom frameworks. So, the reasons to use a custom framework are a few fold. Marathon is great when you have a team that owns the production system, and they're the ones (whether developers or admins) building the software on that production system. Aurora is great when you've got people who maybe shouldn't have access to production, but who want compute resources and want to do things with them; Aurora is fantastic for that. But at the end of the day, these frameworks are abstracting away a lot of features that you're
not accessing directly with Mesos. One of the most important ones, at least that I find, is that when preemption happens, when your task gets killed for whatever reason, in Marathon and in Aurora you just get shut down. You just get killed; you have no clue it's coming, and you just die. There are applications that actually like to do things before they get shut down and killed. When you're building a custom framework, you have the ability to have communication between the executor and the scheduler, to do things like, "hey man, I'm going to kill you," and then it can do its cleanup thing and say, "all right, go ahead and shut me down." So there's a lot of flexibility that comes with frameworks; we'll talk about those. And for real, if people want to try out the Kafka framework: it's alpha, please try it out, it's kind of cool.

So, there are lots of sample frameworks out there, and I would definitely say: pick one of these languages that you like and go look at the sample framework. It's a really great way to understand both the scheduler and the executor, and the protocol buffers that have to flow between the two, and how they work together, which we'll talk about in a sec. Frameworks are made up of protocol buffers going back and forth between the scheduler and Mesos, and between Mesos and the tasks and the executors, and basically you're getting called back within a container. When you launch the scheduler, you're sitting on callbacks, and you get called when things happen; when the executor starts, you're sitting there getting called when things happen; and you're passing protocol buffers on those calls. It's that simple.

So here's one of those protocol buffers: FrameworkInfo. FrameworkInfo kind of kicks off the whole shebang. You've got the user you're going to run under, the framework name, the framework ID, and the failover timeout, which is really important: if your framework fails, it's how long until Mesos kills all your tasks while waiting for your framework to come back online. You kind of want to set that to a high number, because things fail and you don't want all your tasks to die. Checkpointing, same thing: without checkpointing, when your framework fails, Mesos will just kill all your tasks; with checkpointing set, if your framework fails your tasks keep running even though there's no scheduler. And there are times to do this and times not to: for something like Spark, which is a short-lived, quick scan, you don't need checkpointing, you don't want checkpointing; when the scheduler is done, the operation is over, the job is over. Role, we talked about before: the default is *, but you can make the role whatever you want and it'll match that role within the cluster and consume those resources. Right now you can only assign one role, and that's it. And then there's the principal, which is for

Kerberos-style authentication, and the rest is pretty obvious. And then you have TaskInfo. So FrameworkInfo is information kind of owned by the scheduler; TaskInfo is all the information related to the task, and it's actually gotten to be pretty robust. It could probably take three hours to go through in detail, so I'll make it quick. You've got your task ID, your slave ID, and what resources are assigned; these are all other protocol buffers that exist in the mesos.proto structure. So resources would be the CPU and RAM associated with it, plus the executor associated with it, the command associated with it, whether it's Docker (and eventually there'll be other things besides Docker, like QEMU). And then some cool newer fields. HealthCheck: not that new, but still pretty useful and cool. Mesos will actually run health checks for you based on the health check information you set in here. In Marathon this exists (I'm not sure about Aurora, honestly): in Marathon you can set health check information, it gets passed down to Mesos, and Mesos will run whatever you want inside the task to check whether or not something is healthy. It could be a bash script, it could ping a URL; you run whatever you want to determine that one task's health. So you have a very tight integration: without Mesos having any clue what the software is, you can give the rest of the infrastructure knowledge about whether this thing is alive or not. I don't know if folks here have ever run NameNodes, but just because your NameNode is on does not mean it's working. These are fundamental things: the app is up, but I hit it and it blocks. Just because something is running doesn't mean it's working. Being able to get at "is this really working or not" is a very low-level primitive that Mesos exposes.
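As a sketch, a command-style health check in a Marathon app definition looks something like this (the healthChecks field names follow Marathon's API as I recall it; the curl probe itself is a made-up example):

```python
import json

app = {
    "id": "my-web-app",
    "cmd": "./yourscript $HOST $PORT0",
    "healthChecks": [{
        # COMMAND checks run an arbitrary probe inside the task;
        # Mesos only sees healthy/unhealthy, never what the probe means.
        "protocol": "COMMAND",
        "command": {"value": "curl -f http://localhost:$PORT0/health"},
        "gracePeriodSeconds": 30,       # let the app start before judging it
        "intervalSeconds": 10,
        "maxConsecutiveFailures": 3,    # restart the task after 3 misses
    }],
}

print(json.dumps(app["healthChecks"], indent=2))
```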
So when you're running your own framework, you can implement that yourself. Labels are kind of cool: labels are a newer feature that basically lets you sprinkle extra information onto a task once that task has been launched, so you know more about it, things you essentially need to know. Let's think of a good use case so people can understand this. Say you launched three Kafka brokers: you have them as tasks, but you don't know which one is broker ID 1, which is broker ID 2, and which is broker ID 3. Or you've got a Redis cluster and one Redis server is the master. Labels let you sprinkle that extra metadata onto the tasks, so that when you look a task up you know what it's all about. Or if you launch two NameNodes, you can say, "hey, this NameNode is the active one and this one is the standby," so when you go look at a NameNode you can just look at the labels and know what's going on. It's really just key-value pairs. And then DiscoveryInfo, which is also pretty cool: it's what allows the new Mesos-DNS to run. It's just discovery information; Mesos doesn't even look at it, it's just more information you're exposing about your task that others might be interested in.

And then there's task state. When you looked at the Aurora structure of how it handles tasks and killing and all that kind of stuff (which I threw up and blazed over quickly), at the end of the day this is really what's happening under the hood. But what I'd like to impart more than anything is that we're at a really good time with this platform, where you've got systems like
Marathon and Aurora that allow you to be productive on Mesos today, but the future is really with things like this. The same way mobile changed computing back in 2009 (everyone's got a phone, right?), and it changed with apps, this is now the future with Mesos: this is basically apps for your data center. Instead of having something run on Marathon, you can build a framework. You're a Riak person, or an Aerospike person, or a Rails person, or an nginx person; it doesn't matter. Those are just frameworks, apps that run on the data center computer. And that's what

frameworks really are: these primitives that allow you to build and run on this kind of ecosystem. And it's really simple; the task states are staging, starting, running, finished, failed, killed, lost, and error. It's really straightforward. Now the scheduler: this is where we get into callbacks. Those protocol buffers are maybe one third of the protocol buffers; it's all open source at Apache, so you can go look at the code and read the protocol buffers yourself. But when you're writing a scheduler, this is where the callbacks come in. One function you have to implement is registered, and you can see the SchedulerDriver and the framework info in the signature: these are parts of the protocol buffers we looked at. Mesos will call you, it'll call this function, and you'll get this information; so when you're registered with Mesos you get this call and can do whatever you want. Same thing with reregistered: if you fail and come back online, it'll basically go, "oh, I know you already," and re-register you, so you can know you've been there before, and that state is held for you. And then resourceOffers: this is the function that gets invoked whenever a resource offer comes in. And this is really simple stuff; in terms of most computing problems, these are really simple things to build once you understand it. You just have to follow the API. You're going to get a resource offer every time the master has some offer of a slave's resources for you; this function gets called, and you do whatever you want with it. And hopefully you'll decline the offer, because if you don't, you're going to starve your cluster. Please decline your offers if you're not going to use them. And then, you know, sometimes offers are rescinded.
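The shape of those scheduler callbacks, as a runnable sketch (the method names registered/resourceOffers/declineOffer/launchTasks mirror the real Mesos bindings, but FakeDriver and the dict-shaped offers here are stand-ins so the example is self-contained):

```python
# FakeDriver stands in for the real SchedulerDriver Mesos hands you.
class FakeDriver:
    def __init__(self):
        self.launched, self.declined = [], []
    def launchTasks(self, offer_id, tasks):
        self.launched.append((offer_id, tasks))
    def declineOffer(self, offer_id):
        self.declined.append(offer_id)

class MyScheduler:
    def __init__(self, cpus_needed=1.0):
        self.cpus_needed = cpus_needed
    def registered(self, driver, framework_id, master_info):
        # called once Mesos accepts you; stash the framework_id if you need it
        print(f"registered as {framework_id}")
    def resourceOffers(self, driver, offers):
        # use what you need, and *decline everything else*
        # so you don't starve the cluster
        for offer in offers:
            if offer["cpus"] >= self.cpus_needed:
                driver.launchTasks(offer["id"], [{"name": "my-task"}])
            else:
                driver.declineOffer(offer["id"])

driver = FakeDriver()
sched = MyScheduler(cpus_needed=1.0)
sched.resourceOffers(driver, [{"id": "o1", "cpus": 4.0}, {"id": "o2", "cpus": 0.5}])
```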
Rescinded means sometimes Mesos will just go, "yo, that's not yours anymore, I'm sorry," and you get that information. You also get status updates: as things change within the system, you're getting called by Mesos. So if you think about the framework you're building, in the scheduler you're really just living in a container, and the container is calling the functions it has specified for you, and you're just acting on those functions. It's a very simple, traditional paradigm. And then frameworkMessage, because sometimes you get a framework message as well, and you can parse that message and do something with it. Same thing when you've been disconnected, or a slave is lost: you're going to get these calls, slaveLost, executorLost. These are important, because there's a difference between your task dying and your slave dying. Your slave dying means some admin maybe rebooted the computer, or the mouse chewed through the NIC, or whatever: the machine went away, which is different from just your task going away, and you want to do different things in those cases. If the slave is still there, maybe you just want to restart the task and make sure all the data is still available; but if the slave is gone, maybe it's time to start an instance somewhere else and get replicating, because that slave is gone. You don't have that ability with Marathon or Aurora, to really see what's happening in the cluster and make those really intelligent decisions that your software needs to make. And when I say "your software": I know everyone here doesn't work for Riak, for Basho or whatever; we're all engineers, we all have different business problems. I'm talking about internal R&D, proprietary software. You should be thinking, "oh, you know, I build an ad server; how can I run an ad server as a custom framework?" Or
maybe, "I build a compliance system"; whatever it is, all of these could, and should, be custom frameworks that essentially run on Mesos. Errors come in as well, and so on. All right, so that's the scheduler: you've got your framework's scheduler sitting on one side, the Mesos master is telling it everything, figuring out all the resources, and the scheduler is saying, "ah, I like that resource, go ahead and launch this." Now, when it launches that, what Mesos does is download the executor: you give it a location to fetch your executable from, and when that executable runs, it's running in the Mesos container on the slave and getting these callbacks. So the executor starts, and the first thing is going to be registered, same with reregistered if the executor restarts within a task, which is possible. launchTask is where the magic happens: when a task is launching, it's your responsibility to launch it. You can do whatever you want at this point: you can open up another process, you can launch it in your same process, whatever; you've been instructed that it's time to launch your task, so go do whatever you need to do. killTask: this is where the "I'm going to shut you down, do something about it" comes in. And then there are framework messages here too.
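The executor side has the same callback shape. A sketch (the method names launchTask/killTask/sendStatusUpdate mirror the real Executor API; FakeExecutorDriver and the dict-shaped task are stand-ins so this runs on its own), showing the cleanup-before-kill hook that Marathon and Aurora don't give you:

```python
# FakeExecutorDriver stands in for the real ExecutorDriver.
class FakeExecutorDriver:
    def __init__(self):
        self.updates = []
    def sendStatusUpdate(self, update):
        self.updates.append(update)

class MyExecutor:
    def __init__(self):
        self.cleaned_up = False
    def launchTask(self, driver, task):
        # you own this part: fork a process, start a thread, whatever
        driver.sendStatusUpdate({"task_id": task["id"], "state": "TASK_RUNNING"})
    def killTask(self, driver, task_id):
        # the point of a custom framework: you get warned, so you can
        # flush and clean up before dying instead of just being killed
        self.cleaned_up = True
        driver.sendStatusUpdate({"task_id": task_id, "state": "TASK_KILLED"})

driver = FakeExecutorDriver()
ex = MyExecutor()
ex.launchTask(driver, {"id": "t1"})
ex.killTask(driver, "t1")
```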

Framework messages just go back and forth. There's no guarantee that framework messages will be delivered, so you can't fully rely on them, but they're a really nice way to talk to yourself between executor and scheduler if you need to. And then shutdown: very simple, straight-up concept, so you can have a clean shutdown. Errors come in as well. And that's it. Anyone have any questions?

[Audience question: do you recommend people usually write their own framework for a service?] Yeah, I mean, if you ask me that question in three months, I'll answer differently than I do now. I would say: if you've never used Mesos before, you should probably get started using Marathon, and after a few days, once you start to understand it, look at your organization and decide whether you need to be using Aurora or Marathon. And then look at all your systems and try to pick out the ones that make sense to build as frameworks. Right now there are a handful of frameworks; until there are, like, a hundred frameworks out there, it doesn't make sense for you to internally go make everything a framework. It's way easier, and you don't really lose anything, by just running it on Marathon. Long term, a year from now or whatever, everything will look like frameworks, in my opinion. You'll just go, "oh, I need an nginx server," download the nginx framework, and launch within the nginx framework. But until someone builds that, what you do is take an nginx install, launch that on Marathon, and then launch your application inside of there. And some people do run custom frameworks today, for things like their own analytics or their own stream processing; a lot of people have done some really cool things in custom frameworks for their own businesses, very specific things, but that came out of the need of "oh, we can't do this on Marathon." And even though
Looking into my little crystal ball, a year from now it'll all be custom frameworks and we'll have app stores for the data center, which would be awesome, but that's a year from now. So today, Marathon and Aurora are great: get started with those, figure out how to get your systems running on there, and where you start to see gaps and deficiencies, that's where custom frameworks start to make more sense. Or if you just want to be ahead of the curve, depending on where you work; if you work for a product company, then a custom framework might make more sense for you.

Yeah, so, the versioning warning; I'll give it again because it's so important. Docker is great. I love Docker, it's fantastic. The only real gotcha I've seen so far is the upgrade path. If the Docker daemon dies, all your tasks die; all your Docker containers die. So that's a single point of failure, depending on how you look at it. That's an issue, but you also can't upgrade the daemon without shutting it down. If you have a slave running Docker 1.4 and you want to upgrade it to 1.5, you have to shut everything down, and this is not a Mesos thing: you have to shut down every Docker container running on there, and then upgrade. Mesos, on the other hand, has a fully HA restart-and-upgrade lifecycle. You can restart a slave and your tasks don't die, so you can upgrade a slave and all of your tasks stay there, running and humming along. Twitter, for example, runs some stupidly big, like 20 terabyte, Redis cluster. You don't want to restart a server holding six terabytes of data; it would probably take like 30 minutes to start back up. So it depends: if you have really quick systems that can start up fast and fail over, then maybe it's not an issue; it's different if you have data residing there.
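The restart-and-upgrade lifecycle described above comes from slave checkpointing and recovery. A rough sketch of the slave invocation, using flag names from the Mesos slave of this era (the ZooKeeper URL and work directory are placeholders, and frameworks also have to opt in to checkpointing in their FrameworkInfo):

```shell
# Checkpoint task state to disk and reconnect to still-running executors on
# restart, so restarting mesos-slave (e.g. for an upgrade) does not kill tasks.
mesos-slave \
  --master=zk://zk1:2181/mesos \
  --work_dir=/var/lib/mesos \
  --checkpoint \
  --recover=reconnect \
  --strict
```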
If that data lives on attached volumes and you've got to shut it all down just for an upgrade, that might be a problem, or at least something you have to know about before you get there. Besides that?

[Inaudible audience question]

Yeah, so that is a fantastic example of where things like attributes make sense. If your application is going to install itself with bash on CentOS, with yum let's say, then you want to make sure it only goes to a CentOS machine. If you want to do that, sure, that's where attributes come in: you have an attribute for the operating system, centos, or more specifically centos 5.7 or centos 6.5, and for JDK 1.7 or JDK 1.8, and so on. So you can use attributes for things like where software is installed. And if no one's thinking it yet: "where's Chef? where's Puppet?" Right, it's great: no more Ansible, no more Salt. You may still need them to bootstrap your machines, but once that's happened, that problem doesn't exist anymore. The orchestration problem of having to manage and install and do all that stuff goes away; it happens within the container. When you deploy something, it deploys everything it needs to run right there, locally, and if it doesn't, or it conflicts, you can throw it in a container and it will run as well. So it's different, but it works; it's pretty cool.

[Audience] So if you specify the host OS and all the attributes your application needs, could you install anything that way?

Yeah, I mean, it all depends on your available hardware and how much of it you have. I'd say keep attributes for things that are not changing too much: JDK, operating system, maybe Python version, things like that. When you want to install, say, jar files, all of that can sit in the sandbox; you can just unzip them locally and run them within your classpath. If you need to start changing Ruby versions and doing things that are going to conflict on the operating system, then maybe you should start thinking about something like Docker, because then at least you'll have an operating system within your operating system, and to me that's where the power of Docker comes in.
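To sketch both ideas, attributes for placement and Docker for conflicting runtimes: slaves advertise attributes at startup (e.g. `--attributes='os:centos6;jdk:1.7'`), and the Marathon app definition pins placement with constraints, where the LIKE operator matches against the attribute value. The ids, attribute values, and image name here are all illustrative assumptions:

```python
def placed_app():
    """A Marathon app pinned to slaves whose attributes match its needs."""
    return {
        "id": "/my-service",
        "cmd": "bin/my-service --port $PORT",
        "cpus": 1.0,
        "mem": 256,
        # only offers from slaves started with matching --attributes
        # (e.g. --attributes='os:centos6;jdk:1.7') will be accepted
        "constraints": [
            ["os", "LIKE", "centos6"],
            ["jdk", "LIKE", "1.7"],
        ],
    }

def containerized_app():
    """The same idea when the runtime would conflict: ship the userland too."""
    return {
        "id": "/my-service-jdk8",
        "cmd": "java -jar /opt/service.jar",
        "cpus": 1.0,
        "mem": 512,
        "container": {
            "type": "DOCKER",
            "docker": {"image": "my-registry/service-jdk8", "network": "BRIDGE"},
        },
    }
```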
Say you run JDK 1.7, but you have a new application that's using 1.8, and you don't run 1.8 anywhere and don't want to install it anywhere. So you install JDK 1.8 in a Docker container, put the app in the Docker container, and launch as many of those Docker containers as you want on Mesos; it just runs, and it doesn't conflict either. Containerization with Docker is good for that type of packaging. Otherwise, I'd say use attributes, or just install things locally. That's what we do: for one of my clients we run HDFS and Kafka and a bunch of other stuff on Marathon on Mesos, and when we install HDFS we just unzip it, change the configs, and start it; we don't install it on the machine itself. You can actually have different Hadoop versions running on the same machine, all just corralled in different directories, and you don't have that conflict of, like, static C libraries or anything like that. But if you do get into that, then something like Docker becomes your answer.

[Audience question about running Mesos on Windows]

So, Windows doesn't have a Linux kernel, so you don't get cgroups. But there is new containerization coming out in Windows 10, and the new Windows 10 containerization, in theory, will allow Mesos to do proper isolation on a Windows machine. Now, the cool thing besides running Mesos on Windows is actually running Windows on Mesos. What you can do is take a KVM image and launch dozens of KVM images on a Mesos cluster, then give people RDP access, and you have desktops in the cloud, except they're just on your Mesos cluster. All you're doing is starting up a KVM instance, which is just a command: you download the KVM image local to the machine, start it, and boom, you've got Windows, with however many resources it needs, running in your cluster, doing your compute. You just RDP to it, and boom, you're in your machine.
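The KVM trick really is just a command in a Marathon app: the Mesos fetcher pulls the disk image into the task sandbox, and the task boots it. A hypothetical sketch, where the image URL, the qemu flags, and the resource numbers are all made up:

```python
def kvm_app(image_url):
    """A Marathon app whose task is a whole VM booted with KVM."""
    return {
        "id": "/windows-desktop",
        # Mesos downloads each URI into the task's sandbox before launch
        "uris": [image_url],
        # boot the fetched disk image; qemu flags are illustrative
        "cmd": "qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 "
               "-hda windows.qcow2 -vnc :0",
        "cpus": 2,
        "mem": 4608,  # guest RAM plus some headroom for qemu itself
    }
```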
So that, I think, is great.

[Audience] Does the whole Mesos-on-Windows idea depend on Windows Server 10, since that's when you get isolation?

Sure, but I think it works without Docker as well. I could be wrong, but I'm pretty sure that even without the Docker functionality you would still get the same isolation; it uses the same underlying primitives. Both Docker and Mesos use the same underlying Linux container mechanisms; they just use a different API. Anyway, KVM on Mesos is awesome. I'm hoping to see a QEMU framework sometime in the near future just to make that stuff easier, but you don't need one to do it; you can just run it on Marathon, and it works. It's cool. Everything could just run on that, which is kind of cool. Any other questions? All right, cool. Well, thank you, everybody.