Evan Lofsky – Amazon and Fabric

We run the usual nginx and Apache stack in front of everything, and for deployment we're using Fabric. When I got there, the site was on a different virtualization provider, and one of my first jobs was to move what we were running onto Amazon's EC2 servers. I have a lot of experience with EC2 itself and a lot of sysadmin experience, but not much experience with Chef or Puppet — I worked with cfengine a long time ago, but we won't talk about that. So I looked at Chef and Puppet, and the choice was: spend a lot of time learning something new and then try to set everything up with it, which may or may not come out the way we'd like, or just use what I already know about Amazon, Python, the Python module for Amazon, and systems administration, and write something. So I just wrote something.

The basic interface is a set of shell commands — actually Python command-line scripts, easy_install-able stuff that puts things on your path — and that's how you interact with it to set up your stack. Right now we have a few different kinds of servers you can set up. The command here is a little different from the slide text, but where it says web, if you say database instead, it'll set up a new database server using the large Amazon server configuration: it'll set the hostname you give it, something like db.memrise.com; it'll install MySQL 5.5 from source — there was no Ubuntu package for it at the time, though there may be now — and then install our special my.cnf file, which has all of our performance settings. We do this for web servers, for our Jenkins continuous-integration server, for our database servers, and for our Solr text-index servers; each server type has its own special configuration. We also have a really standard common configuration that all the servers use, which installs a list of Ubuntu packages and a list of Python packages that everything gets. And there are API calls inside the framework, so you basically just add things to Python lists of packages in the configuration, and it'll add them to the lists of system packages — for apt, for example — or Python packages that pip installs.

We're actually publishing the code for this — I put the URL on the slide; it's our open-source code page, where we put up the different projects we're open-sourcing. The code will probably be out late this week or early next week; we're doing a massive rewrite right now to pull all the Memrise-specific stuff out of it, so you can configure everything for your own local site policies.

For determining what kind of EC2 instance size to use, and which machine image, there are two maps. Back on the slide you can see start_instance, which is a command: the first parameter is the type of server you want set up, in this case a web server, and the second is the hostname, without the domain component, that you want the machine to have.
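As a rough sketch of the idea — the names and AMI IDs below are placeholders, not Memrise's actual code — the tool keeps one map from server type to EC2 instance size, and another from architecture to machine image, so adding a server type is just adding an entry:

    # Hypothetical configuration for a start_instance-style tool.
    INSTANCE_SIZES = {
        'web':      'm1.medium',    # 1 CPU, medium-sized, 32-bit
        'database': 'm2.4xlarge',   # 8 CPUs, 64 GB RAM, 64-bit
    }

    ARCHITECTURES = {
        'm1.medium':  'i386',
        'm2.4xlarge': 'x86_64',
    }

    # Only one image per architecture is needed.
    IMAGES = {
        'i386':   'ami-00000000',   # placeholder 32-bit Ubuntu image
        'x86_64': 'ami-11111111',   # placeholder 64-bit Ubuntu image
    }

So a command like "start_instance web web27" would resolve to an m1.medium with the 32-bit image, and "start_instance database db2" to an m2.4xlarge with the 64-bit one.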

We call it instance type here, but really it's the server type. web maps to an Amazon m1.medium, which is basically one CPU, a medium-sized instance, 32-bit; and if you go to the other map here, inside a dictionary, you'll see that m1.medium uses our 32-bit image. The databases — and we do quite a bit with the database — use the largest instance Amazon has, an m2.4xlarge: eight CPUs, 64 gigabytes of RAM, 64-bit. It's really massive, a lot of fun to play with, and it uses the 64-bit image. We only have to keep one 32-bit image and one 64-bit image, because those are the two architectures Amazon provides — though they provide a lot of instance configurations on top of them. So when we set up a database or a web server, it looks up what to use — this is just a summary of that — chooses an architecture and image, and then goes through and sets up all the packages and all the custom software we have. If it's a web server, it'll install the SSH keys we need, check out our software stack locally from a git repository, and do a full deployment on it, making it ready to go — so that the next time we run our standard deployment script, which updates all the web servers, it gets included in the mix.

Let me step back a bit. Another architecture we considered was a central control server: you'd SSH to it and run a deploy command, and it would go through its list of running servers and deploy to them, plus whatever configuration had to be set up. The problem is that things happen, and the servers you actually have running aren't necessarily the servers the master server knows about. One thing I've found over the years is that master servers have a great talent for getting out of sync with the real world. But Amazon provides metadata tags that you can assign to each instance and read back through the Python API. One of them is a Name that shows up in the Amazon console; the rest are free-form string key/value pairs. So we assign one called instance-type, whose value is that first parameter to the start_instance command, and it tells all our deployment and management scripts what kind of instance each machine is.

Then, when we do any sort of systems management, it all happens from our individual laptops. A laptop connects to EC2, asks where all the web servers are, and then tells each web server to do a deploy: it connects to the web server with Fabric, SSHes in, checks out the given tag from git, uploads the static files to S3 if they haven't been already, and makes the new version of the site live. That works really well, because there's exactly one place where the information about what kinds of servers we have is stored, and that's AWS itself. The servers are, in essence, the source of information about themselves, and we never have to maintain a separate list of which web servers are good today — because maybe we started a few extra expecting a load spike, but the load balancer is acting funny, or some of them went down, or someone forgot to take some out, or any other weird thing like that.
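A minimal boto-flavored sketch of that launch-and-tag step, using the maps from the earlier sketch — everything here is illustrative, not the real tool:

    import boto.ec2

    def start_instance(server_type, hostname):
        # Launch the right size/image for this server type, then record
        # what it is in EC2's own metadata tags -- the tags, not some
        # local list, are the source of truth about the fleet.
        conn = boto.ec2.connect_to_region('us-east-1')
        size = INSTANCE_SIZES[server_type]
        image = IMAGES[ARCHITECTURES[size]]
        reservation = conn.run_instances(image, instance_type=size)
        instance = reservation.instances[0]
        conn.create_tags([instance.id], {
            'Name': hostname,              # shows up in the AWS console
            'instance-type': server_type,  # read back by every script we have
        })
        return instance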
The metadata tags we have are the instance type and the name, and they're set by the initial instance setup: when you call start_instance or setup_instance, you give it a hostname — the name you want that server to have — and the startup script sets it in /etc/hostname, sets it in /etc/hosts, and sets it in the postfix configuration. Postfix is then configured to relay through — I think we're using SendGrid for email.
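The hostname plumbing might look something like this with Fabric — a sketch only; the real setup script also rewrites the postfix configuration:

    from fabric.api import sudo

    def set_hostname(fqdn):
        # Make the box believe in its own name, so cron mail, syslog,
        # and local delivery are all labeled sensibly.
        short_name = fqdn.split('.')[0]
        sudo('echo %s > /etc/hostname' % short_name)
        sudo('hostname %s' % short_name)
        sudo("echo '127.0.1.1 %s %s' >> /etc/hosts" % (fqdn, short_name))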

All the email coming from a host goes through SendGrid, so we get it — and everyone on the right lists gets it — when things like cron job emails come through. Because we set the hostname properly everywhere on the host, you don't get the weird local email configuration errors — could not resolve hostname, or whatever — and you don't lose those annoying but important-at-the-least-expected-time emails where cron is saying: your disk is full, you should look into web37 and find out why.

The setup also connects to Amazon's DNS service, Route 53, and sets a CNAME record from name.memrise.com to whatever the Amazon hostname is, so you can use our short internal names to connect to anything we're running, and they'll resolve from wherever you are. That was actually pretty fun, because Route 53 had just been rolled out when we did this: there was no documentation, and the APIs in the boto library were basically really raw, but we figured out how to do it. Now, any time we start a server using our management scripts, the DNS entries get updated automatically. And for when Amazon changes a hostname — which happens whenever you stop and start a server — we have a command that goes through all the metadata tags, looks at all the names we have, and forces every server's DNS name to be right: name.memrise.com becomes a CNAME to whatever the current AWS hostname is. That has come in handy on a number of occasions.

So now we get to what we're doing with Fabric. Normally I'd have live examples, but I'm on the internet through my phone, which is a little unreliable, and I don't want to actually do anything that would update our site — so I'll show some code and talk about it, and at the end, if there are questions, I'll show you pretty much anything we have related to deployment; maybe not some of our more esoteric internal stuff, but anything deployment-related.

Fabric is a Python command-execution environment: it lets you run shell commands locally and on remote hosts, and you can specify different groups of hosts to run a given set of commands on. It's all scripted in Python: you say, this Python function runs these shell commands on the remote host and those shell commands on the local host, you chain the Python functions together, and then you call fab from the actual shell and say: run this Python command, which is a recipe for running a bunch of commands in a bunch of different places.

We have a command called deploy — you run fab deploy — and it takes a list of our batch job server, which has to run the application stack because it's running celery; the list of web servers — right now I think we have 25 or so, a lot more than when I started, but we're growing really fast; and the text index servers, which also have to run the full stack because we're using Haystack with Django — it's the Django plugin for using Solr. Basically, anything running our Django application needs a deploy done to it every time we deploy, and we do maybe 10 or 15 deploys a day — we've eaten the continuous-deployment dog food. So the command finds the host types that should get a deployment.
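To make that concrete, the core of Fabric is just this — a tiny illustrative example, not Memrise's code:

    # fabfile.py -- Fabric runs functions like this when you type `fab hello`.
    from fabric.api import env, local, run

    env.hosts = ['memrise@web1.example.com', 'memrise@web2.example.com']

    def hello():
        local('echo this ran on my laptop')  # shell command on the local host
        run('uname -a')                      # shell command over SSH, once per host

Running fab hello executes the function once for every host in env.hosts.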
There are three basic groups. It does the batch server first — which is also what uploads all of our static files to S3 — and then it does all the web servers in parallel, if you're running from a laptop that supports the parallel decorator in Fabric; it doesn't work on some versions of OS X, so we have to check for that. For the people it doesn't work for, a deploy takes 20 or 30 minutes; when I do a deploy it takes about three. It's a lot faster, because we can do it in parallel to all 24 at once.

And then, in case some hosts have changed or gone away, or you've added hosts that aren't yet in the load balancer, it looks at the hosts the load balancer knows about, looks at the hosts that are supposed to be web servers by checking the AWS metadata tags, finds the differences, and updates the load balancer configuration: it removes anything no longer listed as a web server, and adds anything listed as a web server that the load balancer doesn't know about. It only applies the differences — it doesn't rebuild the whole thing — so the site never goes down. And the way Django works with mod_wsgi and Apache, you can change the code on disk all you want and it won't reload until you touch the wsgi file; we update the load balancer configuration after we touch the wsgi file, so the servers are all running the latest version by the time they're in the load balancer. That works out pretty well: we get fast deploys, everything is running the latest code by the time it's deployed, and if a web server has gone away, the load balancer is updated automatically, so you stop receiving health-check failure notifications when something brought a web server down and someone forgot to fix the load balancer — it just gets taken care of on the next deploy. And deploys happen all the time: we have one person here in Los Angeles, a couple of people in New York, most of the team in London, and someone in Beijing, so basically 24 hours a day one of us is deploying something. We're taking advantage of that by building a lot of our DevOps out into the deployment scripts, so it all happens automatically.

So, the three types of servers get assigned to roles in Fabric — Fabric has this really cool thing called roles. web_servers here is basically a list of all our web servers, in the user@hostname form that SSH can use to connect — something like memrise@web27.memrise.com. That's how Fabric identifies them internally, and it's what it uses to connect to each host. What we do is build the host list for each type of server: for the web server hosts, for example, we create the list up here with get_web_servers, which is really just a giant filter function over all the instances from EC2. The criteria for something to be a web server that gets deployed to are right here: is_running checks that the instance is live — powered on, operating system functional — and is_web just checks our metadata tag: is this a web server? That gets us the list of all the web servers, and it sorts them by hostname, because we just like sorting things by host — it's easier to read the output when we're doing serial deploys. In parallel, the output is basically a crazy soup of random characters, because every process writes to the terminal at the same time; maybe I'll be able to show that later, I'm not sure yet. So we have the web server list, then the web server hosts, and then the actual web servers role, which tells Fabric: these are all the web servers. And then later on you'll have a command like list_webservers.
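The filter-and-roles shape he's describing, sketched with boto and Fabric — function names follow the talk, the bodies are guesses:

    import boto.ec2
    from fabric.api import env

    def is_running(instance):
        return instance.state == 'running'

    def is_web(instance):
        # The metadata tag is the only criterion for being a web server.
        return instance.tags.get('instance-type') == 'web'

    def get_web_servers():
        conn = boto.ec2.connect_to_region('us-east-1')
        instances = [i for r in conn.get_all_instances() for i in r.instances]
        web = [i for i in instances if is_running(i) and is_web(i)]
        # Sorted by name purely so that serial output is readable.
        return sorted(web, key=lambda i: i.tags.get('Name', ''))

    env.roledefs['web'] = [
        'memrise@%s' % i.public_dns_name for i in get_web_servers()
    ]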
So you say fab list_webservers, and it'll contact EC2 — assuming I'm actually on the internet — find all the web servers, get their names, and list them. It's a little hard to follow the way it does the output, but basically, for each web server, it lists the short hostname and the long one.
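The listing command is then nearly trivial — again a sketch, reusing the get_web_servers lookup above:

    from fabric.api import task

    @task
    def list_webservers():
        # Print the short name plus the current AWS hostname for each box.
        for instance in get_web_servers():
            print('%s  %s' % (instance.tags.get('Name'),
                              instance.public_dns_name))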

So here we have our full complement of web servers running — and it does the same thing for the celery servers; I don't think we have one for the other server types.

Now, this is the meat of our deploy command. I'm not going to actually run it — I'm probably a few weeks behind our repository because I've been working on other stuff, and it would take a while to sync my repository with the main one — but what it would do is deploy to the celery servers, which is another Fabric function. Since deploys run for such a long time, fab caches the passwords you give it: we have this pre_cache_sudo_password thing that, right off the bat, asks you for the sudo password and stores it, so you don't come back in ten minutes to find your deploy failed because it sat there waiting at a password prompt — that's always a bummer. push_master here is a command that syncs the local git repository with the main git repository — it does the push/pull dance. Since the roles listed here include all the celery servers, this function runs once per celery server; we only have one, so it runs once. It calls our common deploy function, deploy_one, which is used for every deployment and does a whole bunch of stuff, and then it restarts the celery services — that's the celery-server deploy function.

You'll notice, if you look at deploy_web for example, there's this maybe_parallel decorator right here — and it isn't on the celery deploy. The celery deploy can't run in parallel, because a lot of what happens on the celery server affects all the servers deployed after it, mostly uploading all of our static assets to S3, so it runs one host at a time — and since we only have one celery server, that's fine. deploy_web, though, really does run in parallel: it'll do all 26 or 27 web servers at once from your laptop, dump a bunch of junk to the screen, and take about a minute and a half or two minutes. Then we finish by deploying to our Solr server, which handles site search. So the Fabric commands handle all the deploying.

The maybe_parallel decorator is actually really lame, but it's effective: if you're on Linux, or one of a few known-good systems, you get a parallel deploy; everyone else gets a serialized deploy. Some people run the latest version of OS X, where parallel works fine, and others run an earlier version, where it breaks. I was running Linux for a long time until I got this laptop, and it works fine on Linux, of course — who develops on anything else, really?
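Pulling those pieces together, the fabfile shape might look roughly like this — the function names follow the talk, the bodies and paths are guesses, and it builds on the role definitions sketched earlier:

    import getpass
    import sys

    from fabric.api import (env, execute, local, parallel, roles, run,
                            sudo, task)

    def maybe_parallel(func):
        # The "really lame but effective" check: only use Fabric's
        # parallel mode on platforms where we know it works. The real
        # list reportedly had a few known-good systems; Linux stands
        # in for them here.
        if sys.platform.startswith('linux'):
            return parallel(func)
        return func

    def pre_cache_sudo_password():
        # Ask up front and cache in Fabric's env, so a long deploy
        # never stalls halfway through at a password prompt.
        if not env.password:
            env.password = getpass.getpass('sudo password: ')

    def push_master():
        # The push/pull dance: sync the local repo with the main one.
        local('git pull origin master && git push origin master')

    def deploy_one():
        # The common per-host work every deployment gets.
        run('cd /srv/app && git pull')

    @roles('celery')
    def deploy_celery():
        # Serial on purpose: this box uploads the static assets to S3,
        # which everything deployed after it depends on.
        deploy_one()
        sudo('service celeryd restart')

    @maybe_parallel
    @roles('web')
    def deploy_web():
        deploy_one()
        run('touch /srv/app/django.wsgi')   # mod_wsgi reloads on touch

    @task
    def deploy():
        pre_cache_sudo_password()
        push_master()
        execute(deploy_celery)   # batch box first
        execute(deploy_web)      # then every web server, in parallel if we can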
The parallel stuff really does cut the deploy time significantly, especially as we grow. When we got onto Amazon, we had already outgrown the previous provider: we had two web servers there and one medium-sized database server, and we were struggling at 50 or 60 concurrent users on the site. Once we were on Amazon, I did a bunch of profiling and found that our bottleneck was not the database server. There were a couple of MySQL settings we reconfigured after doing raw database benchmarks, but even with all those settings applied — with the raw database benchmarks screaming — the whole-stack benchmarks were still really slow. Then we realized: it's the web servers. So we started 25 web servers to see what would happen, and yeah, the site was suddenly really fast. We kept pulling web servers out of the mix until we found the number where the site wasn't painfully slow, then added a few more on top. It turned out, at the time, that we went from two web servers as our baseline to about sixteen: at our lowest load, we need 16 web servers to handle our application. Deploying to two web servers, at a minute or two per server, is maybe five minutes once you count the celery server and the Solr server; deploying to sixteen web servers at one or two minutes each, you're looking at a quarter to half an hour, depending on all sorts of things.

At that point we noticed that the Fabric version we were using was fairly old and had none of the newer features, like the parallel decorator, so we took it as a sign that we should upgrade Fabric — the new version also had a much nicer API for defining tasks and host groups and roles and commands and all sorts of things. I spent a couple of days rewriting the fab file from the ancient version we'd been using to, basically, something I had checked out of their repository the night before — I figured if you're going to go to the newest version, why not go all the way, because then when they release it, everything's already working. Everything worked great on my Linux laptop, naturally. One of the guys on the latest version of OS X tried it, and it worked great for him too: it took the half-hour deploy to 16 web servers down to a few minutes. Then the VP of engineering tried it on his laptop, and it failed with a pretty obscure error about some method not being found on some type we weren't even using — unicode-something not found on type whatever — which made no sense at all. After about a week of him not being able to deploy anything, we tried it without parallel, and it worked fine in serial. That's when I wrote the magic maybe_parallel function.

So: the fab file goes through and builds lists of servers for each role, the deploy functions are decorated with the role they run as, and it caches the results, because we end up calling these functions a lot — you'll have two or three functions called for all the web servers, and if we didn't cache, we'd be contacting EC2 every time one of those functions ran in the middle of a deployment. As you saw, that can take a little time: if you're paying 30 seconds at the top of each function call in a deploy, and you call these things 20 or 30 times, it adds up. So now we cache the results.
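The caching is the usual memoization trick — a sketch:

    _role_cache = {}

    def cached(func):
        # Hitting the EC2 API can cost tens of seconds, and the
        # role-list builders get called 20 or 30 times per deploy,
        # so compute each list once and reuse it.
        def wrapper(*args):
            key = (func.__name__, args)
            if key not in _role_cache:
                _role_cache[key] = func(*args)
            return _role_cache[key]
        return wrapper

    @cached
    def web_server_hosts():
        return ['memrise@%s' % i.public_dns_name for i in get_web_servers()]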
Now, the problem is that we want all this to be very easy, because not all the people at the company who might notice we need more capacity are very technical. One of them is our community manager — he's the one in Beijing, he's up at hours when no one else is, and he notices problems before anyone else does — but he does know how to run the basic shell commands. So we have a fab command that starts more web servers from our pool of spare web servers in EC2 — it starts four, so every time you run it we get four more web servers. The catch is that this command runs inside fab, after those lists of servers have already been built — so if you immediately ran a deploy as part of the same fab process, it would only deploy to the web servers fab knew about when it started, not the new ones you just created. I did a lot of weird poking around in fab's internals and tried a bunch of things, and I eventually got it working — and then I looked at it and thought: this is really, really hard. So we did something much simpler instead.

In fact, I'll show you what we do. Here's start_spare_webservers: it starts a web server, waits for it to actually be running and have an IP, and then updates the DNS entries — every time you stop a server in EC2 and start it again, it gets a new EC2 IP address and EC2 hostname, so the old hostname we had set up for it won't be right, and here we just force it to be whatever the new one is. And then right here, it says: you've updated the web servers, so just run a deploy again. That way we don't have to reach down deep into the inner workings of fab's role lists and hope they don't change the API in the next release and break our deploys for a few days while we track it down. So the way this looks is: fab start_spare_webservers, then fab deploy. Not quite a single command, but it's two commands, and one of them everybody already knows, because everybody is always doing deploys.

And the deploy calls this function, load_balance_webservers — another cool function. It goes through all the instances we know about — which, in this fresh fab run, do include the new ones — and checks whether they're running and whether they're web servers. Then it gets the list of servers the load balancer knows about, figures out which ones to add and which to remove, adds the ones that need adding, and removes the ones that have disappeared. It runs as part of every deploy — right here, deploy calls load_balance_webservers — so every time you deploy, the load balancer gets updated. It's almost like magic. So this is what happens: the site gets deployed to all the web servers, the new servers are added, the DNS is updated, and the inactive servers are removed — and when servers are inactivated, via the stop_spare_webservers command we use to remove web servers, they're taken out of the load balancer too.
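load_balance_webservers, roughly, with boto's ELB support — illustrative, including the load balancer name, and reusing get_web_servers from earlier:

    import boto.ec2.elb

    def load_balance_webservers():
        # Reconcile the load balancer with reality: the instance-type
        # tags decide membership, and only the differences get applied,
        # so the site never drops out from under the balancer.
        elb = boto.ec2.elb.connect_to_region('us-east-1')
        lb = elb.get_all_load_balancers(load_balancer_names=['web-lb'])[0]
        wanted = set(i.id for i in get_web_servers())
        current = set(i.id for i in lb.instances)
        if wanted - current:
            lb.register_instances(list(wanted - current))
        if current - wanted:
            lb.deregister_instances(list(current - wanted))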
And that's basically the last slide — are there any questions?

Q: When you do a deployment after starting more servers, does it deploy to all 20, say? A: Yes — and it may be that when you run start_spare_webservers you have an older version of the development tree locally, so the deploy pulls the latest version of the tree from GitHub, merges it with your changes, and deploys all of that to all the web servers.

Q: Doesn't that have side effects? You could have weird stuff that nobody has reviewed deployed everywhere. A: You could have that anyway — if you push weird stuff to the master repository and someone deploys, you're going to have weird stuff running on your servers. Q: How do you defend against that? To me that seems like a big possibility in your scenario. A: That happens in any scenario unless you put QA in front of it, and in some environments having a QA process is appropriate, but in ours we're much more interested in getting features out to users quickly — which is the argument for tests. In fact, we have a continuous integration server that's always running: every time someone commits to the master repository, Jenkins pulls the latest version, runs all the unit tests, and lets us know if there are any problems. Most people run a subset of the unit tests before they deploy, but our unit tests aren't particularly fast, which is something we're going to address at some point, I'm sure.

Q: Do you sometimes run new features for just some users? A: Yes — in fact, one of the other things we've built is an A/B test framework. If we have a fork in functionality, we have both versions deployed, and each user, as they come to the site for the first time after the fork has been deployed, gets randomly assigned to one group or the other. Then each day we get reports back: users in this group had these retention figures, learned this many items, created this many mnemonics; users in the other group had those metrics. So we can compare the impact of each change side by side.
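The fork itself is just a persisted random assignment plus a conditional — something like this, with hypothetical field names:

    import random

    def ab_group(user, experiment):
        # Assign on the first visit after the fork ships, then stick
        # with it, so the daily reports compare stable cohorts.
        group = user.ab_groups.get(experiment)
        if group is None:
            group = random.choice(['A', 'B'])
            user.ab_groups[experiment] = group
        return group

    # At the fork in functionality:
    # if ab_group(user, 'new_learning_flow') == 'A':
    #     ...old behaviour...
    # else:
    #     ...new behaviour...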

Q: With the A/B tests, are you maintaining two different branches of code? A: No — both features are implemented in the same master branch; it's basically a conditional in the code somewhere: if the user is in group A for this feature, go here; if the user is in group B for this feature, go there.

Q: One other question — I might have misheard — when you deploy, it gets the latest version of the Django app or whatever, but does it also do a local merge and push back to master before deploying it out? A: It does the merge locally. We have the master repository set up so that you can't do anything other than a fast-forward push, so you have to merge locally; if there are merge conflicts, the process stops and says: merge conflicts, fix it. Once you do, you just run the deploy again and hopefully everything works — unless someone else made changes in the meantime and it breaks again, which happens sometimes, like when all the Americans show up in London for a couple of week-long sprints and we're all trying things out during the day. Sometimes we do step on each other.
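That fast-forward-only flow is enforced on the server and handled locally by the push_master step — roughly like this sketch (receive.denyNonFastforwards is the standard git knob; the rest is a guess at the "dance"):

    from fabric.api import local

    # On the main repository, non-fast-forward pushes are refused:
    #   git config receive.denyNonFastforwards true

    def push_master():
        # Merge locally; if there are conflicts, git stops here -- and
        # so does the deploy -- until a human fixes the merge.
        local('git checkout master')
        local('git pull origin master')
        local('git push origin master')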
Q: Can you use this same kind of push to, say, a dev box — if you had something outside the normal continuous-deployment cycle, like a branch, could you push it to your own mini version of the site? A: We can — we all run the stack locally, but we also have a command, deploy_staging, which deploys a named branch to our staging server: it does the whole deploy there, except that instead of the tip of the master branch it uses whatever branch you specify. The only thing that has to happen first is that the branch has to be tracked on our GitHub repository as well, but that's part of how we make branches — I think we even have a script somewhere that ends up doing it. In fact, we made real use of this recently: we're just now deploying an entirely new item-versioning framework, which will make it easier for people to suggest changes, and for curators to go through the suggestions and say: take this one, not that one; take these three, not that one. Before, people just had to email us — your content is wrong, fix it — and our community manager was up to something like 3,000 emails a day after we did some press releases a few months ago and started growing really fast. So we're rolling that out now; we made extensive use of the staging server to find as many bugs as we could, and we got the community manager and his people using it ahead of time so they could see how it would all work before it went to the live site. That was all done on the staging branch.

Q: Can you speak briefly to why you're running both Apache and nginx? A: Right — we're actually using both, and that decision predates me. They started with just Apache, and everything was too slow, so they added nginx, and everything was still too slow, and now there are two different web servers to look after. When we set up a new web server, one section of the configuration sets up Apache with the site, and another sets up nginx with the site, proxying to Apache — so nginx runs as a proxy in front of Apache, and Apache does all the heavy work.

Without slagging the people I work with — they're really, really bright in a lot of ways — there's no reason to do it that way. We could just be using Apache, which is the thing actually running the WSGI side of Django and all the processes anyway; with an Apache-only front end, everything would be exactly as it is now, except with one less layer of administration overhead in between. Running both servers isn't impacting our performance in any way — it's not any faster than it would be, but it's also not any slower — so it's something we haven't addressed, because the overhead of coming up with benchmarks to demonstrate it is higher than the effort of writing the configuration once, throwing it in a git repository, and ignoring it forever after.

Q: I see you're running Apache and nginx — maybe you should throw lighttpd in there too, and the Zeus server just in case, and then some node on top, because Twisted is amazing... A: Ha, yes. On the caching side: I know we're using memcached for some things — I don't know if it's actually helping. We set cache headers on things that can be cached, and we use S3 for all the static content, so we set cache policies there; I haven't looked at that side of things much. But on the actual server side, we did benchmark a lot of the caching, and what we found is that the overhead of pickling and unpickling is far higher than just returning results from the MySQL database, which basically has its entire working set sitting in RAM. So one of the things I did early on, after discovering this, was remove a lot of the memcached code from the application — and then it was gone, one less part of the stack to worry about. We do still have memcached running as part of the standard web server configuration, and part of what the deploy does is generate the Django settings.py: it connects to AWS, finds out which web servers are running, and adds all of them to the memcached list, so memcached is basically sharded across all the web servers. Which means that when you add more web servers and run a deploy, and the wsgi file gets touched, all the web servers start using the new web servers too — but that reshuffles the sharding and invalidates the cache, so at that point it's almost like the caching is pointless.
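The deploy renders the discovered web servers straight into the Django settings — conceptually something like this fragment (illustrative: the real file is generated at deploy time rather than computed at import, and the port is an assumed default):

    # settings.py fragment: every running web server doubles as a
    # memcached node, so the cache is sharded across whatever the
    # fleet happens to be right now.
    MEMCACHED_PORT = 11211  # assumed default memcached port

    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': ['%s:%d' % (i.private_ip_address, MEMCACHED_PORT)
                         for i in get_web_servers()],
        }
    }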
Q: It's nice that you can deploy a lot more often, but what about rolling back — say someone committed a bunch of bad stuff? A: We don't roll back. We did it once, and it was so incredibly painful that now, if we see a problem, we just go in, fix it, and redeploy. The time it takes to roll back, and the pain involved, is at least as high as the time the site is broken while you push a quick fix for whatever you broke and redeploy.

Q: I heard you say you're using EBS and starting and stopping things — why would you do that, when you can just spin instances up and throw them away? A: Well, we're not really starting and stopping much of anything. The instances are EBS-backed, so when you stop one it just sits there on block storage taking up a little space, and that's fine — most of our web servers run almost entirely out of RAM anyway, so after they come up they barely touch the disk apart from log rotation and the like. Q: But does EBS cause you any problems — weird I/O or anything? A: We actually do quite a bit of I/O: our database server runs a RAID array, in software RAID, across EBS volumes, and at the latest numbers I think we're consistently doing 50 or 60 thousand K per second.

Q: Were you in us-east in April of last year? You could lose EBS volumes that you couldn't get at for 24 hours, any instance they were attached to would get stuck in a zombie state, and you couldn't start new instances because you'd hit your limit — it was a disaster. A: Well, yes — but if you need storage on Amazon, you need EBS; there's no way around it. Q: There's S3, right? A: S3 isn't block storage. Q: If you go wide with it, you have a way of getting stuff in and out in pieces, and going wide it's really fast — that's what Heroku does with the Postgres write-ahead log, I believe: they ship it up to S3 in close to real time. A: That seems like an awful lot of work when you can just use EBS volumes — and S3 has problems too. Basically, when you use a virtualization provider, you're putting a lot of faith in them, and they're going to have problems. If you did it yourself, you'd have problems; if you used any other virtualization provider, you'd have problems. You're going to have network partitions, outages, hardware problems, software problems — we've had people break routers just because, well, they didn't mean to. So we understand that EBS has problems sometimes. We keep all of our backups in S3, so we figure we could be up in a new zone in about eight hours if we had to — but so far we haven't had to, and considering the alternatives to Amazon, we think it's worth it.

Q: With this architecture you've built, would your setup change a lot if you went to a different provider? A: Yeah, it would. We're using the boto library for Python, which is AWS-only. The virtualization provider we used before didn't have an API at all, so we couldn't do any of this — and that was one of the big reasons we went to Amazon: they have a well-known API, they support it very well, and everything you do, even in the web console, is still calls to that API; their API is basically how everything works. If we went somewhere else, we'd have to rewrite a lot, and I don't think we're planning on that any time soon. We've put a lot of faith in Amazon, for better or for worse, and that's how it's all working right now.

Q: Do developers get their own environments? A: We have the staging server, but otherwise people just work locally and deploy. We keep a trimmed database so they're not pulling down the full dump — I think it's up to 15 gigabytes compressed — every time there's something new. Our compressed backup is growing by three or four hundred megabytes a day; we're growing fast, which is kind of exciting and kind of scary. Every time we do a big migration, we have to re-download the dump and re-trim it so people can use it, and that takes the better part of two or three days — but our schema is now in a state where we rarely have to do massive in-place migrations; we're mostly just adding columns to existing tables.

Anything else?