RailsConf 2016 – Real World Docker for the Rubyist by Jason Clark

– Thank you everyone for holding 'til the bitter end. Last session before the last keynote on Friday. My name is Jason Clark. So I'm here to talk to you about Real World Docker for the Rubyist. And this talk's genesis really comes out of the fact that the company that I work for, New Relic, deploys a lot of our services using Docker. I hear a lot of hype about Docker, I hear a lot of people saying, "you should use it," and then wildly diverging opinions as to how to use this tool. Docker turns out to be a toolkit that gives you a lot of options that you can pick from. And so what I wanted to do was give you a presentation that tells you a way you can approach Docker. This is tried and true, tested stuff that we've been doing at New Relic for, actually, the last couple of years. We got into Docker pretty early, so we've experienced a lot of bleeding edges and we've experienced a lot of things that made our lives easier. So, this talk is going to take the shape of a story. And this story is going to be about two developers: Jane, who is a new New Relic employee and has a great idea for a product, a service that she wants to run. We encourage experimentation, so it's a lines of code service that'll do metrics for how many lines of code you have. Super useful; like, we want to let people experiment and see how that goes. And Jill, someone who's been at New Relic a little longer and has some experience and can help answer some questions. So, as we are a public company, this is our safe harbor slide, which says: I'm not promising anything about any features, please don't sue us, please don't assume anything based on me making up stories about services that we might develop. Okay, so we're all clear. This is a bit of fiction, but it will help us frame how we use Docker and give you a picture of ways that you might be able to apply it. So, you know, one of the first questions that Jane has as she comes in is, why does New Relic use Docker at all? What is the purpose, and what
are the sorts of features that drove us to this technology? And one of the big components of it is the packaging that you get out of Docker. So, Docker provides what are called images. Now an image is basically a big binary package of files. It's essentially a file system: a snapshot of a piece of code and the dependencies that it has, that you can then distribute around and run. At this point Jane's like, okay, I've heard about this: these images, you can build off of them. You can take, for instance, the official Ruby image that's maintained by Docker. You can use that image and then put your Ruby code into the image that you build off of it. And Jane pauses Jill at this point and is like, okay, so this is slightly confusing. I've heard about images and I've heard about containers. What's the relationship here? And so, the relationship is that an image is kind of the base of what you're going to run, sort of the deployable unit. Whereas a container, the term doesn't really resonate for me, but it is the running instance of something like that. Now, the way that you can think of it is to draw an analogy to Ruby. The image would be like a class in Ruby. So this defines what's there, defines what's possible and what's (unintelligible). And then the container's like an object instance that you've called new on. It's an individual running piece of that. So, we've got Docker images, and those we can deploy as running containers that run our code in our staging and production environments. So, but you know, there are lots of ways to package up your stuff. I mean, you could just shovel files out there, you could make a tar out of it; packaging alone is not enough to tell us why we would want to use Docker. And that brings us to the other major thing that Docker brings, and that's isolation. So, for most of us, we don't have our apps set up in such a way that, you know, one host will be completely maxed out by the app that's on it. We may want to be able to share those resources and run multiple
things across multiple machines to increase our resilience and use our resources well. And traditionally, you might've done it in some fashion like this: you've got your server, you've got different directories where you keep the different apps that are there, and you deploy and run those things on that given host. Well, the problems here are pretty obvious when you look and see these things sitting next to each other. They're all sharing the same neighborhood. They could interfere with each other's files. They could interfere with processes that are running. They're sharing the same memory on the machine, and there are lots of ways that these two applications that are running might interfere with each other. Docker gives us a way to contain that, to kind of keep those pieces separate. Now, they still use the same kernel. This is not like a VM, where there's some separate operating system. But Docker provides insulation so that each
of those running containers appears to itself as though it is the only thing that is in the universe. It only sees its file system, a subset of that. You can put constraints on how much CPU and memory it uses, and so it minimizes the possibility of those two applications interfering with one another, despite the fact that they're running on the same host. So, this is a pretty attractive thing for us: to be able to share hosts so that we can deploy a lot of things very easily, without having to worry about who else is in the neighborhood. So, clearly, you know, Jane's a new developer who's shown up; how do we get started? Well, Docker is a very Linux-based technology, and it has to be running on a Linux system with a Linux kernel. And, you know, a lot of us here don't run Linux systems directly; we run Macs or Windows. And fortunately, Docker Toolbox is available. So this comes from Docker; it's kind of the sanctioned way to set up a development environment, to be able to get the Docker tools installed on a non-Linux system. So, once we have that, we can get down to actually writing our own images, to construct an image for the app that we want to deploy. So Jane, you know, sits down with Jill, they're pairing, and Jill has her write this in a file called Dockerfile in the root of her application. And, you know, Jane recognizes a little of this; she had done some reading about Docker. That FROM says, what image should I start from as I'm building the image for my app? But that's all that Jill tells her to write. And she's like, well, shouldn't there be some other things? Like, this looks different from Dockerfiles that I've seen; you know, they have working directories and COPYs and RUNs and a bunch of shell commands and things that are setting things up. So, Jane's really confused about what's going on. This is an image that we're using from New Relic that we've got, but where's the rest of the Dockerfile?
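So Jane's entire Dockerfile at this point is a single FROM line, something like the sketch below; the base image name here is made up for illustration, since the talk doesn't give the real internal name:

```dockerfile
# The whole application Dockerfile: everything else is inherited
# from the shared base image that this FROM points at
FROM example.com/newrelic/base-builder:latest
```

How that single line can do so much work is exactly the question Jane is asking, and it gets answered later in the talk.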
And Jill says, okay, this is a fair question, but you know, running code is awesome. Let's get your thing deployed to staging, and then we'll dig into this later and look at how that very simplified Dockerfile actually provides us a lot of value and shared resources. So, having written this basic little Dockerfile, Jane goes to the terminal and writes this command line. It says docker build; the -t provides a tag for the image that we're going to construct, and the dot tells it to work in the current directory. And once we've done that, there will be a whole bunch of output that appears at the command line as Docker goes through, takes that base image, and then runs the various pieces that are baked into that image to build out a package of your app. Now, if you have errors in your Dockerfile, or you have permission problems, things that go wrong, this would be the point when building would tell you; you'll see output from those commands there. But once it's successful, if we ask Docker what images it knows about, it will give you a listing, and here we'll see the LOC service image that we built. It was given a default tag of latest because we didn't tell it to use a particular tag. And that image is a runnable copy of our application we can do something with. This is all well and good for Jane on her local machine, but clearly, if this thing's going to go into a staging environment, this image needs to get from her computer somewhere else. And to fill this gap, there are a variety of things you can use called Docker registries. Now, by default, Docker runs one called Docker Hub. This is what all of the Docker tools will default to if you don't specify as you push and pull images; it's where they will look. There are alternatives, though. So at New Relic we ran into a problem when they deprecated the version of Docker we were using more quickly than we had moved some of our systems off of it. And so we had to go looking for some alternatives as well. One
of them that we've had pretty good success with is called Quay. I know it's spelled kind of funny, but that's how the word gets pronounced. And that is very similar to Docker Hub: they provide you a nice web UI, you can push and pull images, and they have a paid service so you can have those be private. And so that's been one of the major alternatives that we've gone to as we've moved off of Docker Hub. Another alternative as well is a piece of software out there called dogestry. Now, dogestry is a little more bare-bones, but what it'll let you do is store images on S3, in your own S3 bucket. So it sort of takes that third-party provider out of the picture, which can be important if you have critical deployments. If our deployment depends on Docker Hub being up, and Docker Hub is down, we can't deploy our stuff. That might be a problem for you, depending on your organizational structure and scale. All right, so we have an image, we have this picture
of what Jane's service looks like that she wants to get running. So she wants to go get this started up out in our staging environments, and how does she do that? Well, at New Relic, we developed a tool called Centurion! Now, typically, if you just want to run a Docker image and create a container off of that that'll start your application up, you would say docker run and then the image. And that image has a default command baked into it, which is what will get invoked, and then this starts running. If you run it in this fashion, it will be blocking; you'll see the output that's coming out of the container as the commands run. So you can imagine that, you know, you could go out to a machine somewhere in the staging cluster and tell it to docker run these containers, and that would work. But, unfortunately, if your company gets to any size and scale, you probably want things running on multiple hosts, you probably have a lot of computers that are out there, and interacting with those individually is problematic. And so, that's where Centurion came in. Now, this is certainly not the only way to solve this problem, and I'll briefly refer to some other possibilities later on, but when New Relic started with Docker, these things didn't exist. And so Centurion is a Ruby gem that allows you to work against multiple Docker hosts and easily push and pull and deploy your images, do rolling restarts, and things like that. One of the other big powers that Centurion brings is that it is based off of configuration files. And these are things that you can then check into source control and version, and have a central point where you know what's deployed in your Docker environment, rather than individuals going out to boxes or starting containers that you don't know anything about. If you run everything through Centurion, you have a central record of what's actually going on. So, Centurion bases these configs off of Rake, so there's some amount of dynamic
programming you can do in Ruby. So you define a task for a given environment that you want to deploy to. In this case, we made a task for our staging environment. We tell it what Docker image we want it to pull onto those hosts, and that allows us to have it grab the latest. You can also tell it different tags, so if you had different versions of the service and you wanted to deploy a certain one, you can do that. And then, to handle that issue of having lots of hosts that we might want to start on, you can specify multiple hosts that Centurion will then go and restart or deploy these services to. So, with that, it's pretty easy to get Centurion started; it's just a gem. You install it and it installs an executable for you called centurion, unsurprisingly. There are a number of flags that it'll take, but the basics are: you tell it an environment, you tell it the project where it should find the configuration, and then you give it a command. There are a couple of different commands; we'll just give it a deploy and say, go out there, start these things. So, Jane's a little nervous. I mean, she's hardly been here at all, but you know, she asks, does this all look good, are we ready?
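A Centurion config along the lines Jill describes might look roughly like this sketch, following Centurion's Rake-flavored DSL; the image name and hosts here are made up, and the exact method names are taken from Centurion's public documentation rather than from the talk:

```ruby
# config/centurion/loc-service.rake -- hypothetical Centurion config
namespace :environment do
  desc 'Staging environment'
  task :staging do
    set_current_environment(:staging)
    # Which image to pull onto the hosts; defaults to the latest tag
    set :image, 'example.com/newrelic/loc-service'
    # Every host the service should be (re)started on
    host 'docker-staging-1.example.com:2375'
    host 'docker-staging-2.example.com:2375'
  end
end
```

A deploy would then be kicked off with something like `bundle exec centurion -p loc-service -e staging -a deploy`.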
Yeah, let's go. She kicks off a Centurion command, and what you'll see is a lot of output as it connects to the various hosts. It will go through them, it will pull the image down, so that all of the boxes that you need have the image that you're then going to start with. And then, one by one, it's going to stop any container that's running for that particular service on that box, and then start a container up for you, based on that config. After it's connected, there are also options that'll let it hit a service status check endpoint, so you can do rolling deploys, where you make sure that things are actually up and running before you move on to the next host and shut things down. All right, so having done all of these things, it's been shipped, things are in staging. Jane is able to test out her code, sees that things are working swimmingly, and goes home for the day feeling very accomplished. Goes to bed, comes back the next day, and unfortunately, things were not as great as she thought. Service is not there. Where did her app go? Well, it's time for the tables to turn; Jill's going to ask a few questions of Jane. So Jill asks, well, where were you logging to? Let's start trying to figure out what happened here. And Jane looks through the code, and she had kind of cribbed a line from somewhere that she wasn't really clear about. But in her production configuration for her Rails app, it looked like it was a standard practice around New Relic to have all of the logging go to standard out, rather than going to files where it would get written inside of the Docker container. Okay, so that being the case, this has actually put Jane in a really good position, because New Relic's infrastructure, where we run our Docker hosts,
actually takes all of the standard out that comes out of Docker; like we saw when you run a container, you see what's going to standard out from it. So, we're able to capture it, and we actually forward it to a couple of different places. We forward it into an Elasticsearch instance, which runs Kibana, which is a fairly common logging setup. I've heard it referred to as the ELK stack: Elasticsearch, Logstash, and Kibana. And then also, we actually take that opportunity to send things to our own internal event database, called Insights, and this lets us do analytics and querying across these logs. But, you know, you could set things up to send these logs that are coming out of your Docker containers anywhere you want. But I highly recommend that if you do use Docker in production in this way, you make sure that all of the logging that you can is going out of the containers and not getting written inside of them. Because it will give you better visibility, for one, by getting it out, and it'll also prevent the file system sizes from getting huge in the Docker containers themselves. All right, so they take a look at the logs and there's not really anything there. Unfortunately, you know, you don't always hit a home run on the first go. So, it's time to take a little closer look at the containers themselves. Well, that's actually something that you're able to do, and Docker provides the commands for it. So, here we see the -H; that points us to a different host. By default, Docker's going to be talking to Docker running on your local machine, so this lets us go point at our staging environment. And the command way off on the end there, saying ps, lists the running Docker containers that are on that host. And here we see a container ID; it has a nice (unintelligible) that will be fun for us to type, but that's an identifier for the individual running container that we've got going out there. And it looks like it's
still there and it's running. So what we can do from there is we can say exec, rather than run, and give it the container ID. The -it sets things up to be interactive, and so this will actually give us a Bash prompt on that Docker container. Now, this depends on Bash being installed in the Docker container that we're connecting to, and there's a variety of other things that could interfere with this, but we have things set up so that we can do any sort of debugging that we need on those containers as they're running in our production and staging environments. They look around, they see that the processes are gone. It's not exactly clear what's going on, but they eventually dig up some stuff that looks like there might have been some things happening with memory. And that tickles something in the back of Jill's brain. She remembers another project that had some similar problems, where things just seemed to have been disappearing: processes would just go away with no trace that they could see. And the problem there was memory. So, the lines of code service apparently is clocking in at a good 300MB. Not totally crazy for a Rails app, but a little big. And that was the key they needed to figure out this problem with things getting killed. So, like we talked about way in the beginning, one of the key things about how Docker provides you isolation is that you can set limits on the containers for how large they can get and what memory they can consume. This prevents the individual containers from interfering with other things that are on the same host. And it turns out that 256MB was about the limit that was being set by default if you didn't specify anything. So as soon as you got past that, then Docker's infrastructure would kick in and it would just kill processes to free up the memory. Well, this is clearly not a good situation, and so, fortunately, we allow for configuring that. So, in a Centurion config, you can say memory and tell it to give us two
gigabytes. And what this actually correlates to is a command-line flag that you can give to Docker to tell it how much memory you want. And basically, any of the flags that you can send to Docker when you're running things, to modify that environment and tell it differently how to run stuff, are available through Centurion configs. So you have a source-controlled place to make all of the changes that you might want to how your containers run. All right, this is great. We've got 2GB, things stay up, they keep running. You know, but we actually asked for a little more memory than we really needed, and Jane's like, well, we should probably (unintelligible) a little bit more performance out of this. Even though it's in staging, it would be nice to have a little more room. So, I want to increase the number of Unicorn workers that I've got. Jill's response is to try the ENV. So, Docker provides flags when you're running to let you set environment variables that will be passed along into the container. And this is actually a really fundamental part
of how you should structure your Docker systems: so that things get passed in from the outside. So, when we say -e UNICORN_WORKERS, once we're inside that container, it's just an environment variable, like you've probably seen in many other places. For our setup, we have a fairly standardized Unicorn config, so what we do is we look for the UNICORN_WORKERS environment variable, turn that into an integer, and tell it to run the number of workers that we want. And so, our Docker image can be used to scale up or down, to run larger or smaller numbers of workers, without us having to construct a new image that changes that configuration. As you might expect, Centurion supports this; in fact, this is one of the key features of how we use Centurion. We drive as much of the config out of the code and out of the file system into the environment as we possibly can. And so you can say env_vars and give it a hash with the names and values you want. Now, this is not the only thing that you might want to configure. Your database.yml file, in a typical Rails app, gets parsed through ERB before it's actually read. And so you can do things like this, where you parameterize, potentially, off of the environment. So when we run in our production and staging, we can be explicit about where we go connect to our databases. But one of the niceties is, since this is just Ruby code inside of the ERB braces there, we can also give defaults. So if you're running the Rails app locally, it's gonna work; it's going to find the things that it needs. Similarly, application-specific configs are something that we can drive from the environment as well. So, in your application.rb, you can set config values, and you can set these to arbitrary names, arbitrary things that you want to pass around, and then those will be available throughout your Rails app. So, here, we're looking for another service that we're gonna talk to. We set the URL, and we have a default to fall back to. We set timeouts. And what this does is, it
gives us one central place in our Rails app where we will see all of the things that we can configure through the environment, all of the knobs and switches that you might want to control. Accessing this from other places in your code is the simplest thing: Rails.configuration, dot, the accessor that you specified. So here we can get the service URL (unintelligible) and timeout that we were talking about, and use those throughout our system. Now, some of you may have heard of The Twelve-Factor App. This is a thing that Heroku has promoted that's got a lot of principles around how to run applications well in production. This whole environment-driven thing, while it applies very strongly to Docker, is not limited there, and this is one of the key tenets that they have in it. Driving things through the environment is also a really good idea for security reasons. If you have secrets, you have passwords, you have sensitive pieces of information: if you put those into your source code, or put them in files that are in your Docker images, then if somebody gets a hold of that Docker image, they will be able to see what that stuff is. So if Docker Hub gets compromised, or some other place does, your secrets, you don't want them baked into those images. By putting them in the environment, they're only there at run time, and someone would have to have access to the containers to be able to get at those bits of information that you don't want out. All right, so, this is all well and good. Jane's feeling awesome about the work that's going on, but she really wants to understand better. So that one-line Dockerfile that we showed at the front, right, she just wrote one line to say FROM this image; how does that actually work?
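The environment-driven configuration pattern just described can be pulled together into one runnable sketch; the variable names here are made up for illustration, not New Relic's actual config keys:

```ruby
# Sketch of environment-driven config with local defaults.
# WIDGET_SERVICE_URL and WIDGET_SERVICE_TIMEOUT are hypothetical names.
def service_config(env = ENV)
  {
    # Fall back to a local default when the variable isn't set,
    # so the app still boots outside of Docker
    url:     env.fetch('WIDGET_SERVICE_URL', 'http://localhost:8080'),
    # Environment variables are strings; convert explicitly
    timeout: Integer(env.fetch('WIDGET_SERVICE_TIMEOUT', '5'))
  }
end

# With nothing set, the defaults kick in
puts service_config({}).inspect
# In a container started with -e WIDGET_SERVICE_TIMEOUT=30,
# the environment wins
puts service_config({ 'WIDGET_SERVICE_TIMEOUT' => '30' })[:timeout]
```

In a Rails app you would assign these into `config` in application.rb and read them back through `Rails.configuration`, as the talk describes.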
Well, it turns out, at New Relic, we put a lot of effort internally into building shared Docker images on top of other pieces of the Docker infrastructure that we've gotten from the world at large, to make our lives simpler and bake in the things that are shared across our applications. So base-builder was the name of the image that we grabbed from to start. This encodes a lot of our standard ways of approaching things at New Relic. For one, for various historical reasons, we run mostly off of CentOS; that's what our ops folks are most comfortable with. And so we derive ourselves off of a CentOS image, whereas a lot of the base Ruby images are either Alpine or Ubuntu Linux. Well, we know that this is something that people are going to run Ruby off of, and so one of the first things that we do in this base image is install a Ruby version for you. Now, we end up using rbenv to do that. It's not strictly necessary, because there's not going to be version switching going on, but it just happens to be the tool that's most commonly used at New Relic for switching Rubies. You could get a Ruby installed onto your Docker image however you choose. Once we have that Ruby version installed, we can start putting in other things that we assume people are going to use.
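The start of a base image along these lines might look something like the following sketch; the package list and rbenv layout are assumptions for illustration, not New Relic's actual build:

```dockerfile
# Hypothetical sketch of a shared base image (not New Relic's real file)
FROM centos:7

# Build dependencies needed to compile a Ruby
RUN yum install -y git gcc make bzip2 openssl-devel readline-devel zlib-devel

# Install a fixed Ruby via rbenv and ruby-build; any installer would do,
# since no runtime version switching happens inside the container
RUN git clone https://github.com/rbenv/rbenv.git /usr/local/rbenv \
 && git clone https://github.com/rbenv/ruby-build.git /usr/local/rbenv/plugins/ruby-build
ENV RBENV_ROOT=/usr/local/rbenv \
    PATH=/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH
RUN rbenv install 2.3.1 && rbenv global 2.3.1
```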
So, for process management, we use an application called supervisord, so we install that. Most of the time you're running something that's a web service or a website of some type; we run that through nginx, so we put that into this base image. In fact, we can go even further: we can gem install Bundler, and then rehash so that the executable for bundling is available. And, you know, this is great; we're finding all of these things that are shared between these applications and taking that duplication out, making it simpler for people to build their images. So, why not just bundle install, get all the stuff, right? Well, here we hit a roadblock, and it's pretty obvious when we try to build it what's wrong. The base-builder, this is a base image; this isn't the application itself. So we don't actually know what your Gemfile is yet. Someone is going to build their app on top of this, and so we can't go and do the bundle install when we're making the base; we don't know what's going to get into that actual app. But fortunately, Docker provides us the tools that we need to do what we really want, which is to say: when somebody uses this image, I have commands that I want you to run. And Docker's answer for that is ONBUILD. Any Docker command that you put into your image, if you say ONBUILD before it, it will wait to run that command until somebody has used your image in their own Dockerfile. And so we can do things like: wait until somebody uses this in their app, and then go copy their Gemfile and bundle install. So we get their copy of dependencies, but we don't have to have them write the lines to know to go bundle and do the correct things in their particular Dockerfile. In fact, we pushed this approach quite a ways and provided not just standard things that everybody does, but options that people may want to choose. So Unicorn is used pretty broadly at New Relic; it tends to be the default web server. But there are people that are using Puma and
would like to try that out. And so what we've done is we've created scripts that allow those sorts of configurations to be a one-line thing that you can put in your application's Dockerfile. And all that these have to be is a script that modifies whatever configs you need to on disk to get the effect that you want. So, in our case, this is just a matter of changing out the supervisord config for which app server to start up, and then swapping in a Puma configuration instead of the Unicorn config in the app itself. But this is a one-line thing for somebody to do in their app and be able to try out. In fact, we've even gone so far as to provide helpers for installing other things that people might want, like Python. We have some teams that have background processing that runs in Ruby and then has some Python that it needs to invoke, and so we can provide simple wrappers, baked into these images, to smooth the path for app developers as they do their work. All right, so, it's a fun technique; it's fun to see which things you can pull out and make it so that people don't have to think about them. But let's get back to some code. So Jane keeps writing her app. She's working on the lines of code service, and she wanted to write a file somewhere. She just kind of picked the root directory to go write it, and she's getting an error out of it. So she goes and pings Jill. Jill comes over, they take a look at it. It's, you know, again, a pretty straightforward error message, but it's not totally clear why this is happening: permission denied when she tried to put this file there. And Jill, you know, being fairly experienced, knows just what the problem is. The problem's with nobody. Nobody? Who's nobody?
Well, nobody is an identity that we have on our Linux machines that has fewer privileges than root. It's actually the user that we run things as inside of our containers by default. So here, this is not super relevant in all the details, but this is how supervisord starts an app up, and we say user is nobody. There are things at the Docker level where you can control this as well. But this kind of makes Jane a little confused, because she's heard from many different people about how Docker runs as root, and isn't that fine, because the containers are isolated? And while it is okay to do that, and it's sensible why Docker has chosen that as the default, it doesn't mean that you can't crank things down further. So if you are writing your own applications, you can be more defensive than Docker is itself. And by running as nobody inside of our containers, we give ourselves extra protection, in case there is some exploit or some problem with Docker that would let someone elevate root privileges inside of the container to the outside host. So, running things in as secure a mode as you can, within the boundaries that are in your control,
will end up giving you a safer result in the end. All right, so, Jane gets that fixed up, starts writing things in a location where she's allowed. And then, you know, maybe a little late, she comes around saying, "Yeah, I want to write some tests. How does Docker fit into this?" Well, there are some ways that you can work with Docker to make sure that your tests are running in a realistic environment, like the one you're going to deploy to. The simplest, most straightforward thing that you can do is run alternate commands against the images that you build. So here we say docker run against an image of our lines of code service, and we just tell it to bundle exec our tests, great. And it runs off and runs the tests inside of that container. Now, this presumes that all of the configuration that's necessary is there; if it needs database connections, you'd have to figure out how to feed those things in. But at base, all it needs to do is run that Ruby code inside of the container, instead of running your full web application. But, unfortunately, this has a problem, and that's the fact that it relies on the image that you built having your current code. And I don't know about you, but I occasionally will edit my code while I'm working on it. And if you make a change to your tests or your production code, you have to rebuild that image to be able to get those tests to run against the current thing that you're doing. And I don't know about you, but this would make me very sad. Anything that gets in that loop of making it so I have to do something extra before I can run my tests is not a really great experience. Fortunately, Docker does have some options that will let you get out of that and do things a little bit differently, and that is with mounting volumes. So here we have a docker run command; it's running against our lines of code service image. And that -v is the important part. What that is saying is: take what's on my local host where I'm running, at
/source/my-app, and make it so that it appears inside my container at /test-app. And so this mounts that in without rebuilding the image. And what we actually have happening at New Relic with most of our tests is, they run against the Docker image, but they just mount the current code into that image rather than rebuilding it from scratch. You have to do a little directory munging to make sure you're in the right place to go run the code, but otherwise, this is a very good approach to keep you from rebuilding the images all the time. So, life moves on. Jane's got more and more things that she's wanting to do with the service. And as it often happens, maybe she's looking to use Sidekiq to do some background processing, and she needs a Redis instance, and figures, oh, I need to talk to somebody to provision or set that up. Well, it turns out that what we built with Docker allows her to kind of self-serve that and have stuff deployed through the same mechanisms. What we have is an image, already constructed, that we use internally, that has Redis installed and takes all of its configuration through environment variables. So all that Jane needs to do to get a running Redis instance into staging or production is to create the configuration and go deploy it the same way she's been doing with her app. This is a powerful approach. You can do this with anything where you've got some sort of appliance, some sort of code you would like people to be able to parameterize and run without tampering with it. If you build the images to run off of the environment, then people can just take that and run with it, kind of out of the box. So there's a lot of talk about Docker; there's a lot of things going on. You know, Centurion came out of a need that we had a couple of years ago at New Relic, but there's a lot of other things in the ecosystem that might be of interest, or something that you might want to pick up today. So, one
example of that is Docker Swarm Now this comes from Docker, it is software that allows you to easily control a cluster of hosts So for the sort of staging environment that we have there, Docker Swarm is a good way to sort of bootstrap yourself into running in that type of environment Something that we’re looking at to potentially either evolve Centurion into or use to replace it is a project called Mesos And Mesos, in conjunction with a thing called Marathon, allows you to have a more dynamic sort of scheduling for your containers So rather than saying, I want to run this on hosts A, B, and C, you would tell Mesos, I would like to run three instances of my image, please go find somewhere to put them And it would put them out there, and it has some really nice properties around resilience If it drops one of those instances because something crashes, Mesos can start it back up for you automatically You can scale things dynamically with it A similar technology that’s for this sort of container

orchestration is Kubernetes, from Google And there are a lot of other things that are out there that are happening in this space There’s a lot of people working to make this a better workflow All right, so we’ve come to the end of Jane and Jill’s story We’ve looked at how you can use Centurion to deploy and control multiple Docker hosts We’ve looked at how using the environment to drive your configuration allows things to be more dynamic and controlled We’ve looked at some tips and tricks around building shared images, so that you can spread best practices within your organization and not repeat stuff We’ve looked at some security and testing, and a little peek at the future of where things might be going I hope this has been of use to you and hopefully you’ll have good success if you choose to use Docker in your company Thank you (applause) So the question was, where the Dockerfile lives And yeah, typical practice for us is that the root of your Rails app is where the Dockerfile would live It doesn’t have to, you can put it in other locations, but that has been the simplest sort of convention that we’ve followed So the question is, how does this compare to Vagrant for similar sorts of workflows of testing and developing From what I’ve experienced, Docker startup is very fast So if you have a pre-baked image, like, the image building takes a while, but starting a container is really quick So, it would definitely be worth looking into, I think it provides similar things to Vagrant and it’s a little lighter weight That’s one of the selling points there The question was, what concrete usage do we have of this I think at last count, we had a couple of hundred services that were running on this internally It is not everything that we run There are a number of our bigger, older applications, especially the (unintelligible) that aren’t converted over to Docker But pretty much any of our new products that have been developed in the last year or two have been deploying into our
Docker cluster The question was, whether the deployment workflow is building a new image, like run your tests, build a new image and then deploy that image, and yeah, that’s correct We run things through a CI system, we happen to use Jenkins, but it’s fairly up to you how that flow happens I showed a lot of use of the command line directly to do those deploys, we don’t actually do that much in practice, we have a central CI server do it But all that it’s doing is calling Centurion from a little shell script the same way that you could from your local machine So the question is, what do we do about things like database migrations and asset compilation Asset compilation very often we will do at image build time I didn’t show it here, but it is a common thing for us to do in constructing the image We have some other techniques that we’re playing with for externalizing the images entirely from our Rails app It takes that out of the picture Database migrations, the database currently and probably for the foreseeable future does not actually live in Docker itself, and so we will tend to have another environment where we would potentially use that Docker image to go run the migrations, like use the image, run that command to go talk to the database and do those migrations But it’s not part of the individual deploys It’s normally scheduled separately And the question is, what about the fact that migrations might break currently running instances That’s something that we kind of have to manage ourselves at this point It’s certainly something you can build more infrastructure around, we tend to just have a very conservative cadence for when we do migrations in the apps that have those sorts of setups The red light is on, so I’m out of time to be on the mic, but I’ll be happy to talk to anyone afterwards who would like to Thank you for your time and attention (applause)
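
[Editor's sketch of the test-running workflow described in the talk: first overriding the container's command to run tests against the baked-in code, then mounting the live source with -v so edits are picked up without rebuilding. The image name `loc-service`, the `/test-app` mount path, and the rake task are illustrative assumptions, not taken from the talk.]

```shell
# Run the test suite against the code baked into an already-built image
# by overriding the container's default command:
docker run loc-service bundle exec rake test

# Mount the current source tree into the container instead, so local
# edits run without rebuilding the image; -v maps a host directory into
# the container, and -w sets the working directory there (the
# "directory munging" mentioned in the talk):
docker run -v "$(pwd)":/test-app -w /test-app loc-service bundle exec rake test
```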
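
[Editor's sketch of the environment-driven "appliance" pattern described for the internal Redis image: all configuration arrives through -e environment variables, so anyone can parameterize and deploy the image without touching its contents. The image name and variable names here are hypothetical.]

```shell
# Start a hypothetical internally built Redis appliance image; every
# setting is passed in through the environment rather than baked into
# the image, so the same image serves staging and production.
docker run -d \
  -e REDIS_PORT=6379 \
  -e REDIS_MAXMEMORY=256mb \
  -p 6379:6379 \
  internal/redis-appliance
```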