A Step-by-Step Guide for Dockerizing and Managing a Java App on 28 Different Application Stacks

In this video we will walk you through a step-by-step guide for Dockerizing and managing a Java application on 28 different multi-tier application stacks. DCHQ not only automates application deployment and management, it also automates infrastructure provisioning and auto-scaling. In previous demos I would typically have started by creating the cluster of compute resources on which we deploy the Docker-based applications, but in this video I'm going to reverse that order and start by demonstrating how you can Dockerize a Java application and create these multi-tier application templates.

We created a public GitHub project with a sample Java application that can be deployed on a number of different application stacks, so I'm going to switch over to that project, which is called dchq-docker-java-example. The reason we created it is that many developers were still confused about where these environment variables come from, how to initialize the database as part of the application deployment, and what to do if they have their own scripts or deployment plans, so we're hoping to address a lot of those questions in this video. If I scroll down, there is a table of contents with the different sections.

I'm going to start with configuring the web.xml file. What we did in web.xml is use a bootstrap servlet to start up the Spring context, and in the contextConfigLocation we reference the webapp-config.xml file. That webapp-config.xml file uses a datasource bean with a number of properties so that the Java application can connect to the database. The most important point is that we are using environment variables that can be passed to the Linux host, or to the container itself, as part of the application deployment. So when you build this Java WAR file you are referencing environment variables whose values are only provided at request time; you are not hard-coding any of these values, and you make it possible to pass all of these properties to the application server as environment variables (a rough sketch of what this looks like is shown below).

If I scroll down, we are also using the Liquibase bean to initialize the connected database. Some users may want to initialize the database separately using a different script. We actually recommend initializing the database as part of the Java application deployment itself, but if you wish to do it separately you can refer to the section that shows how to use our plug-in framework to execute a plug-in in the database container itself, to create the schema and set things up for your application. In webapp-config.xml we use the Liquibase bean and reference an upgrade.sql file, whose content is just a few basic SQL statements for creating the names directory table; it is supported on a number of databases, including MySQL, PostgreSQL and Oracle.
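
To make that concrete, here is a rough sketch of what such a datasource bean can look like in webapp-config.xml. The DataSource class and the environment-variable names (database_driverClassName, database_url, database_username, database_password) are illustrative assumptions; the sample project may use different names and a different DataSource implementation.

```xml
<!-- Sketch of a datasource bean whose connection settings are resolved from
     environment variables at deployment time instead of being hard-coded into
     the WAR. The bean class and variable names are illustrative only. -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="#{systemEnvironment['database_driverClassName']}"/>
    <property name="url"             value="#{systemEnvironment['database_url']}"/>
    <property name="username"        value="#{systemEnvironment['database_username']}"/>
    <property name="password"        value="#{systemEnvironment['database_password']}"/>
</bean>
```
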
That is really all you need to do to set up your Java application for Docker. The next step is to create the multi-tier application templates, and in that section of the project we give you 28 different application template examples using Apache HTTP Server and Nginx for load balancing, Tomcat, Jetty, WebSphere and JBoss for the application servers, and MySQL, MariaDB, PostgreSQL and Oracle for the databases. I'm not going to walk through that section here; instead I'm going to switch back to the DCHQ console, click Manage and then Templates, and if I click the plus sign you can see that you can add your own Docker Compose application template for creating multi-tier applications, or a Machine Compose template to facilitate the provisioning of VMs and cloud servers on 13 different clouds and virtualization platforms.

Let's start with a typical three-tier application: Apache HTTP Server, Tomcat and MySQL. I'll click Edit, and what you'll see is typical Docker Compose syntax; if you wanted to copy and paste your own Docker Compose file it would work perfectly with DCHQ, except that we've added our own enhancements and parameters to facilitate complex application modeling and management. Here we're using the official images for httpd, Tomcat and MySQL, and you'll notice that the HTTP server is invoking a bash-script plug-in that injects the application server container IPs into the httpd.conf file for load balancing.
The beauty of this plug-in is that it's executed at request time, but it can also be executed post-provision, so when you scale out the Tomcat cluster it will automatically update the httpd.conf file with the new list of IPs it needs to load-balance to. On the Tomcat side we're invoking a very simple bash-script plug-in that just grabs the Java WAR file from the GitHub location and deploys it into the webapps directory of that Tomcat server; this exact same bash-script plug-in is invoked for Jetty and JBoss, except that the target directories are different.

You'll also notice that we have the host parameter. The host parameter allows you to distribute containers across different hosts for high availability and to meet affinity rules. So the HTTP server could be on host1; the clustered application servers could have a cluster size of 2 and be distributed across host1 and host2; and MySQL could be on host3. The values for the host parameter don't have to be host1 or host2; they can be an actual host name, an IP address, or a wildcard, so if you know that the MySQL database host name starts with mysql-something, you can use that wildcard here.

Lastly, you'll notice that we have environment-variable bindings, so we can automatically resolve the application server container IPs and inject them into the httpd.conf file. Here we're also using the environment variables I showed you in webapp-config.xml and providing their values at request time: the driver is the MySQL driver, the database URL injects the MySQL container IP resolved from the MySQL section below, and the database username and password reference the MySQL user and password whose values are also provided down there. The beauty of this is that you don't have to hard-code these values across the different images; you can reference values from other images as part of the application deployment.
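
To give a feel for the overall shape of such a template, here is a heavily abridged sketch. The service names, image tags, database name and the exact spelling of the DCHQ-specific pieces (the host and cluster_size parameters, the {{MySQL | container_ip}}-style bindings, and the plug-in references) are reconstructed from this walkthrough rather than copied from the actual template, so treat it purely as an illustration:

```yaml
# Abridged sketch of a three-tier application template, reconstructed from the
# walkthrough above (service names, image tags and database name are illustrative).
HTTP-LB:
  image: httpd:latest
  host: host1
  # ...plus a reference (by ID) to the bash-script plug-in that injects the
  # AppServer container IPs into httpd.conf for load balancing.

AppServer:
  image: tomcat:latest
  host: host1, host2
  cluster_size: 2
  environment:
    # The variables the WAR file reads (see webapp-config.xml); their values are
    # resolved from the MySQL service below at request time.
    - database_driverClassName=com.mysql.jdbc.Driver
    - database_url=jdbc:mysql://{{MySQL | container_ip}}:3306/{{MySQL | MYSQL_DATABASE}}
    - database_username={{MySQL | MYSQL_USER}}
    - database_password={{MySQL | MYSQL_PASSWORD}}
  # ...plus a reference (by ID) to the bash-script plug-in that fetches the WAR
  # from GitHub and drops it into the Tomcat webapps directory.

MySQL:
  image: mysql:latest
  host: host3
  environment:
    - MYSQL_DATABASE=names
    - MYSQL_USER=appuser
    - MYSQL_PASSWORD=apppass
    - MYSQL_ROOT_PASSWORD=rootpass
```
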
The next example I'll quickly show you is the WebSphere one. This is a simpler two-tier application for deploying the exact same Java WAR file, but it uses the official image for WebSphere and Oracle as the database; we're using an unofficial Oracle XE image as an example. You can see that the database driver is now the Oracle one, and for the database URL we're again referencing the Oracle container IP, the Oracle username and the password. The plug-in here is very similar to the one I showed you for Tomcat and Jetty, except that it also executes a server initialization script to update the server.env file.

Let me show you an example of a plug-in. I'll copy this plug-in ID, and from Manage I'll click Plugins, where you can see all the different plug-ins we've created for customizing containers at request time and post-provision. If I search for that ID I'll find the "deploy WebSphere WAR file" plug-in; this is how you invoke these plug-ins, through their IDs. It's an extremely simple plug-in that echoes the database driver class name and the other connection settings into the server.env file, so those environment-variable values get passed into server.env, then removes whatever WAR deployment already exists in the dropins folder and deploys the latest Java WAR file by literally just wget-ting it from GitHub and dropping it into the dropins directory. That's all this plug-in does: initialize the environment variables in server.env and deploy the Java WAR file.
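
For illustration, a plug-in along those lines boils down to a short bash script like the sketch below. The Liberty paths, variable names and the WAR URL are placeholders inferred from the description above, not the plug-in's actual source.

```bash
#!/bin/bash
# Sketch of a "deploy WebSphere WAR file" style plug-in: write the database settings
# into server.env, then replace the WAR in the dropins folder with the latest build.
# Paths, variable names and the WAR URL are placeholders, not DCHQ's actual script.

SERVER_DIR=/opt/ibm/wlp/usr/servers/defaultServer   # assumed Liberty server directory
WAR_URL="https://github.com/dchqinc/dchq-docker-java-example/raw/master/dbconnect.war"  # placeholder URL

# Pass the environment-variable values the application expects into server.env
{
  echo "database_driverClassName=${database_driverClassName}"
  echo "database_url=${database_url}"
  echo "database_username=${database_username}"
  echo "database_password=${database_password}"
} >> "${SERVER_DIR}/server.env"

# Remove whatever WAR deployment already exists, then fetch and deploy the latest one
rm -f "${SERVER_DIR}"/dropins/*.war
wget -O "${SERVER_DIR}/dropins/dbconnect.war" "${WAR_URL}"
```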

Now that we've created the application templates, we're going to provision a cluster of compute resources on which to deploy the Docker-based applications we created. From the Manage drop-down I'm first going to select Cloud Providers; we support a number of different virtualization platforms and cloud providers, and in this example I'm going to stick with Rackspace. If we click Edit on this Rackspace cloud provider, all we had to provide was the username and the API key; if you're registering vSphere, for example, all you have to provide is the vCenter credentials.

The next step after creating the cloud provider is creating a cluster. A cluster is just a logical mapping of compute resources that could be on the same cloud or across different clouds. If I click Add, I'll just call it "Rackspace Java" as an example. Under advanced configuration you can specify the lease for this cluster and the placement policy; we're using capacity-based placement, the default, which selects the host that has sufficient CPU, memory and disk space for provisioning. For the networking layer we're going to select Weave to facilitate cross-container communication. Optionally, you can apply a quota to this cluster, and you can even select an auto-scale policy that allows the cluster to scale out automatically when it is resource-constrained, up to a maximum number of VMs that you specify. We also have granular access controls, so you can dictate who gets to deploy applications into this cluster: all tenant users, or just you. In addition, you can specify which application templates are allowed to be deployed into this cluster; that way you can dedicate specific clusters to different environments, like production and staging, on which only approved templates can be deployed. I'm going to keep the defaults and click Save.

There are a couple of ways to provision virtual machines into that cluster. From the Manage drop-down, if we go to Hosts and click the plus sign, you'll see all the UI-based workflows for provisioning into vSphere, OpenStack, CloudStack, Rackspace and so on. If I select Rackspace as an example, you choose the cloud provider you registered, the region (like the IAD region), the flavors and the image; we're certified on CentOS, CoreOS, Red Hat Enterprise Linux, Oracle Linux, Ubuntu and many others. You can then specify the ports you would like to open (on Rackspace they're open by default), the cluster you would like to provision into, and the number of VMs. That's one way of doing it, the UI-based workflow, but in this demo I'm going to showcase the Machine Compose template. If we go to the Library, scroll down to find the Rackspace large cloud server and click Customize, we get a Machine Compose template for provisioning the exact same VM I was just showing you through the UI-based workflow. You can specify the region, the description (this could be "Rackspace large instance"), and the instance type; we're provisioning an 8 GB cloud server on an Ubuntu image, and for this cluster we're going to provision three VMs. Then I select the cloud provider, so I search for "Rackspace DCHQ", select the Rackspace Java cluster we created earlier, and click Create Machines. Now we wait until the machines are provisioned; if I scroll all the way down you can see that three provisioning requests have been initiated as part of this request. At this point our three cloud servers on Rackspace have been provisioned; if we look at the Rackspace Java cluster you can see there are one, two, three hosts, and we've started collecting metrics like CPU, memory and disk space utilization.

At this point we're going to deploy both the WebSphere-based application and the Tomcat-based application for the same Java WAR file. From the self-service catalog we select the two-tier Java WAR application with WebSphere and Oracle and click Customize. For the sake of this demo we're going to split the application server and the Oracle database onto two different hosts just by changing the host parameter, so I'll set host1 and host2 like this. If I scroll down I can select the cluster of my choice; you can see that we display the CPU, memory and disk space utilization for all the available clusters. I'll find the Rackspace Java cluster we just created, select it and click Run.

At this point our application is up and running. You can see that the application server, WebSphere, is running on one host and Oracle Express Edition is running on another host; this is possible because of the Weave integration we built. You'll also notice all the port bindings on the right-hand side: the Oracle ports are not actually exposed on the host itself, but they are accessible to the application server that connects to them, which is WebSphere. To verify that the application is indeed up and running, we'll copy this IP and try to access it on the port that's mapped to 9080, which is 32771, so I'll browse to that IP on port 32771 with the /dbconnect/ path. Sure enough, our application is up and running; I can go ahead and enter my name, and it works.
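
If you prefer the command line to a browser, the same check can be scripted; the host IP below is a placeholder, and the port is whatever DCHQ shows as mapped for your deployment.

```bash
# Quick smoke test of the deployed app from any machine that can reach the host.
# Substitute the host's public IP; 32771 is the port mapped to WebSphere's 9080 in this demo.
HOST_IP=1.2.3.4
curl -i "http://${HOST_IP}:32771/dbconnect/"
```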

Going back to the DCHQ console, we're going to repeat this exercise, but now with the three-tier application: HTTP server, Tomcat and MySQL. I'll click Customize on this template, and again we'll split these containers across different hosts: Tomcat will have a cluster size of 2 and be distributed across host1 and host2, and MySQL will be on host3. I'll select the same Rackspace Java cluster and click Run. At this point the application is up and running; you can see that all four containers are running across the different hosts, and on the right-hand side you can see the port bindings: port 8080 for the application servers and 3306 for MySQL are not exposed on the hosts, and the only exposed port is that of the HTTP server, whose port 80 is mapped to 32769. To make sure this app is up and running, we'll copy the IP and access it on port 32769; sure enough the app loads, I can type my name, and it works.

Back in the DCHQ console, the Actions menu gives you access to all the management features post-provision. You can start and stop the application; you can change the lease of the application so that, after 15 or 30 days for example, it is automatically destroyed, which is useful for dev/test environments; and you can move containers to other applications or add containers to this one. Monitoring gives you the CPU, memory and I/O of the running containers, and you can view this information historically by customizing the date range; if I scroll all the way down there are charts that overlay the CPU and memory utilization of all the running containers. We also have the backup feature: backup lets you select the containers you would like to back up, the registry you would like to back up to, the repository name, and the tag to use, so you can attach a unique timestamp to every backed-up image, and you can use a cron expression to schedule the backup every day, every night or every hour.

Then we have continuous delivery. This is a very powerful feature that lets you refresh the Java WAR file on the running application servers. You select the application server on which you would like to refresh the WAR file, select the Jenkins endpoint, and then select the actual job from Jenkins, for example this "web app JDK 7" job. You then select the bash-script plug-in, like the "web app JDK 7" plug-in, that deploys the Java WAR file every time a build is triggered or a job completes successfully in Jenkins, and for the file URL you can pass encrypted credentials to Jenkins so that you can grab the latest Java WAR file, deploy it into the webapps directory and restart the container as part of the process. This continuous-delivery workflow can be saved as a policy, and any time a WAR file build is triggered in Jenkins we can refresh the build on the running container.
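
As a rough illustration of what such a refresh step amounts to, here is a sketch; the Jenkins URL, job name, artifact path, credentials and Tomcat directory are all placeholder assumptions, not DCHQ's actual plug-in.

```bash
#!/bin/bash
# Sketch of a continuous-delivery refresh step: pull the latest successful WAR from
# Jenkins using credentials passed in as variables, and redeploy it under webapps.
# The Jenkins URL, job name, artifact path and directories below are placeholders.

JENKINS_JOB_URL="http://jenkins.example.com/job/webapp-jdk7"
WAR_URL="${JENKINS_JOB_URL}/lastSuccessfulBuild/artifact/target/ROOT.war"
WEBAPPS=/usr/local/tomcat/webapps

rm -rf "${WEBAPPS}/ROOT" "${WEBAPPS}/ROOT.war"
wget --user "${JENKINS_USER}" --password "${JENKINS_TOKEN}" -O "${WEBAPPS}/ROOT.war" "${WAR_URL}"
# DCHQ then restarts the container to complete the refresh.
```
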
We also have the in-browser terminal. If I want to connect to one of the application servers, I can click on the command prompt and type ls -lrt /usr/local/tomcat/webapps to view the Java WAR file that's deployed, and sure enough there's a ROOT.war deployed on this Tomcat. For this command prompt you can whitelist the commands that users are allowed to run inside the container; it's really meant for viewing logs, tailing log files and that sort of thing.

Lastly, there's scaling. If we want to scale out the application servers, we click Scale Out, change the cluster size from two to three, and confirm; this adds a new Tomcat application server with the same dependencies, and the scale-out can be requested either on demand or on a defined schedule. At this point the third application server is up and running. If we scroll all the way down there's an application timeline that tracks everything happening to the application, and you can see that the application was created successfully and then scaled out successfully.

To update the HTTP load balancer so that it's aware of the third application server we added, we go to Actions and click Plug-ins. (In the next couple of weeks we're releasing a plug-in lifecycle feature so that the scale-out and the plug-in execution happen in the same workflow and you don't have to do these two separate steps.) For now, to update the Apache HTTP load balancer, we search for the Apache HTTP load-balancer bash script; what it does is figure out the array of application server container IPs, inject them as separate balancer-member lines, and refresh the httpd.conf file with the new list of available IPs (a rough sketch of this appears at the end of this section). I'm going to execute this bash script, restart the container, name the request "update HTTP server", and click Run Now. We can track this in the timeline; it's still being processed, but eventually the plug-in executes, and now it has completed successfully and the HTTP load balancer is up and running with the new member. If I scroll up, you can also see that we have alerts and notifications for when containers go down, hosts go down, or CPU and memory utilization goes above predefined thresholds.
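
For reference, here is the kind of thing that load-balancer script does, sketched in bash. It assumes mod_proxy and mod_proxy_balancer are enabled in the httpd image and that the application server IPs arrive in a space-separated environment variable; DCHQ's actual plug-in will differ in its details (in particular, it replaces the existing member list rather than appending).

```bash
#!/bin/bash
# Sketch only: rebuild the balancer section of httpd.conf from a list of app-server
# container IPs and reload Apache. Assumes mod_proxy_balancer is enabled and that
# the IP list is provided in a space-separated variable; not DCHQ's actual plug-in.

APP_SERVER_IPS="${APP_SERVER_IPS:-10.32.0.2 10.32.0.3 10.32.0.4}"   # placeholder IPs
CONF=/usr/local/apache2/conf/httpd.conf

{
  echo '<Proxy "balancer://appcluster">'
  for ip in ${APP_SERVER_IPS}; do
    echo "    BalancerMember http://${ip}:8080"
  done
  echo '</Proxy>'
  echo 'ProxyPass        "/" "balancer://appcluster/"'
  echo 'ProxyPassReverse "/" "balancer://appcluster/"'
} >> "${CONF}"

# Reload the server so the new member list takes effect
httpd -k graceful
```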