In this article we will look at a workflow for building and deploying a micro-service using Docker containers. The model discussed here focuses on consistency, security, and productivity, namely:
- Build once, run anywhere
- Productivity for local development
- Consistency between local and controlled environments
- Keep environment sensitive data out of SCM
1. Build once, run anywhere
Q: How can we build a Docker container for our micro-service once, but run it with a different configuration in each environment?
Solution: Do not copy environment-specific information into the deployment bundle. We often need to customize our runtime, e.g. a Wildfly container with connection information for a database or external services. This kind of configuration is specific to a particular environment, so we build our configuration files with environment variables whose values are known only within that environment at runtime.
Docker allows passing environment variables from the host machine into the container. It also offers a way to map a host volume to a container volume. We can leverage these features to override environment-specific property files.
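As a sketch, passing environment variables and mapping a host volume at run time looks like the following (the image name, variable names, and paths here are illustrative, not from the sample project):

```
# Pass environment-specific values as variables, and mount a host
# directory of property files over a config directory in the container.
docker run -d \
  -e DB_URL=jdbc:postgresql://db.qa.internal:5432/orders \
  -e DB_PASSWORD=changeit \
  -v /opt/config/qa:/opt/jboss/wildfly/standalone/configuration/overrides \
  my-registry/my-service:1.0
```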
For example, instead of hard-coding passwords or URLs in Wildfly's standalone.xml, we can configure it to read those values from environment variables.
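Wildfly resolves `${env.NAME}` expressions against environment variables, so a datasource in standalone.xml can be templated roughly like this (the datasource and variable names are illustrative):

```xml
<datasource jndi-name="java:jboss/datasources/AppDS" pool-name="AppDS">
    <!-- Values are resolved from environment variables at runtime -->
    <connection-url>${env.DB_URL}</connection-url>
    <security>
        <user-name>${env.DB_USER}</user-name>
        <password>${env.DB_PASSWORD}</password>
    </security>
</datasource>
```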
With template files like this, there is no need to maintain one file per environment. The template should be kept alongside the code for easier and more consistent maintenance.
Another level of consistency needed is in the jars (dependencies). The Docker container should be built with a pre-packaged set of the dependencies our service needs to run. Let's call it the base image.
For reference, see the Sample Project, which has a Gradle build file that produces a build output containing a WAR plus server configuration. Let's call this the deployment bundle: it can simply be copied over an existing Wildfly installation.
Next, when we build a Docker image that packages this deployment bundle on top of the base image of our server container, we get our micro-service container. This container can be run in any environment by overriding environment variables and properties via docker run command-line options.
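A minimal Dockerfile for this step might look like the following sketch (the base image name and paths are assumptions, not taken from the sample project):

```dockerfile
# Start from the pre-built base image that already contains Wildfly
# and the pre-packaged set of dependencies.
FROM my-registry/wildfly-base:1.0

# Copy the deployment bundle (WAR + server configuration overlay)
# over the existing Wildfly installation inside the image.
COPY build/deployment-bundle/ /opt/jboss/wildfly/
```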
2. Productivity for local development
Rebuilding the Docker container for every build would slow down the build process and is not practical for frequent code changes. Instead, we can run a Docker container from the base image and map the local machine's volume containing the deployment bundle to a path inside the container. A docker-entrypoint.sh script then copies the server overlay over the existing Wildfly installation in the container.
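A minimal docker-entrypoint.sh sketch could look like this; the default paths for the mounted bundle and the Wildfly installation are assumptions about the image layout:

```shell
#!/bin/sh
# docker-entrypoint.sh sketch: overlay the mounted deployment bundle
# onto the existing Wildfly installation, then start the server.

overlay_bundle() {
  # Copy the bundle contents (WAR + configuration) over the server dir.
  bundle_dir="$1"
  wildfly_home="$2"
  if [ -d "$bundle_dir" ]; then
    cp -R "$bundle_dir"/. "$wildfly_home"/
  fi
}

# BUNDLE_DIR is the container path the host volume is mapped to;
# WILDFLY_HOME is where Wildfly lives in the base image (assumed paths).
overlay_bundle "${BUNDLE_DIR:-/bundle}" "${WILDFLY_HOME:-/opt/jboss/wildfly}"

# Hand off to the server command passed as the container's CMD,
# e.g. standalone.sh -b 0.0.0.0
if [ $# -gt 0 ]; then
  exec "$@"
fi
```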
3. Consistency between local and controlled environments
With this approach, our local build is the same as the build generated by a CI server: both produce the same deployment bundle and both build the same Docker container.
Once our micro-service has been tested by dev and QA, the same container can be deployed to production without requiring a special production build.
4. Keep environment-sensitive data out of SCM
Q: With the above approach, we did not check environment properties or variables into SCM. So how does a particular Docker instance running in the production or QA environment get its environment-specific overrides automatically?
Answer: For AWS deployments, we maintain environment-specific configuration in an S3 bucket. The bucket for each environment can be access controlled. Elastic Beanstalk includes a file construct in its Dockerrun.aws.json file, where we can configure it to download environment-specific files from our secured S3 bucket.