Deploying Clojure apps with Docker and Immutant 2

While Docker is a very cool technology that holds a lot of promise for the future of devops, it’s not always the best fit for JVM-based apps. The standard approach of baking every runtime dependency into the image and running each container as a self-contained system gets out of hand rather quickly from a resource utilization perspective. How many JVMs can your server fit into memory at once?

At Democracy Works we wanted to find a way to share JVMs, for this and other reasons[datomic-license], and we use Immutant as our production app server to do it. Our first attempt to combine Docker and Immutant 1 worked, but it was clunky: we built .ima (Immutant archive) files in the CMDs of the app containers and then passed --volumes-from for each (now-stopped) app container when we ran the Immutant container. Especially in dev, it was hard to get all the pieces to line up just so. We no longer had the luxury of docker build ... and docker run ... to test a full stack in dev, our build and deploy scripts were somewhat complex, and it was difficult to tell what was running inside the Immutant container in production (remember, the app containers were stopped by then because they had already done their job of generating the .ima file and copying it to the shared volume directory).

Enter Immutant 2. Unlike Immutant 1, Immutant 2 is a library that abstracts away the difference between running its own web server (which Immutant 1 couldn’t even do) and running inside a servlet container (deploying a regular old .war file into a WildFly server, instead of Immutant 1’s .ima files inside an embedded fork of JBoss). This opened up some new possibilities.

What I’m trying out now is:

  1. Build a more traditional Docker container for each app. In the Dockerfile: install the dependencies, build the .war file, and set a deploy-or-run script as the default CMD. See the example Dockerfile below.
  2. Run the app containers using good ol’ docker run .... The deploy-or-run script looks for a linked servlet container via the env vars that Docker creates when you link containers.
    1. If it doesn’t see a linked servlet container, it just execs lein run and Immutant 2 spins up its internal web server. This is really convenient for quickly spinning up an app in dev.
    2. If it does see a linked servlet container, it uses curl to deploy the .war file that’s already baked into the container (built during the docker build phase, so we know it represents the same code as the snapshot in the Docker image). Then it goes into an infinite sleep, but with an exit trap that undeploys the .war file when its container is stopped. So while the app is running in the servlet container, there’s still a separate entry for it in docker ps (just a sleeping shell script), and if you docker stop that container ID, it will undeploy itself from the servlet container. This is much easier to deal with in production (it readily integrates with things like CoreOS’ fleet, for example), and it’s still pretty simple to stand up in dev. See the example deploy-or-run script below for the details.
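To make the deploy-or-run logic above concrete, here’s a minimal sketch of what such a script could look like. The WILDFLY_PORT_9990_TCP_* names are the env vars Docker generates for a container linked as wildfly, but the credential variables, deployment names, and exact management-API payloads here are my assumptions for illustration, not the production script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a deploy-or-run script. The WILDFLY_USER/PASSWORD
# variables and the exact management-API payloads are assumptions; the real
# script handles more edge cases.

# `docker run --link wildfly:wildfly ...` injects env vars like
# WILDFLY_PORT_9990_TCP_ADDR; their presence tells us which mode to use.
linked_to_wildfly() {
  [ -n "${WILDFLY_PORT_9990_TCP_ADDR:-}" ]
}

run_standalone() {
  # No servlet container linked: let Immutant 2 spin up its own web server.
  exec lein run
}

deploy_to_wildfly() {
  local mgmt="http://${WILDFLY_PORT_9990_TCP_ADDR}:${WILDFLY_PORT_9990_TCP_PORT:-9990}/management"
  local auth="${WILDFLY_USER}:${WILDFLY_PASSWORD}"  # assumed passed in via -e
  local war=target/address-works.war
  local name=address-works.war

  # Upload the .war; jq pulls the content hash out of WildFly's JSON reply.
  local hash
  hash=$(curl -s --digest -u "$auth" -F "file=@${war}" "${mgmt}/add-content" \
           | jq -r '.result.BYTES_VALUE')

  # Register and enable the deployment via the management API.
  curl -s --digest -u "$auth" -H 'Content-Type: application/json' \
       -d "{\"operation\":\"add\",\"address\":[{\"deployment\":\"${name}\"}],\"content\":[{\"hash\":{\"BYTES_VALUE\":\"${hash}\"}}],\"enabled\":true}" \
       "${mgmt}"

  # Undeploy when this container is stopped (docker stop sends SIGTERM).
  local undeploy="{\"operation\":\"remove\",\"address\":[{\"deployment\":\"${name}\"}]}"
  trap "curl -s --digest -u '${auth}' -H 'Content-Type: application/json' -d '${undeploy}' '${mgmt}'" EXIT TERM

  # Sleep forever so the app keeps a live entry in docker ps.
  while true; do sleep 3600; done
}

# Entry point (left commented so the sketch can be sourced without running):
# if linked_to_wildfly; then deploy_to_wildfly; else run_standalone; fi
```

The key design point is the trap: because the deploy container’s only foreground job is a sleep loop, stopping it is what triggers the undeploy, which is what lets ordinary docker stop semantics manage an app living in a shared JVM.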

So far this seems like a better-of-both-worlds approach (I hesitate to say “best” here because Docker just doesn’t have a great story w/r/t shared JVMs today, but hopefully it will get better).

I’ll post more as we venture further down the rabbit hole.

Example Dockerfile for our address-works app:

FROM clojure:lein-2.5.0
MAINTAINER Democracy Works, Inc. <dev@democracy.works>

# jq makes dealing with WildFly's JSON responses *much* simpler
RUN apt-get update && apt-get install -y jq curl

# cache deps installation unless project.clj changes
ADD project.clj /address-works/
WORKDIR /address-works
RUN lein deps

ADD ./ /address-works/

# make sure we're not building a broken image
RUN lein test

# generate the .war file so we can deploy it
RUN lein immutant war

# expose the local HTTP port in case we do `lein run`
EXPOSE 8080

CMD ["script/deploy-or-run"]

UPDATE: I’ve posted the deploy-or-run shell script to GitHub.

Example runs:

# just spin it up quick n' dirty
docker build -t democracyworks/address-works:immutant2 .
docker run -d -P democracyworks/address-works:immutant2
# wait for Immutant's internal web server to spin up
curl http://localhost:[port-mapped-to-8080-in-address-works-container]/
# run it in WildFly
docker run -d -P --name wildfly democracyworks/wildfly # just jboss/wildfly w/ the management interface set to 0.0.0.0 and an admin user created
docker build -t democracyworks/address-works:immutant2 .
docker run -d --link wildfly:wildfly democracyworks/address-works:immutant2
# wait for it to deploy
curl http://localhost:[port-mapped-to-8080-in-wildfly-container]/address-works
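The [port-mapped-to-8080-...] placeholders above can be looked up with docker port, which prints the host side of a -P mapping. A tiny helper (hypothetical, just for convenience):

```shell
# Hypothetical helper: print the host port that `docker run -P` mapped to a
# given container port. `docker port CONTAINER 8080` prints something like
# "0.0.0.0:49153"; we keep only the port number.
mapped_port() {
  docker port "$1" "$2" | head -n1 | cut -d: -f2
}

# Usage, e.g. after `docker run -d -P ...` prints a container id:
#   curl "http://localhost:$(mapped_port <container-id> 8080)/"
```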

[datomic-license]: We use Datomic as our database because it has some very innovative features and unique properties. But we don’t love the license. We’re all for giving its creators and maintainers money to do what they do, but we don’t like it when concerns over licensing costs force our hand on architectural decisions. Since we’re only allowed to have so many JVMs connecting to Datomic, we had to solve these shared-JVM problems earlier than we might otherwise have had to (if at all). While it is certainly more efficient to run multiple Clojure apps under one JVM, it’s an awkward fit with the Docker Way™ and not where we would have started were it not for the licensing issue. But who knows, maybe it’s better that it forced us to figure this out now rather than later?
