As DevOps technicians, our job is to abstract away infrastructure concerns from developers. In a complex system there's obviously a limit to how much can be abstracted, but there are ways to mitigate that. This post will focus on how we develop microservices in a developer-friendly way.
In production, we use Kubernetes to manage our microservices, which is great for those of us familiar with it. Most application developers, however, have little experience with Kubernetes, so trying to replicate that system too closely (say, with Minikube) can often cause major headaches. We use Docker-Compose in development because it's much simpler and most devs (at least, most of ours) are already fairly comfortable with the tool. Let's dive into how we can set this up.
I'll assume the reader is already familiar with the basics of Docker-Compose. Setting up a handful of simple services, each exposing its own ports, is straightforward. However, this isn't the way most companies set up their clusters: the majority of the time, all traffic passes through a load balancer / reverse proxy and gets routed to the appropriate service (the job of an Ingress, in Kubernetes jargon). We chose to create an Nginx service in our Docker-Compose file to emulate that. With a simple `nginx.conf` specifying the same routing rules as in production, we can bring up the whole cluster with `docker-compose up`.
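As a sketch, such an `nginx.conf` might look like the following (the internal port 8080 and the exact paths are illustrative; 6006 is the port we expose locally):

```nginx
worker_processes 1;

events {}

http {
    server {
        listen 6006;

        # Route each path prefix to the matching Compose service,
        # mirroring the routing rules of the production Ingress.
        location /service-1 {
            proxy_pass http://service1:8080;
        }

        location /service-2 {
            proxy_pass http://service2:8080;
        }
    }
}
```

The service names in `proxy_pass` are the Compose service names, which Docker's internal DNS resolves to the right containers.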
This works fine, but the problem now is that developers very rarely care about every service, and bringing up the whole cluster can put strain on a MacBook. We would like a way to bring up only certain parts of our cluster, with the same routing rules we'd expect in a production-like environment.
This turns out not to be supported by vanilla Nginx: it'll refuse to start unless every upstream can be resolved. After some research, I found an answer: nginx-upstream-dynamic-servers, a module that allows Nginx to start even if some upstream servers can't be resolved. This is perfect for our use case! After following the installation instructions to build it into our reverse proxy, using it is as simple as adding the word `resolve` to our upstream `server` entries.
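Concretely, that means defining each service as an upstream with `resolve` on its `server` line. A minimal sketch (upstream names and the port are illustrative; 127.0.0.11 is Docker's embedded DNS server):

```nginx
http {
    # Docker's embedded DNS, so container names resolve at runtime
    resolver 127.0.0.11 valid=10s;

    upstream service1_backend {
        # "resolve" comes from nginx-upstream-dynamic-servers:
        # Nginx now starts even if service1 isn't running yet,
        # and re-resolves the name while running.
        server service1:8080 resolve;
    }

    server {
        listen 6006;

        location /service-1 {
            proxy_pass http://service1_backend;
        }
    }
}
```

Requests to a service that isn't up simply fail with a gateway error instead of preventing Nginx from starting at all.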
All that's left is to configure our services in `docker-compose.yml` to depend on the Nginx service (the reverse of the usual direction), so the reverse proxy gets started whenever we want to spin anything up.
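A minimal sketch of that dependency (service names and build paths are placeholders; the Nginx image would need the module compiled in):

```yaml
version: "3"

services:
  nginx:
    # assumes a custom image with nginx-upstream-dynamic-servers built in
    build: ./nginx
    ports:
      - "6006:6006"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

  service1:
    build: ./service1
    depends_on:
      - nginx   # starting service1 also brings up the proxy

  service2:
    build: ./service2
    depends_on:
      - nginx
```

With `depends_on` pointing at `nginx`, bringing up any one service automatically brings up the reverse proxy alongside it.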
Now that everything is set up, we have a convenient way to start individual microservices while keeping the same reverse proxy. For example, to start `service1` along with Nginx, all we need is `docker-compose up service1`, and it will be reachable at http://localhost:6006/service-1. If we later decide to bring up `service2`, we can simply run `docker-compose up service2`. Now both services are reachable, at /service-1 and /service-2.
I think this approach hits a sweet spot for a lot of developers who want to mimic the essential parts of the production environment, but don't want the overhead of replicating it perfectly.