Microservices are pretty big these days. Unfortunately, they add complexity to development, deployment, and maintenance.
For instance, how do you code against multiple dependencies locally? What if your architecture grows to include service discovery via Consul, a database, or long-lived TCP connections to a backend service? Spinning all of those dependencies up locally would be a pain, and running them all in the cloud just for development would be costly. Fair warning: it is absolutely possible for your architecture to be too large to follow this pattern.
It’s easy to put together a virtual network using Docker Compose which collects microservices into a single environment. Docker also makes it easy to do containerized development. I feel like this is a huge trend, and we’ll probably see IDEs or other tooling spring up to support this.
This isn’t a tutorial about Docker. If you’re not familiar with Docker, walk through some tutorials first. This is a quick example of connecting a development container to an existing docker network.
The example is adapted from jimschubert/sbt-scala, the repo for my SBT/Scala Docker image.
QOTD Service/Client Example
This example will create a composed environment consisting of two Quote of the Day (QOTD) services. These services will exist on the same network, exposing port 17 only within the network. The client is the hypothetical piece under development which queries the QOTD from one (or both) services using only the internal host and port.
The QOTD service code is very simple, requiring only nmap-ncat and fortune (in Alpine Linux):
#!/bin/sh
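# listen on TCP port 17 and reply to each connection with the output of /foo.sh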
exec ncat -l 17 --keep-open --send-only --exec "/foo.sh"
The script that generates the QOTD is also simple:
#!/bin/sh
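# prefix the quote with the container's hostname so the client can tell the two services apart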
echo $HOSTNAME: $(/usr/bin/fortune)
Check out the full Dockerfile in the repo. Remember, this post is more about the service/client communication within a Docker Compose network.
The key here is in how docker-compose.yml is defined:
version: '2'
services:
  fortune1:
    image: fortune
    hostname: fortune1
    mem_limit: 64MB
    expose:
      - "17"
    networks:
      - fortune
  fortune2:
    image: fortune
    hostname: fortune2
    mem_limit: 64MB
    expose:
      - "17"
    networks:
      - fortune
networks:
  fortune:
    driver: bridge
Each service is assigned to an internal network called fortune. This is how the development container will associate with these services. To run the services, build the Dockerfile as the fortune image referenced by each service, then execute
docker-compose up
Do that in one terminal, then open another terminal for the next part.
Client
The client is written in Scala. For simplicity, it is a command line tool which queries one or more hosts (on port 17) provided as an argument to the script. For kicks, consider how the following code (that is, querying against fortune1:17 and fortune2:17 from a single client) could be done without docker on a development machine.
Save this somewhere as src/Example.scala:
import java.net._
import java.io._
import scala.io._
object Example {
  // Query the QOTD service on port 17 and print the single line it returns
  def qotd(server: String): Unit = {
    val s = new Socket(InetAddress.getByName(server), 17)
    val in = new BufferedSource(s.getInputStream()).getLines()
    println(in.next())
    s.close()
  }

  // Each command line argument is treated as a host to query
  def main(args: Array[String]): Unit = args.foreach(qotd)
}
If there’s a suitable public image available, you don’t need a Dockerfile for this code. Here’s how you can do local development against the above client:
docker run -it --net=fortune_fortune -v $(pwd)/src:/src --workdir=/src jimschubert/sbt-scala:latest
The docker run -it … jimschubert/sbt-scala:latest part of the command above just starts an interactive shell in an Alpine Linux container that I maintain for SBT/Scala development.
--net=fortune_fortune joins this container to the network created by the docker-compose.yml. The network is created automatically when you bring up the environment with docker-compose up. Its name follows the format project_network, where project is pulled from the directory name, the COMPOSE_PROJECT_NAME environment variable, or the optional docker-compose
-p, --project-name NAME
switch. If you’re not running against the code from GitHub, you can run
docker network ls
to get the generated network name.
-v $(pwd)/src:/src mounts the src directory, relative to where you ran the command, to the /src directory within the container.
--workdir=/src is there for convenience, so you start out in the right folder inside the container.
Once you’ve run the full command and you’re in the container, you can execute the client directly:
scala Example.scala fortune1 fortune2 fortune1
Go ahead and add a println(server) to the top of the qotd method in Example.scala and run the command again. No need to restart the container!
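If you want a concrete picture of that change, here’s a sketch of the modified method; the println line is purely illustrative:
def qotd(server: String): Unit = {
  println(server) // illustrative: show which host is about to be queried
  val s = new Socket(InetAddress.getByName(server), 17)
  val in = new BufferedSource(s.getInputStream()).getLines()
  println(in.next())
  s.close()
}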
Conclusion
This post demonstrates how to do containerized development of a simple client: multiple instances of the QOTD service (port 17) are consumed by a client, all running on a single development machine.
The services can’t each expose port 17 on the host machine, because that would result in a port conflict. It also doesn’t make much sense to remap port 17 to some other port on the host, because port 17 is the well-known port for QOTD. Remapping can cause problems of its own, too: what if you choose a port that later gets grabbed by some other application?
Running the client in a container allows us to access the services as they would exist in the wild. Mapping the src directory from the host into the container allows us to iterate quickly on code without restarting the container. This is important, especially if you’re using a much larger base image than Alpine.
If you’re doing Scala development, a recommended next step would be to use the spray/revolver plugin. Once it’s set up, you just start an sbt shell and run ~re-start. Any local code change is detected automatically and the plugin restarts the JVM process running your application.
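As a rough sketch, enabling the plugin is typically a one-line addition to project/plugins.sbt (the version shown here is only an example, not necessarily the latest):
// project/plugins.sbt
// sbt-revolver provides the re-start / re-stop commands mentioned above
addSbtPlugin("io.spray" % "sbt-revolver" % "0.9.1")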