Running multiple web applications on 1 server with Docker

Many developers have asked me about an effective way to test their code in a multi-server environment. In most cases, I suggest Docker and Docker Compose so that everything can be simulated efficiently, with the reverse proxy/balancer role usually handled by HAProxy or Nginx. Today I will note another approach: running multiple web applications on 1 server with Docker swarm mode, using Traefik as the load-balancing solution.

If you want a starter guide to Docker, be sure to read the Docker Get Started series.

The layout of this setup

  • 1 MySQL Docker container for the shared database (plus 1 more for PHPMyAdmin)
  • 1 Nginx Docker container & 1 PHP-FPM Docker container for webapp1
  • 1 Nginx Docker container & 1 PHP-FPM Docker container for webapp2
  • An overlay network named db_shared is shared among the containers; it is used for connections from the application containers to the common database container.

Start the swarm mode

docker swarm init

Create a Shared/Overlay network

Overlay networks require swarm mode, so the swarm must be initialized before creating the network:

docker network create -d overlay db_shared

Build the Database stack

  1. Create the mysql-cluster/docker-compose.yml file as follows:
    version: '3.1'
    
    services:
      mysql:
    #    container_name: vs-db
        restart: always
        image: mysql/mysql-server:5.7
        deploy:
          replicas: 1
          resources:
            limits:
              cpus: "0.5"
              memory: 8192M
            reservations:
              cpus: "0.2"
              memory: 2048M
          restart_policy:
            condition: on-failure
        ports:
          - 3306:3306
        environment:
          MYSQL_ROOT_PASSWORD: 'YOUR_SUPER_SECURE_PW'
        volumes:
          - ./dbdata:/var/lib/mysql
          - ./override-my.cnf:/etc/my.cnf
        command: --innodb-use-native-aio=0 --skip-name-resolve
        networks:
          - db_shared
    
      phpmyadmin:
        image: 'phpmyadmin/phpmyadmin'
        restart: always
        deploy:
          replicas: 1
          resources:
            limits:
              cpus: "0.1"
              memory: 512M
          restart_policy:
            condition: on-failure
        ports:
          - '8888:80'
        environment:
          PMA_HOST: dbserver_mysql 
        networks:
          - db_shared
    
    networks:
      db_shared: 
        external:
          name: db_shared

    In this configuration:

    • we run a cluster of 1 MySQL container, limited to at most 50% CPU (across cores) and 8 GB of memory.
    • there is also 1 PHPMyAdmin container exposed on port 8888.
    • note that I use the “mysql/mysql-server:5.7” image – Oracle’s build – not the “mysql:5.7” image provided by Docker. In my experience, the Oracle build performs noticeably better without any tweaking.
    • if you need to change the server configuration, you should create a customized file and mount it into the container. In this case, I simply copy the “my.cnf” file out of the container, add 2 new parameters (innodb_buffer_pool_size – 70-75% of RAM – and innodb_log_file_size – 25% of the pool size), and mount it back into the container via the volumes option (see the sketch after this list). This way we will not lose our config during future upgrades.
  2. Start the database service (we will call it dbserver):
    docker stack deploy -c docker-compose.yml dbserver

    Some notes regarding this:

    • We deployed the above compose file in “swarm” mode, instead of the “local” mode we would get from docker-compose up -d. Also, note that the container_name and restart options are ignored in swarm mode.
    • To access a service inside this stack (e.g. the MySQL service), we refer to it as StackName_ServiceName. That is why the PMA_HOST environment variable is set to dbserver_mysql in the PhpMyAdmin container.
    • If we need to stop the stack later, just remove it with the command
      docker stack rm dbserver
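
As referenced in the notes above, here is a minimal sketch of how the override-my.cnf file could be prepared; the 6G and 1536M values are only illustrative for a container limited to 8 GB of memory, so tune them to your host:

    # extract the stock my.cnf shipped inside the Oracle image
    docker run --rm --entrypoint cat mysql/mysql-server:5.7 /etc/my.cnf > override-my.cnf
    # append the two tuning parameters (example values, not recommendations)
    printf '\n[mysqld]\ninnodb_buffer_pool_size = 6G\ninnodb_log_file_size = 1536M\n' >> override-my.cnf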

Build the App1 stack

  1. One point worth noting: docker swarm mode needs pre-built images, so any customized image must be built (and, on a multi-node swarm, pushed to Docker Hub) before it can be used with “docker stack deploy”. This is quite different from plain docker-compose. My workflow, sketched after this list, is:
    • I put the customized build Dockerfiles in the nginx and php folders.
    • I declare the build option in the docker-compose.yml file and run “docker-compose build” just to build the images in this file.
      • If we run on multiple machines, we should push the images to Docker Hub.
    • I then note the built images’ names and put them back into the docker-compose.yml file.
    • Finally, I can run the “docker stack deploy” feature of docker, since the images are already in place 🙂
  2. Create the app1-cluster/docker-compose.yml file as follows:
    version: '3.1'
    
    services:
      nginx:
        build:
          context: ./nginx/
          dockerfile: Dockerfile
    #    restart: always
        image: app1-docker_nginx:latest
        deploy:
          replicas: 2
          resources:
            limits:
              cpus: "0.4"
              memory: 4096M
            reservations:
              cpus: '0.1'
              memory: 1024M
          restart_policy:
            condition: on-failure
        ports:
          - 8080:80
        volumes:
          - ${HOST_WEB_ROOT}:/var/www/html
        depends_on:
          - php
        networks:
          - app1_net
      php:
        build:
          context: ./php/
          dockerfile: Dockerfile
    #    restart: always
        image: app1-docker_php:latest
        deploy:
          replicas: 3
          resources:
            limits:
              cpus: "0.6"
              memory: 8192M
            reservations:
              cpus: '0.2'
              memory: 2048M
        volumes:
          - ${HOST_WEB_ROOT}:/var/www/html
        networks:
          - db_shared
          - app1_net
    
    networks:
      app1_net:
      db_shared: 
        external:
          name: db_shared

    Some notes regarding this file:

    • The build option is ignored by the stack deploy command, and the deploy option is ignored by docker-compose. I just keep both so the same file serves both usages.
    • The app1-docker_nginx and app1-docker_php images must be pushed to Docker Hub if we want to access them from other machines.
  3. Start the App1 cluster with the .env file:
    export $(cat .env) && docker stack deploy -c docker-compose.yml app1_cluster

    Some notes regarding this:

    • I use the .env file to store some environment variables, so I need to export them before running “docker stack deploy”. The .env file is read automatically by docker-compose but not by the stack deployment; a sample is shown after this list.
  4. Done. Access your public service on port 8080 of the host machine.
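
As mentioned in step 1, the image build-and-push workflow might look like the following sketch; YOUR_DOCKERHUB_USER is a placeholder for your own registry account, and pushing is only required when other swarm nodes must pull the images:

    cd app1-cluster
    # build the customized images declared in docker-compose.yml
    export $(cat .env) && docker-compose build
    # tag and push the images to Docker Hub for a multi-node swarm
    docker tag app1-docker_nginx:latest YOUR_DOCKERHUB_USER/app1-docker_nginx:latest
    docker push YOUR_DOCKERHUB_USER/app1-docker_nginx:latest
    docker tag app1-docker_php:latest YOUR_DOCKERHUB_USER/app1-docker_php:latest
    docker push YOUR_DOCKERHUB_USER/app1-docker_php:latest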
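
And the .env file referenced in step 3 is just a plain KEY=VALUE file; for this stack it only needs to define HOST_WEB_ROOT (the path below is purely an example):

    # app1-cluster/.env
    HOST_WEB_ROOT=/srv/webapp1/html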

Build other AppN stack

  • Similar to building the App1 stack; just publish each app on different host ports so that we can put a reverse proxy in front of them later (a quick sketch follows this list).
  • In fact, we could also bind different local IPs for the Docker services; however, I do not really care 🙂
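
As noted above, a quick (and admittedly naive) sketch for spinning up a second app is to copy the App1 stack and change the published port before deploying; the 8081 port and the app2 names are only examples, and a real second application would of course use its own images and code:

    cp -r app1-cluster app2-cluster
    # publish app2's nginx on a different host port so the two stacks do not collide
    sed -i 's/8080:80/8081:80/' app2-cluster/docker-compose.yml
    cd app2-cluster && export $(cat .env) && docker stack deploy -c docker-compose.yml app2_cluster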

Public Access via reverse proxy (Traefik)

We all know that we can use HAProxy or Nginx as the reverse proxy for all the Docker services above. In this entry, I will try another emerging option: Traefik.

  1. First, we need to create a new overlay network for Traefik:
    docker network create -d overlay balancer
  2. Next, I will create a traefik configuration file (traefik.toml):
    # defaultEntryPoints must be at the top 
    # because it should not be in any table below
    defaultEntryPoints = ["http", "https"]
    
    # Entrypoints, http and https
    [entryPoints]
    
    # http should be redirected to https
    [entryPoints.http]
    address = ":80"
    #[entryPoints.http.redirect]
    #entryPoint = "https"
    
    # https is the default
    [entryPoints.https]
    address = ":443"
    
    #[entryPoints.https.tls]
    
    # Enable ACME (Let's Encrypt): automatic SSL
    #[acme]
    #email = "[email protected]"
    #storage = "./acme.json"
    #caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
    #entryPoint = "https"
    #  [acme.dnsChallenge]
    #  provider = "route53"
    #  delayBeforeCheck = 0
    
    #[[acme.domains]]
    #  main = "*.MY_DOMAIN.com"
    #  sans = ["MY_DOMAIN.com"]
    
    [docker]
    endpoint = "unix:///var/run/docker.sock"
    domain = "*.MY_DOMAIN.com"
    watch = true
    exposedbydefault = false
    
  3. Then, I will create the docker-compose.yml file for the Traefik service as follows:
    version: '3'
    
    services:
      reverse-proxy:
        image: traefik
        command: --web --docker --docker.swarmmode 
        ports:
          - "80:80"     # The HTTP port
          - "8080:8080" # The Web UI (enabled by --api)
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
        networks:
          - balancer
        deploy:
          placement:
            constraints: [node.role==manager]
    
    networks:
      balancer:
        external:
          name: balancer

    Some notes regarding this:

    • I am running Docker in swarm mode, so this must be indicated in the command option (--docker.swarmmode).
    • Traefik uses the balancer network to communicate with the other containers.
    • Traefik does support Let’s Encrypt, and there are several notes on requesting wildcard certificates as follows.
      • Docker does not create missing bind-mounted files (e.g. acme.json), so we should create the file ourselves beforehand (see the small sketch after this list).
      • If using a wildcard domain, you will also need to add the DNS provider credentials as environment variables in your “docker-compose.yml” file. So if you are using CloudFlare DNS, be sure to point the root domain and *.MY_DOMAIN.COM to the server IP, and then add the following environment section to the “docker-compose.yml” file
        version: '3'
        
        services:
          reverse-proxy:
            image: traefik
            command: --api --web --docker --docker.swarmmode 
            ports:
              - "80:80"     # The HTTP port
              - "443:443"   # The HTTPS port
              - "8080:8080" # The Web UI (enabled by --api)
            volumes:
              - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
              - ./traefik.toml:/traefik.toml
              - ./acme.json:/acme.json
            environment:
              CLOUDFLARE_API_KEY: "XXXXXXXXXXXXXXXXXXXXXXXXXX"
              CLOUDFLARE_EMAIL: "[email protected]"
            networks:
              - balancer
            deploy:
              placement:
                constraints: [node.role==manager]
        
        networks:
          balancer:
            external:
              name: balancer
      • A sample “traefik.toml” is also attached as follows:
        defaultEntryPoints = ["http", "https"]
        
        [entryPoints]
          [entryPoints.http]
          address = ":80"
            [entryPoints.http.redirect]
            entryPoint = "https"
          [entryPoints.https]
          address = ":443"
          [entryPoints.https.tls]
        
        # Enable ACME (Let's Encrypt): automatic SSL
        [acme]
        email = "[email protected]"
        storage = "./acme.json"
        caServer = "https://acme-v02.api.letsencrypt.org/directory"
        entryPoint = "https"
          [acme.dnsChallenge]
          provider = "cloudflare"
          delayBeforeCheck = 0
        [[acme.domains]]
          main = "*.MY_DOMAIN.com"
        [[acme.domains]]
          main = "MY_DOMAIN.com"
        
        [docker]
        endpoint = "unix:///var/run/docker.sock"
        domain = "MY_DOMAIN.com"
        watch = true
        exposedbydefault = false
        
  4. Then, I deploy the above reverse-proxy service:
    docker stack deploy -c docker-compose.yml tl-balancer
  5. Finally, I include the balancer network in the docker-compose.yml of the other services and declare the Traefik labels under the deploy option of that file. A sample app1-cluster/docker-compose.yml now looks as follows:
    version: '3.1'
    
    services:
      nginx:
        build: ./nginx/
        image: app1-docker_nginx:latest
        deploy:
          replicas: 2
          resources:
            limits:
              cpus: "0.4"
              memory: 4096M
            reservations:
              cpus: '0.1'
              memory: 1024M
          restart_policy:
            condition: on-failure
          labels:
            - "traefik.port=80"
            - "traefik.backend=tl-app1"
            - "traefik.docker.network=balancer"
            - "traefik.frontend.rule=HostRegexp:{subdomain:[^www]+}.MY_DOMAIN.com"
        ports:
          - 8080:80
        volumes:
          - ${HOST_WEB_ROOT}:/var/www/html
        depends_on:
          - php
        networks:
          - app1_net
          - balancer
    
    #  php: this service does not change
    
    networks:
      app1_net:
      db_shared: 
        external:
          name: db_shared
      balancer:
        external:
          name: balancer

    Some notes regarding the above file:

    • I forward all sub-domain traffic of MY_DOMAIN.com to the Nginx containers of the App1 service, except www.MY_DOMAIN.com and the root domain (since I will use those for another service). A quick check is sketched after this list.
    • The App1 cluster stack will need to be removed and deployed again so that these changes take effect.
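
As mentioned in the Let’s Encrypt notes above, a tiny sketch for preparing the acme.json certificate store before deploying the Traefik stack (Traefik complains if the file permissions are too open):

    # create the (empty) certificate store next to docker-compose.yml and lock it down
    touch acme.json
    chmod 600 acme.json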
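
To quickly check the frontend rule from the host, one can send requests with different Host headers to the Traefik entrypoint; SERVER_IP and the app1 sub-domain are placeholders:

    # expected to be routed to the App1 nginx containers
    curl -H 'Host: app1.MY_DOMAIN.com' http://SERVER_IP/
    # expected NOT to match the App1 frontend rule
    curl -H 'Host: www.MY_DOMAIN.com' http://SERVER_IP/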

Troubleshooting

  • If you get an error like “iptables: No chain/target/match by that name” when starting Docker containers (mostly after you stop/down and start/up a container again), simply make sure that your firewalld service is started, or, even simpler, restart the Docker service with
    ip link delete docker0
    service docker restart
  • To view a service’s task list, we can use docker service ps, e.g. for the Traefik service deployed above:
    docker service ps tl-balancer_reverse-proxy --no-trunc
