Docker

Docker Compose Services


Heimdall

Defining Services

Creating a Heimdall service using docker-compose can be done with the below basic docker-compose.yml -

---
version: "2"
services:
  heimdall:
    image: linuxserver/heimdall
    container_name: heimdall
    environment:
      - PUID=65522
      - PGID=65522
      - TZ=Europe/London
    volumes:
      - ./config:/config
    ports:
      - 8080:80
    restart: unless-stopped

Further customization of the default user and other users can be done within the app itself. Once logged in, set a password for the admin user and decide whether you want public view on or off. It's important to note that Heimdall dashboards are user-specific, and each one behaves according to its user's settings.

Before starting your service, take note of the port that your local host will pass to this container (local:container). Also be sure to mount any volumes you wish to configure or modify further, so you have easy access to them. To do this, just adjust the volume entry in the list above -

volumes:
  - /etc/docker-heimdall/:/config

The above volume mounts the container's /config directory to /etc/docker-heimdall/ on our local host, replacing the relative ./config mount from the original example (two host paths cannot both be mapped to /config). Keeping configuration in a known absolute path like this makes backing up our service and its files much easier.
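
For example, a minimal backup sketch could be as simple as the following, assuming the /etc/docker-heimdall/ mount from above and a hypothetical /srv/backups destination -

# Stop the container so the config isn't written to mid-backup
docker-compose stop heimdall
# Archive the mounted config directory into a dated tarball
tar -czf /srv/backups/heimdall-config-$(date +%F).tar.gz -C /etc/docker-heimdall .
# Bring the service back up
docker-compose start heimdall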

Stopping / Starting Heimdall

Create a directory to store your Heimdall configuration files and / or mounted volumes, place a docker-compose.yml with the above contents inside it, and run docker-compose up -d. This will start the services we defined, mount the volumes specified, and expose the ports we set in the docker-compose.yml. This file can be further modified to suit the needs of your application / local server. Adding mounted volumes is a useful feature when planning your service; since these directories are available locally, they are easy to back up and modify.

Starting defined services -

admin@host:~/heimdall$ docker-compose up -d
Creating network "heimdall_default" with the default driver
Creating heimdall ... done
admin@host:~/heimdall$ 

Stop and remove created containers / networks -

admin@host:~/heimdall$ docker-compose down
Stopping heimdall ... done
Removing heimdall ... done
Removing network heimdall_default
admin@host:~/heimdall$ 
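
If a newer Heimdall image is published, the same compose file can be used to update the running service. A quick sketch, assuming you are in the directory containing the docker-compose.yml -

docker-compose pull heimdall
docker-compose up -d
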
Docker Compose Services

Shlink

Overview

Webservers: Apache, Nginx, and Swoole
Databases: MySQL, MariaDB, PostgreSQL, Redis, and MSSQL

See the docker-compose.yml on GitHub for a good collection of services, along with options for each, that could be run.

wget https://github.com/shlinkio/shlink/releases/download/v2.2.1/shlink_2.2.1_dist.zip
unzip shlink_2.2.1_dist.zip
cd shlink_2.2.1_dist/
sudo chmod -R +w data/

Running the bundled installer from this distribution failed with a PHP parse error (most likely the host's PHP version is older than what the release expects) -

PHP Parse error:  syntax error, unexpected '$isUpdate' (T_VARIABLE), expecting ')' in /home/shlink/shlink_2.2.1_dist/vendor/shlinkio/shlink-installer/bin/run.php on line 12

So I grabbed the docker image instead -

After pointing nginx to port 8080, Shlink can be quickly spun up with a single docker command. The only values that need changing below are SHORT_DOMAIN_HOST, and SHORT_DOMAIN_SCHEMA if you are not using https.

docker run --name shlink -p 8080:8080 -e SHORT_DOMAIN_HOST=domain.com -e SHORT_DOMAIN_SCHEMA=https -e GEOLITE_LICENSE_KEY=kjh23ljkbndskj345 shlinkio/shlink:stable
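
If you would rather manage Shlink alongside other compose services, the same command could be expressed as a docker-compose.yml; this is just a sketch of the docker run above, using the same placeholder values -

version: "3"
services:
  shlink:
    image: shlinkio/shlink:stable
    container_name: shlink
    ports:
      - 8080:8080
    environment:
      - SHORT_DOMAIN_HOST=domain.com
      - SHORT_DOMAIN_SCHEMA=https
      - GEOLITE_LICENSE_KEY=kjh23ljkbndskj345
    restart: unless-stopped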

Once active, visiting your domain will result in a 404, since we are only running Shlink's server for links to route through. To manage the instance, we'll connect to it from the hosted web client at app.shlink.io. This is done by first generating an API key from the command line, under the same user that manages the shlink docker service -

docker exec -it shlink shlink api-key:generate

This command will output a string of characters that we can input on app.shlink.io by filling out a quick form asking us to name our server, provide the domain, and supply the secret API key.

Once this is done, you'll be greeted with the page below, allowing you to create shortlinks and edit or track links that already exist. This could be useful for links spread across a wide range of services: rather than going back to replace every link when it needs to point somewhere new, you can simply update the shortlink within your Shlink dashboard.

One thing that got old quickly was deleting shortlinks, where you are required to type the short-code into a confirmation prompt that covers the short-code from view. I only had 3 to remove, and it took quite a bit of time for such a simple task. I did not see a bulk deletion option in the dashboard.

You have the option to set various parameters when creating a shortlink, and you can always return to edit existing shortlinks with the same options available -

The dashboard provided further insight into links, generating some graphs from information like OS, location, and browser -

There was even an option to omit results from the generated graphs using an interactive table provided within the dashboard. If you look below, I've deselected the only Android hit in the table, yet the data still appears as though there are Android users. Modifying the table only seemed to impact the bar graphs.

I noticed from the statistics page above that the dashboard didn't seem to be reporting any information on location, though it clearly supports displaying it. After looking through the commands above a bit, I tried my luck on the back end by running shlink visit:locate.
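
For reference, console commands like this are run inside the container, along the lines of the following (using the container name from the docker run above) -

docker exec -it shlink shlink visit:locate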

Seems like there's an issue here, but at least it's clear and easy to test for a fix. I'll come back to this if I have time.

When attempting to track a shortlink via the CLI, it basically feeds you the nginx logs of the hits relative to your shortlink. I also noticed that attempting to create a database that already exists exits cleanly, and there is a nice tool for updating your database as well.
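
A rough sketch of those commands, run through the container as before; note that the db:create / db:migrate names are my assumption of the relevant console commands, so double-check them against the CLI's help output -

# Exits cleanly if the database already exists
docker exec -it shlink shlink db:create
# Applies any pending updates to an existing database
docker exec -it shlink shlink db:migrate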

wget https://github.com/shlinkio/shlink-web-client/archive/v2.3.1.zip
unzip v2.3.1.zip
cd shlink-web-client-2.3.1/
docker build . -t shlink-web-client
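
With the image built, it could be started like any other local image; something along these lines should serve the client, though the 8000:80 port mapping is an assumption - check the image's documentation for the port it actually exposes -

docker run -d --name shlink-web-client -p 8000:80 shlink-web-client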

The docker-compose.yml looks like the following

version: '3'

services:
  shlink_web_client_node:
    container_name: shlink_web_client_node
    image: node:12.14.1-alpine
    command: /bin/sh -c "cd /home/shlink/www && npm install && npm run start"
    volumes:
      - ./:/home/shlink/www
    ports:
      - "3000:3000"
      - "56745:56745"
      - "5000:5000"
Docker Compose Services

GitLab

Versions

GitLab offers two types of instances, SaaS and self-hosted. SaaS is their hosted gitlab.com instance, which you can sign up on and purchase different tiers for. The second is a self-hosted environment with limitations based on the license purchased.

SaaS

Differences in SaaS GitLab versions

Support for CI tools and dashboards comes with Bronze.

Support for Conan, Maven, and NPM comes with Silver.

Support for major security features comes with Gold.

Self-hosted

Differences in self-hosted GitLab versions

It's good to know that you can always upgrade your CE instance to EE just by installing the EE packages on top of the CE.

It's also good to know what would happen to your instance should your subscription expire, if you are considering an EE license.

Installation

Ansible Role

Docker CE Image

Docker EE Image

GitLab uses their Omnibus GitLab package to group the services needed to host a GitLab instance without creating confusing configuration scenarios.

GitLab can even be hosted on a Raspberry Pi, which means there is room to tweak the instance to improve performance or save resources on your host. Some options would be splitting the DBs off of the host and reducing the number of running processes. Both are described and documented in the link above.
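
As a rough sketch of what those tweaks look like, the following settings would go in gitlab.rb (or the GITLAB_OMNIBUS_CONFIG block shown below); the exact keys and values should be confirmed against the documentation linked above -

# Reduce the number of Puma workers on a low-memory host
puma['worker_processes'] = 2
# Lower Sidekiq's background job concurrency
sidekiq['max_concurrency'] = 10
# Disable the bundled Prometheus monitoring stack
prometheus_monitoring['enable'] = false
# Split the database off of the host by disabling the bundled PostgreSQL
# postgresql['enable'] = false
# gitlab_rails['db_host'] = 'db.example.com'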

Docker Compose

Official Compose Documentation

Currently, the basic docker-compose.yml shown on the official documentation is seen below.

web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'gitlab.example.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.example.com'
      # Add any other gitlab.rb configuration here, each on its own line
  ports:
    - '80:80'
    - '443:443'
    - '22:22'
  volumes:
    - '$GITLAB_HOME/config:/etc/gitlab'
    - '$GITLAB_HOME/logs:/var/log/gitlab'
    - '$GITLAB_HOME/data:/var/opt/gitlab'

By default, Docker Compose will name this container by combining the name of your current working directory with the service name (in the format directory_web_1). If you want to name this container yourself, add container_name: name within the web service of this docker-compose.yml

Required Modifications

We need to make sure to replace hostname and external_url with URLs relevant to our environment, or starting this container will fail.

hostname

The hostname must be in the format of the root domain domain.com - without the schema (http / https) or port.

external_url

The external_url must be in the format of http://domain.com:8080 where 8080 is the port we are serving the content to externally. If you are using the default port 80, you can just use the http://domain.com format.

This error is seen with docker start gitlab && docker logs -f gitlab when we have improperly set the external_url variable within our docker-compose.yml -

Unexpected Error:
-----------------
Chef::Exceptions::ValidationFailed: Property name's value http://myspace.com does not match regular expression /^[\-[:alnum:]_:.]+$/

GITLAB_HOME

We also need to ensure that we either replace the environment variable $GITLAB_HOME or set it to a value relevant to our environment. Otherwise, when starting this container Docker will not be able to bind the volumes, and we will not be able to modify the required configuration files within them.
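
For example, setting it in the shell (or your ~/.bashrc) before bringing the service up could look like this, where /srv/gitlab is just an example location -

export GITLAB_HOME=/srv/gitlab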

If you want to see what environment variables are set by default with the gitlab/gitlab-ce Docker image, run the following command

docker run gitlab/gitlab-ce env

For this image, we see the following output.

PATH=/opt/gitlab/embedded/bin:/opt/gitlab/bin:/assets:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=0a37118aae33
LANG=C.UTF-8
TERM=xterm
HOME=/root

Serving Locally

Working on hosting this container on localhost? Because your host checks /etc/hosts before querying DNS, you can override any hostname by adding the configuration below, which allows us to visit www.myspace.com within a web browser and see the content being served locally.

127.0.0.1   localhost www.myspace.com myspace.com

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Starting the Services

Since the Omnibus is a self-contained environment that has everything you need to host GitLab, the docker-compose.yml we configured above only needs to contain the single web service, which uses the gitlab/gitlab-ce Docker image. If you configure your hosts file as in the /etc/hosts example above, you can quickly deploy the entire service with the below docker-compose.yml

web:
  image: 'gitlab/gitlab-ce:latest'
  container_name: gitlab
  restart: always
  hostname: 'myspace.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'http://myspace.com'
      # Add any other gitlab.rb configuration here, each on its own line
  ports:
    - '80:80'
    - '443:443'
    - '22:22'
  volumes:
    - '/home/user/docker/gitlab/config:/etc/gitlab'
    - '/home/user/docker/gitlab/logs:/var/log/gitlab'
    - '/home/user/docker/gitlab/data:/var/opt/gitlab'

You should not need to be running NGINX on your box locally.

This simple configuration is meant for testing only and omits the environment variable $GITLAB_HOME so that it is self-contained. That being said, all we need to do is run docker-compose up -d && docker logs -f gitlab, and visit myspace.com in a web browser.

At first, you may see the default GitLab 502 page while the container is starting, but within a few minutes you should be able to refresh the page and see the page below

This page asks you to create a password for the root account. After you submit this form, you can log in to GitLab with the username root and the password configured here.

Once logged in as root, we see the below landing page

A normal user account can be created through the normal registration process on the home page of your instance. At this point we can already register a guest user, create a public repository, clone it, then push new content.

Below, I'm logged in as a user in one window and root in the other. The Admin Area is a nice landing page if you are looking to configure a new feature that your instance does not have yet, as clicking on the ? next to any label will take you directly to the documentation to setup or modify that feature.

Resource Usage

Table Source

General hardware requirements can be found on the Official Hardware Requirements Documentation which gives detailed specifications on resources needed for various configurations.

If you plan to configure your instance to support greater than 1,000 users, you'll want to refer to the Official Reference Architectures Documentation.

Here, specifications are outlined for each component and service within the GitLab Omnibus that needs hardware adjusted or expanded based on the number of users expected to be using your instance.

For example, the hardware needed to support 10,000 users will be considerably more expensive than what is required for an instance with 500 users.

Memory

Below, we can see the actual difference in memory usage on our host by running free -ht while the container is running and after the container is stopped. This instance is running GitLab locally with no NGINX proxy running on the host itself. At the time of this test, there were only two users signed into the instance.

We should note that though the actual usage seen here is only 2.6GB, the basic requirement of 3.6GB for up to 500 users is still valid.

CPU

Below, we can see the difference in CPU load seen within htop. This instance is running GitLab locally with no NGINX proxy running on the host itself. At the time of this test, there were only two users signed into the instance.

GitLab Running

GitLab Down

Notable Features

Some notable differences seen on a self-hosted instance of GitLab

Repository Creation

When hosting your own GitLab instance, you are granted an extra option when creating a repository. This allows you to create repositories which are only available to users who are logged in.

GitLab Settings

Health Check Endpoints

GitLab provides some default endpoints to gather general status information from your instance. To see these, navigate to the Admin Area as an administrator and see the section below

Readiness example -

Liveness example -

Metrics example -

The output here is huge, and this screenshot is only a very small amount of the information available. See this pastebin for the full output, which is nearly 3,000 lines long.

Self Monitoring

Go here and enable self monitoring to automatically create a production environment which can be monitored by Prometheus and then passed to Grafana through extra configuration later on.

Doing this prompts a notification with a campaign offer for free credit on the Google Cloud platform and an additional credit from GitLab for getting started with a self hosted instance.

GitLab Applications

WIP

Grafana

GitLab's Omnibus includes a Grafana instance that is configured with GitLab's built-in OAuth right out of the box on any GitLab version beyond 12.0. If you do face any issues, see the Official Grafana OAuth Documentation for more detailed information on configuring this manually.

Visit http://yourdomain.com/-/grafana/login/gitlab to automatically link your GitLab account to a new Grafana user.

By default, the GitLab Omnibus ships with the following Grafana dashboards configured

A partial example of the NGINX dashboard

GitLab Server Advanced Configurations

To modify these files, which configure several back-end options for our GitLab instance, we need to have started our services so Docker can mount the container volumes containing the files we need to edit. Run docker-compose up -d and check the directory you set for $GITLAB_HOME in your docker-compose.yml. After a few seconds, we should notice this directory contains some new configurations.
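
Based on the volumes defined in the compose file above, that directory should contain something like the following -

ls $GITLAB_HOME
config  data  logs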

gitlab.rb

To regenerate the default configuration, remove or rename $GITLAB_HOME/config/gitlab.rb and restart the container.
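
For example, using the container name from the compose file above -

mv $GITLAB_HOME/config/gitlab.rb $GITLAB_HOME/config/gitlab.rb.bak
docker restart gitlab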

Mail Settings

GitLab sends mail using Sendmail by default. General email configurations can be found in the Email Settings section of the $GITLAB_HOME/config/gitlab.rb configuration file.

### Email Settings
# gitlab_rails['gitlab_email_enabled'] = true
# gitlab_rails['gitlab_email_from'] = 'example@example.com'
# gitlab_rails['gitlab_email_display_name'] = 'Example'
# gitlab_rails['gitlab_email_reply_to'] = 'noreply@example.com'
# gitlab_rails['gitlab_email_subject_suffix'] = ''
# gitlab_rails['gitlab_email_smime_enabled'] = false
# gitlab_rails['gitlab_email_smime_key_file'] = '/etc/gitlab/ssl/gitlab_smime.key'
# gitlab_rails['gitlab_email_smime_cert_file'] = '/etc/gitlab/ssl/gitlab_smime.crt'
# gitlab_rails['gitlab_email_smime_ca_certs_file'] = '/etc/gitlab/ssl/gitlab_smime_cas.crt'
SMTP

If you want to use a SMTP server instead, you can configure this in the GitLab email server settings section of the $GITLAB_HOME/config/gitlab.rb configuration file.

### GitLab email server settings
###! Docs: https://docs.gitlab.com/omnibus/settings/smtp.html
###! **Use smtp instead of sendmail/postfix.**

# gitlab_rails['smtp_enable'] = true
# gitlab_rails['smtp_address'] = "smtp.server"
# gitlab_rails['smtp_port'] = 465
# gitlab_rails['smtp_user_name'] = "smtp user"
# gitlab_rails['smtp_password'] = "smtp password"
# gitlab_rails['smtp_domain'] = "example.com"
# gitlab_rails['smtp_authentication'] = "login"
# gitlab_rails['smtp_enable_starttls_auto'] = true
# gitlab_rails['smtp_tls'] = false

###! **Can be: 'none', 'peer', 'client_once', 'fail_if_no_peer_cert'**
###! Docs: http://api.rubyonrails.org/classes/ActionMailer/Base.html
# gitlab_rails['smtp_openssl_verify_mode'] = 'none'

# gitlab_rails['smtp_ca_path'] = "/etc/ssl/certs"
# gitlab_rails['smtp_ca_file'] = "/etc/ssl/certs/ca-certificates.crt"
Incoming

GitLab can handle incoming email based on various configurations; see the Official Incoming Mail Documentation. This enables features like responding to issues and merge requests via email.
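
As a rough sketch, the relevant settings live in the same $GITLAB_HOME/config/gitlab.rb file; the values below are placeholders, and the exact keys should be confirmed against the documentation above -

# gitlab_rails['incoming_email_enabled'] = true
# gitlab_rails['incoming_email_address'] = "gitlab-incoming+%{key}@example.com"
# gitlab_rails['incoming_email_email'] = "gitlab-incoming@example.com"
# gitlab_rails['incoming_email_password'] = "[REDACTED]"
# gitlab_rails['incoming_email_host'] = "imap.example.com"
# gitlab_rails['incoming_email_port'] = 993
# gitlab_rails['incoming_email_ssl'] = true
# gitlab_rails['incoming_email_mailbox_name'] = "inbox"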

Outgoing

By default, GitLab sends no email to users upon registration. To enable this feature, sign into your instance as an administrator and navigate to the Admin Area. Once there, go to the General Settings of your instance and scroll down to expand the section below

Dockerfile

Official Dockerfile Documentation
Docker Images Docs is also a good place to get information on using various basic images which can be built off of.

Below, we define a Dockerfile within a dedicated directory on our system where we want to work on our docker image. Create this file with any text editor; the following instructions are available, each written as the instruction name followed by its arguments.

FROM defines the base image to build off of, from a repository on Docker Hub
LABEL adds metadata to the image, such as the maintainer's contact details
RUN defines a command to run in sequence as the Dockerfile is built
SHELL restarts into a given shell, seen below where we pass --login and -c parameters to bash
EXPOSE defines a port on the container to expose to the host
VOLUME creates a mount point for externally mounted volumes
USER sets the user that subsequent instructions (and the container) run as
WORKDIR sets the working directory for subsequent instructions
ENTRYPOINT sets the command executed when the container starts
ENV sets an environment variable available during the build and in the running container
ARG defines a build-time variable that can be overridden with --build-arg
COPY copies files from the build context into the image

# Default repository is the same that is used when running `hexo init`
ARG REPO='https://github.com/hexojs/hexo-starter'
# https://hub.docker.com/_/nginx as our base image to build off of
FROM nginx:latest
# Otherwise provide one during build..
# `docker build -t <TAG> . --build-arg REPO='https://github.com/username/repo'`
ARG REPO
LABEL maintainer='username@gmail.com'
# Install additional packages we need
RUN apt-get update && apt-get -y upgrade && apt-get install -y curl vim
# Grab NVM and restart shell to load commands into bash
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
SHELL ["/bin/bash", "--login", "-c"]
# Install NVM stable version and hexo
RUN nvm install stable
RUN npm install -g hexo-cli
EXPOSE 8080

In the Dockerfile above, I use ARG to define a default value REPO, which represents the repository to clone when building this docker image. In this case, the repository is the same one that is cloned automatically when running hexo init. Since we re-declare ARG REPO after the FROM command in the Dockerfile, it is accessible for the entire build process, instead of being limited to the FROM instruction. If you want to provide a different value when building the image, you can do so with docker build -t <TAG> . --build-arg REPO='<repository-url>'
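
For example, the two build variants would look like this; the tag matches the one used later on this page, and the override URL is just a placeholder -

# Build using the default REPO defined in the Dockerfile
docker build -t username/nginx-hexo:0.1 .
# Build while overriding the repository to clone
docker build -t username/nginx-hexo:0.1 . --build-arg REPO='https://github.com/username/repo'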

SHELL restarts our shell to load the nvm commands into bash, so that we can run nvm install stable in the next step. Otherwise, that command would fail saying that nvm does not exist.

Building Docker Images

To build a Dockerfile into an image, run the following command, where -t tags the built image in the preferred format of dockerhub-username/dockerhub-reponame:version

docker build -t username/nginx-hexo:0.1 .

Running Built Images

We can run docker images and see the following output displaying all the built docker images on our machine

REPOSITORY                   TAG                 IMAGE ID            CREATED             SIZE
username/nginx-hexo          0.1                 86325466e505        32 minutes ago      331MB

Now to start our newly built image, we run the following command

docker container run -d --name nginx-hexo \
username/nginx-hexo:0.1

To check that our image is running, run docker container ls to see output similar to the below

CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS               NAMES
7a74d968f0d2        username/nginx-hexo:0.1   "nginx -g 'daemon of…"   30 minutes ago      Up 30 minutes       80/tcp, 8080/tcp    nginx-hexo

Pushing Images to DockerHub

To log in to Docker, we need to run docker login and follow the prompts, supplying our username and password. On some systems, you could see the below error -

error getting credentials - err: exit status 1, out: `GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.secrets was not provided by any .service files`

To fix this, we run the following

sudo apt install gnupg2 pass

After logging into Docker on your machine, since we already properly tagged our image when we built it with docker build -t <TAG> . above, we can simply docker push <TAG>. Below, we look up our image's local ID and retag it to ensure it matches our DockerHub username and preferred image name / tag. Then, we push the image to DockerHub, publicly. If you want this image to be private, which it should be if unstable, you can make it so by logging into DockerHub and modifying the repository settings after making the first push.

Get the image ID -

docker images
username/nginx-hexo          0.1                 86513686e505        32 minutes ago      331MB

Assign the image ID a new tag (This is the same as the old in this case) -

docker tag 86513686e505 username/nginx-hexo:0.1

Push the docker image to DockerHub -

docker push username/nginx-hexo:0.1

Saving Images Locally

If you don't want to push to DockerHub for any reason, you can always just save your image locally using docker save, and then reload it later, either on the same machine or a new one, by using docker load.

Save the image

docker save username/nginx-hexo:0.1 > nginx-hexo.tar

Reload the image

docker load --input nginx-hexo.tar 

You should see the following output

a333833f30f7: Loading layer [==================================================>]   59.4MB/59.4MB
68a235fa3cf2: Loading layer [==================================================>]  119.3kB/119.3kB
b402ba6c11cd: Loading layer [==================================================>]  135.7MB/135.7MB
3fc85c9d7bd6: Loading layer [==================================================>]  17.61MB/17.61MB
Loaded image: username/nginx-hexo:0.1