I know, whales are not fish. But, go with it. The title sounds cool.

A crash course on Docker — Learn to swim with the big fish

The quick start guide you are looking for.

If you’ve been following software development trends over the past year, Docker is a term you’ve surely grown tired of hearing by now. You may have felt overwhelmed by the vast number of developers talking about containers, isolated virtual machines, hypervisors and other DevOps-related voodoo magic. Today I’ll break it all down for you. It’s time to finally understand what Containers as a Service is and why you need it.

TL;DR

  1. “Why do I need this?”
    - Overview of all the key terms
    - Why we need CaaS and Docker
  2. Quick Start
    - Installing Docker
    - Creating a container
  3. Real-life scenario
    - Creating an nginx container to host a static website
    - Learning to use build tools to automate Docker commands

“Why do I need this?”

I asked myself the same question not so long ago. After being a stubborn developer for way too long, I finally sat down and accepted the awesomeness of using containers. Here’s my take on why you should try it out.

Docker?

Docker is software for creating containerized applications. The idea behind a container is to be a small, stateless environment for running a single piece of software.

A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.
Official Docker website

Fancy words aside, it’s just a tiny virtual machine with only the bare-bones features needed to run the application you put inside it. Okay, virtual machine?

Virtual machine?

A virtual machine (VM) is literally what the name says. A virtual version of a real machine. It simulates the hardware of a machine inside of a larger machine. Meaning, you can run many virtual machines on one larger server. Have you ever seen the movie Inception? Yeah, well somewhat like that. What enables the VMs to work is a cool piece of software called a Hypervisor.

Hypervisor?

I’m killing you with these terms. But bear with me, it’s all for a reason. Virtual machines only work because of the Hypervisor. It’s special software that enables a physical machine to host several different virtual machines. All of these VMs can run their own programs and will appear to be using the host’s hardware. However, it’s actually the Hypervisor that’s allocating resources to the VMs.

Note: If you’ve ever tried installing software such as VirtualBox, only to have it fail miserably, it was most likely because hardware virtualization (Intel VT-x or AMD-V) wasn’t enabled in the BIOS of your computer. This has happened to me more times than I can remember. *nervous laugh*

If you’re a nerd like me, here’s an awesome write-up on the topic of what Hypervisors are.

Answering my own questions…

Why do we really need CaaS? We’ve been using virtual machines for so long, how come containers are so good all of a sudden? Well, nobody said virtual machines are bad, they’re just hard to manage.

DevOps is generally hard, and you need a dedicated person doing that work all the time. Virtual machines take up a lot of storage and RAM, and they are time-consuming to set up. Not to mention you need a fair share of experience to manage them the right way.

Instead of doing it twice, automate it

With Docker you can abstract away all the time-consuming configuration and environment setup, and focus on the coding instead. With the Docker Hub, you can grab pre-built images and get up and running in a fraction of the time it would take with a regular VM.

But the biggest advantage is creating a homogeneous environment. Instead of having to install a list of different dependencies to run your application, you now only need to install one thing: Docker. Since it’s cross-platform, every single developer on your team will be working in the exact same environment. The same applies to your development, staging and production servers. Now, this is cool. No more “it works on my machine.”

Quick Start

Let’s get crackin’ with the installation. It’s awesome that you can have just one piece of software installed on your development machine, and still be sure everything will work just fine. Docker is, quite literally, all you need.

Installing Docker

Luckily the installation process is very easy. Let me show you how you do it on Ubuntu.

$ sudo apt-get update
$ sudo apt-get install -y docker.io

That’s all you need. To make sure it’s running you can run another command.

$ sudo systemctl status docker

It should return output that looks something like this.

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2018-01-14 12:42:17 CET; 4h 46min ago
     Docs: https://docs.docker.com
 Main PID: 2156 (dockerd)
    Tasks: 26
   Memory: 63.0M
      CPU: 1min 57.541s
   CGroup: /system.slice/docker.service
           ├─2156 /usr/bin/dockerd -H fd://
           └─2204 docker-containerd --config /var/run/docker/containerd/containerd.toml

If the system service is stopped, you can run a combo of two commands to spin it up and make sure it starts on boot.

$ sudo systemctl start docker && sudo systemctl enable docker

That’s it, you’re ready to go.

With the basic installation of Docker you’ll need to run the docker command as sudo. However, you can add your user to the docker group, and you’ll be able to run the command without sudo.

$ sudo usermod -aG docker ${USER}
$ su - ${USER}

Running these commands will add your user to the docker group. To verify it worked, run $ id -nG, and if docker shows up in the list of groups, rest assured you did everything right.
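For an end-to-end sanity check that Docker now works without sudo, the classic hello-world image does the trick (purely optional, it just pulls a tiny image and prints a confirmation message).

$ docker run hello-world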

But, what about Mac and Windows? Luckily the installation is just as easy. You download a simple file that starts an installation wizard. Doesn’t get any easier than that. Check those out here for Mac and here for Windows.

Spin up a container

With Docker installed and running, we can go ahead and play around for a bit. The first four commands you need to get up and running with Docker are:

  • create — Creates a container from an image.
  • ps — Lists running containers, optional -a flag to list all containers.
  • start — Starts a created container.
  • attach — Attaches the terminal’s standard input and output to a running container, literally connecting you to the container as you would to any virtual machine.

Let’s start small. We’ll grab an Ubuntu image from the Docker Hub and create a container from that.

$ docker create -it ubuntu:16.04 bash

We’re adding -it as an option to give the container an interactive terminal (keep STDIN open and allocate a pseudo-TTY), so we can connect to it, while also telling it to run the bash command so we get a proper shell. By specifying ubuntu:16.04 we pull the Ubuntu image, with the version tag 16.04, from the Docker Hub.

Once you’ve run the create command go ahead and verify the container was created.

$ docker ps -a

The list should look somewhat like this.

CONTAINER ID   IMAGE          COMMAND   CREATED     STATUS    PORTS   NAMES
7643dba89904   ubuntu:16.04   "bash"    X min ago   Created           name

Awesome, the container is created and ready to be started. Running the container is as simple as just giving the start command the ID of the container.

$ docker start 7643dba89904

Once again check if the container is running, but now without the -a flag.

$ docker ps

If it is, go ahead and attach to it.

$ docker attach 7643dba89904

Did you see that? The cursor changes. Why? Because you just entered the container. How cool is that. You can now run any bash command you’re used to in Ubuntu, just as if it was an instance running in the cloud. Go ahead and try one.

$ ls

It’ll work just fine and list all the directories. Heck, even $ ll will work. This simple little Docker container is all you need. It’s your own little virtual playground where you can do development, testing or whatever you want! There’s no need to use VMs or heavy software. To prove my point, go ahead and install whatever you like in this little container. Installing Node will work fine, be my guest and try it out (there’s a quick sketch right after the note below). Or, if you want to exit the container, all you need to do is literally type exit. The container will stop, and you can list it again by typing $ docker ps -a.

Note: Every Docker container runs as root by default, which is why the sudo command doesn’t exist inside it. Every command you run is automatically executed with root privileges.
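For instance, installing Node inside the running Ubuntu container might look something like this (a rough sketch; on Ubuntu 16.04 the package from the default repositories installs the binary as nodejs, and no sudo is needed per the note above).

$ apt-get update
$ apt-get install -y nodejs
$ nodejs --version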

Real Life Scenario

Time to get into some real stuff. This is what you’ll be using in real life for your own projects and production applications.

Containers are stateless?

I mentioned above that every container is isolated and stateless, meaning once you delete a container, the contents will be deleted forever.

$ docker rm 7643dba89904

Okay, this is a problem right? How do you persist data in such a case?

Now’s when shit gets real. Have you ever heard of volumes? Let me tell you. Volumes let you map directories on your host machine to directories inside of the container. Here’s how.

$ docker create -it -v $(pwd):/var/www ubuntu:latest bash

While creating a new container, add the -v flag to specify which volume to create and persist. This command will bind the current working directory on your machine to the /var/www directory inside the container.

Once you start the container with the $ docker start <container_id> command, you’ll be able to edit the code on the host machine and see the changes immediately in the container. This gives you the ability to persist data for various use cases, from keeping images to storing database files, and of course for development purposes where you need live reload capabilities.
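Here’s a minimal sketch of that round trip, assuming the container created above with the /var/www volume (use whatever ID the create command gave you).

$ echo "hello from the host" > hello.txt
$ docker start <container_id>
$ docker attach <container_id>
$ ls /var/www    # run inside the container, hello.txt shows up right away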

Note: Let me tell you a secret. You can also run the create and start commands in one with the run command.

$ docker run -it -d ubuntu:16.04 bash

The only addition is the -d flag which tells the container to run detached, in the background, meaning you can go ahead and attach to it right away.

Why am I talking about volumes this much?

Indulge me for a bit longer. Let me show you why. We can create a simple nginx web server for hosting a static website in a couple of simple steps.

Create a new directory and name it whatever you like; I’ll name mine myapp for convenience. All you need to do is create a simple index.html file in the myapp directory and paste this in.

<!-- index.html -->
<html>
  <head>
    <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet" integrity="sha256-MfvZlkHCEqatNoGiOXveE8FIwMzZg4W85qfrfIFBfYc= sha512-dTfge/zgoMYpP7QbHy4gWMEGsbsdZeCXz7irItjcC3sPUFtf0kuFbDz/ixG7ArTxmDjLXDmezHubeNikyKGVyQ==" crossorigin="anonymous">
    <title>Docker Quick Start</title>
  </head>
  <body>
    <div class="container">
      <h1>Hello Docker</h1>
      <p>This means the nginx server is working.</p>
    </div>
  </body>
</html>

We have a generic web page, with some heading text. What’s left is to run an nginx container.

$ docker run --name webserver -v $(pwd):/usr/share/nginx/html -d -p 8080:80 nginx

Here you can see we’re grabbing an nginx image from Docker Hub, so we get an instantly configured nginx. The volume configuration is similar to what we did above; we just pointed it to the default directory where nginx hosts HTML files. What’s new are the --name option, which we set to webserver, and the -p 8080:80 option. We mapped the container’s port 80 to port 8080 on the host machine. Of course, don’t forget to run the command while in the myapp directory.

Check if the container is running with $ docker ps and fire up a browser window. Navigate to http://localhost:8080, and behold the beauty!


It’s as simple as that. We have an nginx web server up and running in just a couple of commands. Feel free to edit something in the index.html. Reload the page, and you’ll see the content has changed. Damn, how I love Docker.
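If you prefer staying in the terminal, you can hit the server with curl and tail its logs using the webserver name we set with --name.

$ curl http://localhost:8080
$ docker logs webserver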

Note: You can stop a running container with the stop command. Make sure to stop the container before proceeding with the tutorial.

$ docker stop <container_id>

How to make your life even easier?

I have a saying: if I need to do something twice, I’d rather automate it. Luckily, Docker has me covered. Alongside the index.html file, add a Dockerfile. Its name is literally just Dockerfile, without any extension.

# Dockerfile
FROM nginx:alpine
VOLUME /usr/share/nginx/html
EXPOSE 80

The Dockerfile is quite literally the build configuration for Docker images. Key focus on images! We’re specifying we want to grab the nginx:alpine image as the base for our image, create a volume and expose port 80.
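As a side note, if you’d rather bake the page into the image itself instead of mounting a volume, a small variation of the Dockerfile would do it (just a sketch; we’ll stick with the volume approach below).

# Dockerfile (self-contained variant)
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
EXPOSE 80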

To build an image we have the build command.

$ docker build . -t webserver:v1

The . specifies where the Dockerfile used to build the image is located, while -t marks the tag for the image. This image will be known as webserver:v1.

With this command we didn’t immediately pull an image from Docker Hub; instead, we created our own image. To list all your images, you use the images command.

$ docker images

Now we want to run the image we created.

$ docker run -v $(pwd):/usr/share/nginx/html -d -p 8080:80 webserver:v1

The power of the Dockerfile is the customization you can give your container. You can pre-build images to your liking. But, if you really don’t like repetitive tasks, you can always take it a step further and install docker-compose.

Docker-compose?

It’ll let you both build and run the container in one command. But, what’s even more important is that you can build a whole cluster of containers and configure them by using docker-compose.

Jump over to their install page and get it installed on your machine, for your respective operating system.

Back in the terminal run $ docker-compose --version and hope it outputs something back to you. If it does, you’re set. Let’s get crackin’ with some compositions!

Alongside the Dockerfile add another file named docker-compose.yml and paste this snippet in.

# docker-compose.yml
version: '2'
services:
  webserver:
    build: .
    ports:
      - "8080:80"
    volumes:
      - .:/usr/share/nginx/html

Be careful with the indentations, otherwise it won’t work properly. That’s it. What’s left is to run docker-compose.

$ docker-compose up (-d)

Note: The -d signals docker-compose to run detached, then you can use
$ docker-compose ps to see what’s currently running, or stop docker-compose with $ docker-compose stop.

Docker will build the image from the Dockerfile in the current directory (.), map the ports as we did above, as well as share the volumes. See what’s happening? The exact same thing we did with the build and run commands, except now it’s a single command: docker-compose up.

Jump back to the browser and you’ll see everything works just as it did before. The only difference is that you’ve now escaped the tedious work of writing commands in the terminal, replacing them with two configuration files, the Dockerfile and the docker-compose.yml file. Both of these can be added to your Git repository, meaning every contributor to your project can have the development environment up and running in a fraction of the time it would take to install dependencies manually. Why is this important? Because it will always work in production as expected. The exact same network of containers will be spun up on the production server!
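The same file also scales naturally once you need more than one container. Purely as an illustration, adding a Redis cache next to the web server could look roughly like this (the cache service is hypothetical and not part of our static site).

# docker-compose.yml (illustrative multi-service sketch)
version: '2'
services:
  webserver:
    build: .
    ports:
      - "8080:80"
    volumes:
      - .:/usr/share/nginx/html
  cache:
    image: redis:alpine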

To wrap this section up, go ahead and list all the containers once again.

$ docker ps -a

If you ever want to delete a container, you can run the rm command I mentioned above; for deleting images, use the rmi command.

$ docker rmi <image_id>

Try not to leave residual containers lying around, and make sure to delete them when you no longer need them.
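If things do pile up anyway, Docker 1.13 and newer also ship a handy cleanup command; just check what it’s about to remove before confirming.

$ docker system prune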

A broader perspective?

To make sure Docker doesn’t come across as the only container technology, I have to mention the less popular kids on the block. Docker is merely the most widely used containerization option we have today. But rkt seems to be doing just fine.

Digging deeper, I have to mention container orchestration. We’ve only talked about the tip of the iceberg. Docker-compose is a tool for creating networks of containers. But managing all of that and ensuring maximum uptime is where orchestration comes into play.

This is not at all a trivial task. As the number of containers grows, we need a way of automating the various DevOps tasks we usually do. Orchestration is what helps us out with provisioning hosts, creating or removing containers when you need to scale out or down, re-creating failed containers, networking containers, and much more. All the big guns out there use Google’s solution called Kubernetes or Docker’s own Swarm Mode.
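Just to give you a taste, turning a single host into a one-node Swarm and running our nginx image as a managed service looks roughly like this (a sketch only; Kubernetes has its own, quite different tooling).

$ docker swarm init
$ docker service create --name webserver -p 8080:80 nginx
$ docker service ls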

Wrapping up

Whoa, that was a lot to take in… 
If I haven’t convinced you of the vast benefits of using CaaS and the simplicity of Docker, I’d urge you to reconsider and wrap one of your existing applications in a Docker container!

A Docker container really is just a tiny VM where you can do anything you like, from development, staging, testing to hosting production applications.

The homogeneous nature of Docker is like magic for production environments. It will ease the stresses of deploying applications and managing servers. Because now you’ll know for sure whatever works locally will work in the cloud. That’s what I call peace of mind. No more hearing the infamous sentence we have all heard one too many times.

Well it works on my machine…

If you want to take a look at all the code, and terminal commands, we wrote above, here’s the repository. Or if you want to read my latest articles, head over here.

Hope you guys and girls enjoyed reading this as much as I enjoyed writing it. 
Do you think this tutorial will be of help to someone? Do not hesitate to share. If you liked it, smash the clap below so other people will see this here on Medium. Don’t forget to show us some love by following the Sourcerer blog!