I’ve used Docker multiple times in the past and I wasn’t very clear on how it worked.


I have macOS and I’m connected to a Linux Ubuntu server via SSH

I wanted to install a video game server on that Ubuntu server

The installation was done directly on the OS

The installation produced a long error log

It was caused by compatibility issues

Docker lets you download and run images, both public and private. An image is the packaged installation of a program plus a base OS image; an image is built out of layers, and each layer is itself an image. You can get public images from several registries, including Docker Hub, Docker’s own repository
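As a sketch (assuming Docker is already installed and the daemon is running), pulling and inspecting a public image from Docker Hub looks like this:

```shell
# Download the official Ubuntu 22.04 base image from Docker Hub
docker pull ubuntu:22.04

# List the images stored locally
docker image ls

# Show the layers the image is built from (each line is one layer)
docker image history ubuntu:22.04
```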

A container isn’t a “combo” of separate pieces that you choose (App + OS + Hardware). It’s more like an onion with layers that are already glued together

Docker does not emulate hardware. Unlike a virtual machine (such as VMware or VirtualBox), the container uses the kernel of your Ubuntu server directly
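You can see this kernel sharing for yourself (assuming Docker and the `ubuntu` image are available): the two commands below report the exact same kernel version, because the container has no kernel of its own:

```shell
# Kernel version of the host
uname -r

# Kernel version "inside" an Ubuntu container: same as above,
# since the container borrows the host's kernel
docker run --rm ubuntu:22.04 uname -r
```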

Container: Uses your CPU and RAM resources natively

The container does not have a complete operating system. It has no drivers, no graphical environment, and no kernel of its own

It’s an extremely “cut-down” version so that the app can run

You choose a “Base Image” that already includes the minimum “OS” and the libraries you need, and then you paste your code on top
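A minimal sketch of that idea: a Dockerfile that starts from a base image and pastes a hypothetical app on top. The file name `server.sh`, the `curl` dependency, and the image tag `my-game-server` are assumptions made up for illustration:

```shell
# Write a minimal Dockerfile (the app name "server.sh" is hypothetical)
cat > Dockerfile <<'EOF'
# Base image: minimal Ubuntu with the basic libraries
FROM ubuntu:22.04

# Install whatever the app needs (example dependency only)
RUN apt-get update && apt-get install -y curl

# Paste your code on top of the base layers
COPY server.sh /app/server.sh

# Command to run when the container starts
CMD ["/bin/bash", "/app/server.sh"]
EOF

# Then build the image (requires Docker installed):
# docker build -t my-game-server .
```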

There is no hardware layer of its own: Docker “talks” to your real machine’s hardware through the host kernel


If I use an Ubuntu Server image on my PC, is it the same as installing it on my own PC? Does it use the same hardware?

Exactly, it uses your own hardware, but with a “filter” in between. It’s not a real installation in the traditional sense (where the system takes full control of the disk and boot process), but rather an isolated process that runs on top of what you already have.

Unlike a Virtual Machine (VM) that “pretends” to have its own video card, processor and RAM (emulation), Docker asks your kernel (the core of your system) for permission to use the hardware directly.

  • CPU and RAM: If your PC has 16 GB of RAM, the Ubuntu container “sees” all 16 GB. You don’t have to “reserve” 2 GB up front as you would in VirtualBox (although you can set limits if you want to)
  • Speed: The execution is almost as fast as if you installed the program directly on your Windows, Mac, or Linux
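By default a container can use all of the host’s RAM and CPU. If you do want VM-style caps, Docker can impose them at run time with the real `--memory` and `--cpus` flags (this sketch assumes the `ubuntu` image is available):

```shell
# Run a container limited to 2 GB of RAM and 1 CPU,
# then print its view of available memory
docker run --rm --memory=2g --cpus=1 ubuntu:22.04 free -h
```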

The software running inside “thinks” it’s on a real Ubuntu Server. It has its own folders (/etc, /var, /bin) that don’t get mixed up with yours.

There’s no “boot” process. There’s no BIOS, no GRUB, no driver loading. When you run docker run, the container takes advantage of the fact that your PC has already booted and simply launches the process

  • On Linux, the Ubuntu Server container uses the same kernel as your PC. It’s like any other program, but “disguised” as an operating system
  • On macOS and Windows, Docker Desktop starts a very lightweight Linux mini-virtual machine (invisible to you) and then places the containers on top of that, because containers need a Linux kernel

If you were to install the app directly on your OS (bare metal), you would clutter your system with .NET libraries, SQL databases, and dependencies that are difficult to remove later.
By using the image, you have that “Ubuntu Server” inside a jar:

  • You use the full power of your hardware
  • If something goes wrong or you want to start from scratch, you delete the container and your PC is left spotless, as if you had never installed anything

It’s like having an instant “spare PC” that uses the same components as yours, but that you can turn on, turn off, or erase in a second
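The “delete it and your PC is spotless” part looks like this in practice (the container and image names here are examples, not real published names):

```shell
# Stop and remove a container (its writable layer disappears with it)
docker stop my-game-server
docker rm my-game-server

# Remove the image too if you no longer want it on disk
docker rmi my-game-server

# Or sweep away everything unused in one go
docker system prune
```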


Your application sits on top of those layers of files

Installing a program on a specific OS using Docker eliminates compatibility issues, as the entire environment is identical to the one used by the developers and works seamlessly

With Docker you package the program together with its dependencies; the whole bundle is called an environment

the environment is isolated

With Docker you can create containers, environments go inside, and you can have several containers with completely different environments
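A quick sketch of several containers with completely different environments coexisting on one machine (assumes Docker and these public images are available; `sleep infinity` just keeps each container running for the demo):

```shell
# Two isolated environments, side by side, on the same machine
docker run -d --name env-ubuntu ubuntu:22.04 sleep infinity
docker run -d --name env-debian debian:12 sleep infinity

# Both appear here, each isolated from the other and from the host
docker ps
```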


I want to reinstall the video game server on Linux Ubuntu Server, but this time I’ll do it using Docker

In Docker I need to pull images: the program (game server) and the Linux Ubuntu Server OS version

The program must be compatible with the OS version

Has anyone already tested that the program works with this image?

Since someone has already tested it, there won’t be any compatibility issues
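Running the game server through Docker would then be a single command along these lines. Note: the image name and port below are placeholders, not a real published image; the actual name has to be looked up on Docker Hub:

```shell
# Hypothetical example: image name and port are placeholders
docker run -d \
  --name game-server \
  -p 27015:27015 \
  some-publisher/some-game-server:latest
```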

So, Docker is for doing “plug-and-play” installations: download the image and run it

Once you understand how Docker works, you’ll want to do all your installations with Docker


Docker is very useful in a work team

Let’s assume there are 5 people in a work team

Each of the 5 people has different hardware (PC, notebook, mac)

Even if 2 of the 5 people share the same OS family (macOS, Windows, Linux), they may have different versions (XP, 7, Vista) or, in the case of Linux, different distributions (Ubuntu, openSUSE, Debian)

Together they work with Git and upload the source code (not the executable) to a repository like GitHub or GitLab

The code will ultimately run in production on a Linux Ubuntu server

(meaning the executable will not run on any of the 5 computers but on the server)

So each computer has to use the server to run tests? NO

Each computer uses Docker to recreate the server’s environment (OS) exactly as it is

A given computer might not even be able to run an executable built from the source code it is programming

Not directly on your computer, but indirectly, through Docker

Each computer will need images: the application, OS, database image…

You can get the OS image and other well-known images from the official Docker repository (Docker Hub)

The application image must be created by each programmer

You then create a container in Docker out of those images

It’s like having a jar (container) and putting ingredients in it (images, app, OS)

This will create the same environment on 5 different computers

The application works the same on all 5 computers and will fail at the same points (you can find the same error in the same way on all 5 computers)
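One common way for the 5 teammates to share the exact same environment is a compose file checked into the Git repo. A minimal sketch follows; the service names and the `postgres:16` image are illustrative assumptions, not the team’s actual stack:

```shell
# Write a minimal docker-compose.yml (service/image names are examples)
cat > docker-compose.yml <<'EOF'
services:
  app:
    build: .            # the application image each programmer builds
    depends_on:
      - db
  db:
    image: postgres:16  # a ready-made database image from Docker Hub
EOF

# Everyone then gets an identical environment with:
# docker compose up
```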


install docker on ubuntu server

sudo apt-get update

sudo apt-get install ca-certificates curl gnupg

sudo install -m 0755 -d /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

sudo usermod -aG docker $USER

docker compose version

docker run hello-world

newgrp docker

The command above is a temporary “patch” that applies the group permissions in the current session without logging out. It’s fine for a quick hello-world test, but for the change to take effect for all your processes, you need to log out and log back in

docker run hello-world

start/enable: Generally, when you install Docker CE on Ubuntu, the service is already enabled and set to automatic startup. It’s still a good idea to run it, but it will likely tell you that it’s already active.

sudo systemctl start docker

sudo systemctl enable docker

sudo systemctl status docker

Technically, none of those steps is superfluous, but if you want a shorter, “pro-style” route, the official convenience script does all of that in a single line:

curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh