Docker: Supercharge Development and Deployment

Basics of Docker

  • Docker was developed by Solomon Hykes and Sebastien Pahl and first released in March 2013.

  • Docker is an open-source tool used to create, build, deploy, and run applications on a system.

  • Docker is a game-changing technology that simplifies the deployment and management of software applications. It allows you to package your application and its dependencies into a portable container that can run on any system, eliminating compatibility issues.

  • The Docker Engine runs natively on Linux distributions and is written in the Go language.

  • Before Docker, developers often faced the problem that code which ran on the developer's system would not run on the user's system.

Benefits of Docker

  • Portability: Docker containers encapsulate the application, its dependencies, and its configuration into a single, portable unit. This enables consistent and reliable deployment across different environments: development, testing, and production. Containers can run on any operating system or cloud platform, making it easier to move applications between systems without compatibility issues.

  • Scalability: Docker provides excellent scalability options. With containerization, you can easily scale your application horizontally by running multiple containers across multiple machines.

  • Resource Efficiency: Docker containers utilize the host operating system's kernel, resulting in minimal overhead compared to traditional virtualization methods. This means containers are lightweight, start quickly, and consume fewer system resources. Docker enables efficient utilization of hardware, allowing you to run more containers on a single machine.

  • Isolation: Docker provides process-level isolation between containers. Each container operates independently, with its file system, network interfaces, and process space. This isolation ensures that applications running within containers are protected from conflicts and interference, making them more secure and stable.

  • Continuous Integration and Deployment (CI/CD): Docker simplifies the implementation of CI/CD workflows. Containers can be quickly built, versioned, and distributed as part of an automated pipeline. Docker images serve as the building blocks of applications, allowing for consistent testing, deployment, and rollback processes. This enables teams to deliver software faster, with increased reliability and reproducibility.

These advantages of Docker contribute to faster application development, improved collaboration between teams, efficient resource utilization, and simplified deployment processes.

Drawbacks of Docker

  • Networking Complexity: Setting up and managing networking between Docker containers can be complex, especially in distributed setups.

  • Resource Usage: Running multiple containers can consume a significant amount of resources on your system, so you need to manage them carefully to avoid performance issues.

  • Security Concerns: If not properly configured, Docker containers can have security vulnerabilities, so it's important to follow security best practices and keep everything up to date.

  • Compatibility Challenges: Some applications may not work smoothly in Docker containers, requiring additional configuration or modifications to make them compatible.

  • Docker containers share the host kernel, so Docker is suitable when the development and testing operating systems are the same; if a different operating system is required, a virtual machine is the better choice.

Docker Architecture

Docker architecture revolves around three key components: Docker Engine, Docker images, and containers. The Docker Engine serves as the core, handling container creation and management. Docker images act as lightweight and self-contained blueprints of applications and their dependencies, created from Dockerfiles. Containers are the running instances of Docker images, providing isolated environments with all the necessary components to execute applications. The Engine, images, and containers work together to enable portability, scalability, and simplified management of applications across various environments.
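
For example, an image pulled from a registry becomes a running container once the Engine starts it. In the sketch below, the public nginx:latest image and the container name web are purely illustrative choices:

      docker pull nginx:latest
      docker run -d --name web nginx:latest
      docker ps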

Components of Docker

  • Docker daemon: The Docker daemon runs on the host O.S. It is responsible for running containers and managing Docker services. Docker daemons can also communicate with other daemons.

  • Docker client: Docker users interact with the Docker daemon through a client. The Docker client uses commands and a REST API to communicate with the daemon: when you run a docker command in the client terminal, the client sends it to the daemon, which carries it out. A Docker client can communicate with more than one daemon.

  • Docker host: The Docker host provides the environment in which applications are executed and run. It contains the Docker daemon, images, containers, networks, and storage.

  • Docker hub/registry: A Docker registry stores and distributes Docker images (a tagging and push sketch follows this list). There are two types of registries in Docker:

    1. Public Registry: The default public registry is called Docker Hub.

    2. Private Registry: Used to share images within an enterprise.

  • Docker images: Docker images are read-only templates used to create Docker containers; in other words, an image is a single package containing all the dependencies and configuration required to run a program.

  • Docker container: A container holds the entire package needed to run the application; in other words, a container is a running instance of an image. Containers provide lightweight virtualization on the Docker Engine: an image becomes a container when the engine runs it.
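
As a small sketch of how a registry fits in, the commands below tag a local image and push it to Docker Hub; the image name myapp:1.0 and the placeholder <dockerhub_username> are assumed example values:

      docker login
      docker tag myapp:1.0 <dockerhub_username>/myapp:1.0
      docker push <dockerhub_username>/myapp:1.0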

Docker installation on Ubuntu

  1. Update the system.

     sudo apt-get update
    
  2. Install docker.

     sudo apt-get install docker.io
    
  3. Check that Docker is installed.

     docker --version
    
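
You can also confirm that the daemon works end to end by running the official hello-world test image, which pulls a tiny image from Docker Hub and prints a confirmation message:

      sudo docker run hello-world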

Important commands

  • To add the current user to the docker group so that Docker commands can be run without sudo (log out and back in for the change to take effect).

      sudo usermod -aG docker $USER
    
  • To see all the images present on your local system.

      docker images
    
  • To login to the docker hub.

      docker login
    
  • To search for images in the docker hub.

      docker search <image_name>
    
  • To pull an image from docker-hub to a local machine.

      docker pull <image_name>
    
  • To give a name to the container and run it.

      docker run -it --name <container_name> <image_name> /bin/bash
    
  • To check the status of the Docker service.

      sudo systemctl status docker
    
  • To start the Docker service.

      sudo systemctl start docker
    
  • To start a stopped container.

      docker start <container_name>
    
  • To go inside the container.

      docker attach <container_name>
    
  • To see all the containers.

      docker ps -a
    
  • To see only running containers.

      docker ps
    
  • To stop the container.

      docker stop <container_name>
    
  • To delete the container.

      docker rm <container_name>
    
  • To exit from the docker container.

      exit
    
  • To delete an image.

      docker rmi <image_name>
    

Essential Dockerfile Concepts

  • Dockerfile: A Dockerfile is a text file that contains a set of instructions for building an image.

  • FROM: Specifies the base image. This instruction must be at the top of the Dockerfile.

  • RUN: Executes commands during the image build; each RUN instruction creates a new layer in the image.

  • MAINTAINER: Sets the author/owner of the image (deprecated; using LABEL maintainer is now preferred).

  • COPY: Copies files from the local build context into the image; a source and a destination must be provided. (It cannot download files from the internet or from a remote repository.)

  • ADD: Similar to COPY, but it can also download files from a URL and automatically extract local tar archives.

  • EXPOSE: To expose ports such as port 8080 for Tomcat, port 80 for Nginx, etc...

  • WORKDIR: To set up a working directory for the container.

  • CMD: Provides the default command that runs when the container starts (unlike RUN, which executes at image build time); it can be overridden by arguments passed to docker run.

  • ENTRYPOINT: Similar to CMD but with higher priority: the ENTRYPOINT command always runs, and CMD (or the arguments passed to docker run) is appended to it as arguments (see the sketch after this list).

  • ENV: Sets environment variables in the image.

  • ARG: Defines a build-time variable and, optionally, its default value.
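
The ENTRYPOINT/CMD interaction is easiest to see with a small sketch. In the hypothetical example below, ENTRYPOINT fixes the executable and CMD only supplies a default argument that docker run can override (the image name entrypoint-demo is just an example):

      # Dockerfile
      FROM ubuntu:22.04
      ENTRYPOINT ["echo"]
      CMD ["Hello from the default CMD"]

      # Build and run
      docker build -t entrypoint-demo .
      docker run entrypoint-demo              # prints: Hello from the default CMD
      docker run entrypoint-demo "Overridden" # CMD is replaced, but echo still runs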

Creation of Dockerfile

A Dockerfile looks something like this.
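
For illustration, the sketch below assumes a hypothetical Node.js application; the base image node:18-alpine, the file names, and port 3000 are example values rather than details taken from a real project.

      # Base image
      FROM node:18-alpine

      # Image metadata (MAINTAINER is deprecated; LABEL is preferred)
      LABEL maintainer="you@example.com"

      # Working directory inside the image
      WORKDIR /app

      # Copy dependency manifests first so the install layer can be cached
      COPY package*.json ./
      RUN npm install

      # Copy the rest of the application source
      COPY . .

      # Default environment variable (can be overridden at run time)
      ENV NODE_ENV=production

      # Document the port the application listens on
      EXPOSE 3000

      # Default command executed when the container starts
      CMD ["node", "app.js"]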

  • To create an image out of Dockerfile.

      docker build -t <image_name>:<tag_name> <path_to_build_context>
    
  • To see all the images.

      docker images
    
  • To create a container from the image.

      docker run -d -p <host_port>:<container_port> <image_name>:<tag_name>
    
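
As a concrete (hypothetical) usage of the two commands above, assuming the Node.js Dockerfile sketch is saved in the current directory:

      docker build -t myapp:1.0 .
      docker run -d -p 8080:3000 myapp:1.0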

Comprehensive Guide to Docker Volumes

Volumes in Docker offer several benefits that make them valuable in containerized environments. Firstly, volumes provide data persistence, ensuring that data stored in containers remains intact even if the containers are stopped, restarted, or removed. This enables easy recovery and prevents data loss. Secondly, volumes allow for data sharing and collaboration between containers, making it simple to share files and information across multiple containers. Thirdly, volumes enable seamless container migration, as they can be easily detached from one container and attached to another. This facilitates smooth deployment and scalability. Finally, volumes provide a convenient means for backups and restores, allowing you to easily create snapshots of your data and restore them as needed. Overall, volumes in Docker enhance data management, flexibility, and reliability within container environments.

Pros of Using Volumes in Docker

  1. Data Persistence: Volumes keep data safe even if containers are stopped or removed.

  2. Data Sharing: Volumes allow multiple containers to access and share the same data.

  3. Container Migration: Volumes make it easy to move containers between different environments.

  4. Backup and Restore: Volumes enable simple data backups and restoration.

  5. Flexibility: Volumes support various storage options, including integration with external systems.

Commands to create and manage Docker volumes

  • To create a Docker volume (the volume name is passed as the final argument).

      docker volume create --opt type=<type_name> --opt device=<path_to_volume> --opt o=bind <volume_name>
    
  • To run a container with the volume mounted.

      docker run -d -p <host_port>:<container_port> --mount source=<volume_name>,target=<path_inside_container> <image_name>:<tag_name>
    
  • To see all created volumes.

      docker volume ls
    
  • To delete the volume.

      docker volume rm <volume_name>
    
  • To remove all unused docker volumes.

      docker volume prune
    
  • To get the volume details.

      docker volume inspect <volume_name>
    
  • To get the container details.

      docker container inspect <container_name>
    
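
As a small end-to-end sketch (the volume name mydata, the nginx:latest image, and the port numbers are assumed example values), the following creates a named volume, runs a container with it mounted, and then inspects it:

      docker volume create mydata
      docker run -d -p 8080:80 --mount source=mydata,target=/usr/share/nginx/html nginx:latest
      docker volume ls
      docker volume inspect mydata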

Other Essential Concepts

What is the difference between docker attach and docker exec?

  1. docker exec creates a new process inside the container's environment, while docker attach simply connects the standard input, output, and error of the container's main process to the current terminal.

  2. docker exec is meant for running something new in an already-running container, whether a shell or some other process.
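
For example, with a running container (the container name is a placeholder), attach joins the terminal of the container's main process, while exec starts a separate interactive shell in the same container:

      docker attach <container_name>
      docker exec -it <container_name> /bin/bash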

What is the difference between EXPOSE and publish (-p) in Docker?

  • We have three options:

    1. Specify neither EXPOSE nor -p.

    2. Specify only EXPOSE.

    3. Specify both EXPOSE and -p.

  • If we specify neither EXPOSE nor -p, the service in the container is accessible only from inside the container itself.

  • If we only EXPOSE a port, the service is not accessible from outside Docker, but it is accessible from other Docker containers, which makes this suitable for inter-container communication.

  • If we both EXPOSE and publish (-p) a port, the service in the container is accessible from anywhere, even from outside Docker.

NOTE: If we publish a port with -p but do not EXPOSE it, Docker performs an implicit expose. This is because a port that is open to the public is automatically also open to other Docker containers.
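
As an illustration (nginx:latest already EXPOSEs port 80 in its Dockerfile; the container names and host port are example values), the first command leaves the service reachable only by other containers, while the second also publishes it on the host:

      # EXPOSE only: reachable by other containers, not from the host
      docker run -d --name web-internal nginx:latest

      # EXPOSE plus publish: reachable from the host at localhost:8080
      docker run -d --name web-public -p 8080:80 nginx:latest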


Docker is a popular topic in DevOps engineer interviews, especially for freshers. Practising these concepts and commands will prepare you well for your next DevOps interview.
