Sunday, September 8, 2024

Docker Commands Cheat Sheet: From Basics to Advanced

1. Docker Installation

  • Command: `docker --version`
  • Description: Check Docker installation and version.

Example: docker --version

Output: `Docker version 20.10.7, build f0df350`


2. Docker Images

List Docker Images

  •   Command: docker images
  •   Description: Lists all Docker images on your local machine.

  Example: docker images

   Output: Lists all images with columns: REPOSITORY, TAG, IMAGE ID, etc.


Pull a Docker Image

  •   Command: docker pull [image_name]
  •   Description: Downloads a Docker image from a registry (e.g., Docker Hub).

  Example: docker pull nginx

   Output: Downloads the latest nginx image.


Remove a Docker Image

  •   Command: docker rmi [image_id]
  •   Description: Removes a Docker image from your local machine.

  Example: docker rmi 7b28eabc0405

   Output: Deletes the image with the specified ID.


Search for a Docker Repository

  • Command: docker search [repository_name]
  • Description: Searches Docker Hub for repositories that match the given name or keyword.

Example: docker search rizwanzafar/pyapp 

Output: Displays a list of public repositories related to "rizwanzafar/pyapp" if available on Docker Hub.


 3. Docker Containers

Run a Docker Container

  •   Command: docker run [image_name]
  •   Description: Creates and runs a new container from a specified image.

  Example: docker run nginx

  Output: Runs an nginx server in a container.


Run a Docker Container with a Name

  •   Command: docker run --name myContainer [image_name]
  •   Description: Creates and runs a new container from a specified image, assigning it a custom name.

  Example: docker run --name reactapp rizwanzfar/reactapp

  Output: Runs the reactapp image in a container named reactapp.


Run a Docker Container with Port Mapping

  •   Command: docker run -p [host_port]:[container_port] [image_name]
  •   Description: Creates and runs a new container from a specified image, mapping a port on your machine to a port inside the container. The container port here is 3000 because a React app listens on port 3000 by default.

  Example: docker run -p 5000:3000 rizwanzfar/reactapp

  Output: Runs the reactapp server, accessible on your system at port 5000.


List Running Containers

  •   Command: docker ps
  •   Description: Lists all currently running containers.

  Example: docker ps

   Output: Displays active containers with details like CONTAINER ID, IMAGE, etc.


List All Containers

  •   Command: docker ps -a
  •   Description: Lists all containers, including stopped ones.

  Example: docker ps -a  

   Output: Shows all containers with their statuses.


Stop a Running Container

  •   Command: docker stop [container_id]
  •   Description: Stops a running container.

  Example: docker stop d4c3d4c3d4c3

   Output: Stops the container with the given ID.


 Remove a Docker Container

  •   Command: docker rm [container_id]
  •   Description: Deletes a stopped container.

  Example: docker rm d4c3d4c3d4c3

   Output: Removes the container with the specified ID.


Start a Stopped Container

  •   Command: docker start [container_id]
  •   Description: Starts a container that has been stopped.

  Example: docker start d4c3d4c3d4c3 

   Output: Restarts the container.


 Run a Container in Detached Mode 

  •   Command: docker run -d [image_name]
  •   Description: Runs a container in the background (detached mode).

  Example: docker run -d nginx

    Output: Runs nginx in the background and outputs the container ID.


4. Docker Volumes

Create a Docker Volume

  •   Command: docker volume create [volume_name]
  •   Description: Creates a new Docker volume.

  Example: docker volume create my_volume 

    Output: Creates a volume named my_volume.


List Docker Volumes

  •   Command: docker volume ls
  •   Description: Lists all Docker volumes on your system.

  Example: docker volume ls

   Output: Lists all volumes with details.


Remove a Docker Volume

  •   Command: docker volume rm [volume_name]
  •   Description: Deletes a Docker volume.

  Example: docker volume rm my_volume

    Output: Deletes the volume my_volume


5. Docker Networking

List Docker Networks

  •   Command: docker network ls
  •   Description: Lists all Docker networks.

  Example: docker network ls

   Output: Lists all networks, e.g., bridge, host, etc.


Create a Docker Network

  •   Command: docker network create [network_name]
  •   Description: Creates a new custom Docker network.

  Example: docker network create my_network 

   Output: Creates a network named my_network


Connect a Container to a Network

  •   Command: docker network connect [network_name] [container_id]
  •   Description: Connects a running container to an existing network.

  Example: docker network connect my_network d4c3d4c3d4c3 

   Output: Connects the specified container to my_network


6. Docker Compose

Run Docker Compose

  •   Command: docker-compose up
  •   Description: Builds, (re)creates, starts, and attaches to containers for a service.

  Example: docker-compose up

  Output: Starts all services defined in the docker-compose.yml
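
As a minimal sketch (service names and images are illustrative), a docker-compose.yml that this command would act on might look like this:

```yaml
# Hypothetical docker-compose.yml: a web server plus a database.
services:
  web:
    image: nginx
    ports:
      - "8080:80"        # host_port:container_port
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```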


Stop Docker Compose

  •  Command: docker-compose down
  •   Description: Stops and removes all containers, networks, and volumes defined by the docker-compose.yml.

  Example: docker-compose down

    Output: Stops and cleans up the Docker Compose environment.


Build and Run Containers in Detached Mode

  • Command: docker compose up -d --build
  • Description: Builds the images (if not already built) and starts the containers defined in a docker-compose.yml file in detached mode (in the background). The --build flag forces a rebuild of the images before starting the containers.

Example: docker compose up -d --build

Output: Rebuilds the Docker images (if necessary) and runs the containers in the background, allowing you to continue using your terminal.


7. Docker Advanced Commands

Inspect a Docker Container

  •   Command: docker inspect [container_id]
  •   Description: Displays detailed information about a container.

  Example: docker inspect d4c3d4c3d4c3

   Output: Shows JSON output with details of the container.


View Container Logs

  •   Command: docker logs [container_id]
  •   Description: Fetches and displays logs from a container.

  Example: docker logs d4c3d4c3d4c3 

   Output: Displays the container’s logs.


Run a Command in a Running Container

  Command: docker exec [container_id] [command]

  Description: Executes a command inside a running container.

  Example: docker exec d4c3d4c3d4c3 ls /var/logs

  Output: Lists logs inside the container.


Prune Unused Docker Resources

  •   Command: docker system prune
  •   Description: Removes all stopped containers, unused networks, and dangling images.

  Example: docker system prune

   Output: Frees up space by removing unused Docker resources.


Build and Push an Image to Docker Hub

  •   Command: docker build -t [repository_name]:[tag] . && docker push [repository_name]:[tag]
  •   Description: Builds an image from a Dockerfile and pushes it to Docker Hub.

  Example: docker build -t myrepo/myimage:v1 . && docker push myrepo/myimage:v1 

  Output: Builds and uploads the image to your Docker Hub repository.
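
For context, a minimal Dockerfile such a build might start from (base image, file names, and port are illustrative):

```dockerfile
# Hypothetical Dockerfile for a small Node.js app.
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```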


8. Docker Swarm (Orchestration)

Initialize Docker Swarm

  •   Command: docker swarm init
  •   Description: Initializes a new Docker Swarm cluster

  Example: docker swarm init

   Output: Initializes the Swarm and displays the join token.


Deploy a Stack to Docker Swarm

  •   Command: docker stack deploy -c [stack_file] [stack_name]
  •   Description: Deploys a stack (collection of services) to Docker Swarm

  Example: docker stack deploy -c docker-stack.yml mystack 

   Output: Deploys the stack defined in docker-stack.yml
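
A sketch of what docker-stack.yml could contain (service name and image are illustrative):

```yaml
# Hypothetical docker-stack.yml: one replicated web service.
version: "3.8"
services:
  web:
    image: nginx
    deploy:
      replicas: 3
    ports:
      - "8080:80"
```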


Update a Service in Docker Swarm

  •   Command: docker service update [service_name]
  •   Description: Updates the configuration of an existing service in the Swarm.

  Example: docker service update --replicas 5 myservice

   Output: Scales the service to 5 replicas




Conclusion

This cheat sheet covers the essential Docker commands you'll need from basic container management to advanced orchestration with Docker Swarm. Each command includes a brief description and an example, making it easy to follow along and apply to your Docker projects.



Thursday, June 27, 2024

How to Set Up and Manage a Docker Swarm: A Comprehensive Guide

Introduction

Docker Swarm is a powerful container orchestration tool that allows you to manage a cluster of Docker nodes as a single virtual system. In this guide, we will walk you through the steps to set up and manage a Docker Swarm, including initializing the Swarm, adding worker and manager nodes, troubleshooting connection issues, creating and scaling services, and rolling out updates.

Initializing Docker Swarm

To start using Docker Swarm, you need to initialize it on your primary machine, which will act as the manager node. Run the following command to initialize Docker Swarm:

command: docker swarm init --advertise-addr 192.168.67.9

This command sets up your machine as the manager and advertises its IP address for other nodes to join the Swarm.



Adding Worker Nodes

Worker nodes perform tasks assigned by the manager node. To add a worker node to your Swarm, you need a join token. Obtain this token by executing the following command on the manager node:

command: sudo docker swarm join-token worker

You will receive a command with a token that looks similar to the following:

command: docker swarm join --token SWMTKN-1-1cjqj7bvb19mgsi26mc4qfgmnrtturj959mb1ne9pu1im7a8vw-9hl3z4iwjgrucleh2vr986rrg 192.168.67.9:2377

Run this command on the worker node to join it to the Swarm.


Adding Manager Nodes (Optional)

To add more manager nodes for high availability and fault tolerance, obtain a join token specifically for manager nodes by executing:

command: sudo docker swarm join-token manager

Use the provided token to join the additional manager nodes. 


Troubleshooting Connection Issues

If you encounter connection issues while adding a worker node, it might be due to firewall settings. On the manager node, run the following commands to open the necessary port:

commands:
sudo firewall-cmd --add-port=2377/tcp --permanent
sudo firewall-cmd --reload

After adjusting the firewall settings, try adding the worker node again.




Verifying Node Addition

To verify that a worker node has been successfully added, execute the following on the manager node:

command: sudo docker node ls

This command lists all nodes in the Swarm along with their roles and status.


Additionally, you can check the Swarm section of the Docker info output by running:

command: sudo docker info

If the node was added successfully, it will be listed in the Swarm section.




Creating a Service

Docker Swarm allows you to deploy services across the cluster. For example, if you have an image named nodeapp:crud_exit, you can create a service with three replicas running on port 4000 using the following command:

command: docker service create --name nodeapp --replicas 3 -p 4000:4000 rizwanzafar/nodeapp:crud_exit

This command instructs Docker Swarm to run three instances of the nodeapp service, each listening on port 4000.

To see which nodes are running the replicas, use:

command: docker service ps nodeapp





Scaling the Service

You might need to scale your services based on demand. To increase or decrease the number of replicas of a running service, run:

command: docker service scale nodeapp=5

This command scales the nodeapp service to five replicas.



Rolling Out Updates

Updating a service in Docker Swarm is straightforward. If you need to deploy a different image or a new tag of the same image, use the following command:

command: docker service update --image rizwanzafar:01 nodeapp

This command updates the nodeapp service to use the new image rizwanzafar:01.



Node Availability (Optional)

If you want a newly created service not to be deployed to a specific node, you can drain that node's availability:

command: docker node update --availability drain node-id
check node status: docker node ls

Later, when you want to revert the node back to available status, run the following:

command: docker node update --availability active node-id
check node status: docker node ls

Managing Node Roles in Docker Swarm

Use the following command to demote a manager node to a worker node:

  • docker node demote <node_id>

Use the following command to promote a worker node to a manager node:

  • docker node promote <node_id>

Conclusion

Setting up and managing a Docker Swarm is an efficient way to handle containerized applications at scale. By following this guide, you can initialize a Swarm, add worker and manager nodes, troubleshoot issues, create and scale services, and roll out updates with ease. Docker Swarm provides robust features for container orchestration, making it an essential tool for modern DevOps practices.

Feel free to leave comments or questions below, and don't forget to share your experience with Docker Swarm!


Wednesday, June 26, 2024

How to Mount a Docker Container with a Volume on Different Servers

Mounting a Docker container with a volume that resides on a different server involves setting up network connectivity and configuring a network file system such as NFS. This guide walks you through setting up NFS to share a Docker volume across servers. In this example, the volume lives on a Red Hat server and the container runs on Ubuntu.


Step-by-Step Guide to Setting Up NFS


1. Install NFS Server on Red Hat

First, install and configure the NFS server on your Red Hat server:
commands:
sudo yum install nfs-utils
sudo systemctl enable nfs-server 
sudo systemctl start nfs-server

2. Export the Docker Volume Directory

Edit the NFS export configuration to share the Docker volume directory:
command:
sudo vi /etc/exports
Add the Docker volume path along with the IP of the server that will access it (the Ubuntu machine):

example:
/var/lib/docker/volumes/testData/_data ip_of_ubuntu(rw,sync,no_subtree_check)



3. Restart NFS Service

Restart the NFS service to apply the changes:
command: sudo systemctl restart nfs-server

Mounting the NFS Volume on Ubuntu


4. Install NFS Client on Ubuntu

Install the NFS client utilities on your Ubuntu server:
commands:
sudo apt-get update 
sudo apt-get install nfs-common

5. Mount the NFS Share on Ubuntu

Create a mount point and mount the NFS share:
commands: 
sudo mkdir -p /mnt/tetData  # the mount point on Ubuntu where the volume will be mounted
sudo mount -t nfs 192.168.67.9:/var/lib/docker/volumes/tetData/_data /mnt/tetData
Replace 192.168.67.9 with the IP address of your Red Hat server.

Using the NFS Volume in Docker

6. Update Docker Compose Configuration

Modify your docker-compose.yml to use the NFS-mounted directory.
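
A sketch of what that could look like, assuming a MongoDB container and the /mnt/tetData mount point from step 5 (service name and container path are illustrative):

```yaml
# Hypothetical compose file using the NFS-backed directory as a bind mount.
services:
  mongo:
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      - /mnt/tetData:/data/db   # NFS share mounted on the Ubuntu host
```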


Additional Considerations

  • NFS Security: Secure your NFS configuration based on your network and security requirements.
  • Persistent Mount: To make the NFS mount persistent across reboots, add an entry to /etc/fstab on your Ubuntu server.
  • command:  echo '192.168.67.9:/var/lib/docker/volumes/tetData/_data /mnt/tetData nfs defaults 0 0' | sudo tee -a /etc/fstab

Permission Issues

1. Verify and Adjust Permissions on the Red Hat Server:
   

   Ensure that the directory and its contents on the Red Hat server have the appropriate permissions. You need to set the ownership and permissions so that the Docker container running on Ubuntu can access it.


 commands:  

   sudo chown -R 999:999 /var/lib/docker/volumes/tetData/_data

   sudo chmod -R 755 /var/lib/docker/volumes/tetData/_data


 Here, `999:999` corresponds to the `mongodb` user and group ID used by the official MongoDB Docker image.

2. Check and Adjust Local Directory Permissions on Ubuntu


   Ensure the local mount point directory on Ubuntu has the correct permissions:


   commands

   sudo chown -R 999:999 /mnt/tetData

   sudo chmod -R 755 /mnt/tetData


3. Restart Services:


Restart the NFS server on the Red Hat machine to apply changes:

commands: 

sudo systemctl restart nfs-server

   


By following these steps, you can mount a Docker container on one server to a volume residing on another server, leveraging the NFS for seamless data sharing. Adjust configurations and permissions according to your specific setup and requirements.




Sunday, June 23, 2024

How to Install Docker on Ubuntu 22.04 (Jammy Jellyfish)

Installing Docker on Ubuntu 22.04 is a straightforward process. Follow these steps to get Docker up and running on your system:

Step-by-Step Guide to Install Docker on Ubuntu 22.04

  1. Update the Package List

    First, ensure your package list is up to date:
    command: sudo apt update

  2. Install Required Packages

    Install the necessary packages for Docker installation:
    command: sudo apt install apt-transport-https ca-certificates curl software-properties-common

  3. Add the Docker GPG Key

    Add Docker’s official GPG key to your system:
    command: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

  4. Add the Docker Repository

    Set up the Docker repository:
    command: echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

  5. Update the Package List Again

    Refresh your package list to include Docker’s repository:
    command: sudo apt update

  6. Install Docker

    Install Docker CE (Community Edition):
    command: sudo apt install docker-ce

  7. Verify Docker Installation

    Check that Docker is installed and running:
    command: sudo systemctl status docker


By following these steps, you will successfully install Docker on your Ubuntu 22.04 system, allowing you to start leveraging Docker containers for your projects.

For more detailed guides and troubleshooting, refer to the official Docker documentation.



Tuesday, June 11, 2024

Communicate Between Two Containers | Docker Network

Running a Node.js application and a MongoDB database in separate Docker containers requires proper configuration for communication, due to the isolated nature of containers. Docker networks facilitate efficient and smooth communication between containers. Below are the steps to set up seamless communication between your Node.js app and MongoDB using Docker networks.


Step-by-Step Guide


1. Create a Docker Network

First, create a Docker network to enable communication between the containers.

Command: docker network create mynet



2. Run the MongoDB Container with the Network

Next, run your MongoDB container and attach it to the created network. Ensure you assign a name to the MongoDB container. You do not need to map ports at this step.

Command: 

 - docker run -d --rm --name mongo --network mynet ImageID





3. Run the Node.js Application Container with the Network

Before running the Node.js application container, update the MongoDB connection URL in your app's code to use the name of the MongoDB container (mongo). Because the code has changed, rebuild the Node.js image. Alternatively, in a development environment you can use a bind mount so that code changes are picked up without rebuilding.

Example MongoDB URL Update:

const mongoURL = "mongodb://mongo:27017/mydatabase";
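
After rebuilding, the Node.js container can be started on the same network; a sketch, assuming the image is tagged nodeapp and the app listens on port 4000:

```shell
# Rebuild the image (the connection URL changed), then run it on mynet.
docker build -t nodeapp .
docker run -d --rm --name nodeapp --network mynet -p 4000:4000 nodeapp
```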


 






Verifying Communication

Your Node.js application should now be accessible at http://localhost:4000. To verify that the communication between the Node.js app and MongoDB is working correctly, you can use a tool like Postman to send API requests.

When you hit your API endpoint in Postman, the Node.js app should fetch data from the MongoDB database running in the other container. This confirms that both containers are communicating via the Docker network.

 



 

Conclusion

By following these steps, you can set up seamless communication between your Node.js application and MongoDB running in separate Docker containers. Using Docker networks ensures efficient and smooth interactions, allowing your services to function correctly in isolated environments.

Ensure to keep your Docker network and container configurations updated for optimal performance and security. Happy Dockerizing!



Connecting a React App Inside Docker to a Node.js App on the Host Machine

Scenario:

  • Node.js app: Running locally on your OS at http://localhost:5000.
  • React app: Running inside a Docker container.

Steps to Connect:

  1. Node.js App (Running Locally):
    Ensure your Node.js app is accessible on http://localhost:5000.

  2. React App (Running Inside Docker):
    When making API requests from your React app to the Node.js app, use http://host.docker.internal:5000 instead of http://localhost:5000.


Explanation:

  • host.docker.internal allows the Docker container to connect to the host machine, and 5000 is the port where the Node.js app is running.

This setup ensures that your React app inside Docker can communicate with the Node.js app running on your local machine.


Wednesday, June 5, 2024

Simplifying Docker Operations with Docker Compose

 



Introduction:

Docker Compose revolutionizes Docker image management by eliminating the need for repetitive command typing. Instead of manually executing complex Docker run commands each time, Docker Compose allows effortless container management through a simple configuration file.


Installing Docker Compose

  • Ensure Docker is installed and operational by running docker --version.

  • Download Docker Compose using command:
    sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

  • Grant executable permissions to Docker Compose command:
    sudo chmod +x /usr/local/bin/docker-compose

  • Create a symbolic link for easy access command :
    sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

  • Verify the installation with command : docker-compose --version

Running Single Images with Docker Compose:

  • Create a docker-compose.yml file

  • Configure the services and their dependencies in the file.

  • Execute the command docker compose up to launch the containers.



Optimizing Docker Operations:

Running Multiple Containers:
Docker Compose allows running multiple containers simultaneously with ease. By defining services in the docker-compose.yml file, Docker Compose automatically handles network creation and inter-container communication. You can also configure port bindings and declare dependencies between containers that depend on each other.
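
A sketch of such a file, with port binding and a dependency declared between services (image names are illustrative):

```yaml
# Hypothetical two-service setup: the app starts after the database.
services:
  app:
    image: nodeapp
    ports:
      - "4000:4000"
    depends_on:
      - db
  db:
    image: mongo
```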




Another Example:

Now we are working with a separate Docker Compose file from the one mentioned earlier. This file includes named volumes, anonymous volumes, and bind mounts. Additionally, it runs a Docker image configured with an .env file. This setup ensures efficient volume management and environment configuration for seamless application deployment.
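
A sketch of such a file, showing all three volume types and an .env file (paths and names are illustrative):

```yaml
# Hypothetical compose file with named volume, anonymous volume, and bind mount.
services:
  app:
    build: .
    env_file:
      - .env                  # environment variables for the container
    ports:
      - "3000:3000"
    volumes:
      - appdata:/app/data     # named volume
      - /app/node_modules     # anonymous volume
      - ./src:/app/src:ro     # bind mount (read-only)

volumes:
  appdata:
```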





Running Specific Services:

If you only need to run a specific service from the docker-compose.yml file, Docker Compose provides the flexibility to target individual services for execution.
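
For example, assuming the file defines a service named web, you can start just that service (and its dependencies):

```shell
docker compose up -d web
```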





Network Creation:

Docker Compose simplifies network management by automatically creating networks for connected containers. This seamless connectivity enhances communication between containers, fostering efficient application development and deployment.






Stopping and Cleaning Up:


To halt Docker Compose and remove all associated containers and networks, utilize the command demonstrated below.
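
That command, with an optional flag to also remove named volumes:

```shell
# Stop and remove the containers and networks created by docker compose up.
docker compose down

# Additionally remove named volumes declared in the compose file.
docker compose down -v
```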







By leveraging Docker Compose, developers can streamline Docker operations, improve workflow efficiency, and simplify container management tasks.


Monday, June 3, 2024

Docker Bind Mount: Keeping Your Containers Updated



 Docker Bind Mount: Keeping Your Containers Updated





When working with Docker, you might encounter scenarios where a local file needs to stay in sync with a file inside a Docker container. This is where Docker bind mounts come into play. Bind mounts allow you to bind a file or directory on your host machine to a file or directory inside your container. Bind mounts are part of Docker's volume system; here are the key differences between anonymous volumes, named volumes, and bind mounts:

- Anonymous volume: created by Docker with a random name (e.g. `-v /path/in/container`); it persists data but is hard to reference or reuse later.

- Named volume: created and managed by Docker under a name you choose (e.g. `-v myvolume:/path/in/container`); easy to reuse across containers.

- Bind mount: maps an exact host path into the container (e.g. `-v /host/path:/container/path`); the host controls the contents.




 What is a Docker Bind Mount?


A Docker bind mount allows a file or directory from your host machine to be directly accessible within a container. This means that any updates to the local file are immediately available inside the container.


 Example Scenario


Consider you have a file located at `/opt/issues.txt` on your host machine. Your Docker container is designed to manage and process issues listed in this file. However, if the `/opt/issues.txt` file is updated on the host machine, the Docker container won't see these updates because the file exists outside the container.


To solve this, you can use a bind mount to link the `/opt/issues.txt` file on your host machine with a file inside the Docker container. This way, any updates to the local file are instantly reflected inside the container.


 How to Use Docker Bind Mount


Here's a step-by-step example of how to set up a bind mount:


1. **Identify the local file and the target path inside the container**: 

     Note: make sure the local path is absolute.

   - Local file: `/opt/issues.txt`

   - Target path inside the container: `/container/issues.txt`


2. **Run the Docker container with the bind mount**:

   ```bash

   docker run -d -v /opt/issues.txt:/container/issues.txt --rm  imageID

   ```


 Breakdown of the Command


- `docker run`: Starts a new Docker container.

- `-v /opt/issues.txt:/container/issues.txt`: The `-v` flag specifies the bind mount. The local file path (`/opt/issues.txt`) is mapped to the target path inside the container (`/container/issues.txt`).

- `--rm`: Automatically removes the container when it exits.

- `imageID`: The ID of the Docker image you want to run.


3. By default, bind mounts in Docker are set to read and write mode. This allows changes made outside the container to be reflected inside it, but it also means the container can modify your external files, which can be risky. To enhance security and maintain the integrity of your files, you can use read-only bind mounts. This setup ensures that while you can update files from outside the container, the container itself cannot alter your external files or code. Using read-only bind mounts is a best practice for protecting your development environment and ensuring consistency.

command 
```bash

   docker run -d -v /opt/foldername:/container/foldername:ro --rm  imageID

   ```

Note: the step above makes everything read-only. If you want some path to remain writable for the container, you can achieve this by adding an anonymous volume for that path.


```bash

   docker run -d -v /opt/foldername:/container/foldername:ro -v /container/folder_to_keep_writable --rm imageID

   ```

 Benefits of Using Bind Mounts


- Immediate Updates: Any changes to the local file are instantly available inside the container.

- Persistent Data: Data in bind-mounted files persists even if the container is removed, making it ideal for development and debugging.

- Flexibility: Easily share files between the host and the container without rebuilding the Docker image.


 Conclusion

Docker bind mounts are a powerful feature that helps you keep your containers synchronized with the host machine. By using bind mounts, you can ensure that your container has access to the most up-to-date information, improving the efficiency and reliability of your containerized applications.


Optimize your workflow and keep your containers updated with Docker bind mounts!





Thursday, May 30, 2024

 How to Install Docker on Red Hat Linux | Redhat


Main Components of Docker

Understanding Docker's key components is essential for effective use:

1. Dockerfile: A script with instructions to build Docker images.

2. Docker Image: A lightweight, standalone, executable package with everything needed to run a piece of software.

3. Docker Container: A runtime instance of a Docker image.

4. Docker Registry: A storage and distribution system for Docker images.


 Step-by-Step Guide to Installing Docker on Red Hat


 Step 1: Install Yum Utilities

First, you need to install the `yum-utils` package to manage your repositories efficiently. Open your terminal and run:


command: sudo yum install -y yum-utils



 Step 2: Add the Docker Repository

Next, add the Docker repository to your system to ensure you get the latest version. Execute the following command:


command: sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo



Step 3: Install Docker

Now, you can install Docker and its associated components. Run the following command to install Docker Engine, CLI, and necessary plugins:

command: sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin


 Step 4: Start Docker

Finally, start the Docker service with the command:

command: sudo systemctl start docker


For more detailed information, you can refer to the official Docker documentation [here](https://docs.docker.com/engine/install/centos/).


By following these steps, you will have Docker up and running on your Red Hat Linux system. Happy containerizing!