How to install Docker on RHEL 8 beta


This post guides you through the steps required to get started with Docker on a RHEL 8 beta system. RHEL 8 beta was announced a few months back and is available to download for testers and developers. It comes with many advancements and new features, and if you want to install and explore RHEL 8, you can read our earlier post that shows the detailed steps of RHEL 8 installation on VMware Workstation.
One of the new features added in RHEL 8 is containers without daemons: Buildah and Podman can help you build containers without any requirement for Docker.
There are two versions of Docker available: Docker CE [Community Edition] and Docker EE [Enterprise Edition].
If you are wondering whether RHEL 8 beta supports Docker: there is no official documentation regarding the installation of Docker on RHEL 8 beta, so we tested the installation of Docker CE on RHEL 8 beta ourselves to play with Docker on RHEL 8.

Steps to Install Docker CE on RHEL 8 beta

Before installing Docker CE on a machine, we need to set up the Docker repository. Once this is done, we can install Docker from the repository.

1. Set up the repository

Install the required packages. dnf-utils, device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver.
Note: DNF (Dandified YUM) is the package management utility in RHEL 8.
# dnf install -y dnf-utils \
  device-mapper-persistent-data \
  lvm2
Use the following command to set up the repository. There is no RHEL 8 specific repository yet, so we use Docker's CentOS repository, which works on RHEL 8 beta.
# dnf config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
Now the Docker CE repository is set up; you can verify it using dnf repolist or yum repolist.

2. Install Docker CE

You can install the latest version of Docker CE using the below command.
# dnf install docker-ce
While installing, if you are prompted to accept the GPG key, accept it by entering y.
Note: We encountered the error below while running the install command for the first time. When we re-ran the install command, the error did not appear; the GPG key prompt came up, and after entering y the installation went smoothly and finished in about a minute.
Curl error (6): Couldn’t resolve host name for [Could not resolve host:]

Once installed, you will see a download complete message with details of the packages that were installed.

The next step is to start and enable Docker using systemctl:
# systemctl start docker
# systemctl enable docker
Finally, you are done with the Docker CE installation on your RHEL 8 machine. Now let's verify that it is installed correctly by running the hello-world image.
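A quick check can be done with the standard hello-world test image (this assumes the docker service is running):

```shell
# Run the test image; Docker pulls it if missing and prints a greeting
docker run hello-world

# Confirm the installed version and that the daemon is active
docker --version
systemctl is-active docker
```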

That’s all for this post; we hope you are able to install Docker on your Red Hat Linux 8 machine.
If you have thoughts on this, please feel free to comment below.

Different Instructions in a Dockerfile and their usage

Learn about the different components/instructions that make up a Dockerfile.

What is a Dockerfile?

The set of instructions for building a container image is written in a Dockerfile. It is a text file that contains the base image, all the settings, and the list of commands required for preparing and starting a container.

Dockerfile basic Example

A basic example of a Dockerfile for installing and running an Apache web server:
FROM centos:latest


RUN yum update && yum -y install httpd

RUN echo "Hello, Welcome to the world of Dockers" > /var/www/html/index.html


CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
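Assuming the example above is saved as Dockerfile in the current directory, it can be built and tested like this (the image name webserver-demo is just an example):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t webserver-demo .

# Start a container in the background, mapping host port 8080 to container port 80
docker run -d -p 8080:80 webserver-demo

# The welcome page should now be served
curl http://localhost:8080/
```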

Instructions in Dockerfile

The syntax of a Dockerfile is simple and consists of the instructions below:


FROM:

FROM is the first instruction in any Dockerfile and it defines the base image to use. The base image can be any existing Docker image; typically it is an operating system image or a specific application framework.
FROM <image>
FROM <image>:<tag>

MAINTAINER:  [deprecated]

MAINTAINER is used to set the author of the image. You can achieve the same with the LABEL instruction.


RUN:

RUN is used to run or execute commands in the container during the build.
RUN <command> (shell form; the command is run in a shell, which by default is /bin/sh -c)
RUN ["executable", "param1", "param2"] (exec form)


ADD:

ADD is used to copy files, directories, or remote file URLs from <src> into the container at <dest>.
A <src> file can also be a tar archive; it will be unpacked for you as a directory into the Docker image.
If the <dest> path does not exist, it will be created for you.
ADD <src> <dest>
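For example (the paths and URL here are illustrative), ADD behaves differently for archives and URLs:

```dockerfile
# A local tar archive is unpacked automatically into the destination directory
ADD app.tar.gz /opt/app/

# A remote URL is downloaded to the destination (but not unpacked)
ADD https://example.com/config.json /etc/myapp/config.json
```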


COPY:

COPY is similar to ADD, but it does not support the tar-extraction and URL functionality. It copies the files or directories from <src> and adds them to the destination path in the image.
COPY <src>… <dest>


CMD:

When the container starts up, this command is executed. CMD is used to provide defaults for an executing container. The defaults can include an executable, or the executable can be omitted, in which case you must specify an ENTRYPOINT instruction as well. If there is more than one CMD in the Dockerfile, only the last CMD takes effect.
If a user specifies arguments to docker run, they override the defaults given in CMD.
Usage:
CMD ["executable","param1","param2"] (exec form)
CMD ["param1","param2"] (default parameters to ENTRYPOINT)


LABEL:

LABEL is used to add metadata to an image. A label is a key-value pair, and an image can have more than one label. You can view image labels using the docker inspect command.
Example of LABEL:
LABEL version="1.0"


EXPOSE:

EXPOSE informs Docker that the container listens on the specified network port at runtime. You can specify the protocol as well (TCP or UDP); TCP is the default if none is specified.
EXPOSE only documents the ports; to actually publish and map one or more ports, use the -p option with docker run.
Regardless of the EXPOSE instruction, the settings can be overridden at runtime using the -p flag.
For example:
docker run -p 80:80/tcp -p 5050:5050 ….
EXPOSE <port> [<port>/<protocol>…]


ENV:

The ENV instruction is used to set the environment variable <key> to the value <value>.
The first format below sets a single variable to a value, while the second format can set multiple variables at one time. You can view all the set environment variables using the docker inspect command and change them using docker run --env <key>=<value>.
ENV <key> <value>
ENV <key>=<value> …
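A short sketch of both forms (the variable names and values are made up):

```dockerfile
# Single variable, space-separated form
ENV APP_HOME /opt/app

# Multiple variables in one instruction, key=value form
ENV APP_PORT=8080 APP_ENV=production
```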


ENTRYPOINT:

ENTRYPOINT allows you to configure a container that will run as an executable.

What’s the difference between CMD and ENTRYPOINT?

Docker has a default entrypoint (/bin/sh -c) but no default command. For example, in the command below, the default entrypoint is /bin/sh -c and the CMD is bash. This means CMD is run via the ENTRYPOINT; in other words, CMD supplies the parameters of the entrypoint.
docker run -i -t ubuntu bash
Entrypoint: /bin/sh -c
Cmd: bash
Another example: suppose you set ENTRYPOINT to /bin/cat. When you run docker run image /etc/passwd, /etc/passwd is the command; it is passed to the ENTRYPOINT /bin/cat, and what actually executes is /bin/cat /etc/passwd.
In conclusion, ENTRYPOINT specifies a command that will always be executed when the container starts, while CMD specifies arguments that will be fed to the ENTRYPOINT.
ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred)
ENTRYPOINT command param1 param2 (shell form)
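The interplay of the two can be sketched in a small Dockerfile (the image and the echoed text are illustrative):

```dockerfile
FROM ubuntu:latest

# The executable that always runs when the container starts
ENTRYPOINT ["/bin/echo"]

# Default argument, replaced by anything passed after the image name in docker run
CMD ["Hello from the default CMD"]
```

Running docker run <image> prints the default text, while docker run <image> "custom text" feeds "custom text" to /bin/echo instead.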


VOLUME:

VOLUME defines a path, or creates a mount point, in the container that Docker exposes to the host system; it can be mapped using the -v option of docker run.
The docker run command initializes the specified volume with any data that exists at that path within the base image.
VOLUME ["/path"]
VOLUME /path1 /path2 …


USER:

The USER instruction sets the user name or UID to use when running the image. A GID or user group can also be specified with the USER instruction.
USER <user>[:<group>]


WORKDIR:

The WORKDIR instruction sets the working directory for any ADD, COPY, CMD, RUN or ENTRYPOINT instruction that follows it. It can be used multiple times in a Dockerfile.
WORKDIR </path>


ARG:

The ARG instruction defines a variable that users can pass at build time to the builder with docker build, using --build-arg <variable>=<value>.
A Dockerfile can contain multiple ARG instructions. An environment variable defined with an ENV instruction always overrides an ARG instruction of the same name.
It is not recommended to pass user credentials or GitHub keys as build-time variables, as they will be visible in the output of the docker history command.
ARG <name>[=<default value>]
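A minimal sketch (the variable name APP_VERSION is just an example):

```dockerfile
FROM centos:latest

# Default value, used when no --build-arg is supplied
ARG APP_VERSION=1.0

# The value is available during the build
RUN echo "Building version ${APP_VERSION}"
```

Building with docker build --build-arg APP_VERSION=2.0 . overrides the default.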


SHELL:

The SHELL instruction is particularly useful on Windows; it allows you to override the default shell, which is ["/bin/sh", "-c"] on Linux and ["cmd", "/S", "/C"] on Windows.
The SHELL instruction also allows the use of alternate shells such as zsh, csh or PowerShell.
SHELL ["<executable>", "<param1>", "<param2>"]
Please comment below with your thoughts on Dockerfiles and let us know if you have any queries.


How to Change Hostname in Linux (CentOS 7.x/RHEL 7.x)

How to Change Hostname in a Linux Machine using hostnamectl command


This article shows how you can change your Linux machine's hostname using the hostnamectl command. If you did not specify a hostname while installing Red Hat 7.x or CentOS 7.x, "localhost.localdomain" is assigned as the default hostname for that machine. You can easily change the hostname of your Red Hat or CentOS Linux machine using the hostnamectl command. This method works on CentOS 7.x, RHEL 7.x and RHEL 8 operating systems.

Method 1: hostnamectl

The hostnamectl command can be used to change or query the system hostname and related settings. The advantage of using hostnamectl is that no machine reboot is required for the changes to take effect. Let us learn more about the hostnamectl command.

Types of Hostnames

  1. Pretty Hostname: The high-level pretty hostname is in a human-readable format and can include all kinds of special characters. Examples of pretty hostnames are “John’s Laptop” or “Database Server”.
  2. Static Hostname: It is the traditional hostname, which can be chosen by the user and used to initialize kernel hostname at boot. It is stored in the /etc/hostname file.
  3. Transient Hostname: It is the dynamic hostname maintained by the kernel and is initialized to the static hostname by default.

Changing your Hostname in CentOS 7 or RHEL 7 /RHEL 8

In CentOS and RHEL 7.x onwards, hostname query and control are managed by the hostnamectl command. You can use hostnamectl to change or set hostnames. As described above, all three hostnames, viz. the pretty, static and transient hostnames, can be set using hostnamectl.

To view all the Host names

To query the current hostname information, issue the hostnamectl command without any arguments.
# hostnamectl

To change all the Hostnames

To change all the hostnames at once, use the below command as the root user. This changes the pretty, static and transient hostnames alike.
# hostnamectl set-hostname name

To Set a Particular hostname

In case you want to set a particular hostname, use the below command with the required option.
# hostnamectl set-hostname name <option>
The option can be either:
  • --static
  • --pretty
  • --transient
Example of setting Pretty hostname:
# hostnamectl set-hostname "John's Laptop" --pretty
Example of setting Static hostname:
# hostnamectl set-hostname linuxforgeek --static

To clear a particular hostname

Use the below command as the root user to clear a particular hostname and revert it to its default.
# hostnamectl set-hostname "" <option>
In the above command, "" is an empty string that clears the hostname, and <option> can be either --pretty, --static or --transient.

Making new hostname effective

Once you are done changing the hostname of your Linux server/machine, you can open a new terminal session, or issue the shell command bash, to see the updated hostname in the current session.
# bash
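Putting it together, a typical session might look like this (the hostnames are examples):

```shell
# Set the static hostname
hostnamectl set-hostname webserver01 --static

# Set a pretty hostname as well
hostnamectl set-hostname "Web Server 01" --pretty

# Verify all the hostnames
hostnamectl
```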

Method 2 : modify /etc/hostname

Another method to change the hostname in RHEL 7.x/CentOS 7.x is to edit the hostname in the /etc/hostname file, followed by a reboot.
# vi /etc/hostname
# reboot
That is all for this post, feel free to comment below for any query.


Top DevOps Interview Questions and Answers for Experienced Professionals

Top DevOps Interview Questions

1. What are the Benefits of DevOps?

The benefits of DevOps are as follows:
  • Release Velocity: Following DevOps practices for software development helps in increasing release velocity. We can release code to production more often.
  • Shorter Development Cycle: With the DevOps approach, we have continuous software delivery. The complete development cycle, from the design phase to deployment in production, becomes shorter.
  • Defect Detection: With the DevOps approach, we can identify defects much earlier and fix them before releasing to production, which helps to increase the quality of the software.
  • Fast Recovery from Failure: In case of a failure, we can recover very fast with the DevOps process.
  • Collaboration: With DevOps, the silos between development and operations are broken; removing this wall of confusion leads to better collaboration between dev and ops and more productive teams.

2. What are the DevOps tools that you use?

We use the following tools in DevOps:
  • Jenkins: Jenkins is a popular open source automation server used as a continuous integration tool. With it we can build, test, deploy and deliver software.
  • Git: a version control system that helps to track code changes.
  • Docker: a popular containerization tool.
  • Ansible, Puppet or Chef: popular configuration management tools.
  • Nagios: Nagios is used for continuous monitoring and for setting up alerts based upon threshold values.
  • Selenium: a continuous testing tool.

3. Explain DevOps workflow in your Organization or How do all DevOps tools work together?

Below is the general logical flow in our organization:
  1. The Jira tool is used for writing all the requirements and tasks and tracking them.
  2. Based on the assigned tasks, developers write the code and check it in to the Git repository. Git is the version control system.
  3. After the developer’s code is committed to the repository, Jenkins pulls this code from the repository and builds it using Ant or Maven.
  4. The build process is automated using Jenkins.
  5. During the build process, automated tests run to validate the code checked in by the developer.
  6. A configuration management tool like Puppet or Ansible is used to deploy and provision the test environment, and then Jenkins releases the code to the test environment.
  7. Tools like Selenium are used to test the code.
  8. Once the code is tested, Jenkins sends it for deployment on the production server.
  9. To deploy the same code on multiple hosts, Docker images are used.
  10. Once the code is deployed to production, it is monitored using continuous monitoring tools like Nagios.

4. What is CICD in DevOps?

CICD stands for Continuous Integration and Continuous Delivery.
Continuous Integration: It is a development practice in which all developers are required to integrate code into the main branch several times a day. Each code check-in triggers an automated build and tests that verify it, which helps to detect errors early. Thus CI removes the problem of finding issues late in the cycle.
Continuous Delivery: It is a development practice by which software is delivered in shorter cycles. All changes, whether new features, configuration changes or bug fixes, are moved to production safely and quickly, which helps to achieve low-risk releases, faster time to market, higher-quality software and reduced costs.

5. What is Version Control and its benefits?

Version control is a system that records changes to files over time. It allows you to revert files, or an entire project, to a previous state; see who last modified a particular file; and compare changes over time.
Benefits of Version Control System(VCS):
  • All team members can work freely on any file at any time; changes can later be merged into a common version using the VCS.
  • Versioning is maintained by the VCS; you can check out any past version of a file or the project at any time.
  • You can track what changed in a particular file, and in which commit.
  • A distributed VCS like Git allows every team member to have the complete project as a local clone/repository.

6. Which VCS tool are you using?

You can describe the VCS tool used in your environment. Git and SVN are common version control systems.

7. What is GIT and what are its benefits?

Git is a distributed version control system that helps to track changes to files and revert to any particular change or commit. Distributed implies that it does not rely on a central server for storing all the code, files or project history. Instead, each developer clones a copy of the repository, referred to as a local repository, onto his local machine, so there is no danger of losing data in case of a server outage.

8. How do you take DevOps approach with Amazon Web Services?

As per DevOps, infrastructure is treated as code. AWS helps you automate your infrastructure using services like CloudFormation (or third-party tools like Terraform). We can use AWS CloudFormation to create and deploy AWS resources by using templates. We can design our build process to create deployable units, and all the dependencies and parameters can be passed in the templates. CloudFormation reads these templates and deploys the applications in the AWS cloud.

9. Which scripting languages you use for DevOps?

You can name the scripting language you are comfortable with and use in your environment. Some of the popular scripting languages are:
  • Bash: on Unix-based systems, Bash or shell scripting is commonly used for automating things.
  • Python: Python is a very popular programming language that can also be used for automation.
  • Perl: it can also be used for the automation of tasks.
  • Groovy: it is Java-based and can be used for scripting build and automation tasks.

10. Which monitoring tool do you use in your environment?

You can describe the monitoring tools used in your environment. Some of the popular monitoring tools are Nagios, Zabbix, Prometheus, etc.

11. What is use of Nagios?

Nagios is an open source tool to monitor IT infrastructure, systems, and network. We can configure alerts based upon threshold values for different critical components like CPU usage, memory consumption, disk space usage of file systems etc.

12. What is the difference between a container and Virtual Machine?

Virtual machines are managed by a hypervisor and represent hardware-level virtualization, while containers have no hypervisor layer and represent operating-system-level virtualization. A container is lightweight compared to a VM.

13. What is a Docker container?

A Docker container is a lightweight unit of software that packages up code and all its dependencies so that the application runs quickly and works uniformly across computing environments. Containers are created from Docker images.
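The image-to-container relationship can be seen with two commands (the image name is just an example):

```shell
# Pull an image: a read-only template
docker pull centos:latest

# Create and start a container: a running instance of that image
docker run -it centos:latest /bin/bash
```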
We will update this post with more questions soon.