Tuesday, January 11, 2022

Attaching Persistent storage to a container

How to attach Persistent storage to a container


    As we all know, containers are ephemeral in nature and do not persist data: if a container is removed, its data is gone along with it. To fix this, we can attach persistent storage to the container. To do so, we configure a folder on the host file system to be mounted into the container for persistent storage.

We will start by creating a folder on the host system.
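For example, using the path from this post (run as root):

```shell
# Create the host directory that will back the container's data
mkdir -p /var/local/mysql
```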




We will also need to take care of SELinux labeling, so we will set up a rule for the '/var/local/mysql' folder and anything underneath it. Once the rule is set, run restorecon to apply the new label.
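A sketch of the labeling step, assuming the semanage tool is available (it ships in the policycoreutils python utilities package); container_file_t is the standard SELinux type for container volume content:

```shell
# Add a file-context rule for /var/local/mysql and everything under it,
# then re-label any existing files to match
semanage fcontext -a -t container_file_t '/var/local/mysql(/.*)?'
restorecon -Rv /var/local/mysql
```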







After the SELinux labeling is set up, we also have to take care of file system permissions so that the container can write to the host's filesystem. Since we are using a MySQL process as our example, we will change the folder's ownership to the mysql UID and GID.
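For example, assuming a Red Hat MySQL container image, where the mysql user runs as UID and GID 27 (adjust the numbers to whatever your image uses):

```shell
# Give the container's mysql user (UID/GID 27 in Red Hat's MySQL images)
# ownership of the host directory
mkdir -p /var/local/mysql        # in case it does not exist yet
chown -R 27:27 /var/local/mysql
```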





Now we are all set to run a container. First we have to pull the image; you might need to log in to the registry if required.
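A sketch, assuming podman and Red Hat's rhel8/mysql-80 image (docker works the same way; substitute your own registry and image):

```shell
# Log in to the registry (only needed for authenticated registries)
podman login registry.redhat.io
# Pull the MySQL image
podman pull registry.redhat.io/rhel8/mysql-80
```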












Now that we have the image fetched, let's run the container with a storage mount mapping the host's '/var/local/mysql' folder to the container's '/var/lib/mysql/data' folder.

-d = Run the container in the background
-v = Bind-mount a volume
-e = Set an environment variable
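Putting it together, a sketch assuming the rhel8/mysql-80 image; the container name 'mydb1' and the database name come from this post, while the user name and password are placeholders:

```shell
# Run MySQL detached, bind-mounting the host folder onto the image's data dir.
# The :Z suffix relabels the mount for SELinux access from the container.
podman run -d --name mydb1 \
  -v /var/local/mysql:/var/lib/mysql/data:Z \
  -e MYSQL_USER=user1 \
  -e MYSQL_PASSWORD=redhat \
  -e MYSQL_DATABASE=projectmanhattan \
  registry.redhat.io/rhel8/mysql-80
```

Afterwards, `ls -ln /var/local/mysql` on the host should show files owned by the numeric mysql UID.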









After running the container, you will find a bunch of files created underneath the /var/local/mysql folder. You might also notice that the files are owned by the mysql UID and GID.

















Now, we will open a shell in the container to access the mysql process.
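For example, using the container name from this post ('user1' is a placeholder application user):

```shell
# Open an interactive shell inside the running container
podman exec -it mydb1 /bin/bash
# ...then, inside the container, connect to MySQL as the application user
mysql -u user1 -p projectmanhattan
```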











We can access the database we created, 'projectmanhattan', and create a table in it called 'newyork'.
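From inside the container, a sketch (the column definitions are illustrative):

```shell
# Create the 'newyork' table in the projectmanhattan database
mysql -u user1 -p projectmanhattan -e "
  CREATE TABLE newyork (id INT PRIMARY KEY, name VARCHAR(50));
  SHOW TABLES;"
```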














Now that the table is created under the database, let's populate it with some data to test the persistence of the storage.
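For example (the sample rows are made up for the test):

```shell
# Insert a couple of rows, then read them back
mysql -u user1 -p projectmanhattan -e "
  INSERT INTO newyork (id, name) VALUES (1, 'alpha'), (2, 'bravo');
  SELECT * FROM newyork;"
```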













Now we will delete the old container, 'mydb1', and create another container named 'mydb2' with the same properties, mounting the same volume, to verify whether the data persisted. We can see that the data still exists even after deleting the initial container.
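A sketch of the swap, assuming the rhel8/mysql-80 image and placeholder credentials:

```shell
# Remove the first container (the volume's data stays on the host)
podman stop mydb1 && podman rm mydb1
# Start a second container on the same host directory
podman run -d --name mydb2 \
  -v /var/local/mysql:/var/lib/mysql/data:Z \
  -e MYSQL_USER=user1 \
  -e MYSQL_PASSWORD=redhat \
  -e MYSQL_DATABASE=projectmanhattan \
  registry.redhat.io/rhel8/mysql-80
# The table created earlier should still be there
podman exec mydb2 mysql -u user1 -predhat projectmanhattan -e "SELECT * FROM newyork;"
```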
 
















Conclusion: 

We are able to persist data by mounting host file system storage into the container.

Hope that helps.







Wednesday, August 29, 2018

Configuring networking in RHEL7 using nmcli tool

To configure networking (IP addressing) in Red Hat Enterprise Linux 7 using 'nmcli' tool

To view existing Network profiles:

# nmcli connection show

To add new connection named 'dynamic' using DHCP:

# nmcli connection add con-name dynamic type ethernet ifname eth0

To enable the newly created network profile use:

# nmcli connection up dynamic

To view all IP details after enabling the new network profile:

# nmcli connection show dynamic | grep -i ip

Or

# ip addr show eth0

Now let's say you want to create a network profile named 'mynetwork' with static IP addressing:

# nmcli connection add con-name mynetwork type ethernet ifname eth0 ip4 192.168.0.1/24 gw4 192.168.0.254

To check use:

# nmcli connection show

To enable the newly created 'mynetwork' connection profile:

# nmcli connection up mynetwork

To verify:

# ip addr show eth0

To view detailed output of IP settings of new profile:

# nmcli connection show mynetwork | less

Since we did not add DNS settings to the previously created profile, add them with:

# nmcli connection modify mynetwork ipv4.dns 192.168.0.250

To verify:

# nmcli connection show mynetwork | grep -i dns

To add an additional DNS server:

# nmcli connection modify mynetwork +ipv4.dns 8.8.8.8

To apply the changes, try:

# nmcli connection reload

Or

# nmcli connection down mynetwork ; nmcli connection up mynetwork

Now check:

# cat /etc/resolv.conf


HTH

Friday, August 5, 2016

Working with Docker Containers

Install Docker Containers:


We can install Docker in two different ways: either install it directly with the yum package manager, or use curl with the get.docker.com script. We will be using yum this time.

1.  Log into the machine as a user with sudo or root privileges.
2.  Make sure the server's existing yum packages are up to date.
# yum update

3. Add the yum repo:
# vim /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

Install the Docker package.
# yum install docker-engine

After the Docker package has been installed, start the daemon, check its status, and enable it to start at boot using the commands below:
# systemctl start docker
# systemctl status docker
# systemctl enable docker

Verify docker is installed correctly by running a test image in a container.
# docker run hello-world
Unable to find image 'hello-world:latest' locally
    latest: Pulling from hello-world
    a8219747be10: Pull complete
    91c95931e552: Already exists
    hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
    Digest: sha256:aa03e5d0d5553b4c3473e89c8619cf79df368babd1.7.1cf5daeb82aab55838d
    Status: Downloaded newer image for hello-world:latest
    Hello from Docker.
    This message shows that your installation appears to be working correctly.

    To generate this message, Docker took the following steps:
     1. The Docker client contacted the Docker daemon.
     2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
            (Assuming it was not already locally available.)
     3. The Docker daemon created a new container from that image which runs the
            executable that produces the output you are currently reading.
     4. The Docker daemon streamed that output to the Docker client, which sent it
            to your terminal.

Now, you can run a few basic Docker commands to get info about Docker:

For system-wide information on Docker
# docker info
# docker version

4. In order to start and run a Docker container, an image must first be downloaded from Docker Hub to your host. Docker Hub offers a great many free images in its repositories.
To search for a Docker image, Ubuntu for instance, issue the following command:
# docker search ubuntu

5.  We want to run Ubuntu, So download it locally by running the below command
# docker pull ubuntu

6. To list all the available Docker images on your host issue the following command:
# docker images

7. In order to create and run a container, you need to run a command against a downloaded image, in this case Ubuntu. A basic example is to display the distribution version file inside the container using the cat command, as follows:

# docker run ubuntu cat /etc/issue

8. To run one of the containers again with the command that was executed to create it, first you must get the container ID (or the name automatically generated by Docker) by issuing the below command, which displays a list of the running and stopped (non-running) containers:

# docker ps -l

9. Once the container ID has been obtained, you can start the container again with the command that was used to create it, by issuing the following command:
# docker start <Container ID>

10. In order to interactively connect into a container shell session, and run commands as you do on any other Linux session, issue the following command:
# docker run -it ubuntu bash


11. To quit the running container session and return to the host, type the exit command. Note that exit terminates all the container's processes and stops the container.
# exit

12. To reconnect to the running container you need the container ID or name. Issue the docker ps command to get the ID or name, then run the docker attach command, specifying the container ID or name:
# docker attach <container id>


 

Install Apache Web server in Docker container



Once I start the new Docker container as described earlier, I will start two new containers for my Apache and MySQL deployment:
# docker start <Container ID>
# docker run -it ubuntu bash

Once you are in the Ubuntu container, install the Apache packages:
# apt-get update && apt-get install apache2
Now it's time to start the service:
# /etc/init.d/apache2 start
To verify the server is running, try using the links command (we might need to install it if it's not available):
# apt-get install links  (if the links command is not installed)
# links http://127.0.0.1
To store the current state of the container, we need to commit it, so that it starts with our configuration the next time we start it after the 'exit' command:
# docker commit <container ID> yogesh/apache
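Once committed, the saved image behaves like any other local image; a quick check:

```shell
# The committed image should now appear in the local image list...
docker images
# ...and can be started as a fresh container
docker run -it yogesh/apache bash
```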
Install MySQL server in Docker container

In another tab, start one more Ubuntu container for the MySQL server and install the packages inside it:
# apt-get update
# apt-get install mysql-server
(Type a password when asked for the MySQL database password)
After MySQL is installed, start the service:
# /etc/init.d/mysql start
Try and test it out:
# mysql -u root -p
(Type password)
> show databases;
(Displays all default databases)
>exit
(To exit out of server)
Default logs for MySQL are saved in /var/log/mysql/error.log
To store the current state of the container, we need to commit it, so that it starts with our configuration the next time we start it after the 'exit':
# docker commit <container ID> yogesh/mysql



Installation of Icinga Server for monitoring on RHEL7

Icinga is a modern open source monitoring tool that originated as a fork of Nagios. It is not very different from Nagios, since it uses the same plugins, but the major differences are in the web UI and interface.

We will go through the entire deployment and installation process of the Icinga monitoring tool on RHEL 7, using the RepoForge (earlier known as RPMforge), EPEL, and Icinga repositories for the Apache and Nagios plugins that need to be installed on the system.


1. Before proceeding with the Icinga installation, we need to configure the RepoForge and Icinga repositories on the server using the command below:

# rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

And also the ICINGA repositories:


# rpm --import http://packages.icinga.org/icinga.key
# curl -o /etc/yum.repos.d/ICINGA-release.repo http://packages.icinga.org/epel/ICINGA-release.repo
# yum makecache


2. The next step is to install the Icinga web interface, provided by the icinga-gui package. Earlier CentOS/RHEL 7 releases had some issues with this package, but it is fixed in the latest CentOS/RHEL releases.

# yum install icinga-gui

3. After the RepoForge and Icinga repositories have been added to your system, start with the Icinga deployment:

# yum install icinga icinga-doc

Also install the Apache development packages:

# yum install httpd-devel

4. As mentioned in this article's introduction, your system needs Apache HTTP server and PHP installed in order to run the Icinga web interface.
After you finish the above steps, a new configuration file named icinga.conf should be present in Apache's conf.d path. In order to access Icinga from a remote browser, open this configuration file and replace all of its content with the following configuration.

# vim /etc/httpd/conf.d/icinga.conf

Make sure you replace all file content with the following.

ScriptAlias /icinga/cgi-bin "/usr/lib64/icinga/cgi"
<Directory "/usr/lib64/icinga/cgi">
#  SSLRequireSSL
Options ExecCGI
AllowOverride None
AuthName "Icinga Access"
AuthType Basic
AuthUserFile /etc/icinga/passwd
<IfModule mod_authz_core.c>
# Apache 2.4
<RequireAll>
Require all granted
# Require local
Require valid-user
</RequireAll>
</IfModule>
<IfModule !mod_authz_core.c>
# Apache 2.2
Order allow,deny
Allow from all
#  Order deny,allow
#  Deny from all
#  Allow from 127.0.0.1
Require valid-user
</IfModule>
</Directory>
Alias /icinga "/usr/share/icinga/"
<Directory "/usr/share/icinga/">
#  SSLRequireSSL
Options None
AllowOverride All
AuthName "Icinga Access"
AuthType Basic
AuthUserFile /etc/icinga/passwd
<IfModule mod_authz_core.c>
# Apache 2.4
<RequireAll>
Require all granted
# Require local
Require valid-user
</RequireAll>
</IfModule>
<IfModule !mod_authz_core.c>
# Apache 2.2
Order allow,deny
Allow from all
#  Order deny,allow
#  Deny from all
#  Allow from 127.0.0.1
Require valid-user
</IfModule>
</Directory>



5. After you have edited the Icinga httpd configuration file, add the Apache system user to the icinga system group and apply the following permissions to these system paths.

# usermod -aG icinga apache
# chown -R icinga:icinga /var/spool/icinga/*
# chgrp -R icinga /etc/icinga/*
# chgrp -R icinga /usr/lib64/icinga/*
# chgrp -R icinga /usr/share/icinga/*

6. Before starting the Icinga process and the Apache server, make sure you also disable SELinux by running the 'setenforce 0' command, and make the change permanent by editing the /etc/selinux/config file, changing the SELINUX directive from enforcing to disabled.

# nano /etc/selinux/config


Modify SELINUX directive to look like this.

SELINUX=disabled

You can also use 'getenforce' command to view SELinux status.
7. As the last step before starting the Icinga process and web interface, you can set the Icinga admin password as a security measure by running the following command, and then start both services.

# htpasswd -cm /etc/icinga/passwd icingaadmin (Type your preferred password)
# systemctl start icinga
# systemctl start httpd

8. In order to start monitoring public external services on hosts with Icinga, such as HTTP, IMAP, POP3, SSH, DNS, ICMP ping, and many other services accessible from the internet or LAN, you need to install the Nagios Plugins packages provided by the EPEL repositories.

# rpm -Uvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-6.noarch.rpm
# yum install nagios-plugins nagios-plugins-all

9. To log in to the Icinga web interface, open a browser and point it to http://<server_hostname>/icinga/. Use icingaadmin as the username with the password you chose earlier, and you can now see your localhost system status.

That is the process of installing and configuring an Icinga server on Red Hat Enterprise Linux 7.



 

Monday, May 23, 2016

How to work with Ansible (Automation tool) on CentOS 7


Ansible is a free automation tool for Linux hosts. It is useful in environments where you have lots of Linux hosts/servers to manage and maintain. Let's get started with it.

I have tested the below mentioned steps on CentOS 7:

First of all, we need EPEL on CentOS to install Ansible:



[root@vm2 ~]# uname -r
3.10.0-123.el7.x86_64


[root@vm2 ~]# cat /etc/redhat-release

CentOS Linux release 7.0.1406 (Core)


[root@vm2 ~]# yum install epel-release

Now after installing epel, lets install ansible:

[root@vm2 ~]# yum install ansible

After installation is done, test it out:

[root@vm2 ~]# ansible --version

ansible 2.0.2.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides

Now, Ansible communicates over SSH keys, so generate one if you don't have it already.

[root@vm2 ~]# ssh-keygen
<You can set a secure passphrase, or keep it blank; both should work>

Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.

Now copy the SSH key to the remote hosts:

[root@vm2 ~]# ssh-copy-id 192.168.0.81

[root@vm2 ~]# ssh-copy-id 192.168.1.222

Now add your hosts/servers to the Ansible inventory file below:

[root@vm2 ~]# vim /etc/ansible/hosts

[servers]
192.168.0.81
192.168.1.222

Now, Save & Exit.

It's time to test Ansible with the ping module:

[root@vm2 ~]#  ansible -m ping 'servers'
192.168.0.81 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.1.222 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Looks good.

Let's try another command:

[root@vm2 ~]# ansible -m command -a 'rpm -qa kernel' 'servers'
192.168.0.81 | SUCCESS | rc=0 >>
kernel-3.10.0-327.18.2.el7.x86_64
kernel-3.10.0-327.10.1.el7.x86_64
kernel-3.10.0-327.el7.x86_64
kernel-3.10.0-327.13.1.el7.x86_64

192.168.1.222 | SUCCESS | rc=0 >>
kernel-2.6.32-504.el6.x86_64
kernel-2.6.32-573.7.1.el6.x86_64

And now you are able to fetch this information with a single command.


[root@vm2 ~]# ansible -m command -a 'grep CPU /proc/cpuinfo' 'servers'
192.168.0.81 | SUCCESS | rc=0 >>
model name    : Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz
model name    : Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz
model name    : Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz
model name    : Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz

192.168.1.222 | SUCCESS | rc=0 >>
model name    : Intel(R) Core(TM) i3-3240 CPU @ 3.40GHz
model name    : Intel(R) Core(TM) i3-3240 CPU @ 3.40GHz
model name    : Intel(R) Core(TM) i3-3240 CPU @ 3.40GHz
model name    : Intel(R) Core(TM) i3-3240 CPU @ 3.40GHz



[root@vm2 ~]# ansible -m command -a 'free -m' 'servers'
192.168.1.222 | SUCCESS | rc=0 >>
             total       used       free     shared    buffers     cached
Mem:          7693       5845       1847        187          7        449
-/+ buffers/cache:       5388       2304
Swap:         7999        684       7315

192.168.0.81 | SUCCESS | rc=0 >>
              total        used        free      shared  buff/cache   available
Mem:          11726        6426         390         393        4909        4568
Swap:         16383         130       16253



[root@vm2 ~]# ansible -m command -a 'fdisk -l /dev/sda' 'servers'
192.168.0.81 | SUCCESS | rc=0 >>

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0000f988

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1230847      614400   83  Linux
/dev/sda2         1230848   525518847   262144000   83  Linux
/dev/sda3       525518848   559073279    16777216   82  Linux swap / Solaris
/dev/sda4       559073280   976773167   208849944    5  Extended
/dev/sda5       559075328   976773167   208848920   83  Linux

192.168.1.222 | SUCCESS | rc=0 >>

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa34fa34f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      256000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              32        7681    61440000   83  Linux
/dev/sda3            7681        8701     8192000   82  Linux swap / Solaris
/dev/sda4            8701       60802   418497560    5  Extended
/dev/sda5            8701       60802   418496512   83  Linux


Awesome. You can also create and delete user accounts:

[root@vm2 ~]# ansible -m command -a "useradd spiderman" 'servers'
192.168.1.222 | SUCCESS | rc=0 >>


192.168.0.81 | SUCCESS | rc=0 >>




[root@vm2 ~]# ansible -m command -a "grep spiderman /etc/passwd" 'servers'
192.168.0.81 | SUCCESS | rc=0 >>
spiderman:x:1002:1002::/home/spiderman:/bin/bash

192.168.1.222 | SUCCESS | rc=0 >>
spiderman:x:505:506::/home/spiderman:/bin/bash



[root@vm2 ~]# ansible -m command -a "userdel -r spiderman" 'servers'
192.168.0.81 | SUCCESS | rc=0 >>


192.168.1.222 | SUCCESS | rc=0 >>
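As a side note, the same add/remove can be done with Ansible's idempotent 'user' module instead of the raw useradd/userdel commands; a sketch against the inventory group defined earlier:

```shell
# Idempotent equivalents of the useradd/userdel ad-hoc commands above
ansible -m user -a "name=spiderman state=present" 'servers'
ansible -m user -a "name=spiderman state=absent remove=yes" 'servers'
```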




Hope this helped you.

Thanks ^_^








Tuesday, November 3, 2015

How to configure Ubuntu 12.04 with Gmail SMTP

For this we need Ubuntu 12.04 LTS installed on your physical or virtual system.

Lets get started with this,

We need to install required packages for the same,

sudo apt-get install mailutils postfix libsasl2-2 ca-certificates libsasl2-modules

When you install postfix for the first time, the system will ask you for the SMTP domain name and server type. Select "Internet Site" and enter "smtp.yourdomain.com" where applicable (where 'yourdomain.com' is your working domain name).

Now lets start configuring SMTP in postfix.

sudo vim /etc/postfix/main.cf

(Edit configuration file, so that it looks like this..)

relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_use_tls = yes

(Save & Exit)

Now your google user name and password goes in this file:

vim /etc/postfix/sasl_passwd

(Edit as below)

[smtp.gmail.com]:587    <GMAILUSERNAME>:<GMAILPASSWD>

(Save & Exit)

Let's make sure this file is readable only by root, and not by everyone else:

sudo chmod 400 /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd

Now add the Thawte CA certificate so postfix can verify Gmail's TLS certificate:

cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem

and finally, let's reload the postfix service:

/etc/init.d/postfix reload
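To double-check which non-default settings postfix actually loaded, postconf can be used:

```shell
# Show the non-default postfix settings relevant to the Gmail relay
postconf -n | grep -E 'relayhost|smtp_sasl|smtp_use_tls'
```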

Now let's test whether our configuration is working:

echo "Test email from Postfix" | mail -s "Test Postfix" USERNAME@gmail.com

Verify the mail was sent:

tail -f /var/log/mail.log

If required, we can also add your Gmail account to the aliases:

vim /etc/aliases

root: GMAILADDRESS@gmail.com

(save & exit)

newaliases

Thanks for the visit
:)









Monday, August 10, 2015

Working with apt-get and dpkg package manager tools of 'Debian'



First of all, we are going to work with the apt-get package manager. Those already familiar with yum from Red Hat based distros will find this tool quite similar. apt-get pulls and installs packages from the online repository, sparing you the pain of finding, downloading, and installing packages and their dependencies manually.

You have to be root for the following hands-on. Let's get started:

root@yogeshkk21:~# apt-get update

This command pulls package information from the repository server and caches it locally

root@yogeshkk21:~# apt-cache search nginx

We can search for a specific package in the entire cache using this command


root@yogeshkk21:~# apt-get install nginx

This command is to install a package

root@yogeshkk21:~# which nginx

To verify nginx is installed

root@yogeshkk21:~# apt-cache search apache2

Let's search for Apache2 package as well

root@yogeshkk21:~# apt-get install apache2

And Install it same as nginx

root@yogeshkk21:~# apt-get remove nginx

Now, let's learn how to remove it

root@yogeshkk21:~# apt-get remove --purge nginx

apt-get remove only removes the binaries, but keeps the associated libraries and configuration files. So to remove everything, use purge.

root@yogeshkk21:~# apt-get autoremove

This command removes dependencies that are no longer needed after a package removal

root@yogeshkk21:~# which nginx

Let's verify

root@yogeshkk21:~# apt-get remove apache2 ; apt-get autoremove apache2

This removes the apache2 package along with its now-unneeded dependencies

root@yogeshkk21:~# which apache2

Let's verify again

root@yogeshkk21:~# apt-get install apache2

Let us install apache2 package again

root@yogeshkk21:~# apt-get upgrade

This command upgrades all installed packages for which newer versions are available

root@yogeshkk21:~# apt-get dist-upgrade

This command also upgrades the distribution kernel if a newer one is available

Now, if we talk about the dpkg package manager, it is a little different from apt-get: it does not pull in dependencies automatically like apt-get does. Let's see:

root@yogeshkk21:~# wget https://www.dropbox.com/download?dl=packages/ubuntu/dropbox_2015.02.12_amd64.deb

Let's try to install Dropbox application on our Ubuntu 12.04 distro.

root@yogeshkk21:~# mv download\?dl\=packages%2Fubuntu%2Fdropbox_2015.02.12_amd64.deb dropbox.deb

Let's rename this downloaded file to something more simple.

root@yogeshkk21:~# dpkg -i dropbox.deb

To install the .deb file, use the -i option. You will now see lots of dependency errors being thrown at you, simply because dpkg can't pull in those dependencies for you. So what now, should we download and install them manually? No, there is a way out.

root@yogeshkk21:~# apt-get update

Let's update our cache once again

root@yogeshkk21:~# apt-get -f upgrade

And this command does the job for us (apt-get -f install works similarly): it automatically pulls in all the required library packages along with the other dependencies. Nice, right?

root@yogeshkk21:~# dpkg -i dropbox.deb

Now, let's try installing that deb file again. You will notice it gets installed! :)

root@yogeshkk21:~# which dropbox

Let's verify! Worked well.

root@yogeshkk21:~# dpkg --get-selections

Now let's also find out how to list all installed packages. This command does the magic for us.

root@yogeshkk21:~# dpkg --get-selections  |grep -i dropbox

We can find a specific package in the output using grep.

root@yogeshkk21:~# dpkg --remove dropbox

Just like apt-get, dpkg has a remove command as well; it removes the binaries for us.

root@yogeshkk21:~# dpkg --purge dropbox

To remove the application completely, use this command.
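Related: packages that were removed but still have configuration files left behind show dpkg status 'rc', which makes them easy to spot as purge candidates:

```shell
# 'rc' = removed, config files remain; these are candidates for --purge
dpkg -l | grep '^rc'
```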

So that's how we can use apt-get and dpkg.

HTH ^_^

The above commands were tested on:

root@yogeshkk21:~# lsb_release -a
No LSB modules are available.
Distributor ID:    Ubuntu
Description:    Ubuntu 12.04.5 LTS
Release:    12.04
Codename:    precise