K8s Cluster

Kubernetes (K8s) is an open-source platform, originally designed by Google, for managing and orchestrating containerized applications. Kubernetes supports continuous integration and delivery of applications, and is variously described as a container platform, a microservices platform, a portable cloud platform and a lot more.

Docker by itself is limited to a single machine. If you need more than one machine, that is, a cluster, and you are working with containers, then Kubernetes is the best option for you! Kubernetes works great with container tools such as Docker, and it also supports rkt.

Docker Swarm also provides a container-centric environment like Kubernetes does, but Kubernetes is definitely more flexible and powerful.

The steps to create a K8s cluster follow. In a K8s cluster we typically have a master node and any number of worker nodes, which we call minions. We manage the cluster from the master using kubeadm and kubectl.

The following components make up a K8s cluster:

1. kube-apiserver: the component on the master that exposes the Kubernetes API.
2. etcd: a key-value store that backs all cluster data.
3. kube-scheduler: watches for newly created pods that have no node assigned and selects a node for them to run on.
4. kube-controller-manager: runs controllers such as the node controller, replication controller, service accounts and token controller, and endpoints controller.
5. kube-proxy: enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
6. kubeadm: a tool which helps you bootstrap a K8s cluster.
7. kubectl: the command-line interface for Kubernetes.
8. kubelet: an agent that runs on every minion and is responsible for creating, starting and deleting containers; it communicates with Docker to supervise that process.
9. pods: a pod is a collection of one or more containers; pods are deployed on minions.

Steps to create a Kubernetes cluster on Fedora:

1. First, set up name resolution if you haven't already.

Add the hostname and IP address of the master node and the minions to the /etc/hosts file on both the master and each minion:

192.168.100.10 kube-master
192.168.100.20 kube-minion1
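The same entries can be appended from the shell. A minimal sketch that writes to a scratch file so it is safe to run anywhere; on the real machines, set HOSTS=/etc/hosts and run the append through sudo tee -a instead:

```shell
# Append the cluster name-resolution entries (example IPs/hostnames
# from this article; substitute your own).
HOSTS=$(mktemp)
cat >> "$HOSTS" <<'EOF'
192.168.100.10 kube-master
192.168.100.20 kube-minion1
EOF
grep kube- "$HOSTS"   # both entries should be listed
```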

2. Stop and disable the firewalld service and set SELinux to permissive mode on both the master node and the minions.

# systemctl stop firewalld.service && systemctl disable firewalld.service
# setenforce 0

3. Set up the Kubernetes repository on both the master node and the minions.

Create a kubernetes.repo file under /etc/yum.repos.d/ with the following content:

 [kubernetes]
 name=kubernetes
 baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
 enabled=1
 gpgcheck=1
 repo_gpgcheck=1
 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg 
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
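One way to create that file in a single step is with a here-document (same content as above; a sketch, run as root):

```shell
# Write /etc/yum.repos.d/kubernetes.repo in one shot.
sudo tee /etc/yum.repos.d/kubernetes.repo >/dev/null <<'EOF'
[kubernetes]
name=kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```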

4. Install kubeadm and Docker.

After configuring the K8s repository, execute the following commands to install kubeadm (version 1.9.3) and Docker (version 1.13) on both nodes, then start and enable the kubelet and docker daemons.

# dnf install kubeadm docker -y
# systemctl start kubelet && systemctl enable kubelet
# systemctl start docker && systemctl enable docker
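A quick sanity check after the install; the version numbers printed will reflect whatever the repository shipped (v1.9.3 and 1.13 at the time of writing):

```shell
kubeadm version                       # kubeadm build info
kubectl version --client              # client only; the API server isn't up yet
docker --version
systemctl is-enabled kubelet docker   # both should report 'enabled'
```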

5. Initialize Kubernetes master.

Running the command below, kubeadm starts and sets up the Kubernetes master.

# kubeadm init
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
 [WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.202]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

[apiclient] All control plane components are healthy after 206.004414 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: ed97cd.b299e938a599b1cf
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token ed97cd.b299e938a599b1cf 192.168.122.202:6443 --discovery-token-ca-cert-hash sha256:ca448b9562b26ede20ce987a9ef09c9912b8112153a94dd163847dafe464973b

We have started the master node, but how will the master know which nodes are its minions? kubeadm generates a token for the minions to authenticate with the master. We need to run the join command below, with this token, on every minion. The token was printed when we executed the 'kubeadm init' command.
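If this token is ever lost (or expires; by default bootstrap tokens are valid for 24 hours), it can be recreated on the master with standard kubeadm subcommands, and the CA hash recomputed with openssl as in the upstream documentation:

```shell
kubeadm token list      # show existing bootstrap tokens
kubeadm token create    # mint a fresh token for joining
# Recompute the value for --discovery-token-ca-cert-hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```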

# kubeadm join --token ed97cd.b299e938a599b1cf 192.168.122.202:6443 --discovery-token-ca-cert-hash sha256:ca448b9562b26ede20ce987a9ef09c9912b8112153a94dd163847dafe464973b
[preflight] Running pre-flight checks.
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.122.190:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.122.190:6443"
[discovery] Requesting info from "https://192.168.122.190:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.122.190:6443"
[discovery] Successfully established connection with API Server "192.168.122.190:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

After initializing the Kubernetes master, execute the commands below to access the cluster as the root user.

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

On the master, run the command below to see whether the nodes are ready.

# kubectl get nodes
NAME     STATUS      ROLES   AGE   VERSION
master   NotReady    master  14m   v1.9.3
node     NotReady    <none>  12m   v1.9.3

To check the status of the containers, run the command below,

# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   0          8m
kube-system   kube-apiserver-master            1/1       Running   1          7m
kube-system   kube-controller-manager-master   1/1       Running   0          7m
kube-system   kube-dns-6f4fd4bdf-jfvxs         0/3       Pending   0          7m
kube-system   kube-proxy-9r5ld                 1/1       Running   0          7m
kube-system   kube-proxy-dh4h9                 1/1       Running   0          7m
kube-system   kube-scheduler-master            1/1       Running   0          8m

Right now the status of the nodes is 'NotReady'. To make the cluster ready we need to deploy a pod network. There are many options, such as Flannel, Weave Net, Calico, Romana, Kube-router and Canal. In this article we are using Weave Net.

6. Deploy pod network.

# export kubever=$(kubectl version | base64 | tr -d '\n')
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
role "weave-net" created
rolebinding "weave-net" created
daemonset "weave-net" created

Now check the status of the nodes and pods. After the pod network is deployed, the nodes should be Ready and the pods Running.

You can watch the events of the Kubernetes cluster with the command below,

# kubectl get events -w

We have our pod network ready. Let’s check the status of pods and nodes again on master.

# kubectl get nodes
NAME    STATUS   ROLES    AGE    VERSION
master  Ready    master   14m    v1.9.3  
node    Ready    <none>   12m    v1.9.3
# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   0          4m
kube-system   kube-apiserver-master            1/1       Running   1          4m
kube-system   kube-controller-manager-master   1/1       Running   0          4m
kube-system   kube-dns-6f4fd4bdf-jfvxs         3/3       Running   0          17m
kube-system   kube-proxy-9r5ld                 1/1       Running   0          17m
kube-system   kube-proxy-dh4h9                 1/1       Running   0          17m
kube-system   kube-scheduler-master            1/1       Running   0          4m
kube-system   weave-net-kkp8t                  2/2       Running   0          7m
kube-system   weave-net-nw66p                  2/2       Running   0          7m

Your Kubernetes cluster is ready and the pods are running successfully. The very next step is to deploy something!
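As a hypothetical first deployment, the classic nginx example works on v1.9 (the image name and replica count here are just illustrative):

```shell
kubectl run nginx --image=nginx --replicas=2   # create a Deployment
kubectl get deployments                        # watch it come up
kubectl get pods -o wide                       # shows which minion runs each pod
```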


Minishift

Minishift is, as its name suggests, OpenShift in miniature. Minishift lets us run a single-node OpenShift cluster inside a virtual machine, so you can develop and deploy daily on your own computer. The single-node cluster and the VM are thus the main components of Minishift. Underneath OpenShift Origin, Kubernetes manages the containerized applications behind the scenes. Kubernetes works amazingly with Docker: if we use Docker as the container format, Kubernetes runs Docker on every node to run your containers.

libmachine is used to provision the VM and OpenShift Origin is used to manage the running cluster. Minishift needs a hypervisor to start the VM; this article covers the KVM hypervisor, but you can use VirtualBox too, your choice.

What would you need?

1. Fedora installed on your computer.
2. KVM installed and enabled before getting hands-on with Minishift.

Installation:

This installation was done on Fedora. KVM is the default hypervisor in Fedora, so you just need to install libvirt and qemu-kvm on your system.

$ sudo dnf install libvirt qemu-kvm

Now start and enable the libvirtd and virtlogd services.

$ sudo systemctl start libvirtd
$ sudo systemctl enable libvirtd
$ sudo systemctl start virtlogd
$ sudo systemctl enable virtlogd
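On the systemd versions Fedora ships, the four commands above can be collapsed into a single 'enable --now' call:

```shell
# Enable at boot and start immediately, in one step.
sudo systemctl enable --now libvirtd virtlogd
```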

We need something to provision the VM, so install the docker-machine-kvm driver. We are using version 0.7.0.

$ sudo curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.7.0/docker-machine-driver-kvm -o /usr/local/bin/docker-machine-driver-kvm

Now make the downloaded binary executable.

$ sudo chmod +x /usr/local/bin/docker-machine-driver-kvm

Add your user (here virt, or any user you would like) to the libvirt group, then switch to that group, so you can avoid sudo.

$ sudo usermod -a -G libvirt virt
$ sudo newgrp libvirt

Let’s download the Minishift archive; the latest version is 1.1.0. You may download it from the releases page for your operating system.

# wget https://github.com/minishift/minishift/releases/download/v1.1.0/minishift-1.1.0-linux-amd64.tgz

Extract the archive,

# tar -xvf minishift-1.1.0-linux-amd64.tgz

Copy the minishift binary to your preferred location (here a personal ~/bin folder),

# cp minishift ~/bin/minishift

Add your personal ~/bin folder to your PATH environment variable,

# export PATH=~/bin:$PATH

Now it’s time to start the local OpenShift cluster by running the command below,

# minishift start
Starting local OpenShift cluster using 'kvm' hypervisor...
Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.0.2/minishift-b2d.iso'
 40.00 MiB / 40.00 MiB [=====================================================================] 100.00% 0s
E0307 11:30:53.941354    4413 start.go:234] Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: [Code-55] [Domain-19] Requested operation is not valid: network 'default' is not active. Retrying.
E0307 11:30:54.107901    4413 start.go:234] Error starting the VM: Error starting stopped host: [Code-55] [Domain-19] Requested operation is not valid: network 'default' is not active. Retrying.
E0307 11:30:54.206729    4413 start.go:234] Error starting the VM: Error starting stopped host: [Code-55] [Domain-19] Requested operation is not valid: network 'default' is not active. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: [Code-55] [Domain-19] Requested operation is not valid: network 'default' is not active
Error starting stopped host: [Code-55] [Domain-19] Requested operation is not valid: network 'default' is not active
Error starting stopped host: [Code-55] [Domain-19] Requested operation is not valid: network 'default' is not active

As you can see, the command above gave errors. Let’s dig out the solution. The error is about the inactive ‘default’ libvirt network; it occurs when the default network is not active:

# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              inactive   yes           yes
 docker-machines      active     yes           yes
 eth0                 active     yes           yes

To activate it, first try running the following command in a terminal,

# virsh net-start default 
error: Failed to start network default
error: error creating bridge interface virbr0: File exists

If the above command also failed to start your ‘default’ network, try another option: bring the stale bridge down, delete it, and start the network again.

# ifconfig virbr0 down
# brctl delbr virbr0
# virsh net-start default
Network default started
# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 docker-machines      active     yes           yes
 eth0                 active     yes           yes

Now, try to start your local OpenShift cluster again,

# minishift start
Starting local OpenShift cluster using 'kvm' hypervisor...
Downloading OpenShift binary 'oc' version 'v1.5.1'
 19.96 MiB / 19.96 MiB [=====================================================================] 100.00% 0s
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.1 image ... 
   Pulling image openshift/origin:v1.5.1
   Pulled 0/3 layers, 3% complete
   Pulled 1/3 layers, 73% complete
   Pulled 2/3 layers, 98% complete
   Pulled 3/3 layers, 100% complete
   Extracting
   Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ... 
   Using Docker shared volumes for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ... 
   Using 192.168.42.117 as the server IP
-- Starting OpenShift container ... 
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
   Waiting for API server to start listening
   OpenShift server started
-- Adding default OAuthClient redirect URIs ... OK
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Removing temporary directory ... OK
-- Checking container networking ... OK
-- Server Information ... 
   OpenShift server started.
   The server is accessible via web console at:
       https://192.168.42.117:8443

   You are logged in as:
       User:     developer
       Password: developer

   To login as administrator:
       oc login -u system:admin

You must be wondering what steps occurred while starting your local OpenShift cluster. On start-up, minishift:

1. Downloads the boot2docker ISO image.
2. Starts the VM using libmachine.
3. Downloads oc (the OpenShift client binary).
4. Transfers both oc and the ISO image into your $HOME/.minishift/cache folder.
5. And last but not least, makes your single-node cluster ready on your system.

Now look in KVM and you will see a new VM: that is your Minishift node.


Enter the Minishift node with 'minishift ssh' and you are ready to deploy an application.

lost+found folder

Have you ever seen the lost+found folder in the root directory and wondered what it could be? You may well have, if you are a beginner in Linux. The lost+found folder is present in Linux, macOS and other Unix-like operating systems. The name itself is a bit interesting, isn’t it? What was the first thought that came into your mind after reading that interesting folder’s name?

So, lost+found (not Lost+Found) is a directory at the root of Linux and Unix filesystems.
If you run the find command in a terminal, you will see where such folders lie,

$ sudo find  /  -name lost+found

When talking about the lost+found folder, fsck (filesystem check) comes into the picture.
fsck is the system utility that checks the consistency of a filesystem on Linux and Unix operating systems; it goes through the filesystem and tries to recover corrupt files. File corruption can be caused by power failure, improper shutdown or a kernel panic.

You can check and repair your filesystem by running,

$ sudo fsck

fsck might find data that looks like a file with an inode but without a file name. Even though such data cannot be accessed, it still occupies space.

We can get corrupted and deleted files back just by repairing the filesystem with fsck. After the repair, all recovered files end up in the lost+found directory, but their metadata is gone: the files once had a name and a location, which no longer exist. What fsck does is make all the lost files it found available in a directory named lost+found. Files lost because of directory corruption are linked into that filesystem’s lost+found directory by inode number, since the file name was erased from the system.

The files that show up in the lost+found folder still have data; nevertheless, their names are erased. Another reason files can show up in the lost+found directory is an inconsistent filesystem state caused by a software or hardware bug. The files present in lost+found may or may not contain useful data, and they can be out of date, because we do not know how serious the damage to the filesystem was.
Some files can be recovered and some cannot! It depends on how badly the filesystem was damaged.

Note: if you delete the lost+found folder by mistake, do not recreate it with mkdir; use mklost+found instead. For more information there is a man page available.

$ man mklost+found

If you want to see the contents of lost+found, run the following commands in a terminal:

$ sudo ls /lost+found

or

$ sudo su
# cd /lost+found
# ls

If you have lost some data, you may try to recover it by examining the lost+found folder. You can even recover a complete file and put it back in its previous location.
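A sketch of how such an examination might look. Entries in lost+found are typically named after their inode number (e.g. '#12345'), and the file command helps guess what each one was; the inode number and target path below are made up:

```shell
cd /lost+found
sudo ls -li     # list recovered entries together with their inode numbers
sudo file ./*   # guess each entry's type from its contents
# If an entry turns out to be, say, a JPEG, move it back by hand:
# sudo mv './#12345' /home/user/Pictures/recovered.jpg
```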

Inchoate Docker

Once you know what Docker is, it’s time to get to grips with some basic Docker commands.

Once you have installed Docker on your system, you are ready to try out the commands below.

For CentOS and Red Hat, execute this command,

$ sudo yum install docker -y

For Fedora, run the command below,

$ sudo dnf install docker -y

For Ubuntu, the command is,

$ sudo apt-get install docker.io

After installing Docker, make sure that the docker service is running and enabled.

$ sudo systemctl start docker
$ sudo systemctl enable docker

To check the status of the docker daemon,

$ sudo systemctl status docker

 

We are going to do the following steps:

  1. Pull the image
  2. Run the image to create a container
  3. Enter into the container
  4. Install the application

We are pulling an image from a public Docker registry; docker.io and registry.access.redhat.com are examples of public registries.

Pull the fedora image,

$ sudo docker pull fedora


Now, check the pulled images,

$ sudo docker images


Run the image to get a shell inside a container,

$ sudo docker run -it fedora bash


Say we want to install mysql-server in the container; inside it, run dnf install mysql-server.

Once mysql-server is downloaded and installed, type exit to come out of the container.

In order to save the changes made in the container, we need the docker commit command. But before that, let's see how to list all containers,

$ sudo docker ps -a


$ sudo docker commit <container-id> <new-image-name>


If you want to remove a particular container or Docker image, the respective commands are,

$ sudo docker rm <container-id>
$ sudo docker rmi <image-id>
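Putting the whole session together (the container name 'demo' and image name 'fedora-mysql' are made-up examples; the flags are standard docker options):

```shell
sudo docker pull fedora                        # 1. pull the image
sudo docker run -it --name demo fedora bash    # 2-3. create and enter a container
#   ...inside: dnf install -y mysql-server, then 'exit'
sudo docker commit demo fedora-mysql           # 4. save the changes as a new image
sudo docker rm demo                            # remove the stopped container
sudo docker rmi fedora-mysql                   # remove the image when done
```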

 

 

Docker!

Open source is evolving with the greatest technologies, and one of the most popular is Docker. Docker is a container-based platform. The question that occurs here is: what are containers?

CONTAINERS: Containers are an operating-system-level method that allows you to run applications in isolation. Containers let you deploy applications consistently, quickly and reliably, and they guarantee that software will run with the same reliability, speed and efficiency regardless of the deployment environment. Let's assume we have hosted an HTTP website in one container. If we want to access the website on another system that does not have the httpd packages installed, we won't be installing any packages; we will simply run the container on the other system, and we will be able to access the website even though we haven't installed a single httpd package or dependency. Containers are lightweight processes, and container images are read-only. Few of us know that Google has its own open-source container technology, Let Me Contain That For You; every time we use any of the Google functionalities, a new container gets allocated for us.

What is Docker?
Docker is the world's most popular software container platform. Docker is an open-source project that automates the deployment of applications inside containers; simply put, Docker is a way to work with containers for deploying applications. A developer packages an application together with all of the parts it needs. Suppose someone wants database software, say MySQL, on their system: normally they would need to install it before using it. With Docker, we just run a container on their system, one that already contains all the MySQL packages and dependencies.

Why should we use Docker?
Docker lets you put the environment and configuration into code and deploy it. Docker does not tie your application to any particular operating system, and it lets you run applications in containers instead of full virtual machines. Docker ensures that each application only uses the resources (CPU, RAM and hard disk) that have been assigned to it, and it uses read-only mount points for security so that two containers cannot read each other's data.
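Those resource limits are set per container at run time, for example with the standard --memory and --cpu-shares flags of docker run (the values here are arbitrary):

```shell
# Cap the container at 512 MB of RAM and give it a reduced CPU weight.
sudo docker run -it --memory 512m --cpu-shares 512 fedora bash
```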

How to install Docker on your server?

For Linux distros such as CentOS and RHEL,

$ sudo yum install docker

For Fedora, the installation command for Docker is,

$ sudo dnf install docker

For Ubuntu, the installation command is,

$ sudo apt-get install docker.io

After installing Docker, start and enable the docker daemon with the following commands,

$ sudo systemctl start docker
$ sudo systemctl enable docker

If you want to know more about Docker, read this article.

Redis - The Database

We are all well acquainted with databases, and familiar with database software such as MySQL, Oracle and MongoDB.
Redis is an acronym for REmote DIctionary Server. Redis is an open-source, BSD-licensed data structure server.
A formal definition could be: a caching and in-memory storage system, a "NoSQL" key-value data store used as a database, cache and message broker. We call it a data structure server because it can contain strings, hashes, lists, sets and sorted sets. Redis is written in ANSI C. Linux is recommended for deploying Redis; there is no official support for Windows builds, although Redis may work on Solaris-derived systems.
As mentioned earlier, Redis is a key-value data store. In relational database software like Oracle, data is stored in tables and queried with "SQL". The gist of a key-value data store is the ability to store some data, called a value, under a unique key; the data can be retrieved only if we know the exact key used to store it. For example, you go to the mall and want to park your vehicle, and you are given a token number for your parking slot. In this case, the token number you were given is the "key" and your vehicle is the "value": you are not going to get the vehicle back until you produce the exact token number.
Redis can be compared to memcached.
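The parking-token analogy translates directly into redis-cli commands (this assumes a Redis server is running locally; the key and value are invented):

```shell
redis-cli SET parking:token42 "white hatchback"   # store the value under the key
redis-cli GET parking:token42                     # retrieve it with the exact key
redis-cli GET parking:token99                     # wrong key: returns (nil)
```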

Screenshot_2017-04-01-21-14-35-856~2

MEMCACHED: memcached is pronounced mem-cash-dee. It is a general-purpose distributed memory caching system, used for speeding up dynamic database-driven websites by caching data and objects in RAM. This reduces the need to read from an external data source frequently.

RELATIONAL DATABASES VS REDIS:
1) Redis data structures are far more flexible than those of relational databases. Relational databases are a rather restrictive way of structuring data, because they are schema-based: data is stored in tabular format, there is no concept of a tree or hierarchy, and there is nothing bigger than a table or smaller than a field.

2) Relational databases have foreign keys (a foreign key is a key defined in a second table that refers to the primary key of the first table). The links between nodes are of this weak and limited type: foreign keys. In Redis there is no concept of foreign keys, and hence no weak links.

KEY POINTS OF REDIS:
1) Redis runs in RAM.
Redis data is read from and written to RAM, which is far faster than disk. In MongoDB or Oracle, information gets loaded from disk; in Redis, information is stored in RAM, so access is definitely faster.

2) Redis is not preferable if your dataset is large. As mentioned earlier, Redis runs in RAM, and RAM is of course not as large as disk. If your application holds a large amount of data, you will need more and more RAM, and using Redis stops being economical. Can we buy terabytes of RAM? No, of course not! (We can create a cluster of Redis instances, but that is not trivial.)

3) What makes Redis pretty incredible is its built-in persistence. The data will still be there even if you restart, so you can use Redis as a real database instead of just a volatile cache.

4) Another thing that makes Redis great is its additional data types. Values can be both simple and complex: simple like strings, and complex like hashes, lists (ordered collections), sets (unordered collections of non-repeating values) and sorted sets (ordered collections of non-repeating values).
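One standard Redis command per data type, again via redis-cli (the key names are invented):

```shell
redis-cli SET greeting "hello"                    # string
redis-cli LPUSH tasks "write report" "send mail"  # list (ordered)
redis-cli SADD tags linux redis nosql             # set (unordered, unique)
redis-cli ZADD scores 10 alice 20 bob             # sorted set (scored, unique)
redis-cli HSET user:1 name "Asha"                 # hash (field-value pair)
```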

MYSQL VS REDIS:
Description:
-MySQL is a widely used open-source RDBMS.
-Redis is used as a database, cache and message broker.


Database model:
-MySQL is a relational database.
-Redis is a key-value store.

Implementation language:
-MySQL is implemented in C and C++.
-Redis is implemented in ANSI C.

Data schema:
-MySQL is schema-based.
-Redis is schema-free.

XML support:
-MySQL gives XML support.
-Redis does not give XML support.

Replication methods:
-MySQL uses master-master replication and master-slave replication
-Redis uses master-slave replication

Partitioning methods:
-MySQL uses horizontal partitioning, plus sharding with MySQL Cluster.
-Redis uses only sharding.

Transaction concepts:
-MySQL supports ACID transactions. (ACID stands for the four properties of a transaction: Atomicity, Consistency, Isolation, Durability.)
-Redis offers transaction concepts like optimistic locking and atomic execution of command blocks and scripts.

User concepts:
-MySQL uses a fine-grained authorization concept.
-Redis uses simple password-based access control.

To install the Redis server on your system (on Fedora), run:

$ sudo dnf install redis

Once you have installed Redis, start and enable the redis service,

$ sudo systemctl start redis
$ sudo systemctl enable redis

 

So, are you ready to install and use Redis?

 

Linux Biosness

Have you ever thought of reading the BIOS information without rebooting your system? Well, if you have given it a thought, then yes, you can read BIOS information without rebooting.

dmidecode is the command that reports BIOS and computer hardware information in a human-readable form. A related tool, biosdecode, is a BIOS information decoder: it parses the BIOS memory and prints information about all structures (or entry points) it knows of.

DMI stands for Desktop Management Interface. The DMI table doesn't only describe what the system is currently made of; it can also report possible upgrades (such as the fastest supported CPU or the maximum amount of memory supported).
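
On Linux, the kernel also exposes a subset of these DMI fields as plain files under /sys/class/dmi/id/, readable without root. A small sketch of reading them from Python (the field names used here, such as bios_version and sys_vendor, are the ones commonly found in that directory; availability varies by machine):

```python
import os

def read_dmi(field, base="/sys/class/dmi/id"):
    """Return the given DMI field from sysfs, or None if unavailable."""
    path = os.path.join(base, field)
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

# Fields commonly present under /sys/class/dmi/id/
for field in ("bios_vendor", "bios_version", "sys_vendor", "product_name"):
    print(f"{field}: {read_dmi(field)}")
```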

$ sudo dmidecode
# dmidecode 3.1
Getting SMBIOS data from sysfs.
SMBIOS 2.7 present.
36 structures occupying 1590 bytes.
Table at 0xAAEBC000.

Handle 0x0000, DMI type 0, 24 bytes
BIOS Information
	Vendor: LENOVO
	Version: 9ACN29WW
	Release Date: 10/20/2014
	Address: 0xE0000
	Runtime Size: 128 kB
	ROM Size: 4096 kB
	Characteristics:
		PCI is supported
		BIOS is upgradeable
		BIOS shadowing is allowed
		Boot from CD is supported
		Selectable boot is supported
		EDD is supported
		Japanese floppy for NEC 9800 1.2 MB is supported (int 13h)
		Japanese floppy for Toshiba 1.2 MB is supported (int 13h)
		5.25"/360 kB floppy services are supported (int 13h)
		5.25"/1.2 MB floppy services are supported (int 13h)
		3.5"/720 kB floppy services are supported (int 13h)
		3.5"/2.88 MB floppy services are supported (int 13h)
		8042 keyboard services are supported (int 9h)
		CGA/mono video services are supported (int 10h)
		ACPI is supported
		USB legacy is supported
		BIOS boot specification is supported
		Targeted content distribution is supported
		UEFI is supported
	BIOS Revision: 0.29
	Firmware Revision: 0.29

Handle 0x0001, DMI type 1, 27 bytes
System Information
	Manufacturer: LENOVO
	Product Name: 20351
	Version: Lenovo G50-70
	Serial Number: 1047297501971
	UUID: BE5BA171-A04A-11E4-A961-68F7286EC28A
	Wake-up Type: Power Switch
	SKU Number: LENOVO_MT_20351_BU_idea_FM_Lenovo G50-70
	Family: IDEAPAD

Handle 0x0002, DMI type 2, 16 bytes
Base Board Information
	Manufacturer: LENOVO
	Product Name: Lancer 5A2
	Version: NANANANANO DPK
	Serial Number: YB08756329
	Asset Tag: No Asset Tag
	Features:
		Board is a hosting board
		Board is replaceable
	Location In Chassis: Type2 - Board Chassis Location
	Chassis Handle: 0x0003
	Type: Motherboard
	Contained Object Handles: 0

Handle 0x0003, DMI type 3, 23 bytes
Chassis Information
	Manufacturer: LENOVO
	Type: Notebook
	Lock: Not Present
	Version: Lenovo G50-70
	Serial Number: YB08756329
	Asset Tag: No Asset Tag
	Boot-up State: Safe
	Power Supply State: Safe
	Thermal State: Safe
	Security Status: None
	OEM Information: 0x00000000
	Height: Unspecified
	Number Of Power Cords: 1
	Contained Elements: 0
	SKU Number: Not Specified

Handle 0x0004, DMI type 4, 42 bytes
Processor Information
	Socket Designation: U3E1
	Type: Central Processor
	Family: Core i3
	Manufacturer: Intel(R) Corporation
	ID: 51 06 04 00 FF FB EB BF
	Signature: Type 0, Family 6, Model 69, Stepping 1
	Flags:
		FPU (Floating-point unit on-chip)
		VME (Virtual mode extension)
		DE (Debugging extension)
		PSE (Page size extension)
		TSC (Time stamp counter)
		MSR (Model specific registers)
		PAE (Physical address extension)
		MCE (Machine check exception)
		CX8 (CMPXCHG8 instruction supported)
		APIC (On-chip APIC hardware supported)
		SEP (Fast system call)
		MTRR (Memory type range registers)
		PGE (Page global enable)
		MCA (Machine check architecture)
		CMOV (Conditional move instruction supported)
		PAT (Page attribute table)
		PSE-36 (36-bit page size extension)
		CLFSH (CLFLUSH instruction supported)
		DS (Debug store)
		ACPI (ACPI supported)
		MMX (MMX technology supported)
		FXSR (FXSAVE and FXSTOR instructions supported)
		SSE (Streaming SIMD extensions)
		SSE2 (Streaming SIMD extensions 2)
		SS (Self-snoop)
		HTT (Multi-threading)
		TM (Thermal monitor supported)
		PBE (Pending break enabled)
	Version: Intel(R) Core(TM) i3-4030U CPU @ 1.90GHz
	Voltage: 0.8 V
	External Clock: 100 MHz
	Max Speed: 1900 MHz
	Current Speed: 1800 MHz
	Status: Populated, Enabled
	Upgrade: Socket BGA1168
	L1 Cache Handle: 0x000B
	L2 Cache Handle: 0x000C
	L3 Cache Handle: 0x000D
	Serial Number: To Be Filled By O.E.M.
	Asset Tag: To Be Filled By O.E.M.
	Part Number: To Be Filled By O.E.M.
	Core Count: 2
	Core Enabled: 2
	Thread Count: 4
	Characteristics:
		64-bit capable
		Multi-Core
		Hardware Thread
		Execute Protection
		Enhanced Virtualization
		Power/Performance Control

Handle 0x0005, DMI type 5, 24 bytes
Memory Controller Information
	Error Detecting Method: None
	Error Correcting Capabilities:
		None
	Supported Interleave: One-way Interleave
	Current Interleave: One-way Interleave
	Maximum Memory Module Size: 8192 MB
	Maximum Total Memory Size: 32768 MB
	Supported Speeds:
		Other
	Supported Memory Types:
		Other
	Memory Module Voltage: Unknown
	Associated Memory Slots: 4
		0x0006
		0x0007
		0x0008
		0x0009
	Enabled Error Correcting Capabilities:
		None

Handle 0x0006, DMI type 6, 12 bytes
Memory Module Information
	Socket Designation: DIMM0
	Bank Connections: None
	Current Speed: Unknown
	Type: DIMM
	Installed Size: Not Installed
	Enabled Size: Not Installed
	Error Status: OK

Handle 0x0007, DMI type 6, 12 bytes
Memory Module Information
	Socket Designation: DIMM1
	Bank Connections: None
	Current Speed: Unknown
	Type: DIMM
	Installed Size: Not Installed
	Enabled Size: Not Installed
	Error Status: OK

Handle 0x0008, DMI type 6, 12 bytes
Memory Module Information
	Socket Designation: DIMM2
	Bank Connections: None
	Current Speed: Unknown
	Type: DIMM
	Installed Size: 4096 MB (Single-bank Connection)
	Enabled Size: 4096 MB (Single-bank Connection)
	Error Status: OK

Handle 0x0009, DMI type 6, 12 bytes
Memory Module Information
	Socket Designation: DIMM3
	Bank Connections: None
	Current Speed: Unknown
	Type: DIMM
	Installed Size: Not Installed
	Enabled Size: Not Installed
	Error Status: OK

Handle 0x000A, DMI type 7, 19 bytes
Cache Information
	Socket Designation: L1 Cache
	Configuration: Enabled, Not Socketed, Level 1
	Operational Mode: Write Back
	Location: Internal
	Installed Size: 32 kB
	Maximum Size: 32 kB
	Supported SRAM Types:
		Synchronous
	Installed SRAM Type: Synchronous
	Speed: Unknown
	Error Correction Type: Single-bit ECC
	System Type: Data
	Associativity: 8-way Set-associative

Handle 0x000B, DMI type 7, 19 bytes
Cache Information
	Socket Designation: L1 Cache
	Configuration: Enabled, Not Socketed, Level 1
	Operational Mode: Write Back
	Location: Internal
	Installed Size: 32 kB
	Maximum Size: 32 kB
	Supported SRAM Types:
		Synchronous
	Installed SRAM Type: Synchronous
	Speed: Unknown
	Error Correction Type: Single-bit ECC
	System Type: Instruction
	Associativity: 8-way Set-associative

Handle 0x000C, DMI type 7, 19 bytes
Cache Information
	Socket Designation: L2 Cache
	Configuration: Enabled, Not Socketed, Level 2
	Operational Mode: Write Back
	Location: Internal
	Installed Size: 256 kB
	Maximum Size: 256 kB
	Supported SRAM Types:
		Synchronous
	Installed SRAM Type: Synchronous
	Speed: Unknown
	Error Correction Type: Single-bit ECC
	System Type: Unified
	Associativity: 8-way Set-associative

Handle 0x000D, DMI type 7, 19 bytes
Cache Information
	Socket Designation: L3 Cache
	Configuration: Enabled, Not Socketed, Level 3
	Operational Mode: Write Back
	Location: Internal
	Installed Size: 3072 kB
	Maximum Size: 3072 kB
	Supported SRAM Types:
		Synchronous
	Installed SRAM Type: Synchronous
	Speed: Unknown
	Error Correction Type: Single-bit ECC
	System Type: Unified
	Associativity: 12-way Set-associative

Handle 0x000E, DMI type 8, 9 bytes
Port Connector Information
	Internal Reference Designator: J3A1
	Internal Connector Type: None
	External Reference Designator: USB
	External Connector Type: Access Bus (USB)
	Port Type: USB

Handle 0x000F, DMI type 8, 9 bytes
Port Connector Information
	Internal Reference Designator: J3A1
	Internal Connector Type: None
	External Reference Designator: USB
	External Connector Type: Access Bus (USB)
	Port Type: USB

Handle 0x0010, DMI type 13, 22 bytes
BIOS Language Information
	Language Description Format: Long
	Installable Languages: 4
		en|US|iso8859-1
		fr|CA|iso8859-1
		ja|JP|unicode
		zh|TW|unicode
	Currently Installed Language: en|US|iso8859-1

Handle 0x0011, DMI type 15, 29 bytes
System Event Log
	Area Length: 0 bytes
	Header Start Offset: 0x0000
	Header Length: 32 bytes
	Data Start Offset: 0x0020
	Access Method: General-purpose non-volatile data functions
	Access Address: 0x0000
	Status: Valid, Not Full
	Change Token: 0x12345678
	Header Format: OEM-specific
	Supported Log Type Descriptors: 3
	Descriptor 1: POST memory resize
	Data Format 1: None
	Descriptor 2: POST error
	Data Format 2: POST results bitmap
	Descriptor 3: Log area reset/cleared
	Data Format 3: None

Handle 0x0012, DMI type 16, 23 bytes
Physical Memory Array
	Location: System Board Or Motherboard
	Use: System Memory
	Error Correction Type: None
	Maximum Capacity: 32 GB
	Error Information Handle: 0x0018
	Number Of Devices: 4

Handle 0x0013, DMI type 17, 34 bytes
Memory Device
	Array Handle: 0x0012
	Error Information Handle: Not Provided
	Total Width: Unknown
	Data Width: Unknown
	Size: No Module Installed
	Form Factor: DIMM
	Set: None
	Locator: DIMM0
	Bank Locator: BANK 0
	Type: Unknown
	Type Detail: Unknown
	Speed: Unknown
	Manufacturer: Empty
	Serial Number: Empty
	Asset Tag: Unknown
	Part Number: Empty
	Rank: Unknown
	Configured Clock Speed: Unknown

Handle 0x0014, DMI type 17, 34 bytes
Memory Device
	Array Handle: 0x0012
	Error Information Handle: Not Provided
	Total Width: Unknown
	Data Width: Unknown
	Size: No Module Installed
	Form Factor: DIMM
	Set: None
	Locator: DIMM1
	Bank Locator: BANK 1
	Type: Unknown
	Type Detail: Unknown
	Speed: Unknown
	Manufacturer: Empty
	Serial Number: Empty
	Asset Tag: Unknown
	Part Number: Empty
	Rank: Unknown
	Configured Clock Speed: Unknown

Handle 0x0015, DMI type 17, 34 bytes
Memory Device
	Array Handle: 0x0012
	Error Information Handle: 0x0017
	Total Width: 64 bits
	Data Width: 64 bits
	Size: 4096 MB
	Form Factor: SODIMM
	Set: None
	Locator: DIMM2
	Bank Locator: BANK 2
	Type: DDR3
	Type Detail: Synchronous
	Speed: 1600 MT/s
	Manufacturer: Unknown
	Serial Number: 100C2DA8
	Asset Tag: Unknown
	Part Number: RMT3170ME68F9F1600
	Rank: 1
	Configured Clock Speed: 1600 MT/s

Handle 0x0016, DMI type 17, 34 bytes
Memory Device
	Array Handle: 0x0012
	Error Information Handle: Not Provided
	Total Width: Unknown
	Data Width: Unknown
	Size: No Module Installed
	Form Factor: DIMM
	Set: None
	Locator: DIMM3
	Bank Locator: BANK 3
	Type: Unknown
	Type Detail: Unknown
	Speed: Unknown
	Manufacturer: Empty
	Serial Number: Empty
	Asset Tag: Unknown
	Part Number: Empty
	Rank: Unknown
	Configured Clock Speed: Unknown

Handle 0x0017, DMI type 18, 23 bytes
32-bit Memory Error Information
	Type: OK
	Granularity: Unknown
	Operation: Unknown
	Vendor Syndrome: Unknown
	Memory Array Address: Unknown
	Device Address: Unknown
	Resolution: Unknown

Handle 0x0018, DMI type 18, 23 bytes
32-bit Memory Error Information
	Type: OK
	Granularity: Unknown
	Operation: Unknown
	Vendor Syndrome: Unknown
	Memory Array Address: Unknown
	Device Address: Unknown
	Resolution: Unknown

Handle 0x0019, DMI type 19, 31 bytes
Memory Array Mapped Address
	Starting Address: 0x00000000000
	Ending Address: 0x000FFFFFFFF
	Range Size: 4 GB
	Physical Array Handle: 0x0012
	Partition Width: 4

Handle 0x001A, DMI type 20, 35 bytes
Memory Device Mapped Address
	Starting Address: 0x00000000000
	Ending Address: 0x000FFFFFFFF
	Range Size: 4 GB
	Physical Device Handle: 0x0015
	Memory Array Mapped Address Handle: 0x0019
	Partition Row Position: Unknown
	Interleave Position: 2
	Interleaved Data Depth: 1

Handle 0x001B, DMI type 21, 7 bytes
Built-in Pointing Device
	Type: Touch Pad
	Interface: PS/2
	Buttons: 4

Handle 0x001C, DMI type 22, 26 bytes
Portable Battery
	Location: Fake
	Manufacturer: -Virtual Battery 0-
	Manufacture Date: 07/07/2010
	Serial Number: Battery 0
	Name: CRB Battery 0
	Chemistry: Lithium Ion
	Design Capacity: Unknown
	Design Voltage: Unknown
	SBDS Version: Not Specified
	Maximum Error: Unknown
	OEM-specific Information: 0x00000000

Handle 0x001D, DMI type 24, 5 bytes
Hardware Security
	Power-On Password Status: Disabled
	Keyboard Password Status: Disabled
	Administrator Password Status: Disabled
	Front Panel Reset Status: Disabled

Handle 0x001E, DMI type 32, 20 bytes
System Boot Information
	Status: No errors detected

Handle 0x001F, DMI type 128, 8 bytes
OEM-specific Type
	Header and Data:
		80 08 1F 00 55 AA 55 AA
	Strings:
		Oem Test 1
		Oem Test 2

Handle 0x0020, DMI type 133, 5 bytes
OEM-specific Type
	Header and Data:
		85 05 20 00 01
	Strings:
		KHOIHGIUCCHHII

Handle 0x0021, DMI type 136, 6 bytes
OEM-specific Type
	Header and Data:
		88 06 21 00 FF FF

Handle 0x0022, DMI type 221, 12 bytes
OEM-specific Type
	Header and Data:
		DD 0C 22 00 01 01 00 01 06 02 00 00
	Strings:
		Reference Code - ACPI

Handle 0x0023, DMI type 127, 4 bytes
End Of Table

You can also limit dmidecode output to a specific type. The following type keywords are available:

  1. bios
  2. baseboard
  3. cache
  4. chassis
  5. connector
  6. memory
  7. processor
  8. slot
  9. system
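
Each keyword is shorthand for one or more DMI type numbers; the mapping below follows the dmidecode(8) man page. As a sketch, a keyword can be expanded into the equivalent `-t` invocation like this:

```python
# Keyword -> DMI type numbers, as listed in the dmidecode(8) man page.
DMI_TYPES = {
    "bios":      [0, 13],
    "system":    [1, 12, 15, 23, 32],
    "baseboard": [2, 10, 41],
    "chassis":   [3],
    "processor": [4],
    "memory":    [5, 6, 16, 17],
    "cache":     [7],
    "connector": [8],
    "slot":      [9],
}

def dmidecode_args(keyword):
    """Expand a keyword into the equivalent `dmidecode -t N ...` arguments."""
    return ["dmidecode"] + [arg for n in DMI_TYPES[keyword]
                            for arg in ("-t", str(n))]

print(dmidecode_args("cache"))  # → ['dmidecode', '-t', '7']
```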

The syntax for using the above keywords with dmidecode is:

$ sudo dmidecode --type <keyword>

You can also specify a DMI type number instead of a keyword:

$ sudo dmidecode -t <number>

For example:

$ sudo dmidecode -t 6
# dmidecode 3.1
Getting SMBIOS data from sysfs.
SMBIOS 2.7 present.

Handle 0x0006, DMI type 6, 12 bytes
Memory Module Information
	Socket Designation: DIMM0
	Bank Connections: None
	Current Speed: Unknown
	Type: DIMM
	Installed Size: Not Installed
	Enabled Size: Not Installed
	Error Status: OK

Handle 0x0007, DMI type 6, 12 bytes
Memory Module Information
	Socket Designation: DIMM1
	Bank Connections: None
	Current Speed: Unknown
	Type: DIMM
	Installed Size: Not Installed
	Enabled Size: Not Installed
	Error Status: OK

Handle 0x0008, DMI type 6, 12 bytes
Memory Module Information
	Socket Designation: DIMM2
	Bank Connections: None
	Current Speed: Unknown
	Type: DIMM
	Installed Size: 4096 MB (Single-bank Connection)
	Enabled Size: 4096 MB (Single-bank Connection)
	Error Status: OK

Handle 0x0009, DMI type 6, 12 bytes
Memory Module Information
	Socket Designation: DIMM3
	Bank Connections: None
	Current Speed: Unknown
	Type: DIMM
	Installed Size: Not Installed
	Enabled Size: Not Installed
	Error Status: OK