Setting up a Kubernetes cluster in VMware Workstation VMs

I have set up a Kubernetes cluster on my laptop, using VMware Workstation version 11.

Kubernetes works in a server-client setup, where a master provides centralized control for a number of minions. We will be deploying a Kubernetes master with one minion.

Kubernetes has several components:

  • etcd – A highly available key-value store for shared configuration and service discovery.
  • flannel – An etcd backed network fabric for containers.
  • kube-apiserver – Provides the API for Kubernetes orchestration.
  • kube-controller-manager – Enforces Kubernetes services.
  • kube-scheduler – Schedules containers on hosts.
  • kubelet – Processes a container manifest so the containers are launched according to how they are described.
  • kube-proxy – Provides network proxy services.

I have created two CentOS virtual machines: a master and a minion (which I will also refer to as the node).

Both VMs have the following configuration:

  • 1024MB RAM
  • 1 vCPU
  • 1 network adapter (NAT)
  • CentOS 7 OS

Note: Check that both nodes get an IP address; if not, run ifup <ethernet adapter>.
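A quick way to check (ens33 is only an example adapter name; yours may differ):

ip addr show        # confirm the adapter has an IPv4 address
ifup ens33          # bring the adapter up if it does not (adapter name assumed)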

Modify the /etc/hosts file on both VMs (master and node) so they can resolve each other by hostname, for example as shown below.
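A minimal sketch of the entries, assuming the master is 192.168.40.130 (as used later for KUBE_MASTER) and the minion is 192.168.40.132; the hostnames should match the ones referenced in the config files below:

192.168.40.130   centos-master
192.168.40.132   centos-minion-1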

Add the yum repo for the Kubernetes packages:

cat /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0

Note: Create this repo file on both the master and the minion node.

Installing packages: these need to be installed on both master and minion.

yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

Modifying the configuration files (note that /etc/kubernetes/config should be the same on both master and minion):

 cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
# (192.168.40.130 is the master IP in my setup)
KUBE_MASTER="--master=http://192.168.40.130:8080"

Edit /etc/etcd/etcd.conf on the master:

# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

Edit /etc/kubernetes/apiserver on the master:

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.40.130:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""

  • Start etcd on the master and configure it to hold the network overlay configuration. Warning: this network must be unused in your network infrastructure! 172.30.0.0/16 is free in our network.

systemctl start etcd
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
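To confirm the overlay configuration was stored, you can read the key back (same etcdctl v2 syntax as above):

etcdctl get /kube-centos/network/config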
  • Configure flannel to overlay the Docker network in /etc/sysconfig/flanneld on the master (and, as we'll see, on the nodes as well):
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  • Start the appropriate services on master:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
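Once these services are up, you can do a quick sanity check against the API server from the master (192.168.40.130 is my master's IP; adjust to yours):

curl http://192.168.40.130:8080/version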

Configure the Kubernetes services on the nodes.

We need to configure the kubelet, then start the kubelet and kube-proxy.

  • Edit /etc/kubernetes/kubelet so it looks like this:
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=centos-minion-n"

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"

# Add your own!
KUBELET_ARGS=""
  • Configure flannel to overlay the Docker network in /etc/sysconfig/flanneld (on all the nodes):
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  • Start the appropriate services on node (centos-minion-n).
for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
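After these services come up, the node should register itself with the master. From the master you can verify it (the node name will match the hostname-override set above):

kubectl get nodes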

Dashboard configuration: https://github.com/kubernetes/dashboard/releases

Check the Kubernetes version: kubectl version

You can also check it from a browser: http://192.168.40.130:8080/version  (i.e. http://<master ip>:8080/version)

Download the dashboard YAML that is supported by your Kubernetes version from the releases page:

https://github.com/kubernetes/dashboard/releases

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.5.1/src/deploy/kubernetes-dashboard.yaml
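You can confirm the dashboard pod has started; it runs in the kube-system namespace:

kubectl get pods --namespace=kube-system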

The dashboard is then reachable at http://<master ip>:8080/ui

Note: Below version 1.7, only API version v1 is supported, so use apiVersion: v1 YAML when creating pods and services.

Pod Creation

A Pod is the basic building block of Kubernetes–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.

A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.

pod yaml file:

[root@master pods]# cat mysql.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - name: mysql
      image: mysql
      resources:
        limits:
          cpu: 1
      env:
        - name: MYSQL_ROOT_PASSWORD
          # change this
          value: test123
      ports:
        - containerPort: 3306
          name: mysql

While creating the pod, you may hit an error caused by the default admission-control policies.

To resolve it, modify or comment out the KUBE_ADMISSION_CONTROL line in /etc/kubernetes/apiserver:

[root@master pods]# cat /etc/kubernetes/apiserver

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
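After changing /etc/kubernetes/apiserver, restart the API server so the new admission-control setting takes effect:

systemctl restart kube-apiserver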

kubectl create -f mysql.yaml

[root@master pods]# kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
mysql                   1/1       Running   0          2h

Service: A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them, sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector.

Create a Service for the mysql pod:

[root@master pods]# cat mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  externalIPs:
    - 192.168.40.132
  ports:
    # the port that this service should serve on
    - port: 3306
  # label keys and values that must match in order to receive traffic for this service
  selector:
    name: mysql

kubectl create -f mysql-service.yaml

kubectl get service
NAME         CLUSTER-IP       EXTERNAL-IP      PORT(S)    AGE
kubernetes   10.254.0.1       <none>           443/TCP    1d
mysql        10.254.140.187   192.168.40.132   3306/TCP   2h
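As a quick end-to-end check (a sketch, assuming the mysql client is installed on a machine that can reach 192.168.40.132), you can connect through the service's external IP with the root password set in the pod definition:

mysql -h 192.168.40.132 -P 3306 -u root -ptest123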