Running a Docker container: iptables: No chain/target/match by that name

If you are getting the below error message:

docker: Error response from daemon: driver failed programming external connectivity on endpoint jfrog-artifactory (b402f4e6bbb8591d043dbf64c0405914641aa1751ad46604cc107e5a313ae509): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8085 -j DNAT --to-destination 172.17.0.2:8085 ! -i docker0: iptables: No chain/target/match by that name.

Perform this workaround:

[root@mydev /]# sudo iptables -t filter -F
[root@mydev /]# sudo iptables -t filter -X
[root@mydev /]# systemctl restart docker
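After the restart, Docker recreates the chains it needs. As an optional sanity check (plain iptables usage, not part of the original workaround), confirm the DOCKER chain exists again:

[root@mydev /]# sudo iptables -t nat -L DOCKER -n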

Create GKE pod from local image

In this blog, I will be covering the following:

  • Create a GKE cluster from the gcloud SDK
  • A few gcloud command-line options
  • Create a container registry in Google Cloud
  • Push a local image to Google Container Registry
  • Use kubectl to create a pod from that image

Prerequisites:

My environment is CentOS 7.

sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM
yum install google-cloud-sdk
yum install docker

yum install kubectl
gcloud auth login
gcloud config set project <project name>
//log in and set the project
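If you are not sure of the project name, the projects visible to your account can be listed first (standard gcloud command):

gcloud projects list
//pick the PROJECT_ID column value for the config set step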

Create and explore Google Kubernetes Engine (GKE) from gcloud

gcloud container clusters create example-cluster --zone us-central1-c
gcloud container clusters get-credentials example-cluster --zone us-central1-c
gcloud container clusters describe example-cluster --zone us-central1-c
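A couple of optional checks (standard gcloud/kubectl usage, not from the original post) to confirm the cluster is up and kubectl is pointed at it:

gcloud container clusters list --zone us-central1-c
kubectl get nodes
//should list the GKE worker nodes once get-credentials has run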

Enable and create a container registry on Google Cloud

Tag and push the Docker image to Google Container Registry (GCR):

docker tag openwriteup:ubuntu  gcr.io/openwriteup/hellokubernetes/ubuntu
docker images
docker push gcr.io/openwriteup/hellokubernetes/ubuntu
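If the push is rejected with an authentication error, Docker may first need to be wired to your gcloud credentials. This helper (part of the Cloud SDK) registers gcloud as a Docker credential helper for gcr.io:

gcloud auth configure-docker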

YAML file to create the pod:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ubuntu
  name: ubuntu
  namespace: default
spec:
  containers:
  - image: gcr.io/openwriteup/hellokubernetes/ubuntu
    imagePullPolicy: Always
    name: ubuntu
    resources:
      requests:
        cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
//kubectl apply -f <.yaml file>
//kubectl get pods
//kubectl logs <pod name>

kubelet.service fails to start up

kubeadm init is failing because kubelet.service fails to start.

I performed the steps below and it worked for me!

#yum install -y kubelet kubeadm kubectl docker

Turn swap off: #swapoff -a

Reset kubeadm: #kubeadm reset

Now try: #kubeadm init

After that, check: #systemctl status kubelet

It should be working!
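One caveat: swapoff -a only disables swap until the next reboot. To keep it off permanently (standard Linux practice, not part of the original steps), the swap entry in /etc/fstab can be commented out too:

sed -i '/ swap / s/^/#/' /etc/fstab

This comments out the fstab swap line so swap stays off after reboot; verify the file afterwards.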

x509 cert issues after kubeadm init

While issuing the command "kubeadm token list", the below issue is reported:

failed to list bootstrap tokens [Get https://192.168.40.132:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=type%3Dbootstrap.kubernetes.io%2Ftoken: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Perform the following steps:

cp /etc/kubernetes/admin.conf ~/.kube/admin.conf

export KUBECONFIG=$HOME/.kube/admin.conf

kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
h94rrx.90dkwkukxgcp3635 23h 2018-10-22T08:29:50-07:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
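To avoid exporting KUBECONFIG in every new shell, the export line can also be appended to your shell profile (a common convenience, not from the original post):

echo 'export KUBECONFIG=$HOME/.kube/admin.conf' >> ~/.bashrc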

kubelet service is failing: Unable to update cni config: No networks found in /etc/cni/net.d

Hi,

A small troubleshooting tip:

After "kubeadm reset", trying to initialize the cluster again fails because the kubelet service is not running.

I executed "journalctl -xeu kubelet" and found this error message:

"Unable to update cni config: No networks found in /etc/cni/net.d"

I checked on github : https://github.com/kubernetes/kubernetes/issues/54918

I applied the workaround "chmod 777 /etc/cni/net.d" and tried to start the service:

systemctl start kubelet.service

This worked!!!

All of this was done on CentOS 7.
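For background: this message usually means no pod network add-on has been installed yet, so installing one is the longer-term fix. For example, flannel could be applied with its manifest (URL as published in the flannel docs of that era; check the project for the current path):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml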


Pod creation [Kubernetes]


# This is a basic pod yaml file
apiVersion: v1
kind: Pod
# Service, Pod, ReplicationController, Node: objects of Kubernetes
metadata:
  name: mypod
  labels:
    # additional metadata
    app: demo
    env: test
spec:
  # specification for the pod; we will be putting containers here
  containers:
    - name: nginx
      image: nginx
      # ports, again a collection
      ports:
        - name: http
          containerPort: 80
          protocol: TCP
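Assuming the manifest above is saved as mypod.yaml (the filename is mine, for illustration), the pod is created with the usual kubectl command:

C:\Users\amitm\Downloads>kubectl create -f mypod.yaml
pod "mypod" created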
C:\Users\amitm\Downloads>kubectl describe pod mypod
Name: mypod
Namespace: default
Node: minikube/192.168.99.100
Start Time: Wed, 22 Nov 2017 18:15:53 +0530
Labels: app=demo
 env=test
Annotations: <none>
Status: Pending
IP:
Containers:
 nginx:
 Container ID:
 Image: nginx
 Image ID:
 Port: 80/TCP
 State: Waiting
 Reason: ContainerCreating
 Ready: False
 Restart Count: 0
 Environment: <none>
 Mounts:
 /var/run/secrets/kubernetes.io/serviceaccount from default-token-4462j (ro)
Conditions:
 Type Status
 Initialized True
 Ready False
 PodScheduled True
Volumes:
 default-token-4462j:
 Type: Secret (a volume populated by a Secret)
 SecretName: default-token-4462j
 Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
 Type Reason Age From Message
 ---- ------ ---- ---- -------
 Normal Scheduled 4m default-scheduler Successfully assigned mypod to minikube
 Normal SuccessfulMountVolume 4m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4462j"
 Normal Pulling 4m kubelet, minikube pulling image "nginx"
//running describe again after the image pull completes
C:\Users\amitm\Downloads>kubectl describe pod mypod
Name: mypod
Namespace: default
Node: minikube/192.168.99.100
Start Time: Wed, 22 Nov 2017 18:15:53 +0530
Labels: app=demo
 env=test
Annotations: <none>
Status: Running
IP: 172.17.0.10
Containers:
 nginx:
 Container ID: docker://5851633d71162a58861a48cd2b1a310f1e069df22e449d05c327fd34068e707f
 Image: nginx
 Image ID: docker-pullable://nginx@sha256:9fca103a62af6db7f188ac3376c60927db41f88b8d2354bf02d2290a672dc425
 Port: 80/TCP
 State: Running
 Started: Wed, 22 Nov 2017 18:20:15 +0530
 Ready: True
 Restart Count: 0
 Environment: <none>
 Mounts:
 /var/run/secrets/kubernetes.io/serviceaccount from default-token-4462j (ro)
Conditions:
 Type Status
 Initialized True
 Ready True
 PodScheduled True
Volumes:
 default-token-4462j:
 Type: Secret (a volume populated by a Secret)
 SecretName: default-token-4462j
 Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
 Type Reason Age From Message
 ---- ------ ---- ---- -------
 Normal Scheduled 9m default-scheduler Successfully assigned mypod to minikube
 Normal SuccessfulMountVolume 9m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4462j"
 Normal Pulling 9m kubelet, minikube pulling image "nginx"
 Normal Pulled 5m kubelet, minikube Successfully pulled image "nginx"
 Normal Created 5m kubelet, minikube Created container
 Normal Started 5m kubelet, minikube Started container

C:\Users\amitm\Downloads>
//creating the service using the expose command

C:\Users\amitm\Downloads>kubectl.exe expose pod mypod --type=NodePort
service "mypod" exposed

C:\Users\amitm\Downloads>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1d
mypod NodePort 10.0.0.39 <none> 80:32271/TCP 9s
redis ClusterIP 10.0.0.240 <none> 6379/TCP 1d
web NodePort 10.0.0.97 <none> 80:31130/TCP 1d

C:\Users\amitm\Downloads>kubectl describe svc mypod
Name: mypod
Namespace: default
Labels: app=demo
 env=test
Annotations: <none>
Selector: app=demo,env=test
Type: NodePort
IP: 10.0.0.39
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32271/TCP //port where our service is exposed
Endpoints: 172.17.0.10:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

C:\Users\amitm\Downloads>kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready <none> 1d v1.8.0

http://<nodeip>:32271 //the NodePort shown in the service description above
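On minikube, the reachable URL can also be printed directly with minikube's built-in helper, which resolves the node IP and NodePort for you:

C:\Users\amitm\Downloads>minikube service mypod --url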


Deploy minishift on VirtualBox [Windows 10]

I have Windows 10 on my personal laptop. Below is the configuration:

VirtualBox: version 5.1.22

minishift: minishift-1.7.0-windows-amd64

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>minishift.exe start --vm-driver=virtualbox
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
 Memory: 2 GB
 vCPUs : 2
 Disk size: 20 GB

Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.2.0/minishift-b2d.iso'
40.00 MiB / 40.00 MiB [===================================================================================] 100.00% 0s
-- Starting Minishift VM ........................... OK
-- Checking for IP address ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 0% used OK
-- Downloading OpenShift binary 'oc' version 'v3.6.0'
33.92 MiB / 33.92 MiB [===================================================================================] 100.00% 0s
-- Downloading OpenShift v3.6.0 checksums ... OK
-- OpenShift cluster will be configured with ...
Version: v3.6.0
-- Checking 'oc' support for startup flags ...
host-data-dir ... OK
host-pv-dir ... OK
host-volumes-dir ... OK
routing-suffix ... OK
host-config-dir ... OK
Starting OpenShift using openshift/origin:v3.6.0 …
Pulling image openshift/origin:v3.6.0
Pulled 1/4 layers, 26% complete
Pulled 1/4 layers, 27% complete
...
Pulled 3/4 layers, 98% complete
Pulled 4/4 layers, 100% complete
Extracting
Image pull complete
OpenShift server started.

The server is accessible via web console at:
https://192.168.99.100:8443

You are logged in as:
User: developer
Password: <any value>

To login as administrator:
oc login -u system:admin

Setting up oc command lines :

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>minishift.exe oc-env
SET PATH=C:\Users\amitm\.minishift\cache\oc\v3.6.0;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>oc
OpenShift Client

This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible
platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand.

To create a new application, login to your server and then run new-app:

oc login https://mycluster.mycompany.com
 oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
 oc logs -f bc/ruby-ex

This will create an application based on the Docker image 'centos/ruby-22-centos7' that builds the source code from GitHub. A build will start automatically, push the resulting image to the registry, and a deployment will roll that change out in your project.

Once your application is deployed, use the status, describe, and get commands to see more about the created components:

oc status
 oc describe deploymentconfig ruby-ex
 oc get pods

To make this application visible outside of the cluster, use the expose command on the service we just created to create
a 'route' (which will connect your application over the HTTP port to a public domain name).

oc expose svc/ruby-ex
 oc status

You should now see the URL the application can be reached at.

To see the full list of commands supported, run 'oc --help'.

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>oc get pods
No resources found.

minishift.exe ssh //will allow you to log in to the minishift VM


Setting up a Kubernetes cluster in VMware Workstation VMs

I have set up a Kubernetes cluster on my laptop, where I have VMware Workstation version 11 installed.

Kubernetes works in a server-client setup, where a master provides centralized control for a number of minions. We will be deploying a Kubernetes master with one minion.

Kubernetes has several components:

  • etcd – A highly available key-value store for shared configuration and service discovery.
  • flannel – An etcd backed network fabric for containers.
  • kube-apiserver – Provides the API for Kubernetes orchestration.
  • kube-controller-manager – Enforces Kubernetes services.
  • kube-scheduler – Schedules containers on hosts.
  • kubelet – Processes a container manifest so the containers are launched according to how they are described.
  • kube-proxy – Provides network proxy services.

I have created two CentOS virtual machines: master and minion [which I will refer to as node].

Both VMs have the following configuration:

  • 1024MB RAM
  • 1 vCPU
  • 1 Network adapter with setting [NAT]
  • CentOS 7 OS

Note: Check that both nodes get an IP; otherwise run ifup <ethernet adapter>

Modify the /etc/hosts file on both the master and node VMs so each can resolve the other by name, as sketched below.
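A minimal sketch of the entries (the master IP matches this setup; the minion IP and hostnames are assumptions, adjust them to your VMs):

#/etc/hosts on both VMs
192.168.40.130   centos-master
192.168.40.132   centos-minion-1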

Map the yum repos for kubernetes packages:

cat /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0

Note: Create the repo file on both the master and the minion node.

Install the packages: this needs to be done on both the master and the minion.

yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

Modifying the configuration files:

 cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.40.130:8080" #my setup master ip 

Edit /etc/etcd/etcd.conf

# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

Edit /etc/kubernetes/apiserver

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.40.130:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""
  • Start etcd on the master and configure it to hold the network overlay configuration. Warning: this network must be unused in your network infrastructure! 172.30.0.0/16 is free in our network.

systemctl start etcd
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"

  • Configure flannel to overlay the Docker network in /etc/sysconfig/flanneld on the master (also on the nodes, as we'll see):
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  • Start the appropriate services on master:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done


Configure the Kubernetes services on the nodes.

We need to configure the kubelet, then start the kubelet and the proxy.

  • Edit /etc/kubernetes/kubelet to appear as such:
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=centos-minion-n"

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"

# Add your own!
KUBELET_ARGS=""
  • Configure flannel to overlay Docker network in /etc/sysconfig/flanneld (in all the nodes)
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  • Start the appropriate services on node (centos-minion-n).
for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
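Once the node services are up, registration can be verified from the master (standard kubectl usage; the node name depends on the hostname-override set above):

kubectl get nodes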

Dashboard configuration: https://github.com/kubernetes/dashboard/releases

Check the version of Kubernetes: kubectl version

You can check from a browser as well: http://192.168.40.130:8080/version  #http://<master ip>:8080/version

Download the appropriate supported dashboard YAML from:

https://github.com/kubernetes/dashboard/releases

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.5.1/src/deploy/kubernetes-dashboard.yaml

http://<master ip>:8080/ui

Note: Below version 1.7 it only supports API version 1, so when creating pods and services please use API v1 YAML only.

Pod Creation

A Pod is the basic building block of Kubernetes–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.

A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.

Pod YAML file:

[root@master pods]# cat mysql.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - resources:
        limits:
          cpu: 1
      image: mysql
      name: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          # change this
          value: test123
      ports:
          - containerPort: 3306
            name: mysql

While creating the pod, you may hit an admission-control issue.

To solve it, comment out the KUBE_ADMISSION_CONTROL line in /etc/kubernetes/apiserver, as shown below:

[root@master pods]# cat /etc/kubernetes/apiserver

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
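After editing the file, the API server must be restarted for the change to take effect (standard systemd usage):

systemctl restart kube-apiserver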

Service: A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector.

kubectl create -f mysql.yaml

[root@master pods]# kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
mysql                   1/1       Running   0          2h

Create Service for mysql pod

[root@master pods]# cat mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  externalIPs:
    - 192.168.40.132
  ports:
    # the port that this service should serve on
    - port: 3306
    # label keys and values that must match in order to receive traffic for this service
  selector:
    name: mysql

kubectl create -f mysql-service.yaml

kubectl get service
NAME         CLUSTER-IP       EXTERNAL-IP      PORT(S)    AGE
kubernetes   10.254.0.1       <none>           443/TCP    1d
mysql        10.254.140.187   192.168.40.132   3306/TCP   2h
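As a quick connectivity check (assuming a mysql client is installed on some machine in the network; the IP and password come from the manifests above):

mysql -h 192.168.40.132 -P 3306 -u root -ptest123 -e "select version();"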


Kubernetes setup using minikube

Kubernetes can run anywhere. It can be integrated with cloud providers such as AWS and Google Cloud. For a home lab or for testing purposes, minikube is a perfect fit: it can quickly spin up a cluster on a local machine.

  • github link: http://github.com/kubernetes/minikube

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a Linux VM. It is aimed at users who want to try Kubernetes out or use it for development.

Minikube setup:

  • It works on Linux, Windows, or macOS
  • A virtualization package needs to be installed to run minikube
    • VirtualBox is free and can be downloaded from www.virtualbox.org
  • minikube can be downloaded from: http://github.com/kubernetes/minikube
  • The cluster can be launched from a terminal (shell or PowerShell)

In my home lab, I have CentOS 7 installed on my laptop:

  • 4 GB RAM
  • Core i3 processor (gen 2)
  • Intel VT enabled in the BIOS

Note: I have installed VirtualBox, curl, and wget on my CentOS 7 machine.

[root@localhost ~]# curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.17.1/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

 51 83.3M   51 42.9M    0     0   166k      0  0:08:32  0:04:24  0:04:08  177k

Once minikube has downloaded, I start it:

[root@localhost /]# minikube start
Starting local Kubernetes cluster...
Starting VM...
Downloading Minikube ISO
 89.24 MB / 89.24 MB [==============================================] 100.00% 0s
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.

Note: I had already downloaded kubectl; it needs no configuration, just download it and copy it into /usr/local/bin.

Checking the config file:

[root@localhost /]# ls ~/.kube/
config

[root@localhost /]# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/apiserver.crt
    client-key: /root/.minikube/apiserver.key
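A quick sanity check (standard kubectl commands, not from the original post) that kubectl is talking to the new cluster:

[root@localhost /]# kubectl config current-context
minikube
[root@localhost /]# kubectl cluster-info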

Using kubectl, I pull an image from my local Docker registry and expose its endpoint URL. The commands below create a deployment, expose the port, and pull the image.

kubectl run my-jenkins --image=localhost:5000/jenkinslocal:latest --port=8080
deployment "my-jenkins" created

kubectl expose deployment my-jenkins --type=NodePort
service "my-jenkins" exposed

minikube service my-jenkins --url
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
#Above command will take time to pull the image, depending on internet speed.

Note: It will pull the image and provide an endpoint URL.


Kubernetes introduction

When an application grows beyond a single host, a need arises for what has come to be called an orchestration system: something that helps users view a set of hosts as a unified, programmable, reliable cluster. Kubernetes (often abbreviated as k8s) is an open-source system started by Google to fill this need.

Kubernetes Architecture


A Kubernetes cluster includes the following:

Kubernetes master services: These centralized services provide an API, collect and surface the current state of the cluster, and assign pods to nodes. Users mostly connect to the master API; this provides a unified view of the cluster.

Master storage [etcd]: This is persistent storage. Currently all cluster state is preserved and stored in etcd.

Kubelet: This agent runs on every node, and is responsible for driving Docker, reporting status to the master and setting up node-level resources.

Proxy: This also runs on each node and provides local containers a single network endpoint to reach an array of pods.

Pods: A group of containers that must be placed on a single node and work together as a team, allowing a set of containers to work closely together on a single node.

A user interacts with the Kubernetes master through kubectl, which calls the Kubernetes API. The master is responsible for storing a description of what users want to run. On each worker node in the cluster, the kubelet and the proxy run. The kubelet is responsible for driving Docker and setting up other node-specific state like storage volumes. The proxy is responsible for providing a local endpoint.

Kubernetes works to manage pods. Pods are a grouping of compute resources that provides context for a set of containers. Users can use pods to force a set of containers that work as a team to be scheduled on a single physical node.

Pods define a shared network interface. Unlike regular containers, containers in a pod all share the same network interface. This allows easy access across containers using localhost. It also means that different containers in the same pod cannot use the same network port.

Storage volumes are defined as part of the pod. These volumes can be mapped into multiple containers as needed.