Deploy Minishift on VirtualBox [Windows 10]

I have Windows 10 on my personal laptop. Below is the configuration:

VirtualBox: Version 5.1.22

Minishift: minishift-1.7.0-windows-amd64

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>minishift.exe start --vm-driver=virtualbox
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
 Memory: 2 GB
 vCPUs : 2
 Disk size: 20 GB

Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.2.0/minishift-b2d.iso'
40.00 MiB / 40.00 MiB [===================================================================================] 100.00% 0s
-- Starting Minishift VM .................................. OK
-- Checking for IP address ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 0% used OK
-- Downloading OpenShift binary 'oc' version 'v3.6.0'
33.92 MiB / 33.92 MiB [===================================================================================================================================] 100.00% 0s
-- Downloading OpenShift v3.6.0 checksums ... OK
-- OpenShift cluster will be configured with ...
Version: v3.6.0
-- Checking `oc` support for startup flags ...
host-data-dir ... OK
host-pv-dir ... OK
host-volumes-dir ... OK
routing-suffix ... OK
host-config-dir ... OK
Starting OpenShift using openshift/origin:v3.6.0 ...
Pulling image openshift/origin:v3.6.0
Pulled 1/4 layers, 26% complete
Pulled 1/4 layers, 27% complete
Pulled 1/4 layers, 27% complete
Pulled 1/4 layers, 28% complete
Pulled 1/4 layers, 29% complete
Pulled 1/4 layers, 29% complete
Pulled 1/4 layers, 49% complete
Pulled 1/4 layers, 51% complete
Pulled 1/4 layers, 54% complete
Pulled 1/4 layers, 57% complete
Pulled 1/4 layers, 59% complete
Pulled 1/4 layers, 61% complete
Pulled 1/4 layers, 63% complete
Pulled 1/4 layers, 66% complete
Pulled 1/4 layers, 69% complete
Pulled 1/4 layers, 73% complete
Pulled 1/4 layers, 75% complete
Pulled 1/4 layers, 75% complete
Pulled 1/4 layers, 76% complete
Pulled 1/4 layers, 78% complete
Pulled 1/4 layers, 79% complete
Pulled 2/4 layers, 82% complete
Pulled 2/4 layers, 84% complete
Pulled 2/4 layers, 85% complete
Pulled 3/4 layers, 88% complete
Pulled 3/4 layers, 89% complete
Pulled 3/4 layers, 91% complete
Pulled 3/4 layers, 92% complete
Pulled 3/4 layers, 93% complete
Pulled 3/4 layers, 94% complete
Pulled 3/4 layers, 96% complete
Pulled 3/4 layers, 97% complete
Pulled 3/4 layers, 98% complete
Pulled 3/4 layers, 98% complete
Pulled 4/4 layers, 100% complete
Extracting
Image pull complete
OpenShift server started.

The server is accessible via web console at:
https://192.168.99.100:8443

You are logged in as:
User: developer
Password: <any value>

To login as administrator:
oc login -u system:admin

Setting up the oc command line:

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>minishift.exe oc-env
SET PATH=C:\Users\amitm\.minishift\cache\oc\v3.6.0;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>oc
OpenShift Client

This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible
platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand.

To create a new application, login to your server and then run new-app:

oc login https://mycluster.mycompany.com
 oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
 oc logs -f bc/ruby-ex

This will create an application based on the Docker image 'centos/ruby-22-centos7' that builds the source code from GitHub. A build will start automatically, push the resulting image to the registry, and a deployment will roll that change out in your project.

Once your application is deployed, use the status, describe, and get commands to see more about the created components:

oc status
 oc describe deploymentconfig ruby-ex
 oc get pods

To make this application visible outside of the cluster, use the expose command on the service we just created to create
a 'route' (which will connect your application over the HTTP port to a public domain name).

oc expose svc/ruby-ex
 oc status

You should now see the URL the application can be reached at.

To see the full list of commands supported, run 'oc --help'.

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>oc get pods
No resources found.
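
At this point the cluster can be exercised end to end; for example, following the sample from the oc help text above (a sketch only; the developer login accepts any password, and the URL is the console address printed at startup):

oc login https://192.168.99.100:8443 -u developer -p anything
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
oc logs -f bc/ruby-ex
oc expose svc/ruby-ex
oc status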

minishift.exe ssh    # logs you in to the Minishift VM

 

Setting up a Kubernetes cluster in a VMware Workstation VM

I have set up a Kubernetes cluster on my laptop, using VMware Workstation version 11.

Kubernetes works in a server-client setup, with a master providing centralized control for a number of minions. We will be deploying a Kubernetes master with one minion.

Kubernetes has several components:

  • etcd – A highly available key-value store for shared configuration and service discovery.
  • flannel – An etcd backed network fabric for containers.
  • kube-apiserver – Provides the API for Kubernetes orchestration.
  • kube-controller-manager – Enforces Kubernetes services.
  • kube-scheduler – Schedules containers on hosts.
  • kubelet – Processes a container manifest so the containers are launched according to how they are described.
  • kube-proxy – Provides network proxy services.

I have created two CentOS virtual machines: a master and a minion [which I will refer to as the node].

Both VMs have the following configuration:

  • 1024MB RAM
  • 1 vCPU
  • 1 Network adapter with setting [NAT]
  • CentOS 7 OS

Note: Check that both nodes get an IP address; if not, run ifup <ethernet adapter>.

Modify the /etc/hosts file on both the master and the node VMs, for example as shown below.
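
A minimal sketch of the entries (the master IP 192.168.40.130 is the one used later in this post; the minion IP 192.168.40.132 and the hostname centos-minion-1 are assumptions and should match your node and the hostname override used in the kubelet config):

192.168.40.130   centos-master
192.168.40.132   centos-minion-1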

Map the yum repos for kubernetes packages:

cat /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0

Note: Create this repo file on both the master and the minion node.

Install the packages: these need to be installed on both the master and the minion.

yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

Modifying the configuration files:

 cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.40.130:8080"    # master IP in my setup

Edit /etc/etcd/etcd.conf

# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

Edit /etc/kubernetes/apiserver

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.40.130:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""
  • Start etcd on the master and configure it to hold the network overlay configuration. Warning: this network must be unused in your network infrastructure! 172.30.0.0/16 is free in our network.

systemctl start etcd
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
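
To confirm the overlay configuration was stored, the key can be read back with the same etcd v2 client (a quick sanity check):

etcdctl get /kube-centos/network/config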
  • Configure flannel to overlay Docker network in /etc/sysconfig/flanneld on the master (also in the nodes as we’ll see):
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  • Start the appropriate services on master:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done



Configure the Kubernetes services on the nodes.

We need to configure the kubelet and start the kubelet and proxy

  • Edit /etc/kubernetes/kubelet to appear as such:
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=centos-minion-n"

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"

# Add your own!
KUBELET_ARGS=""
  • Configure flannel to overlay Docker network in /etc/sysconfig/flanneld (in all the nodes)
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  • Start the appropriate services on node (centos-minion-n).
for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
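
Once the services are running on both machines, a quick check from the master confirms that the node has registered (the node name should match the hostname override set in the kubelet configuration):

kubectl get nodes     # the minion should be listed with STATUS Ready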

Dashboard configuration: https://github.com/kubernetes/dashboard/releases

Check the version of Kubernetes: kubectl version

You can check from a browser as well: http://192.168.40.130:8080/version (i.e. http://<master ip>:8080/version)

Download the appropriate supported dashboard YAML from:

https://github.com/kubernetes/dashboard/releases

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.5.1/src/deploy/kubernetes-dashboard.yaml

http://<master ip>:8080/ui

Note: Below version 1.7, only API version 1 is supported, so when creating pods and services please use apiVersion: v1 YAML only.

Pod Creation

A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.

A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.

pod yaml file:

[root@master pods]# cat mysql.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - resources:
        limits:
          cpu: 1
      image: mysql
      name: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          # change this
          value: test123
      ports:
          - containerPort: 3306
            name: mysql

While creating the pod, you may hit an admission-control error from the API server (typically a complaint that the default service account is missing or has no API token).

To resolve this, comment out the KUBE_ADMISSION_CONTROL line in /etc/kubernetes/apiserver (shown below) and restart kube-apiserver.

[root@master pods]# cat /etc/kubernetes/apiserver

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

Service: A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them, sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector.

kubectl create -f mysql.yaml

[root@master pods]# kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
mysql                   1/1       Running   0          2h

Create Service for mysql pod

[root@master pods]# cat mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  externalIPs:
    - 192.168.40.132
  ports:
    # the port that this service should serve on
    - port: 3306
    # label keys and values that must match in order to receive traffic for this service
  selector:
    name: mysql

kubectl create -f mysql-service.yaml

kubectl get service
NAME         CLUSTER-IP       EXTERNAL-IP      PORT(S)    AGE
kubernetes   10.254.0.1       <none>           443/TCP    1d
mysql        10.254.140.187   192.168.40.132   3306/TCP   2h
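
With the external IP mapped, the database should be reachable from outside the cluster. For example, assuming the mysql client is installed on a machine that can reach 192.168.40.132, and using the root password set in the pod definition (test123):

mysql -h 192.168.40.132 -P 3306 -u root -p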

 

pyVmomi RPM for CentOS 7

pyVmomi is the Python SDK for the VMware vSphere API that allows you to manage ESX, ESXi, and vCenter. pyVmomi is available on GitHub:

https://github.com/vmware/pyvmomi

I have created an RPM package of the same pyVmomi SDK for CentOS 7. The RPM installs into the /opt folder on your CentOS 7 system.

Below is the Spec file:

%define BUILD pyvmomi_master.1.0.1.x86_64
Summary: Pyvmomi package
Name: pyvmomi_master
Release: 1.0
Version: 1
License: Apache License 2.0
Requires: python-six
Requires: python-requests
Requires: python-setuptools
BuildArch: noarch

%description
This package contains the vSphere python SDK

%post
%files
%defattr(-,root,root,-)
/opt/pyvmomi-master
%doc
%changelog
* Fri Jul 14 2017 Amit <amit@openwriteup.com> 1-1.0
- Pyvmomi 6.5
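
To build an RPM from a spec file like this, rpmbuild is used (a sketch only; it assumes the rpm-build package is installed and that the pyvmomi-master sources and any remaining build sections are in place under ~/rpmbuild):

yum -y install rpm-build
rpmbuild -ba pyvmomi_master.spec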

Once you install the rpm, it will be in the /opt/pyvmomi-master folder.

 rpm -ivh pyvmomi_master-1-1.0.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:pyvmomi_master-1-1.0             ################################# [100%]


[root@devbox noarch]# ls /opt/pyvmomi-master/
docs  LICENSE.txt  MANIFEST.in  NOTICE.txt  pyVim  pyVmomi  README.rst  requirements.txt  sample  setup.cfg  setup.py  test-requirements.txt  tests  tox.ini

After installing the package, we need to run the following steps:

[root@devbox pyvmomi-master]# python setup.py --help
Common commands: (see '--help-commands' for more)

setup.py build      will build the package underneath 'build/'
setup.py install    will install the package

 python setup.py install
running install
running bdist_egg
running egg_info
creating pyvmomi.egg-info
writing requirements to pyvmomi.egg-info/requires.txt
writing pyvmomi.egg-info/PKG-INFO
writing top-level names to pyvmomi.egg-info/top_level.txt
writing dependency_links to pyvmomi.egg-info/dependency_links.txt
writing manifest file 'pyvmomi.egg-info/SOURCES.txt'
reading manifest file 'pyvmomi.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'pyvmomi.egg-info/SOURCES.txt'
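
A quick sanity check that the SDK is importable after installation (assuming the system Python 2 on CentOS 7):

python -c "import pyVmomi; from pyVim.connect import SmartConnect; print 'pyVmomi installed OK'"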

Kubernetes Setup Using Minikube

Kubernetes can run anywhere. It can be integrated with cloud providers such as AWS and Google Cloud. For a home lab or testing purposes, Minikube is a perfect fit: it can quickly spin up a cluster on a local machine.

  • GitHub link: http://github.com/kubernetes/minikube

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a Linux VM. It is aimed at users who want to test Kubernetes out or use it for development.

Minikube setup:

  • It works on Linux, Windows, or macOS
  • A virtualization product must be installed to run Minikube
    • VirtualBox is free and can be downloaded from www.virtualbox.org
  • Minikube can be downloaded from: http://github.com/kubernetes/minikube
  • The cluster can be launched from a terminal, shell, or PowerShell

In my home lab, I have CentOS 7 installed on my laptop:

  • 4 GB RAM
  • Core i3 processor (gen2)
  • Intel VT enabled in the BIOS

Note: I have installed VirtualBox, curl, and wget on my CentOS 7 machine.

[root@localhost ~]# curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.17.1/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

 51 83.3M   51 42.9M    0     0   166k      0  0:08:32  0:04:24  0:04:08  177k

Once Minikube has downloaded, I start it:

[root@localhost /]# minikube start
Starting local Kubernetes cluster...
Starting VM...
Downloading Minikube ISO
 89.24 MB / 89.24 MB [==============================================] 100.00% 0s
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.

Note: I had already downloaded kubectl; no configuration is needed, just download the binary and copy it to /usr/local/bin, for example as shown below.
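
A sketch of that kubectl download (the release version v1.5.3 is an assumption; pick one matching your cluster):

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/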

Checking the config file

[root@localhost /]# ls ~/.kube/
config

[root@localhost /]# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/apiserver.crt
    client-key: /root/.minikube/apiserver.key
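
With the kubeconfig in place, a couple of simple commands confirm that kubectl can reach the Minikube cluster:

kubectl config current-context
kubectl get nodes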

Using kubectl, I am pulling an image from my local Docker registry and exposing an endpoint URL. The commands below create a deployment, expose its port as a service, and pull the image.

kubectl run my-jenkins --image=localhost:5000/jenkinslocal:latest --port=8080

deployment "my-jenkins" created

kubectl expose deployment my-jenkins --type=NodePort

service "my-jenkins" exposed

minikube service my-jenkins --url

Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
# The above command will take time while the image is pulled, depending on internet speed.

Note: It will pull the image and then print an endpoint URL.

 

 

Kubernetes introduction

When an application grows beyond a single host, a need arises for what has come to be called an orchestration system. An orchestration system helps users view a set of hosts as a unified, programmable, reliable cluster. Kubernetes (often abbreviated as k8s) is an open-source system started by Google to fill this need.

Kubernetes Architecture


A Kubernetes cluster includes the following:

Kubernetes master services: These centralized services provide an API, collect and surface the current state of the cluster, and assign pods to nodes. Users mostly connect to the master API; this provides a unified view of the cluster.

Master storage [etcd]: This is persistent storage. Currently all cluster state is preserved and stored in etcd.

Kubelet: This agent runs on every node and is responsible for driving Docker, reporting status to the master, and setting up node-level resources.

Proxy: This also runs on each node and provides local containers a single network endpoint to reach an array of pods.

Pods: A group of containers that must be placed on a single node and work together as a team. Pods allow a set of containers to work closely together on a single node.

A user interacts with the Kubernetes master through kubectl, which calls the Kubernetes API. The master is responsible for storing a description of what users want to run. On each worker node in the cluster, the kubelet and proxy run. The kubelet is responsible for driving Docker and setting up other node-specific state such as storage volumes. The proxy is responsible for providing a local endpoint.

Kubernetes works to manage pods. Pods are a grouping of compute resources that provides context for a set of containers. Users can use pods to force a set of containers that work as a team to be scheduled on a single physical node.

Pods define a shared network interface. Unlike regular containers, containers in a pod all share the same network interface. This allows easy access across containers using localhost. It also means that different containers in the same pod cannot use the same network port.

Storage volumes are defined as part of the pod. These volumes can be mapped into multiple containers as needed.

 

Building a CI/CD pipeline with Docker

CI/CD Terminology :

Docker Registry: A repository of images. Registries can be public or private and contain images for download. Some registries allow users to upload images to make them available to others.

Dockerfile: A configuration file with build instructions for a Docker image. A Dockerfile automates the steps of image customization.

As discussed, two types of registries are available:

Public Docker Registry: Users who just want to explore, or who want a built-in solution, can use hosted Docker registries. Docker Hub is an example of a public registry where users can host their images.

Private Docker Registry: Many organizations have security concerns about public Docker registries and prefer to run their own.

CI/CD workflow

  • A developer pushes a commit to GitHub
  • GitHub uses a webhook to notify Jenkins of the update
  • Jenkins pulls the GitHub repository, including the Dockerfile describing the image as well as the application and test code (a sample Dockerfile sketch follows this list)
  • Jenkins builds a Docker image on the Jenkins slave node
  • Jenkins instantiates the Docker container on the slave node and runs the appropriate tests
  • If the tests are successful, the image is pushed to the Docker registry
  • A tester/user can pull the image from the Docker registry and use it
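
A minimal, hypothetical Dockerfile of the kind such a repository might contain (an illustration only; it assumes a Node.js application with a package.json and a server.js entry point):

FROM node:6
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]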

 

echo "# projectnode" >> README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin https://github.com/amitopenwriteup/projectnode.git
git push -u origin master

Jenkins setup

https://pkg.jenkins.io/redhat/

sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
yum install jenkins

Install the GitHub plugins in Jenkins: http://<ip>:8080

Create a slave node in Jenkins -> Configuration -> Manage Nodes

Create a project in Jenkins and configure it for source code management

Now map the GitLab repository in the Jenkins project

Note: I have created a local GitLab on my CentOS machine; follow the steps mentioned in this blog: https://www.server-world.info/en/note?os=CentOS_7&p=gitlab

We can also create a local Docker registry:

yum install docker-registry

systemctl start docker-registry

In this blog we have created GitLab, Jenkins, and a Docker registry.

Now push the Dockerfile and code to GitLab. Configure the webhook in Git and configure Jenkins.

Below is the Git configuration.

 

All the components of the CI/CD pipeline are now ready in the in-house lab.

 

Finding the IP address of a container

There are many ways to find the IP of a container. I am listing some of the methods below.

[root@localhost ~]# docker run -d --name mylinux rhel7
9f7213f3c373b06a6a5d8d762bd05c2282db9e1ecb0da421b8280e07291c0535

docker inspect mylinux

"NetworkSettings": {
            "Bridge": "",
            "SandboxID": "f8b8d1847392e520fc79cff3c3637fdfbd28aaded2290ad851aaab3adce23c6f",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": null,
            "SandboxKey": "/var/run/docker/netns/f8b8d1847392",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "fd99db2fc2a78f959b1e3b691d4a38125f5e5a2422267c08136d2ad0746e8e35",
                    "EndpointID": "",
                    "Gateway": "",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ""
                }
docker inspect --format '{{.NetworkSettings.IPAddress}}' mylinux
 172.17.0.2

Alternatively, run a command inside the container with docker exec and check the IP address from within:

docker exec -ti mylinux ip add |grep global

 

Docker registry and configuring a private registry on CentOS 7

Docker Registry: A repository of images. Registries can be public or private and contain images for download. Some registries allow users to upload images to make them available to others.

Dockerfile: A configuration file with build instructions for a Docker image. A Dockerfile automates the steps of image customization.

As discussed, two types of registries are available:

Public Docker Registry: Users who just want to explore, or who want a built-in solution, can use hosted Docker registries. Docker Hub is an example of a public registry where users can host their images.

Private Docker Registry: Many organizations have security concerns about public Docker registries. Those that want to manage and store customized images and share them only within the organization can go for a private Docker registry.

Note: In this post, I will be configuring a private Docker registry.

cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core)

Installing docker-registry

yum install docker-registry

// Docker Registry configuration files (rpm -ql, under the /etc directory):

[root@localhost amit]# rpm -ql docker-registry-0.9.1-7.el7.x86_64
/etc/docker-registry.yml
/etc/sysconfig/docker-registry

// Docker Registry default image store location:

cat /etc/docker-registry.yml

local: &local
    <<: *common
    storage: local
    storage_path: _env:STORAGE_PATH:/var/lib/docker-registry

// Enable and start the docker registry; the default port is 5000:

systemctl enable docker-registry
systemctl start docker-registry

systemctl status docker-registry
docker-registry.service - Registry server for Docker
 Loaded: loaded (/usr/lib/systemd/system/docker-registry.service; enabled; vendor preset: disabled)
 Active: active (running) since Tue 2017-01-24 13:21:37 IST; 55s ago
 Main PID: 24323 (gunicorn)
 Memory: 162.8M
 CGroup: /system.slice/docker-registry.service

netstat -tupln|grep 5000
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 24323/python
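
Since this docker-registry package implements the v1 registry API, a simple HTTP check against the default port also works (assuming you run it on the registry host itself):

curl http://localhost:5000/v1/_ping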

The Docker registry is running on one of the host nodes; it needs to be accessible from other nodes for uploading/downloading images.

On each client node that needs access to the private registry, modify /etc/sysconfig/docker (use the registry host's IP:5000 when the registry is not local):

DOCKER_OPTS="--insecure-registry localhost:5000"
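
After editing the file, restart the Docker daemon so the option takes effect:

systemctl restart docker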

link: https://docs.docker.com/registry/insecure/

//tag and push the image to docker registry

docker tag  <image id>  <ip of docker registry>:5000/<name>:<tag>

docker push    <ip of docker registry>:5000/<name>:<tag>

docker tag ff6f0851ef57 localhost:5000/jenkinslocal:latest
docker push localhost:5000/jenkinslocal:latest 
The push refers to a repository [localhost:5000/jenkinslocal]
7fef8c44bf7f: Image successfully pushed 
971a0fc79a1a: Image successfully pushed 
5b8b7745040c: Image successfully pushed 
7d114ad7e1fe: Image successfully pushed 
5b848d38b406: Image successfully pushed 
4e87708e08e8: Image successfully pushed 
d458e9b86b04: Image successfully pushed 
ca787184f0ab: Image successfully pushed 
4238f6371816: Image successfully pushed 
a2eea3e16ec7: Image successfully pushed 
1f764d32a220: Image successfully pushed 
1af14ac896ef: Image successfully pushed 
a7afeb77f416: Image successfully pushed 
cef349a9d76f: Image successfully pushed 
1d16eb83eef5: Image successfully pushed 
dfe1af64a72d: Image successfully pushed 
9f17712cba0b: Image successfully pushed 
223c0d04a137: Image successfully pushed 
fe4c16cbf7a4: Image successfully pushed 
Pushing tag for rev [ff6f0851ef57] on {http://localhost:5000/v1/repositories/jenkinslocal/tags/latest}
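
Once the image is pushed, any client configured to trust the registry (see the insecure-registry setting above) should be able to pull it back; for example, replacing the placeholder with your registry host's IP:

docker pull <ip of docker registry>:5000/jenkinslocal:latest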

 

What’s New in vSphere 6.5: vCenter management clients

In vSphere 6.5, VMware introduced HTML5 support.

In older releases, VMware provided two types of clients:

- vSphere Client [exe installer]

- vSphere Web Client [Flash-based]

From vSphere 5.5 onwards, the vSphere Client became increasingly restricted: VMware stopped adding support for the latest virtual hardware versions and other core feature configuration to it.

As for the vSphere Web Client [Flash-based], its performance was not up to the mark for handling big environments; this Flash-based solution had a lot of performance issues.

In vSphere 6.5, VMware has two types of clients:

HTML5 [vSphere client]

Flash [vSphere Web Client]

HTML5 [vSphere Client]: VMware agrees that Flash is not the solution for the long term. Our long-term direction is to utilize HTML5. In vSphere 6.5, we have released a supported version of an HTML5-based web client which we call the "vSphere Client". The vSphere Client is part of the vCenter Server (both appliance and Windows) and is configured to work out of the box.

Access Url:  https://<ip or fqdn of VC>/ui

Note: This HTML5-based client was originally released as a Fling back in March 2016 and has been getting a new version every week.

https://labs.vmware.com/flings/vsphere-html5-web-client#instructions

vSphere Web Client: The vSphere Client (HTML5) released in vSphere 6.5 has a subset of features of the vSphere Web Client (Flash/Flex). Until the vSphere Client achieves feature parity, we might continue to enhance and/or add new features to vSphere Web Client.

https://blogs.vmware.com/vsphere/2016/12/new-vcenter-management-clients-vsphere-6-5.html

Cloud Foundry

Cloud Foundry is an open platform as a service, providing a choice of clouds, developer frameworks and application services.

In the cloud era, the application platform will be delivered as a service, often described as Platform as a Service (PaaS). Cloud Foundry is an open source project and is available through a variety of private cloud distributions and public cloud instances.

The cloud-native stack is a layered stack. The diagram below shows the cloud-native stack of Cloud Foundry.

Cloud native stack

It is a stack that provides your software an environment in which to run, giving developers a platform for their software. It is a four-layer stack.

Infrastructure layer [IaaS]: This enables the complete stack and provides the resources. This layer can be AWS, VMware vSphere, vCloud Air, OpenStack, or Microsoft Azure. The kinds of operations this layer provides: provision a server, install a VM on the server, install an OS on the VM, and perform operations on the VM [start, stop]. This layer automates all of these operations; basically, every vendor provides an API to automate the complete IaaS layer operations. That API is the Cloud Provider Interface [CPI].

Infrastructure automation: This layer takes care of CPI automation. In the case of Cloud Foundry, BOSH takes care of automating the Cloud Provider Interface: provisioning VMs, creating database VMs, patching, upgrading, high availability, and so on. This layer automates all infrastructure operations tasks. If we package our software and provide it to BOSH, BOSH will take care of all the provisioning and configuration.

Runtime layer: This is the Cloud Foundry layer, also called the Elastic Runtime layer. This layer containerizes the application and takes care of domains, routing, and complete orchestration. All kinds of scaling can also be orchestrated in this layer.

Application layer: This is where the programming languages come in. This layer provides the environment for the programming languages, and not only the environment but also the supported libraries; it also contains a lot of middleware and databases. As a developer, you only need to take care of the application; the rest of the stack is taken care of for you.
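
From the developer's point of view, using the application layer is typically just a push of the code (a sketch; it assumes the cf CLI is installed and you are already logged in to a Cloud Foundry endpoint, and 'myapp' is a placeholder name):

cf push myapp -m 256M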

Just like AWS Lambda, it only provides the environment to the developer. You do not need to worry about which OS it runs on, what the networking looks like, and so on. Everything is fully orchestrated, and it has scaling and HA features as well.

This link provides details: https://github.com/cloudfoundry