Cloud Basics and GCP - Part 1

Let me write from my experience. I started working on VMware ESXi in 2005, which was my first exposure to the concept of virtualization. There was always a discussion that virtualization would help us move towards cloud computing, and I used to wonder: how?

We used to build datacenters where each server ran a hypervisor, and we created multiple VMs on it. That was the limit. How could I make that environment accessible as a service to others? When I say service, I mean through the web. It is not possible for everyone to install a hypervisor and create VMs, and even if they can, getting every feature is very costly.

So the big players, mainly Amazon, came up with the concept. They have a large number of servers [compute], storage, and networking devices in a datacenter. All the servers run a hypervisor and are managed from a centralized access point. The next step is to make it accessible to everyone as a service [through the web]. I can place an order saying that I need a VM. As a developer or product owner, I don't need to worry about the datacenter, the environment, and so on.

As an end user, I just need an OS platform where I can configure my stuff. From the web, just as someone places a grocery order or a book order, they can place an OS order. It becomes feasible to access it remotely over the internet. That is cloud computing.

In the same way, Google also came up with a platform called GCP [Google Cloud Platform]. Like other providers, it provides instances [VMs] through the web. Under the hood it can use different hypervisors, hardware, etc. For the end user placing the order, the basics are:

- Compute, storage, and networking: these are the basic needs.
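
To make this concrete, here is a minimal sketch of how such an "order" looks on GCP using the gcloud CLI; the project name, zone, and machine type below are placeholder values of my choosing, not from this post:

# create a small VM in an assumed project and zone (placeholders)
gcloud compute instances create demo-vm --project=my-gcp-project --zone=us-central1-a --machine-type=f1-micro --image-family=debian-9 --image-project=debian-cloud

# confirm the instance is up
gcloud compute instances list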

Pod creation [Kubernetes]

 

# This is a basic pod YAML file
apiVersion: "v1"
kind: Pod
# Service, Pod, ReplicationController, Node: objects of Kubernetes
metadata:
  name: mypod
  labels:
    # additional metadata
    app: demo
    env: test
spec:
  # specification for the pod; we will be putting containers here
  containers:
    - name: nginx
      image: nginx
      ports:
        # ports is again a collection
        - name: http
          containerPort: 80
          protocol: TCP
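
Assuming the manifest above is saved as mypod.yaml (the filename is my choice, not from the original post), the pod is created and then inspected with kubectl, which produces the describe output captured below:

kubectl create -f mypod.yaml
kubectl get pods
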
C:\Users\amitm\Downloads>kubectl describe pod mypod
Name: mypod
Namespace: default
Node: minikube/192.168.99.100
Start Time: Wed, 22 Nov 2017 18:15:53 +0530
Labels: app=demo
 env=test
Annotations: <none>
Status: Pending
IP:
Containers:
 nginx:
 Container ID:
 Image: nginx
 Image ID:
 Port: 80/TCP
 State: Waiting
 Reason: ContainerCreating
 Ready: False
 Restart Count: 0
 Environment: <none>
 Mounts:
 /var/run/secrets/kubernetes.io/serviceaccount from default-token-4462j (ro)
Conditions:
 Type Status
 Initialized True
 Ready False
 PodScheduled True
Volumes:
 default-token-4462j:
 Type: Secret (a volume populated by a Secret)
 SecretName: default-token-4462j
 Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
 Type Reason Age From Message
 ---- ------ ---- ---- -------
 Normal Scheduled 4m default-scheduler Successfully assigned mypod to minikube
 Normal SuccessfulMountVolume 4m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4462j"
 Normal Pulling 4m kubelet, minikube pulling image "nginx"
C:\Users\amitm\Downloads>kubectl describe pod mypod
Name: mypod
Namespace: default
Node: minikube/192.168.99.100
Start Time: Wed, 22 Nov 2017 18:15:53 +0530
Labels: app=demo
 env=test
Annotations: <none>
Status: Running
IP: 172.17.0.10
Containers:
 nginx:
 Container ID: docker://5851633d71162a58861a48cd2b1a310f1e069df22e449d05c327fd34068e707f
 Image: nginx
 Image ID: docker-pullable://nginx@sha256:9fca103a62af6db7f188ac3376c60927db41f88b8d2354bf02d2290a672dc425
 Port: 80/TCP
 State: Running
 Started: Wed, 22 Nov 2017 18:20:15 +0530
 Ready: True
 Restart Count: 0
 Environment: <none>
 Mounts:
 /var/run/secrets/kubernetes.io/serviceaccount from default-token-4462j (ro)
Conditions:
 Type Status
 Initialized True
 Ready True
 PodScheduled True
Volumes:
 default-token-4462j:
 Type: Secret (a volume populated by a Secret)
 SecretName: default-token-4462j
 Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
 Type Reason Age From Message
 ---- ------ ---- ---- -------
 Normal Scheduled 9m default-scheduler Successfully assigned mypod to minikube
 Normal SuccessfulMountVolume 9m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4462j"
 Normal Pulling 9m kubelet, minikube pulling image "nginx"
 Normal Pulled 5m kubelet, minikube Successfully pulled image "nginx"
 Normal Created 5m kubelet, minikube Created container
 Normal Started 5m kubelet, minikube Started container

C:\Users\amitm\Downloads>
//Creating the service using the expose command

C:\Users\amitm\Downloads>kubectl.exe expose pod mypod --type=NodePort
service "mypod" exposed

C:\Users\amitm\Downloads>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1d
mypod NodePort 10.0.0.39 <none> 80:32271/TCP 9s
redis ClusterIP 10.0.0.240 <none> 6379/TCP 1d
web NodePort 10.0.0.97 <none> 80:31130/TCP 1d

C:\Users\amitm\Downloads>kubectl describe svc mypod
Name: mypod
Namespace: default
Labels: app=demo
 env=test
Annotations: <none>
Selector: app=demo,env=test
Type: NodePort
IP: 10.0.0.39
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32271/TCP //Port where our service is exposed on
Endpoints: 172.17.0.10:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

C:\Users\amitm\Downloads>kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready <none> 1d v1.8.0

http://<nodeip>:32271   //the port shown as the NodePort above
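
Since this example runs on minikube, one way (my addition, not from the original capture) to find the node IP and test the NodePort service is:

minikube ip
# returns 192.168.99.100 in this setup; then open http://192.168.99.100:32271 in a browser,
# or, where curl is available:
curl http://192.168.99.100:32271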

 

Deploy Minishift on VirtualBox [Windows 10]

I have Windows 10 on my personal laptop. Below is the configuration:

VirtualBox: version 5.1.22

Minishift: minishift-1.7.0-windows-amd64

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>minishift.exe start --vm-driver=virtualbox
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
 Memory: 2 GB
 vCPUs : 2
 Disk size: 20 GB

Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.2.0/minishift-b2d.iso'
40.00 MiB / 40.00 MiB [===================================================================================] 100.00% 0s
-- Starting Minishift VM .................................. OK
-- Checking for IP address ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 0% used OK
-- Downloading OpenShift binary 'oc' version 'v3.6.0'
33.92 MiB / 33.92 MiB [===================================================================================================================================] 100.00% 0s
-- Downloading OpenShift v3.6.0 checksums ... OK
-- OpenShift cluster will be configured with ...
Version: v3.6.0
-- Checking `oc` support for startup flags ...
host-data-dir ... OK
host-pv-dir ... OK
host-volumes-dir ... OK
routing-suffix ... OK
host-config-dir ... OK
Starting OpenShift using openshift/origin:v3.6.0 ...
Pulling image openshift/origin:v3.6.0
Pulled 1/4 layers, 26% complete
Pulled 1/4 layers, 27% complete
Pulled 1/4 layers, 27% complete
Pulled 1/4 layers, 28% complete
Pulled 1/4 layers, 29% complete
Pulled 1/4 layers, 29% complete
Pulled 1/4 layers, 49% complete
Pulled 1/4 layers, 51% complete
Pulled 1/4 layers, 54% complete
Pulled 1/4 layers, 57% complete
Pulled 1/4 layers, 59% complete
Pulled 1/4 layers, 61% complete
Pulled 1/4 layers, 63% complete
Pulled 1/4 layers, 66% complete
Pulled 1/4 layers, 69% complete
Pulled 1/4 layers, 73% complete
Pulled 1/4 layers, 75% complete
Pulled 1/4 layers, 75% complete
Pulled 1/4 layers, 76% complete
Pulled 1/4 layers, 78% complete
Pulled 1/4 layers, 79% complete
Pulled 2/4 layers, 82% complete
Pulled 2/4 layers, 84% complete
Pulled 2/4 layers, 85% complete
Pulled 3/4 layers, 88% complete
Pulled 3/4 layers, 89% complete
Pulled 3/4 layers, 91% complete
Pulled 3/4 layers, 92% complete
Pulled 3/4 layers, 93% complete
Pulled 3/4 layers, 94% complete
Pulled 3/4 layers, 96% complete
Pulled 3/4 layers, 97% complete
Pulled 3/4 layers, 98% complete
Pulled 3/4 layers, 98% complete
Pulled 4/4 layers, 100% complete
Extracting
Image pull complete
OpenShift server started.

The server is accessible via web console at:
https://192.168.99.100:8443

You are logged in as:
User: developer
Password: <any value>

To login as administrator:
oc login -u system:admin

Setting up the oc command line:

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>minishift.exe oc-env
SET PATH=C:\Users\amitm\.minishift\cache\oc\v3.6.0;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>oc
OpenShift Client

This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible
platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand.

To create a new application, login to your server and then run new-app:

oc login https://mycluster.mycompany.com
 oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
 oc logs -f bc/ruby-ex

This will create an application based on the Docker image 'centos/ruby-22-centos7' that builds the source code from GitHub. A build will start automatically, push the resulting image to the registry, and a deployment will roll that change out in your project.

Once your application is deployed, use the status, describe, and get commands to see more about the created components:

oc status
 oc describe deploymentconfig ruby-ex
 oc get pods

To make this application visible outside of the cluster, use the expose command on the service we just created to create
a 'route' (which will connect your application over the HTTP port to a public domain name).

oc expose svc/ruby-ex
 oc status

You should now see the URL the application can be reached at.

To see the full list of commands supported, run 'oc --help'.

C:\Users\amitm\Downloads\minishift-1.7.0-windows-amd64\minishift-1.7.0-windows-amd64>oc get pods
No resources found.

minishift.exe ssh    //logs you in to the Minishift VM
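
With oc on the PATH (via the oc-env step above), a quick sanity check is to log in and create a project; a minimal sketch, assuming the console URL and the developer login shown earlier (any password is accepted):

oc login https://192.168.99.100:8443 -u developer -p anypassword
oc new-project demo
oc get projects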

 

Setting up a Kubernetes cluster in VMware Workstation VMs

I have set up a Kubernetes cluster on my laptop, using VMware Workstation version 11.

Kubernetes works in a server-client setup, where a master provides centralized control for a number of minions. We will be deploying a Kubernetes master with one minion.

Kubernetes has several components:

  • etcd – A highly available key-value store for shared configuration and service discovery.
  • flannel – An etcd backed network fabric for containers.
  • kube-apiserver – Provides the API for Kubernetes orchestration.
  • kube-controller-manager – Enforces Kubernetes services.
  • kube-scheduler – Schedules containers on hosts.
  • kubelet – Processes a container manifest so the containers are launched according to how they are described.
  • kube-proxy – Provides network proxy services.

I have created two CentOS virtual machines: master and minion [which I will be referring to as the node].

Both VMs have the following configuration:

  • 1024MB RAM
  • 1 vCPU
  • 1 Network adapter with setting [NAT]
  • CentOS 7 OS

Note: Check that both nodes get an IP address; if not, run ifup <ethernet adapter>.

Modify the /etc/hosts file on both VMs, master and node.
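
A sketch of the /etc/hosts entries on both VMs; the master IP matches the one used later in this post, while the minion IP is an assumed value for illustration:

# /etc/hosts on both master and minion
192.168.40.130   centos-master
192.168.40.132   centos-minion-1   # assumed minion IP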

Add the yum repo for the Kubernetes packages:

cat /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0

Note: Please create the repo file on both the master and the minion node.

Installing packages: these need to be installed on both master and minion.

yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

Modifying the configuration files:

 cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.40.130:8080" #my setup master ip 

Edit /etc/etcd/etcd.conf

# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

Edit /etc/kubernetes/apiserver

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.40.130:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""
  • Start etcd and configure it to hold the network overlay configuration on the master. Warning: this network must be unused in your network infrastructure! 172.30.0.0/16 is free in our network.

systemctl start etcd
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"

  • Configure flannel to overlay the Docker network in /etc/sysconfig/flanneld on the master (also on the nodes, as we'll see):
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  • Start the appropriate services on master:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done


Configure the Kubernetes services on the nodes.

We need to configure the kubelet, then start the kubelet and the proxy.

  • Edit /etc/kubernetes/kubelet to appear as such:
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=centos-minion-n"

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"

# Add your own!
KUBELET_ARGS=""
  • Configure flannel to overlay Docker network in /etc/sysconfig/flanneld (in all the nodes)
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
  • Start the appropriate services on node (centos-minion-n).
for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
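
Once the services are running on both VMs, a quick check from the master confirms that the minion has registered with the API server; the node name depends on the hostname-override set above:

kubectl get nodes
# expect the minion (e.g. centos-minion-1) listed with STATUS Ready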

Dashboard configuration: https://github.com/kubernetes/dashboard/releases

Check the version of Kubernetes: kubectl version

You can check from the browser as well: http://192.168.40.130:8080/version   #http://<master ip>:8080/version

Download the appropriate supported dashboard YAML:

https://github.com/kubernetes/dashboard/releases

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.5.1/src/deploy/kubernetes-dashboard.yaml

http://<master ip>:8080/ui

Note: Below version 1.7 it only supports API version 1, so when creating pods and services please use apiVersion v1 YAML only.

Pod Creation

A Pod is the basic building block of Kubernetes–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.

A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.

pod yaml file:

[root@master pods]# cat mysql.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - name: mysql
      image: mysql
      resources:
        limits:
          cpu: 1
      env:
        - name: MYSQL_ROOT_PASSWORD
          # change this
          value: test123
      ports:
        - containerPort: 3306
          name: mysql

While creating the pod, you may hit an admission-control error from the API server (the default admission controllers reject the pod).

To work around it, comment out the KUBE_ADMISSION_CONTROL line in /etc/kubernetes/apiserver, as shown below:

[root@master pods]# cat /etc/kubernetes/apiserver

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
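
After editing the file, the API server needs to pick up the change; assuming the stock systemd unit name from these CentOS packages, that is:

systemctl restart kube-apiserver
systemctl status kube-apiserver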

Service: A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them, sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector.

kubectl create -f mysql.yaml

[root@master pods]# kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
mysql                   1/1       Running   0          2h

Create a Service for the mysql pod

[root@master pods]# cat mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  externalIPs:
    - 192.168.40.132
  ports:
    # the port that this service should serve on
    - port: 3306
  # label keys and values that must match in order to receive traffic for this service
  selector:
    name: mysql

kubectl create -f mysql-service.yaml

kubectl get service
NAME         CLUSTER-IP       EXTERNAL-IP      PORT(S)    AGE
kubernetes   10.254.0.1       <none>           443/TCP    1d
mysql        10.254.140.187   192.168.40.132   3306/TCP   2h
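
As a quick usage check (my addition, not from the original post), the external IP attached to the service should now answer on the MySQL port; for example, from a machine with the mysql client installed:

mysql -h 192.168.40.132 -P 3306 -u root -ptest123 -e "SELECT VERSION();"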

 

Building a CI/CD pipeline with Docker

CI/CD Terminology:

Docker Registry: A repository of images. Registries can be public or private and contain images for download. Some registries allow users to upload images to make them available to others.

Dockerfile: A configuration file with build instructions for a Docker image. A Dockerfile automates the steps for image customization.

As discussed, two types of registries are available:

Public Docker Registry: For users who just want to explore, or want a ready-made hosted solution, hosted Docker registries are available. Docker Hub is an example of a public registry where users can host their repositories.

Private Docker Registry: Many organizations have security concerns about a public Docker registry, so they run a registry inside their own infrastructure.

CI/CD workflow

  • A developer pushes a commit to GitHub
  • GitHub uses a webhook to notify Jenkins of the update
  • Jenkins pulls the GitHub repository, including the Dockerfile describing the image as well as the application and test code
  • Jenkins builds a Docker image on the Jenkins slave node
  • Jenkins instantiates the Docker container on the slave node and runs the appropriate tests
  • If the tests are successful, the image is pushed to the Docker registry
  • A tester or user can then pull the image from the Docker registry and use it (a sketch of these build steps follows below)
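
As a rough sketch of what the Jenkins job executes on the slave node, the shell steps could look like the following; the image name, registry address, and test command are assumptions, not values from this post:

# build the image from the Dockerfile in the checked-out repo
docker build -t myregistry.local:5000/projectnode:${BUILD_NUMBER} .

# run the container and execute the test suite inside it (test command is an assumption)
docker run --rm myregistry.local:5000/projectnode:${BUILD_NUMBER} npm test

# on success, push the image to the registry
docker push myregistry.local:5000/projectnode:${BUILD_NUMBER}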

 

echo "# projectnode" >> README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin https://github.com/amitopenwriteup/projectnode.git
git push -u origin master

Jenkins setup

https://pkg.jenkins.io/redhat/

sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
yum install jenkins

Installing GitHub plugins on Jenkins: http://<ip>:8080

Create a slave node in Jenkins -> Configuration -> Manage Nodes

Create a project in Jenkins and configure it for source code management

Now map the GitLab repository in the Jenkins project

Note: I have created a local GitLab on my CentOS box; follow the steps mentioned in this blog: https://www.server-world.info/en/note?os=CentOS_7&p=gitlab

We can even create a local Docker registry as well:

yum install docker-registry

systemctl start docker-registry
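
The CentOS docker-registry package listens on port 5000 by default; a minimal sketch of tagging and pushing an image to it (the image name is an assumption) looks like this:

docker tag projectnode localhost:5000/projectnode
docker push localhost:5000/projectnode
# note: for a remote registry without TLS, the docker daemon needs the --insecure-registry option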

In this blog we have created GitLab, Jenkins, and a Docker registry.

Now push the Dockerfile and code to GitLab. Configure the webhook in Git and configure Jenkins.

Below is the git configuration

 

All the components of the CI/CD pipeline are now ready in the in-house lab.

 

Finding the IP address of a container

There are many ways to find the IP of a container. I am listing some of the methods below.

[root@localhost ~]# docker run -d --name mylinux rhel7
9f7213f3c373b06a6a5d8d762bd05c2282db9e1ecb0da421b8280e07291c0535

docker inspect mylinux

"NetworkSettings": {
            "Bridge": "",
            "SandboxID": "f8b8d1847392e520fc79cff3c3637fdfbd28aaded2290ad851aaab3adce23c6f",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": null,
            "SandboxKey": "/var/run/docker/netns/f8b8d1847392",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "fd99db2fc2a78f959b1e3b691d4a38125f5e5a2422267c08136d2ad0746e8e35",
                    "EndpointID": "",
                    "Gateway": "",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ""
                }
docker inspect --format '{{.NetworkSettings.IPAddress}}' mylinux
 172.17.0.2

The docker exec command runs a command inside the container, so we can check the IP address from within it:

docker exec -ti mylinux ip add |grep global
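
One more variant (my addition): when a container is attached to a named network, the per-network address can be extracted from docker inspect with a Go template:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' mylinux
# prints 172.17.0.2 for the bridge network in this example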

 

vSphere Integrated Containers (VIC)

Currently VMware is providing two Docker solutions:

  • VIC [vSphere integrated Container]
  • Photon Controller

Both of these products enable containers on VMware platforms. In this blog I will mainly focus on VIC.

VIC: VIC allows customers to run "Container as a VM" on vSphere infrastructure. It can be deployed on a standalone ESXi host or on vCenter Server. When a container runs as a VM, it provides multiple benefits over containers in a standalone Linux VM.

Resource Management: In the normal container case, we have a Linux VM or host which runs a Docker daemon and launches lots of containers. A developer connects to these containers via the Docker client. Over a period of time, the Linux VM/host will consume a significant amount of memory for containers and can eventually run into out-of-memory scenarios, since it does not allow the use of shared memory. With VIC, since containers are deployed as VMs, they get the ESXi memory management features. All VIC containers are VMs to vSphere, so all the memory management features are enabled for VIC containers.

Tenancy: If a user deploys containers on a Linux VM/host, there is no option to assign resources per developer. Suppose we have multiple developers who are using containers; Docker cannot provide the resource allocation option. In the case of VIC, there is the concept of a VCH (Virtual Container Host), which controls access to a pool of vSphere resources. A VCH is designed to be single tenant, each with its own pool of resources. A vSphere admin can deploy multiple VCHs on an ESXi host or vCenter Server and assign them to individual developers, so multiple VCHs provide a multi-tenancy option.

Container as a Service [CaaS]: One of the main concerns for developers is security and networking in the case of Linux-based Docker. With VIC, since every container is deployed as a VM, it can use the vSphere security and networking features. With VIC, each container gets its own vNIC, and a vSphere admin can monitor the resources being consumed.

Virtual Container Host

A container host in VIC is a Virtual Container Host (VCH). A VCH is not in itself a VM; it is an abstract dynamic resource boundary that is defined and controlled by vSphere into which containerVMs can be provisioned. As such, a VCH can be a subset of a physical host or a subset of a cluster of hosts. However, a container host also represents an API endpoint with an isolated namespace for accessing the control plane, so a functionally equivalent service must be provisioned to the vSphere infrastructure that provides the same endpoint for each VCH. There are various ways in which such a service could be deployed, but the simplest representation is to run it in a VM.

Given that a VCH in many cases will represent a subset of resource from a cluster of physical hosts, it is actually closer in concept to something like Docker Swarm than a traditional container host.

Configure Git on a personal Linux box

Developers who want to use Git can map their Linux virtual machine to GitHub. This way they can push code to GitHub, and it is easy to maintain the repository as well.

  1. Create an account on github.com
  2. After creating the account, create a repository for yourself.
  3. SSH to your Linux box and generate an RSA key
    • ssh-keygen -t rsa -C "<emailid>"
      copy the public key from .ssh/<>.pub to GitHub
  4. Copy the public key to GitHub
  5. Commands to add a file to a new repository
    • git config --global user.email "<email id>"

      git clone git@github.com:<username>/mycode.git
      Cloning into 'mycode'...
      remote: Counting objects: 3, done.
      remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
      Receiving objects: 100% (3/3), done.
      Checking connectivity... done.

      root [ ~ ]# cd mycode
      root [ ~/mycode ]# git add README.md
      root [ ~/mycode ]# git commit -m "add README.md"
      On branch master
      Your branch is up-to-date with 'origin/master'.
      nothing to commit, working directory clean

      root [ ~/mycode ]# git push -u origin master
      Warning: Permanently added the RSA host key for IP address '192.30.252.129' to the list of known hosts.
      Branch master set up to track remote branch master from origin.
      Everything up-to-date

    • Screenshot from github
  6. Commands to add a file to an existing repository
    • cd <existing folder>

      root [ / ]# cd root/mycode/
      root [ ~/mycode ]# ls
      README.md
      root [ ~/mycode ]# git init
      Reinitialized existing Git repository in /root/mycode/.git/

      root [ ~/mycode ]# git add README1.md
      root [ ~/mycode ]# git commit README1.md

      [master 86ccf2f] new file: README1.md
      1 file changed, 0 insertions(+), 0 deletions(-)
      create mode 100644 README1.md

      root [ ~/mycode ]# git push -u origin master
      Counting objects: 3, done.
      Compressing objects: 100% (2/2), done.
      Writing objects: 100% (3/3), 278 bytes | 0 bytes/s, done.
      Total 3 (delta 0), reused 0 (delta 0)
      To git@github.com:amit23comp/mycode.git
      fc752b5..86ccf2f master -> master
      Branch master set up to track remote branch master from origin


 

The Git Story: export pyvmomi git

In this blog we will use a real project to explore Git.

Below is the github link:

https://github.com/vmware/pyvmomi


https://github.com/vmware/pyvmomi.git

$ git clone https://github.com/vmware/pyvmomi-community-samples.git

You can also download the offline zip (pyvmomi-master.zip) and perform the steps below:

git add .
git commit -m "committing pyvmomi"

[root@mytest pyvmomi-master]# git log
commit e3155b203764691577f82a6cc7e0b841fd5f4256
Author: Amit <amit@openwriteup.com>
Date: Mon Oct 19 09:29:43 2015 -0400
committing pyvmommi

This link provides details for developing with pyvmomi:

http://vmware.github.io/pyvmomi-community-samples/#getting-started
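
To actually run one of the community samples against a vCenter, a typical sequence is sketched below; the host, user, and password are placeholders, and the sample name assumes the getallvms.py script shipped in that repository:

pip install pyvmomi
cd pyvmomi-community-samples
python samples/getallvms.py -s <vcenter-host> -u <user> -p <password>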

The Git story: add, commit, and delete commands

In this blog, we will look at Git in more detail. Git has three stages:
-Working dir
-Staging
-Repository

Whenever we create a new file, it is part of the working dir. We need to add the file to staging; after staging, we commit to store the changes in the repository.

- Initial stage of the working dir
[root@mytest amit]#
[root@mytest amit]# git status
# On branch master
nothing to commit (working directory clean)
[root@mytest amit]#

- Created two files, test2.sh and test3.sh, in the working dir
[root@mytest amit]# git status
# On branch master
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# test2.sh
# test3.sh
nothing added to commit but untracked files present (use "git add" to track)

- Adding test2.sh to staging and committing it
[root@mytest amit]#
[root@mytest amit]# git add test2.sh
[root@mytest amit]# git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# new file: test2.sh
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# test3.sh

#git commit -m "second file"
[master 8a31c70] Second file
Committer: Amit <amit@openwriteup.com>
1 files changed, 2 insertions(+), 0 deletions(-)
create mode 100644 test2.sh

To check the diff between file versions:

git diff
[root@mytest amit]# git diff
diff --git a/test.sh b/test.sh
index 7a870bd..2b56848 100644
--- a/test.sh
+++ b/test.sh
@@ -1,2 +1,2 @@
#!/bin/bash
-echo "Hello World"
+echo "i theHello World"

1 #!/bin/bash
2 echo "i theHello World"

If a file has already been added to staging, it will not show in a plain git diff; use git diff --staged instead.

[root@mytest amit]# git diff
[root@mytest amit]# git diff --staged
diff --git a/test.sh b/test.sh
index 7a870bd..2b56848 100644
--- a/test.sh
+++ b/test.sh
@@ -1,2 +1,2 @@
#!/bin/bash
-echo "Hello World"
+echo "i theHello World"

Deleting a file in Git:
git rm test.sh
rm 'test.sh'

[root@mytest amit]# git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# deleted: test.sh
#

[root@mytest amit]# git commit -m "deleted"
[master ab81bbb] deleted
Committer: Amit <amit@openwriteup.com>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate.
1 files changed, 0 insertions(+), 2 deletions(-)
delete mode 100644 test.sh
[root@mytest amit]# git status
# On branch master
nothing to commit (working directory clean)

Renaming a file in Git
[root@mytest amit]# git mv test2.sh test4.sh
[root@mytest amit]# git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# renamed: test2.sh -> test4.sh
#
[root@mytest amit]# git commit -m "renamed"
[master 1c8445b] renamed
Committer: Amit <amit@openwriteup.com>
1 files changed, 0 insertions(+), 0 deletions(-)
rename test2.sh => test4.sh (100%)
[root@mytest amit]# git status
# On branch master
nothing to commit (working directory clean)