kubelet.service fails to start up

kubeadm init was failing because the kubelet service failed to start.

I performed the steps below and it worked for me!!

#yum install -y kubelet kubeadm kubectl docker

Turn swap off: #swapoff -a

Reset any previous kubeadm state: #kubeadm reset

Now try #kubeadm init

After that, check #systemctl status kubelet

It will be working!!!
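
For convenience, here is the same sequence as a single script. This is a minimal sketch: the commands mirror the steps above, and the /etc/fstab edit (which keeps swap off across reboots) is my addition, not part of the original steps.

#!/bin/sh
# Install the Kubernetes components and Docker
# (assumes a CentOS/RHEL box with the kubernetes yum repo already configured)
yum install -y kubelet kubeadm kubectl docker

# kubelet refuses to start while swap is enabled
swapoff -a
# Assumption: comment swap out of /etc/fstab so it stays off after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab

# Clear any half-initialized state, then initialize the cluster
kubeadm reset
kubeadm init

# Verify that kubelet came up
systemctl status kubelet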

x509 cert issues after kubeadm init

While issuing the command "kubeadm token list", it reported the issue below:

failed to list bootstrap tokens [Get https://192.168.40.132:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=type%3Dbootstrap.kubernetes.io%2Ftoken: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Perform the following steps:

cp /etc/kubernetes/admin.conf ~/.kube/admin.conf

export KUBECONFIG=$HOME/.kube/admin.conf

kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
h94rrx.90dkwkukxgcp3635 23h 2018-10-22T08:29:50-07:00 authentication,signing The default bootstrap token generated by ‘kubeadm init’. system:bootstrappers:kubeadm:default-node-token
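
Note that exporting KUBECONFIG only lasts for the current shell. The usual post-init setup (kubeadm init itself prints these instructions) copies the admin kubeconfig to the default location instead:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config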


Autodeploy Image: An error occurred while generating the image [Entry is too large to be added]

This issue occurred in the following environment:

VCSA 6.5 with embedded VUM, and the Auto Deploy and Image Builder services enabled.

  • For image customization we had mapped several images [approx. 2 GB], so it would not allow a new image to be mapped and threw the error message below:
Error while Autodeploy Image... An error occurred while generating the image : Error : An error occurred while performing the task Entry is too large to be added to cache, please remove any imported depots you are not using...
  • Delete the images that are not in use, or apply the workaround below [increase the cacheSize]:
cat /etc/vmware-imagebuilder/sca-config/imagebuilder-config.props
loglevel=INFO
vmomiPort=8098
httpPort=8099
cacheSize_GB=4

ls -lh /storage/imagebuilder/exports/
total 361M

Go to VCSA --> Administration --> System Configuration --> Services

Restart Auto Deploy

Restart ImageBuilder Service
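
If you prefer doing this over SSH on the appliance, here is a minimal sketch. The property edit matches the config file shown above; the service names passed to service-control are my assumption and may vary between VCSA builds:

# Raise the Image Builder cache (example: 4 GB -> 8 GB)
sed -i 's/^cacheSize_GB=.*/cacheSize_GB=8/' /etc/vmware-imagebuilder/sca-config/imagebuilder-config.props

# Restart Image Builder and Auto Deploy (service names assumed)
service-control --stop vmware-imagebuilder
service-control --start vmware-imagebuilder
service-control --stop vmware-rbd-watchdog
service-control --start vmware-rbd-watchdog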


How to reset a forgotten root password in vROps [vRealize Operations Manager]

Recently I forgot the root password in my test environment. I did the following steps to reset it:

-Restart the vROps node

-Edit the boot options and append "init=/bin/bash":

Boot Options vga=0x311 elevator=noop noexec=on nousb audit=1 init=/bin/bash

-Once the system boots, type the command below:

"passwd root"

It will prompt for new password. Provide the new password and reboot the system!!!
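
One caveat (my addition, not part of the original steps): when the system comes up with init=/bin/bash, the root filesystem may still be mounted read-only, in which case passwd cannot save the new password. A quick sketch of the full sequence at that shell:

# Remount / read-write if passwd complains it cannot write
mount -o remount,rw /
passwd root
# Flush to disk and force a reboot, since init is not running normally
sync
reboot -f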

kubelet service is failing: Unable to update cni config: No networks found in /etc/cni/net.d

Hi,

Small troubleshooting tip:

I ran "kubeadm reset" and tried to initialize the cluster again, but it kept failing because the kubelet service was not running.

I executed "journalctl -xeu kubelet" and found this error message:

"Unable to update cni config: No networks found in /etc/cni/net.d"

I checked on GitHub: https://github.com/kubernetes/kubernetes/issues/54918

I applied the workaround "chmod 777 /etc/cni/net.d" and tried to start the service:

systemctl start kubelet.service

This worked!!!

All of this was done on CentOS 7.
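
Beyond the quick workaround above: this message usually just means that no pod network add-on has been installed yet, and installing one populates /etc/cni/net.d. A sketch using Flannel as an example (the manifest URL is an assumption and may have moved since):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# /etc/cni/net.d should now contain a network config
ls /etc/cni/net.d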


pyVmomi module: Script for fetching hardware information from ESXi

import argparse
import atexit
import ssl

from pyVim.connect import SmartConnect, Disconnect

def validate_options():
  parser = argparse.ArgumentParser(description='input parameters')
  parser.add_argument('-d','--dest_host',dest='dhost',required=True,help='The ESXi destination host IP')
  parser.add_argument('-v','--vc_host',dest='vchost',required=False,help='The VC IP')
  parser.add_argument('-u','--vc_user',dest='vcuser',required=True,help='VC username')
  parser.add_argument('-p','--vc_pass',dest='vcpasswd',required=True,help='VC password')
  return parser.parse_args()

def getHostID(content, dhost):
  # Look the host up by IP first, then fall back to its DNS name
  host = content.searchIndex.FindByIp(None, dhost, False)
  if not host:
    host = content.searchIndex.FindByDnsName(None, dhost, False)
  return host

def get_HostInfo(host):
  # Print hardware identity, then a table of the host's PCI devices
  print('UUID INFO %s' % host.summary.hardware.uuid)
  print('Hardware Model %s' % host.summary.hardware.model)
  print('%s Server has BIOS version %s' % (host.hardware.biosInfo.vendor, host.hardware.biosInfo.biosVersion))
  row = '{0:<20}{1:<80}{2:<30}{3:<10}'
  print(row.format('Vendor Name', 'Device Name', 'Slot', 'Device ID'))
  print('*' * 140)
  for dev in host.hardware.pciDevice:
    print(row.format(dev.vendorName, dev.deviceName, dev.slot, dev.deviceId))
  print('*' * 140)

def main():
  opts = validate_options()
  if opts.vchost:
    print('Connecting to vCenter and collecting sensor info for %s' % opts.dhost)
  else:
    print('Connecting to ESXi host %s' % opts.dhost)
    opts.vchost = opts.dhost
  # Most lab hosts use self-signed certs, so skip certificate verification
  context = ssl._create_unverified_context()
  si = SmartConnect(host=opts.vchost, user=opts.vcuser, pwd=opts.vcpasswd, sslContext=context)
  atexit.register(Disconnect, si)  # close the session on exit
  content = si.RetrieveContent()
  host = getHostID(content, opts.dhost)
  get_HostInfo(host)
  # Numeric sensor data comes from the host's health system runtime
  sensorinfo = host.runtime.healthSystemRuntime.systemHealthInfo.numericSensorInfo
  row = '{0:<30}{1:<90}{2:<10}{3:<10}{4:<13}{5:<20}'
  print(row.format('Sensor', 'Sensor Detail', 'Status', 'Reading', 'Units', 'Summary'))
  print('*' * 173)
  for s in sensorinfo:
    state = s.healthState
    print(row.format(s.sensorType, s.name, state.label, str(s.currentReading), s.baseUnits, state.summary))

if __name__ == '__main__':
  main()
How to run this script:

python <name of script> -v <vc server> -d <esxi host whose hardware you want to list> -u <vc user name> -p <vc password>

The script is written in Python and uses the pyVmomi module.

vSphere On-disk Metadata Analyzer (VOMA)

VOMA helps in performing VMFS file system metadata checks. This utility scans the VMFS volume metadata and highlights any inconsistencies.

VOMA provides three modules, and except for lvm, each of them has a fix function:

  • lvm – Checks the datastore's logical device header, logical volume header, and physical extent mapping.
  • vmfs – Checks the VMFS header, resource files, heartbeat region, file descriptors, connectivity, etc.
  • ptbl – Checks the partition table and reports the table structure:
    • Phase 1: Checking device for valid primary GPT
    • Phase 2: Checking device for a valid backup GPT
    • Phase 3: Checking device for valid MBR table
    • Phase 4: Searching for valid file system headers
voma [OPTIONS] -m module -d device

-m, --module      Name of the module to run.

                    Available Modules are

                      1. lvm

                      2. vmfs

                      3. ptbl

-f, --func        Function(s) to be done by the module.

                     Options are

                       query   - list functions supported by module

                       check   - check for Errors

                       fix     - check & fix

                       dump    - collect metadata dump

-d, --device      Device/Disk to be used

-s, --logfile     Path to file, redirects the output to given file

-x, --extractDump Extract the dump collected using VOMA

-D, --dumpfile    Dump file to save the metadata dump collected

-v, --version     Prints voma version and exit.

-h, --help        Print this help message.
Example:

voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxx:x

voma -m vmfs -f dump -d /vmfs/devices/disks/naa.xxxx:x -D dumpfilename

voma -m vmfs -f check -d /vmfs/devices/disks/<device-id>

Checking if device is actively used by other hosts

Initializing VMFS Checker..|Scanning for VMFS-3/VMFS-5 host activity (512 bytes/HB, 2048 HBs).

Found 3 actively heartbeating hosts on device '/vmfs/devices/disks/<device id>'

1): MAC address

2): MAC address

3): MAC address
voma -m ptbl -f check -d /vmfs/devices/disks/<device id>

Running Partition table checker version 0.1 in check mode

Phase 1: Checking device for valid primary GPT

Phase 2: Checking device for a valid backup GPT

Phase 3: Checking device for valid MBR table

Phase 4: Searching for valid file system headers

No valid LVM headers detected
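
A note on fix mode (my addition): check mode can run while other hosts are heartbeating, as the output above shows, but before running -f fix the datastore should be quiesced (VMs powered off or migrated, and the datastore ideally unmounted from the other hosts). A sketch:

voma -m vmfs -f check -d /vmfs/devices/disks/<device-id>

voma -m vmfs -f fix -d /vmfs/devices/disks/<device-id>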

Add-EsxSoftwareDepot : 32 At line:1 char:1 Error message

A depot could not be added using "Add-EsxSoftwareDepot"; it gave the error message below:

“Add-EsxSoftwareDepot : 32 At line:1 char:1”

I checked the VMware KB and VMTN:

https://communities.vmware.com/thread/495611

I also checked Get-Culture, as suggested in the thread; it was reporting correctly.

The issue turned out to be disk space: check the C: drive on the Image Builder Windows server.

When this issue occurred, the C: drive had only 4 MB free!!!


Error reading file '/usr/lib/vmware-imagebuilder/etc/schemas/vib20-extensibility.rng'

While downloading the new image from auto-deploy, I got the following error message:

Something went wrong while converting items to pxe profile:(vmodl.fault.SystemError) {

   dynamicType = <unset>,

   dynamicProperty = (vmodl.DynamicProperty) [],

   msg = 'Invalid Fault',

   faultCause = (imagebuilder.fault.IbFault) {

      dynamicType = <unset>,

      dynamicProperty = (vmodl.DynamicProperty) [],

      msg = 'Unable to obtain XML schema: Error loading schema XML data: Error reading file \'/usr/lib/vmware-imagebuilder/etc/schemas/vib20-extensibility.rng\': failed to load external entity "/usr/lib/vmware-imagebuilder/etc/schemas/vib20-extensibility.rng".',

      faultCause = <unset>,

      faultMessage = (vmodl.LocalizableMessage) [

         (vmodl.LocalizableMessage) {

            dynamicType = <unset>,

            dynamicProperty = (vmodl.DynamicProperty) [],

            key = '',

            arg = (vmodl.KeyAnyValue) [],

            message = 'Unable to obtain XML schema: Error loading schema XML data: Error reading file \'/usr/lib/vmware-imagebuilder/etc/schemas/vib20-extensibility.rng\': failed to load external entity "/usr/lib/vmware-imagebuilder/etc/schemas/vib20-extensibility.rng".'

         }

      ],

      errorCode = 0,

      errorMessage = (vmodl.LocalizableMessage) {

         dynamicType = <unset>,

         dynamicProperty = (vmodl.DynamicProperty) [],

         key = '',

         arg = (vmodl.KeyAnyValue) [],

         message = 'Unable to obtain XML schema: Error loading schema XML data: Error reading file \'/usr/lib/vmware-imagebuilder/etc/schemas/vib20-extensibility.rng\': failed to load external entity "/usr/lib/vmware-imagebuilder/etc/schemas/vib20-extensibility.rng".'

      },

      additionalData = (vmodl.KeyAnyValue) []

   },

   faultMessage = (vmodl.LocalizableMessage) [],

   reason = 'Method DepotManagerDownloadVib threw undeclared fault of type imagebuilder.fault.IbFault'

}

Solution:

-Install PowerCLI 6.5 and look for vib20-extensibility.rng in this location:

C:\Program Files (x86)\VMware\Infrastructure\PowerCLI\Modules\VMware.ImageBuilder

-Copy this file to the VCSA appliance, into:

/usr/lib/vmware-imagebuilder/etc/schemas

This will resolve the issue!!!
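
For example, from any machine that has the file and an scp client (on Windows, e.g. pscp; the VCSA hostname below is a placeholder):

scp "C:\Program Files (x86)\VMware\Infrastructure\PowerCLI\Modules\VMware.ImageBuilder\vib20-extensibility.rng" root@vcsa.example.com:/usr/lib/vmware-imagebuilder/etc/schemas/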

Pods creation [kubernetes]


# This is a basic pod YAML file
apiVersion: "v1"
kind: Pod
# Service, Pod, ReplicationController, Node: objects of Kubernetes
metadata:
  name: mypod
  labels:
    # additional metadata
    app: demo
    env: test
spec:
  # specification for the pod; the containers go here
  containers:
    - name: nginx
      image: nginx
      ports:
        # ports is again a collection
        - name: http
          containerPort: 80
          protocol: TCP
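
To create the pod from this file (assuming it is saved as mypod.yaml):

C:\Users\amitm\Downloads>kubectl create -f mypod.yaml
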
C:\Users\amitm\Downloads>kubectl describe pod mypod
Name: mypod
Namespace: default
Node: minikube/192.168.99.100
Start Time: Wed, 22 Nov 2017 18:15:53 +0530
Labels: app=demo
 env=test
Annotations: <none>
Status: Pending
IP:
Containers:
 nginx:
 Container ID:
 Image: nginx
 Image ID:
 Port: 80/TCP
 State: Waiting
 Reason: ContainerCreating
 Ready: False
 Restart Count: 0
 Environment: <none>
 Mounts:
 /var/run/secrets/kubernetes.io/serviceaccount from default-token-4462j (ro)
Conditions:
 Type Status
 Initialized True
 Ready False
 PodScheduled True
Volumes:
 default-token-4462j:
 Type: Secret (a volume populated by a Secret)
 SecretName: default-token-4462j
 Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
 Type Reason Age From Message
 ---- ------ ---- ---- -------
 Normal Scheduled 4m default-scheduler Successfully assigned mypod to minikube
 Normal SuccessfulMountVolume 4m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4462j"
 Normal Pulling 4m kubelet, minikube pulling image "nginx"
C:\Users\amitm\Downloads>kubectl describe pod mypod
Name: mypod
Namespace: default
Node: minikube/192.168.99.100
Start Time: Wed, 22 Nov 2017 18:15:53 +0530
Labels: app=demo
 env=test
Annotations: <none>
Status: Running
IP: 172.17.0.10
Containers:
 nginx:
 Container ID: docker://5851633d71162a58861a48cd2b1a310f1e069df22e449d05c327fd34068e707f
 Image: nginx
 Image ID: docker-pullable://nginx@sha256:9fca103a62af6db7f188ac3376c60927db41f88b8d2354bf02d2290a672dc425
 Port: 80/TCP
 State: Running
 Started: Wed, 22 Nov 2017 18:20:15 +0530
 Ready: True
 Restart Count: 0
 Environment: <none>
 Mounts:
 /var/run/secrets/kubernetes.io/serviceaccount from default-token-4462j (ro)
Conditions:
 Type Status
 Initialized True
 Ready True
 PodScheduled True
Volumes:
 default-token-4462j:
 Type: Secret (a volume populated by a Secret)
 SecretName: default-token-4462j
 Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
 Type Reason Age From Message
 ---- ------ ---- ---- -------
 Normal Scheduled 9m default-scheduler Successfully assigned mypod to minikube
 Normal SuccessfulMountVolume 9m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-4462j"
 Normal Pulling 9m kubelet, minikube pulling image "nginx"
 Normal Pulled 5m kubelet, minikube Successfully pulled image "nginx"
 Normal Created 5m kubelet, minikube Created container
 Normal Started 5m kubelet, minikube Started container

C:\Users\amitm\Downloads>
Creating the service using the expose command:

C:\Users\amitm\Downloads>kubectl.exe expose pod mypod --type=NodePort
service "mypod" exposed

C:\Users\amitm\Downloads>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1d
mypod NodePort 10.0.0.39 <none> 80:32271/TCP 9s
redis ClusterIP 10.0.0.240 <none> 6379/TCP 1d
web NodePort 10.0.0.97 <none> 80:31130/TCP 1d

C:\Users\amitm\Downloads>kubectl describe svc mypod
Name: mypod
Namespace: default
Labels: app=demo
 env=test
Annotations: <none>
Selector: app=demo,env=test
Type: NodePort
IP: 10.0.0.39
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32271/TCP //the port where our service is exposed
Endpoints: 172.17.0.10:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

C:\Users\amitm\Downloads>kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready <none> 1d v1.8.0

http://<nodeip>:32271 //the NodePort shown in the service output above
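
For example, with the minikube node shown above (192.168.99.100) and the NodePort from the service output (32271), the nginx welcome page should be reachable with:

curl http://192.168.99.100:32271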