Running docker container : iptables: No chain/target/match by that name

If you are getting the below error message while running a container:

docker: Error response from daemon: driver failed programming external connectivity on endpoint jfrog-artifactory (b402f4e6bbb8591d043dbf64c0405914641aa1751ad46604cc107e5a313ae509): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8085 -j DNAT --to-destination ! -i docker0: iptables: No chain/target/match by that name.

Perform this workaround:

[root@mydev /]# sudo iptables -t filter -F
[root@mydev /]# sudo iptables -t filter -X
[root@mydev /]# systemctl restart docker

Boto 3: Basic and setup

Boto: Boto is an SDK designed to make it easier to use AWS services from the Python programming language.

Setup requirement:

  • AWS signup
  • Python version 2.7
  • PyCharm 3.3 [IDE]
  • pip setup

For programmatic access, we need to enable the access in AWS IAM:

A user with programmatic access gets a secret key and an access key ID.

Setup awscli on Windows

  • awscli
  • Prerequisites:
    • Check that your system has Python 2.7
    • pip is configured

Configure awscli

Configure the access key and ID:

C:\Users\amitm>aws configure

AWS Access Key ID [****************Z2FA]:

AWS Secret Access Key [****************V270]:

Default region name [ap-south-1]:

Default output format [test]:

Check the setup:

C:\Users\amitm>aws s3 ls

2018-10-15 13:08:20 cf-templates-106h68kzl5m34-us-east-2

2018-11-08 23:39:48 openwriteup

2018-11-08 23:46:12 openwriteup-1

2018-11-09 00:16:44 test-openwriteup

What is awscli?

  • It is a command-line tool
  • We can use it when writing scripts
  • It is useful for testing, or when we want to work from a shell or PowerShell

Setup boto3

  • pip install boto3
  • Test boto3:
    • python
    • import boto3
    • help(boto3)


  • Botocore: a low-level interface to a growing number of Amazon Web Services. The botocore package is the foundation for the AWS CLI as well as boto3.
  • Botocore provides the low-level clients, session, and credential & configuration data. Boto 3 builds on top of Botocore by providing its own session, resources and collections.
  • Botocore does not provide higher-level abstractions on top of these services, operations and responses; that is left to the application layer. The goal of botocore is to handle all of the low-level details of making requests and getting results from a service.

Core concepts of boto3


Boto Resource:

  • higher-level, object-oriented API
  • generated from resource description
  • uses identifiers and attributes
  • has actions (operations on resources)
  • exposes subresources and collections


import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('mybucket')

for obj in bucket.objects.all():
    print(obj.key, obj.last_modified)

Boto Client:

  • low-level service access
  • generated from service description
  • exposes botocore client to the developer
  • typically maps 1:1 with the service API
  • snake-cased method names (e.g. ListBuckets API => list_buckets method)


import boto3

client = boto3.client('s3')
response = client.list_objects(Bucket='mybucket')

for content in response['Contents']:
    obj_dict = client.get_object(Bucket='mybucket', Key=content['Key'])
    print(content['Key'], obj_dict['LastModified'])


Difference between resource and client:

A resource object is a very high-level object, so every operation with a resource object is a high-level operation. Not every operation is available through a resource.

A client is a low-level object, so whatever operation we want to perform is always available. Client operations mostly take and return dictionaries.


Boto Session:

  • stores configuration information (primarily credentials and selected region)
  • allows you to create service clients and resources

A session is a simple object used to connect to a particular AWS account or IAM account. If I want to connect to any IAM account, the session object will be used.


Paginator:

  • Automatically handles pagination
  • Yields individual pages
  • You must process each page

Example: I have three thousand objects in my S3 bucket which I want to list. A single Boto3 API call can only list up to a limit (1000 objects). In such cases a paginator can be used to list all 3000 objects; it will use 3 pages to list them.


Waiter:

Waiters are used to wait for a resource to reach a certain state.

Example: I have a newly launched EC2 instance, and it takes some time to reach the running state. For that purpose we can use a waiter.


VMware vidm api: “User is not authorized to perform the task” [Generate OAuth Bearer Token]

Issue: For one of the automation tasks using the VMware vIDM get-attribute API, the code was failing for the admin user.
It was giving the error message "User is not authorized to perform the task".

After creating the Remote App Access client, generate an OAuth bearer token.

Create Remote app client:

Download and install the Postman app. You can download Postman from

Steps to generate OAuth Bearer token

Local docker registry as default registry

Problem statement: In a multi-node Docker environment, make a private registry the default registry.

Environment detail: Oracle VirtualBox has two CentOS 7 instances installed.
- node 1: docker and docker registry setup
    yum install docker*
    enable the docker service and start it:
     systemctl enable docker
     systemctl start docker
    running a registry in container form: 
     docker run -d -p 5000:5000 --restart=always --name registry registry:2
    push any local image to the local registry [below is an example pushing an image to the private registry]
     docker pull ubuntu:16.04 /* pulls from Docker Hub by default */
     docker tag ubuntu:16.04 localhost:5000/my-ubuntu
     docker push localhost:5000/my-ubuntu
    docker image remove ubuntu:16.04
    docker image remove localhost:5000/my-ubuntu
- node 2: only docker installed
    yum install docker*
 Note: we will pull the local image from node 1

Steps to make private registry default registry and accessible remotely

- Stop the docker service: systemctl stop docker
- Check the docker info command: docker info /* check for registry and insecure registry */
- Add an entry in the /etc/sysconfig/docker file on all the nodes (node 1 and node 2 in this case)
   vi /etc/sysconfig/docker
- Add entry in /etc/docker/daemon.json 
 vi /etc/docker/daemon.json 
{ "insecure-registries" : [ "" ] }
- start the docker service
  systemctl start docker
check the docker info command
 docker info
Experimental: false
Insecure Registries:
From node 2, pull the image:
 docker pull my-ubuntu

[root@target-2 /]# docker pull my-ubuntu
Using default tag: latest
Trying to pull repository ...
latest: Pulling from
7b378fa0f908: Pull complete
4d77b1b29f2e: Pull complete
7c793be88bae: Pull complete
ecc05c8a19c0: Pull complete

kubernetes pods keep crashing with “CrashLoopBackOff” {GKE}

As in my last blog

I deployed a pod on GKE using a local image. It was failing with the CrashLoopBackOff error, and it was not giving any logs either.

kubectl logs <pod name>
//no log message is reporting

Then I added the command block in the pod YAML:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ubuntu
  name: ubuntu
  namespace: default
spec:
  containers:
  - image:
    imagePullPolicy: Never
    name: ubuntu
    resources:
      requests:
        cpu: 100m
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10;done"]
  dnsPolicy: ClusterFirst
  enableServiceLinks: true

#command and args have been added in the yaml file
Now I executed:
kubectl apply -f pod.yaml
[root@target-1 ~]# kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
ubuntu   1/1     Running   0          9m30s

Create GKE pod from local image

In this blog, I will be covering the following:

  • create gke cluster from gcloud sdk
  • few gcloud command line option
  • create container registry in google cloud
  • Push local image to google cloud registry
  • using kubectl create pod using that image


My environment is centos 7

sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM
yum install google-cloud-sdk
yum install docker

yum install kubectl
gcloud auth login
gcloud config set project <project name>
//log in and set the project name

Create and explore Google Kubernetes Engine (GKE) from gcloud

gcloud container clusters create example-cluster --zone us-central1-c
gcloud container clusters get-credentials  example-cluster   --zone us-central1-c
gcloud container clusters describe  example-cluster --zone us-central1-c

Enable and create a container registry on Google Cloud

Create a tag and push the Docker image to Google Container Registry (GCR)

docker tag openwriteup:ubuntu gcr.io/<project name>/openwriteup:ubuntu
docker images
docker push gcr.io/<project name>/openwriteup:ubuntu

yaml file to create pod

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ubuntu
  name: ubuntu
  namespace: default
  resourceVersion: "34418"
  selfLink: /api/v1/namespaces/default/pods/ubuntu
  uid: 7ee750fe-52a3-41f8-90be-5310270debad
spec:
  containers:
  - image:
    imagePullPolicy: Always
    name: ubuntu
    resources:
      requests:
        cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/
      name: default-token-jpglv
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
//kubectl apply -f <.yaml file>
//kubectl get pods
//kubectl logs <pod name>

pyVmomi in whl format

I have converted the pyvmomi package to whl format and copied it into a git repo with all the immediate dependencies.

My requirement was an offline installation where pip3 was already configured. Rather than going with tar.gz, I converted it to whl format.

The pyvmomi package is in whl format, converted for Python 3, with all the dependencies copied as well. Install on Python 3 in this order:

pip3 install six-1.15.0-py2.py3-none-any.whl

pip3 install chardet-3.0.4-py2.py3-none-any.whl

pip3 install certifi-2020.6.20-py2.py3-none-any.whl

pip3 install idna-2.10-py2.py3-none-any.whl

pip3 install urllib3-1.25.10-py2.py3-none-any.whl

pip3 install requests-2.24.0-py2.py3-none-any.whl

pip3 install pyvmomi-7.0-py2.py3-none-any.whl

I designed this module for an offline environment, with all the dependencies in install order.

git repos:

Please note: this was converted by me and I am just sharing it; it can have issues as well.

create k8s secret for docker registry

I am writing up my experience; it is well documented on the k8s site as well. Below is the link to refer to:

Step 1: Log in to Docker from the command line: "docker login". It will prompt for a username and password. Provide them, and it will create a JSON file at "~/.docker/config.json".

Output of the JSON: cat ~/.docker/config.json

We will use that json file and create the secret

kubectl create secret generic regcred \
    --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
    --type=kubernetes.io/dockerconfigjson

Once it creates the secret, we can check the output:
kubectl get secret regcred -o yaml

Now this secret we can use in any pod/deployment for image pulling. Check this blog:

Pull image from a private registry [K8s] using secret

If you created a private registry and want to pull its images in a k8s deployment, how do you use the registry username/password in a k8s deployment or pod?

The way to go is to use a secret; this is documented in k8s as well.

  1. Create secret for your registry

kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

2. kubectl get secret regcred --output=yaml

3. Use the secret while creating the pod.
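A minimal sketch of step 3 (the pod name, image path, and tag below are placeholders): the secret is referenced from the pod spec via imagePullSecrets.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg-pod            # placeholder pod name
spec:
  containers:
  - name: app
    image: <your-registry-server>/<your-image>:<tag>   # placeholder image
  imagePullSecrets:
  - name: regcred                  # the secret created in step 1
```

Kubernetes then uses the regcred credentials when pulling the image from the private registry.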