Setting up PyCharm
- Install PyCharm: https://www.jetbrains.com/pycharm/
- Go to PyCharm Configure -> Settings
- Default project -> Python interpreter
- Add a Python interpreter
- Install the boto3 package
- Now create a new package
Boto: Boto is an SDK designed to make it easy to use the Python programming language with AWS.
Setup requirements:
For programmatic access, we need to enable access in AWS IAM: a programmatic-access user needs an access key ID and a secret access key.
Set up awscli on Windows: https://docs.aws.amazon.com/cli/latest/userguide/install-windows.html
Configure awscli with the access key ID and secret key:
C:\Users\amitm>aws configure
AWS Access Key ID [****************Z2FA]:
AWS Secret Access Key [****************V270]:
Default region name [ap-south-1]:
Default output format [test]:
Check the setup:
C:\Users\amitm>aws s3 ls
2018-10-15 13:08:20 cf-templates-106h68kzl5m34-us-east-2
2018-11-08 23:39:48 openwriteup
2018-11-08 23:46:12 openwriteup-1
2018-11-09 00:16:44 test-openwriteup
What is awscli? The AWS CLI is Amazon's unified command-line tool for managing AWS services from the shell.
Setup boto3: install it with pip (pip install boto3).
Botocore: the low-level core library that both boto3 and the AWS CLI are built on.
Core concepts of boto3
Resources: the higher-level, object-oriented interface.
Example:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('mybucket')
for obj in bucket.objects.all():
    print(obj.key, obj.last_modified)
Boto Client: the lower-level service interface.
Example:
import boto3

client = boto3.client('s3')
response = client.list_objects(Bucket='mybucket')
for content in response['Contents']:
    obj_dict = client.get_object(Bucket='mybucket', Key=content['Key'])
    print(content['Key'], obj_dict['LastModified'])
Difference between resource and client:
A resource is a very high-level object, so every operation on it is a high-level operation; not every API operation is available through resources.
A client is a low-level object, so whatever operation we want to perform is always available. Client operations mostly take and return dictionaries.
Session:
A simple object that connects you to a particular AWS account or IAM user. If I want to connect to any IAM account, a session object is used.
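For example, a minimal sketch (the profile name 'dev' and the region are placeholders for whatever is defined in your ~/.aws/credentials):

import boto3

# Bind a session to a specific profile and region; every client or
# resource created from it uses those credentials.
session = boto3.Session(profile_name='dev', region_name='ap-south-1')
s3 = session.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)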
Pagination:
Example: I have three thousand objects in my S3 bucket that I want to list. A single boto3 API call can list only up to a limit (1,000 objects). In such cases a paginator can be used to list all 3,000 objects; it will use 3 pages to list them.
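A minimal sketch of a paginator (the bucket name 'mybucket' is a placeholder):

import boto3

client = boto3.client('s3')
# A paginator issues repeated list_objects_v2 calls under the hood,
# following the continuation token until every object has been returned.
paginator = client.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='mybucket'):
    for obj in page.get('Contents', []):
        print(obj['Key'])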
Waiter:
Waiters are used to wait for a resource to reach a certain state.
Example: I have a newly launched EC2 instance; it takes some time to reach the running state. For that purpose we can use a waiter.
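A minimal sketch of a waiter (the instance ID is a placeholder):

import boto3

client = boto3.client('ec2')
# The waiter polls describe_instances until the instance reaches
# the 'running' state (or the attempts time out).
waiter = client.get_waiter('instance_running')
waiter.wait(InstanceIds=['i-0123456789abcdef0'])
print('Instance is running')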
In my test environment, I have an Amazon VPC, which I am accessing using a Linux server.
For performing all the activities on the Amazon VPC, I have used a Python script. For automation in an Amazon VPC, AWS provides the boto3 module, which needs to be installed using pip. Using this module we can list all the running instances. The script below creates a config file, reads it back, and then performs all these steps:
# Python 2 script (uses ConfigParser and raw_input)
import ConfigParser, os, sys
from collections import defaultdict

import boto3
import paramiko

config = ConfigParser.RawConfigParser()

# When adding sections or items, add them in reverse order
config.add_section('EC2')
config.add_section('USER')
config.set('EC2', 'SSHKey', '<keyname>')
config.set('EC2', 'VPC_IP', '<vpcname>')
config.set('EC2', 'Security_Group', '<securitygroupname>')
config.set('EC2', 'DisableAPI_Termination', 'False')
config.set('USER', 'Username', '<ec2user>')
config.set('USER', 'AWS_Profile', '<aws user to login>')
config.set('EC2', 'Region', '<aws region>')
config.set('USER', 'Private_Key', '<path_to_privatekey>')

# Writing configuration to config file
name = raw_input("Enter the config file name::: ")
with open(name, 'wb') as configfile:
    config.write(configfile)

# Reading the config file
config1 = ConfigParser.ConfigParser()
config1.read(name)

ses = boto3.Session(profile_name=config1.get("USER", "AWS_Profile"))
ec2 = ses.resource('ec2')
# Load the private key (not used further in this snippet)
key = paramiko.rsakey.RSAKey.from_private_key_file(
    filename=config1.get("USER", "Private_Key"))

running_instances = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])

ec2info = defaultdict()
for instance in running_instances:
    for tag in instance.tags:
        if 'Name' in tag['Key']:
            name = tag['Value']
    ec2info[instance.id] = {
        'Name': name,
        'Type': instance.instance_type,
        'State': instance.state['Name'],
        'Private IP': instance.private_ip_address,
        'Public IP': instance.public_ip_address,
        'Launch Time': instance.launch_time,
    }

attributes = ['Name', 'Type', 'State', 'Private IP', 'Public IP', 'Launch Time']
for instance_id, instance in ec2info.items():
    for attr in attributes:
        print("{0}:{1}".format(attr, instance[attr]))
    print("------")
output of the script :
——
Name:testinstance
Type:m4.xlarge
State:running
Private IP:10.140.30.209
Public IP:None
Launch Time:2016-08-26 23:09:17+00:00
——
AWS CloudFormation templates provide a way to deploy apps in the AWS cloud. Using them we can create a template for apps or services, and whenever required we can easily provision and deploy.
As part of my Amazon AWS exploration, I found that my client's environments are based on CloudFormation scripts. I explored the environment and found it is quite similar to our workflows (not exactly the same).
In the standard approach, when we want to create a template, we need to define all the required resources; we should have all the resource information the application needs. The deployment is automated by an AWS CloudFormation template. The template starts the installation process by creating all the required AWS resources, such as the Amazon VPC, security groups, public and private subnets, Internet gateways, NAT gateways, and the Amazon S3 bucket.
In this section we will discuss some of the base commands, which can be executed on the jumpbox:
aws cloudformation list-stacks
It will list all the available stacks.
aws cloudformation describe-stacks --stack-name <name>
It will show the details of the given stack.
Lots of other commands are also available: http://docs.aws.amazon.com/cli/latest/reference/cloudformation/index.html
In my search, I found a Docker Datacenter CloudFormation template; it is good material for building understanding. PS: you should have knowledge of Docker.
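The same operations are available from boto3; a minimal sketch (the stack name '<name>' is a placeholder):

import boto3

client = boto3.client('cloudformation')

# Equivalent of `aws cloudformation list-stacks`.
for summary in client.list_stacks()['StackSummaries']:
    print(summary['StackName'], summary['StackStatus'])

# Equivalent of `aws cloudformation describe-stacks --stack-name <name>`.
response = client.describe_stacks(StackName='<name>')
print(response['Stacks'][0]['StackStatus'])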
https://s3.amazonaws.com/quickstart-reference/docker/latest/doc/docker-datacenter-on-the-aws-cloud.pdf
How to create a CloudFormation template
CloudFormation is described as a JSON (JavaScript Object Notation) template. It is model-driven, in that the AWS infrastructure is instantiated according to the template's own specification, in the proper order of execution.
aws cloudformation get-template --stack-name <name>
This command will give the complete details of the template: resources, outputs, subnets, etc.
When we use CloudFormation, we work with templates and stacks. A template describes the resources and their properties; when we create a stack, we provision the resources described by the template.
Templates: An AWS CloudFormation template is a text file whose format complies with the JSON standard. You can save these files with any extension, such as .json, .template, or .txt. AWS CloudFormation uses these templates as blueprints for building your AWS resources. For example, in a template, you can describe an Amazon EC2 instance, such as the instance type, the AMI ID, block device mappings, and its Amazon EC2 key pair name. Whenever you create a stack, you also specify a template that AWS CloudFormation uses to create whatever you described in the template.
Stacks: When you use AWS CloudFormation, you manage related resources as a single unit called a stack. You create, update, and delete a collection of resources by creating, updating, and deleting stacks. All the resources in a stack are defined by the stack's AWS CloudFormation template. Suppose you created a template that includes an Auto Scaling group, Elastic Load Balancing load balancer, and an Amazon Relational Database Service (Amazon RDS) database instance. To create those resources, you create a stack by submitting the template that you created, and AWS CloudFormation provisions all those resources for you.
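To tie templates and stacks together, here is a minimal boto3 sketch that provisions a stack from a local template file (the stack name 'my-demo-stack' and the file path 'mystack.json' are placeholders):

import boto3

client = boto3.client('cloudformation')

# Read a JSON template from disk.
with open('mystack.json') as f:
    template_body = f.read()

# Ask CloudFormation to provision every resource the template describes.
response = client.create_stack(
    StackName='my-demo-stack',
    TemplateBody=template_body,
)

# Block until the stack finishes creating.
client.get_waiter('stack_create_complete').wait(StackName='my-demo-stack')
print(response['StackId'])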
This blog is for those who are very new to AWS and Python and want to start both of them together, assuming they have set up a boto3 environment in their test lab.
In the lab setup, type: python
It will give a Python prompt, from which we can explore boto3.
>>> import boto3
>>> dir(boto3)
['DEFAULT_SESSION', 'NullHandler', 'Session', '__author__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '__version__', '_get_default_session', 'client', 'docs', 'exceptions', 'logging', 'resource', 'resources', 'session', 'set_stream_logger', 'setup_default_session', 'utils']
Perform help(boto3); it will show the package contents:
PACKAGE CONTENTS
compat
docs (package)
dynamodb (package)
ec2 (package)
exceptions
resources (package)
s3 (package)
session
utils
Let's import the resources: from boto3 import resources
>>> dir(boto3.resources)
['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', 'action', 'base', 'collection', 'factory', 'model', 'params', 'response']
>>>
PACKAGE CONTENTS
action
base
collection
factory
model
params
response
Perform the following on your Python console or in a .py script:

import boto3

ec2 = boto3.resource('ec2')
# help(ec2)                  # lists all the available options on ec2
# help(ec2.instances)        # search for filter
# help(ec2.instances.filter) # lists the filter options and syntax:
#
#   instance_iterator = ec2.instances.filter(
#       DryRun=True|False,
#       InstanceIds=[
#           'string',
#       ],
#       Filters=[
#           {
#               'Name': 'string',
#               'Values': [
#                   'string',
#               ]
#           },
#       ]
#   )
import boto3

ec2 = boto3.resource('ec2')
instances = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
for instance in instances:
    print(instance.id, instance.instance_type)
Boto is the Python SDK for Amazon AWS, which can be used for automation of EC2, S3, etc.
Installation: Boto can be installed using pip or offline. With pip the command is: pip install boto3
For offline installation we need to download the offline bundle and install it; it has dependencies, so download and install those as well.
Configuration: Before we use boto we need to set the configuration, which can be done using: aws configure
Alternatively, you can create the credential file yourself. By default, its location is at ~/.aws/credentials:
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
You may also want to set a default region. This can be done in the configuration file. By default, its location is at ~/.aws/config:
[default]
region=us-east-1
Alternatively, you can pass a region_name when creating clients and resources.
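For example, a minimal sketch:

import boto3

# The region passed here overrides ~/.aws/config for this resource only.
ec2 = boto3.resource('ec2', region_name='us-east-1')
print(ec2.meta.client.meta.region_name)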
Let's see some sample code:
import boto3
ec = boto3.resource('ec2')
If we print ec, it shows "ec2.ServiceResource()".
Just run help(ec); it will show the complete description of what we can do and what methods are available. In the example below we check the status of all the EC2 instances:
#!/usr/bin/python
# Boto 3
import boto3

ec = boto3.resource('ec2')
# help(ec)

# Check the status of all the EC2 instances
for status in ec.meta.client.describe_instance_status()['InstanceStatuses']:
    print(status)
Recently, one of my clients was doing a proof of concept for Amazon AWS. Until now I had just used it for trial purposes: took a single EC2 instance, assigned a public IP, and was able to access it easily using PuTTY. Just download the .pem file and access it over ssh from any Linux box.
In my client's case they were using a VPC (virtual private cloud), in which they had defined the availability zones (subnets are defined there). When I selected any VPC, it showed the mapped availability zone. The client also provided a jump server from which we do ssh, or can use the AWS command line to control the environment. The complete environment is accessed over AWS private IPs, since it is mapped to them.
Problem faced and applied solution:
During the starting phase, I created an EC2 instance in my assigned VPC; when I tried to access it, it was not pinging from the jump box. After googling a lot I found that the security group I was using didn't mention the source IP range it has to communicate with.
I created a new security group with source "0.0.0.0/0", but it started giving a security warning, so I went back and created the correct range "192.168.0.0/32". After that I was able to ping my EC2 instance from my jump server.
As the next step I created a key pair and downloaded the .pem file, but while I was using that .pem file in my environment, somehow I was not able to ssh from the jump server. I was able to ping the AWS instance but not able to connect.
I renamed the .pem file to drop the extension: mv <file.pem> <file>
ssh -i <file> ec2-user@<instance private ip>
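The same connection can also be scripted with paramiko (the module used in the earlier inventory script); a minimal sketch, where the key path and the private IP are placeholders:

import paramiko

# Load the downloaded private key ('mykey.pem' is a placeholder path).
key = paramiko.RSAKey.from_private_key_file('mykey.pem')

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Connect to the instance's private IP from the jump server.
ssh.connect('10.0.0.10', username='ec2-user', pkey=key)

stdin, stdout, stderr = ssh.exec_command('uptime')
print(stdout.read())
ssh.close()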
Now I ran an AWS command from the jump server (the CLI was already installed on the system):
aws ec2 describe-instances
It gave a "not configured" error, so I tried the command below:
aws configure
This command asks for the "Access key ID", "Secret access key", "Default region", and "Output format".
The access key ID and secret access key come from the IAM (Identity and Access Management) service configuration: in the Users segment, select your user. In that section you can create an access key and activate it. Download it; it carries both pieces of information, the access key and the secret.
The region information you can find from the AWS web page (whichever region you have selected), and the output format is text, json, or table. Once you are ready with this information, run the same command, "aws configure", and you will be able to configure. Now if you run any command, e.g. "aws help", it will work from the command line.
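Once configured, the credentials can also be verified from Python; a minimal sketch:

import boto3

# STS reports which account and identity the configured credentials belong to.
sts = boto3.client('sts')
print(sts.get_caller_identity()['Account'])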
It was overall a good experience; there is still a lot of information to fetch... will definitely share.
In this blog, we will learn how to ssh to your Linux Amazon instance from your personal computer.
Prerequisites:
- RHEL EC2 instance creation access
- Linux personal computer with a proper internet connection
For connecting to an Amazon instance from your computer, we need to create a key pair. A key pair can be created on your computer or from the AWS site as well.
- If we create the key pair on your computer, then you need to import the public key into AWS, and while creating the instance use that key as part of the security profile.
- If we create the key pair on Amazon, then we need to download the private key, copy it to the Linux personal computer, and ssh to the EC2 instance using it.
Create a key pair using the console:
- Open the Amazon EC2 console.
- Select the region for the key pair.
- In the navigation pane, under Network & Security, choose Key Pairs.
- Choose Create Key Pair.
- Enter a name for the key pair and create it. It will generate a file with a .pem extension; save it.
Note: This file cannot be regenerated or retrieved again, so please save it in a safe place.
- Copy the <filename>.pem to your Linux box and perform chmod:
chmod 400 <filename>.pem
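The same key pair can be created programmatically; a minimal boto3 sketch (the key name 'my-keypair' and the output path are placeholders):

import os

import boto3

ec2 = boto3.client('ec2')

# Create the key pair; AWS returns the private key material exactly once.
response = ec2.create_key_pair(KeyName='my-keypair')

# Save the private key locally and restrict its permissions
# (the equivalent of chmod 400).
with open('my-keypair.pem', 'w') as f:
    f.write(response['KeyMaterial'])
os.chmod('my-keypair.pem', 0o400)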
Import your key pair to Amazon EC2:
Instead of using an Amazon key pair, you can create an RSA key pair yourself and import the public key to Amazon EC2.
- Generate a key pair with a third-party tool of your choice.
- In this case, we are using ssh-keygen. It will generate a public and a private key.
- Copy the public key to a local file.
Import the public key:
- Open the Amazon EC2 console.
- From the navigation bar, select the region.
- In the navigation pane, under Network & Security, choose Key Pairs.
- Choose Import Key Pair and browse to the public key you have.
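This too can be scripted; a minimal boto3 sketch (the key name and the public key path are placeholders):

import os

import boto3

ec2 = boto3.client('ec2')

# Read the public key generated with ssh-keygen
# (~/.ssh/id_rsa.pub is its default location; adjust as needed).
with open(os.path.expanduser('~/.ssh/id_rsa.pub'), 'rb') as f:
    public_key = f.read()

# Register the public key with EC2 under the given key name.
ec2.import_key_pair(KeyName='my-imported-key', PublicKeyMaterial=public_key)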