Friday, November 6, 2020

kind: multi-node (including HA) Kubernetes clusters

kind is a tool for running local Kubernetes clusters using Docker container “nodes”.

kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.

Why kind?

  • kind supports multi-node (including HA) clusters
  • kind supports building Kubernetes release builds from source
    • support for make / bash / docker, or bazel, in addition to pre-published builds
  • kind supports Linux, macOS and Windows
  • kind is a CNCF certified conformant Kubernetes installer

Installation and usage

NOTE: kind does not require kubectl, but you will not be able to perform some of the examples in our docs without it. To install kubectl, see the upstream reference: https://kubernetes.io/docs/tasks/tools/install-kubectl/
On Linux:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.9.0/kind-linux-amd64
chmod +x ./kind
mv ./kind /some-dir-in-your-PATH/kind

On Mac (homebrew):
brew install kind

On Windows:
curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.9.0/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe
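
On any platform, you can verify that the binary is on your PATH and see which release you installed with:
kind version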

Creating a Cluster 

Creating a Kubernetes cluster is as simple as:
kind create cluster

To create a second cluster with a different name:
kind create cluster --name kind-2
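
Because kind supports multi-node (including HA) clusters, you can also pass a config file describing the nodes. A small sketch, assuming a file named kind-multi-node.yaml (the file name is arbitrary):

cat <<EOF > kind-multi-node.yaml
# one control-plane node and two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
kind create cluster --config kind-multi-node.yaml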

Interacting With Your Cluster

After creating a cluster, you can use kubectl to interact with it by using the configuration file generated by kind.
By default, the cluster access configuration is stored in ${HOME}/.kube/config if the $KUBECONFIG environment variable is not set.
When you list your kind clusters, you will see something like the following:
kind get clusters
kind
kind-2
In order to interact with a specific cluster, you only need to specify the cluster name as a context in kubectl:
kubectl cluster-info --context kind-kind
kubectl cluster-info --context kind-kind-2
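
If a cluster's context ever goes missing from your kubeconfig (for example, after the file is regenerated), kind can write it back; a small sketch for the kind-2 cluster:
kind export kubeconfig --name kind-2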


Deleting a Cluster

If you created a cluster with kind create cluster then deleting is equally simple:
kind delete cluster

If the flag --name is not specified, kind will use the default cluster context name kind and delete that cluster.
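
To delete a specific cluster instead, pass its name, for example the kind-2 cluster created earlier:
kind delete cluster --name kind-2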

Resources: https://kind.sigs.k8s.io/docs/user/quick-start 

Saturday, October 17, 2020

Setting Multiple Profiles for the AWS CLI


AWS CLI

The AWS Command Line Interface (AWS CLI) is an open source tool that enables you to interact with AWS services using commands in your command-line shell.

AWS CLI versions

The AWS CLI is available in two versions and information in this guide applies to both versions unless stated otherwise.

Version 2.x – The current, generally available release of the AWS CLI that is intended for use in production environments. This version does include some "breaking" changes from version 1 that might require you to change your scripts so that they continue to operate as you expect.

Version 1.x – The previous version of the AWS CLI that is available for backward compatibility.

Named profiles
A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile to run a command, its settings and credentials are used to run that command.

The AWS CLI supports using any of multiple named profiles that are stored in the config and credentials files. You can configure additional profiles by using aws configure with the --profile option, or by adding entries to the config and credentials files.

Execute the command below; it will prompt you for the access key and secret key for the user1 profile:
$ aws configure --profile user1

$ cat ~/.aws/credentials

[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY


Each profile can specify different credentials—perhaps from different IAM users—and can also specify different AWS Regions and output formats.

$ cat ~/.aws/config

[default]
region=us-west-2
output=json

[profile user1]
region=us-east-1
output=text

Using profiles with the AWS CLI

To use a named profile, add the --profile profile-name option to your command. The following example lists all of your Amazon EC2 instances using the credentials and settings defined in the user1 profile from the previous example files.

$ aws ec2 describe-instances --profile user1

To use a named profile for multiple commands, you can avoid specifying the profile in every command by setting the AWS_PROFILE environment variable at the command line.

$ export AWS_PROFILE=user1

Now you can use the AWS CLI without specifying --profile on each command.
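
For example, with AWS_PROFILE exported as above, the same EC2 call from earlier picks up the user1 credentials and region automatically:

$ aws ec2 describe-instances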

Wednesday, January 30, 2019

Converting a .pem file to .ppk for use in PuTTY (Windows), or vice versa

You may face a situation where you have a .pem key that works fine on Linux, but a colleague needs a .ppk file to connect to the Linux host with PuTTY on Windows.

You can convert the file via below steps:

Install putty-tools if you don't already have it on Linux:

sudo apt-get install putty-tools

Now you can convert the .pem file to .ppk with the following command:

puttygen key.pem -O private -o key.ppk

-O private tells puttygen that you want a PuTTY private key
-o tells it where to write out the converted PuTTY private key

Now you have key.ppk that can be used in Putty (Windows)
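
For the reverse direction (you receive a .ppk and need a .pem for OpenSSH on Linux), puttygen can export an OpenSSH-format private key; a hedged sketch:

puttygen key.ppk -O private-openssh -o key.pem
chmod 600 key.pem   # OpenSSH refuses private keys with loose permissions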

Saturday, December 15, 2018

HTTP to HTTPS Behind Elastic Load Balancer in AWS

In the most common configurations, when SSL is terminated on the load balancer and your web app runs behind Nginx or Apache, your https:// requests reach the backend as http://. Sometimes you may want to redirect all HTTP requests to HTTPS.

The Amazon Elastic Load Balancer (ELB) supports an HTTP header called X-Forwarded-Proto. All HTTPS requests going through the ELB will have X-Forwarded-Proto set to "https". For plain HTTP requests, you can force HTTPS by adding a simple rewrite rule, as follows:

1. Nginx

In your nginx site config file, check whether the value of X-Forwarded-Proto is https and, if not, redirect:


server {
    listen 80;
    ....
    location / {
        # the ELB sets X-Forwarded-Proto; redirect anything that did not arrive over HTTPS
        if ($http_x_forwarded_proto != 'https') {
            rewrite ^ https://$host$request_uri? permanent;
        }
    ....
    }
}
2. Apache

Same goes for Apache, add this rewrite rule to your site’s config file:


<VirtualHost *:80>
...
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI}
...
</VirtualHost>
3. IIS

Install the IIS URL Rewrite module, then add these settings using the configuration GUI (or directly in web.config):

<rewrite xdt:Transform="Insert">
  <rules>
    <rule name="HTTPS rewrite behind ELB rule" stopProcessing="true">
      <match url="^(.*)$" ignoreCase="false" />
      <conditions>
        <add input="{HTTP_X_FORWARDED_PROTO}" pattern="^http$" ignoreCase="false" />
      </conditions>
      <action type="Redirect" redirectType="Found" url="https://{SERVER_NAME}{URL}" />
    </rule>
  </rules>
</rewrite>

4. HAProxy
frontend node1-https
        bind 192.168.20.19:443 ssl crt /etc/ssl/cert.pem
        mode http
        maxconn 50000
        option httpclose
        option forwardfor
        reqadd X-Forwarded-Proto:\ https
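
The frontend above terminates SSL and tags forwarded traffic with X-Forwarded-Proto; if you also want HAProxy itself to send plain-HTTP clients to HTTPS, a minimal sketch of an additional frontend (the name and bind address are just examples) could look like:

frontend node1-http
        bind 192.168.20.19:80
        mode http
        # anything that did not arrive over SSL gets a 301 to the HTTPS scheme
        redirect scheme https code 301 if !{ ssl_fc }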

Minikube installation on Ubuntu 16.04

Before you begin
Enable Virtualization in Bios
VT-x or AMD-V virtualization must be enabled in your computer’s BIOS.

Install a Hypervisor
If you do not already have a hypervisor installed, install the appropriate one for your OS now:

macOS: VirtualBox or VMware Fusion, or HyperKit.

Linux: VirtualBox or KVM.

Note: Minikube also supports a --vm-driver=none option that runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker and a linux environment, but not a hypervisor.
Windows: VirtualBox or Hyper-V.

Installation on Debian/Ubuntu

sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl


If you are on Ubuntu or one of the other Linux distributions that support the snap package manager, kubectl is available as a snap application.

Switch to the snap user and run the installation command:

sudo snap install kubectl --classic
Test to ensure the version you installed is sufficiently up-to-date:

kubectl version


Check the kubectl configuration
Check that kubectl is properly configured by getting the cluster state:

kubectl cluster-info
If you see a URL response, kubectl is correctly configured to access your cluster.

If you see a message similar to the following, kubectl is not correctly configured or not able to connect to a Kubernetes cluster.

The connection to the server <server-name:port> was refused - did you specify the right host or port?

For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like minikube to be installed first and then re-run the commands stated above.

If kubectl cluster-info returns the URL response but you can’t access your cluster, check whether it is configured properly with:
kubectl cluster-info dump

Enabling shell autocompletion
kubectl includes autocompletion support, which can save a lot of typing!

The completion script itself is generated by kubectl, so you typically just need to invoke it from your profile.

Common examples are provided here. For more details, consult kubectl completion -h.

On Linux, using bash
On CentOS Linux, you may need to install the bash-completion package which is not installed by default.

yum install bash-completion -y
To add kubectl autocompletion to your current shell, run:

source <(kubectl completion bash)

To add kubectl autocompletion to your profile, so it is automatically loaded in future shells run:

echo "source <(kubectl completion bash)" >> ~/.bashrc

Install Minikube

Go to https://github.com/kubernetes/minikube/releases

Download the latest version and install it:

sudo dpkg -i minikube_0.30-0.deb
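
If a .deb is not available for the release you want, the project also publishes a standalone Linux binary; a hedged sketch of installing it (the "latest" URL below follows the pattern shown on the releases page):

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/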

Start minikube

sudo minikube start

It will download the required components and start Minikube.

sudo kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
CoreDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
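
To double-check that the local cluster is healthy and that the node has registered, assuming kubectl is pointed at the minikube context, you can run:

sudo minikube status
kubectl get nodes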

Sunday, June 10, 2018

Setting up CloudWatch with your EC2 Instances on AWS with CloudWatch Agent

Requirement

  • An IAM role for the instance to run with, including the AmazonSSMFullAccess policy
  • AWS Systems Manager, with the AmazonEC2RoleforSSM policy attached to your user
  • The SSM Agent installed or updated on the instance
  • The AWS CloudWatch agent
  • Your AWS instance must have internet access or direct access to CloudWatch so that the data can be pushed to CloudWatch
Create IAM Roles to Use with CloudWatch Agent on Amazon EC2 Instances

The first procedure creates the IAM role that you need to attach to each Amazon EC2 instance that runs the CloudWatch agent. This role provides permissions for reading information from the instance and writing it to CloudWatch.

The second procedure creates the IAM role that you need to attach to the Amazon EC2 instance being used to create the CloudWatch agent configuration file, if you are going to store this file in Systems Manager Parameter Store so that other servers can use it. This role provides permissions for writing to Parameter Store, in addition to the permissions for reading information from the instance and writing it to CloudWatch. This role includes permissions sufficient to run the CloudWatch agent as well as to write to Parameter Store.


To create the IAM role necessary for each server to run CloudWatch agent

Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.

In the navigation pane on the left, choose Roles, Create role.

For Choose the service that will use this role, choose EC2 Allows EC2 instances to call AWS services on your behalf. Choose Next: Permissions.

In the list of policies, select the check box next to CloudWatchAgentServerPolicy. Use the search box to find the policy, if necessary.

If you will use SSM to install or configure the CloudWatch agent, select the check box next to AmazonEC2RoleforSSM. Use the search box to find the policy, if necessary. This policy is not necessary if you will start and configure the agent only through the command line.

Choose Next: Review

Confirm that CloudWatchAgentServerPolicy and optionally AmazonEC2RoleforSSM appear next to Policies. In Role name, type a name for the role, such as CloudWatchAgentServerRole. Optionally give it a description, and choose Create role.

The role is now created.


The following procedure creates the IAM role that can also write to Parameter Store. You need to use this role if you are going to store the agent configuration file in Parameter Store so that other servers can use it.

To create the IAM role necessary for an administrator to save an agent configuration file to Systems Manager Parameter Store

Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.

In the navigation pane on the left, choose Roles, Create role.

For Choose the service that will use this role, choose EC2 Allows EC2 instances to call AWS services on your behalf. Choose Next: Permissions.

In the list of policies, select the check box next to CloudWatchAgentAdminPolicy. Use the search box to find the policy, if necessary.

If you will use SSM to install or configure the CloudWatch agent, select the check box next to AmazonEC2RoleforSSM. Use the search box to find the policy, if necessary. This policy is not necessary if you will start and configure the agent only through the command line.

Choose Next: Review

Confirm that CloudWatchAgentAdminPolicy and optionally AmazonEC2RoleforSSM appear next to Policies. In Role name, type a name for the role, such as CloudWatchAgentAdminRole. Optionally give it a description, and choose Create role.

The role is now created.

Install or Update the SSM Agent

Before you can use Systems Manager to install the CloudWatch agent, you must make sure that the instance is configured correctly for Systems Manager.

SSM Agent is installed by default on Amazon Linux base AMIs dated 2017.09 and later. SSM Agent is also installed by default on Amazon Linux 2 and Ubuntu Server 18.04 LTS AMIs. Refer to the AWS documentation to install the agent on other versions.

Attach an IAM Role to the Instance

An IAM role for the instance profile is required when you install the CloudWatch agent on an Amazon EC2 instance. This role enables the CloudWatch agent to perform actions on the instance. Use one of the roles you created earlier. 

If you are going to use this instance to create the CloudWatch agent configuration file and copy it to Systems Manager Parameter Store, use the role you created that has permissions to write to Parameter Store. This role may be called CloudWatchAgentAdminRole.

For all other instances, select the role that includes just the permissions needed to install and run the agent. This role may be called CloudWatchAgentServerRole.

Installing CloudWatch Agent on your Linux Instances
  • Navigate to your EC2 section 
  • In the navigation pane, choose Run Command.
  • In the Command document list, choose AWS-ConfigureAWSPackage
  • In the Targets area, choose the instance or multiple instances on which to install the CloudWatch agent. If you do not see a specific instance, it might not be configured for Run Command.
  • In the Action list, choose Install.
  • In the Name field, type AmazonCloudWatchAgent.
  • Leave Version set to latest to install the latest version of the agent.
  • Choose Run.
  • Optionally, in the Targets and outputs areas, select the button next to an instance name and choose View output. Systems Manager should show that the agent was successfully installed.

Optionally, you can use an Amazon S3 download link to download the CloudWatch agent package on an Amazon EC2 instance.

To use the command line to install the CloudWatch agent on an Amazon EC2 instance

Download the CloudWatch agent. For a Linux server, type the following:

wget https://s3.amazonaws.com/amazoncloudwatch-agent/linux/amd64/latest/AmazonCloudWatchAgent.zip

Unzip the package.

unzip AmazonCloudWatchAgent.zip

Install the package. On a Linux server, change to the directory containing the package and type:

sudo ./install.sh


Create the CloudWatch Agent Configuration File with the Wizard

Start the CloudWatch agent configuration wizard by typing the following:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

If you are going to use Systems Manager to install and configure the agent, be sure to answer Yes when prompted whether to store the file in Systems Manager Parameter Store. You can also choose to store the file in Parameter Store even if you aren't using the SSM Agent to install the CloudWatch agent. To be able to store the file in Parameter Store, you must use an IAM role with sufficient permissions. For more information, see Create IAM Roles and Users for Use With CloudWatch Agent.


Deploying CloudWatch Configuration File

We will now deploy the CloudWatch configuration to the clients (the instances that we need to monitor):
  • In the navigation pane, choose Run Command.
  • Click on Run Command once the page loads up 
  • In the Command document list, choose AmazonCloudWatch-ManageAgent
  • In the Targets area, choose the instance or multiple instances on which you want to deploy CloudWatch Configuration on
  • Under Action select configure 
  • Under Mode leave it as ec2
  • Change the Optional Configuration Source to ssm
  • Under Optional Configuration Location enter the exact same name of the parameter you created in the Parameter Store (previous section). In this example, the parameter is called CloudWatchLinux
  • Optional Restart should be set to Yes (This will restart the CloudWatch agent, not the instance)
  • Now click on Run

Now go to CloudWatch and you should start receiving the custom metrics that you defined in the CloudWatch configuration.
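
As an alternative to Run Command, the agent also ships a control script that can fetch the configuration straight from Parameter Store and restart the agent on the instance itself; a hedged sketch, assuming the parameter is named CloudWatchLinux as above:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:CloudWatchLinux -s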

Friday, March 9, 2018

About Kubernetes and its Architecture

Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform designed for running distributed applications and services at scale. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

Kubernetes components can be logically split up into these two categories:

  • Master (Control Plane): These components run on the master nodes of the cluster and form the control plane.
  • Node (Worker): These components run on the worker nodes; they receive instructions from the control plane and run the application containers.
Master (Control Plane) has following components. These components are responsible for maintaining the state of the cluster:
  1. etcd
  2. API Server.
  3. Controller Manager
  4. Scheduler
Every worker node consists of the following components. These components are responsible for deploying and running the application containers.
  1. Kubelet
  2. Container Runtime (Docker etc)

Let’s discuss the Master components one by one.

etcd 

It is a simple key-value store which is used to store the Kubernetes cluster data (such as the number of pods, their states, namespaces, etc.). In simple words, it is the database of Kubernetes. It is only accessible from the API server for security reasons. etcd enables notifications to the cluster about configuration changes with the help of watchers. Notifications are API requests on each etcd cluster node to trigger the update of information in the node’s storage.

kube-apiserver

Need to interact with your Kubernetes cluster? Talk to the API. The Kubernetes API is the front end of the Kubernetes control plane, handling internal and external requests. The kube-apiserver is responsible for validating all resource creation requests before the resources are actually generated and saved to the data store. Users can communicate with the API server via the kubectl command line client or through REST API calls.
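
For example, the same query can be issued through kubectl or directly against the REST API; the sketch below uses kubectl proxy purely as a convenient, authenticated way to reach the API server and assumes a working kubeconfig:

kubectl get pods --namespace kube-system          # via the kubectl client
kubectl proxy --port=8001 &                       # authenticated local proxy to the API server
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/pods   # the same data over the REST API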

kube-controller-manager

Kubernetes manages applications through various controllers that operate on the general model of comparing the current status against a known spec. These controllers are control loops that continuously ensure that the current state of the cluster (the status) matches the desired state (the spec). They watch the current cluster state stored in etcd through the kube-apiserver and create, update, and delete resources as necessary. kube-controller-manager is shipped with many controllers such as:
  • Node Lifecycle controller
  • DaemonSet controller
  • Deployment controller
  • Namespace controller

kube-scheduler

Since Kubernetes is an orchestration framework, it has built-in logic for managing the scheduling of pods. The kube-scheduler component is responsible for this. Scheduling decisions depend on a number of factors, such as:

  • Resource requirements of the app
  • Resource availability across nodes
  • Whether the pod spec has affinity labels requesting scheduling on particular nodes
  • Whether the nodes have certain taints/tolerations preventing them from being included in the scheduling process

The kube-scheduler takes all of these considerations into account when scheduling new workloads.

cloud-controller-manager

The cloud-controller-manager is responsible for managing the controllers associated with built-in cloud providers. These controllers are dedicated to abstracting resources offered by individual cloud providers and mapping them to Kubernetes objects.
Cloud controllers are the main way that Kubernetes is able to offer a relatively standardized experience across many different cloud providers.

Node Components

The node, or worker, machines are the primary workhorses of Kubernetes clusters. While the master components handle most of the organization and logic, the nodes are responsible for running containers, reporting health information back to the master servers, and managing access to the containers through network proxies.

Container Runtime (famously Docker)

The container runtime is the component responsible for running the containers within pods. Rather than being a specific technology as with the other components, the container runtime in this instance is a descriptor for any container runtime that is capable of running containers with the appropriate features. Docker, CRI-O and containerd are some of the popular options for container runtimes.

kubelet

The kubelet component is an agent that runs on every worker node of the cluster. It is responsible for managing all containers running in every pod in the cluster. It does this by ensuring that the current state of containers in a pod matches the spec for that pod stored in etcd.

kube-proxy

The kube-proxy component is a network proxy that runs on each node. It is responsible for forwarding requests. The proxy is somewhat flexible and can handle simple or round robin TCP, UDP or SCTP forwarding. Each node interacts with the Kubernetes service through kube-proxy.

Monday, August 7, 2017

Containers vs Virtual Machine vs Docker

Container

Using containers, everything required to make a piece of software run is packaged into isolated containers. Unlike VMs, containers do not bundle a full operating system - only the libraries and settings required to make the software work are needed. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it’s deployed. Docker is an example of container technology; there are many others like containerd, CRI-O, etc.

Virtual Machines (VMs)

Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries and libraries - taking up tens of GBs. VMs can also be slow to boot.

Containers vs Virtual Machines

Containers and virtual machines have similar resource isolation and allocation benefits, but they function differently: because containers virtualize the operating system instead of the hardware, they are more portable and efficient.

Docker

Docker is the world's leading software containerization platform. Docker’s website describes it as “an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.”

  • Docker was originally built on top of LXC, and therefore runs containers, not VMs as VirtualBox does, for instance
  • Docker containers are made of portable “images”, similar to LXC/VZ templates, but much more powerful (versioning, inheritance, …)
  • Docker “images” can easily be created via Dockerfiles, which define the base image and the steps to run in order to create your image (see the sketch after this list)
  • Docker allows you to run multiple instances of your container without needing to copy the image (base system) files
  • The Docker daemon (which manages / runs the containers) provides a REST API used by the Docker CLI utility, but this REST API can be used by any application
  • Docker runs on virtually all operating systems (Linux, macOS, Windows, …) and platforms (Google Cloud Platform, Amazon EC2)
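
As an illustration of the Dockerfile point above, here is a minimal sketch; the base image, package and names are just examples:

# Dockerfile
FROM ubuntu:16.04                                  # base image
RUN apt-get update && apt-get install -y nginx     # steps run while building the image
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]                 # what a container of this image runs

Build the image once and run as many container instances of it as you like:

docker build -t my-nginx .
docker run -d -p 8080:80 my-nginx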