Kubernetes From Scratch: Building the Golden Image

Introduction

Hello! In this chapter we will take the first step towards automating a manual deployment of Kubernetes. We will build a base VM image that contains all of the components required for a Kubernetes master or worker node. Note that at this point we are not making a distinction between masters and workers - it is a generic image that could become either later down the road.

To build this image we are going to use a tool called Packer. Packer is a tool from HashiCorp that automates building machine images for various cloud providers and platforms. Builds are defined as JSON, allowing a developer to stay cloud agnostic and build the same machine anywhere it is appropriate. The official description from HashiCorp states:

HashiCorp Packer is easy to use and automates the creation of any type of machine image. It embraces modern configuration management by encouraging you to use automated scripts to install and configure the software within your Packer-made images. Packer brings machine images into the modern age, unlocking untapped potential and opening new opportunities.

Whoa! Hold on. One last bit of housekeeping…

Before you get started you will need:

  • An AWS Account
  • An IAM user within the AWS account that has full access to EC2
  • The AWS CLI installed and configured with these credentials by running aws configure
  • The Packer CLI installed
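
If you want to sanity-check your setup before continuing, a couple of quick commands will confirm both CLIs are installed and that your credentials work (the exact output will vary):

packer version
aws sts get-caller-identity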

The source code for this project can be found here: https://github.com/dylanrhysscott/kubernetes-from-scratch. Each chapter can be found as a release. If you want a specific chapter please look here: https://github.com/dylanrhysscott/kubernetes-from-scratch/releases

Let’s get started!

Start by creating a directory to house our project. I am going to use the directory kubernetes-from-scratch. Within this directory create a templates folder - this will house our JSON files for use with Packer. Finally, create a scripts directory inside templates, along with a JSON file to house our build definition. I’ve chosen k8s-node.json.

mkdir kubernetes-from-scratch
cd kubernetes-from-scratch
mkdir templates
cd templates
mkdir scripts
touch k8s-node.json

Your directory structure should look something like this:

├── LICENSE
├── README.md
└── templates
    ├── k8s-node.json
    └── scripts

Creating the build file

Next open the k8s-node.json file and populate it with the following:

{
    "variables": {
        "region": "eu-west-2",
        "source_ami": "ami-023143c216b0108ea",
        "instance_type": "t2.micro",
        "ssh_user": "admin"
    },
    "builders": [
        {
            "type": "amazon-ebs",
            "ami_name": "kubernetes-generic-node",
            "profile": "default",
            "region": "{{ user `region`}}",
            "source_ami": "{{ user `source_ami` }}",
            "instance_type": "{{ user `instance_type` }}",
            "ssh_username": "{{ user `ssh_user` }}"
        }
    ],
    "provisioners": [
        {
            "type": "shell",
            "script": "./scripts/base.sh",
            "execute_command": "sudo sh -c '{{ .Vars }} {{ .Path }}'"
        },
        {
            "type": "shell",
            "script": "./scripts/provision.sh",
            "execute_command": "sudo sh -c '{{ .Vars }} {{ .Path }}'"
        }
    ]
}

Defining variables

First in our build we define some variables. These allow us to override values at build time if required, making our templates reusable:

  • region - refers to the AWS region in which we want to deploy a server and build our image
  • source_ami - refers to the base AMI upon which we will provision. I’ve chosen Debian.
  • instance_type - refers to the instance size on which we will be provisioning. This can be changed later in Terraform when we deploy our custom AMI
  • ssh_user - refers to the SSH user Packer will connect with. For Debian-based AMIs this is admin

Note: This AMI ID refers to the Debian Buster image in eu-west-2. If you are building in a different region, change this value to the one for your region - https://wiki.debian.org/Cloud/AmazonEC2Image/Buster
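
Because these are user variables, they can also be overridden on the command line with Packer's -var flag, so a build can target another region without editing the template. A sketch (the AMI ID here is a placeholder - substitute the Debian AMI for your chosen region):

packer build -var 'region=eu-west-1' -var 'source_ami=ami-XXXXXXXXXXXX' k8s-node.json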

Configuring the builder

Next in our JSON structure we define an array of builders. This is an array of objects that Packer will iterate over at build time, creating a build artefact for each provider. We are going to use the amazon-ebs builder for EC2. Each builder object takes a type property to tell Packer what type of machine image we are building - in this case amazon-ebs. Then we have some AWS-specific builder properties:

  • ami_name - Specifies the name of our custom AMI once built. I have chosen kubernetes-generic-node
  • profile - Specifies the AWS CLI profile to use when authenticating with AWS. I have chosen default as it is the only profile I have configured in .aws/config
  • region - Specifies the AWS region our builder should use. Here I am passing in a reference to our user defined variable called region.
  • source_ami - Specifies the base AMI our builder should start with. Here I am passing in a reference to our user defined variable called source_ami.
  • instance_type - Specifies the instance size our builder should start with. Here I am passing in a reference to our user defined variable called instance_type.
  • ssh_username - Specifies the SSH username for our chosen AMI. Packer will use this when connecting. Here I am passing in a reference to our user defined variable called ssh_user.
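
Once the provisioning scripts below are in place (Packer also checks that referenced script files exist), it is worth confirming the template parses before kicking off a full build. Packer ships with a validate subcommand for this, run from the templates directory:

packer validate k8s-node.json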

Provisioning

Finally we define an array of provisioners. Provisioners can be anything from shell scripts through to configuration management tools like Salt or Ansible. Although we will be using Ansible for the deployment of our cluster later, I have opted for shell scripts for our generic provisioning with Packer. We are going to define two shell scripts, each mapping to a provisioner entry. Each provisioner is configured with the following properties:

Note: Provisioners are executed in definition order!

  • type - In both cases this is shell
  • script - A relative path to the script file to be uploaded and executed by Packer. Our first script will reside in the scripts directory and be called base.sh. The second will be provision.sh
  • execute_command - This tells Packer how the script should be executed. To avoid prefixing every line in our shell scripts with sudo, I have told Packer to execute the scripts via sudo sh. See the shell provisioner docs for more details
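
As an aside, the shell provisioner also accepts inline commands rather than a script file, which is handy for quick experiments - a minimal example:

{
    "type": "shell",
    "inline": ["echo 'Hello from the build instance'"]
}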

That’s it for our Packer template! Now we need to create our provisioning scripts.

Creating the scripts

Create two files in the scripts directory and mark them as executable:

cd scripts
touch base.sh
touch provision.sh
chmod +x *.sh

Open base.sh and populate it with the following:

#!/bin/bash
echo "Updating base image..."
# Runs a basic apt update
apt-get update && apt-get upgrade -y
# Install common packages to allow image to work with Ansible
# Also install pre reqs for Docker
apt-get install -y gcc make build-essential python-openssl curl apt-transport-https ca-certificates gnupg2 software-properties-common

I’ve commented the key points in the script - it upgrades our base Debian image and installs some prerequisite packages for Ansible and Docker.
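
As another aside, scripts like this can be smoke-tested locally before paying for an EC2 build - for example in a throwaway Debian container, assuming you have Docker available on your machine:

docker run --rm -v "$(pwd)/scripts:/scripts" debian:buster bash /scripts/base.sh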

Next open provision.sh and populate it with the following:

#!/bin/bash
# Sets some env vars to determine what versions of binaries to fetch
ETCD_VER=v3.3.18
K8S_VER=v1.17.2
ETCD_URL=https://github.com/etcd-io/etcd/releases/download
K8S_URL=https://github.com/kubernetes/kubernetes/releases/download
cd /
echo "Downloading etcd..."
# Fetches ETCD binary from Github
curl --progress-bar -LO ${ETCD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz
# Extract and move the binaries into the PATH
tar -xf /etcd-${ETCD_VER}-linux-amd64.tar.gz
mv etcd-${ETCD_VER}-linux-amd64/etcd* /usr/local/bin/
# Cleanup
rm -rf etcd-${ETCD_VER}-linux-amd64*
echo "Downloading K8s binaries..."
# Fetches specified kubernetes release from github
curl --progress-bar -LO ${K8S_URL}/${K8S_VER}/kubernetes.tar.gz
# Extract the release
tar -xf /kubernetes.tar.gz
# Run the built in script to download the binaries 
# Skip the confirmation as interactive shell prompts cause build to fail
KUBERNETES_SKIP_CONFIRM=true /kubernetes/cluster/get-kube-binaries.sh
echo "Extracting K8s binaries..."
# Extract and move the binaries into PATH
cd /kubernetes/server/
tar -xf ./kubernetes-server-linux-amd64.tar.gz
echo "Removing unneeded binaries..."
# Remove some redundant binary files
rm -rf /kubernetes/server/kubernetes/server/bin/*.docker_tag
rm -rf /kubernetes/server/kubernetes/server/bin/*.tar
echo "Installing K8s binaries..."
cp -R /kubernetes/server/kubernetes/server/bin/* /usr/bin
echo "Creating config directories..."
# Setup config directories
cd /
mkdir -p /etc/etcd
mkdir -p /var/lib/kubernetes
# Cleanup
rm -rf ./kubernetes.tar.gz ./kubernetes
echo "Installing container runtime..."
# Install Docker
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io

Again this has been commented, but it’s doing quite a lot - let’s step through it:

  • We start the script by defining some variables (bear in mind this executes remotely on the EC2 build instance). This lets us change the etcd and Kubernetes versions, as well as the download sources, in one place
  • Next using curl we download the specified etcd version from Github, extracting and installing the binaries in our PATH. Once done we clean up our source to reduce our AMI footprint
  • Then we do the same for the Kubernetes binaries with a few additional steps:
    • The Kubernetes repo does not contain actual binaries so we run a script to fetch them
    • The resulting tar file is extracted
    • We remove some docker tag files and tarballs that are not needed, then install the remaining binaries into our PATH
  • We create some config directories that will be used by our cluster later
  • Finally we install Docker according to the documentation - https://docs.docker.com/install/linux/docker-ce/debian/
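
If you want extra confidence that everything landed in the PATH, a few version checks could be appended to the end of provision.sh. Packer fails the build when a script exits non-zero, so with set -e a missing binary would abort the build here - a minimal sketch:

# Optional sanity checks - with set -e a missing binary aborts the build
set -e
echo "Verifying installed binaries..."
etcd --version
kubectl version --client
docker --version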

Putting it all together

With everything in place we can test our build. From the templates directory run the following

packer build k8s-node.json

All being well you should see something similar to the following. Packer streams the output of our scripts to stdout; it has been snipped for brevity:

==> amazon-ebs: Prevalidating AMI Name: kubernetes-generic-node
    amazon-ebs: Found Image ID: ami-023143c216b0108ea
==> amazon-ebs: Creating temporary keypair: packer_5e32e36e-3c20-0175-5e8a-b1563111e267
==> amazon-ebs: Creating temporary security group for this instance: packer_5e32e36f-ad9a-871b-b1ef-82fa4b265c02
==> amazon-ebs: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
    amazon-ebs: Adding tag: "Name": "Packer Builder"
    amazon-ebs: Instance ID: i-072609d94ab7e259d
==> amazon-ebs: Waiting for instance (i-072609d94ab7e259d) to become ready...
==> amazon-ebs: Using ssh communicator to connect: 18.130.177.2
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Provisioning with shell script: ./scripts/base.sh
<snip - SCRIPT OUTPUT>
==> amazon-ebs: Provisioning with shell script: ./scripts/provision.sh
<snip - SCRIPT OUTPUT>
==> amazon-ebs: Stopping the source instance...
    amazon-ebs: Stopping instance
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating AMI kubernetes-generic-node from instance i-072609d94ab7e259d
    amazon-ebs: AMI: ami-0f909b4e3ef030c5e
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
eu-west-2: ami-0f909b4e3ef030c5e

Congratulations! You have just made your first golden image. Be sure to make a note of the AMI ID as we’ll need it in later chapters. We can confirm our AMI has been created by visiting the EC2 section of the AWS console and selecting AMIs:

[Screenshot: the new kubernetes-generic-node AMI listed in the AWS console]
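
The same check can be made from the terminal with the AWS CLI:

aws ec2 describe-images --owners self --filters "Name=name,Values=kubernetes-generic-node"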

Final thoughts

By using this method we can ensure we have a repeatable build for our golden image regardless of cloud provider. Check out what other builders are available for Packer and see if the same can be achieved with a different provider. It’s worth noting that Packer does not manage the resulting artefacts, so they will remain in your AWS account until deleted manually. You will also be charged for the compute time used by EC2 while provisioning. Packer does clean up compute resources on both failure and success, so you are just left with the storage costs of the build artefacts.
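
When you do want to remove an image, deregister the AMI and delete its backing snapshot (the snapshot ID below is a placeholder - the real one is shown against the AMI in the console or in the describe-images output):

aws ec2 deregister-image --image-id ami-0f909b4e3ef030c5e
aws ec2 delete-snapshot --snapshot-id snap-XXXXXXXXXXXX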

Packer is an incredibly powerful tool and gives us a good foundation upon which to build. Next we will look at deploying the infrastructure required to support our new cluster using Terraform.
