
Using Jenkins with Kubernetes AWS, Part 3


In the first article, “Using Jenkins with Kubernetes AWS, Part 1,” on automating Kubernetes installation with Jenkins, we installed Jenkins on CoreOS, created the prerequisite artifacts for installing Kubernetes, and created a Jenkins node. In the second article, “Using Jenkins with Kubernetes AWS, Part 2,” we configured a Jenkinsfile and created a Jenkins pipeline. In this article, we shall run the Jenkins pipeline to install Kubernetes and subsequently test the Kubernetes cluster.

Running the Jenkins Pipeline

Click Build Now to run the Jenkins Pipeline, as shown in Figure 1.

Build Now starts the Jenkins Pipeline
Figure 1: Build Now starts the Jenkins Pipeline

The Jenkins Pipeline gets started, and a progress bar indicates its progress. A Stage View for the various stages in the pipeline also gets displayed, as shown in Figure 2. The Kube-aws render stage in the Stage View has a “paused” link because the Jenkinsfile requests user input for the worker count (and, subsequently, for the instance type). Click the “paused” link.

Stage View with a “paused” link for the Kube-aws render stage
Figure 2: Stage View with a “paused” link for the Kube-aws render stage

In the Console Output for the Jenkins Pipeline, click the Input requested link, as shown in Figure 3.

Input requested for number of nodes
Figure 3: Input requested for number of nodes

A Number of Nodes dialog gets displayed, prompting for user input for the number of nodes, as shown in Figure 4. A default value as configured in the Jenkinsfile is also set. Click Proceed after specifying a value.

Specifying Number of Nodes
Figure 4: Specifying Number of Nodes

The Pipeline continues to run and again gets paused at another input request for the instance type. Click Input requested, as shown in Figure 5.

Input Requested for Instance Type
Figure 5: Input Requested for Instance Type

The Instance Type dialog gets displayed (see Figure 6). Select the default value (or specify a different value) and click Proceed.

Specifying Instance Type
Figure 6: Specifying Instance Type

The pipeline continues to run. In the Deploy Cluster stage, another Input requested link is presented, as shown in Figure 7. Click the link.

Input Requested for Should Cluster be Deployed
Figure 7: Input Requested for Should Cluster be Deployed

In the Should Deploy Cluster? dialog, select the default value of “yes” and click Proceed, as shown in Figure 8.

Should Deploy Cluster?
Figure 8: Should Deploy Cluster?

The pipeline continues to run. Creating the AWS resources for a Kubernetes cluster could take a while, as indicated by the message in the Console Output shown in Figure 9.

Creating AWS Resources
Figure 9: Creating AWS Resources

The pipeline runs to completion. A “SUCCESS” message indicates that the pipeline has run successfully, as shown in Figure 10.

Jenkins Pipeline Run completed Successfully
Figure 10: Jenkins Pipeline Run completed Successfully

The Stage View for the Jenkins Pipeline shows the various stages of the pipeline as having completed, as shown in Figure 11. The Stage View includes links for Last build, Last stable build, Last successful build, and Last completed build.

Stage View
Figure 11: Stage View

Click Full Stage View to display the full stage view separately, as shown in Figure 12.

Selecting Full Stage View
Figure 12: Selecting Full Stage View

The Full Stage View gets displayed, as shown in Figure 13.

Full Stage View
Figure 13: Full Stage View

In the Dashboard, the icon adjacent to the Jenkins Pipeline turns green to indicate successful completion, as shown in Figure 14.

Jenkins Dashboard with Jenkins Pipeline listed as having completed Successfully
Figure 14: Jenkins Dashboard with Jenkins Pipeline listed as having completed Successfully

To display the console output, select Console Output for the Build, as shown in Figure 15.

Build History > Console Output
Figure 15: Build History > Console Output

The Console Output gets displayed (see Figure 16).

Console Output
Figure 16: Console Output

A more detailed Console Output is listed in the following code segment:

Started by user Deepak Vohra
[Pipeline] node
Running on jenkins in /var/jenkins/workspace/install-kubernetes
[Pipeline] {
   [Pipeline] stage (set env)
   Using the 'stage' step without a block argument is deprecated
   Entering stage set env
   Proceeding
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + sudo yum install gnupg2
   Loaded plugins: priorities, update-motd, upgrade-helper
   Package gnupg2-2.0.28-1.30.amzn1.x86_64 already installed and
      latest version
   Nothing to do
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + gpg2 --keyserver pgp.mit.edu --recv-key FC8A365E
   gpg: directory '/home/ec2-user/.gnupg' created
   gpg: new configuration file '/home/ec2-user/.gnupg/gpg.conf'
        created
   ...
   ...
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + gpg2 --fingerprint FC8A365E
   pub   4096R/FC8A365E 2016-03-02 [expires: 2021-03-01]
         Key fingerprint = 18AD 5014 C99E F7E3 BA5F  6CE9 50BD
                           D3E0 FC8A 365E
   uid   [ unknown] CoreOS Application Signing Key
         <security@coreos.com>
   sub   2048R/3F1B2C87 2016-03-02 [expires: 2019-03-02]
   sub   2048R/BEDDBA18 2016-03-08 [expires: 2019-03-08]
   sub   2048R/7EF48FD3 2016-03-08 [expires: 2019-03-08]

   [Pipeline] sh
   [install-kubernetes] Running shell script
   + wget https://github.com/coreos/coreos-kubernetes/releases/
      download/v0.7.1/kube-aws-linux-amd64.tar.gz
   --2016-11-29 21:22:04-- https://github.com/coreos/
      coreos-kubernetes/releases/download/v0.7.1/
      kube-aws-linux-amd64.tar.gz
   Resolving github.com (github.com)... 192.30.253.112,
      192.30.253.113
   Connecting to github.com (github.com)|192.30.253.112|:443...
      connected.
   HTTP request sent, awaiting response... 302 Found
   Location: https://github-cloud.s3.amazonaws.com/releases/
      41458519/309e294a-29b1-
   ...
   ...
   2016-11-29 21:22:05 (62.5 MB/s) - 'kube-aws-linux-amd64.tar.gz'
      saved [4655969/4655969]

   [Pipeline] sh
   [install-kubernetes] Running shell script
   + wget https://github.com/coreos/coreos-kubernetes/releases/
   download/v0.7.1/kube-aws-linux-amd64.tar.gz.sig
   --2016-11-29 21:22:05--  https://github.com/coreos/
      coreos-kubernetes/releases/download/v0.7.1/kube-aws-linux-
      amd64.tar.gz.sig
   Resolving github.com (github.com)... 192.30.253.113,
      192.30.253.112
   Connecting to github.com (github.com)|192.30.253.113|:443...
      connected.
   HTTP request sent, awaiting response... 302 Found
   Location: https://github-cloud.s3.amazonaws.com/releases/
      41458519/0543b716-2bf4-
   ...
   ...
   Saving to: 'kube-aws-linux-amd64.tar.gz.sig'

   0K                          100% 9.21M=0s

   2016-11-29 21:22:05 (9.21 MB/s) -
      'kube-aws-linux-amd64.tar.gz.sig' saved [287/287]

   [Pipeline] sh
   [install-kubernetes] Running shell script
   + gpg2 --verify kube-aws-linux-amd64.tar.gz.sig kube-aws-
   linux-amd64.tar.gz
   gpg: Signature made Mon 06 Jun 2016 09:32:47 PM UTC using RSA
        key ID BEDDBA18
   gpg: Good signature from "CoreOS Application Signing Key
        <security@coreos.com>" [unknown]
   gpg: WARNING: This key is not certified with a trusted
        signature!
   gpg: There is no indication that the signature belongs to the
        owner.
   Primary key fingerprint: 18AD 5014 C99E F7E3 BA5F  6CE9 50BD
                            D3E0 FC8A 365E
      Subkey fingerprint: 55DB DA91 BBE1 849E A27F  E733 A6F7
                          1EE5 BEDD BA18
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + tar zxvf kube-aws-linux-amd64.tar.gz
   linux-amd64/
   linux-amd64/kube-aws
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + sudo mv linux-amd64/kube-aws /usr/local/bin
   [Pipeline] sh
   [install-kubernetes] Running shell script
   ...
   ...
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + aws ec2 create-volume --availability-zone us-east-1c
   --size 10 --volume-type gp2
   {
      "AvailabilityZone": "us-east-1c",
      "Encrypted":        false,
      "VolumeType":       "gp2",
      "VolumeId":         "vol-b325332f",
      "State":            "creating",
      "Iops":             100,
      "SnapshotId":       "",
      "CreateTime":       "2016-11-29T21:22:07.949Z",
      "Size":             10
   }
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + aws ec2 create-key-pair --key-name kubernetes-coreos
   --query KeyMaterial --output text
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + chmod 400 kubernetes-coreos.pem
   [Pipeline] stage (Kube-aws init)
   Using the 'stage' step without a block argument is deprecated
   Entering stage Kube-aws init
   Proceeding
   [Pipeline] deleteDir
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + mkdir coreos-cluster
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + cd coreos-cluster
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + kube-aws init --cluster-name=kubernetes-coreos-cluster
   --external-dns-name=NOSQLSEARCH.COM --region=us-east-1
   --availability-zone=us-east-1c --key-name=kubernetes-coreos
   --kms-key-arn=arn:aws:kms:us-east-1:672593526685:key/
      c9748fda-2ac6-43ff-a267-d4edc5b21ad9
   Success! Created cluster.yaml

   Next steps:
   1. (Optional) Edit cluster.yaml to parameterize the cluster.
   2. Use the "kube-aws render" command to render the stack
      template.
   [Pipeline] stage (Kube-aws render)
   Using the 'stage' step without a block argument is deprecated
   Entering stage Kube-aws render
   Proceeding
   [Pipeline] input
   Input requested
   Approved by Deepak Vohra
   [Pipeline] input
   Input requested
   Approved by Deepak Vohra
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + kube-aws render
   Success! Stack rendered to stack-template.json.

   Next steps:
   1. (Optional) Validate your changes to cluster.yaml with
      "kube-aws validate"
   2. (Optional) Further customize the cluster by modifying
      stack-template.json or files in ./userdata.
   3. Start the cluster with "kube-aws up".
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + sed -i 's/#workerCount: 1/workerCount: 3/' cluster.yaml
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + sed -i 's/#workerInstanceType: m3.medium/
      workerInstanceType: t2.micro/' cluster.yaml
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + kube-aws validate
   Validating UserData...
   UserData is valid.

   Validating stack template...
   Validation Report: {
      Capabilities: ["CAPABILITY_IAM"],
      CapabilitiesReason: "The following resource(s) require
         capabilities: [AWS::IAM::Role]",
      Description: "kube-aws Kubernetes cluster
         kubernetes-coreos-cluster"
   }
   stack template is valid.

   Validation OK!
   [Pipeline] stage (Archive CFN)
   Using the 'stage' step without a block argument is deprecated
   Entering stage Archive CFN
   Proceeding
   [Pipeline] step
   Archiving artifacts
   Recording fingerprints
   [Pipeline] stage (Deploy Cluster)
   Using the 'stage' step without a block argument is deprecated
   Entering stage Deploy Cluster
   Proceeding
   [Pipeline] input
   Input requested
   Approved by Deepak Vohra
   [Pipeline] echo
   Deploying Kubernetes cluster
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + kube-aws up
   Creating AWS resources. This should take around 5 minutes.
   Success! Your AWS resources have been created:
   Cluster Name:    kubernetes-coreos-cluster
   Controller IP:   34.193.183.134

   The containers that power your cluster are now being downloaded.

   You should be able to access the Kubernetes API once the
      containers finish downloading.
   [Pipeline] sh
   [install-kubernetes] Running shell script
   + kube-aws status
   Cluster Name:    kubernetes-coreos-cluster
   Controller IP:   34.193.183.134
   [Pipeline] step
   Archiving artifacts
   Recording fingerprints
   [Pipeline] }
[Pipeline]   // Node
[Pipeline] End of Pipeline
Finished: SUCCESS
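
The two sed substitutions in the Kube-aws render stage above can be tried out locally. The following is a minimal sketch; the cluster.yaml contents here are only the two commented defaults relevant to the edits, standing in for the full file that kube-aws init generates:

```shell
#!/bin/sh
# Create a stand-in cluster.yaml with the commented defaults that the
# pipeline's sed commands expect (illustrative, not the full file).
cat > cluster.yaml <<'EOF'
clusterName: kubernetes-coreos-cluster
#workerCount: 1
#workerInstanceType: m3.medium
EOF

# Uncomment and set the worker count and instance type, as the pipeline does.
sed -i 's/#workerCount: 1/workerCount: 3/' cluster.yaml
sed -i 's/#workerInstanceType: m3.medium/workerInstanceType: t2.micro/' cluster.yaml

# Show the resulting worker settings.
grep worker cluster.yaml
```

Note that the patterns must match the defaults exactly, including the values; if cluster.yaml ships with different defaults, the substitutions silently do nothing, which is why kube-aws validate afterwards is a useful check.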

Testing the Kubernetes Cluster

Having installed Kubernetes, we shall next test the cluster by running an application on it. First, we need to point the public DNS name (the nosqlsearch.com domain) at the controller IP. Copy the controller IP from the Console Output, as shown in Figure 17.

Obtaining the Controller IP
Figure 17: Obtaining the Controller IP

The Kubernetes controller IP may also be obtained from the EC2 Console, as shown in Figure 18.

Obtaining the Kubernetes Controller IP
Figure 18: Obtaining the Kubernetes Controller IP
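
When scripting the DNS update, the controller IP may also be parsed out of the kube-aws status output. The following is a minimal sketch; the sample text stands in for a live kube-aws status run, and the parsing is an assumption about the output format shown earlier:

```shell
#!/bin/sh
# Sample text standing in for live `kube-aws status` output
# (values taken from the console output above).
status_output='Cluster Name:    kubernetes-coreos-cluster
Controller IP:   34.193.183.134'

# Pull the third field of the "Controller IP:" line.
controller_ip=$(printf '%s\n' "$status_output" | awk '/^Controller IP:/ {print $3}')
echo "$controller_ip"
```

The extracted value could then be fed to the hosting provider's DNS API, where one is available, instead of editing the zone file by hand.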

Add an A (Host) record to the DNS zone file for the nosqlsearch.com domain at the hosting provider, as shown in Figure 19. The procedure for adding an A record differs slightly among hosting providers.

Adding an A record for the controller IP
Figure 19: Adding an A record for the controller IP

Log in to the Kubernetes master with SSH, using the master’s IP:

ssh -i "kubernetes-coreos.pem" core@34.193.178.55

The CoreOS command prompt gets displayed, as shown in Figure 20.

The CoreOS command prompt
Figure 20: The CoreOS command prompt

Install the kubectl binary and make it executable:

sudo wget https://storage.googleapis.com/kubernetes-release/
   release/v1.3.0/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
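
The chmod +x step is required because wget saves the file without the executable bit, so running ./kubectl directly would fail with “Permission denied.” A quick sketch of the pattern, using a stand-in script rather than the real kubectl download:

```shell
#!/bin/sh
# Stand-in for a freshly downloaded binary (illustrative, not kubectl).
printf '#!/bin/sh\necho kubectl-stub\n' > kubectl-stub

# Set the executable bit, then run it.
chmod +x kubectl-stub
./kubectl-stub   # prints "kubectl-stub"
```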

List the nodes:

./kubectl get nodes

The Kubernetes cluster nodes get listed (see Figure 21).

Listing the Kubernetes cluster nodes
Figure 21: Listing the Kubernetes cluster nodes
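
A scripted check that the expected number of worker nodes registered can be built on the same listing. The following is a sketch; the node names and sample output are illustrative, standing in for a live ./kubectl get nodes run (with kube-aws, the controller is typically marked Ready,SchedulingDisabled):

```shell
#!/bin/sh
# Sample text standing in for live `kubectl get nodes` output.
nodes='NAME                        STATUS                     AGE
ip-10-0-0-10.ec2.internal   Ready,SchedulingDisabled   5m
ip-10-0-0-50.ec2.internal   Ready                      5m
ip-10-0-0-51.ec2.internal   Ready                      5m
ip-10-0-0-52.ec2.internal   Ready                      5m'

# Skip the header line and count nodes whose STATUS is exactly "Ready",
# which excludes the scheduling-disabled controller.
workers=$(printf '%s\n' "$nodes" | awk 'NR > 1 && $2 == "Ready"' | wc -l)
echo "workers: $workers"
```

With the workerCount of 3 configured earlier, the count should match 3 once all workers have joined.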

To test the cluster, create a deployment for nginx consisting of three replicas.

./kubectl run nginx --image=nginx --replicas=3

Subsequently, list the deployments:

./kubectl get deployments

The “nginx” deployment should get listed, as shown in Figure 22.

The nginx deployment listed
Figure 22: The nginx deployment listed

List the cluster-wide Pods:

./kubectl get pods -o wide

Create a service of type LoadBalancer from the nginx deployment:

./kubectl expose deployment nginx --port=80 --type=LoadBalancer

List the services:

./kubectl get services

The cluster-wide Pods get listed, as shown in Figure 23. The “nginx” service gets created and listed, including its cluster IP and external IP.

Listing the Pods and the nginx service
Figure 23: Listing the Pods and the nginx service
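
On AWS, the external address of a LoadBalancer service may show as <pending> until the Elastic Load Balancer has been provisioned, so a script may need to poll for it. The following is a minimal sketch; the wait_for_ip helper and the stub command are illustrative, not part of kubectl:

```shell
#!/bin/sh
# Poll until a command prints a non-empty value other than "<pending>".
# Against a live cluster, the command would be something like:
#   ./kubectl get service nginx \
#       -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
wait_for_ip() {
  cmd=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    ip=$($cmd)
    if [ -n "$ip" ] && [ "$ip" != "<pending>" ]; then
      echo "$ip"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Illustrative use with a stub in place of a real kubectl query.
wait_for_ip "echo a1b2c3.us-east-1.elb.amazonaws.com" 3
```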

Invoke the nginx service at the cluster IP. The HTML markup output by the nginx service gets displayed, as shown in Figure 24.

Output from the nginx service
Figure 24: Output from the nginx service

Conclusion

In these three articles, we discussed installing a Kubernetes cluster using a Jenkins project. We created a Jenkins Pipeline project with a Jenkinsfile to install the cluster. The Jenkins pipeline automates Kubernetes installation, and the same pipeline may be modified as required and re-run to create multiple Kubernetes clusters.
