vCommander Build: 7.0.1
Scenario Download Link: Download from GitHub

Kubernetes is an open-source system for deploying and managing containerized applications within a hybrid or cloud environment. Using the Embotics vCommander cloud management platform, you can instantiate a Kubernetes cluster, and then use vCommander’s orchestration, self-service, cloud governance and cost optimization features to manage the cluster.


This article shows you how to use vCommander 7.0+ to get a Kubernetes cluster up and running quickly on AWS, and to add the deployed cluster to vCommander’s inventory as a managed system. While there are many ways to deploy Kubernetes, this example solution uses the kubeadm deployment and installation method on CentOS 7 Linux, 64-bit architecture.


This article is intended for systems administrators, engineers and IT professionals. Previous experience with Linux, Docker and AWS is required.

Prerequisites

Before you begin, you must:

Overview

To provision a Kubernetes cluster on AWS with vCommander, you will carry out the following steps:

  • Create a CentOS 7 AMI in AWS.
  • Test an SSH connection to the deployed instance.
  • Create guest OS credentials for the “centos” user; these credentials are referenced by the workflows you will import.
  • Install a workflow plug-in step that automatically adds the deployed cluster to vCommander’s inventory.
  • Import completion workflows from the Embotics GitHub repository; these workflows will run once the cluster is deployed.
  • Create a custom attribute for the Kubernetes version.
  • Create a custom attribute for the managed system name.
  • Synchronize the inventory for your AWS managed system.
  • Create a service catalog entry for users to request a Kubernetes cluster.
  • Submit a service request.

Create a CentOS 7 AMI in AWS

Create a generic AMI in AWS to use as the base image for all nodes in the Kubernetes cluster.

  1. Log into the AWS console, navigate to EC2, and click Launch Instance.
  2. Choose an Amazon Machine Image (AMI): Go to the AWS Marketplace tab, search for “centos”, and select CentOS 7 (x86_64) - with Updates HVM. This image has no software cost, but will incur AWS usage fees.
  3. Review the AMI details and click Continue.
  4. Instance Type page: Select t2.medium. A t2.medium instance is a good starting point for Kubernetes deployments; you may want to choose a larger instance type, depending on your application workloads. Click Next: Configure Instance Details.
  5. Configure Instance Details page: Configure options appropriate for your organization. Click Next: Add Storage.
  6. Add Storage page: Kubernetes can run on any storage class or volume type. Keep the default size of 8 GiB. Click Next: Add Tags.
  7. Add Tags page: Add tags as required. Click Next: Configure Security Group.
  8. On the Configure Security Group page, configure the following firewall rules:
    • SSH: TCP port 22
    • Custom TCP: TCP port 6443
  9. Click Review and Launch.
  10. A dialog appears, prompting you to select an existing key pair or create a new one. If you already have an AWS key pair, select it in the list. If not, select Create a new key pair. Enter a key pair name, such as kubernetes-aws-vcommander, and click Download Key Pair. See Managing Key Pairs in the vCommander documentation to learn more.
  11. Save the .pem file to a known location.

    Important: Do not lose your SSH private key file! This PEM-encoded file is required to connect the vCommander workflow to the deployed EC2 instances.

  12. Click Launch Instances.
  13. Under Instances, right-click the instance and select Image > Create Image.

Once AWS has created the image, which may take up to five minutes, your AMI is available for use.
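Once an instance is running, you can confirm from your workstation that the security-group rules above admit traffic on the two required ports. The following is a minimal sketch using bash's /dev/tcp pseudo-device; the HOST value is a placeholder standing in for your instance's public DNS name:

```shell
# Check that TCP ports 22 (SSH) and 6443 (Kubernetes API server) are reachable.
check_port() {
  if timeout 5 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "port $2 open on $1"
  else
    echo "port $2 closed on $1"
  fi
}

# Placeholder: replace with your instance's public DNS name.
HOST="ec2-198-51-100-1.compute-1.amazonaws.com"
check_port "$HOST" 22
check_port "$HOST" 6443
```

If port 6443 reports closed, revisit the Configure Security Group page before proceeding; the kubeadm join step later depends on it.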

Test an SSH connection to the deployed instance

Ensure that you can open an SSH connection to the instance you just deployed, using the PEM-encoded SSH key you saved earlier. vCommander workflows will use this key to authenticate to the deployed instances. Because this is a CentOS image, connect as the centos user. For example:

ssh -i /path/to/my-key-pair.pem centos@ec2-198-51-100-1.compute-1.amazonaws.com

To learn more, see Connect to Your Linux Instance in the AWS documentation.
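If ssh rejects the key with a warning such as "UNPROTECTED PRIVATE KEY FILE", the .pem file's permissions are too open. A quick sketch for fixing and verifying the permissions before you connect (the function name is ours; stat -c assumes GNU/Linux):

```shell
# Restrict the key so only the owner can read it, then print the resulting mode.
fix_key_perms() {
  chmod 400 "$1" && stat -c '%a' "$1"   # GNU stat; on macOS use: stat -f '%Lp'
}

# Demo with a throwaway file standing in for your real .pem:
demo_key=$(mktemp)
fix_key_perms "$demo_key"   # prints 400
```

Run the same fix against your real key file path before invoking ssh.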

Create guest OS credentials for the “centos” user

The completion workflows use “centos” user credentials to open an SSH connection to the deployed instances. Before importing the workflows, you must create a set of credentials for the centos user, using the PEM-encoded key from the instance you just created.

  1. In vCommander, go to Configuration > Credentials.
  2. Click Add.
  3. In the Add Credentials dialog, select RSA Key for the Credentials Type.
  4. Enter aws for the Name.
    This name is hard-coded in the completion workflows, so enter the name exactly as shown.
  5. Enter centos for the Username.
  6. Open the .pem file you saved earlier in a text editor, copy the entire contents, and paste them into the RSA Key field.
  7. Enter Kubernetes-AWS for the Description.
  8. For Category, keep the default setting, Guest OS Credentials.
  9. Click OK.
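Before pasting the key into the RSA Key field, you can sanity-check that the file really is a PEM-encoded RSA private key. This is a hedged sketch using openssl (assumed installed); the throwaway key generated below simply stands in for your real .pem:

```shell
# Return success only if the file is a valid PEM-encoded RSA private key.
pem_is_valid() {
  openssl rsa -in "$1" -check -noout >/dev/null 2>&1
}

# Demo against a freshly generated throwaway key:
tmp_key=$(mktemp)
openssl genrsa -out "$tmp_key" 2048 2>/dev/null
pem_is_valid "$tmp_key" && echo "valid RSA key"
```

A file that fails this check (for example, a public key or a truncated paste) will also fail silently inside the workflows, so it is worth the ten seconds now.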

Install a plug-in workflow step package

From the Embotics GitHub repository, download and install the Kubernetes plug-in workflow step package, which contains a plug-in workflow step that adds the deployed Kubernetes cluster to vCommander’s inventory as a managed system. The completion workflows in this scenario reference this plug-in step.


To learn how to download and install workflow plug-in steps, see Adding Workflow Plug-In Steps.

Import completion workflows

Import two vCommander completion workflows to complete the provisioning and configuration of the cluster.

  1. Go to the Embotics Git repository and download the following workflows:
    • aws-post-deploy-k8s-kubeadm-component.yaml: a component-level completion workflow that runs on each provisioned node and installs common prerequisites (such as Docker)
    • aws-post-deploy-k8s-kubeadm-svc.yaml: a service-level completion workflow that facilitates configuration of the Kubernetes cluster
  2. In vCommander, go to Configuration > Service Request Configuration > Completion Workflow.
  3. Click Import and browse to the aws-post-deploy-k8s-kubeadm-component.yaml file you downloaded.

  4. vCommander automatically validates the workflow. Click Import.
  5. Repeat this process to import the second downloaded workflow, aws-post-deploy-k8s-kubeadm-svc.yaml.

Create a custom attribute for the Kubernetes version

To enable requesters to select which version of Kubernetes to install, create a custom attribute.

  1. In vCommander, go to Configuration > Custom Attributes.
  2. Click Add.
  3. Name the attribute kubernetes_version.
    This name is hard-coded in the completion workflows, so enter the name exactly as shown.
  4. Keep the default values for all other settings on this page.
  5. Click Next, add the Kubernetes versions you want to make available to requesters as list values, and click OK.
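The attribute values are plain version strings in major.minor.patch form; a malformed entry will only surface later as a failed package install on the nodes. As an illustration, here is the kind of format check you could apply to a candidate value (the function name and the example version 1.13.1 are ours, not part of the Embotics workflows):

```shell
# Accept only strings of the form X.Y.Z, e.g. "1.13.1".
is_valid_k8s_version() {
  echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'
}

is_valid_k8s_version "1.13.1" && echo "ok"        # well-formed
is_valid_k8s_version "latest" || echo "rejected"  # not a version number
```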

Create a custom attribute for the managed system name

To store the name of the Kubernetes managed system, create another custom attribute.

  1. In vCommander, go to Configuration > Custom Attributes.
  2. Click Add.
  3. Name the attribute kubernetes_name.
    This name is hard-coded in the completion workflows, so enter the name exactly as shown.
  4. In the Type drop-down list, select Text.
  5. Keep the default values for all other settings on this page.
  6. Click Next, choose Free Form, and click Finish.

Synchronize the inventory for your AWS managed system

To ensure that your newly created AMI is available to add to the service catalog, synchronize the inventory for your AWS managed system.

  1. In vCommander, go to Views > Operational.
  2. Right-click your AWS managed system and select Synchronize Inventory.

Create a service catalog entry

Next, create an entry in the service catalog that:

  • Allows the requester to choose which Kubernetes version to deploy (optional)
  • Allows the requester to specify the name of the vCommander managed system
  • Provisions three instances from the (previously created) EC2 AMI
  • Applies the component-level completion workflow to each deployed instance
  • Applies the service-level completion workflow to the deployed cluster

  1. In vCommander, go to Configuration > Service Request Configuration > Service Catalog > Add Service.
  2. Enter a name and description for the service, applying a custom icon and categories if you wish.

  3. On the next page, add the AMI for provisioning the base instances for the cluster. Click Add > Template, Image or AMI and navigate to the AMI you created earlier.

  4. The workflows support any number of nodes, but in this example, we’re creating a cluster with one master and two worker nodes. This means you need to click Add to Service three times. When you click Close, the three components are visible.

  5. Create a custom component to store the value for the Kubernetes version custom attribute. On the Component Blueprints page, click Add > New Component Type.

  6. Enter kubernetes_version for the Name, with an Annual Cost of 0. Click Add to Service.

    This name is hard-coded in the completion workflows, so enter the name exactly as shown.

  7. Create a second custom component to store the value for the name of the Kubernetes cluster when it’s added to vCommander as a managed system. On the Component Blueprints page, click Add > New Component Type.
  8. Enter kubernetes_name for the Name, with an Annual Cost of 0. Click Add to Service.

    This name is hard-coded in the completion workflows, so enter the name exactly as shown.

  9. Next, configure the blueprint for each of the VM components. On the Infrastructure tab:
    • For Completion Workflow, select aws-post-deploy-k8s-kubeadm-component.
    • Customize the Deployed Name to match your enterprise naming convention, using vCommander variables. For example, the variable #{uniqueNumber[3]} adds a three-digit unique number to the VM name.

  10. On the Resources tab:
    • Set the Instance Type to t2.medium (at minimum).

      Note: Increase the resources to support more concurrent pods/containers per host, if needed.

    • From the Key Pair list, select the key pair created in AWS earlier.  
  11. Perform this configuration for the remaining two VM components.

  12. Once you have configured all three VM components, configure the first custom component. On the Component Blueprint page for kubernetes_version:
    • Go to the Attributes tab.
    • Click Add Attributes.
    • Select kubernetes_version in the list and click OK.

    • Back on the Attributes tab, choose a default value for kubernetes_version from the drop-down list.

  13. If you want to allow requesters to choose the Kubernetes version, add the custom attribute to the request form. On the Form tab, in the Toolbox on the right, click the kubernetes_version form element.

  14. If desired, click Edit to enable the Required flag, then click OK.
  15. Configure the blueprint for the second custom component. On the Component Blueprint page for kubernetes_name:
    • Go to the Attributes tab.
    • Click Add Attributes.
    • Select kubernetes_name in the list and click OK.
  16. If you want to allow requesters to choose the name of the Kubernetes managed system, add the custom attribute to the request form. On the Form tab, in the Toolbox on the right, click the kubernetes_name form element.
  17. Click Edit to enable the Required flag and click OK.
  18. On the Deployment page, for Completion Workflow, select aws-post-deploy-k8s-kubeadm-svc.

  19. For the purposes of this walk-through, we’ll skip the Intelligent Placement page. To learn more, see Intelligent Placement.
  20. On the Visibility page, specify who can request this service.
  21. Save the service.

The service catalog entry is now published.

Submit a service request

Our service is now configured and ready to test. In vCommander or the Service Portal, go to the Service Catalog and request the Kubernetes service. Notice that you can specify the cluster name and select the Kubernetes version on the request form.

Once the service request has completed, the new cluster is added to vCommander’s inventory as a Kubernetes managed system.