Making the Canonical Distribution of Kubernetes, deployed using Juju, work with the vSphere cloud provider

DISCLAIMER: As of the 1.9.4 update, setting the UUID in the configuration file is no longer supported, which renders this guide obsolete.

Lately I have been working on setting up a production-ready Kubernetes cluster. We are hosted in a vSphere private cloud.

I came across Juju and the Canonical Distribution of Kubernetes (https://jujucharms.com/canonical-kubernetes/), which gives you a really smart and easy way to handle your cluster.

Unfortunately CDK does not support vSphere storage provisioning out of the box, even though it is supported natively in Kubernetes (it does require some setup).

I know that Canonical is working on adding vSphere support, but until then, here's how to achieve it.

This post covers Kubernetes 1.9+ because setting up the vSphere cloud provider for Kubernetes was simplified a lot with this release.

Official VMware docs: https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/existing.html 

Note that they differ from the current Kubernetes docs.

I assume that you have a vCenter user with sufficient permissions; otherwise check out the official docs for details.

Here's what I did:

Step 1: Enable disk UUID on Node virtual machines

You can do this either using the vSphere CLI (which is described in the official docs; see the govc sketch after the list below), or using vCenter, which I found to be easier. But of course it depends on how many nodes you have.

In the (web-based) vCenter client:

  1. Turn off the node
  2. Go to the config tab
  3. Click "VM options" in the menu
  4. Click "Edit..."
  5. Open Advanced
  6. Click "Edit Configuration"
  7. Add "disk.EnableUUID" with a value of 1
  8. Save the configuration and turn the node back on



Repeat the process for all your worker nodes.
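If you would rather script this than click through vCenter, the CLI route mentioned above can be done with govc. Here's a rough sketch, assuming govc is installed and the GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD environment variables point at your vCenter; the datacenter and VM paths below are just examples for your own inventory:

# List the VMs in the datacenter to find the paths of your nodes (example datacenter name)
govc ls /Datacenter/vm

# Enable disk UUIDs on a node while it is powered off (example VM path)
govc vm.change -vm '/Datacenter/vm/kubernetes-worker-0' -e="disk.enableUUID=1"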

Step 2: Create a vsphere.conf to be uploaded to the Kubernetes master


After that we need to prepare vsphere.conf. This is the template I used; be sure to adapt the values to your specific setup:

[Global]
# Properties in this section will be used for all specified vCenters unless overridden in a VirtualCenter section.
user = "username"
password = "password"
port = "443" # Optional
insecure-flag = "1" # Set to 1 if the vCenter uses a self-signed cert
datacenters = "Datacenter" # I only have one datacenter
vm-uuid = "__uuid__" # We will set this value automagically

[VirtualCenter "10.200.1.2"]
# Even though it is the same as the Global configuration it should still be there, otherwise the cloud provider
# will interpret the configuration file as a pre-1.9-style config file.

[Workspace]
# Specify properties which will be used for various vSphere Cloud Provider functionality,
# e.g. dynamic provisioning, Storage Profile Based Volume provisioning etc.
server = "10.200.1.2"
datacenter = "Datacenter"
folder = "k8s-storage" # Make sure this folder exists in your vSphere Datacenter
default-datastore = "MyStorage" # Datastore to use for provisioning volumes using storage classes/dynamic provisioning
resourcepool-path = "" # Used for dummy VM creation. Optional

[Disk]
scsicontrollertype = pvscsi

[Network]
public-network = "My Network"
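One gotcha: the folder referenced above has to exist as a VM folder in the datacenter before the cloud provider can use it. You can create it in vCenter, or, if you use govc, something along these lines should work (same example names as above):

# Create the VM folder the cloud provider will use (example path)
govc folder.create /Datacenter/vm/k8s-storage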

Step 3: Upload the file to the Kubernetes master node

Do that using the following command:
juju scp /path/to/your/vsphere.conf kubernetes-master/0:

The vSphere cloud provider can under normal circumstances read the machine's VMware UUID from /sys/class/dmi/id/product_serial. However, CDK runs Kubernetes from snaps, and the kube-apiserver and kube-controller-manager snaps are not run with classic confinement, which means they simply cannot access that file. Hence we need to specify vm-uuid in the vSphere configuration file.

# if you have multiple masters, otherwise leave out the loop
# (replace NumberOfMasters with your number of master units; unit numbers run from 0 to NumberOfMasters - 1)
for i in $(seq 0 $((NumberOfMasters - 1)))
do
juju ssh kubernetes-master/${i} 'uuid=$(sed -e "s/\s//g" /sys/class/dmi/id/product_serial); sed -i -e "s/__uuid__/$uuid/" vsphere.conf; tr -d "\r" < vsphere.conf > tmp && mv tmp vsphere.conf'
juju ssh kubernetes-master/${i} "sudo chown root:root /home/ubuntu/vsphere.conf; sudo mv /home/ubuntu/vsphere.conf /root/cdk/"
done
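To sanity-check that the substitution worked and that the file ended up where the charms expect it, you can grep for the UUID on each master, for example:

# The __uuid__ placeholder should now be replaced with the VM's actual UUID
juju ssh kubernetes-master/0 "sudo grep vm-uuid /root/cdk/vsphere.conf"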

Step 4: Tell kube-apiserver, kube-controller-manager and kubelet to use vSphere for provisioning

For the master, do:
juju config kubernetes-master controller-manager-extra-args="cloud-provider=vsphere cloud-config=/root/cdk/vsphere.conf" api-extra-args="cloud-provider=vsphere cloud-config=/root/cdk/vsphere.conf"
And the nodes just need to know we are using vSphere:
juju config kubernetes-worker kubelet-extra-args="cloud-provider=vsphere"
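Once the charms have restarted the services with the new flags, each node registered with the cloud provider should get a vSphere provider ID. A quick way to check this (assuming kubectl is configured against the cluster):

# Each node should report a spec.providerID of the form vsphere://<vm-uuid>
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'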
Et voilà, now you just need to define a StorageClass in Kubernetes to auto-provision vSphere storage:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mysuperfaststorage
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  fstype: ext4

Btw you probably want to rename your storage class :-)
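To verify that dynamic provisioning works end to end, you can create a small test PersistentVolumeClaim against the class and watch it bind (the claim name and size here are arbitrary):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vsphere-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: mysuperfaststorage
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should go from Pending to Bound once the VMDK has been created in the datastore
kubectl get pvc vsphere-test-claim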

Inspiration taken from: https://www.bountysource.com/issues/48834558-make-it-easy-to-deploy-the-vsphere-provider
