Cluster-API-Plunder

Plunder

For Cluster-API to provision machines through the cluster-api-plunder provider, a working Plunder deployment needs to be up and running (and configured to provision servers). To get Plunder up and running, please follow this quickstart guide.

Asciinema of deployment

(asciicast recording of the deployment)

Cluster-API (version v0.2.7 [latest])

Install Cluster-API (CAPI) CRDs/Controller

Install the Cluster-API components (the CRDs and the Deployment for the controller).

kubectl create -f https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.2.7/cluster-api-components.yaml
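
It can take a minute or two for the controller to come up; one way to wait for it (assuming the default capi-system namespace and deployment name) is:

kubectl -n capi-system wait --for=condition=Available deployment/capi-controller-manager --timeout=120s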

Modify concurrency

The current release of Cluster-API (0.2.7 at the time of writing) is limited to a machine deployment concurrency of 1, i.e. it will perform one deployment at a time. To speed up deployments we can raise the concurrency above 1; below is the suggested modification to apply to the capi-controller-manager deployment.

kubectl edit deployment capi-controller-manager -n capi-system

The change should look something like the following:

[...]
    spec:
      containers:
      - args:
        - --enable-leader-election
        - --machine-concurrency=4
        command:
[...]

Note: More testing is required around this setting.
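
If you prefer a non-interactive change, a JSON patch along the following lines should achieve the same result (a sketch; the container index and flag name assume the default manifest layout shown above):

kubectl patch deployment capi-controller-manager -n capi-system --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--machine-concurrency=4"}]'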

Cluster-API-Plunder

Create the secret

For the provider to work it requires a secret containing the client configuration used to communicate with Plunder.

kubectl create secret generic plunder --from-file=./plunderclient.yaml --namespace=capi-system
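
To confirm the secret was created (without printing its contents):

kubectl describe secret plunder -n capi-system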

Install Cluster-API-Plunder (CAPPlunder?) CRD/Controller

Install the latest Cluster-API-Plunder CRDs/Controller.

kubectl create -f https://raw.githubusercontent.com/plunder-app/cluster-api-plunder/master/cluster-api-plunder-components.yaml

Verify

We can check that everything looks as expected by examining what is in the capi-system namespace:

kubectl get pods -n capi-system
NAME                                       READY   STATUS    RESTARTS   AGE
capi-controller-manager-d8cf78f47-5l6wl    1/1     Running   1          42h
capp-controller-manager-58c9f64ffd-t66qh   1/1     Running   1          42h

We can also describe the capp pod to ensure that everything (secrets etc.) is being mounted and used as expected.
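
For example (the pod name will differ in your cluster):

kubectl describe pod capp-controller-manager-58c9f64ffd-t66qh -n capi-system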

Creating a Guest Cluster

Currently the Cluster-API provider for Plunder doesn't do anything specific with the cluster configuration; this should change in the future as we start to consider the additional configuration or infrastructure components that may make up a Kubernetes cluster, and some additions are already under consideration.

Below is an example cluster configuration; the only important section is the cidrBlocks entry under clusterNetwork.

Cluster.yaml

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: cluster-plunder
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: PlunderCluster
    name: cluster-plunder
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: PlunderCluster
metadata:
  name: cluster-plunder

When it comes to deploying the actual machines, a few fields need to be populated in the PlunderMachine spec: the ipaddress of the host to provision and the deploymentType (both shown in the example below).

Finally, in the Cluster-API Machine spec the important fields are the Kubernetes version, the bootstrap data, and the infrastructureRef that points at the matching PlunderMachine.

Machines.yaml

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: PlunderMachine
metadata:
  name: controlplane
  namespace: default
spec:
  ipaddress: 172.16.1.123
  deploymentType: preseed
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: cluster-plunder
    cluster.x-k8s.io/control-plane: "true"
  name: controlplane
  namespace: default
spec:
  version: "v1.14.2"
  bootstrap:
    data: ""
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: PlunderMachine
    name: controlplane
    namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: PlunderMachine
metadata:
  name: worker1
  namespace: default
spec:
  ipaddress: 172.16.1.124
  deploymentType: preseed
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: cluster-plunder
  name: worker1
  namespace: default
spec:
  version: "v1.14.2"
  bootstrap:
    data: ""
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: PlunderMachine
    name: worker1
    namespace: default

Applying the configuration

We can apply our configuration in the normal manner and watch the results appear in a few places.

kubectl create -f ./cluster.yaml; kubectl create -f ./machines.yaml
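
To watch the machines move through their phases as provisioning progresses:

kubectl get machines -w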

Viewing Kubernetes events

kubectl get events
LAST SEEN   TYPE     REASON             OBJECT                        MESSAGE
20m         Normal   PlunderProvision   plundermachine/controlplane   Plunder has begun provisioning the Operating System
17m         Normal   PlunderProvision   plundermachine/controlplane   Task has been succesfully completed in 3m41s Seconds
17m         Normal   PlunderInstall     plundermachine/controlplane   Kubernetes Control Plane installation has begun
14m         Normal   PlunderInstall     plundermachine/controlplane   Task has been succesfully completed in 2m40s Seconds
9m48s       Normal   PlunderProvision   plundermachine/worker1        Plunder has begun provisioning the Operating System
6m53s       Normal   PlunderProvision   plundermachine/worker1        Task has been succesfully completed in 2m56s Seconds
6m53s       Normal   PlunderInstall     plundermachine/worker1        Kubernetes worker installation has begun
5m17s       Normal   PlunderInstall     plundermachine/worker1        Task has been succesfully completed in 1m35s Seconds
14m         Normal   PlunderProvision   plundermachine/worker2        Plunder has begun provisioning the Operating System
11m         Normal   PlunderProvision   plundermachine/worker2        Task has been succesfully completed in 3m11s Seconds
11m         Normal   PlunderInstall     plundermachine/worker2        Kubernetes worker installation has begun
9m49s       Normal   PlunderInstall     plundermachine/worker2        Task has been succesfully completed in 1m25s Seconds
5m16s       Normal   PlunderProvision   plundermachine/worker3        Plunder has begun provisioning the Operating System
2m11s       Normal   PlunderProvision   plundermachine/worker3        Task has been succesfully completed in 3m6s Seconds
2m11s       Normal   PlunderInstall     plundermachine/worker3        Kubernetes worker installation has begun
40s         Normal   PlunderInstall     plundermachine/worker3        Task has been succesfully completed in 1m30s Seconds

View the Kubernetes machine objects

kubectl get machines
NAME           PROVIDERID                    PHASE
controlplane   plunder://00:50:56:a5:46:cb   provisioned
worker1        plunder://00:50:56:a5:35:eb   provisioned
worker2        plunder://00:50:56:a5:d0:ba   provisioned
worker3        plunder://00:50:56:a5:d9:78   provisioned

From the new control plane

user@controlplane-Cq2Fv:~$ kubectl get nodes
NAME                 STATUS     ROLES    AGE     VERSION
controlplane-cq2fv   NotReady   master   15m     v1.14.2
worker-4cney         NotReady   <none>   2m2s    v1.14.2
worker-9umck         NotReady   <none>   11m     v1.14.2
worker-p49sx         NotReady   <none>   6m35s   v1.14.2
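
The nodes report NotReady because no pod network add-on has been installed yet. Any CNI plugin whose configuration matches the 192.168.0.0/16 pod CIDR used above should work; as an example (the manifest URL and version are an assumption for this era of Kubernetes), Calico could be applied from the control plane:

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml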