- Related issues: #3902 support elemental cloud-init via harvester-node-manager
- TBD
Testing From Harvester UI
- TBD
Testing From Rancher Fleet UI / Harvester Fleet Controller
- TBD
Pre-Reqs:
- Have an available multi-node Harvester cluster, without your SSH key present on any nodes
- Provision the cluster however is easiest
- K9s (or other similar kubectl tooling)
- kubectl
- Audit elemental-toolkit for an understanding of stages
- Audit the Harvester configuration docs to correlate properties to elemental-toolkit based stages/functions (a quick sanity-check sketch follows this list)
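- Before starting, it can help to confirm the CloudInit CRD is registered and review node labels that matchSelector will key off of. A minimal sketch; the CRD name below is an assumption derived from the node.harvesterhci.io/v1beta1 API group and CloudInit kind, adjust if your build differs:
# confirm the CloudInit CRD exists (name assumed from the API group/kind)
kubectl get crd cloudinits.node.harvesterhci.io
# list any CloudInit resources currently on the cluster
kubectl get cloudinits
# review node labels, useful when filling in matchSelector later
kubectl get nodes --show-labels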
Negative Tests:
Validate Non-YAML Filenames Get a .yaml Suffix on the File System
- Prepare a YAML loadout of a CloudInit resource that takes the shape of:
apiVersion: node.harvesterhci.io/v1beta1
kind: CloudInit
metadata:
  name: write-file-with-non-yaml-filename
spec:
  matchSelector: {}
  filename: 99_filewrite.log
  contents: |
    stages:
      fs:
        - name: "write file test"
          commands:
            - echo "hello, there" > /etc/sillyfile.conf
- Log on to any one of the nodes in the cluster and validate that 99_filewrite.log.yaml is present within the /oem/ directory on the node (see the verification sketch after this list)
- Validate that the contents of /oem/99_filewrite.log.yaml look appropriate
- Validate that the output of kubectl describe cloudinits/write-file-with-non-yaml-filename looks appropriate
- Delete the CloudInit resource via kubectl delete cloudinits/write-file-with-non-yaml-filename
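- For reference, a minimal sketch of the on-node verification (run on the node over SSH or the console; sudo is assumed for reading /oem):
# the rendered file should have picked up a .yaml suffix
sudo ls -alh /oem/ | grep 99_filewrite
# inspect the rendered contents
sudo cat /oem/99_filewrite.log.yaml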
Validate a Filename With No Specified Suffix Ends Up as .yaml
- Prepare a YAML loadout of a CloudInit resource that takes the shape of:
apiVersion: node.harvesterhci.io/v1beta1
kind: CloudInit
metadata:
  name: write-file-with-non-suffix-filename
spec:
  matchSelector: {}
  filename: 99_filewrite
  contents: |
    stages:
      fs:
        - name: "write file test"
          commands:
            - echo "hello, there" > /etc/sillyfile.conf
- Log on to any one of the nodes in the cluster and validate that 99_filewrite.yaml is present within the /oem/ directory on the node (the same checks as the previous test apply, sketched below)
- Validate that the contents of /oem/99_filewrite.yaml look appropriate
- Validate that the output of kubectl describe cloudinits/write-file-with-non-suffix-filename looks appropriate
- Delete the CloudInit resource via kubectl delete cloudinits/write-file-with-non-suffix-filename
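- A sketch of the checks for this test, with the management-side listing included for convenience:
# on the management side: confirm the resource exists
kubectl get cloudinits write-file-with-non-suffix-filename
# on the node: the .yaml suffix should have been appended
sudo ls -alh /oem/ | grep 99_filewrite
sudo cat /oem/99_filewrite.yaml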
Validate That a Non-YAML CloudInit Resource Is Rejected
- Prepare a YAML loadout of a CloudInit resource that takes the shape of:
apiVersion: node.harvesterhci.io/v1beta1
kind: CloudInit
metadata:
  name: consolelogoverwrite
spec:
  matchSelector: {}
  filename: install/console.log
  contents: |
    hello there
- When trying to apply the YAML, it should be rejected at the webhook level; test via kubectl create -f your-file-name.yaml
- Validate that it is rejected (see the expected flow sketched below)
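- A sketch of the expected rejection flow (the exact webhook error text may vary between versions):
# the validating webhook should deny the request
kubectl create -f your-file-name.yaml
# confirm nothing was created; this should return a NotFound error
kubectl get cloudinits consolelogoverwrite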
Positive Tests:
Validate that a CloudInit Resource Gets Applied Cluster Wide
- Prepare a YAML loadout of a CloudInit resource that takes the shape of:
apiVersion: node.harvesterhci.io/v1beta1
kind: CloudInit
metadata:
  name: ssh-access-test
spec:
  matchSelector: {}
  filename: 99_ssh_test_cluster_wide.yaml
  contents: |
    stages:
      network:
        - authorized_keys:
            rancher:
              - ATTACH-YOUR-SSH-KEY-HERE
- Apply it to the cluster via kubectl create -f filename.yaml
- Validate that the CloudInit resource was scaffolded appropriately via kubectl describe cloudinits/ssh-access-test
- Reboot all nodes in the cluster
- Once the nodes are rebooted, validate that you can log in to each node via the SSH key provided (see the SSH sketch after this list)
- Delete the resource via kubectl delete cloudinits/ssh-access-test
- Remove the SSH key entry from /home/rancher/.ssh/authorized_keys on each node
- Reboot the nodes
- Ensure that /oem/99_ssh_test_cluster_wide.yaml is not present on any node
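- A minimal sketch of the post-reboot SSH validation; the node IP and key path are placeholders for your environment:
# should log in with the injected key, no password prompt
ssh -i ~/.ssh/your-test-key rancher@<node-ip>
# while the resource exists, the rendered file should be present on every node
sudo cat /oem/99_ssh_test_cluster_wide.yaml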
Validate that a CloudInit Resource that targets a Single Node Is Applied Correctly
- Prepare a YAML loadout of a CloudInit resource that takes the shape of:
apiVersion: node.harvesterhci.io/v1beta1
kind: CloudInit
metadata:
  name: ssh-access-test-one-node
spec:
  matchSelector:
    kubernetes.io/hostname: harvester-node-0 # change this to the name of the target node
  filename: 99_ssh_test_single_node.yaml
  contents: |
    stages:
      network:
        - authorized_keys:
            rancher:
              - ATTACH-YOUR-SSH-KEY-HERE
- Apply it to the cluster via kubectl create -f filename.yaml
- Validate that the CloudInit resource was built appropriately via kubectl describe cloudinits/ssh-access-test-one-node
- Reboot the node whose hostname was filled in, i.e. the node matching the kubernetes.io/hostname selector
- Validate that you can SSH as rancher onto the node that was rebooted without needing a password (see the sketch after this list)
- Delete the resource via kubectl delete cloudinits/ssh-access-test-one-node
- Remove the SSH key from /home/rancher/.ssh/authorized_keys
- Reboot the node
- Validate that the file /oem/99_ssh_test_single_node.yaml is not present in the /oem directory on that single node
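- A sketch for finding the value to plug into the kubernetes.io/hostname selector and validating access afterwards; the node name/IP are placeholders:
# list node names and addresses; the kubernetes.io/hostname label typically matches the node name shown here
kubectl get nodes -o wide
# after the reboot, key-based login should work on the targeted node only
ssh rancher@<targeted-node-ip>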
Validate that custom labels are reflected upon reboot of nodes with CloudInit Resource
- On two or more nodes in your multi-node cluster, create a label on each host, something like testingmode: testing (the kubectl label commands are sketched after this list)
- Prepare a YAML loadout of a CloudInit resource that takes the shape of:
apiVersion: node.harvesterhci.io/v1beta1
kind: CloudInit
metadata:
  name: write-file-reflects-nodes
spec:
  matchSelector:
    testingmode: testing
  filename: 99_filewrite_nodes.yaml
  contents: |
    stages:
      fs:
        - name: "write file test reflects nodes"
          commands:
            - echo "hello, there" > /etc/sillyfile.conf
- Apply it to the cluster via kubectl create -f filename.yaml
- Validate that the CloudInit resource was built correctly via kubectl describe cloudinits/write-file-reflects-nodes
- Check the nodes the resource applies to and make sure /oem/99_filewrite_nodes.yaml is present; if not, you may need to wait for the controller to reconcile the label logic on the nodes
- Reboot all nodes that had the label applied
- Log in to each node and ensure that /etc/sillyfile.conf exists (sudo cat /etc/sillyfile.conf)
- Remove or modify the label on one of the nodes
- Reboot that node
- Validate that the /oem/99_filewrite_nodes.yaml file is removed; you may need to wait a bit for the controller to reconcile (something like watch -n 3 ls -alh /oem helps), as it may take some time
- Reboot that node again
- Once the node is back up, validate that /etc/sillyfile.conf no longer exists on the node
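- A sketch of the label management used in this test; node names are placeholders:
# add the label that matchSelector keys off of
kubectl label node <node-name> testingmode=testing
# later in the test: remove the label again (the trailing dash removes a label)
kubectl label node <node-name> testingmode-
# on the node, watch /oem while the controller reconciles
watch -n 3 ls -alh /oem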
Validate That a CloudInit Resource Respects Paused Being Turned On
- On a single node in the cluster, have a label you can reference it by; this could be something like kubernetes.io/hostname
- Prepare a YAML file that takes the shape of something like:
apiVersion: node.harvesterhci.io/v1beta1
kind: CloudInit
metadata:
  name: silly-file-one-node-no-changes
spec:
  matchSelector:
    kubernetes.io/hostname: harvester-node-0
  paused: false
  filename: 99_filewrite_singlenode_wont_get_changes.yaml
  contents: |
    stages:
      fs:
        - name: "write file test reflects nodes"
          commands:
            - echo "hello, there" > /etc/sillyfile-that-doesnt-get-changes.conf
- Apply the resource via kubectl create -f filename.yaml
- Reboot the node
- Check that the node has the file: cat /etc/sillyfile-that-doesnt-get-changes.conf
- Patch the resource with YAML that takes the shape of something like the following (kubectl patch --patch-file your-filepath/filename.yaml cloudinits/silly-file-one-node-no-changes --type=merge):
apiVersion: node.harvesterhci.io/v1beta1
kind: CloudInit
metadata:
  name: silly-file-one-node-no-changes
spec:
  matchSelector:
    kubernetes.io/hostname: harvester-node-0
  paused: true
  filename: 99_filewrite_singlenode_wont_get_changes.yaml
  contents: |
    stages:
      fs:
        - name: "write file test reflects nodes"
          commands:
            - echo "hello, there NEW CHANGES" > /etc/sillyfile-that-doesnt-get-changes.conf
- Audit that paused is turned on via kubectl describe cloudinits/silly-file-one-node-no-changes (see the audit sketch after this list)
- Reboot the node
- Ensure that cat /etc/sillyfile-that-doesnt-get-changes.conf did not receive the new text of NEW CHANGES, since the paused boolean was toggled on the resource
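- A sketch for auditing the paused flag and confirming the file did not change; resource and field names are taken from the spec above:
# should print: true
kubectl get cloudinits silly-file-one-node-no-changes -o jsonpath='{.spec.paused}'
# on the node: both the rendered /oem file and the written file should still show the original text
sudo cat /oem/99_filewrite_singlenode_wont_get_changes.yaml
cat /etc/sillyfile-that-doesnt-get-changes.conf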