Manual Test Cases for Harvester
- Incoming Test Cases
- Adapt alertmanager to dedicated storage network
- Add backup-target connection status
- Add extra disks by using raw disks
- Add websocket disconnect notification
- Alertmanager supports main stream receivers
- All Namespace filtering in VM list
- Auto provision lots of extra disks
- Boot installer under Legacy BIOS and UEFI
- Check VM can start after Harvester upgrade
- Check conditions when stop/pause VM
- Check DNS on install with GitHub SSH keys
- Check IPAM configuration with IPAM
- Check IPv4 static method in ISO installer
- Check logs on Harvester
- Check Network interface link status can match the available NICs in Harvester vlanconfig
- Check rancher-monitoring-grafana volume size
- Check support bundle for SLE Micro OS
- Check the OS types in Advanced Options
- Check the VM is available when Harvester upgrade failed
- Check version compatibility during an upgrade
- Check volume status after upgrade
- Clone image (e2e_fe)
- Collect Fleet logs and YAMLs in support bundles
- Collect system logs
- Config logging in Harvester Dashboard
- Configure VLAN interface on ISO installer UI
- Create a harvester-specific StorageClass for Longhorn
- Create multiple VM instances using VM template with EFI mode selected
- Dashboard storage usage display when a node disk has a warning
- Dedicated storage network
- Delete VM template default version (e2e_fe)
- Deny vlanconfigs that overlap with other vlanconfigs
- Deploy guest cluster to specific node with Node selector label
- Download backing images
- Enable/disable alertmanager on demand
- Enabling and Tuning KSM
- Enhance double-check of VM’s resource modification
- Enhance node scheduling when a VM selects a network
- Function keys on web VNC interface
- Generate Install Support Config Bundle For Single Node
- Harvester Cloud Provider compatibility check
- Harvester pulls Rancher agent image from private registry
- Harvester rebase check on SLE Micro
- Harvester supports event log
- Harvester supports kube-audit log
- Harvester uses active-backup as the default bond mode
- Image filtering by labels
- Image filtering by labels (e2e_fe)
- Image handling consistency between terraform data resource and Harvester UI created image
- Image naming with inline CSS (e2e_fe)
- Image upload does not start when HTTP Proxy is configured
- Improved resource reservation
- Install Harvester over previous GNU/Linux install
- Instance metadata variables are not expanded
- ISO installation console UI Display
- Ksmd supports merge_across_nodes on/off
- Limit VM of guest cluster in the same namespace
- Local cluster user input topology key
- Logging Output Filter
- Multiple Disks Swapping Paths
- Namespace pending on terminating
- Negative change backup target while restoring backup
- Negative Harvester installer input same NIC IP and VIP
- Negative Restore a backup while VM is restoring
- Networkconfigs function check
- NIC IP and VIP can’t be the same in Harvester installer
- Node disk manager should prevent too many concurrent disk-formatting operations within a short period
- Node join fails with self-signed certificate
- Node promotion for topology label
- Polish harvester machine config in Rancher
- Pressing the Enter key in a setting field shouldn’t refresh the page
- Prevent normal users from creating the harvester-public namespace
- Project owner role on customized project open Harvester cluster
- Project owner should not see additional alert
- Promote remaining host when one is deleted
- rancher-monitoring status when the hosting node is down
- RBAC Cluster Owner
- RBAC Create VM with restricted admin user
- Reinstall agent node
- Remove Pod Scheduling from harvester rke2 and rke1
- Restart Button Web VNC window
- Restart/Stop VM with backup in progress
- Restored VM cannot be cloned
- Restored VM name does not support uppercase characters
- Restricted admin should not see cattle-monitoring-system volumes
- Setup and test local Harvester upgrade responder
- Support configuring a VLAN at the management interface in installer config
- Support multiple VLAN physical interfaces
- Support private registry for Rancher agent image in Air-gap
- Support Volume Clone
- Support Volume Snapshot
- Sync harvester node’s topology labels to rke2 guest-cluster’s node
- Sync image display name to image labels
- Template with EFI (e2e_fe)
- Terraform import VLAN
- Terraformer import KUBECONFIG
- Testing Harvester Storage Tiering
- The count of volume snapshots should not include VM’s snapshots
- Topology aware scheduling of guest cluster workloads
- Unable to stop a VM that is in starting state
- Upgrade guest cluster kubernetes version can also update the cloud provider chart version
- Upgrade Harvester on node that has bonded NICs for management interface
- Upgrade support of audit and event log
- VLAN Upgrade Test
- VM boot stress test
- VM Import/Migration
- VM IP addresses should be labeled per network interface
- VM label name consistency before and after the restore
- VM Snapshot support
- VM template is not working with Node scheduling
- VMIs created from VM Template don’t have LiveMigrate evictionStrategy set
- VMs can’t start if a node contains more than ~60 VMs
- VolumeSnapshot Management
- Wrong mgmt bond MTU size during initial ISO installation
- Zero downtime upgrade
- Advanced
- Addons
- Enable Harvester addons and check deployment state
- PCI Devices Controller
- vGPU/SR-IOV GPU
- VM import with EFI mode and secure boot
- Change api-ui-source bundled (e2e_fe)
- Change api-ui-source external (e2e_fe)
- Change log level debug
- Change log level Info (e2e_fe)
- Change log level Trace (e2e_fe)
- Cluster TLS customization
- Fleet support with Harvester
- Set backup target S3 (e2e_fe)
- Set backup-target NFS (e2e_fe)
- Set backup-target NFS invalid target
- Set backup-target S3 invalid target
- SSL Certificate
- Timeout option for support bundle
- Verify that vm-force-reset-policy works
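  The backup-target and log-level cases above all reduce to editing a Harvester setting. Below is a minimal sketch of driving that from the Kubernetes API, assuming backup-target is a cluster-scoped harvesterhci.io/v1beta1 Setting whose value is a JSON string; the endpoint, bucket, and credential values are placeholders.

  ```python
  import json
  from kubernetes import client, config

  config.load_kube_config()  # kubeconfig pointing at the Harvester cluster
  api = client.CustomObjectsApi()

  # Assumed S3 payload shape; all endpoint/bucket/credential values are placeholders.
  value = json.dumps({
      "type": "s3",
      "endpoint": "https://s3.example.com",
      "bucketName": "harvester-backups",
      "bucketRegion": "us-east-1",
      "accessKeyId": "REPLACE_ME",
      "secretAccessKey": "REPLACE_ME",
  })

  # Settings are assumed cluster-scoped, with the serialized config in .value.
  api.patch_cluster_custom_object(
      group="harvesterhci.io", version="v1beta1",
      plural="settings", name="backup-target",
      body={"value": value},
  )
  ```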
- Authentication
- Authentication Validation
- Change user password (e2e_fe)
- Create SSH key from templates page
- First Time Login (e2e_fe)
- Login after password reset (e2e_fe)
- Logout from the UI and login again
- Multi-browser login
- UI enables option to display password on login page
- Verify SSH key was added from GitHub during install
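  Several of the login cases above need a scripted first login. Here is a minimal sketch assuming Harvester exposes the Rancher-style local auth endpoint; the URL, action parameter, and response field are assumptions to verify against your release.

  ```python
  import requests

  HARVESTER = "https://192.168.0.131"  # hypothetical VIP

  resp = requests.post(
      f"{HARVESTER}/v3-public/localProviders/local?action=login",
      json={"username": "admin", "password": "REPLACE_ME"},
      verify=False,  # fresh installs typically use a self-signed certificate
  )
  resp.raise_for_status()
  token = resp.json()["token"]
  headers = {"Authorization": f"Bearer {token}"}  # reuse for later API calls
  ```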
- Backup and Restore
- Backup S3 reduce permissions
- Backup Single VM (e2e_be)
- Backup Single VM that has been live migrated before (e2e_be)
- Backup single VM with node off
- Backup Target error message
- Create Backup Target (e2e_be)
- Delete backup from backups list (e2e_be)
- Delete first backup in chained backup (e2e_be)
- Delete last backup in chained backup (e2e_be)
- Delete middle backup in chained backup (e2e_be)
- Delete multiple backups
- Edit backup read YAML from file
- Edit backup via YAML (e2e_be)
- Filter backups
- Negative create backup on store that is full (NFS)
- Negative Create Backup Target
- Negative delete backup while restore is in progress
- Negative delete multiple backups
- Negative delete single backup
- Negative disrupt backup server while restore is in progress
- Negative edit backup read from file YAML
- Negative edit backup YAML
- Negative initiate a backup while system is taking another backup
- Negative Power down the node where the VM is getting replaced by the restore
- Negative power down the node where the VM is getting restored
- Negative restore backup replace existing VM
- Negative restore backup replace existing VM with backup from same VM that is turned on
- Negative restore backup replace existing VM with backup from same VM that is turned on (e2e_be)
- Restore backup create new vm (e2e_be)
- Restore backup create new vm in another namespace
- Restore Backup for VM that was live migrated (e2e_be)
- Restore backup replace existing VM with backup from same VM (e2e_be)
- Restore First backup in chained backup
- Restore last backup in chained backup
- Restore middle backup in chained backup
- VM Backup with metadata
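  Many of the backend (e2e_be) cases above start by taking a backup. A minimal sketch follows, assuming a VM backup is a namespaced harvesterhci.io/v1beta1 VirtualMachineBackup whose spec.source names the source VM; the VM name and namespace are placeholders.

  ```python
  from kubernetes import client, config

  config.load_kube_config()
  api = client.CustomObjectsApi()

  backup = {
      "apiVersion": "harvesterhci.io/v1beta1",
      "kind": "VirtualMachineBackup",
      "metadata": {"name": "demo-backup", "namespace": "default"},
      "spec": {
          "source": {
              "apiGroup": "kubevirt.io",
              "kind": "VirtualMachine",
              "name": "demo-vm",  # hypothetical source VM
          },
      },
  }
  api.create_namespaced_custom_object(
      group="harvesterhci.io", version="v1beta1", namespace="default",
      plural="virtualmachinebackups", body=backup,
  )
  ```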
- Deployment Tests for Harvester
- Add a node to existing cluster (e2e_be)
- Additional trusted CA configure-ability
- Automatically get VIP during PXE installation
- Change DNS servers while installing
- Change DNS settings on vagrant-pxe-harvester install
- HTTP proxy setting on Harvester
- Install 2 node Harvester with a Harvester token with multiple words
- Install Harvester from USB disk
- Install Harvester on a bare-metal node using ISO image
- Install Harvester on a bare-metal node using PXE boot (e2e_be)
- Install Harvester on a virtual nested node using ISO image
- Install Harvester on NVMe SSD
- Install Option HwAddr for Network Interface
- Install Option install.device support symbolic link
- Manual upgrade from 0.3.0 to 1.0.0
- Power down a node out of three nodes available for the Cluster
- Power down the management node
- PXE install without iso_url field
- Reboot the management node/added node
- Remove a node from the existing cluster
- Verify and Configure Networking Connection (e2e_be)
- Verify Configuring SSH keys
- Verify Configuring via HTTP URL
- Verify the installation confirmation screen
- Verify the Installer Options
- Verify the Proxy configuration
- VIP Load balancer verification (e2e_be)
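  The PXE and installer-option cases above are driven by a harvester-config file. Below is a minimal sketch of generating one, assuming the documented token/os/install schema; the disk, URLs, and key source are placeholders.

  ```python
  import yaml

  harvester_config = {
      "token": "REPLACE_ME",  # cluster join token shared by all nodes
      "os": {
          "ssh_authorized_keys": ["github:username"],  # hypothetical key source
      },
      "install": {
          "mode": "create",                 # "join" for additional nodes
          "device": "/dev/sda",             # hypothetical install disk
          "iso_url": "http://pxe.example.com/harvester.iso",
      },
  }

  with open("harvester-config.yaml", "w") as f:
      yaml.safe_dump(harvester_config, f)
  ```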
- Harvester Rancher Integration
- 02-Integrate to Rancher from Harvester settings (e2e_be)
- 03-Manage VM in Downstream Harvester
- 04-Manage Node in Downstream Harvester
- 05-Manage Image in Downstream Harvester
- 06-Manage Network in Downstream Harvester
- 07-Add and grant project-owner user to harvester (e2e_be)
- 08-Add and grant project-readonly user to harvester
- 09-Add and grant project-member user to harvester
- 10-Add and grant project-custom user to harvester
- 11-Create New Project in Harvester
- 13-Add and grant project-owner user to custom project
- 14-Add and grant project-readonly user to custom project
- 15-Add and grant project-member user to custom project
- 16-Add and grant project-custom user to custom project
- 17-Delete Imported Harvester Cluster (e2e_be)
- 18-Delete Failed Imported Harvester Cluster
- 20-Create RKE1 Kubernetes Cluster
- 21-Delete RKE1 Kubernetes Cluster
- 22-Create RKE2 Kubernetes Cluster (e2e_be)
- 23-Delete RKE2 Kubernetes Cluster (e2e_be)
- 24-Delete RKE1 Kubernetes Cluster in Provisioning
- 25-Delete RKE1 Kubernetes Cluster in Failure
- 26-Delete RKE2 Kubernetes Cluster in Provisioning
- 27-Delete RKE2 Kubernetes Cluster in Failure
- 30-Configure Harvester LoadBalancer service
- 31-Specify “pool” IPAM mode in LoadBalancer service
- 32-Deploy Harvester CSI provider to RKE 1 Cluster
- 33-Deploy Harvester CSI provider to RKE 2 Cluster
- 34-Hot plug and unplug volumes in RKE1 cluster
- 35-Hot plug and unplug volumes in RKE2 cluster
- 36-Remove Harvester LoadBalancer service
- 37-Import Online Harvester From the Airgapped Rancher
- 38-Import Airgapped Harvester From the Airgapped Rancher
- 39-Standard user no Harvester Access
- 40-RBAC Add restricted admin User Harvester
- 41-Import Harvester into nested Rancher
- 42-Add cloud credential KUBECONFIG
- 43-Scale up node driver RKE1
- 44-Scale up node driver RKE2
- 45-Scale down node driver RKE1
- 46-Scale down node driver RKE2
- 49-Overprovision Harvester
- 50-Use fleet when a harvester cluster is imported to rancher
- 51-Use harvester cloud provider to provision an LB - rke1
- 52-Use harvester cloud provider to provision an LB - rke2
- 53-Disable Harvester flag with Harvester cluster added
- 54-Import Airgapped Harvester From the Online Rancher
- 55-Import Harvester to Rancher in airgapped different subnet
- 56-Import Harvester to Rancher in airgapped different subnet
- 57-Import airgapped harvester from airgapped rancher with Proxy
- 58-Negative-Fully power cycle harvester node machine should recover RKE2 cluster
- 59-Create K3s Kubernetes Cluster
- 60-Delete K3s Kubernetes Cluster
- 61-Deploy Harvester cloud provider to k3s Cluster
- 62-Configure the K3s “DHCP” LoadBalancer service
- 63-Configure the K3s “Pool” LoadBalancer service
- 65-Configure the K3s “Pool” LoadBalancer health check
- 66-Deploy Harvester csi driver to k3s Cluster
- 67-Harvester persistent volume on k3s Cluster
- 68-Fully airgapped rancher integrate with harvester with no proxy
- 69-DHCP Harvester LoadBalancer service no health check
- 70-Pool LoadBalancer service no health check
- 71-Manually Deploy Harvester csi driver to RKE2 Cluster
- 72-Use ipxe example to test fully airgapped rancher integration
- Check that the resource quota limit can be applied to project and namespace
- Check default and customized project and namespace details page
- Create a VM through the Rancher dashboard
- Create RKE2 cluster with no cloud provider
- Delete 3 node RKE2 cluster
- Provision RKE2 cluster with resource quota configured
- Rancher Resource quota management
- Reboot a cluster and check VIP
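  For the LoadBalancer cases above (30, 31, 62, 63, 69, 70), here is a minimal sketch of a guest-cluster Service, assuming the Harvester cloud provider reads its IPAM mode from the cloudprovider.harvesterhci.io/ipam annotation ("dhcp" or "pool"); names and ports are placeholders.

  ```python
  from kubernetes import client, config

  config.load_kube_config()  # kubeconfig of the guest RKE2/K3s cluster
  v1 = client.CoreV1Api()

  svc = client.V1Service(
      metadata=client.V1ObjectMeta(
          name="demo-lb",
          annotations={"cloudprovider.harvesterhci.io/ipam": "dhcp"},
      ),
      spec=client.V1ServiceSpec(
          type="LoadBalancer",
          selector={"app": "demo"},
          ports=[client.V1ServicePort(port=80, target_port=8080)],
      ),
  )
  v1.create_namespaced_service(namespace="default", body=svc)
  ```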
- Hosts
- Add/remove disk to Host config
- Agent Node should not rely on specific master Node
- Attach unpartitioned NVMe disks to host
- Automatically get VIP during PXE installation
- Check crash dump when there’s a kernel panic
- Check detailed network status on the host page
- Check Longhorn volume mount point
- Check redirect for editing server URL setting
- Cluster with Witness Node
- Delete Host (e2e_be)
- Delete host that has VMs on it
- Disk can only be added once on UI
- Disk devices used for VM storage should be globally configurable
- Download host YAML
- Edit Config (e2e_be)
- Edit Config YAML (e2e_be)
- Host list should display the disk error message on failure
- Maintenance mode for host with multiple VMs
- Maintenance mode for host with one VM (e2e_be)
- Maintenance mode on node with no vms (e2e_be)
- Migrate back VMs that were on host after taking host out of maintenance mode
- Move Longhorn storage to another partition
- Node Labeling for VM scheduling
- Nodes with cordoned status should not be in VM migration list
- Power down and power up the node
- Power down the node
- Power node triggers VM reschedule
- PXE install without iso_url field
- Reboot a cluster and check VIP
- Reboot host that is in maintenance mode (e2e_be)
- Reboot host trigger VM migration
- Reboot node
- Recover cordoned and maintenance node after Harvester node machine reboot
- Remove a management node from a 3 nodes cluster and add it back to the cluster by reinstalling it
- Remove unavailable node with VMs on it
- Set maintenance mode on the last available node shouldn’t be allowed
- Shut down host in maintenance mode and verify label change
- Shut down host then delete hosted VM
- Start Host in maintenance mode (e2e_be)
- Take host out of maintenance mode that has been rebooted (e2e_be)
- Take host out of maintenance mode that has not been rebooted (e2e_be)
- Temporary network disruption
- Test NTP server timesync
- Turn off host that is in maintenance mode (e2e_be)
- Verify Enabling maintenance mode
- Verify the Filter on the Host page
- Verify the info of the node
- Verify the state for Powered down node
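  The cordon-related host cases above can be scripted with plain Kubernetes calls; a minimal sketch (the node name is a placeholder) that cordons a node so it should disappear from the VM migration target list:

  ```python
  from kubernetes import client, config

  config.load_kube_config()
  v1 = client.CoreV1Api()

  NODE = "harvester-node-1"  # hypothetical node name

  # Cordon: mark the node unschedulable, then confirm the flag stuck.
  v1.patch_node(NODE, {"spec": {"unschedulable": True}})
  assert v1.read_node(NODE).spec.unschedulable

  # Uncordon when the check is done.
  v1.patch_node(NODE, {"spec": {"unschedulable": None}})
  ```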
- Images
- Add Labels (e2e_be_fe)
- Create Images with valid image URL (e2e_be_fe)
- Create with invalid image (e2e_be_fe)
- Delete the image (e2e_be_fe)
- Delete VM with exported image (e2e_fe)
- Edit images (e2e_be_fe)
- Update image labels after deleting source VM (e2e_fe)
- Upload Cloud Image (e2e_be)
- Upload image that is invalid
- Upload ISO Image (e2e_fe)
- Verify the options available for image
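  A minimal sketch for the image-creation cases above, assuming images are namespaced harvesterhci.io/v1beta1 VirtualMachineImage objects where sourceType "download" pulls from spec.url; the display name and URL are examples:

  ```python
  from kubernetes import client, config

  config.load_kube_config()
  api = client.CustomObjectsApi()

  image = {
      "apiVersion": "harvesterhci.io/v1beta1",
      "kind": "VirtualMachineImage",
      "metadata": {"generateName": "image-", "namespace": "default"},
      "spec": {
          "displayName": "ubuntu-22.04-server-cloudimg",
          "sourceType": "download",
          "url": "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img",
      },
  }
  api.create_namespaced_custom_object(
      group="harvesterhci.io", version="v1beta1", namespace="default",
      plural="virtualmachineimages", body=image,
  )
  ```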
- Live Migration
- Initiate multiple migrations at one time
- Migrate a turned on VM from one host to another
- Migrate a VM created with cloud init config data
- Migrate a VM created with user data config
- Migrate a VM that has multiple volumes
- Migrate a VM that was created from a template
- Migrate a VM that was created using a restore backup to new VM
- Migrate a VM with 1 backup
- Migrate a VM with a saved SSH Key
- Migrate a VM with multiple backups
- Migrate a VM with multiple networks
- Migrate to Node without replicaset
- Migrate VM from Restored backup
- Negative migrate a turned on VM from one host to another
- Negative network disconnection for a longer time while migration is in progress
- Negative network disconnection for a short time while migration is in progress
- Negative node down while migration is in progress
- Negative node un-schedulable during live migration
- Support volume hot plug live migrate
- Test aborting live migration
- Test zero downtime for live migration download test
- Test zero downtime for live migration ping test
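  Since Harvester VMs are KubeVirt VMs, the migration cases above can be triggered by creating a VirtualMachineInstanceMigration that names the running VMI; a minimal sketch, with the VMI name and namespace as placeholders:

  ```python
  from kubernetes import client, config

  config.load_kube_config()
  api = client.CustomObjectsApi()

  migration = {
      "apiVersion": "kubevirt.io/v1",
      "kind": "VirtualMachineInstanceMigration",
      "metadata": {"generateName": "demo-vm-migration-", "namespace": "default"},
      "spec": {"vmiName": "demo-vm"},  # hypothetical running VMI
  }
  api.create_namespaced_custom_object(
      group="kubevirt.io", version="v1", namespace="default",
      plural="virtualmachineinstancemigrations", body=migration,
  )
  ```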
- Misc
- Button of Download KubeConfig (e2e_fe)
- Check favicon and title on pages
- Check Harvester CloudInit CRDs within Harvester, Terraform & Rancher
- Create support bundle in multi-node Harvester cluster with one node off
- Download kubeconfig after shutting down harvester cluster
- Test NTP server timesync
- Verify network data template
- Network
- Add multiple Networks via form
- Add multiple Networks via YAML (e2e_be)
- Add network reachability detection from host for the VLAN network
- Add VLAN network (e2e_be)
- Create new network (e2e_be_fe)
- Delete external VLAN network via form
- Delete external VLAN network via YAML (e2e_be)
- Delete management network via form
- Delete management network via YAML (e2e_be)
- Disable and enable vlan cluster network
- Edit network via form change external VLAN to management network
- Edit network via form change management network to external VLAN
- Edit network via YAML change external VLAN to management network (e2e_be)
- Edit network via YAML change management network to external VLAN (e2e_be)
- Enabling vlan on a bonded NIC on vagrant install
- Negative network comes back up after reboot external VLAN (e2e_be)
- Negative network comes back up after reboot management network (e2e_be)
- Switch the vlan interface of harvester node
- Try to add a network with no name (e2e_be)
- Validate network connectivity external VLAN (e2e_be)
- Validate network connectivity invalid external VLAN (e2e_be)
- Validate network connectivity management network (e2e_be)
- VIP configured in a VLAN network should be reachable
- VIP is accessible with VLAN enabled on management port
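  A minimal sketch for the VLAN network cases above, assuming a Harvester VLAN network is a Multus NetworkAttachmentDefinition whose CNI config bridges onto the cluster network with a VLAN id; the bridge name and VLAN id are placeholders:

  ```python
  import json
  from kubernetes import client, config

  config.load_kube_config()
  api = client.CustomObjectsApi()

  cni_config = {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "mgmt-br",   # hypothetical bridge backing the cluster network
      "promiscMode": True,
      "vlan": 100,
      "ipam": {},
  }
  nad = {
      "apiVersion": "k8s.cni.cncf.io/v1",
      "kind": "NetworkAttachmentDefinition",
      "metadata": {"name": "vlan100", "namespace": "default"},
      "spec": {"config": json.dumps(cni_config)},
  }
  api.create_namespaced_custom_object(
      group="k8s.cni.cncf.io", version="v1", namespace="default",
      plural="network-attachment-definitions", body=nad,
  )
  ```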
- Node Driver
- Add a custom “Docker Install URL”
- Add a custom “Insecure Registries”
- Add a custom “Registry Mirrors”
- Add a custom “Storage Driver”
- Add cluster driver
- Add the different roles to the cluster
- Add/remove a node in the created harvester cluster
- Backup and restore of harvester cluster
- Basic functional verification of Harvester cluster after creation
- Cluster add Labels
- Cluster add Taints
- Create a 3-node harvester cluster with RKE1 (only mandatory info; other values stay at defaults)
- Create a 3-node harvester cluster with RKE2 (only mandatory info; other values stay at defaults)
- Create a harvester cluster and add Taint to a node
- Create a harvester cluster with 3 master nodes
- Create a harvester cluster with a non-default version of k8s
- Create a harvester cluster with different images
- Create a harvester cluster, template drop-down list validation
- Create harvester cluster using non-default CPUs, Memory, Disk
- Create harvester clusters with different Bus
- Create harvester clusters with different Networks
- Deactivate/activate/delete Harvester Node Driver
- Delete Cluster
- Guest CSI Driver
- Import External Harvester
- Import internal harvester
- Use a non-admin user
- Verify “Add Node Pool”
- Rancher2 Terraform Provider Integration
- Templates
- Allow users to create a cloud-config template on the VM creation page
- Chain VM templates and images
- Create SSH key from templates page
- Verify network data template
- Volume size should be editable on derived template
- Terraform Provider
- Installation of the Harvester terraform provider (e2e_be)
- Target Harvester by setting the variable kubeconfig with your kubeconfig file in the provider.tf file (e2e_be)
- Target Harvester with the default kubeconfig located in $HOME/.kube/config (e2e_be)
- Test a deployment with ALL resources at the same time (e2e_be)
- Test the harvester_clusternetwork resource (e2e_be)
- Test the harvester_image resource (e2e_be)
- Test the harvester_network resource (e2e_be)
- Test the harvester_ssh_key resource (e2e_be)
- Test the harvester_virtualmachine resource (e2e_be)
- Test the harvester_volume resource (e2e_be)
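  The provider cases above assume a provider.tf that reads a kubeconfig path; below is a minimal sketch of wrapping the terraform CLI from a test harness, where the directory layout and KUBECONFIG path are assumptions:

  ```python
  import os
  import subprocess

  env = {**os.environ, "KUBECONFIG": os.path.expanduser("~/.kube/config")}

  # provider.tf, variables, and resources are assumed to live in ./terraform.
  for cmd in (["terraform", "init"], ["terraform", "apply", "-auto-approve"]):
      subprocess.run(cmd, cwd="./terraform", env=env, check=True)
  ```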
- Terraformer
- Check that you can communicate with the Harvester cluster
- Import and make changes to clusternetwork resource
- Import and make changes to image resource
- Import and make changes to network resource
- Import and make changes to ssh_key resource
- Import and make changes to virtual machine resource
- Import and make changes to volume resource
- UI
- Verify the external link at the bottom of the page
- Verify the Harvester UI URL (e2e_fe)
- Verify the left side menu (e2e_fe)
- Verify the links which navigate to the internal pages
- Upgrade Harvester
- Rejoin node machine after Harvester upgrade
- Upgrade Harvester from new cluster network design (after v1.1.0)
- Upgrade Harvester from traditional cluster network design (before v1.1.0)
- Upgrade Harvester in Fully Airgapped Environment
- Upgrade Harvester with bonded NICs on network
- Upgrade Harvester with HDD Disks
- Upgrade Harvester with IPv6 DHCP
- Virtual Machines
- Add a network to an existing VM with only 1 network (e2e_be_fe)
- Add a network to an existing VM with two networks
- Chain VM templates and images
- Check VM creation required-fields
- Clone VM and don’t select start after creation
- Clone VM that is turned off
- Clone VM that is turned on
- Clone VM that was created from existing volume
- Clone VM that was created from image
- Clone VM that was created from template
- Clone VM that was not created from image
- CPU overcommit on VM (e2e_fe)
- Create a new VM and add Enable USB tablet option (e2e_be_fe)
- Create a new VM and add Install guest agent option (e2e_be_fe)
- Create a new VM with Network Data from the form (e2e_fe)
- Create a new VM with Network Data from YAML (e2e_be)
- Create a new VM with User Data from the form
- Create a VM on a VLAN with an existing machine and then change the existing machine’s VLAN
- Create a VM with 2 networks (e2e_be)
- Create a VM with all the default values (e2e_be_fe)
- Create a VM with Start VM on Creation checked (e2e_be)
- Create a VM with start VM on creation unchecked (e2e_be)
- Create multiple instances of the VM with ISO image (e2e_be)
- Create multiple instances of the VM with raw image (e2e_be_fe)
- Create multiple instances of the VM with Windows Image (e2e_be)
- Create new VM with a machine type of PC (e2e_be)
- Create new VM with a machine type of q35 (e2e_be)
- Create one VM on a VLAN and then move another VM to that VLAN
- Create one VM on a VLAN that has other VMs then change it to a different VLAN
- Create single instance of the VM with ISO image
- Create single instance of the VM with ISO image (e2e_be)
- Create single instance of the VM with ISO image with machine type pc
- Create single instance of the VM with raw image (e2e_be)
- Create single instance of the VM with Windows Image (e2e_be)
- Create two VMs in the same VLAN (e2e_be)
- Create two VMs on separate VLANs
- Create two VMs on the same VLAN and change one
- Create VM and add SSH key (e2e_be)
- Create VM using a template of default version
- Create VM using a template of default version with machine type pc
- Create VM using a template of default version with machine type q35
- Create VM using a template of non-default version (e2e_fe)
- Create VM with both CPU and Memory not in cluster (e2e_be)
- Create VM with CPU not in cluster (e2e_be)
- Create VM with existing Volume (e2e_be_fe)
- Create VM with Memory not in cluster (e2e_be)
- Create VM with resources that are only on one node in cluster CPU
- Create VM with resources that are only on one node in cluster CPU (e2e_be)
- Create VM with resources that are only on one node in cluster CPU and Memory (e2e_be)
- Create VM with resources that are only on one node in cluster Memory
- Create VM with resources that are only on one node in cluster Memory (e2e_be)
- Create VM with saved SSH key (e2e_be)
- Create VM with the default network (e2e_be)
- Create VM with two disk volumes (e2e_be)
- Create VM without memory provided (e2e_fe)
- Create Windows VM
- Delete multiple VMs with disks (e2e_be)
- Delete multiple VMs without disks (e2e_be)
- Delete single VM with all disks (e2e_be)
- Delete VM Negative (e2e_be)
- Delete VM with exported image (e2e_fe)
- Edit a VM and add install Enable usb tablet option (e2e_be)
- Edit a VM and add install guest agent option (e2e_be)
- Edit a VM from the form to add Network Data
- Edit a VM from the form to add user data (e2e_fe)
- Edit a VM from the YAML to add Network Data (e2e_be)
- Edit a VM from the YAML to add user data (e2e_be)
- Edit an existing VM to another machine type (e2e_be)
- Edit VM, insert SSH key, and check the SSH key is accepted for login (e2e_be_fe)
- Edit VM config after ejecting CD-ROM and deleting volume
- Edit VM Form Negative
- Edit VM network and verify the network works as configured (e2e_be)
- Edit VM via form with CPU
- Edit VM via form with CPU and Memory
- Edit VM via form with Memory
- Edit VM via YAML with CPU (e2e_be)
- Edit VM via YAML with CPU and Memory (e2e_be)
- Edit VM via YAML with Memory (e2e_be)
- Edit VM with resources that are only on one node in cluster CPU and Memory
- Edit VM YAML Negative
- Memory overcommit on VM
- Negative VM clone tests
- Run multiple instances of the console
- Start VM and stop node Negative
- Start VM Negative (e2e_be)
- Stop VM Negative (e2e_fe)
- Update image labels after deleting source VM
- Validate QEMU agent installation
- Verify operations like Stop, restart, pause, download YAML, generate template (e2e_be)
- Verify that vm-force-reset-policy works
- View log function on virtual machine
- VM on error state
- VM scheduling on Specific node (e2e_fe)
- VM’s CPU maximum limitation
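  The start/stop cases above can be scripted against the KubeVirt API; a minimal sketch assuming the VM uses the boolean spec.running field (VMs configured with spec.runStrategy need a different patch):

  ```python
  from kubernetes import client, config

  config.load_kube_config()
  api = client.CustomObjectsApi()

  def set_vm_running(name: str, namespace: str, running: bool) -> None:
      # Assumes the VM is driven by spec.running rather than spec.runStrategy.
      api.patch_namespaced_custom_object(
          group="kubevirt.io", version="v1", namespace=namespace,
          plural="virtualmachines", name=name,
          body={"spec": {"running": running}},
      )

  set_vm_running("demo-vm", "default", False)  # stop
  set_vm_running("demo-vm", "default", True)   # start
  ```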
- Volumes
- A volume can’t be attached to another VM (YAML)
- Add/remove disk to Host config
- Check Longhorn volume mount point
- Create image from Volume (e2e_fe)
- Create Volume root disk blank Form with label
- Create volume root disk VM Image Form with label (e2e_be)
- Create volume root disk VM Image Form (e2e_fe)
- Delete volume that is not attached to a VM (e2e_be_fe)
- Delete volume that was attached to VM but now is not (e2e_be_fe)
- Detach volume from virtual machine (e2e_fe)
- Edit Volume Form add label
- Edit volume increase size via form (e2e_fe)
- Edit volume increase size via YAML (e2e_be)
- Edit volume to increase size when VM is running
- Edit Volume YAML add label (e2e_be)
- Negative delete Volume that is in use (e2e_be)
- Support Volume Hot Unplug (e2e_fe)
- Validate volume shows as in use when attached (e2e_be)
- Verify that vm-force-reset-policy works
- Verify that VMs stay up when disks are evicted
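  The volume-resize cases above are a PVC patch underneath; a minimal sketch, where the claim name, namespace, and size are placeholders (whether online expansion is allowed depends on the Longhorn version):

  ```python
  from kubernetes import client, config

  config.load_kube_config()
  v1 = client.CoreV1Api()

  v1.patch_namespaced_persistent_volume_claim(
      name="demo-volume", namespace="default",
      body={"spec": {"resources": {"requests": {"storage": "20Gi"}}}},
  )
  ```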
- Webhooks