Blockbridge provides a Container Storage Interface (CSI) driver to deliver persistent, secure, multi-tenant, cluster-accessible storage for Kubernetes. This guide describes how to deploy Blockbridge as the storage backend for Kubernetes containers.
If you’ve configured other Kubernetes storage drivers before, you may want to start with the Quickstart section. The rest of the guide has detailed information about features, driver configuration options and troubleshooting.
REQUIREMENTS & VERSIONS
Supported Versions
Blockbridge supports Kubernetes versions 1.21 and above.
K8s Version | K8s Released | K8s EOL | Blockbridge Release | Driver Version | CSI Spec Version |
---|---|---|---|---|---|
1.21 | 08 Apr 2021 | 28 Jun 2022 | 6.0+ | 3.2.0 | 1.8.0 |
1.22 | 04 Aug 2021 | 28 Oct 2022 | 6.0+ | 3.2.0 | 1.8.0 |
1.23 | 07 Dec 2021 | 28 Feb 2023 | 6.0+ | 3.2.0 | 1.8.0 |
1.24 | 03 May 2022 | 28 Jul 2023 | 6.0+ | 3.2.0 | 1.8.0 |
1.25 | 23 Aug 2022 | 28 Oct 2023 | 6.0+ | 3.2.0 | 1.8.0 |
1.26 | 08 Dec 2022 | 28 Feb 2024 | 6.0+ | 3.2.0 | 1.8.0 |
1.27 | 11 Apr 2023 | 28 Jun 2024 | 6.0+ | 3.2.0 | 1.8.0 |
1.28 | 15 Aug 2023 | 28 Oct 2024 | 6.0+ | 3.2.0 | 1.8.0 |
1.29 | 13 Dec 2023 | 28 Feb 2025 | 6.0+ | 3.2.0 | 1.8.0 |
Supported Features
Feature | Description |
---|---|
Dynamic Volume Provisioning | Automatically provision, attach, and format network-attached storage for use with your containers |
Multi-host Volume Mobility | Instantly move containers and persistent volumes between your K8s nodes, enabling support for highly available containers |
Thin Snapshots | Create space-efficient point-in-time snapshots for backup and restore |
Thin Clones | Create volumes from a snapshot or directly from an existing volume |
High Availability | End-to-end high availability and transparent recovery from infrastructure failures |
Quality of Service | Automated performance provisioning and run-time management |
Encryption & Secure Erase | Independently keyed multi-tenant encryption and automatic secure erase |
iSCSI/TCP | High-performance single-queue SCSI-native storage access |
NVMe/TCP | High-performance multi-queue NVMe-native storage access optimized for low latency |
Supported K8s Environments
For production workloads, the following distributions are supported:
MicroK8s is recommended for trial deployments, but is not supported for production use.
Requirements
The following minimum requirements must be met to use Blockbridge with Kubernetes:
- Kubernetes 1.26+.
- Ensure the host running the kubelet has iSCSI client support installed and iscsid enabled.

  For Red Hat derivatives:

  sudo sh -c "dnf install -y iscsi-initiator-utils && systemctl enable --now iscsid"

  For Debian derivatives (including Ubuntu):

  sudo sh -c "apt update && apt install -y open-iscsi && systemctl enable --now iscsid"
- For NVMe/TCP volumes, install and load the nvme-tcp kernel module:

  For Red Hat derivatives:

  sudo sh -c "modprobe nvme-tcp && echo nvme-tcp > /etc/modules-load.d/blockbridge-nvme-tcp.conf"

  For Debian derivatives (including Ubuntu):

  sudo sh -c "apt update && apt install -y linux-modules-extra-$(uname -r) && modprobe nvme-tcp && echo nvme-tcp > /etc/modules-load.d/blockbridge-nvme-tcp.conf"
Note: NVMe/TCP is only supported with Linux Kernel versions 5.15+.
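The prerequisites above can be spot-checked on each node before deploying the driver. A minimal sketch, assuming systemd-based hosts with shell access:

```shell
# Confirm the iSCSI daemon is enabled and running
systemctl is-enabled iscsid
systemctl is-active iscsid

# Confirm the initiator name file exists
cat /etc/iscsi/initiatorname.iscsi

# For NVMe/TCP: confirm the kernel is 5.15+ and the module is loaded
# (lsmod reports the module name with an underscore: nvme_tcp)
uname -r
lsmod | grep nvme_tcp
```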
See CSI Deploying for more information.
Blockbridge Driver Version History
3.2.0 - Mon, Apr 22 2024
- Add support for Kubernetes 1.29.
- Add support for Native NVMe Multipathing.
- Driver is now built with Go 1.22.2.
3.1.0 - Fri, Apr 28 2023
- Add support for Kubernetes 1.27.
- Add support for NVMe/TCP.
- Advertise Velero support in the default VolumeSnapshotClass.
- Updated dependencies and sidecar containers to support CSI 1.8.
- Driver is now built with Go 1.20.3.
3.0.0 - Fri, Jan 20 2023
- Add support for Kubernetes 1.26.
- Added support for Volume Snapshots and Clones.
- Add support for K8s distributions which containerize iscsid, including Talos Linux.
- Adjusted driver deployment manifest to enable Rolling Upgrades.
2.0.0 - Tue, Jul 30 2019
- Improve driver stability.
- Fixes occasional remount issue due to missing MountPropagation.
- Fixes a rare segfault due to mismanaged generic error handling.
- Supports K8s 1.14.
- Update to support CSI 1.0.0 release.
- Build driver with Go 1.12
1.0.0 - Mon, Jul 02 2018
- Initial driver release.
- Supports K8s 1.10.
- CSI v0.3.0.
QUICKSTART
This is a brief guide on how to install and configure the Blockbridge Kubernetes driver. In this section, you will:
- Create a Blockbridge account for your Kubernetes storage.
- Create an authentication token for the Kubernetes driver.
- Define a secret in Kubernetes with the token and the Blockbridge API host.
- Deploy the Kubernetes driver.
Many of these topics have more information available by selecting the information ⓘ links next to items where they appear.
Blockbridge Configuration
These steps use the containerized Blockbridge CLI utility to create an account and an authorization token.
- Use the containerized CLI to create a kubernetes account. ⓘ

  docker run --rm -it -v blockbridge-cli:/data docker.io/blockbridge/cli:latest-alpine bb \
      --no-ssl-verify-peer account create --name kubernetes

  When prompted, enter the management hostname and system credentials. ⓘ

  Enter a default management host: blockbridge.mycompany.example
  Authenticating to https://blockbridge.mycompany.example/api
  Enter user or access token: system
  Password for system: ....
  Authenticated; token expires in 3599 seconds.
- Create a persistent authorization for the kubernetes account. ⓘ

  docker run --rm -it -e BLOCKBRIDGE_API_SU=kubernetes -v blockbridge-cli:/data \
      docker.io/blockbridge/cli:latest-alpine bb --no-ssl-verify-peer authorization create \
      --notes 'csi-blockbridge driver access'

  == Created authorization: ATH476D194C40626436

  == Authorization: ATH476D194C40626436
  serial                ATH476D194C40626436
  account               kubernetes (ACT076D194C40626412)
  user                  kubernetes (USR1B6D194C4062640F)
  enabled               yes
  created at            2024-04-18 01:59:02 +0000
  access type           online
  token suffix          cYjGHWIw
  restrict              auth
  enforce 2-factor      false

  == Access Token
  access token          1/tywnGIxh................92HXLCcYjGHWIw

  *** Remember to record your access token!
- Set the BLOCKBRIDGE_API_HOST environment variable to your Blockbridge management hostname and the BLOCKBRIDGE_API_KEY variable to the newly generated access token. ⓘ

  export BLOCKBRIDGE_API_HOST=blockbridge.mycompany.example
  export BLOCKBRIDGE_API_KEY="1/tywnGIxh................92HXLCcYjGHWIw"
Kubernetes Configuration
The following steps install and configure the Blockbridge Kubernetes driver on your cluster. Your session must already be authenticated with your Kubernetes cluster to proceed.
- Create a new directory for managing your driver deployment configuration and change into it:

  mkdir -p ~/blockbridge-deploy && cd ~/blockbridge-deploy
- Fetch the snapshot-support and csi-blockbridge deployment manifests.

  First, fetch the snapshot-support deployment manifest:

  curl -OsSf https://get.blockbridge.com/kubernetes/6.0/snapshot-support-v3.2.0.yaml

  Next, fetch the csi-blockbridge deployment manifest:

  curl -OsSf https://get.blockbridge.com/kubernetes/6.0/csi-blockbridge-v3.2.0.yaml
- Deploy the snapshot-support manifest:

  kubectl apply -f snapshot-support-v3.2.0.yaml
- Prepare a kustomization file.

  Initialize a kustomization.yaml file using the csi-blockbridge resources:

  docker run --rm -w /config -v $(pwd):/config registry.k8s.io/kustomize/kustomize:v5.4.1 \
      create --namespace kube-system --resources csi-blockbridge-v3.2.0.yaml
- Configure driver authentication. ⓘ

  Place the credentials and API URL in an env file:

  cat > blockbridge-secret.env <<-EOF
  api-url=https://$BLOCKBRIDGE_API_HOST/api
  access-token=$BLOCKBRIDGE_API_KEY
  ssl-verify-peer=false
  EOF

  Note: the snippet above assumes you've set BLOCKBRIDGE_API_HOST and BLOCKBRIDGE_API_KEY in a prior step. Double-check the contents of your blockbridge-secret.env file; ensure the access-token and api-url fields are present and correct.

  Add the secret to the kustomization deployment file:

  docker run --rm -w /config -v $(pwd):/config registry.k8s.io/kustomize/kustomize:v5.4.1 \
      edit add secret blockbridge --from-env-file blockbridge-secret.env
- Deploy the Blockbridge driver. ⓘ

  kubectl apply -k .
- Check that the driver is running. ⓘ

  kubectl -n kube-system get pods -l role=csi-blockbridge

  NAME                           READY   STATUS    RESTARTS   AGE
  csi-blockbridge-controller-0   3/3     Running   0          6s
  csi-blockbridge-node-4679b     2/2     Running   0          5s
CONFIGURATION & DEPLOYMENT
This section discusses how to configure the Blockbridge Kubernetes driver in detail.
Linked Blockbridge Account
The Blockbridge driver creates and maintains its storage under a tenant account on your Blockbridge installation.
The driver is configured with two pieces of information: the API endpoint and the authentication token.
configuration | description |
---|---|
BLOCKBRIDGE_API_URL | Blockbridge controlplane API endpoint URL, specified as https://hostname.example/api |
BLOCKBRIDGE_API_KEY | Blockbridge controlplane access token |
The API endpoint is specified as a URL pointing to the Blockbridge controlplane’s API. The access token authenticates the driver with the Blockbridge controlplane, in the context of the specified account.
Account Creation
Use the containerized Blockbridge CLI to create the account.
docker run --rm -it -v blockbridge-cli:/data docker.io/blockbridge/cli:latest-alpine bb \
--no-ssl-verify-peer account create --name kubernetes
When prompted, enter the management hostname and system
credentials. ⓘ
Enter a default management host: blockbridge.mycompany.example
Authenticating to https://blockbridge.mycompany.example/api
Enter user or access token: system
Password for system: ....
Authenticated; token expires in 3599 seconds.
Validate that the account has been created.
== Created account: kubernetes (ACT0762194C40656F03)
== Account: kubernetes (ACT0762194C40656F03)
name kubernetes
label kubernetes
serial ACT0762194C40656F03
created 2022-11-19 16:15:15 +0000
disabled no
Authorization Token
Blockbridge supports revokable persistent authorization tokens. This
section demonstrates how to create a persistent authorization token in
the freshly created kubernetes
account suitable for use as
authentication for the driver.
To create the token we’ll temporarily switch to the kubernetes
account to create a persistent authorization.
docker run --rm -it -e BLOCKBRIDGE_API_SU=kubernetes -v blockbridge-cli:/data \
docker.io/blockbridge/cli:latest-alpine bb --no-ssl-verify-peer authorization \
create --notes 'csi-blockbridge driver access'
This creates the authorization and displays the access token.
== Created authorization: ATH4762194C4062668E
== Authorization: ATH4762194C4062668E
serial ATH4762194C4062668E
account kubernetes (ACT0762194C40656F03)
user kubernetes (USR1B62194C40656FBD)
enabled yes
created at 2022-11-19 11:15:47 -0500
access type online
token suffix ot50v2vA
restrict auth
enforce 2-factor false
== Access Token
access token 1/Nr7qLedL/P0KXxbrB8+jpfrFPBrNi3X+8H9BBwyOYg/mvOot50v2vA
*** Remember to record your access token!
Make a note of the displayed access token somewhere safe. Set the
environment variables BLOCKBRIDGE_API_HOST
and BLOCKBRIDGE_API_KEY
to use in the forthcoming steps to install the driver.
export BLOCKBRIDGE_API_HOST=blockbridge.mycompany.example
export BLOCKBRIDGE_API_KEY="1/Nr7qLedL/P0KXxbrB8+jpfrFPBrNi3X+8H9BBwyOYg/mvOot50v2vA"
Driver Installation
Here’s how to install the Blockbridge driver in your Kubernetes cluster.
Authenticate with Kubernetes
First, ensure your session is authenticated to your Kubernetes cluster. Running
kubectl version
should show a version for both the client and server.
kubectl version
Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2
kubectl authentication is beyond the scope of this guide. Please refer to the specific instructions for the Kubernetes service or installation you are using.

Deploy Volume Snapshot Infrastructure
Volume Snapshot support depends on several components:
- Volume Snapshot Custom Resource Definitions
- Volume Snapshot Controller
- Snapshot Validation Webhook
- A CSI driver utilizing the CSI Snapshotter sidecar
If your Kubernetes distribution doesn’t bundle upstream volume snapshot support, you need to deploy the minimal requirements before deploying the Blockbridge CSI Driver.
Fetch the snapshot-support deployment manifest:
curl -OsSf https://get.blockbridge.com/kubernetes/6.0/snapshot-support-v3.2.0.yaml
Deploy volume snapshot CRDs and the snapshot controller:
kubectl apply -f snapshot-support-v3.2.0.yaml
Deploying the snapshot validation webhook is required for production use and is outside the scope of this guide.
Prepare the driver deployment
We’ll be using the Kubernetes kustomize
tool to simplify the driver deployment process. While this guide uses the official k8s docker container for kustomize, feel free to install it using your preferred method.
Create a new directory for managing your driver deployment configuration and change into it:
mkdir -p ~/blockbridge-deploy && cd ~/blockbridge-deploy
Fetch the csi-blockbridge deployment manifest:
curl -OsSf https://get.blockbridge.com/kubernetes/6.0/csi-blockbridge-v3.2.0.yaml
Initialize the kustomization.yaml
file using the v3.2.0 resources:
docker run --rm -w /config -v $(pwd):/config registry.k8s.io/kustomize/kustomize:v5.4.1 \
create --namespace kube-system --resources csi-blockbridge-v3.2.0.yaml
Configure Driver Authentication
The secret contains both the Blockbridge API endpoint URL and access
token. Save the previously generated authentication token and endpoint
to the blockbridge-secret.env
file.
Use BLOCKBRIDGE_API_HOST
and BLOCKBRIDGE_API_KEY
with the correct
values for the Blockbridge controlplane, and the access token you
created earlier in the kubernetes account.
cat > blockbridge-secret.env <<-EOF
api-url=https://$BLOCKBRIDGE_API_HOST/api
access-token=$BLOCKBRIDGE_API_KEY
ssl-verify-peer=false
EOF
Configure the blockbridge
secret as part of the driver deployment:
docker run --rm -w /config -v $(pwd):/config registry.k8s.io/kustomize/kustomize:v5.4.1 \
edit add secret blockbridge --from-env-file blockbridge-secret.env
Note: this example disables TLS certificate verification using the
ssl-verify-peer flag. This setting implicitly trusts the
default controlplane self-signed certificate. Configuring certificate
verification, including specifying custom-supplied CA certificates, is
beyond the scope of this guide. Please contact Blockbridge
Support for more information.

Deploy the Blockbridge Driver
Deploy the Blockbridge Driver components using kubectl:
kubectl apply -k .
Each resource created as part of the deployment displays a corresponding “created” message:
[...]
secret/blockbridge-g6cmf5d2d9 created
statefulset.apps/csi-blockbridge-controller created
daemonset.apps/csi-blockbridge-node created
volumesnapshotclass.snapshot.storage.k8s.io/blockbridge-gp created
csidriver.storage.k8s.io/csi.blockbridge.com created
[...]
For further detail on driver deployment, see the Kubernetes CSI Developer Documentation.
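You can also confirm that the kustomize-generated secret landed in the cluster. Note that kustomize appends a content hash to the secret name, so the suffix in your cluster will differ from the example output above:

```shell
kubectl -n kube-system get secrets | grep blockbridge
```

The DATA column should report 3 keys, matching api-url, access-token, and ssl-verify-peer from the env file.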
Ensure the Driver is Operational
Finally, check that the driver is up and running.
kubectl get pods -A -l role=csi-blockbridge
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system csi-blockbridge-node-pm2sw 2/2 Running 0 15m
kube-system csi-blockbridge-controller-0 4/4 Running 0 15m
Storage Classes
The Blockbridge driver comes with a default “general purpose”
StorageClass blockbridge-gp
. This is the default StorageClass
for dynamic provisioning of storage volumes. It provisions using the
default Blockbridge storage template configured in the Blockbridge
controlplane.
kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
blockbridge-gp (default) csi.blockbridge.com Delete Immediate false 29h
blockbridge-nvme csi.blockbridge.com Delete Immediate false 29h
blockbridge-nvme-multipath csi.blockbridge.com Delete Immediate false 29h
blockbridge-tls csi.blockbridge.com Delete Immediate false 29h
microk8s-hostpath (default) microk8s.io/hostpath Delete WaitForFirstConsumer false 6d2h
There are a variety of additional storage class configuration options available, including:
- Using transport encryption (tls).
- Using a custom tag-based query.
- Using a named service template.
- Using explicitly specified provisioned IOPS.
- Using NVMe/TCP instead of iSCSI.
- Enabling native NVMe multipathing.
There are several additional example storage classes in
storageclasses.yaml
. You can download, edit, and apply these storage
classes as needed.
curl -OsSf https://get.blockbridge.com/kubernetes/6.0/csi-blockbridge/storageclasses.yaml
cat storageclasses.yaml
... [output trimmed] ...
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: blockbridge-gp
namespace: kube-system
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.blockbridge.com
kubectl apply -f storageclasses.yaml
storageclass.storage.k8s.io "blockbridge-gp" configured
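Note that the kubectl get storageclass output above shows two classes flagged (default), which makes default provisioning ambiguous. The standard is-default-class annotation can be toggled with kubectl patch; a sketch, assuming microk8s-hostpath is the competing default and blockbridge-gp should be the sole default:

```shell
# Remove the default flag from the competing class
kubectl patch storageclass microk8s-hostpath \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

# Mark blockbridge-gp as the default class
kubectl patch storageclass blockbridge-gp \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```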
VERIFICATION TESTING
This section has a few basic tests you can use to validate that your Blockbridge driver is working properly.
Example Applications
We have a small collection of example applications ready for deployment. To deploy them all, apply the examples manifest:
kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/examples-v3.2.0.yaml
This results in five example application pods, each demonstrating a different feature of the Blockbridge CSI driver:
- blockbridge-nvme-app - consumes an NVMe PVC.
- blockbridge-iscsi-app - consumes an iSCSI PVC.
- blockbridge-clone-app - volume sourced from an existing iSCSI volume.
- blockbridge-snapshot-restore-app - volume sourced from a snapshot.
- blockbridge-inline-pvc-app - this application makes use of a generic ephemeral volume, instead of an independently managed PVC.
kubectl get pods
NAME READY STATUS RESTARTS AGE
blockbridge-nvme-app 2/2 Running 0 70s
blockbridge-inline-pvc-app 2/2 Running 0 70s
blockbridge-iscsi-app 2/2 Running 0 70s
blockbridge-clone-app 2/2 Running 0 71s
blockbridge-snapshot-restore-app 2/2 Running 0 70s
The following sections go into further detail about how each example application’s volumes are configured.
Volume Creation
This test verifies that Blockbridge storage volumes are now available via Kubernetes persistent volume claims (PVC).
To test this out, create a PersistentVolumeClaim. It will dynamically provision a volume in Blockbridge and make it accessible to applications.
kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/examples/iscsi-pvc.yaml
persistentvolumeclaim "blockbridge-iscsi-pvc" created
Alternatively, download the example volume yaml, modify it as needed, and apply.
curl -OsSL https://get.blockbridge.com/kubernetes/6.0/examples/iscsi-pvc.yaml
cat iscsi-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: blockbridge-iscsi-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: blockbridge-gp
kubectl apply -f ./iscsi-pvc.yaml
persistentvolumeclaim "blockbridge-iscsi-pvc" created
Use kubectl get pvc blockbridge-iscsi-pvc
to check that the PVC was created
successfully.
kubectl get pvc blockbridge-iscsi-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
blockbridge-iscsi-pvc Bound pvc-a8f61548-5f07-44a4-beb2-87cc71ae40e7 5Gi RWO blockbridge-gp 3m59s
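Each bound claim is backed by a dynamically provisioned PersistentVolume. You can follow the binding from claim to volume as sketched below (volume names will differ in your cluster):

```shell
# Resolve the PersistentVolume bound to the claim
PV=$(kubectl get pvc blockbridge-iscsi-pvc -o jsonpath='{.spec.volumeName}')

# Inspect the volume; the provisioning driver should be csi.blockbridge.com
kubectl get pv "$PV" -o wide
```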
Pod Creation
This test creates a Pod (application) that uses a previously created PVC. When you create the Pod, it attaches the volume, then formats and mounts it, making it available to the specified application.
kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/examples/iscsi-app.yaml
pod "blockbridge-iscsi-app" created
Alternatively, download the application yaml, modify as needed, and apply.
curl -OsSL https://get.blockbridge.com/kubernetes/6.0/examples/iscsi-app.yaml
cat iscsi-app.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: blockbridge-iscsi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: my-bb-volume
      command: [ "sleep", "1000000" ]
    - name: my-backend
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: my-bb-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-bb-volume
      persistentVolumeClaim:
        claimName: blockbridge-iscsi-pvc
kubectl apply -f ./iscsi-app.yaml
pod "blockbridge-iscsi-app" created
Verify that the pod is running successfully.
kubectl get pod blockbridge-iscsi-app
NAME READY STATUS RESTARTS AGE
blockbridge-iscsi-app 2/2 Running 0 13s
Pod Data Access
Inside the app container, write data to the mounted volume.
kubectl exec -ti blockbridge-iscsi-app -c my-frontend -- /bin/sh
/ # df /data
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/blockbridge/2f93beb2-61eb-456b-809e-22e27e4f73cf
5232608 33184 5199424 1% /data
/ # touch /data/hello-world
/ # exit
kubectl exec -ti blockbridge-iscsi-app -c my-backend -- /bin/sh
/ # ls /data
hello-world
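Once you have finished the verification tests, the example applications can be removed with the same manifest used to create them. Note this deletes the example pods and their PVCs, so any data written to them is discarded:

```shell
kubectl delete -f https://get.blockbridge.com/kubernetes/6.0/examples-v3.2.0.yaml
```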
TROUBLESHOOTING
App Stuck in ContainerCreating
When the application is stuck in ContainerCreating, check to see if the mount has failed.
Symptom
Check the app status.
kubectl get pod/blockbridge-iscsi-app
NAME READY STATUS RESTARTS AGE
blockbridge-iscsi-app 0/2 ContainerCreating 0 20s
kubectl describe pod/blockbridge-iscsi-app
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10s default-scheduler Successfully assigned default/blockbridge-iscsi-app to kubelet.localnet
Normal SuccessfulAttachVolume 10s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-71c37e84-b302-11e9-a93f-0242ac110003"
Warning FailedMount 1s kubelet, kubelet.localnet MountVolume.MountDevice failed for volume "pvc-71c37e84-b302-11e9-a93f-0242ac110003" : rpc error: code = Unknown desc = runtime_error: /etc/iscsi/initiatorname.iscsi not found; ensure 'iscsi-initiator-utils' is installed.
Resolution
- Ensure the host running the kubelet has iSCSI client support installed on the host/node.
- For CentOS/RHEL, install the iscsi-initiator-utils package on the host running the kubelet:

  dnf install iscsi-initiator-utils

- For Ubuntu, install the open-iscsi package on the host running the kubelet:

  apt install open-iscsi
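After installing the package, you can confirm that the file named in the error message now exists on the node:

```shell
cat /etc/iscsi/initiatorname.iscsi
# Expect a single line of the form InitiatorName=iqn....
```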
Symptom
Check the app status.
kubectl get pod/blockbridge-iscsi-app
NAME READY STATUS RESTARTS AGE
blockbridge-iscsi-app 0/2 ContainerCreating 0 20s
kubectl describe pod/blockbridge-iscsi-app
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned default/blockbridge-iscsi-app to crc-l6qvn-master-0
Normal SuccessfulAttachVolume 10m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-9b2e8116-62a6-4089-8b0d-fab0f839b7aa"
Warning FailedMount 9m56s kubelet MountVolume.MountDevice failed for volume "pvc-9b2e8116-62a6-4089-8b0d-fab0f839b7aa" : rpc error: code = Unknown desc = exec_error: Failed to connect to bus: No data available
iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: Could not login to [iface: default, target: iqn.2009-12.com.blockbridge:t-pjwajzvdho-471c1b66-e24d-4377-a16b-71ac1d580061, portal: 172.16.100.129,3260].
iscsiadm: initiator reported error (20 - could not connect to iscsid)
iscsiadm: Could not log into all portals
Resolution
- Ensure the host running the kubelet has the iSCSI daemon installed and started on the host/node.
- For CentOS/RHEL, install the iscsi-initiator-utils package on the host running the kubelet and start iscsid:

  dnf install iscsi-initiator-utils
  systemctl enable --now iscsid

- For Ubuntu, install the open-iscsi package on the host running the kubelet and start iscsid:

  apt install open-iscsi
  systemctl enable --now iscsid
Symptom
NVMe volume fails to be attached, with an error reported by the csi-blockbridge-node pod:
ERROR: Failed to find `nvme` command: please ensure the 'nvme-cli' package is installed and try again.
Resolution
Ensure the host running the kubelet has the nvme-cli
package
installed and the nvme-tcp
kernel module loaded. See the driver
requirements section for more details.
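A quick sanity check on the affected node, assuming shell access (the nvme binary and nvme_tcp module name are the standard ones shipped by nvme-cli and the upstream kernel):

```shell
# Confirm the nvme CLI is installed
command -v nvme && nvme version

# Confirm the transport module is loaded; load it if missing
lsmod | grep nvme_tcp || sudo modprobe nvme-tcp
```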
Provisioning Unauthorized
In this failure mode, provisioning fails with an “unauthorized” message.
Symptom
Check the PVC describe output.
kubectl describe pvc csi-pvc-blockbridge
Provisioning failed due to “unauthorized” because the authorization access token is not valid. Ensure the correct access token is entered in the secret.
Warning ProvisioningFailed 6s (x2 over 19s) csi.blockbridge.com csi-provisioner-blockbridge-0 2caddb79-ec46-11e8-845d-465903922841 Failed to provision volume with StorageClass "blockbridge-gp": rpc error: code = Internal desc = unauthorized_error: unauthorized: unauthorized
Resolution
- Edit blockbridge-secret.env and ensure the correct access token and API URL are set.
- Re-deploy the driver:
kubectl apply -k .
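To confirm which token the deployed driver is actually carrying, you can decode the deployed secret. This sketch looks the secret up by name prefix, since kustomize appends a content hash to the name:

```shell
# Locate the kustomize-generated secret (name carries a hash suffix)
SECRET=$(kubectl -n kube-system get secrets -o name | grep blockbridge)

# Decode the stored access token; its suffix should match the
# "token suffix" reported by the Blockbridge CLI for the authorization
kubectl -n kube-system get "$SECRET" -o jsonpath='{.data.access-token}' | base64 -d
echo
```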
Provisioning Storage Class Invalid
Provisioning fails with an “invalid storage class” error.
Symptom
Check the PVC describe output:
kubectl describe pvc csi-pvc-blockbridge
Provisioning failed because the storage class specified was invalid.
Warning ProvisioningFailed 7s (x3 over 10s) persistentvolume-controller storageclass.storage.k8s.io "blockbridge-gp" not found
Resolution
Ensure the StorageClass exists with the same name.
kubectl get storageclass blockbridge-gp
Error from server (NotFound): storageclasses.storage.k8s.io "blockbridge-gp" not found
- If it doesn’t exist, then create the storageclass.
kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/csi/v1.0.0/csi-storageclass.yaml
- Alternatively, download and edit the desired storageclass.
curl -OsSL https://get.blockbridge.com/kubernetes/6.0/csi/v1.0.0/csi-storageclass.yaml
Make whatever changes you need to in csi-storageclass.yaml
. Apply the updates using kubectl
:
kubectl apply -f ./csi-storageclass.yaml
In the background, the PVC continually retries. Once the above changes are complete, it will pick up the storage class change.
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-pvc-blockbridge Bound pvc-6cb93ab2-ec49-11e8-8b89-46facf8570bb 5Gi RWO blockbridge-gp 4s
App Stuck in Pending
One of the causes for an application stuck in pending is a missing Persistent Volume Claim (PVC).
Symptom
The output of kubectl get pod shows that the app has a status of Pending.
kubectl get pod blockbridge-iscsi-app
NAME READY STATUS RESTARTS AGE
blockbridge-iscsi-app 0/2 Pending 0 14s
Use kubectl describe pod
to reveal more information. In this case, the PVC is not
found.
kubectl describe pod blockbridge-iscsi-app
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 12s (x6 over 28s) default-scheduler persistentvolumeclaim "csi-pvc-blockbridge" not found
Resolution
Create the PVC if necessary and ensure that it’s valid. First, validate that it’s missing.
kubectl get pvc csi-pvc-blockbridge
Error from server (NotFound): persistentvolumeclaims "csi-pvc-blockbridge" not found
If it’s missing, create it.
kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/examples/csi-pvc.yaml
persistentvolumeclaim "csi-pvc-blockbridge" created
In the background, the application retries automatically and succeeds in starting.
kubectl describe pod blockbridge-iscsi-app
Normal Scheduled 8s default-scheduler Successfully assigned blockbridge-iscsi-app to aks-nodepool1-56242131-0
Normal SuccessfulAttachVolume 8s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-5332e169-ec4f-11e8-8b89-46facf8570bb"
Normal SuccessfulMountVolume 8s kubelet, aks-nodepool1-56242131-0 MountVolume.SetUp succeeded for volume "default-token-bx8b9"