Blockbridge provides a Container Storage Interface (CSI) driver to deliver persistent, secure, multi-tenant, cluster-accessible storage for Kubernetes. This guide describes how to deploy Blockbridge as the storage backend for Kubernetes containers.

If you’ve configured other Kubernetes storage drivers before, you may want to start with the Quickstart section. The rest of the guide provides detailed information about features, driver configuration options, and troubleshooting.


REQUIREMENTS & VERSIONS

Supported Versions

The current release of the Blockbridge CSI driver supports Kubernetes versions 1.27 through 1.32.

Supported Features

  • Dynamic Volume Provisioning: Automatically provision, attach, and format network-attached storage for use with your containers
  • Topology-Aware Volume Placement: Ensure volumes are placed on nodes with sufficient resources and proximity to the application
  • Raw Block Volumes: Provide direct access to block devices for advanced use cases
  • Multi-Host Volume Mobility: Instantly move containers and persistent volumes between your K8s nodes, enabling support for highly available containers
  • Thin Snapshots: Create space-efficient point-in-time snapshots for backup and restore
  • Thin Clones: Create volumes from a snapshot or directly from an existing volume
  • High Availability: End-to-end high availability and transparent recovery from infrastructure failures
  • Quality of Service: Automated performance provisioning and run-time management
  • Encryption & Secure Erase: Independently keyed multi-tenant encryption and automatic secure erase
  • iSCSI/TCP: Support for high-performance single-queue SCSI-native storage access
  • NVMe/TCP: Support for high-performance multi-queue NVMe-native storage access optimized for low latency

Supported K8s Environments

For production workloads, deploy on a supported Kubernetes distribution.

MicroK8s is recommended for trial deployments, but is not supported for production use.

Requirements

The following minimum requirements must be met to use Blockbridge with Kubernetes:

  • Kubernetes 1.27+.
  • iSCSI service and kernel support.
  • NVMe kernel support.

For Red Hat derivatives

Install and enable iSCSI support:

sudo sh -c "dnf install -y iscsi-initiator-utils && systemctl enable --now iscsid"

Install the NVMe CLI and load NVMe/TCP kernel support:

sudo sh -c "dnf install -y nvme-cli && modprobe nvme-tcp && echo nvme-tcp > /etc/modules-load.d/blockbridge-nvme-tcp.conf"

For Debian derivatives (including Ubuntu)

Install and enable iSCSI support:

sudo sh -c "apt update && apt install -y open-iscsi && systemctl enable --now iscsid"

Install the NVMe CLI and NVMe/TCP kernel support, then load the module:

sudo sh -c "apt update && apt install -y nvme-cli linux-modules-extra-$(uname -r) && \
  modprobe nvme-tcp && echo nvme-tcp > /etc/modules-load.d/blockbridge-nvme-tcp.conf"
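
To verify the prerequisites on a node, check that the iSCSI daemon is running, the NVMe/TCP module is loaded, and the NVMe CLI is present. A quick check, assuming systemd-based hosts (the module appears as nvme_tcp in lsmod):

systemctl is-active iscsid
lsmod | grep nvme_tcp
nvme version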

Blockbridge Driver Version History

3.3.0 - Fri, Jun 13 2025

  • Supports Kubernetes versions 1.27-1.32.
  • Add support for topology-aware provisioning.
    • New BlockbridgeClusters secret format supporting multiple clusters
    • Automatic storage cluster selection based on Kubernetes zone constraint
    • Backward compatibility with existing single-cluster configurations
  • Add support for raw block attachments.
  • Change default controller deployment to use leader election.
  • Update sidecar containers to support CSI 1.11.0.
  • Go toolchain updated to 1.24.4.

3.2.0 - Mon, Apr 22 2024

  • Add support for Kubernetes 1.29.
  • Add support for Native NVMe Multipathing.
  • Driver is now built with Go 1.22.2.

3.1.0 - Fri, Apr 28 2023

  • Add support for Kubernetes 1.27.
  • Add support for NVMe/TCP.
  • Advertise Velero support in the default VolumeSnapshotClass.
  • Updated dependencies and sidecar containers to support CSI 1.8.
  • Driver is now built with Go 1.20.3.

3.0.0 - Fri, Jan 20 2023

  • Add support for Kubernetes 1.26.
  • Added support for Volume Snapshots and Clones.
  • Add support for K8s distributions which containerize iscsid, including Talos Linux.
  • Adjusted driver deployment manifest to enable Rolling Upgrades.

2.0.0 - Tue, Jul 30 2019

  • Improve driver stability.
  • Fixes occasional remount issue due to missing MountPropagation.
  • Fixes a rare segfault due to mismanaged generic error handling.
  • Supports K8s 1.14.
  • Update to support CSI 1.0.0 release.
  • Driver is now built with Go 1.12.

1.0.0 - Mon, Jul 02 2018

  • Initial driver release.
  • Supports K8s 1.10.
  • CSI v0.3.0.

QUICKSTART

This is a brief guide on how to install and configure the Blockbridge Kubernetes driver. In this section, you will:

  1. Create a tenant account on one or more Blockbridge clusters.
  2. Create a persistent authorization in each tenant account.
  3. Create a Secret resource.
  4. If desired, assign topology labels to your Kubernetes nodes.
  5. If necessary, deploy snapshot support infrastructure.
  6. Deploy the Blockbridge CSI driver.

Preparation

Start by identifying each Blockbridge Cluster you’d like to use with Kubernetes. Note the management hostname or IP address of each backend and its Kubernetes topology zone (corresponding to the topology.kubernetes.io/zone label).

Backend Configuration

These steps use the containerized Blockbridge CLI utility to create a tenant account and authorization token on each backend cluster. For multi-backend deployments, be sure to name each tenant account according to its zone.

Follow the steps below for each backend storage cluster.

  1. Use the containerized CLI to create a tenant account.

    docker run --rm -it -v blockbridge-cli:/data docker.io/blockbridge/cli:latest-alpine \
      bb -kH <HOST-ZONE-N> account create --name <ACCOUNT-ZONE-N>
    

    When prompted, enter your system credentials.

    Authenticating to https://<HOST-ZONE-N>/api
    
    Enter user or access token: system
    Password for system: ....
    Authenticated; token expires in 3599 seconds.
    
  2. Create a persistent authorization for the tenant account.

    docker run --rm -it -e BLOCKBRIDGE_API_SU=<ACCOUNT-ZONE-N> -v blockbridge-cli:/data \
      docker.io/blockbridge/cli:latest-alpine bb -kH <HOST-ZONE-N> \
      authorization create --notes csi-blockbridge
    
    == Created authorization: ATH476D194C40626436
    
    == Authorization: ATH476D194C40626436
    serial                ATH476D194C40626436
    account               <ACCOUNT-ZONE-N> (ACT076D194C40626412)
    user                  <ACCOUNT-ZONE-N> (USR1B6D194C4062640F)
    enabled               yes
    created at            2025-04-18 11:59:02 +0000
    access type           online
    token suffix          cYjGHWIw
    restrict              auth
    enforce 2-factor      false
    
    == Access Token
    access token          1/tywnGIxh................92HXLCcYjGHWIw
    
    *** Remember to record your access token!
    
  3. Record each access token with your notes for later use.

Blockbridge Secret Resource

Storage cluster configuration is provided to the CSI driver via a Kubernetes Secret resource, which defines the set of storage clusters available to the driver. Create a secret.json file with the following contents:

[
  {
    "api-url": "https://<HOST-ZONE-1>/api",
    "access-token": "<TOKEN-ZONE-1>",
    "ssl-verify-peer": false,
    "labels": {
      "topology.kubernetes.io/zone": "zone-1"
    }
  },
  {
    "api-url": "https://<HOST-ZONE-2>/api",
    "access-token": "<TOKEN-ZONE-2>",
    "ssl-verify-peer": false,
    "labels": {
      "topology.kubernetes.io/zone": "zone-2"
    }
  },
  {
    "api-url": "https://<HOST-ZONE-3>/api",
    "access-token": "<TOKEN-ZONE-3>",
    "ssl-verify-peer": false,
    "labels": {
      "topology.kubernetes.io/zone": "zone-3"
    }
  }
]

This example assumes three availability zones; adjust it to match your deployment. Use the management hostname or IP address for each storage cluster, along with the access tokens generated in the previous step of this guide.
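
If you have only one storage cluster, the same format works with a single entry. A minimal sketch, with the labels block omitted on the assumption that zone labels are only needed for topology-aware placement:

[
  {
    "api-url": "https://<HOST>/api",
    "access-token": "<TOKEN>",
    "ssl-verify-peer": false
  }
]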

When you’re happy with the contents of secret.json, create the resource in your Kubernetes cluster:

kubectl create secret generic -n kube-system blockbridge --from-file=BlockbridgeClusters=secret.json
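
To confirm the secret exists and contains the BlockbridgeClusters key:

kubectl -n kube-system describe secret blockbridge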

Node Topology Labels

Skip to the next step if you only have a single backend storage cluster.

Ensure your nodes are correctly labeled and that the labels are consistent with the BlockbridgeClusters configuration. To add a label to a node:

kubectl label node <NODE_NAME> topology.kubernetes.io/zone=<ZONE_NAME>

To remove a label from a node:

kubectl label node <NODE_NAME> topology.kubernetes.io/zone-

Confirm each zone has the correct set of nodes:

kubectl get nodes -L topology.kubernetes.io/zone

For example:

NAME      STATUS   ROLES                       AGE   VERSION        ZONE
rancher   Ready    control-plane,etcd,master   12d   v1.32.5+k3s1
worker1   Ready    control-plane,etcd,master   11d   v1.32.5+k3s1   zone-1
worker2   Ready    control-plane,etcd,master   11d   v1.32.5+k3s1   zone-2
worker3   Ready    control-plane,etcd,master   11d   v1.32.5+k3s1   zone-3
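
For example, to label the three workers shown above in one pass (node and zone names are taken from the example output; adjust for your cluster):

for i in 1 2 3; do
  kubectl label node worker$i topology.kubernetes.io/zone=zone-$i
done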

Snapshot Support

Volume Snapshot support depends on several components:

  • Volume Snapshot Custom Resource Definitions
  • Volume Snapshot Controller
  • Snapshot Validation Webhook
  • A CSI driver utilizing the CSI Snapshotter sidecar

If your Kubernetes distribution doesn’t bundle upstream volume snapshot support, you must deploy these components before deploying the Blockbridge CSI driver.

  • Rancher with K3s does not include snapshot support infrastructure.
  • Rancher with RKE does include snapshot support infrastructure!

If your cluster doesn’t have the following CRDs, it likely doesn’t have the snapshot controller deployed:

kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
kubectl get crd volumesnapshots.snapshot.storage.k8s.io
kubectl get crd volumesnapshotcontents.snapshot.storage.k8s.io
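
Alternatively, check all of the snapshot CRDs at once, and look for a running snapshot controller (the controller pod's name and namespace vary by distribution; these greps are illustrative):

kubectl get crd | grep snapshot.storage.k8s.io
kubectl get pods -A | grep snapshot-controller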

To install the required snapshot support components:

kubectl create -f https://get.blockbridge.com/kubernetes/6.0/rke2-snapshot-support.yaml

Driver Deployment

Deploy the Blockbridge CSI driver:

kubectl create -f https://get.blockbridge.com/kubernetes/6.0/csi-blockbridge-v3.3.0.yaml

Validate that node and controller pods are running:

kubectl -n kube-system get pods -l role=csi-blockbridge
NAME                                    READY     STATUS    RESTARTS   AGE
csi-blockbridge-controller-0            3/3       Running   0          6s
csi-blockbridge-node-4679b              2/2       Running   0          5s

Example Storage Classes

The Blockbridge driver ships with a “general purpose” StorageClass, blockbridge-gp. It is the default StorageClass for dynamic provisioning of storage volumes, and it provisions using the default Blockbridge storage template configured in the Blockbridge control plane.

kubectl get storageclass
NAME                         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
blockbridge-gp (default)     csi.blockbridge.com     Delete          Immediate              false                  2m50s
blockbridge-nvme             csi.blockbridge.com     Delete          Immediate              false                  2m50s
blockbridge-nvme-multipath   csi.blockbridge.com     Delete          Immediate              false                  2m50s
blockbridge-tls              csi.blockbridge.com     Delete          Immediate              false                  2m50s
blockbridge-topo             csi.blockbridge.com     Delete          WaitForFirstConsumer   false                  2m50s
local-path (default)         rancher.io/local-path   Delete          WaitForFirstConsumer   false                  12d

There are a variety of additional storage class configuration options available, including:

  1. Using transport encryption (tls).
  2. Using a custom tag-based query.
  3. Using a named service template.
  4. Using explicitly specified provisioned IOPS.
  5. Using NVMe/TCP instead of iSCSI.
  6. Enabling native NVMe multipathing.
  7. Using WaitForFirstConsumer volume binding mode (see the sketch below).

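A minimal sketch of a topology-aware StorageClass using WaitForFirstConsumer binding. All fields shown are standard Kubernetes; the class name is illustrative, and driver-specific parameters are omitted:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: blockbridge-topo-example
provisioner: csi.blockbridge.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
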
There are several additional example storage classes in csi-storageclass.yaml. You can download, edit, and apply these storage classes as needed.

curl -OsSf https://get.blockbridge.com/kubernetes/6.0/csi/csi-storageclass.yaml
cat csi-storageclass.yaml
... [output trimmed] ...
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: blockbridge-gp
  namespace: kube-system
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.blockbridge.com
kubectl apply -f csi-storageclass.yaml
storageclass.storage.k8s.io "blockbridge-gp" configured

VERIFICATION TESTING

This section has a few basic tests you can use to validate that your Blockbridge driver is working properly.

Example Applications

We have a small collection of example applications ready for deployment. To deploy them all, apply the examples manifest:

kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/examples-v3.3.0.yaml

This results in 7 example application pods demonstrating different features of the Blockbridge CSI driver:

  • blockbridge-app-east/blockbridge-app-west - use a WaitForFirstConsumer StorageClass.
  • blockbridge-raw-block-app - uses a PVC with volumeMode: Block for raw block access (see the sketch after the pod listing below).
  • blockbridge-nvme-app - consumes an NVMe PVC.
  • blockbridge-iscsi-app - consumes an iSCSI PVC.
  • blockbridge-clone-app - volume sourced from an existing iSCSI volume.
  • blockbridge-snapshot-restore-app - volume sourced from a snapshot.
  • blockbridge-inline-pvc-app - this application makes use of a generic ephemeral volume, instead of an independently managed PVC.

kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
blockbridge-nvme-app               2/2     Running   0          70s
blockbridge-inline-pvc-app         2/2     Running   0          70s
blockbridge-clone-app              2/2     Running   0          71s
blockbridge-snapshot-restore-app   2/2     Running   0          70s
blockbridge-app-east               1/1     Running   0          2m21s
blockbridge-app-west               1/1     Running   0          2m21s
blockbridge-iscsi-app              2/2     Running   0          5h46m
blockbridge-raw-block-app          1/1     Running   0          2m38s
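
For reference, raw block volumes use the standard Kubernetes volumeMode and volumeDevices fields. A minimal sketch of a raw block PVC and a consuming pod (names, image, device path, and size are illustrative, not taken from the examples manifest):

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-raw-block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 5Gi
  storageClassName: blockbridge-gp
---
kind: Pod
apiVersion: v1
metadata:
  name: example-raw-block-app
spec:
  containers:
    - name: app
      image: busybox
      command: [ "sleep", "1000000" ]
      volumeDevices:
      - devicePath: /dev/xvda
        name: raw-volume
  volumes:
    - name: raw-volume
      persistentVolumeClaim:
        claimName: example-raw-block-pvc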

Volume Creation

This test verifies that Blockbridge storage volumes are now available via Kubernetes persistent volume claims (PVC).

To test this out, create a PersistentVolumeClaim. It will dynamically provision a volume in Blockbridge and make it accessible to applications.

kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/examples/iscsi-pvc.yaml
persistentvolumeclaim "blockbridge-iscsi-pvc" created

Alternatively, download the example volume yaml, modify it as needed, and apply.

curl -OsSL https://get.blockbridge.com/kubernetes/6.0/examples/iscsi-pvc.yaml
cat iscsi-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blockbridge-iscsi-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: blockbridge-gp
kubectl apply -f ./iscsi-pvc.yaml
persistentvolumeclaim "blockbridge-iscsi-pvc" created

Use kubectl get pvc blockbridge-iscsi-pvc to check that the PVC was created successfully.

kubectl get pvc blockbridge-iscsi-pvc
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
blockbridge-iscsi-pvc   Bound    pvc-a8f61548-5f07-44a4-beb2-87cc71ae40e7   5Gi        RWO            blockbridge-gp   3m59s
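
For more detail, inspect the PVC's events and the bound PersistentVolume (the volume name here comes from the example output above; yours will differ):

kubectl describe pvc blockbridge-iscsi-pvc
kubectl get pv pvc-a8f61548-5f07-44a4-beb2-87cc71ae40e7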

Pod Creation

This test creates a Pod (application) that uses a previously created PVC. When you create the Pod, it attaches the volume, then formats and mounts it, making it available to the specified application.

kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/examples/iscsi-app.yaml
pod "blockbridge-iscsi-app" created

Alternatively, download the application yaml, modify as needed, and apply.

curl -OsSL https://get.blockbridge.com/kubernetes/6.0/examples/iscsi-app.yaml
cat iscsi-app.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: blockbridge-iscsi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: my-bb-volume
      command: [ "sleep", "1000000" ]
    - name: my-backend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: my-bb-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-bb-volume
      persistentVolumeClaim:
        claimName: blockbridge-iscsi-pvc
kubectl apply -f ./iscsi-app.yaml
pod "blockbridge-iscsi-app" created

Verify that the pod is running successfully.

kubectl get pod blockbridge-iscsi-app
NAME                    READY     STATUS    RESTARTS   AGE
blockbridge-iscsi-app   2/2       Running   0          13s

Pod Data Access

Inside the app container, write data to the mounted volume.

kubectl exec -ti blockbridge-iscsi-app -c my-frontend -- /bin/sh
/ # df /data
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/blockbridge/2f93beb2-61eb-456b-809e-22e27e4f73cf
                       5232608     33184   5199424   1% /data

/ # touch /data/hello-world
/ # exit
kubectl exec -ti blockbridge-iscsi-app -c my-backend -- /bin/sh
/ # ls /data
hello-world
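
When you're finished testing, delete the example pod and PVC. With the default Delete reclaim policy, the backing Blockbridge volume is removed as well:

kubectl delete pod blockbridge-iscsi-app
kubectl delete pvc blockbridge-iscsi-pvc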

TROUBLESHOOTING

App Stuck in ContainerCreating

When the application is stuck in ContainerCreating, check to see if the mount has failed.

Symptom

Check the app status.

kubectl get pod/blockbridge-iscsi-app
NAME                    READY   STATUS              RESTARTS   AGE
blockbridge-iscsi-app   0/2     ContainerCreating   0          20s

kubectl describe pod/blockbridge-iscsi-app
Events:
  Type     Reason                  Age   From                            Message
  ----     ------                  ----  ----                            -------
  Normal   Scheduled               10s   default-scheduler               Successfully assigned default/blockbridge-iscsi-app to kubelet.localnet
  Normal   SuccessfulAttachVolume  10s   attachdetach-controller         AttachVolume.Attach succeeded for volume "pvc-71c37e84-b302-11e9-a93f-0242ac110003"
  Warning  FailedMount             1s    kubelet, kubelet.localnet       MountVolume.MountDevice failed for volume "pvc-71c37e84-b302-11e9-a93f-0242ac110003" : rpc error: code = Unknown desc = runtime_error: /etc/iscsi/initiatorname.iscsi not found; ensure 'iscsi-initiator-utils' is installed.

Resolution

  • Ensure the host running the kubelet has iSCSI client support installed.
  • For CentOS/RHEL, install the iscsi-initiator-utils package on the host running the kubelet:

    dnf install iscsi-initiator-utils

  • For Ubuntu, install the open-iscsi package on the host running the kubelet:

    apt install open-iscsi

Symptom

Check the app status.

kubectl get pod/blockbridge-iscsi-app
NAME                    READY   STATUS              RESTARTS   AGE
blockbridge-iscsi-app   0/2     ContainerCreating   0          20s

kubectl describe pod/blockbridge-iscsi-app
Events:
  Type     Reason                  Age    From                     Message
  ----     ------                  ----   ----                     -------
  Normal   Scheduled               10m    default-scheduler        Successfully assigned default/blockbridge-iscsi-app to crc-l6qvn-master-0
  Normal   SuccessfulAttachVolume  10m    attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-9b2e8116-62a6-4089-8b0d-fab0f839b7aa"
  Warning  FailedMount             9m56s  kubelet                  MountVolume.MountDevice failed for volume "pvc-9b2e8116-62a6-4089-8b0d-fab0f839b7aa" : rpc error: code = Unknown desc = exec_error: Failed to connect to bus: No data available
iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: Could not login to [iface: default, target: iqn.2009-12.com.blockbridge:t-pjwajzvdho-471c1b66-e24d-4377-a16b-71ac1d580061, portal: 172.16.100.129,3260].
iscsiadm: initiator reported error (20 - could not connect to iscsid)
iscsiadm: Could not log into all portals

Resolution

  • Ensure the iSCSI daemon (iscsid) is installed and running on the host running the kubelet.
  • For CentOS/RHEL, install the iscsi-initiator-utils package and start the daemon:

    dnf install iscsi-initiator-utils
    systemctl enable --now iscsid

  • For Ubuntu, install the open-iscsi package and start the daemon:

    apt install open-iscsi
    systemctl enable --now iscsid

Symptom

NVMe volume fails to be attached, with an error reported by the csi-blockbridge-node pod:

ERROR: Failed to find `nvme` command: please ensure the 'nvme-cli' package is installed and try again.

Resolution

Ensure the host running the kubelet has the nvme-cli package installed and the nvme-tcp kernel module loaded. See the driver requirements section for more details.
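
A quick way to verify both requirements on the node (the module appears as nvme_tcp in lsmod):

nvme version
lsmod | grep nvme_tcp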

Provisioning Unauthorized

In this failure mode, provisioning fails with an “unauthorized” message.

Symptom

Check the PVC describe output.

kubectl describe pvc csi-pvc-blockbridge

Provisioning failed with “unauthorized” because the authorization access token is not valid. Ensure the correct access token is configured in the blockbridge secret.

  Warning  ProvisioningFailed    6s (x2 over 19s)  csi.blockbridge.com csi-provisioner-blockbridge-0 2caddb79-ec46-11e8-845d-465903922841  Failed to provision volume with StorageClass "blockbridge-gp": rpc error: code = Internal desc = unauthorized_error: unauthorized: unauthorized

Resolution

Verify your access tokens and backend API URLs are correct for each Storage Cluster.
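
To inspect the configured clusters, decode the secret and compare each api-url and access-token against the tokens you recorded during backend configuration:

kubectl -n kube-system get secret blockbridge -o jsonpath='{.data.BlockbridgeClusters}' | base64 -d

To replace the secret after correcting secret.json:

kubectl -n kube-system delete secret blockbridge
kubectl create secret generic -n kube-system blockbridge --from-file=BlockbridgeClusters=secret.json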

Provisioning Storage Class Invalid

Provisioning fails with an “invalid storage class” error.

Symptom

Check the PVC describe output:

kubectl describe pvc csi-pvc-blockbridge

Provisioning failed because the storage class specified was invalid.

  Warning  ProvisioningFailed  7s (x3 over 10s)  persistentvolume-controller  storageclass.storage.k8s.io "blockbridge-gp" not found

Resolution

Ensure the StorageClass exists with the same name.

kubectl get storageclass blockbridge-gp
Error from server (NotFound): storageclasses.storage.k8s.io "blockbridge-gp" not found
  • If it doesn’t exist, create the storage class:

    kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/csi/csi-storageclass.yaml

  • Alternatively, download and edit the desired storage class:

    curl -OsSL https://get.blockbridge.com/kubernetes/6.0/csi/csi-storageclass.yaml

Make whatever changes you need to in csi-storageclass.yaml. Apply the updates using kubectl:

kubectl apply -f ./csi-storageclass.yaml

In the background, the PVC continually retries. Once the above changes are complete, it will pick up the storage class change.

kubectl get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
csi-pvc-blockbridge     Bound     pvc-6cb93ab2-ec49-11e8-8b89-46facf8570bb   5Gi        RWO            blockbridge-gp     4s

App Stuck in Pending

One of the causes for an application stuck in pending is a missing Persistent Volume Claim (PVC).

Symptom

The output of kubectl get pod shows that the app has a status of Pending.

kubectl get pod blockbridge-iscsi-app
NAME                    READY     STATUS    RESTARTS   AGE
blockbridge-iscsi-app   0/2       Pending   0          14s

Use kubectl describe pod to reveal more information. In this case, the PVC is not found.

kubectl describe pod blockbridge-iscsi-app
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  12s (x6 over 28s)  default-scheduler  persistentvolumeclaim "csi-pvc-blockbridge" not found

Resolution

Create the PVC if necessary and ensure that it’s valid. First, validate that it’s missing.

kubectl get pvc csi-pvc-blockbridge
Error from server (NotFound): persistentvolumeclaims "csi-pvc-blockbridge" not found

If it’s missing, create it.

kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/examples/csi-pvc.yaml
persistentvolumeclaim "csi-pvc-blockbridge" created

In the background, the application retries automatically and succeeds in starting.

kubectl describe pod blockbridge-iscsi-app
  Normal   Scheduled               8s  default-scheduler                  Successfully assigned blockbridge-iscsi-app to aks-nodepool1-56242131-0
  Normal   SuccessfulAttachVolume  8s  attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-5332e169-ec4f-11e8-8b89-46facf8570bb"
  Normal   SuccessfulMountVolume   8s  kubelet, aks-nodepool1-56242131-0  MountVolume.SetUp succeeded for volume "default-token-bx8b9"