Blockbridge provides a Container Storage Interface (CSI) driver to deliver persistent, secure, multi-tenant, cluster-accessible storage for Kubernetes. This guide describes how to deploy Blockbridge as the storage backend for Kubernetes containers.

If you’ve configured other Kubernetes storage drivers before, you may want to start with the Quickstart section. The rest of the guide has detailed information about features, driver configuration options and troubleshooting.


REQUIREMENTS & VERSIONS

Supported Versions

The current release of the Blockbridge CSI driver supports Kubernetes versions 1.28 through 1.33.

Supported Features

Feature Description
Dynamic Volume Provisioning Automatically provision, attach, and format network-attached storage for use with your containers
Topology-Aware Volume Placement Ensure volumes are placed on nodes with sufficient resources and proximity to the application
Raw Block Volumes Provide direct access to block devices for advanced use cases
Multi-Host Volume Mobility Instantly move containers and persistent volumes between your K8s nodes enabling support for highly available containers
Thin Snapshots Create space efficient point-in-time snapshots for backup and restore
Thin Clones Create volumes from a snapshot or directly from an existing volume
High Availability End-to-end high-availability and transparent recovery from infrastructure failures
Quality of Service Automated performance provisioning and run-time management
Encryption & Secure Erase Independently keyed multi-tenant encryption and automatic secure erase
iSCSI/TCP Support for high-performance single-queue SCSI-native storage access
NVMe/TCP Support for high-performance multi-queue NVMe-native storage access optimized for low latency

Supported K8s Environments

For production workloads the following distributions are supported:

MicroK8s is recommended for trial deployments, but is not supported for production use.

Requirements

The following minimum requirements must be met to use Blockbridge with Kubernetes:

  • Kubernetes 1.28+ (see Supported Versions).
  • iSCSI service and kernel support.
  • NVMe kernel support.

For Red Hat derivatives

Install and enable iSCSI support:

sudo sh -c "dnf install -y iscsi-initiator-utils && systemctl enable --now iscsid"

Load NVMe/TCP kernel support:

sudo sh -c "modprobe nvme-tcp && echo nvme-tcp > /etc/modules-load.d/blockbridge-nvme-tcp.conf"

For Debian derivatives (including Ubuntu)

Install and enable iSCSI support:

sudo sh -c "apt update && apt install -y open-iscsi && systemctl enable --now iscsid"

Install and load NVMe/TCP kernel support:

sudo sh -c "apt update && apt install -y linux-modules-extra-$(uname -r) && \
  modprobe nvme-tcp && echo nvme-tcp > /etc/modules-load.d/blockbridge-nvme-tcp.conf"

Blockbridge Driver Version History

3.4.0 - Mon, Aug 11 2025

Features

  • Volume Expansion: Added support for online and offline volume expansion
    • Dynamic volume resizing for both raw block and filesystem volumes
    • Automatic filesystem expansion
    • Online filesystem expansion with supported filesystems
  • NVMe Data Integrity: Added header and data digest support for NVMe over TCP
    • Configurable digest options for enhanced data integrity
    • Protection against data corruption during transmission
  • Supports Kubernetes versions 1.28-1.33.

Bugfixes

  • Prevent removal of a PV containing snapshots.

Configuration Changes

  • Volume Expansion: StorageClasses can now enable allowVolumeExpansion: true
  • NVMe Options: New StorageClass parameters for configuring digest authentication

3.3.0 - Fri, Jun 13 2025

  • Supports Kubernetes versions 1.27-1.32.
  • Add support for topology aware provisioning.
    • New BlockbridgeClusters secret format supporting multiple clusters
    • Automatic storage cluster selection based on Kubernetes zone constraint
    • Backward compatibility with existing single-cluster configurations
  • Add support for raw block attachments.
  • Change default controller deployment to use leader election.
  • Update sidecar containers to support CSI 1.11.0.
  • Go toolchain updated to 1.24.4.

3.2.0 - Mon, Apr 22 2024

  • Add support for Kubernetes 1.29.
  • Add support for Native NVMe Multipathing.
  • Driver is now built with Go 1.22.2.

3.1.0 - Fri, Apr 28 2023

  • Add support for Kubernetes 1.27.
  • Add support for NVMe/TCP.
  • Advertise Velero support in the default VolumeSnapshotClass.
  • Updated dependencies and sidecar containers to support CSI 1.8.
  • Driver is now built with Go 1.20.3.

3.0.0 - Fri, Jan 20 2023

  • Add support for Kubernetes 1.26.
  • Added support for Volume Snapshots and Clones.
  • Add support for K8s distributions which containerize iscsid, including Talos Linux.
  • Adjusted driver deployment manifest to enable Rolling Upgrades.

2.0.0 - Tue, Jul 30 2019

  • Improve driver stability.
  • Fixes occasional remount issue due to missing MountPropagation.
  • Fixes a rare segfault due to mismanaged generic error handling.
  • Supports K8s 1.14.
  • Update to support CSI 1.0.0 release.
  • Build driver with Go 1.12

1.0.0 - Mon, Jul 02 2018

  • Initial driver release.
  • Supports K8s 1.10.
  • CSI v0.3.0.

QUICKSTART

This is a brief guide on how to install and configure the Blockbridge Kubernetes driver. In this section, you will:

  1. Create a tenant account on one or more Blockbridge clusters.
  2. Create a persistent authorization in each tenant account.
  3. Create a Secret resource.
  4. If desired, assign topology labels to your Kubernetes nodes.
  5. If necessary, deploy snapshot support infrastructure.
  6. Deploy the Blockbridge CSI driver.

Many of these topics are covered in more detail later in this guide.

Preparation

Start by identifying each Blockbridge Cluster you’d like to use with Kubernetes. Note the management hostname or IP address of each backend and its Kubernetes topology zone (corresponding to the topology.kubernetes.io/zone label).

Backend Configuration

These steps use the containerized Blockbridge CLI utility to create a tenant account and authorization token on each backend cluster. For multi-backend deployments, be sure to name each tenant account according to its zone.

Follow the steps below for each backend storage cluster.

  1. Use the containerized CLI to create a tenant account.

    docker run --rm -it -v blockbridge-cli:/data docker.io/blockbridge/cli:latest-alpine \
      bb -kH <HOST-ZONE-N> account create --name <ACCOUNT-ZONE-N>
    

    When prompted, enter your system credentials.

    Authenticating to https://<HOST-ZONE-N>/api
    
    Enter user or access token: system
    Password for system: ....
    Authenticated; token expires in 3599 seconds.
    
  2. Create a persistent authorization for the tenant account.

    docker run --rm -it -e BLOCKBRIDGE_API_SU=<ACCOUNT-ZONE-N> -v blockbridge-cli:/data \
      docker.io/blockbridge/cli:latest-alpine bb -kH <HOST-ZONE-N> \
      authorization create --notes csi-blockbridge
    
    == Created authorization: ATH476D194C40626436
    
    == Authorization: ATH476D194C40626436
    serial                ATH476D194C40626436
    account               <ACCOUNT-ZONE-N> (ACT076D194C40626412)
    user                  <ACCOUNT-ZONE-N> (USR1B6D194C4062640F)
    enabled               yes
    created at            2025-04-18 11:59:02 +0000
    access type           online
    token suffix          cYjGHWIw
    restrict              auth
    enforce 2-factor      false
    
    == Access Token
    access token          1/tywnGIxh................92HXLCcYjGHWIw
    
    *** Remember to record your access token!
    
  3. Record each access token with your notes for later use.

Blockbridge Secret Resource

Storage cluster configuration is provided to the CSI driver via a Kubernetes Secret resource. A blockbridge secret defines the set of storage clusters used by the CSI driver. Create a secret.json file with the following contents:

[
  {
    "api-url": "https://<HOST-ZONE-1>/api",
    "access-token": "<TOKEN-ZONE-1>",
    "ssl-verify-peer": false,
    "labels": {
      "topology.kubernetes.io/zone": "zone-1"
    }
  },
  {
    "api-url": "https://<HOST-ZONE-2>/api",
    "access-token": "<TOKEN-ZONE-2>",
    "ssl-verify-peer": false,
    "labels": {
      "topology.kubernetes.io/zone": "zone-2"
    }
  },
  {
    "api-url": "https://<HOST-ZONE-3>/api",
    "access-token": "<TOKEN-ZONE-3>",
    "ssl-verify-peer": false,
    "labels": {
      "topology.kubernetes.io/zone": "zone-3"
    }
  }
]

This example assumes 3 availability zones – adjust to match your deployment. Use the management hostname or IP address for each storage cluster, along with the access tokens generated in the previous step of this guide.
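
If you have only a single backend storage cluster, a one-entry configuration is sufficient; the driver retains backward compatibility with single-cluster configurations. This sketch assumes the same array form accepts a single entry and that the topology labels may be omitted when no zone placement is needed:

```json
[
  {
    "api-url": "https://<HOST>/api",
    "access-token": "<TOKEN>",
    "ssl-verify-peer": false
  }
]
```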

When you’re happy with the contents of secret.json, create the resource in your Kubernetes cluster:

kubectl create secret generic -n kube-system blockbridge --from-file=BlockbridgeClusters=secret.json

Node Topology Labels

Skip to the next step if you only have a single backend storage cluster.

Ensure your nodes are correctly labeled and that the labels are consistent with the BlockbridgeClusters configuration. To add a label to a node:

kubectl label node <NODE_NAME> topology.kubernetes.io/zone=<ZONE_NAME>

To remove a label from a node:

kubectl label node <NODE_NAME> topology.kubernetes.io/zone-

Confirm each zone has the correct set of nodes:

kubectl get nodes -L topology.kubernetes.io/zone

For example:

NAME      STATUS   ROLES                       AGE   VERSION        ZONE
rancher   Ready    control-plane,etcd,master   12d   v1.32.5+k3s1
worker1   Ready    control-plane,etcd,master   11d   v1.32.5+k3s1   zone-1
worker2   Ready    control-plane,etcd,master   11d   v1.32.5+k3s1   zone-2
worker3   Ready    control-plane,etcd,master   11d   v1.32.5+k3s1   zone-3

Snapshot Support

Volume Snapshot support depends on several components:

  • Volume Snapshot Custom Resource Definitions
  • Volume Snapshot Controller
  • Snapshot Validation Webhook
  • A CSI driver utilizing the CSI Snapshotter sidecar

If your Kubernetes distribution doesn’t bundle upstream volume snapshot support, you need to deploy the minimal requirements before deploying the Blockbridge CSI Driver.

  • Rancher with K3s does not include snapshot support infrastructure.
  • Rancher with RKE does include snapshot support infrastructure!

If your cluster doesn’t have the following CRDs, it likely doesn’t have the snapshot controller deployed:

kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
kubectl get crd volumesnapshots.snapshot.storage.k8s.io
kubectl get crd volumesnapshotcontents.snapshot.storage.k8s.io

To install the snapshot support:

kubectl create -f https://get.blockbridge.com/kubernetes/6.1/rke2-snapshot-support.yaml
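
With the snapshot machinery in place, a snapshot of an existing claim can be declared as in the sketch below. The PVC name matches the example claim used elsewhere in this guide; the snapshot name is illustrative:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: csi-pvc-blockbridge-snap   # illustrative name
spec:
  # Omitting volumeSnapshotClassName selects the cluster's default
  # VolumeSnapshotClass for the driver.
  source:
    persistentVolumeClaimName: csi-pvc-blockbridge
```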

Driver Deployment

Deploy the Blockbridge CSI driver:

For Blockbridge 6.1:

kubectl create -f https://get.blockbridge.com/kubernetes/6.1/csi-blockbridge-v3.4.0.yaml

For Blockbridge 6.0:

kubectl create -f https://get.blockbridge.com/kubernetes/6.0/csi-blockbridge-v3.4.0.yaml

Validate that node and controller pods are running:

kubectl -n kube-system get pods -l role=csi-blockbridge
NAME                                    READY     STATUS    RESTARTS   AGE
csi-blockbridge-controller-0            3/3       Running   0          6s
csi-blockbridge-node-4679b              2/2       Running   0          5s

Driver Configuration

Blockbridge StorageClass Parameters

Use these settings in the parameters field of a Kubernetes StorageClass to control how Blockbridge provisions volumes. They let you choose the service type, placement and performance characteristics, wire protocol, and data-protection features for each class you expose to your users.

Option Default Description
serviceType (none) Select a named Blockbridge service template. If unspecified, the backend’s default service template is used.
storageQuery (none) Tag/query passed to the Blockbridge backend to select storage (e.g., "tier=ssd"). The value is forwarded to the backend and may be parsed as a tag-based query.
transportEncryption (none) If set to "tls", transport encryption is used. (iSCSI only)
iops (none) Request a specific IOPS value for the provisioned volume (string form). Passed through to the backend.
protocol iscsi Target protocol for the exported volume ("iscsi" or "nvme")
ctrlLossTmo "-1" NVMe controller loss timeout in seconds. A value of "-1" (the default) disables timeout.
multipath false Enables multipath I/O if set to "true".
headerDigest false Enables NVMe header digests if set to "true".
dataDigest false Enables NVMe data digests if set to "true".
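
As an illustration of the service-template and transport-encryption parameters, here is a hedged sketch; the template name "gold" and the class name are placeholders, not values defined by the driver:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blockbridge-gold-tls       # illustrative name
provisioner: csi.blockbridge.com
parameters:
  serviceType: "gold"              # placeholder: a service template defined on your backend
  transportEncryption: "tls"       # iSCSI transport encryption
  protocol: "iscsi"
```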

StorageClass Options

Use these Kubernetes-native options together with the Blockbridge-specific parameters above. Recommended defaults for most deployments: enable allowVolumeExpansion to allow users to resize claims, and use volumeBindingMode: WaitForFirstConsumer when you need topology-aware scheduling.

Option Default Description
provisioner (none) The name of the provisioner to use for this StorageClass. To create a Blockbridge storage class, set this to csi.blockbridge.com.
allowVolumeExpansion false If true, allows PersistentVolumeClaim capacity to be increased. The Blockbridge controller advertises the volume expansion capability when this is enabled.
volumeBindingMode Immediate (Kubernetes default) Controls when binding and provisioning occurs. WaitForFirstConsumer is used for topology-aware provisioning so scheduling and provisioning can consider pod placement.
storageclass.kubernetes.io/is-default-class false If true, this StorageClass is the default for provisioning PersistentVolumes.
csi.storage.k8s.io/fstype xfs The filesystem type to format the volume with. Supported options: ext4, xfs.
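
For example, a minimal class that formats volumes with ext4 instead of the xfs default might look like the following sketch (the class name is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blockbridge-ext4           # illustrative name
provisioner: csi.blockbridge.com
parameters:
  csi.storage.k8s.io/fstype: "ext4"
```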

The Default StorageClass annotation

Use the storageclass.kubernetes.io/is-default-class annotation to mark exactly one StorageClass as the cluster-wide default. When a PVC does not specify spec.storageClassName, Kubernetes assigns the default StorageClass automatically.

Example StorageClass with the default annotation:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blockbridge-gp
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.blockbridge.com
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer

Changing the default does not affect existing PersistentVolumes or Claims; it only influences future PVCs that omit storageClassName. If you always set storageClassName on PVCs, the default is ignored for those claims.

Verify the default (look for “(default)” in the NAME column):

kubectl get storageclass

PVC Options

Use these Kubernetes fields on a PersistentVolumeClaim (PVC) to influence how Blockbridge provisions and exposes storage for your workloads. Specify them under the PVC’s spec, in combination with your chosen StorageClass parameters.

Option Default Description
dataSource (none) Use to create a volume from an existing VolumeSnapshot or another PVC. When a dataSource is provided (for example, kind: VolumeSnapshot), the driver will attempt to create the volume as a clone of that source.
volumeMode Filesystem The intended usage mode for the volume. Filesystem provisions a filesystem-backed volume; Block provisions a raw block device for direct use by Pods. Ensure the Pod and PVC are configured correctly for block mode.
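
Two hedged sketches of these options in use: a volume restored from a VolumeSnapshot via dataSource, and a raw block claim. Names, sizes, and the snapshot reference are illustrative; blockbridge-gp is the default general-purpose class described in the next section:

```yaml
# Restore a new volume from an existing snapshot via dataSource.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-from-snapshot          # illustrative name
spec:
  storageClassName: blockbridge-gp
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot              # illustrative snapshot name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# Request a raw block device instead of a filesystem.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-raw-block              # illustrative name
spec:
  storageClassName: blockbridge-gp
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```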

Example Storage Classes

The Blockbridge driver comes with a default “general purpose” StorageClass blockbridge-gp. This is configured to be the default StorageClass for dynamic provisioning of storage volumes. It provisions using the default Blockbridge storage template configured in the Blockbridge controlplane.

The following sections detail several example StorageClasses.

General purpose (default) — iSCSI

A simple, general-purpose class that uses iSCSI, enables expansion, and waits for the first consumer for topology-aware scheduling.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blockbridge-gp
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.blockbridge.com
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
parameters:
  protocol: "iscsi"

Performance NVMe/TCP with header and data integrity

High-performance NVMe/TCP class with header/data digests for enhanced integrity.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blockbridge-nvme-secure
provisioner: csi.blockbridge.com
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
parameters:
  protocol: "nvme"
  headerDigest: "true"
  dataDigest: "true"

Production tier with explicit IOPS

Targets production-tagged backend storage via a query, requests provisioned IOPS, and retains volumes by default.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blockbridge-ssd-20k
provisioner: csi.blockbridge.com
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
parameters:
  storageQuery: "production"
  iops: "20000"
  multipath: "true"

Upgrade Procedure

This procedure describes how to upgrade the Blockbridge CSI driver in-place. In most environments, deleting the existing deployment and recreating it with the new release is sufficient and non-disruptive for running workloads.

Key points

  • Running Pods continue to read/write during the upgrade. Existing mounts stay attached.
  • Provisioning, attach/detach, and snapshot operations may pause briefly while controller and node pods restart.
  • Secrets, PersistentVolumes, PersistentVolumeClaims, and application workloads are not removed by this procedure.

Prerequisites

  • Confirm the target driver version supports your Kubernetes version (see Supported Versions).
  • Ensure node requirements (iSCSI and/or NVMe/TCP) are satisfied on all nodes (see Requirements).
  • If you have customized the default StorageClass or other manifests, back them up before proceeding.

Remove the currently deployed release

kubectl delete -f https://get.blockbridge.com/kubernetes/6.1/csi-blockbridge-v3.3.0.yaml

Deploy the new release

kubectl create -f https://get.blockbridge.com/kubernetes/6.1/csi-blockbridge-v3.4.0.yaml

Verify the upgrade

kubectl -n kube-system get pods -l role=csi-blockbridge
kubectl get csidrivers

Wait until the controller StatefulSet and node DaemonSet pods are Running. New operations (provision, attach, snapshot) will succeed after the controller and node pods are ready.

If you don’t have the original manifest, you can still upgrade by deleting labeled resources, then installing the new release. Use with care:

kubectl -n kube-system delete statefulset,daemonset,service,deployment,serviceaccount,role,rolebinding -l role=csi-blockbridge
kubectl delete clusterrole,clusterrolebinding,csidriver,storageclass,volumesnapshotclass -l role=csi-blockbridge
kubectl create -f https://get.blockbridge.com/kubernetes/6.1/csi-blockbridge-v3.4.0.yaml

Post-upgrade notes

  • Volume expansion: To use the new volume expansion capability, set allowVolumeExpansion: true on your StorageClasses. Existing volumes continue to function; expansion applies when you edit a PVC’s requested size.
  • NVMe/TCP integrity options: If desired, update StorageClasses to set headerDigest/dataDigest parameters.
  • Topology: Existing BlockbridgeClusters secrets remain valid. Multi-cluster topology labeling continues to be honored.
  • StorageClasses: If the release includes updated example StorageClasses, you may apply them as needed. Changing a StorageClass does not affect existing volumes.
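
The expansion step above amounts to raising the claim's requested size. A sketch of the edited PVC spec fragment, assuming its StorageClass sets allowVolumeExpansion: true:

```yaml
# Only the storage request changes; Kubernetes and the driver handle the
# volume growth and, where supported, online filesystem expansion.
spec:
  resources:
    requests:
      storage: 20Gi   # raised from the original, smaller request
```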

Rollback

To revert, delete the new deployment and recreate the previous one using its manifest:

kubectl delete -f https://get.blockbridge.com/kubernetes/6.1/csi-blockbridge-v3.4.0.yaml
kubectl create -f https://get.blockbridge.com/kubernetes/6.1/csi-blockbridge-v3.3.0.yaml

TROUBLESHOOTING

App Stuck in ContainerCreating

When the application is stuck in ContainerCreating, check to see if the mount has failed.

Symptom

Check the app status.

kubectl get pod/blockbridge-iscsi-app
NAME               READY   STATUS              RESTARTS   AGE
blockbridge-iscsi-app   0/2     ContainerCreating   0          20s

kubectl describe pod/blockbridge-iscsi-app
Events:
  Type     Reason                  Age   From                            Message
  ----     ------                  ----  ----                            -------
  Normal   Scheduled               10s   default-scheduler               Successfully assigned default/blockbridge-iscsi-app to kubelet.localnet
  Normal   SuccessfulAttachVolume  10s   attachdetach-controller         AttachVolume.Attach succeeded for volume "pvc-71c37e84-b302-11e9-a93f-0242ac110003"
  Warning  FailedMount             1s    kubelet, kubelet.localnet       MountVolume.MountDevice failed for volume "pvc-71c37e84-b302-11e9-a93f-0242ac110003" : rpc error: code = Unknown desc = runtime_error: /etc/iscsi/initiatorname.iscsi not found; ensure 'iscsi-initiator-utils' is installed.

Resolution

  • Ensure iSCSI client support is installed on the node running the kubelet.
  • For CentOS/RHEL, install the iscsi-initiator-utils package:

    dnf install iscsi-initiator-utils

  • For Ubuntu, install the open-iscsi package:

    apt install open-iscsi

Symptom

Check the app status.

kubectl get pod/blockbridge-iscsi-app
NAME               READY   STATUS              RESTARTS   AGE
blockbridge-iscsi-app   0/2     ContainerCreating   0          20s

kubectl describe pod/blockbridge-iscsi-app
Events:
  Type     Reason                  Age    From                     Message
  ----     ------                  ----   ----                     -------
  Normal   Scheduled               10m    default-scheduler        Successfully assigned default/blockbridge-iscsi-app to crc-l6qvn-master-0
  Normal   SuccessfulAttachVolume  10m    attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-9b2e8116-62a6-4089-8b0d-fab0f839b7aa"
  Warning  FailedMount             9m56s  kubelet                  MountVolume.MountDevice failed for volume "pvc-9b2e8116-62a6-4089-8b0d-fab0f839b7aa" : rpc error: code = Unknown desc = exec_error: Failed to connect to bus: No data available
iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: Could not login to [iface: default, target: iqn.2009-12.com.blockbridge:t-pjwajzvdho-471c1b66-e24d-4377-a16b-71ac1d580061, portal: 172.16.100.129,3260].
iscsiadm: initiator reported error (20 - could not connect to iscsid)
iscsiadm: Could not log into all portals

Resolution

  • Ensure the iSCSI daemon is installed and running on the node running the kubelet.
  • For CentOS/RHEL, install the iscsi-initiator-utils package and start the daemon:

    dnf install iscsi-initiator-utils
    systemctl enable --now iscsid

  • For Ubuntu, install the open-iscsi package and start the daemon:

    apt install open-iscsi
    systemctl enable --now iscsid

Symptom

NVMe volume fails to be attached, with an error reported by the csi-blockbridge-node pod:

ERROR: Failed to find `nvme` command: please ensure the 'nvme-cli' package is installed and try again.

Resolution

Ensure the host running the kubelet has the nvme-cli package installed and the nvme-tcp kernel module loaded. See the driver requirements section for more details.

Provisioning Unauthorized

In this failure mode, provisioning fails with an “unauthorized” message.

Symptom

Check the PVC describe output.

kubectl describe pvc csi-pvc-blockbridge

Provisioning failed due to “unauthorized” because the authorization access token is not valid. Ensure the correct access token is entered in the secret.

  Warning  ProvisioningFailed    6s (x2 over 19s)  csi.blockbridge.com csi-provisioner-blockbridge-0 2caddb79-ec46-11e8-845d-465903922841  Failed to provision volume with StorageClass "blockbridge-gp": rpc error: code = Internal desc = unauthorized_error: unauthorized: unauthorized

Resolution

Verify your access tokens and backend API URLs are correct for each Storage Cluster.

Provisioning Storage Class Invalid

Provisioning fails with an “invalid storage class” error.

Symptom

Check the PVC describe output:

kubectl describe pvc csi-pvc-blockbridge

Provisioning failed because the storage class specified was invalid.

  Warning  ProvisioningFailed  7s (x3 over 10s)  persistentvolume-controller  storageclass.storage.k8s.io "blockbridge-gp" not found

Resolution

Ensure the StorageClass exists with the same name.

kubectl get storageclass blockbridge-gp
Error from server (NotFound): storageclasses.storage.k8s.io "blockbridge-gp" not found
  • If it doesn’t exist, create the StorageClass:

    kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/csi/v1.0.0/csi-storageclass.yaml

  • Alternatively, download and edit the desired StorageClass:

    curl -OsSL https://get.blockbridge.com/kubernetes/6.0/csi/v1.0.0/csi-storageclass.yaml

Make whatever changes you need to in csi-storageclass.yaml. Apply the updates using kubectl:

kubectl apply -f ./csi-storageclass.yaml

In the background, the PVC continually retries. Once the above changes are complete, it will pick up the storage class change.

kubectl get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
csi-pvc-blockbridge     Bound     pvc-6cb93ab2-ec49-11e8-8b89-46facf8570bb   5Gi        RWO            blockbridge-gp     4s

App Stuck in Pending

One of the causes for an application stuck in pending is a missing Persistent Volume Claim (PVC).

Symptom

The output of kubectl get pod shows that the app has a status of Pending.

kubectl get pod blockbridge-iscsi-app
NAME               READY     STATUS    RESTARTS   AGE
blockbridge-iscsi-app   0/2       Pending   0          14s

Use kubectl describe pod to reveal more information. In this case, the PVC is not found.

kubectl describe pod blockbridge-iscsi-app
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  12s (x6 over 28s)  default-scheduler  persistentvolumeclaim "csi-pvc-blockbridge" not found

Resolution

Create the PVC if necessary and ensure that it’s valid. First, validate that it’s missing.

kubectl get pvc csi-pvc-blockbridge
Error from server (NotFound): persistentvolumeclaims "csi-pvc-blockbridge" not found

If it’s missing, create it.

kubectl apply -f https://get.blockbridge.com/kubernetes/6.0/examples/csi-pvc.yaml
persistentvolumeclaim "csi-pvc-blockbridge" created

In the background, the application retries automatically and succeeds in starting.

kubectl describe pod blockbridge-iscsi-app
  Normal   Scheduled               8s  default-scheduler                  Successfully assigned blockbridge-iscsi-app to aks-nodepool1-56242131-0
  Normal   SuccessfulAttachVolume  8s  attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-5332e169-ec4f-11e8-8b89-46facf8570bb"
  Normal   SuccessfulMountVolume   8s  kubelet, aks-nodepool1-56242131-0  MountVolume.SetUp succeeded for volume "default-token-bx8b9"