This guide explains how to deploy Proxmox VE with Blockbridge iSCSI and NVMe storage using the native Blockbridge storage driver.
For most users, the Quickstart section is the best place to begin. It provides a step-by-step configuration sequence and is the fastest way to get started. The remaining sections cover detailed information on configuring and managing Proxmox with Blockbridge.
Last updated on Aug 20, 2025.
FEATURE OVERVIEW
Formats & Content Types
Blockbridge provides block-level storage optimized for performance, security, and efficiency. Block storage is used by Proxmox to store raw disk images. Disk images are attached to virtual machines and typically formatted with a filesystem for use by the guest.
Proxmox supports several built-in storage types. Environments with existing enterprise or datacenter storage systems can use the LVM or iSCSI/kernel storage types for shared storage in support of high-availability. For service providers, however, these solutions are simply not scalable. The configuration management required to implement and maintain Proxmox on traditional shared storage systems is too large a burden. We developed our Proxmox-native driver specifically to address these challenges.
The table below provides a high-level overview of the capabilities of popular block storage types. For a complete list of storage types, visit the Proxmox Storage Wiki.
Storage Type | Level | High-Availability | Shared | Snapshots | Multitenant | Multipath |
---|---|---|---|---|---|---|
NVMe/Blockbridge | block | yes | yes | yes | yes | yes |
iSCSI/Blockbridge | block | yes | yes | yes | yes | yes |
Ceph/RBD | block | yes | yes | yes | no | yes |
iSCSI/kernel | block | inherit [1] | yes | no | no | yes |
LVM | block | inherit [1] | yes [2] | no | no | yes |
LVM-thin | block | no | no | yes | no | yes |
iSCSI/ZFS | block | no | yes | yes | no | no |
Note 1: LVM and iSCSI inherit the availability characteristics of the underlying storage.
Note 2: LVM can be deployed on iSCSI-based storage to achieve shared storage.
High Availability
Blockbridge provides highly available storage that is self-healing. Controlplane (i.e., API) and dataplane (i.e., iSCSI, NVMe) services transparently fail over in the event of hardware failure. Depending on your network configuration, it may be appropriate to deploy Linux multipathing for protection against network failure. The Blockbridge driver supports automated multipath management.
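For example, here is a minimal sketch of a storage definition in /etc/pve/storage.cfg with automated multipath management enabled; shared-block-mpath is a hypothetical pool name and the token is abbreviated:

blockbridge: shared-block-mpath
    api_url https://blockbridge.yourcompany.com/api
    auth_token 1/nalF+/S1pO............2qitqUX79LWtpw
    multipath 1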
Multi-Tenancy & Multi-Proxmox
Blockbridge implements features critical for multi-tenant environments, including management segregation, automated performance shaping, and always-on encryption. The Blockbridge driver leverages these functions, allowing you to create storage pools dedicated for different users, applications, and performance tiers. Service providers can safely deploy multiple Proxmox clusters on Blockbridge storage without the risk of collision.
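For illustration, a sketch under the assumption that each tenant has its own Blockbridge account and authorization token (tenant-a, tenant-b, and the token placeholders are hypothetical):

# one storage pool per tenant, each backed by its own account
pvesm add blockbridge tenant-a-block -api_url https://blockbridge.yourcompany.com/api \
    -auth_token <tenant-a-token>
pvesm add blockbridge tenant-b-block -api_url https://blockbridge.yourcompany.com/api \
    -auth_token <tenant-b-token>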
High Performance
Blockbridge is heavily optimized for performance. Expect approximately a 5x write latency and IOPS advantage compared to the native Proxmox Ceph/RBD solution. Optionally, the Blockbridge driver can tune your hosts for the best possible latency and performance.
At-Rest & In-Flight Encryption
Blockbridge implements always-on per-virtual disk encryption, automated key management, and instant secure erase for at-rest security. The Blockbridge driver also supports in-flight encryption for end-to-end protection.
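In-flight encryption is controlled by the transport_encryption driver option (see Driver Options below). A minimal sketch of a pool definition with TLS transport, where shared-block-secure is a hypothetical pool name and the token is abbreviated:

blockbridge: shared-block-secure
    api_url https://blockbridge.yourcompany.com/api
    auth_token 1/nalF+/S1pO............2qitqUX79LWtpw
    transport_encryption tls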
Snapshots & Clones
Snapshots and Clones are thin and instantaneous. Both technologies take advantage of an allocate-on-write storage architecture for significantly improved latency compared to copy-on-write strategies.
Blockbridge 5.2.0 adds support for rolling back to the most recent snapshot. Support for snapshot rollback is available in version 2.1.0+ of our Proxmox Plugin.
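For example, assuming VM 100 has a most recent snapshot named snap_1 (a hypothetical name), rollback uses the standard PVE command:

qm rollback 100 snap_1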
Thin Provisioning & Data Reduction
Blockbridge supports thin-provisioning, pattern elimination, and latency-optimized adaptive data reduction. These features are transparent to Proxmox.
Version History
Qualified Releases
Version | Date | Summary |
---|---|---|
3.5.0 | Aug 11 2025 | Support for Debian Trixie and PVE9. |
3.4.1 | Jul 11 2025 | Disabled NVMe controller loss timeout. |
3.4.0 | Apr 11 2025 | Updated for latest PVE storage API. |
3.3.0 | Nov 19 2024 | Veeam v12.2 support for Proxmox; storage driver enhancements; API bug fix. |
3.2.0 | May 23 2024 | Multipath option for local binding; fixed VM detach bug. |
3.1.0 | Jun 05 2023 | Multi-tenant networking; startup/migration performance gains; LVM DoS fix. |
3.0.1 | May 08 2023 | Bug fixes and udev improvements. |
3.0.0 | Oct 03 2022 | Added NVMe/TCP (preview, Proxmox 7.2+). |
2.3.3 | May 02 2022 | Fixed multipath deactivation issue. |
2.3.2 | Mar 21 2022 | Device management overhaul; CPU and reliability improvements; bug fixes. |
2.3.1 | Feb 24 2022 | Fixed disk move failures during rename. |
2.3.0 | Feb 18 2022 | Better multipath integration; resize and multipath fixes. |
2.2.1 | Jan 21 2022 | Bug fixes, logging, and new bbpve tool. |
2.2.0 | Dec 08 2021 | PVE7.1 and APIVER 10 support; volume rename and flags; bug fixes. |
2.1.1 | Oct 26 2021 | Bug fixes; CLI and vss label improvements. |
2.1.0 | Aug 24 2021 | PVE7/Debian 11 support; container FS; snapshot rollback; clone optimizations. |
Release Changelog
Version | Type | Description |
---|---|---|
3.5.0 | Feature | Adds support for Debian Trixie and Proxmox VE 9, enabling the plugin to operate on the latest OS and virtualization platform. pve9 |
3.4.1 | Feature | Automatically adjusts the NVMe controller loss timeout by default to prevent unintended storage errors or controller resets. nvme |
3.4.0 | Feature | Updates the plugin to be compatible with the latest PVE storage API, ensuring full functionality with current Proxmox APIs. pve8 |
3.3.0 | Feature | Allows integration of Proxmox VE servers into Veeam Backup & Replication 12.2, enabling centralized backup and recovery management for Proxmox environments. veeam |
3.3.0 | Feature | Enhances the Blockbridge storage driver to fully support Veeam backup and restore operations by applying specific driver configuration quirks. veeam |
3.3.0 | Bugfix | Fixes a minor defect that could cause the volume_update_notes API request to fail under certain conditions. |
3.2.0 | Feature | Introduces a multipath deployment option to bind storage paths to local network interfaces, ensuring multiple network paths are used for redundancy and performance. multipath |
3.2.0 | Bugfix | Fixes a rare bug in volume detach operations affecting virtual machines with more than 10 disks, preventing potential data access issues. |
3.1.0 | Feature | Adds network selectors for multi-tenant Proxmox deployments, allowing the storage driver to select optimal physical networks corresponding to isolated customer networks. multi-tenancy |
3.1.0 | Feature | Improves large-scale startup and migration performance by limiting vdisk inspection when multipath is enabled, reducing API calls to the Blockbridge control plane. scale |
3.1.0 | Feature | Converts the volume inspection helper function from CLI calls to direct API calls, eliminating CLI startup overhead and improving performance. scale |
3.1.0 | Bugfix | Patches a potential DoS vulnerability in automatic LVM detection by updating the LVM global_filter to skip plugin-managed block devices. security |
3.0.1 | Feature | Implements minor improvements for processing udev change events, increasing stability during device updates. scale |
3.0.1 | Feature | General performance enhancements for improved responsiveness and reliability. |
3.0.0 | Feature | Major release introducing NVMe/TCP support, providing high-performance storage connectivity in preview mode for service provider customers. nvme |
3.0.0 | Feature | Supports both iSCSI and NVMe storage, with NVMe support requiring Proxmox VE 7.2. nvme |
2.3.3 | Bugfix | Resolves a rare multipath volume deactivation issue caused by poorly timed administrative LVM probes, ensuring stranded multipath devices are properly removed with I/O queueing disabled. multipath |
2.3.2 | Feature | Implements a performance-optimized overhaul of Linux device management rules to avoid unnecessary probing of partitions, filesystems, and LVM volumes, improving VM startup times and reducing CPU usage during system reboot and migration. scale |
2.3.2 | Bugfix | Fixes compatibility issues with older Perl dependencies in Proxmox VE 6. pve6 compat |
2.3.2 | Bugfix | Cleans up occasional stranded iSCSI sessions during volume deactivation to prevent hung processes. iscsi |
2.3.1 | Bugfix | Corrects occasional failures when moving disks between virtual machines by ensuring the disk is explicitly detached before the rename operation. |
2.3.0 | Feature | Improves integration, reliability, and performance with Linux multipathing, including better handling of multipath devices. multipath |
2.3.0 | Feature | Storage pools now support switching between single-path and multipath configurations. multipath |
2.3.0 | Feature | Enhances interaction with multipathd, particularly for read-only media. multipath |
2.3.0 | Bugfix | Fixes online volume resize issues when using multipath devices. multipath |
2.2.1 | Feature | Adds bbpve, a management and diagnostic tool for improved supportability and troubleshooting. |
2.2.1 | Feature | Persistent logging of unexpected task failures for easier debugging. support |
2.2.1 | Bugfix | Fixes remaining paths that could supply tainted data to Proxmox, preventing potential inconsistencies. compat |
2.2.1 | Bugfix | Corrects harmless but annoying warnings related to undefined size values. support |
2.2.0 | Feature | Supports Proxmox VE 7.1 and APIVER 10, ensuring compatibility with newer Proxmox releases. pve7 compat |
2.2.0 | Feature | Adds volume rename support, allowing users to rename storage volumes safely. |
2.2.0 | Feature | Adds support for the protected volume flag, preventing accidental modifications or deletions. |
2.2.0 | Bugfix | Closes a race condition where volume activation could succeed before the associated devpath was populated, particularly for cloud-init drives. |
2.2.0 | Bugfix | Fixes multipath device path management issues for more reliable storage handling. multipath |
2.2.0 | Feature | Cleans up handling of certificate verification errors to improve security and stability. security |
2.1.1 | Bugfix | Requires blockbridge-cli version 5.2.1 to ensure host attach repair leaves healthy connections intact. |
2.1.1 | Feature | Changes vss_label_prefix to an enum with default pool; can be set to none to disable VSS label prefixing. |
2.1.0 | Feature | Adds support for PVE7 and Debian 11 (Bullseye), including necessary driver API updates and packaging changes. containers |
2.1.0 | Feature | Enables container filesystem (rootdir) support for external storage plugins, allowing storage of container data previously limited to fixed types. containers |
2.1.0 | Feature | Adds snapshot rollback functionality (limited to the most recent snapshot, requires Blockbridge 5.2.0+). snapshots |
2.1.0 | Bugfix | Fixes issues with resizing attached disks that triggered Perl taint checking errors. compat |
2.1.0 | Feature | Optimizes linked clones by using a single shared base snapshot per disk, ensuring template disks remain static. clones |
2.1.0 | Feature | Relaxes restrictions on snapshot names, allowing characters beyond those permitted in iSCSI target IQNs. snapshots |
2.1.0 | Bugfix | Prefixes backend VSS labels with the Proxmox pool ID to prevent collisions and confusion when multiple pools use the same backend account. support |
Proxmox Compatibility
PVE Version | Debian Version | QEMU Version | Linux Kernel | Release Date |
---|---|---|---|---|
9.0 | 13.0 (Trixie) | 10.0.2 | 6.14.8-2 | August 2025 |
8.4 | 12.10 (Bookworm) | 9.2.0 | 6.8.12, 6.14 | April 2025 |
8.3 | 12.8 (Bookworm) | 9.0.2 | 6.8, 6.11 | November 2024 |
8.2 | 12.5 (Bookworm) | 8.1.5 | 6.8 | April 2024 |
8.1 | 12.2 (Bookworm) | 8.1.2 | 6.5 | November 2023 |
8.0 | 12.0 (Bookworm) | 8.0.2 | 6.2 | June 2023 |
7.4 | 11.6 (Bullseye) | 7.2 | 5.15 | March 2023 |
7.3 | 11.5 (Bullseye) | 7.1 | 5.15 | Nov 2022 |
7.2 | 11.3 (Bullseye) | 6.2 | 5.15 | May 2022 |
7.1 | 11.1 (Bullseye) | 6.0 | 5.11 | November 2021 |
7.0 | 11.0 (Bullseye) | 5.2 | 5.11 | July 2021 |
6.4 | 10.9 (Buster) | 5.2 | 5.4 LTS | April 2021 |
6.3 | 10.6 (Buster) | 5.1 | 5.4 LTS | November 2020 |
6.2 | 10.4 (Buster) | 5.0 | 5.4 LTS | May 2020 |
6.1 | 10.2 (Buster) | 4.1.1 | 5.3 | March 2020 |
6.0 | 10.0 (Buster) | 4.0.0 | 5.0 | July 2019 |
QUICKSTART
This section provides a quick reference for installing and configuring the Blockbridge Proxmox VE shared storage plugin.
Driver Installation
Scripted Proxmox Driver Installation
Scripted Proxmox Driver Installation for Blockbridge Release 6.1 - single node

# installs the driver on a single node
curl -fsS https://get.blockbridge.com/6.1/pve | bash

Scripted Proxmox Driver Installation for Blockbridge Release 6.1 - all nodes

# validates the cluster is healthy and all nodes are responding
curl -fsS https://get.blockbridge.com/6.1/pve | bash -s -- --all-nodes --dry-run

# installs the driver on each node in a cluster
curl -fsS https://get.blockbridge.com/6.1/pve | bash -s -- --all-nodes

Tip: Running curl directly into bash executes remote code immediately and comes with risks. If you're concerned, download the script first, review it, and then run it.

Tip: It is safe to run the all-nodes function on a partially installed cluster in order to add new nodes.

Scripted Proxmox Driver Installation for Blockbridge Release 6.0 - single node

# installs the driver on a single node
curl -fsS https://get.blockbridge.com/6.0/pve | bash

Scripted Proxmox Driver Installation for Blockbridge Release 6.0 - all nodes

# validates the cluster is healthy and all nodes are responding
curl -fsS https://get.blockbridge.com/6.0/pve | bash -s -- --all-nodes --dry-run

# installs the driver on each node in a cluster
curl -fsS https://get.blockbridge.com/6.0/pve | bash -s -- --all-nodes

Tip: Running curl directly into bash executes remote code immediately and comes with risks. If you're concerned, download the script first, review it, and then run it.

Tip: It is safe to run the all-nodes function on a partially installed cluster in order to add new nodes.

Scripted Proxmox Driver Installation for Blockbridge Release 5.2 - single node

# installs the driver on a single node
curl -fsS https://get.blockbridge.com/5.2/pve | bash

Scripted Proxmox Driver Installation for Blockbridge Release 5.2 - all nodes

# validates the cluster is healthy and all nodes are responding
curl -fsS https://get.blockbridge.com/5.2/pve | bash -s -- --all-nodes --dry-run

# installs the driver on each node in a cluster
curl -fsS https://get.blockbridge.com/5.2/pve | bash -s -- --all-nodes

Tip: Running curl directly into bash executes remote code immediately and comes with risks. If you're concerned, download the script first, review it, and then run it.

Tip: It is safe to run the all-nodes function on a partially installed cluster in order to add new nodes.
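Following the tips above, here is a minimal download-first sketch for reviewing the installer before running it; the local filename is arbitrary:

# download, review, then execute the installer
curl -fsS https://get.blockbridge.com/6.1/pve -o bb-pve-install.sh
less bb-pve-install.sh
bash bb-pve-install.sh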
Manual Proxmox Driver Installation
Manual Proxmox Driver Installation for Blockbridge Release 6.1

Import the Blockbridge software release signing key.

curl -fsS https://get.blockbridge.com/tools/6.1/debian/blockbridge-archive-keyring.gpg > \
    /usr/share/keyrings/blockbridge-archive-keyring.gpg

Verify the signing key fingerprint.

gpg --show-keys --fingerprint /usr/share/keyrings/blockbridge-archive-keyring.gpg
pub   rsa4096 2016-11-01 [SC]
      9C1D E2AE 5970 CFD4 ADC5 E0BA DDDE 845D 7ECF 5373
uid           Blockbridge (Official Signing Key) <security@blockbridge.com>
sub   rsa4096 2016-11-01 [E]

Add the Blockbridge Tools repository.

curl -fsS https://get.blockbridge.com/tools/6.1/debian/blockbridge-$(source /etc/os-release && \
    echo "$VERSION_CODENAME").sources > /etc/apt/sources.list.d/blockbridge-tools.sources

Install the plugin.

apt update ; apt install blockbridge-proxmox

Restart Proxmox services.

systemctl try-reload-or-restart pvedaemon pveproxy pvestatd pvescheduler pve-ha-lrm

Tip: Restarting the specified PVE services does not affect running VMs.
Manual Proxmox Driver Installation for Blockbridge Release 6.0

Import the Blockbridge software release signing key.

curl -fsS https://get.blockbridge.com/tools/6.0/debian/blockbridge-archive-keyring.gpg > \
    /usr/share/keyrings/blockbridge-archive-keyring.gpg

Verify the signing key fingerprint.

gpg --show-keys --fingerprint /usr/share/keyrings/blockbridge-archive-keyring.gpg
pub   rsa4096 2016-11-01 [SC]
      9C1D E2AE 5970 CFD4 ADC5 E0BA DDDE 845D 7ECF 5373
uid           Blockbridge (Official Signing Key) <security@blockbridge.com>
sub   rsa4096 2016-11-01 [E]

Add the Blockbridge Tools repository.

curl -fsS https://get.blockbridge.com/tools/6.0/debian/blockbridge-$(source /etc/os-release && \
    echo "$VERSION_CODENAME").sources > /etc/apt/sources.list.d/blockbridge-tools.sources

Install the plugin.

apt update ; apt install blockbridge-proxmox

Restart Proxmox services.

systemctl try-reload-or-restart pvedaemon pveproxy pvestatd pvescheduler pve-ha-lrm

Tip: Restarting the specified PVE services does not affect running VMs.
Manual Proxmox Driver Installation for Blockbridge Release 5.2

Import the Blockbridge software release signing key.

curl -fsS https://get.blockbridge.com/tools/5.2/debian/blockbridge-archive-keyring.gpg > \
    /usr/share/keyrings/blockbridge-archive-keyring.gpg

Verify the signing key fingerprint.

gpg --show-keys --fingerprint /usr/share/keyrings/blockbridge-archive-keyring.gpg
pub   rsa4096 2016-11-01 [SC]
      9C1D E2AE 5970 CFD4 ADC5 E0BA DDDE 845D 7ECF 5373
uid           Blockbridge (Official Signing Key) <security@blockbridge.com>
sub   rsa4096 2016-11-01 [E]

Add the Blockbridge Tools repository.

curl -fsS https://get.blockbridge.com/tools/5.2/debian/blockbridge-$(source /etc/os-release && \
    echo "$VERSION_CODENAME").sources > /etc/apt/sources.list.d/blockbridge-tools.sources

Install the plugin.

apt update ; apt install blockbridge-proxmox

Restart Proxmox services.

systemctl try-reload-or-restart pvedaemon pveproxy pvestatd pvescheduler pve-ha-lrm

Tip: Restarting the specified PVE services does not affect running VMs.
Authentication Token
This section describes creating a dedicated Blockbridge account for your Proxmox storage, and then creating an authorization token to use it. These steps only need to happen once.
Disable SSL certificate verification [Optional].

bb config api ssl-verify-peer disabled

Log in to your Blockbridge controlplane as the system user.

bb auth login
Enter a default management host: blockbridge.yourcompany.com
Authenticating to https://blockbridge.yourcompany.com/api
Enter user or access token: system
Password for system:
Authenticated; token expires in 3599 seconds.
== Authenticated as user system.

Create a dedicated Blockbridge account for your Proxmox cluster or pool.

bb account create --name proxmox

Define a reservable storage limit.

bb account update --account proxmox --size-reserve-limit 32TiB

Use 'substitute user' to switch to the new account. Note that you will have to re-authenticate as the system user.

bb auth login --su proxmox
Authenticating to https://blockbridge.yourcompany.com/api
Enter user or access token: system
Password for system: ......
Authenticated; token expires in 3599 seconds.
== Authenticated as user proxmox.

Create a persistent authorization token.

bb authorization create --notes "Proxmox Cluster token"
== Created authorization: ATH4762194C412D97FE
... [output trimmed] ...
== Access Token
access token    1/LtVVws54+bGvb/l...njz8A
Proxmox Configuration
Configure a Blockbridge storage backend by adding a new storage object:

pvesm add blockbridge shared-block-gp -api_url https://blockbridge.yourcompany.com/api \
    -ssl_verify_peer 0 -auth_token 1/nalF+/S1pO............2qitqUX79LWtpw

shared-block-gp is a placeholder name. Please use whatever naming scheme is appropriate for your environment.

DEPLOYMENT & MANAGEMENT
This section describes how to configure and manage the Blockbridge Proxmox storage plugin.
Driver Options
The following driver options can be configured in your Proxmox Storage Definition.
Parameter | Type | Values | Description |
---|---|---|---|
api_url | string | | Blockbridge controlplane API endpoint |
auth_token | string | | Blockbridge controlplane API authentication token |
ssl_verify_peer | boolean | 0, 1 (default) | Enable or disable peer certificate verification |
service_type | string | | Override default provisioning template selection |
query_include | string-list | | Require specific tags when provisioning storage |
query_exclude | string-list | | Reject specific tags when provisioning storage |
transport_encryption | enum | 'tls', 'none' (default) | Transport data encryption protocol |
multipath | boolean | 1, 0 (default) | Automatically detect and configure storage paths |
protocol | string | 'nvme', 'iscsi' (default) | Storage protocol (3.0.0+) |
local_bind | boolean | 1, 0 (default) | (multipath) Bind transport to local interfaces (3.2.0+) |
local_interfaces | string-list | | (multipath) Restrict local_bind to an explicit list of interfaces (3.2.0+) |
quirks | string-list | 'veeam-path-activation' | Enable PVE storage API bypass compensation for Veeam (3.3.0+) |
ctrl_loss_tmo | integer | | Override the NVMe controller loss timeout, in seconds; disabled by default. Warning: enabling the controller loss timeout may result in data loss. (3.4.1+) |
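For illustration, here is a hypothetical /etc/pve/storage.cfg entry combining several of these options; the pool name shared-nvme-mpath and the abbreviated token are assumptions, and NVMe requires plugin 3.0.0+ on Proxmox 7.2+:

blockbridge: shared-nvme-mpath
    api_url https://blockbridge.yourcompany.com/api
    auth_token 1/nalF+/S1pO............2qitqUX79LWtpw
    protocol nvme
    multipath 1
    transport_encryption tls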
Driver Authentication
Create a persistent authorization for Proxmox use
Optional: Disable SSL certificate verification
bb config api ssl-verify-peer disabled
Log in to your Blockbridge controlplane as the system user.
bb auth login
Enter a default management host: blockbridge.yourcompany.com
Authenticating to https://blockbridge.yourcompany.com/api
Enter user or access token: system
Password for system:
Authenticated; token expires in 3599 seconds.
== Authenticated as user system.
Create a dedicated proxmox account for storage and management isolation.
bb account create --name proxmox
== Created account: proxmox (ACT0762194C407BA625)
== Account: proxmox (ACT0762194C407BA625)
name proxmox
label proxmox
serial ACT0762194C407BA625
created 2021-01-27 16:58:53 -0500
disabled no
With the system username and password, use the "substitute user" function to switch to the newly created proxmox account:
bb auth login --su proxmox
Authenticating to https://blockbridge.yourcompany.com/api
Enter user or access token: system
Password for system: ......
Authenticated; token expires in 3599 seconds.
== Authenticated as user proxmox.
Create a persistent authorization for use by the Blockbridge storage plugin.
bb authorization create --notes "Proxmox Cluster token"
== Created authorization: ATH4762194C412D97FE
== Authorization: ATH4762194C412D97FE
notes Proxmox Cluster token
serial ATH4762194C412D97FE
account proxmox (ACT0762194C407BA625)
user proxmox (USR1B62194C407BA0E5)
enabled yes
created at 2021-01-27 16:59:08 -0500
access type online
token suffix rDznjz8A
restrict auth
enforce 2-factor false
== Access Token
access token 1/LtVVws54+bGvb/l...njz8A
*** Remember to record your access token!
Proxmox Storage Definition
Configure a Blockbridge storage backend by adding a new storage object:
pvesm add blockbridge shared-block-gp -api_url https://blockbridge.yourcompany.com/api \
-ssl_verify_peer 0 -auth_token 1/nalF+/S1pO............2qitqUX79LWtpw
Alternatively, manually add a new section to /etc/pve/storage.cfg. The /etc/pve directory is an automatically synchronized filesystem (the Proxmox cluster filesystem, or just pmxcfs), so you only need to edit the file on a single node; the changes are synchronized to all cluster members. For example, edit storage.cfg to add this section:
blockbridge: shared-block-gp
api_url https://blockbridge.yourcompany.com/api
auth_token 1/nalF+/S1pO............2qitqUX79LWtpw
ssl_verify_peer 0
Upgrading the Blockbridge Plugin
Take extra care when upgrading core PVE packages: new Proxmox releases are frequently accompanied by Storage Plugin API changes that require a corresponding Blockbridge Plugin update. Always validate in a staging environment before deploying to production!
Follow these instructions to upgrade the Blockbridge storage plugin using the apt package management CLI; they are careful to install only Blockbridge package updates. If preferred, the web-based Proxmox management interface can be used to list and install package updates.
Upgrade via the CLI
Upgrades must be performed on all Proxmox nodes. For environments with many nodes, we strongly recommend using a configuration management tool, such as Ansible, to orchestrate package updates.
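Absent a configuration management tool, here is a minimal sketch of orchestrating the upgrade over SSH; pve1 through pve3 are hypothetical hostnames:

# update package lists and install only the Blockbridge plugin on each node
for node in pve1 pve2 pve3; do
    ssh root@$node 'apt update && apt install -y blockbridge-proxmox'
done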
First, update the list of available packages:
apt update
Check to see if any updated Blockbridge packages are available:
apt list --upgradable -q blockbridge-\*
Listing...
blockbridge-proxmox/bullseye 2.3.2-193~bullseye1 all [upgradable from: 2.3.1-161~bullseye1]
In this example, an update from 2.3.1 to 2.3.2 is available. Take a look at Version History to see a summary of changes for a given release.
Install the updated blockbridge-proxmox package using the apt install command:
apt install blockbridge-proxmox
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be upgraded:
blockbridge-proxmox
1 upgraded, 0 newly installed, 0 to remove and 16 not upgraded.
Need to get 21.0 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 https://get.blockbridge.com/tools/5.2/debian bullseye/main amd64 blockbridge-proxmox all 2.3.2-193~bullseye1 [21.0 kB]
Fetched 21.0 kB in 0s (130 kB/s)
Reading changelogs... Done
(Reading database ... 87522 files and directories currently installed.)
Preparing to unpack .../blockbridge-proxmox_2.3.2-193~bullseye1_all.deb ...
Unpacking blockbridge-proxmox (2.3.2-193~bullseye1) over (2.3.1-161~bullseye1) ...
Setting up blockbridge-proxmox (2.3.2-193~bullseye1) ...
Warning: Do not use the apt upgrade command! The correct-looking command apt upgrade blockbridge-proxmox will ignore the supplied package name and proceed to install all available updates.
Finally, reload PVE services to load the updated driver:
# systemctl try-reload-or-restart pvedaemon pveproxy pvestatd pvescheduler pve-ha-lrm
The Unattended Upgrades Service
By default, PVE is configured to perform unattended daily package upgrades. To confirm your system is configured for automatic updates, use the apt-config tool:
# apt-config dump APT::Periodic::Unattended-Upgrade
APT::Periodic::Unattended-Upgrade "1";
# apt-config dump Unattended-Upgrade::Origins-Pattern
Unattended-Upgrade::Origins-Pattern "";
Unattended-Upgrade::Origins-Pattern:: "origin=Debian,codename=${distro_codename},label=Debian";
Unattended-Upgrade::Origins-Pattern:: "origin=Debian,codename=${distro_codename},label=Debian-Security";
Unattended-Upgrade::Origins-Pattern:: "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
The APT::Periodic::Unattended-Upgrade value of "1" indicates unattended-upgrades will execute once per day; a value of "0" disables unattended upgrades. The patterns specified in Unattended-Upgrade::Origins-Pattern list which package origins are eligible for unattended upgrade. By default, security updates and general updates for the current named Debian release are considered.
Blockbridge software is published using an origin of Blockbridge, so it will not be automatically updated. Depending on your security policy and appetite for surprise package updates, you may want to adjust the unattended-upgrades configuration. We recommend disabling automatic updating for all but critical security updates. To do this, edit /etc/apt/apt.conf.d/50unattended-upgrades and comment out the non-security origins. After these changes, the Unattended-Upgrade::Origins-Pattern setting will look something like this:
Unattended-Upgrade::Origins-Pattern {
// Codename based matching:
// This will follow the migration of a release through different
// archives (e.g. from testing to stable and later oldstable).
// Software will be the latest available for the named release,
// but the Debian release itself will not be automatically upgraded.
// "origin=Debian,codename=${distro_codename}-updates";
// "origin=Debian,codename=${distro_codename}-proposed-updates";
// "origin=Debian,codename=${distro_codename},label=Debian";
"origin=Debian,codename=${distro_codename},label=Debian-Security";
"origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
// Archive or Suite based matching:
// Note that this will silently match a different release after
// migration to the specified archive (e.g. testing becomes the
// new stable).
// "o=Debian,a=stable";
// "o=Debian,a=stable-updates";
// "o=Debian,a=proposed-updates";
// "o=Debian Backports,a=${distro_codename}-backports,l=Debian Backports";
};
Finally, confirm that only security origins are enabled using apt-config:
# apt-config dump Unattended-Upgrade::Origins-Pattern
Unattended-Upgrade::Origins-Pattern "";
Unattended-Upgrade::Origins-Pattern:: "origin=Debian,codename=${distro_codename},label=Debian-Security";
Unattended-Upgrade::Origins-Pattern:: "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
Troubleshooting
The Blockbridge plugin logs all interactions with both Proxmox and your Blockbridge installation to syslog at LOG_INFO level. You can view the logs with journalctl.
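For example, to watch plugin activity:

journalctl -f | grep blockbridge:    # follow live plugin logging
journalctl -b | grep blockbridge:    # review plugin logging since boot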
If you need to troubleshoot Proxmox integration, review storage settings, or run low-level Blockbridge API operations, the bbpve command-line tool is the go-to utility.
Inspect software components and pool configuration
Use bbpve to summarize package and kernel versions:
bbpve --version
Component versions:
OS: Debian GNU/Linux 11 (bullseye)
PVE: pve-manager/7.1-11/8d529482 (running kernel: 5.15.27-1-pve)
Plugin: 2.3.2-193~bullseye1
CLI: 5.3.0-1779~bullseye1
Display a compact summary of all Blockbridge storage pools. This display omits the API access token to avoid inadvertently leaking sensitive information:
bbpve --pools
bb1
api_url https://blockbridge-01.example.com/api
bb2
api_url https://blockbridge-02.example.com/api
transport_encryption tls
bb2-mpath
api_url https://blockbridge-02.example.com/api
multipath 1
Issue low-level commands directly to the Blockbridge API
bbpve provides a wrapper around the underlying Blockbridge command line suite. This allows you to execute the CLI in the context of a defined Proxmox storage pool.
As an example, say we want to enumerate the storage devices logically contained within the bb1 storage pool. Instead of logging in to the correct account by hand using bb auth login, we can simply execute the operation using the configured API endpoint and access token:
bbpve bb1 disk list
label [2] serial vss [1] capacity size size limit status
--------- ------------------- -------------------- -------- -------- ---------- ------
base DSK1962194C4062644E bb1:base-100-disk-0 40.0GiB 2.333GiB none online
base DSK1962194C40626417 bb1:vm-100-cloudinit 4.0MiB 128.0KiB none online
base DSK1962194C406268BA bb1:vm-101-cloudinit 4.0MiB 128.0KiB none online
base DSK1962194C4062697E bb1:vm-102-cloudinit 4.0MiB 128.0KiB none online
base DSK1962194C40626907 bb1:vm-102-disk-1 1.0GiB 0b none online
base DSK1962194C40626CA8 bb1:vm-105-disk-0 32.0GiB 0b none online
base DSK1962194C406269DD bb1:vm-110-cloudinit 4.0MiB 128.0KiB none online
base DSK1962194C406268A2 bb1:vm-110-disk-0 40.0GiB 2.453GiB none online
base DSK1962194C406269C5 bb1:vm-110-disk-1 2.0GiB 0b none online
base DSK1962194C40626984 bb1:vm-110-disk-2 112.0MiB 23.0MiB none online
base DSK1962194C406269BC bb1:vm-110-disk-3 112.0MiB 23.0MiB none online
base DSK1962194C406269FD bb1:vm-111-cloudinit 4.0MiB 0b none online
base DSK1962194C406269E5 bb1:vm-111-disk-0 40.0GiB 2.453GiB none online
base DSK1962194C4062699C bb1:vm-111-disk-1 2.0GiB 0b none online
base DSK1962194C40626FC2 bb1:vm-200-disk-2 1.0GiB 0b none online
This can be useful in diagnosing configuration errors, tuning storage performance, or making adjustments to disk configuration that’s not currently possible using the PVE interface directly.
PROXMOX STORAGE PRIMITIVES
Proxmox offers multiple interfaces for storage management.
- The GUI offers storage management scoped to the context of a virtual machine.
- The pvesm command provides granular storage management for a specific node.
- The qm command allows for VM-specific volume management.
- The pvesh API tool provides fine-grained storage and VM management, and can operate on any node in your Proxmox cluster. To see the available resources, check out the browsable API viewer.
For additional detail and for topics not covered in this guide, head over to the Proxmox VE Documentation Index.
Device Naming Specification
Proxmox does not maintain internal state about storage devices or connectivity. In practice, this means that Proxmox relies on device naming to know which devices are associated with virtual machines and how those devices are connected to the virtual storage controller. The general device name format is as follows:
Device Filename Specification:

vm-<vmid>-disk-<disk-id>

<vmid>: <integer> (100 - N)
    Specifies the owner VM
<disk-id>: <integer> (1 - N)
    Provides unique naming of disk files
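For example, these device filenames follow the specification (the VMIDs are illustrative):

vm-100-disk-0       # a disk owned by VM 100
vm-100-disk-1       # a second disk owned by VM 100
vm-2000-disk-0      # a disk owned by VM 2000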
Show Storage Pools
Proxmox supports multiple pools of storage. This flexibility allows for optimization of storage resources based on requirements. With Blockbridge, you can offer different classes of storage. For example, one pool can be IOPS-limited, while another can impose quality-of-service with strict performance guarantees.
Not all Proxmox storage pools allow for shared access. As such, the interfaces that you use to view storage pools are scoped to a node. When working with shared storage types, such as Blockbridge, each node will return its own view of the storage, consistent with the other nodes’ views.
PVESM
Show available storage types on the local node:
pvesm status
Name Type Status Total Used Available %
backup pbs active 65792536 7402332 55018432 11.25%
local dir active 7933384 6342208 1168472 79.94%
shared-block-gp blockbridge active 268435456 83886080 184549376 31.25%
shared-block-iops blockbridge active 268435456 33669120 234766336 12.54%
shared-file cephfs active 59158528 995328 58163200 1.68%
PVESH
Show available storage types on proxmox-1
pvesh get /nodes/proxmox-1/storage/
┌──────────────────────┬───────────────────┬─────────────┬────────┬────────────┬─────────┬────────┬────────────┬────────────┬─────────┐
│ content │ storage │ type │ active │ avail │ enabled │ shared │ total │ used │ used % │
╞══════════════════════╪═══════════════════╪═════════════╪════════╪════════════╪═════════╪════════╪════════════╪════════════╪═════════╡
│ backup │ backup │ pbs │ 1 │ 52.47 GiB │ 1 │ 0 │ 62.74 GiB │ 7.06 GiB │ 11.25% │
├──────────────────────┼───────────────────┼─────────────┼────────┼────────────┼─────────┼────────┼────────────┼────────────┼─────────┤
│ images │ shared-block-gp │ blockbridge │ 1 │ 240.00 GiB │ 1 │ 1 │ 256.00 GiB │ 16.00 GiB │ 6.25% │
├──────────────────────┼───────────────────┼─────────────┼────────┼────────────┼─────────┼────────┼────────────┼────────────┼─────────┤
│ images │ shared-block-iops │ blockbridge │ 1 │ 191.89 GiB │ 1 │ 1 │ 256.00 GiB │ 64.11 GiB │ 25.04% │
├──────────────────────┼───────────────────┼─────────────┼────────┼────────────┼─────────┼────────┼────────────┼────────────┼─────────┤
│ iso,images,vztmpl,.. │ local │ dir │ 1 │ 1.11 GiB │ 1 │ 0 │ 7.57 GiB │ 6.05 GiB │ 79.99% │
├──────────────────────┼───────────────────┼─────────────┼────────┼────────────┼─────────┼────────┼────────────┼────────────┼─────────┤
│ vztmpl,backup,iso │ shared-file │ cephfs │ 1 │ 55.47 GiB │ 1 │ 1 │ 56.42 GiB │ 972.00 MiB │ 1.68% │
└──────────────────────┴───────────────────┴─────────────┴────────┴────────────┴─────────┴────────┴────────────┴────────────┴─────────┘
List Volumes
You can enumerate volumes stored in a storage pool using the GUI, pvesm, and pvesh tools.
GUI
To generate a list of all volumes in a storage pool, we recommend Folder View. To see devices connected to a specific virtual machine, select the VM from the primary navigation pane, then select Hardware.
To see a list of all devices in the storage pool, select a storage pool from the Storage folder in the primary navigation pane (all nodes have a consistent view of storage), then select VM Disks.
PVESM
pvesm list <storage> [--vmid <integer>]
Parameter | Format | Description |
---|---|---|
storage | string | Storage pool identifier from pvesm status |
vmid | integer | Optional Virtual machine owner ID |
Example
List all volumes from the shared-block-iops pool.
pvesm list shared-block-iops
Volid Format Type Size VMID
shared-block-iops:vm-101-disk-0 raw images 34359738368 101
shared-block-iops:vm-101-disk-1 raw images 42949672960 101
shared-block-iops:vm-101-disk-2 raw images 34359738368 101
shared-block-iops:vm-101-state-foo raw images 4819255296 101
shared-block-iops:vm-10444-disk-1 raw images 34359738368 10444
shared-block-iops:vm-2000-disk-0 raw images 117440512 2000
List volumes of VM 101 stored in the shared-block-iops pool.
pvesm list shared-block-iops --vmid 101
Volid Format Type Size VMID
shared-block-iops:vm-101-disk-0 raw images 34359738368 101
shared-block-iops:vm-101-disk-1 raw images 42949672960 101
shared-block-iops:vm-101-disk-2 raw images 34359738368 101
shared-block-iops:vm-101-state-foo raw images 4819255296 101
PVESH
pvesh get <api_path> [-vmid <integer>]
Parameter | Format | Description |
---|---|---|
api_path | string | /nodes/{node}/storage/{storage}/content |
node | string | Any pve node listed in the output of pvesh get /nodes |
storage | string | Storage pool identifier from pvesh get /storage |
vmid | integer | Optional Virtual machine owner ID |
Show volumes from the shared-block-iops pool:
pvesh get /nodes/proxmox-1/storage/shared-block-iops/content
┌────────┬────────────┬────────────────────────────────────┬────────────┬───────────┬───────┬────────┬──────┬──────────────┬───────┐
│ format │ size │ volid │ ctime │ encrypted │ notes │ parent │ used │ verification │ vmid │
╞════════╪════════════╪════════════════════════════════════╪════════════╪═══════════╪═══════╪════════╪══════╪══════════════╪═══════╡
│ raw │ 32.00 GiB │ shared-block-iops:vm-101-disk-0 │ 1612628760 │ │ │ │ │ │ 101 │
├────────┼────────────┼────────────────────────────────────┼────────────┼───────────┼───────┼────────┼──────┼──────────────┼───────┤
│ raw │ 40.00 GiB │ shared-block-iops:vm-101-disk-1 │ 1612627879 │ │ │ │ │ │ 101 │
├────────┼────────────┼────────────────────────────────────┼────────────┼───────────┼───────┼────────┼──────┼──────────────┼───────┤
│ raw │ 32.00 GiB │ shared-block-iops:vm-101-disk-2 │ 1612564950 │ │ │ │ │ │ 101 │
├────────┼────────────┼────────────────────────────────────┼────────────┼───────────┼───────┼────────┼──────┼──────────────┼───────┤
│ raw │ 4.49 GiB │ shared-block-iops:vm-101-state-foo │ 1612725210 │ │ │ │ │ │ 101 │
├────────┼────────────┼────────────────────────────────────┼────────────┼───────────┼───────┼────────┼──────┼──────────────┼───────┤
│ raw │ 32.00 GiB │ shared-block-iops:vm-10444-disk-1 │ 1612566379 │ │ │ │ │ │ 10444 │
├────────┼────────────┼────────────────────────────────────┼────────────┼───────────┼───────┼────────┼──────┼──────────────┼───────┤
│ raw │ 112.00 MiB │ shared-block-iops:vm-2000-disk-0 │ 1612478241 │ │ │ │ │ │ 2000 │
└────────┴────────────┴────────────────────────────────────┴────────────┴───────────┴───────┴────────┴──────┴──────────────┴───────┘
List volumes of VM 101 that are stored in the shared-block-iops pool:
pvesh get /nodes/proxmox-1/storage/shared-block-iops/content --vmid 101
┌────────┬───────────┬────────────────────────────────────┬────────────┬───────────┬───────┬────────┬──────┬──────────────┬──────┐
│ format │ size │ volid │ ctime │ encrypted │ notes │ parent │ used │ verification │ vmid │
╞════════╪═══════════╪════════════════════════════════════╪════════════╪═══════════╪═══════╪════════╪══════╪══════════════╪══════╡
│ raw │ 32.00 GiB │ shared-block-iops:vm-101-disk-0 │ 1612628760 │ │ │ │ │ │ 101 │
├────────┼───────────┼────────────────────────────────────┼────────────┼───────────┼───────┼────────┼──────┼──────────────┼──────┤
│ raw │ 40.00 GiB │ shared-block-iops:vm-101-disk-1 │ 1612627879 │ │ │ │ │ │ 101 │
├────────┼───────────┼────────────────────────────────────┼────────────┼───────────┼───────┼────────┼──────┼──────────────┼──────┤
│ raw │ 32.00 GiB │ shared-block-iops:vm-101-disk-2 │ 1612564950 │ │ │ │ │ │ 101 │
├────────┼───────────┼────────────────────────────────────┼────────────┼───────────┼───────┼────────┼──────┼──────────────┼──────┤
│ raw │ 4.49 GiB │ shared-block-iops:vm-101-state-foo │ 1612725210 │ │ │ │ │ │ 101 │
└────────┴───────────┴────────────────────────────────────┴────────────┴───────────┴───────┴────────┴──────┴──────────────┴──────┘
Allocate A Volume
Proxmox volumes are provisioned in the context of a VM. In fact, the naming scheme for volumes includes the VMID. When using the GUI, volume allocation automatically attaches the volume to the VM. When pvesm or pvesh are used, you are required to attach volumes as a separate step (see: Attach A Volume). This section covers explicit allocation of volumes as a distinct action.
PVESM
pvesm alloc <storage> <vmid> <filename> <size>
Arguments
Parameter | Format | Description |
---|---|---|
storage | string | Storage pool identifier from pvesm status |
vmid | integer | Virtual machine owner ID |
filename | string | See: Device Naming Specification |
size | \d+[MG]? | Default is KiB (1024). Optional suffixes M (MiB, 1024K) and G (GiB, 1024M) |
Example
Allocate a 10G volume for VMID 100 from the general purpose performance pool.
pvesm alloc shared-block-gp 100 vm-100-disk-1 10G
successfully created 'shared-block-gp:vm-100-disk-1'
Note: Volume names must follow the Device Naming Specification; an invalid name is rejected with an error such as: illegal name '101-vm-disk-2' - should be 'vm-10444-*'.
PVESH
pvesh create <api_path> -vmid <vmid> -filename <filename> -size <size>
Arguments
Volume management with pvesh is node-relative. However, Blockbridge's shared storage permits uniform access to storage from all Proxmox nodes. You are free to execute allocation requests against any cluster member. The volume will be available globally.
Parameter | Format | Description |
---|---|---|
api_path | string | /nodes/{node}/storage/{storage}/content |
node | string | Any pve node listed in the output of pvesh get /nodes |
storage | string | Storage pool identifier from pvesh get /storage |
vmid | integer | Virtual machine owner ID |
filename | string | See: Device Naming Specification |
size | \d+[MG]? | Default: KiB (1024). Other Suffixes: M (MiB, 1024K) and G (GiB, 1024M) |
Example
Allocate a 10G volume for VMID 100 from the general purpose performance pool.
pvesh create /nodes/proxmox-1/storage/shared-block-gp/content -vmid 100 -filename vm-100-disk-1 -size 10G
shared-block-gp:vm-100-disk-1
Delete A Volume
You can use either the pvesm or pvesh command to delete a volume. It may appear as though the tools use inconsistent terminology. However, keep in mind that pvesh is submitting a DELETE HTTP request to the resource URL.
PVESM
pvesm free <volume> --storage <storage>
Parameter | Format | Description |
---|---|---|
volume | string | Name of volume to destroy |
storage | string | Storage pool identifier |
Example
Destroy a volume allocated from the general purpose performance pool.
pvesm free vm-100-disk-10 --storage shared-block-gp
Removed volume 'shared-block-gp:vm-100-disk-10'
PVESH
pvesh delete <api_path>
Parameter | Format | Description |
---|---|---|
api_path | string | /nodes/{node}/storage/{storage}/content/{volume} |
node | string | Any pve node listed in the output of pvesh get /nodes |
storage | string | Storage pool identifier |
volume | string | Name of volume to destroy |
Example
Destroy a volume allocated from the general purpose performance pool.
pvesh delete /nodes/proxmox-1/storage/shared-block-gp/content/vm-100-disk-1
Removed volume 'shared-block-gp:vm-100-disk-1'
Attach A Volume
An attachment is effectively a VM configuration reference to a storage device. An attachment describes how a storage device is connected to a VM and how the guest OS sees it. The attach operation is principally a VM operation.
Note: Volumes that are allocated but not attached appear in the VM's Hardware list as unused. The attach and detach commands are essential primitives required to move a disk between virtual machines.
GUI
The GUI allows you to attach devices from the Hardware list that are identified as Unused. Select an Unused disk from the Hardware table and click the Edit button. Assign a Bus and Device number, then Add the device to the VM.
Tip: Run qm rescan --vmid <vmid> on the Proxmox node that owns the VM if you suspect that an unused device is missing.
QM
qm set <vmid> --scsihw <scsi-adapter> --scsi<N> <storage>:<volume>
Parameter | Format | Description |
---|---|---|
vmid | string | The (unique) ID of the VM. |
scsi-adapter | string | SCSI controller model (man qm for more details) |
N | integer | SCSI target/device number (min: 0, max: 30) |
storage | string | Storage pool identifier |
volume | string | Name of volume to attach |
Example
Attach device vm-100-disk-1 to VM 100.
qm set 100 --scsihw virtio-scsi-pci --scsi1 shared-block-gp:vm-100-disk-1
update VM 100: -scsi1 shared-block-gp:vm-100-disk-1 -scsihw virtio-scsi-pci
Note: The qm command must be executed on the home node of the VM.
PVESH
pvesh create <api_path> -scsihw <scsi-adapter> -scsi<n> <storage>:<volume>
Parameter | Format | Description |
---|---|---|
api_path | string | /nodes/{node}/qemu/{vmid}/config |
node | string | pve node owner of the VM |
scsi-adapter | string | SCSI controller model (man qm for more details) |
N | integer | SCSI target/device number (min: 0, max: 30) |
storage | string | Storage pool identifier |
volume | string | Name of volume to attach |
Example
Attach device vm-100-disk-1 to VM 100.
pvesh create /nodes/proxmox-1/qemu/100/config -scsihw virtio-scsi-pci -scsi1 shared-block-gp:vm-100-disk-1
update VM 100: -scsi1 shared-block-gp:vm-100-disk-1 -scsihw virtio-scsi-pci
Tip: You can execute the pvesh command while operating on any node in your Proxmox cluster.
Detach A Volume
The detach operation updates the configuration of a VM to remove references to a storage device. If the VM is running, the device will disappear from the guest. Detach is a non-destructive operation. It does not overwrite or release storage.
Note: detach in the GUI is synonymous with unlink in pvesh and qm.
GUI
The GUI allows you to detach devices in the Hardware list. Select a disk from the Hardware table and click the Detach button.
QM
qm unlink <vmid> --idlist scsi<N>
Parameter | Format | Description |
---|---|---|
vmid | string | The (unique) ID of the VM. |
N | integer | SCSI target/device number (min: 0, max: 30) |
Example
Unlink the scsi1 device from VM 100.
qm unlink 100 --idlist scsi1
update VM 100: -delete scsi1
Tip: Verify the device was detached with the qm config <VMID> command.
PVESH
pvesh set <api_path> -idlist scsi<N>
Parameter | Format | Description |
---|---|---|
api_path | string | /nodes/{node}/qemu/{vmid}/unlink |
node | string | pve node owner of the VM |
vmid | string | The (unique) ID of the VM. |
N | integer | SCSI target/device number (min: 0, max: 30) |
Example
Unlink the scsi1 device from VM 100.
pvesh set /nodes/proxmox-1/qemu/100/unlink -idlist scsi1
update VM 100: -delete scsi1
Tip: Verify the device was detached with pvesh get /nodes/<node>/qemu/<vmid>/config.
Resize A Volume
The resize operation extends the logical address space of a storage device. Reducing the size of a device is not permitted by Proxmox. The resize operation can only execute against devices that are attached to a VM.
GUI
The GUI allows you to resize devices available from the Hardware list. Select a disk from the Hardware table and click the Resize button.
QM
qm resize <vmid> scsi<N> <size>
Parameter | Format | Description |
---|---|---|
vmid | string | The (unique) ID of the VM. |
N | integer | SCSI target/device number (min: 0, max: 30) |
size | +?\d+(.\d+)?[KMGT]? | With the + sign the value is added to the actual size of the volume. Without it, the value is taken as absolute. |
Example
Extend the device attached to scsi1 of VM 100 by 1GiB.
qm resize 100 scsi1 +1G
PVESH
pvesh set <api_path> -disk scsi<N> -size <size>
Parameter | Format | Description |
---|---|---|
api_path | string | /nodes/{node}/qemu/{vmid}/resize |
node | string | pve node owner of the VM |
vmid | string | The (unique) ID of the VM. |
N | integer | SCSI target/device number (min: 0, max: 30) |
size | +?\d+(.\d+)?[KMGT]? | With the + sign the value is added to the actual size of the volume. Without it, the value is taken as absolute. |
Example
Extend the device attached to scsi1 of VM 100 by 1GiB.
pvesh set /nodes/proxmox-1/qemu/100/resize -disk scsi1 -size +1G
Create A Snapshot
Snapshots provide a recovery point for a virtual machine’s state, configuration, and data. Proxmox orchestrates snapshots via QEMU and backend storage providers. When you snapshot a Proxmox VM that uses virtual disks backed by Blockbridge, your disk snapshots are thin, they complete instantly, and they avoid copy-on-write (COW) performance penalties.
Note: Volumes that are not attached to the VM (i.e., unused) are ignored.
GUI
In the Snapshots panel for the VM, click Take Snapshot. The duration of the operation depends on whether VM state is preserved.
QM
qm snapshot <vmid> <snapname> --description <desc> --vmstate <save>
Parameter | Format | Description |
---|---|---|
vmid | string | The (unique) ID of the VM. |
snapname | string | The name of the snapshot. |
desc | string | Snapshot description - Optional |
save | boolean | [0,1] Save VM RAM state - Optional |
Example
Take a snapshot of VM 100, including RAM.
qm snapshot 100 snap_1 --description "hello world" --vmstate 1
PVESH
pvesh create <api_path> -snapname <snapname> -description <desc> -vmstate <save>
Parameter | Format | Description |
---|---|---|
api_path | string | /nodes/{node}/qemu/{vmid}/snapshot |
node | string | pve node owner of the VM. |
vmid | string | The (unique) ID of the VM. |
snapname | string | The name of the snapshot. |
desc | string | Snapshot description - Optional |
save | boolean | [0,1] Save VM RAM state - Optional |
Example
Take a snapshot of VM 100, including RAM.
pvesh create /nodes/proxmox-1/qemu/100/snapshot -snapname snap_1 -description "hello world" -vmstate 1
Remove A Snapshot
Delete a VM snapshot and release associated storage resources.
GUI
In the Snapshots panel for the VM, select the snapshot to remove, and then click Remove. A dialog will appear to confirm your intent.
QM
qm delsnapshot <vmid> <snapname> --force <force>
Parameter | Format | Description |
---|---|---|
vmid | string | The (unique) ID of the VM. |
snapname | string | The name of the snapshot. |
force | boolean | Remove config, even if storage removal fails. - Optional |
Example
Gracefully delete the snapshot snap1 of VM 100.
qm delsnapshot 100 snap1
PVESH
pvesh delete <api_path> -force <force>
Parameter | Format | Description |
---|---|---|
api_path | string | /nodes/{node}/qemu/{vmid}/snapshot/{snapname} |
node | string | pve node owner of the VM |
vmid | string | The (unique) ID of the VM. |
snapname | string | The name of the snapshot to delete. |
force | boolean | Remove config, even if storage removal fails. - Optional |
Example
Gracefully delete the snapshot snap1 of VM 100.
pvesh delete /nodes/proxmox-1/qemu/100/snapshot/snap1