Kubernetes v1.36, released in May 2026, promotes volume group snapshots to General Availability (GA). The feature lets operators capture crash-consistent backups across multiple PersistentVolumeClaims (PVCs) with a single label selector, then restore all volumes from a consistent point-in-time — no more piecemeal restores that risk application inconsistency.
Overview
Volume group snapshots entered Alpha in Kubernetes v1.27, moved to Beta in v1.32, and to a second Beta in v1.34. With v1.36, the API version is promoted to groupsnapshot.storage.k8s.io/v1. The feature relies on three CustomResourceDefinitions (CRDs): VolumeGroupSnapshot, VolumeGroupSnapshotContent, and VolumeGroupSnapshotClass. It is only supported for CSI volume drivers.
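Before relying on the GA API, it is worth confirming that the three CRDs are installed and that the cluster serves the `v1` version. A quick check (a sketch; assumes the external-snapshotter CRDs and controller have been deployed to the cluster):

```shell
# Verify the group-snapshot CRDs exist
kubectl get crd \
  volumegroupsnapshots.groupsnapshot.storage.k8s.io \
  volumegroupsnapshotcontents.groupsnapshot.storage.k8s.io \
  volumegroupsnapshotclasses.groupsnapshot.storage.k8s.io

# List the resources served under the group-snapshot API group
kubectl api-resources --api-group=groupsnapshot.storage.k8s.io
```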
What it does
A group snapshot represents copies of multiple volumes taken at the same point in time, achieving write-order consistency across them. This is critical for stateful applications that span multiple volumes, such as an application storing data in one volume and logs in another: if snapshots are taken at different times, restoring from them would leave the application in an inconsistent state. Because all member snapshots share a single cut point, crash-consistent group snapshots can be taken without quiescing the application first (application-level consistency still requires coordination; see Tradeoffs below).
How to use it
Creating a group snapshot:
- Label the PVCs you want to group:

```shell
kubectl label pvc pvc-0 group=myGroup
kubectl label pvc pvc-1 group=myGroup
```
- Create a VolumeGroupSnapshot object with a label selector:

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1
kind: VolumeGroupSnapshot
metadata:
  name: snapshot-daily-20260422
  namespace: demo-namespace
spec:
  volumeGroupSnapshotClassName: csi-groupSnapclass
  source:
    selector:
      matchLabels:
        group: myGroup
```
- Define a VolumeGroupSnapshotClass (required for dynamic provisioning):

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1
kind: VolumeGroupSnapshotClass
metadata:
  name: csi-groupSnapclass
driver: example.csi.k8s.io
deletionPolicy: Delete
```
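Once the VolumeGroupSnapshot is created, the snapshot controller sets its readyToUse status field when every member snapshot has been cut. A quick way to watch for that, and to see the per-PVC VolumeSnapshots the controller generated (names assume the example above):

```shell
# Check whether the group snapshot is ready to use
kubectl get volumegroupsnapshot snapshot-daily-20260422 -n demo-namespace \
  -o jsonpath='{.status.readyToUse}'

# List the individual VolumeSnapshots created for the group's members
kubectl get volumesnapshot -n demo-namespace
```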
Restoring from a group snapshot:
Create individual PVCs, each referencing a VolumeSnapshot that is part of the group snapshot:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: examplepvc-restored-2026-04-22
  namespace: demo-namespace
spec:
  storageClassName: example-sc
  dataSource:
    name: snapshot-0962a745b2bf930bb385b7b50c9b08af471f1a16780726de19429dd9c94eaca0
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOncePod
  resources:
    requests:
      storage: 100Mi
```
Repeat for each volume in the group.
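The member VolumeSnapshot names are generated by the controller, so before writing the restore PVCs you need to map each snapshot back to the PVC it was cut from. One way to do that (a sketch, using the `spec.source.persistentVolumeClaimName` field of the snapshot.storage.k8s.io/v1 API):

```shell
# Map each member VolumeSnapshot to its source PVC
kubectl get volumesnapshot -n demo-namespace \
  -o custom-columns='SNAPSHOT:.metadata.name,SOURCE_PVC:.spec.source.persistentVolumeClaimName'
```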
Tradeoffs
- CSI-only: The feature requires CSI drivers that implement the group controller service and the RPCs CreateVolumeGroupSnapshot, DeleteVolumeGroupSnapshot, and GetVolumeGroupSnapshot. In-tree or legacy drivers are not supported.
- Vendor certification: Storage vendors must certify their CSI implementations against the new group-snapshot contract. Expect a ramp-up period.
- No application quiescence: While group snapshots provide crash consistency, they do not guarantee application-consistent snapshots (e.g., flushing in-memory caches). For that, you still need application-level coordination.
When to use it
Use volume group snapshots for any stateful workload that spans multiple PVCs and requires crash-consistent recovery points — databases with separate data and log volumes, content management systems, or any multi-volume application where restoring from inconsistent snapshots would cause data corruption.
Bottom line
Volume group snapshots in GA give Kubernetes operators a standardized, CSI-backed way to take crash-consistent multi-volume backups and restores. The API is stable, the workflow is straightforward (label + create a VolumeGroupSnapshot), and the feature eliminates a long-standing gap in Kubernetes' storage snapshot story. If your storage vendor supports it, this is the default way to back up multi-volume stateful workloads.