Kubernetes v1.36 introduces an alpha feature that moves event filtering from the client to the API server, reducing per-replica CPU, memory, and network costs for horizontally scaled controllers watching high-cardinality resources like Pods.
The problem
In large clusters with tens of thousands of nodes, controllers that watch high-cardinality resources face a scaling bottleneck. Every replica of a horizontally scaled controller receives the full stream of events from the API server, and each replica pays the CPU, memory, and network cost to deserialize every event only to discard the objects it is not responsible for. Scaling out the controller does not reduce what each replica pays; it only multiplies the aggregate load on the API server and the network.
Some controllers, such as kube-state-metrics, already support client-side sharding. Each replica is assigned a portion of the keyspace and discards objects outside its range. This works functionally but does not reduce the volume of data flowing from the API server: N replicas still receive the full event stream, network bandwidth scales with replicas, and CPU spent on deserialization is wasted for the discarded fraction.
How server-side sharding works
Kubernetes v1.36 (KEP-5866) adds a shardSelector field to ListOptions. Clients specify a hash range using the shardRange() function, for example:
shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')
The API server computes a deterministic 64-bit FNV-1a hash of the specified field and returns only objects whose hash falls within the range [start, end). This applies to both list responses and watch event streams. The hash function produces the same result across all API server instances, so the feature is safe to use with multiple API server replicas.
Currently supported field paths are object.metadata.uid and object.metadata.namespace.
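To make the range semantics concrete, here is a minimal Go sketch of the same check the server performs. It assumes the hash input is the raw UID string; the KEP defines the exact bytes to hash, so treat that detail as an assumption.

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // inShard reports whether a field value such as metadata.uid falls in the
    // half-open range [start, end), using the same 64-bit FNV-1a hash the API
    // server applies. Covering the very top of the space (exclusive end 2^64,
    // written 0x10000000000000000 in a selector) does not fit in a uint64;
    // this sketch ignores that edge case.
    func inShard(value string, start, end uint64) bool {
        h := fnv.New64a()
        h.Write([]byte(value))
        sum := h.Sum64()
        return sum >= start && sum < end
    }

    func main() {
        uid := "3df6c8a1-9f7e-4a6b-bb1c-0a8f2f6e9d41" // an example Pod UID
        fmt.Println(inShard(uid, 0x0000000000000000, 0x8000000000000000))
    }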
Using sharded watches in controllers
Controllers typically use informers to list and watch resources. To shard the workload, each replica injects the shardSelector into the ListOptions used by its informers via WithTweakListOptions:
import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/informers"
)

// This replica's shard: the lower half of the 64-bit hash space.
shardSelector := "shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')"

// The tweak is applied to every list and watch request issued by informers
// created from this factory, so the replica only receives its own shard.
factory := informers.NewSharedInformerFactoryWithOptions(
    client,
    resyncPeriod,
    informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
        opts.ShardSelector = shardSelector
    }),
)
For a 2-replica deployment, the selectors split the hash space in half:
- Replica 0:
  shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')
- Replica 1:
  shardRange(object.metadata.uid, '0x8000000000000000', '0x10000000000000000')
A single replica can also cover non-contiguous ranges using ||:
"shardRange(object.metadata.uid, '0x0000000000000000', '0x4000000000000000') || " +
"shardRange(object.metadata.uid, '0x8000000000000000', '0xc0000000000000000')"
Verifying server support
When the API server honors a shard selector, the list response includes a shardInfo field in the response metadata that echoes back the applied selector:
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "10245",
    "shardInfo": {
      "selector": "shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')"
    }
  },
  "items": [...]
}
If shardInfo is absent, the server did not honor the shard selector and the client received the complete, unfiltered collection. In this case, the client should be prepared to handle the full result set, for example by applying client-side filtering to discard objects outside its assigned shard range.
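In Go, that check might look like the sketch below. How the new metadata surfaces in client-go types is not settled in this alpha, so the ShardInfo field on the returned list (and the ShardSelector field on ListOptions) should be read as assumptions; the fallback reuses the inShard helper sketched earlier.

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listShardedPods asks the server for this replica's shard and falls back
    // to client-side filtering when the server did not echo shardInfo back
    // (feature gate disabled, or an API server older than v1.36).
    func listShardedPods(ctx context.Context, client kubernetes.Interface, selector string, start, end uint64) ([]corev1.Pod, error) {
        pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
            ShardSelector: selector, // assumed Go field name for the new shardSelector option
        })
        if err != nil {
            return nil, err
        }
        if pods.ShardInfo != nil { // assumed Go field name for metadata.shardInfo
            return pods.Items, nil // server applied the selector; nothing more to do
        }
        // Unfiltered response: keep only the objects in this replica's range,
        // using the same FNV-1a check as the earlier inShard sketch.
        filtered := pods.Items[:0]
        for _, pod := range pods.Items {
            if inShard(string(pod.UID), start, end) {
                filtered = append(filtered, pod)
            }
        }
        return filtered, nil
    }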
Getting started
This feature is in alpha and requires enabling the ShardedListAndWatch feature gate on the API server. The Kubernetes team is seeking feedback from controller authors and operators running large clusters.
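On a test cluster you control, enabling the gate is the usual API server flag; adjust for however your control plane is managed (kubeadm, a managed offering, and so on):

    kube-apiserver --feature-gates=ShardedListAndWatch=true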
Bottom line
Server-side sharded list and watch addresses a real scaling pain point for large Kubernetes deployments. By moving filtering into the API server, it reduces per-replica overhead and makes horizontal scaling of controllers more efficient. The feature is straightforward to adopt for controllers already using informers, and the shardInfo field provides a clear way to verify that the server is honoring the shard selector.