Static Capacity Node Pools in EKS Auto Mode
Amazon EKS Auto Mode supports static capacity node pools that maintain a fixed number of nodes regardless of pod demand. Static capacity node pools are useful for workloads that require predictable capacity, Reserved Instances, or specific compliance requirements where you need to maintain a consistent infrastructure footprint.
Unlike dynamic node pools that scale based on pod scheduling demands, static capacity node pools maintain the number of nodes that you have configured.
Configure a static capacity node pool
To create a static capacity node pool, set the `replicas` field in your NodePool specification. The `replicas` field defines the exact number of nodes that the node pool maintains. See Examples for how to configure `replicas`.
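As a minimal sketch (the node pool name here is a placeholder, and the `default` NodeClass is assumed to exist in your cluster), setting `spec.replicas` is what makes a node pool static:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: static-nodepool # placeholder name
spec:
  replicas: 3 # EKS Auto Mode maintains exactly 3 nodes
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
```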
Static capacity node pool considerations
Static capacity node pools have several important constraints and behaviors:
Configuration constraints:
- Cannot switch modes: Once you set `replicas` on a node pool, you cannot remove it. The node pool cannot switch between static and dynamic modes.
- Limited resource limits: Only the `limits.nodes` field is supported in the limits section. CPU and memory limits are not applicable.
- No weight field: The `weight` field cannot be set on static capacity node pools, since node selection is not based on priority.
Operational behavior:
- No consolidation: Nodes in static capacity pools are not considered for consolidation.
- Scaling operations: Scale operations bypass node disruption budgets but still respect PodDisruptionBudgets.
- Node replacement: Nodes are still replaced for drift (such as AMI updates) and expiration, based on your configuration.
Best practices
Capacity planning:
- Set `limits.nodes` higher than `replicas` to allow for temporary scaling during node replacement operations.
- Consider the maximum capacity needed during node drift or AMI updates when setting limits.
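The relationship between the two fields can be sketched as follows (the values are illustrative):

```yaml
spec:
  replicas: 10   # desired steady-state node count
  limits:
    nodes: 12    # headroom for surge nodes during replacement operations
```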
Instance selection:
- Use specific instance types when you have Reserved Instances or specific hardware requirements.
- Avoid overly restrictive requirements that might limit instance availability during scaling.
Disruption management:
- Configure appropriate disruption budgets to balance availability with maintenance operations.
- Consider your application's tolerance for node replacement when setting budget percentages.
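One way to express this balance is a disruption budget on the node pool; the 10% figure below is illustrative:

```yaml
spec:
  disruption:
    budgets:
      - nodes: 10% # replace at most 10% of nodes at a time
```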
Monitoring:
- Regularly monitor the `status.nodes` field to ensure your desired capacity is maintained.
- Set up alerts for when the actual node count deviates from the desired replicas.
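A simple check along these lines can be scripted with kubectl; this sketch assumes cluster access and a node pool named static-nodepool:

```shell
#!/bin/sh
# Compare desired replicas with the current node count (sketch)
DESIRED=$(kubectl get nodepool static-nodepool -o jsonpath='{.spec.replicas}')
ACTUAL=$(kubectl get nodepool static-nodepool -o jsonpath='{.status.nodes}')
if [ "$ACTUAL" != "$DESIRED" ]; then
  echo "WARNING: static-nodepool has $ACTUAL nodes, expected $DESIRED"
fi
```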
Zone distribution:
- For high availability, spread static capacity across multiple Availability Zones.
- When you create a static capacity node pool that spans multiple Availability Zones, EKS Auto Mode distributes the nodes across the specified zones, but the distribution is not guaranteed to be even.
- For predictable, even distribution across Availability Zones, create separate static capacity node pools, each pinned to a specific Availability Zone using the `topology.kubernetes.io/zone` requirement.
- For example, if you need 12 nodes evenly distributed across three zones, create three node pools with 4 replicas each, rather than one node pool with 12 replicas across three zones.
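The per-zone pattern can be sketched as one NodePool per zone (names, zones, and limits here are illustrative); repeat the manifest for each zone, changing the name and zone value:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: static-us-west-2a # one pool per zone
spec:
  replicas: 4 # 12 nodes total across three such pools
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: "topology.kubernetes.io/zone"
          operator: In
          values: ["us-west-2a"] # pin this pool to a single zone
  limits:
    nodes: 6 # headroom for replacement operations
```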
Scale a static capacity node pool
You can change the number of replicas in a static capacity node pool using the kubectl scale command:
```bash
# Scale down to 5 nodes
kubectl scale nodepool static-nodepool --replicas=5
```
When scaling down, EKS Auto Mode will terminate nodes gracefully, respecting PodDisruptionBudgets and allowing running pods to be rescheduled to remaining nodes.
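Alternatively, you can change the `replicas` field directly on the NodePool object; a merge patch is one way to do this (sketch, assuming a node pool named static-nodepool):

```shell
# Set replicas to 5 by patching the NodePool spec directly
kubectl patch nodepool static-nodepool --type merge -p '{"spec":{"replicas":5}}'
```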
Monitor static capacity node pools
Use the following commands to monitor your static capacity node pools:
```bash
# View node pool status
kubectl get nodepool static-nodepool

# Get detailed information including current node count
kubectl describe nodepool static-nodepool

# Check the current number of nodes
kubectl get nodepool static-nodepool -o jsonpath='{.status.nodes}'
```
The `status.nodes` field shows the current number of nodes managed by the node pool, which should match your desired `replicas` count under normal conditions.
Troubleshooting
Nodes not reaching desired replicas:
- Check whether the `limits.nodes` value is sufficient.
- Verify that your requirements don't overly constrain instance selection.
- Review AWS service quotas for the instance types and Regions you're using.
Node replacement taking too long:
- Adjust disruption budgets to allow more concurrent replacements.
- Check whether PodDisruptionBudgets are preventing node termination.
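Listing PodDisruptionBudgets cluster-wide is a useful first step; a budget showing zero allowed disruptions can block node drain:

```shell
# List PodDisruptionBudgets in all namespaces; an ALLOWED DISRUPTIONS
# value of 0 indicates a budget that can block node termination
kubectl get pdb --all-namespaces
```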
Unexpected node termination:
- Review the `expireAfter` and `terminationGracePeriod` settings.
- Check for manual node terminations or AWS maintenance events.
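Both settings live on the node pool template; the values below are illustrative:

```yaml
spec:
  template:
    spec:
      expireAfter: 720h            # replace nodes after 30 days
      terminationGracePeriod: 24h  # maximum time allowed to drain a node
```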
Examples
Basic static capacity node pool
```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: basic-static
spec:
  replicas: 5
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["m"]
        - key: "topology.kubernetes.io/zone"
          operator: In
          values: ["us-west-2a"]
  limits:
    nodes: 8 # Allow scaling up to 8 during operations
```
Static capacity with specific instance types
```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: reserved-instances
spec:
  replicas: 20
  template:
    metadata:
      labels:
        instance-type: reserved
        cost-center: production
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: "node.kubernetes.io/instance-type"
          operator: In
          values: ["m5.2xlarge"] # Specific instance type
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["on-demand"]
        - key: "topology.kubernetes.io/zone"
          operator: In
          values: ["us-west-2a", "us-west-2b", "us-west-2c"]
  limits:
    nodes: 25
  disruption:
    # Conservative disruption for production workloads
    budgets:
      - nodes: 10%
```
Multi-zone static capacity node pool
```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: multi-zone-static
spec:
  replicas: 12 # Will be distributed across specified zones
  template:
    metadata:
      labels:
        availability: high
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["c", "m"]
        - key: "eks.amazonaws.com/instance-cpu"
          operator: In
          values: ["8", "16"]
        - key: "topology.kubernetes.io/zone"
          operator: In
          values: ["us-west-2a", "us-west-2b", "us-west-2c"]
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["on-demand"]
  limits:
    nodes: 15
  disruption:
    budgets:
      - nodes: 25%
```
Static capacity with capacity reservation
The following example shows how to use a static capacity node pool with an EC2 Capacity Reservation. For more information on using EC2 Capacity Reservations with EKS Auto Mode, see Control deployment of workloads into Capacity Reservations with EKS Auto Mode.
NodeClass defining the `capacityReservationSelectorTerms`:
```yaml
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: capacity-reservation-nodeclass
spec:
  role: AmazonEKSNodeRole
  securityGroupSelectorTerms:
    - id: sg-0123456789abcdef0
  subnetSelectorTerms:
    - id: subnet-0123456789abcdef0
  capacityReservationSelectorTerms:
    - id: cr-0123456789abcdef0
```
NodePool referencing the above NodeClass and using `karpenter.sh/capacity-type: reserved`:
```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: static-capacity-reservation-nodepool
spec:
  replicas: 5
  limits:
    nodes: 8 # Allow scaling up to 8 during operations
  template:
    metadata: {}
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: capacity-reservation-nodeclass
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ['reserved']
```