Kubernetes Requests & Limits Calculator
Requests drive scheduling. This tool converts per-pod requests into cluster totals and estimates how many nodes those requests need, given a per-node allocatable percentage.
Inputs
- Pods
- CPU request (mCPU per pod)
- Memory request (MiB per pod)
- CPU limit (mCPU per pod)
- Memory limit (MiB per pod)
- Node CPU (cores)
- Node memory (GiB)
- Allocatable (%): reserve capacity for the kubelet, daemonsets, and other node overhead.
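A minimal TypeScript sketch of these inputs and their units; the interface and field names are illustrative, not the tool's actual code.

```ts
// Illustrative input shape for the calculator (names are assumptions).
interface CalculatorInputs {
  pods: number;           // number of pods
  cpuRequestM: number;    // CPU request, mCPU (millicores) per pod
  memRequestMiB: number;  // memory request, MiB per pod
  cpuLimitM: number;      // CPU limit, mCPU per pod
  memLimitMiB: number;    // memory limit, MiB per pod
  nodeCpuCores: number;   // node CPU, cores
  nodeMemGiB: number;     // node memory, GiB
  allocatablePct: number; // allocatable percentage, e.g. 90
}
```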
Results
Sample output for the example scenario below:
- Total CPU requests: 15 cores
- Total memory requests: 30 GiB
- Nodes needed (requests): 3
- Allocatable per node: 7.2 cores / 28.8 GiB (90%)
Limits (burst risk)
| Metric | Total |
|---|---|
| CPU limits | 30 cores |
| Memory limits | 60 GiB |
Example scenario
- 60 pods with 250m CPU and 512Mi memory requests → estimate total requests and node count on 8 vCPU / 32 GiB nodes.
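Worked through with the formulas under "How we calculate", assuming the 90% allocatable value shown in the sample results above:
- Total CPU requests: 60 × 250m = 15,000m = 15 cores
- Total memory requests: 60 × 512 MiB = 30,720 MiB = 30 GiB
- Allocatable per 8 vCPU / 32 GiB node at 90%: 7.2 cores / 28.8 GiB
- Nodes needed: max(ceil(15 / 7.2), ceil(30 / 28.8)) = max(3, 2) = 3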
Included
- Totals for CPU/memory requests and limits from per-pod values and pod count.
- Node count estimate based on allocatable percentage (headroom).
Not included
- Bin-packing constraints (per-node pod limits, affinities, taints) and daemonset overhead.
- Network, storage, and control plane costs.
How we calculate
- Total requests = pods × per-pod request (CPU and memory).
- Allocatable per node = node capacity × allocatable percentage.
- Node estimate is the larger of the CPU-based and memory-based counts, each rounded up to a whole node (see the sketch below).
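A minimal TypeScript sketch of these formulas; the function name, parameters, and return shape are illustrative rather than the tool's actual code.

```ts
// Estimate cluster totals and node count from per-pod requests.
// Mirrors the formulas listed above; all names are assumptions.
function estimateNodes(
  pods: number,
  cpuRequestM: number,    // mCPU per pod
  memRequestMiB: number,  // MiB per pod
  nodeCpuCores: number,
  nodeMemGiB: number,
  allocatablePct: number, // e.g. 90
) {
  // Total requests = pods × per-pod request
  const totalCpuCores = (pods * cpuRequestM) / 1000;
  const totalMemGiB = (pods * memRequestMiB) / 1024;

  // Allocatable per node = node capacity × allocatable percentage
  const allocCpuCores = nodeCpuCores * (allocatablePct / 100);
  const allocMemGiB = nodeMemGiB * (allocatablePct / 100);

  // Node estimate = larger of the CPU-based and memory-based counts, rounded up
  const nodes = Math.max(
    Math.ceil(totalCpuCores / allocCpuCores),
    Math.ceil(totalMemGiB / allocMemGiB),
  );

  return { totalCpuCores, totalMemGiB, allocCpuCores, allocMemGiB, nodes };
}

// Example scenario: 60 pods at 250m / 512Mi on 8 vCPU / 32 GiB nodes, 90% allocatable
// → { totalCpuCores: 15, totalMemGiB: 30, allocCpuCores: 7.2, allocMemGiB: 28.8, nodes: 3 }
```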
FAQ
Why not size based on limits?
The scheduler places pods based on requests, not limits. Limits matter for burst risk: exceeding a CPU limit leads to throttling, and exceeding a memory limit can trigger OOM kills.
What should I use for allocatable %?
A common planning range is 85–95%, depending on kubelet/system reservations, daemonsets, and how much headroom you want.
Does this include per-node overhead like daemonsets?
Not explicitly. Use a lower allocatable % or increase requests to account for overhead.
Disclaimer
Educational use only. Not legal, financial, or professional advice. Results are estimates based on the inputs and assumptions shown on this page. Verify pricing and limits with your providers and documentation.
Last updated: 2026-01-06