Understanding Kubernetes API Server Concurrency Controls

Kubernetes API performance depends heavily on how the API server manages concurrent requests. Two important parameters control how many simultaneous operations the control plane can process: --max-requests-inflight and --max-mutating-requests-inflight.

These flags define how many concurrent read and write requests the API server allows before it starts rejecting new ones with HTTP 429 Too Many Requests errors. They exist to prevent resource exhaustion and protect etcd and the API server itself from overload.

How the API Server Handles Requests

The API server processes every incoming request through a pipeline that includes authentication, authorization, admission control, and storage operations.

Before these stages, each request is subject to inflight limits:

  • Non-mutating requests (GET, LIST, WATCH) are controlled by --max-requests-inflight.
  • Mutating requests (POST, PUT, PATCH, DELETE) are limited by --max-mutating-requests-inflight.

Internally, Kubernetes uses semaphore-like counters implemented in Go to manage concurrency. When all available slots are occupied, new requests are rejected immediately.

Default Values and Tuning

The default values are usually:

  • --max-requests-inflight=400
  • --max-mutating-requests-inflight=200
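On self-managed control planes these are ordinary kube-apiserver command-line flags, usually set in the static pod manifest. The raised values below are illustrative, not a recommendation:

```
kube-apiserver \
  --max-requests-inflight=800 \
  --max-mutating-requests-inflight=400
```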

Increasing these numbers allows more concurrent requests but consumes more CPU and memory and can create backpressure on etcd.
Setting them too low causes frequent throttling and timeouts for controllers and users.

A general rule of thumb is to keep the read limit around twice the write limit.
The optimal configuration depends on the control plane’s CPU, memory, and the overall cluster size.

Monitoring and Observability

Monitoring API server performance is key to proper tuning.
The following Prometheus metrics provide visibility:

  • apiserver_current_inflight_requests
  • apiserver_request_total{code="429"}
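As a starting point (assuming the standard label names exposed by the apiserver, such as request_kind), two PromQL queries over these metrics might look like:

```
# Current in-flight requests, split by read vs. write
sum by (request_kind) (apiserver_current_inflight_requests)

# Rate of throttled (429) requests over the last 5 minutes
sum(rate(apiserver_request_total{code="429"}[5m]))
```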

If 429 errors appear regularly without corresponding etcd latency increases, the API server limit is likely too restrictive.
If etcd latency rises first, the bottleneck is the storage layer, not the API layer.

Always adjust these flags gradually and validate the impact using Prometheus or Grafana dashboards.

API Priority and Fairness (APF)

Modern Kubernetes versions enable API Priority and Fairness (APF) by default.
This subsystem provides a dynamic way to manage concurrency through FlowSchemas and PriorityLevels.

The inflight flags still act as global hard limits, while APF handles per-user and per-workload fairness.
The recommended approach is to use these flags as safety caps and rely on APF for traffic shaping and workload isolation.
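As a hedged illustration of APF traffic shaping (the ci-pipelines name and the ci-runner service account are hypothetical), a FlowSchema routing one workload's requests to the built-in workload-low priority level could look like:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: ci-pipelines            # hypothetical name
spec:
  priorityLevelConfiguration:
    name: workload-low          # built-in low-priority level
  matchingPrecedence: 1000
  distinguisherMethod:
    type: ByUser
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: ci-runner     # hypothetical service account
            namespace: ci
      resourceRules:
        - verbs: ["*"]
          apiGroups: ["*"]
          resources: ["*"]
```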

Managed Services (AKS, EKS, GKE)

On managed Kubernetes platforms, these flags can’t be changed directly — the control plane is fully managed by the cloud provider.
However, you can still influence how API requests behave and avoid throttling.

Azure Kubernetes Service (AKS)

  • You cannot change the API server flags directly.
  • Use API Priority and Fairness (APF) to control request behavior.
  • Choose a higher control plane SKU (Standard or Premium) for better performance.

Amazon Elastic Kubernetes Service (EKS)

  • AWS automatically adjusts concurrency limits based on cluster size.
  • For very large clusters or CI/CD-heavy environments, use multiple smaller clusters to spread the load.

Google Kubernetes Engine (GKE)

  • GKE automatically scales the control plane to handle load.
  • You cannot modify inflight flags directly.
  • You can define FlowSchemas for specific workloads if you need fine-grained API control.

Security and DoS Protection

These concurrency flags also play a critical role in protecting against denial-of-service attacks.
Without them, a flood of LIST or WATCH requests could exhaust the API server’s resources and cause a cluster-wide outage.

To protect against such risks:

  • Keep reasonable inflight limits.
  • Enable API Priority and Fairness with limitResponse.reject for low-priority users.
  • Use RBAC and NetworkPolicies to limit who can access the API.
  • Apply client-side throttling in controllers and operators.
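For the limitResponse.reject recommendation above, a minimal sketch of a priority level that fails fast instead of queuing excess traffic (the low-priority name and the share value are illustrative):

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: PriorityLevelConfiguration
metadata:
  name: low-priority            # hypothetical name
spec:
  type: Limited
  limited:
    nominalConcurrencyShares: 5 # small share of total concurrency
    limitResponse:
      type: Reject              # return 429 immediately instead of queuing
```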

Maxime.
