Author: zigmax

Understanding Kubernetes API Server Concurrency Controls

Kubernetes API performance depends heavily on how the API server manages concurrent requests. Two important parameters control how many simultaneous operations the control plane can process: --max-requests-inflight and --max-mutating-requests-inflight.

These flags define how many concurrent read and write requests the API server allows before it starts rejecting new ones with HTTP 429 Too Many Requests errors. They exist to prevent resource exhaustion and protect etcd and the API server itself from overload.

How the API Server Handles Requests

The API server processes every incoming request through a pipeline that includes authentication, authorization, admission control, and storage operations.

Before these stages, each request is subject to inflight limits:

  • Non-mutating requests (GET, LIST, WATCH) are controlled by --max-requests-inflight.
  • Mutating requests (POST, PUT, PATCH, DELETE) are limited by --max-mutating-requests-inflight.

Internally, Kubernetes uses semaphore-like counters implemented in Go to manage concurrency. When all available slots are occupied, new requests are rejected immediately.
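This rejection path can be sketched with a small Go program — a simplified model of the semaphore pattern described above, not the actual Kubernetes filter code. A buffered channel acts as the counting semaphore; a request that finds no free slot is rejected with HTTP 429 immediately rather than queued.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// withMaxInFlight models the API server's inflight limit: a buffered
// channel serves as a counting semaphore with one slot per request.
func withMaxInFlight(limit int, next http.Handler) http.Handler {
	sem := make(chan struct{}, limit)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case sem <- struct{}{}: // slot available: admit the request
			defer func() { <-sem }()
			next.ServeHTTP(w, r)
		default: // all slots occupied: reject immediately
			w.WriteHeader(http.StatusTooManyRequests)
		}
	})
}

func main() {
	entered := make(chan struct{})
	release := make(chan struct{})
	slow := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		entered <- struct{}{} // signal that this request holds the only slot
		<-release             // block until told to finish
	})

	srv := httptest.NewServer(withMaxInFlight(1, slow))
	defer srv.Close()

	done := make(chan struct{})
	go func() {
		http.Get(srv.URL) // first request occupies the single slot
		close(done)
	}()
	<-entered

	resp, _ := http.Get(srv.URL) // second request finds no free slot
	fmt.Println("second request status:", resp.StatusCode)
	resp.Body.Close()

	close(release)
	<-done
}
```

With a limit of 1 and the first request deliberately parked inside the handler, the second request is rejected deterministically.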

Default Values and Tuning

The default values are usually:

  • --max-requests-inflight=400
  • --max-mutating-requests-inflight=200

Increasing these numbers allows more concurrent requests but consumes more CPU and memory and can create backpressure on etcd.
Setting them too low causes frequent throttling and timeouts for controllers and users.

A general rule of thumb is to keep the read limit around twice the write limit.
The optimal configuration depends on the control plane’s CPU, memory, and the overall cluster size.
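On self-managed control planes the flags are passed on the kube-apiserver command line; with kubeadm that usually means editing the static pod manifest. The path and values below are illustrative:

```yaml
# Excerpt from a kubeadm-style static pod manifest,
# typically /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --max-requests-inflight=800           # reads: roughly twice the write limit
    - --max-mutating-requests-inflight=400  # writes
    # (remaining flags unchanged)
```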

Monitoring and Observability

Monitoring API server performance is key to proper tuning.
The following Prometheus metrics provide visibility:

  • apiserver_current_inflight_requests
  • apiserver_request_total{code="429"}
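These metrics are easiest to read through a couple of PromQL queries. The metric names are the standard kube-apiserver ones; adjust label matchers to your setup:

```promql
# In-flight requests, split into read (readOnly) and write (mutating) kinds
sum(apiserver_current_inflight_requests) by (request_kind)

# Rate of 429 rejections over the last five minutes
sum(rate(apiserver_request_total{code="429"}[5m]))
```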

If 429 errors appear regularly without corresponding etcd latency increases, the API server limit is likely too restrictive.
If etcd latency rises first, the bottleneck is the storage layer, not the API layer.

Always adjust these flags gradually and validate the impact using Prometheus or Grafana dashboards.

API Priority and Fairness (APF)

Modern Kubernetes versions enable API Priority and Fairness (APF) by default.
This subsystem provides a dynamic way to manage concurrency through FlowSchemas and PriorityLevelConfigurations.

The inflight flags still act as global hard limits, while APF handles per-user and per-workload fairness.
The recommended approach is to use these flags as safety caps and rely on APF for traffic shaping and workload isolation.
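As an illustration, a FlowSchema like the following (the names and the ServiceAccount are hypothetical) routes a noisy CI ServiceAccount's LIST/WATCH traffic to a dedicated priority level instead of letting it compete with system traffic:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: ci-pipelines          # hypothetical name
spec:
  priorityLevelConfiguration:
    name: ci-low              # must reference an existing PriorityLevelConfiguration
  matchingPrecedence: 1000
  distinguisherMethod:
    type: ByUser
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: ci-runner       # hypothetical ServiceAccount
        namespace: ci
    resourceRules:
    - verbs: ["list", "watch"]
      apiGroups: ["*"]
      resources: ["*"]
      clusterScope: true
      namespaces: ["*"]
```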

Managed Services (AKS, EKS, GKE)

On managed Kubernetes platforms, these flags can’t be changed directly — the control plane is fully managed by the cloud provider.
However, you can still influence how API requests behave and avoid throttling.

Azure Kubernetes Service (AKS)

  • You cannot change the API server flags directly.
  • Use API Priority and Fairness (APF) to control request behavior.
  • Choose a higher control plane SKU (Standard or Premium) for better performance.

Amazon Elastic Kubernetes Service (EKS)

  • AWS automatically adjusts concurrency limits based on cluster size.
  • For very large clusters or CI/CD-heavy environments, use multiple smaller clusters to spread the load.

Google Kubernetes Engine (GKE)

  • GKE automatically scales the control plane to handle load.
  • You cannot modify inflight flags directly.
  • You can define FlowSchemas for specific workloads if you need fine-grained API control.

Security and DoS Protection

These concurrency flags also play a critical role in protecting against denial-of-service attacks.
Without them, a flood of LIST or WATCH requests could exhaust the API server’s resources and cause a cluster-wide outage.

To protect against such risks:

  • Keep reasonable inflight limits.
  • Enable API Priority and Fairness with limitResponse.reject for low-priority users.
  • Use RBAC and NetworkPolicies to limit who can access the API.
  • Apply client-side throttling in controllers and operators.
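The limitResponse.reject behavior mentioned above is configured on a PriorityLevelConfiguration. A sketch, with an illustrative name and share value:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: PriorityLevelConfiguration
metadata:
  name: ci-low                     # illustrative name
spec:
  type: Limited
  limited:
    nominalConcurrencyShares: 5    # small share of total concurrency
    limitResponse:
      type: Reject                 # excess requests get 429 instead of queuing
```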

Maxime.

What I Learned at fwd:cloudsec North America 2025

At the end of June, I had the chance to attend fwd:cloudsec North America 2025 in Denver, Colorado. For those unfamiliar, fwd:cloudsec is a community-driven, non-profit conference focused on cloud security research, offensive techniques, and defensive strategies. What makes it unique is its vendor-agnostic spirit: you won’t find flashy marketing keynotes or sales pitches here, just practitioners sharing what really works (and what doesn’t) in securing the cloud.

The conference ran June 30 – July 1, with two packed days of deep technical talks, hallway discussions, and a strong community vibe. All talks are recorded and available on the official YouTube playlist.

Why I Attended

As someone who spends most of my time on Kubernetes, Azure, and multi-cloud security strategy, fwd:cloudsec is one of the rare conferences that consistently delivers fresh, practical insights. My goals this year were to:

  • Learn from the latest offensive research and translate it into stronger threat models.
  • See how others are balancing platform guardrails vs. application-level controls.
  • Connect with peers facing similar large-scale challenges in runtime security, IAM complexity, and SaaS integrations.

Sessions That Shaped My Thinking

Maxime.

Kubernetes 1.34: What’s New in Security

Released on August 27, 2025, under the theme “Of Wind & Will (O’ WaW)”, Kubernetes v1.34 brings a strong security focus, reinforcing zero-trust principles, secure defaults, and identity-aware operations across the platform.

Projected ServiceAccount Tokens for Image Pulls (Beta)

– What’s new: The kubelet can now use short-lived, audience-bound ServiceAccount tokens to authenticate with container registries, eliminating static Secrets on nodes.

– Why it matters: This significantly shrinks the attack surface by removing long-lived credentials from nodes, aligning registry access with workload identity rather than node-level secrets.

Scoped Anonymous Access for API Endpoints

– What’s new: Administrators can now safely expose health endpoints (/healthz, /readyz, /livez) to unauthenticated access, while denying broader anonymous access via narrow configuration in AuthenticationConfiguration.

– Why it matters: Prevents accidental overexposure of API capabilities, balancing observability/open health checks with tightened security controls.
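A sketch of the AuthenticationConfiguration shape (the apiVersion may differ depending on your release): anonymous access is enabled only for the listed health paths and denied everywhere else. The file is passed to the API server via --authentication-config.

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1   # version may differ in your release
kind: AuthenticationConfiguration
anonymous:
  enabled: true
  conditions:          # anonymous requests are allowed only for these paths
  - path: /healthz
  - path: /readyz
  - path: /livez
```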

Pod Identity & mTLS with PodCertificateRequests (Stable)

– What’s new: Pods can now obtain X.509 certificates via PodCertificateRequests, allowing kubelet-managed issuance for use in mTLS authentication.

– Why it matters: Embeds strong, workload-specific identity into the platform, reinforcing secure communication patterns among services.

Field or Label-Aware RBAC (Enhanced Least Privilege)

– What’s new: Although not yet GA, emerging enhancements allow RBAC rules that consider node or pod-specific attributes (fields or labels) to enforce least-privilege access.

– Why it matters: Granular permissions reduce risk from overbroad role bindings, tightening control over what pods or nodes can access and do.

CEL Mutation Policies & External JWT Signing

– CEL Mutation Policies: Introduce native support for rule-based mutation using Common Expression Language (CEL), enabling secure, declarative policy enforcement within Kubernetes.
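A sketch of what such a policy can look like (the exact apiVersion and feature maturity vary by release, and the policy name and label are made up): a CEL apply-configuration expression adds a label to every created Pod.

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1   # maturity/version varies by release
kind: MutatingAdmissionPolicy
metadata:
  name: set-team-label        # hypothetical policy
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  failurePolicy: Ignore
  reinvocationPolicy: Never
  mutations:
  - patchType: ApplyConfiguration
    applyConfiguration:
      expression: >
        Object{
          metadata: Object.metadata{
            labels: {"team": "platform"}
          }
        }
```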

– External JWT Signing: Facilitates signing JWTs via external key management services, removing local key storage and enhancing auditability and security.

Mutual TLS (mTLS) for Pod-to-API Traffic

– What’s new: Kubernetes is ramping up mTLS support to secure pod-to-API server communications, though details are still unfolding.

– Why it matters: Ensures encrypted, authenticated communication between workloads and the control plane, a key zero-trust tenet.

OCI Artifact Volumes & Image Pull Security

– What’s new: Ability to mount OCI images directly as volumes, ensuring secure, versioned delivery of external files to pods.

– Why it matters: Reduces reliance on sidecars or manual injection methods, streamlining configuration while preserving integrity.
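A minimal Pod using an image volume source (the registry reference is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oci-artifact-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /data && sleep 3600"]
    volumeMounts:
    - name: artifact
      mountPath: /data
      readOnly: true
  volumes:
  - name: artifact
    image:                       # OCI image volume source
      reference: registry.example.com/configs/app-config:v1   # hypothetical artifact
      pullPolicy: IfNotPresent
```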

Conclusion

Kubernetes v1.34 represents a meaningful step forward in embedding robust security into the platform itself. From per-pod identity to safer defaults, explicit anonymous access handling, and fine-grained policy enforcement, it advances Kubernetes toward a more zero-trust architecture.

Organizations should explore upgrading thoughtfully, especially leveraging the projected ServiceAccount tokens, pod-level certification, and scoped anonymous access to immediately elevate cluster security.

Maxime.