My Experience at Cloud Native Rejekts NA 2025

After speaking last year at Cloud Native Rejekts Salt Lake City 2024 with Mathieu on Platform Engineering Loves Security: Shift Down to Your Platform, not Left to Your Developers!, I returned to Cloud Native Rejekts North America 2025 in Atlanta, this time as an attendee. I was eager to reconnect with the community, discover new research, and exchange ideas on the evolution of Kubernetes security, AI integration, and platform engineering.

Cloud Native Rejekts has always been a special event for me. It’s intimate, technically rich, and community-driven, the perfect pre-KubeCon gathering where bold ideas and unfiltered discussions shape the future of our ecosystem.

Atlanta’s edition had an incredible mix of platform engineers, SREs, developers, and security practitioners from across North America and beyond. What I love most about Rejekts is the raw energy: talks are deeply technical, hallway conversations turn into architecture sessions, and everyone genuinely wants to share and learn.

The venue setup encouraged collaboration, and the diversity of topics, from runtime isolation to AI-driven observability, reflected just how fast our space is evolving.

My Top 5 Highlights from Cloud Native Rejekts 2025

1. Catch Me If You Can: A Kubernetes Escape Story

A live, show-stopping demo by Jed Salazar and James Petersen revealed the anatomy of a real-world container escape from the initial breakout to lateral movement across a Kubernetes cluster. The session unpacked how weak isolation, misconfigured permissions, and monitoring blind spots open the door to stealthy takeovers, and how defenses like user namespaces in Kubernetes 1.33, capability hardening, and runtime detection close it.

Through a step-by-step attack reconstruction, the speakers connected kernel-level exploits to cluster-wide compromise, then flipped the lens to show how to build multi-tenant isolation, detect breakout signals early, and contain the blast radius.

A must-watch for anyone serious about runtime hardening and defense-in-depth in Kubernetes.


2. Beyond the Default Scheduler: Navigating GPU Multitenancy in the AI Era

Shivay Lamba, Hrittik Roy, and Saiyam Pathak explored one of the toughest challenges in AI infrastructure: secure GPU sharing.


They broke down how time-slicing improves utilization but weakens isolation and why NVIDIA MIG’s hardware partitioning (cores, memory, L2 cache) is a game changer.


By leveraging schedulers like KAI, Volcano, and Kueue, they showed how to build secure, fair, and efficient multi-tenant GPU clusters that can power the next generation of AI workloads.


3. Make Your Developer’s Pains Go Away, with the Right Level of Abstraction for Your Platform

Mathieu Benoit and Artem Lajko tackled a reality every engineer knows too well: developers don’t spend their day coding, they spend it battling TicketOps, infrastructure blockers, and security gates.

Their talk presented a battle-tested approach to building Internal Developer Platforms (IDPs) with empathy, powered by Score and Kro.

The key takeaway: successful platforms don’t hide Kubernetes, they abstract it at the right level.

By combining GitOps workflows with automation, they demonstrated how developers can deploy secure, production-grade workloads effortlessly, focusing on their apps while the platform handles the hard parts behind the scenes. It wasn’t about YAMLs or GitOps, it was about developer joy.


4. In-SPIRE-ing Identity: Using SPIRE for Verifiable Container Isolation

Marina Moore delivered a brilliant session on cryptographic attestation for workloads using SPIFFE/SPIRE. Edera’s architecture lets teams prove that workloads run in isolated zones with end-to-end encryption and non-falsifiable build provenance: essentially, identity as a security perimeter.

Her insights into deployment challenges and configuration trade-offs offered a roadmap for teams moving toward verifiable workload trust in cloud-native systems.



5. The Paranoid’s Guide to Deploying Skynet’s Interns

This talk was a reality check for anyone deploying AI agents in production. Autonomous agents are powerful, but they are being plugged into legacy or unsecured architectures, a recipe for chaos.

The speaker, Dan Fernandez, broke down the anatomy of AI agent ecosystems (Agents, MCP servers, Tools, and Memory) before exposing the major security pitfalls:

  • Tangled Web of Trust: Agents interact with tools and data sources of mixed trust levels, risking internal system compromise.
  • Persistent Threats: Because agents “remember,” attacks can persist, evolve, and resurface over time.
  • Amplified Supply Chain Risks: Every autonomous action turns dependencies into potential attack vectors.
  • Compounding Complexity: Multi-agent comms and centralized MCPs obscure visibility and weaken control.

The key takeaway: treat AI agents as untrusted, dynamic supply chains. Apply strict segmentation, isolation, and defense-in-depth to every component, from MCP servers to memory stores. Paranoia isn’t overkill here; it’s essential for survival in the era of autonomous AI.


Networking and Shared Purpose

Beyond the sessions, hallway conversations were pure gold.
I had deep discussions about:

  • Integrating Kubernetes security controls within Platform Engineering and Internal Developer Platforms (IDPs) to deliver secure-by-default services while maintaining developer velocity.
  • Measuring platform security maturity using structured threat models and practical scorecards.
  • Embedding AI-driven risk assessment directly into CI/CD pipelines for continuous validation.

Watch the Replays

It’s clear that the Cloud Native Rejekts community thrives on transparency, mentorship, and shared improvement, values that continue to guide my own journey in cloud-native security.

And finally, a heartfelt thank you to all the volunteers who made this event possible. Your passion, generosity, and dedication are what make Cloud Native Rejekts such a unique experience. It’s more than a conference; it’s a community space where creativity meets curiosity, where ideas grow into open-source projects, and where the next wave of cloud-native innovation quietly takes shape.

Maxime.

Restricting Pod Access to Azure IMDS (Preview)

In the world of Kubernetes on Azure, there’s been a longstanding default: any pod in your AKS cluster can query the Azure Instance Metadata Service (IMDS). That’s powerful — but also risky. Today, Microsoft introduces a preview feature that lets you block pod access to IMDS, tightening your cluster’s security boundaries.

Why Restrict IMDS?

IMDS is a REST API that provides VM metadata: VM specs, networking, upcoming maintenance events, and (critically) identity tokens. Because it’s accessible by default (via IP 169.254.169.254), a pod that’s compromised or misbehaving could exploit this to pull sensitive information or impersonate the node’s identity. That’s a serious threat.

By limiting which pods can reach IMDS, you reduce the “blast radius” of potential vulnerabilities.
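To make the risk concrete, here is what an unrestricted pod can do today. The endpoints below are the documented Azure IMDS REST paths; the identity token call is the dangerous one, since it returns a credential for the node's managed identity.

```
# From inside any pod on an unrestricted node: read VM metadata.
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

# ...or request an access token for the node's managed identity,
# effectively impersonating the node against Azure APIs.
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
```

Both calls require nothing more than network reachability to 169.254.169.254, which every pod has by default.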

How the Restriction Works (Preview)

  • Non-host-network pods (hostNetwork: false) lose access to IMDS entirely once the restriction is enabled.
  • Host-network pods (hostNetwork: true) retain access, since they share the node’s network namespace.
  • Azure implements this via iptables rules on the node to block traffic from non-host pods.
  • Tampering with iptables (e.g. via SSH or privileged containers) can break enforcement, so best practices like disabling SSH or avoiding privileged pods come into play.
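Conceptually, the enforcement looks something like the rule below. This is a simplified illustration of the mechanism, not the actual rule set AKS installs, which is an implementation detail of the preview.

```
# Illustrative only (assumption: not AKS's exact rules).
# Traffic from pod network namespaces is routed/bridged through the
# node and traverses the FORWARD chain, so a drop rule there cuts off
# non-host-network pods. Host-network pods send via OUTPUT and are
# unaffected, matching the behavior described above.
iptables -I FORWARD -d 169.254.169.254/32 -j DROP
```

This also explains the tampering caveat: anything with root on the node (SSH, a privileged container) can simply delete the rule.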

Limitations & Considerations

Because this is still in preview, there are a number of tradeoffs:

  • Many AKS add-ons do not support IMDS restriction (e.g. Azure Monitor, Application Gateway Ingress, Flux/GitOps, Azure Policy, etc.).
  • Windows node pools aren’t supported yet.
  • Enabling restriction on a cluster that uses unsupported add-ons will fail.
  • After enabling or disabling, you must reimage the nodes (e.g. via az aks upgrade --node-image-only) to apply or remove the iptables rules.
  • The feature is opt-in and isn’t backed by an SLA or warranty.

Getting Started: Enabling IMDS Restriction

  1. Use Azure CLI 2.61.0+ and install or update aks-preview.
  2. Register the IMDSRestrictionPreview feature and refresh the ContainerService provider.
  3. Ensure OIDC issuer is enabled on your cluster (required).
  4. To create a new cluster with this feature: az aks create ... --enable-imds-restriction
  5. To enable it on an existing cluster: az aks update ... --enable-imds-restriction, then reimage nodes for enforcement.
  6. To verify, deploy test pods with and without hostNetwork: true and attempt to curl IMDS — the non-host pods should fail, the host pods should succeed.
  7. To disable, run az aks update --disable-imds-restriction and reimage.
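Put together, the steps above look roughly like this. The flag and feature names come from the AKS documentation; resource and pod names are placeholders, and preview behavior may change.

```
# Preview setup: aks-preview extension and feature registration.
az extension add --name aks-preview --upgrade
az feature register --namespace Microsoft.ContainerService --name IMDSRestrictionPreview
az provider register --namespace Microsoft.ContainerService

# Enable on an existing cluster, then reimage nodes so the rules apply.
az aks update -g <resource-group> -n <cluster> --enable-imds-restriction
az aks upgrade -g <resource-group> -n <cluster> --node-image-only

# Verify: a default (non-host-network) pod should now fail to reach IMDS.
kubectl run imds-test --image=curlimages/curl --restart=Never -- \
  curl -s --max-time 5 -H "Metadata:true" \
  "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
kubectl logs imds-test   # expect a timeout rather than metadata
```

Repeat the verification with a hostNetwork: true pod to confirm it still succeeds, as described in step 6.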

Final Thoughts

This new capability gives AKS users an additional layer of defense: limiting which pods can access VM metadata and identities.

Reference: https://learn.microsoft.com/en-us/azure/aks/imds-restriction

Maxime.

Understanding Kubernetes API Server Concurrency Controls

Kubernetes API performance depends heavily on how the API server manages concurrent requests. Two important parameters control how many simultaneous operations the control plane can process: --max-requests-inflight and --max-mutating-requests-inflight.

These flags define how many concurrent read and write requests the API server allows before it starts rejecting new ones with HTTP 429 Too Many Requests errors. They exist to prevent resource exhaustion and protect etcd and the API server itself from overload.

How the API Server Handles Requests

The API server processes every incoming request through a pipeline that includes authentication, authorization, admission control, and storage operations.

Before these stages, each request is subject to inflight limits:

  • Non-mutating requests (GET, LIST, WATCH) are controlled by --max-requests-inflight.
  • Mutating requests (POST, PUT, PATCH, DELETE) are limited by --max-mutating-requests-inflight.

Internally, Kubernetes uses semaphore-like counters implemented in Go to manage concurrency. When all available slots are occupied, new requests are rejected immediately.
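The semaphore behavior can be modeled in a few lines. The real kube-apiserver is written in Go; this Python sketch only illustrates the mechanism: a request either claims a free slot immediately or is rejected, which the server surfaces as HTTP 429.

```python
import threading

class InflightLimiter:
    """Toy model of the API server's two inflight gates: one slot pool
    for reads (--max-requests-inflight) and one for writes
    (--max-mutating-requests-inflight)."""

    MUTATING = {"POST", "PUT", "PATCH", "DELETE"}

    def __init__(self, max_inflight: int, max_mutating_inflight: int):
        self._read = threading.Semaphore(max_inflight)
        self._write = threading.Semaphore(max_mutating_inflight)

    def _sem(self, verb: str) -> threading.Semaphore:
        return self._write if verb.upper() in self.MUTATING else self._read

    def start(self, verb: str) -> bool:
        # Non-blocking acquire: a full pool means immediate rejection
        # (HTTP 429 Too Many Requests) instead of queueing.
        return self._sem(verb).acquire(blocking=False)

    def finish(self, verb: str) -> None:
        # Release the slot once the request has been fully processed.
        self._sem(verb).release()
```

Note that reads and writes draw from separate pools, which is why exhausting LIST capacity does not block a POST.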

Default Values and Tuning

The default values are usually:

  • --max-requests-inflight: 400
  • --max-mutating-requests-inflight: 200

Increasing these numbers allows more concurrent requests but consumes more CPU and memory and can create backpressure on etcd.
Setting them too low causes frequent throttling and timeouts for controllers and users.

A general rule of thumb is to keep the read limit around twice the write limit.
The optimal configuration depends on the control plane’s CPU, memory, and the overall cluster size.
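On a self-managed control plane, these flags go on the kube-apiserver command line. An illustrative static-pod fragment following the roughly 2:1 read/write rule of thumb (the values here are examples, not recommendations):

```
# Fragment of a kube-apiserver static pod manifest (illustrative values).
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --max-requests-inflight=800
    - --max-mutating-requests-inflight=400
```

A change here restarts the API server, so roll it out one control plane node at a time.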

Monitoring and Observability

Monitoring API server performance is key to proper tuning.
The following Prometheus metrics provide visibility:

  • apiserver_current_inflight_requests
  • apiserver_request_total{code="429"}

If 429 errors appear regularly without corresponding etcd latency increases, the API server limit is likely too restrictive.
If etcd latency rises first, the bottleneck is the storage layer, not the API layer.

Always adjust these flags gradually and validate the impact using Prometheus or Grafana dashboards.
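For example, these two PromQL expressions, built only on the metrics above, give current occupancy and the throttling rate:

```
# Current occupancy, split by read vs. mutating requests
apiserver_current_inflight_requests

# Rate of rejected (429) requests over the last 5 minutes
sum(rate(apiserver_request_total{code="429"}[5m]))
```

Comparing the first against your configured limits shows how much headroom remains before the second starts climbing.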

API Priority and Fairness (APF)

Modern Kubernetes versions enable API Priority and Fairness (APF) by default.
This subsystem provides a dynamic way to manage concurrency through FlowSchemas and PriorityLevels.

The inflight flags still act as global hard limits, while APF handles per-user and per-workload fairness.
The recommended approach is to use these flags as safety caps and rely on APF for traffic shaping and workload isolation.
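As a sketch of what APF configuration looks like, the manifest below uses the real flowcontrol.apiserver.k8s.io/v1 kinds but hypothetical names: it routes a noisy CI service account into a limited priority level that rejects overflow rather than queueing it.

```
# Hypothetical example: constrain a CI service account via APF.
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: PriorityLevelConfiguration
metadata:
  name: ci-limited
spec:
  type: Limited
  limited:
    nominalConcurrencyShares: 10
    limitResponse:
      type: Reject        # reject excess traffic instead of queueing it
---
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: ci-jobs
spec:
  priorityLevelConfiguration:
    name: ci-limited
  distinguisherMethod:
    type: ByUser
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: ci-runner
        namespace: ci
    resourceRules:
    - verbs: ["*"]
      apiGroups: ["*"]
      resources: ["*"]
```

The global inflight flags still cap the total; this schema only decides how the CI traffic fares within that cap.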

Managed Services (AKS, EKS, GKE)

On managed Kubernetes platforms, these flags can’t be changed directly — the control plane is fully managed by the cloud provider.
However, you can still influence how API requests behave and avoid throttling.

Azure Kubernetes Service (AKS)

  • You cannot change the API server flags directly.
  • Use API Priority and Fairness (APF) to control request behavior.
  • Choose a higher control plane SKU (Standard or Premium) for better performance.

Amazon Elastic Kubernetes Service (EKS)

  • AWS automatically adjusts concurrency limits based on cluster size.
  • For very large clusters or CI/CD-heavy environments, use multiple smaller clusters to spread the load.

Google Kubernetes Engine (GKE)

  • GKE automatically scales the control plane to handle load.
  • You cannot modify inflight flags directly.
  • You can define FlowSchemas for specific workloads if you need fine-grained API control.

Security and DoS Protection

These concurrency flags also play a critical role in protecting against denial-of-service attacks.
Without them, a flood of LIST or WATCH requests could exhaust the API server’s resources and cause a cluster-wide outage.

To protect against such risks:

  • Keep reasonable inflight limits.
  • Enable API Priority and Fairness with limitResponse.reject for low-priority users.
  • Use RBAC and NetworkPolicies to limit who can access the API.
  • Apply client-side throttling in controllers and operators.
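Client-side throttling, the last bullet, can be as simple as a token bucket, which is the same model behind client-go's QPS/burst client settings. A minimal Python sketch with a pluggable clock (the class name and API are mine, for illustration):

```python
import time

class TokenBucket:
    """Minimal client-side rate limiter in the spirit of client-go's
    QPS/burst settings: tokens refill at `qps` per second up to `burst`,
    and each API call consumes one token."""

    def __init__(self, qps: float, burst: int, clock=time.monotonic):
        self.qps = qps
        self.burst = burst
        self._tokens = float(burst)   # start full: allow an initial burst
        self._clock = clock
        self._last = clock()

    def allow(self) -> bool:
        now = self._clock()
        # Refill proportionally to elapsed time, capped at burst.
        self._tokens = min(self.burst,
                           self._tokens + (now - self._last) * self.qps)
        self._last = now
        if self._tokens >= 1.0:
            self._tokens -= 1.0
            return True
        return False   # caller should back off instead of hitting the API
```

A controller would call allow() before each API request and sleep or requeue when it returns False, keeping its own load below the server-side limits.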

Maxime.