Category: Kubernetes (AKS)

AKS | Artifact Streaming 

Hi!

High-performance compute workloads often grapple with the challenge of managing large container images, leading to extended image pull times and delayed workload deployments. Recognizing this pain point, Azure Kubernetes Service (AKS) introduces Artifact Streaming, a feature designed to accelerate the delivery of container images from Azure Container Registry (ACR) to AKS nodes. This article delves into the benefits and implementation of Artifact Streaming and shows how it can significantly enhance the performance of your AKS workloads.

Large images in high-performance compute workloads can impede efficiency, resulting in prolonged image pull times and, subsequently, delayed deployment of workloads. This bottleneck can be particularly problematic for workloads that require rapid scalability and responsiveness.

Artifact Streaming on AKS addresses this challenge by optimizing how container images are delivered from ACR to AKS. Instead of pulling the entire image before a container can start, AKS with Artifact Streaming pulls only the content needed for the initial pod startup and streams the rest from ACR on demand. This targeted approach dramatically reduces the time spent waiting on image pulls, resulting in faster and more efficient workload deployments.
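
Because the image content is streamed from the registry at runtime, streaming artifacts must also exist on the ACR side (Premium tier). The following is a minimal sketch assuming the preview az acr artifact-streaming command group and placeholder names (myregistry, myimage); flag names may change while the feature is in preview. Generate the streaming artifact for a specific image:

az acr artifact-streaming create --name myregistry --image myimage:latest

Or enable automatic conversion for every new image pushed to a repository:

az acr artifact-streaming update --name myregistry --repository myimage --enable-streaming true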

Key Benefits of Artifact Streaming:

  • Reduced Time to Pod Readiness:
    • Experience over a 15% reduction in time to pod readiness, particularly impactful for time-sensitive workloads.
  • Optimized for Images <30GB:
    • While Artifact Streaming is most effective for images under 30GB, our testing showcased substantial improvements for images under 10GB, with pod start-up times decreasing from minutes to seconds.
  • Concurrent Pod Start-ups:
    • Artifact Streaming enables concurrent pod start-ups, offering a significant advantage over the traditional serial start-up process.

Create a new node pool with Artifact Streaming enabled (while the feature is in preview, this requires the aks-preview Azure CLI extension):

az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name mynodepool --enable-artifact-streaming
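
Once the node pool is ready and a workload references a streaming-enabled image, you can get a rough feel for the improvement by timing pod readiness. Here is a minimal sketch using placeholder names (myapp, plus the registry and image from the example above):

kubectl create deployment myapp --image=myregistry.azurecr.io/myimage:latest --replicas=5

time kubectl wait --for=condition=Available deployment/myapp --timeout=600s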

In conclusion, Artifact Streaming on AKS proves to be a game-changer for high-performance compute workloads, offering a streamlined approach to handling large container images. By significantly reducing image pull times and enhancing pod start-up efficiency, AKS with Artifact Streaming empowers businesses to meet the demands of dynamic and scalable workloads. Follow the steps above and the documentation linked below to unlock the full potential of this feature and elevate the performance of your AKS deployments.

Documentation: https://learn.microsoft.com/en-us/azure/aks/artifact-streaming

Maxime.

AKS | Karpenter Introduction

Hi!

As businesses continue to embrace Kubernetes for container orchestration, the need for efficient resource utilization and cost optimization becomes paramount. Enter Karpenter, an open-source node provisioning project tailored specifically for Kubernetes environments. In this article, we’ll explore how Karpenter can be a game-changer for Azure Kubernetes Service (AKS) users, helping them unlock the full potential of their clusters while minimizing operational costs.

Karpenter achieves this through a set of core functionalities, illustrated with a configuration sketch after the list:

  1. Automated Unschedulable Pod Handling: Karpenter watches for pods that the Kubernetes scheduler has marked as unschedulable and responds by provisioning capacity for them. This ensures that pending workloads are placed quickly and that resources across the cluster are used efficiently.
  2. Dynamic Scheduling Constraints Evaluation: The system meticulously evaluates a range of scheduling constraints specified by the pods. These constraints include resource requests, node selectors, affinities, tolerations, and topology spread constraints. By taking these factors into consideration, Karpenter ensures optimal node selection for each workload.
  3. Precision Node Provisioning: Karpenter excels in the art of resource allocation. It automatically provisions nodes that precisely align with the specific requirements of the pods. This results in a finely tuned infrastructure that maximizes resource utilization.
  4. Automated Node Decommissioning: As workloads evolve, the need for certain nodes may diminish. Karpenter is equipped to intelligently identify when nodes are no longer essential and orchestrates their graceful removal from the cluster. This proactive management ensures that resources are allocated efficiently and are not tied up unnecessarily.
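
As a concrete illustration of points 2 to 4, here is a minimal, hedged sketch of the two custom resources the AKS Karpenter Provider works with: a NodePool that captures scheduling constraints and limits, and an AKSNodeClass that captures the Azure-specific node template. The field names below are assumptions drawn from the project's examples and may differ between provider versions:

cat <<'EOF' | kubectl apply -f -
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: general-purpose
spec:
  template:
    spec:
      requirements:
        # Constrain the kind of capacity Karpenter may provision (point 2)
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        name: default
  # Cap the total capacity this pool may provision (point 3)
  limits:
    cpu: 100
  # Let Karpenter consolidate and remove underutilized nodes (point 4)
  disruption:
    consolidationPolicy: WhenUnderutilized
---
apiVersion: karpenter.azure.com/v1alpha2
kind: AKSNodeClass
metadata:
  name: default
spec:
  imageFamily: Ubuntu2204
EOF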

The API for the AKS Karpenter Provider is currently alpha (v1alpha2), so resource definitions may change between releases.
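
If Karpenter is already installed in your cluster, you can check which API groups and versions are actually served before authoring resources, for example:

kubectl api-resources --api-group=karpenter.sh

kubectl api-resources --api-group=karpenter.azure.com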

Documentation: https://github.com/Azure/karpenter

Maxime.