Windows nodes on Azure Kubernetes Service (AKS)


With the release of Kubernetes version 1.14, Windows container support was promoted to generally available (GA). With this milestone, the time is right to start looking at creating clusters comprised of both Linux and Windows nodes. Recently the Azure Kubernetes Service (AKS) launched a preview of Windows node pools, which allows us to create mixed-OS Kubernetes clusters (I’ll refer to these as hybrid clusters).

I’ll cover why we might want to create hybrid clusters in another post; in this post I want to quickly go over the new capability.

Incidentally, it’s been possible to create hybrid clusters since the Kubernetes 1.14 release; however, this preview from AKS is the first support for hybrid clusters from a managed Kubernetes provider (although both Google and AWS have announced similar functionality since the AKS release).


With Kubernetes 1.14 supporting Windows containers and the AKS preview of Windows node support we can finally start creating hybrid clusters comprised of Windows and Linux nodes using a managed Kubernetes provider.

Creating the cluster

I won’t go into detail on how to create a hybrid cluster, as that is covered in the documentation here. I will warn you, however: it’s a little fiddly, with the node pool name limited to 6 characters and the password regex being extremely annoying.
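For orientation, creating the cluster itself looked roughly like this during the preview. The resource group and cluster names are placeholders, and the exact flags are taken from the preview documentation, so treat this as a sketch rather than a definitive reference — note that Windows node pools require the Azure CNI network plugin:

```shell
# Sketch only - myResourceGroup / myAKSCluster are placeholder names,
# and the Windows admin credential flags come from the preview docs.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 1 \
    --kubernetes-version 1.14.0 \
    --windows-admin-username azureuser \
    --windows-admin-password 'ReplaceWithAStrongP@ssw0rd!' \
    --network-plugin azure
```

The password is where the annoying regex mentioned above bites — it has length and complexity requirements, so expect a rejection or two.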

Don’t forget to register the resource provider before you try to create the clusters or node pools.
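At the time of writing, the preview gate looked like the following. The feature flag name is taken from the preview announcement and may change, so double-check the current docs:

```shell
# Register the Windows preview feature (name assumed from the preview docs)
az feature register --namespace Microsoft.ContainerService --name WindowsPreview

# Once the feature shows as "Registered", refresh the resource provider
# so the registration takes effect:
az provider register --namespace Microsoft.ContainerService
```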

The Windows node support arrives at the same time as multiple node pool support, which is no surprise, as the Windows nodes are essentially another type of node pool.

What are node pools?

Node pools are sets of VMs which share the same capabilities. You can use these pools to create clusters with a number of different VM types to suit all the workloads you might run. For example, with the Windows feature we have a node pool of Windows machines alongside a node pool of Linux machines; the same concept could be used to create a node pool of GPU-equipped machines (for machine learning workloads, for example).
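Adding the Windows pool is then just another node pool operation. A sketch with placeholder names — note the pool name is kept within the 6-character limit:

```shell
# Add a Windows node pool to an existing cluster (sketch; placeholder
# resource group and cluster names).
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin \
    --os-type Windows \
    --node-count 1
```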

It’s worth noting that the Windows node pool feature doesn’t automatically taint the Windows nodes, which means that unless you’ve explicitly set node selectors in your deployments, you may see Kubernetes attempt to schedule Linux workloads onto Windows nodes, or vice versa.
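A minimal way to pin a workload to the right OS is a nodeSelector on the pod spec. On 1.14 nodes the OS label is typically the beta-prefixed one shown below (the unprefixed kubernetes.io/os label appears on newer versions), so check your own nodes before relying on this fragment:

```yaml
# Pod template fragment: pin this workload to Linux nodes.
# The label may be kubernetes.io/os rather than beta.kubernetes.io/os
# depending on version - verify with: kubectl get nodes --show-labels
spec:
  nodeSelector:
    "beta.kubernetes.io/os": linux
```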

If you’re new to some of the more advanced scheduling mechanisms Kubernetes provides, I gave a talk covering this topic at NDC Minnesota 2019 which is available to watch online.

What do Windows nodes look like?

I’ve set up an AKS cluster with a single Linux node and a single Windows node.

I can see this by getting the nodes using the Kubernetes CLI, kubectl:

kubectl get nodes -o wide
NAME                                STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION      CONTAINER-RUNTIME
aks-nodepool1-13791586-vmss000000   Ready     agent     8d        v1.14.0   <none>        Ubuntu 16.04.6 LTS          4.15.0-1040-azure   docker://3.0.4
akswin000000                        Ready     agent     8d        v1.14.0   <none>        Windows Server Datacenter   10.0.17763.379      docker://18.9.2

We can see there is one node running Ubuntu and one node running Windows Server Datacenter.

Running a full .NET framework application in AKS

With the cluster up and running, you can create a deployment which uses a Windows container and deploy it to AKS using exactly the same tools and APIs that you would use for Linux containers. Here you can see the application running on the Windows node:
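As a sketch, the deployment manifest might look like the following. I’m assuming Microsoft’s full .NET Framework ASP.NET sample image here — substitute your own Windows image — and the nodeSelector keeps the pod off the Linux node:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    run: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test
  template:
    metadata:
      labels:
        run: test
    spec:
      # Pin to Windows nodes; on newer versions the label may be
      # kubernetes.io/os rather than beta.kubernetes.io/os.
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
      - name: test
        # Microsoft's ASP.NET (full .NET Framework) sample image -
        # an assumption on my part; swap in your own Windows image.
        image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
        ports:
        - containerPort: 80
```

Applied with the usual `kubectl apply -f deployment.yaml` — no Windows-specific tooling required.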

kubectl get pods -o wide
NAME                   READY     STATUS    RESTARTS   AGE     IP            NODE
test-9c969bbdd-9r5sb   1/1       Running   0          12d   akswin000000

We can see the image being used is a full .NET Framework ASP.NET sample image.

kubectl describe deployment test
Name:                   test
Namespace:              default
CreationTimestamp:      Sat, 11 May 2019 15:05:22 +0200
Labels:                 run=test
Selector:               run=test
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  run=test
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>

And we can browse to this site and see the default sample application.
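To reach the site from outside the cluster, I exposed the deployment behind a load balancer service — a sketch of the commands, assuming the deployment name from above:

```shell
# Expose the deployment via an Azure load balancer, then watch for the
# external IP to be provisioned before browsing to it.
kubectl expose deployment test --type=LoadBalancer --port=80
kubectl get service test --watch
```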

What’s next?

This is the area I’m most excited by. I have been speaking about Kubernetes from the perspective of a .NET developer for approximately 18 months, and finally the picture I’ve been painting is becoming a reality.

With hybrid Kubernetes clusters we have the ability to run “legacy” .NET Framework applications alongside newer .NET Core applications and, because they’re all deployed to Kubernetes, to use a consistent deployment and management approach for both. Even more exciting is the ability to leverage the capabilities of Kubernetes to begin breaking our monolithic .NET Framework applications into smaller .NET Core services whilst controlling traffic flow between each service. This will all make more sense in the follow-up post where I dive into this vision.