In the first two parts of this DevEx series, I tried to show how golden paths in Backstage can enable developer self‑service without chaos and how extending the catalog into copilots with MCP servers transforms discoverability into a conversational experience. Both of these steps were about reducing friction and giving developers paved roads to move faster while still staying compliant.
Back in the first article, I also touched on the broader idea of IDPs and how organizations often build them on top of Kubernetes with tools like Crossplane, Tofu Controller, Argo CD and OPA. That was also the stack I worked with in practice: powerful, but not always the most approachable starting point for teams because of the complexity involved.
In this article, I want to focus on Azure Service Operator (ASO), which provides a Kubernetes-native way to reconcile Azure resources directly from manifests. When combined with Kubernetes Resource Orchestrator (KRO), it becomes much simpler to expose developer-friendly abstractions without the heavy complexity of other stacks. While many organizations build IDPs with different controllers and toolchains such as Crossplane or Tofu Controller, using KRO and ASO in the Azure ecosystem gives us a more minimal and practical path to an IDP foundation that is easy to approach and ready to grow.
Lately, I have been experimenting with ASO and KRO, and I really liked how much they simplify the experience. That is why I wanted to include this stack in the series as well. With Backstage as the portal, and KRO and ASO as the abstraction layer and reconciliation engine, we can deliver a clean path where developers request what they need and the platform control plane provisions it automatically in Azure in a simple, compliant way that scales.
Let’s Get Started with KRO and ASO
Now that we have the motivation, let's look at KRO and ASO and see how they work together in practice.
KRO gives us a Kubernetes-native way to define our own simple abstractions as custom resources (CRDs) that encapsulate the logic we want to expose to developers, along with any organizational standards and best practices. In essence, KRO provides the abstraction and orchestration layer. It lets us deploy multiple resources as a single unit, but on its own it doesn't provision anything.
This is where ASO comes into play. It acts as a reconciliation engine that actually provisions and manages the Azure resources behind those abstractions, letting us define Azure resources as Kubernetes custom resources. Together, these two tools give us an abstraction layer and a reconciliation engine that form the backbone of a simple automation platform on Kubernetes.
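To make that concrete, here is a minimal sketch of what a standalone ASO resource looks like before we add any KRO abstraction on top; the resource group name and region are just example values:

```yaml
# A plain ASO v2 resource: applying this manifest makes ASO create
# the resource group in Azure and keep it reconciled from then on.
apiVersion: resources.azure.com/v1api20200601
kind: ResourceGroup
metadata:
  name: demo-rg            # example name
  namespace: default
spec:
  location: westeurope     # example region
```

Deleting the manifest would, by default, also delete the Azure resource, which is exactly the reconciliation behavior we want a platform to build on.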
Installing KRO and ASO on AKS
I will assume that we already have a running AKS cluster, so we can jump straight into getting these two tools installed.
First, we will install ASO v2 on the cluster. To keep things simple, we will follow the steps from the official ASO documentation (Azure Service Operator): first install cert-manager, since ASO uses it to issue and rotate certificates; then deploy ASO v2 itself with a specific CRD configuration; and finally configure a service principal or a managed identity that ASO will use to authenticate against Azure.
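As a sketch of those prerequisite steps, assuming the service-principal flow from the ASO docs (the cert-manager version and all credential values below are placeholders you should replace with your own):

```shell
# Install cert-manager (pin a version you have validated; this one is an example)
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml

# Register the ASO v2 Helm repository used by the install command below
helm repo add aso2 https://raw.githubusercontent.com/Azure/azure-service-operator/main/v2/charts
helm repo update

# Provide service-principal credentials via the global aso-controller-settings
# secret (placeholder values; create the namespace first if it doesn't exist yet)
kubectl create namespace azureserviceoperator-system --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic aso-controller-settings \
  --namespace azureserviceoperator-system \
  --from-literal=AZURE_SUBSCRIPTION_ID="<subscription-id>" \
  --from-literal=AZURE_TENANT_ID="<tenant-id>" \
  --from-literal=AZURE_CLIENT_ID="<client-id>" \
  --from-literal=AZURE_CLIENT_SECRET="<client-secret>"
```

If you prefer workload identity or a managed identity instead of a client secret, the ASO documentation covers those variants as well.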
That being said, let's use the following helm command to install ASO v2 along with the "resources.azure.com/*;app.azure.com/*" CRDs, which are the only ones we need for this article series. By default, ASO doesn't install any CRDs; instead, it lets us choose which ones to enable via the "crdPattern" parameter. For now, we will install the Azure Container Apps CRDs since we will use them in our demo.
```shell
helm upgrade --install aso2 aso2/azure-service-operator \
  --create-namespace \
  --namespace=azureserviceoperator-system \
  --set crdPattern='resources.azure.com/*;app.azure.com/*'
```

Once ASO is installed, we should see its pods running in the "azureserviceoperator-system" namespace.
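To verify the install, we can list the pods and check that the CRDs matching our pattern were registered (pod names and counts will differ per install):

```shell
# ASO controller pods should be Running
kubectl get pods -n azureserviceoperator-system

# The enabled CRD groups should appear here
kubectl get crds | grep azure.com
```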
At this point, ASO alone lets us define and provision Azure resources as Kubernetes CRDs. But when we add KRO on top, its orchestration capabilities allow us to define opinionated, developer‑friendly abstractions that orchestrate everything behind the scenes. Without KRO, we wouldn’t be able to orchestrate multiple resources behind a single, simplified abstraction.
So, let's follow the same approach and install KRO on the cluster as well, using the steps from the official KRO documentation (Installing kro).
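For reference, the KRO docs install it from an OCI Helm chart; the version below is an example, so pin whatever the latest release is at the time you install:

```shell
# Install kro from its OCI Helm chart (example version; check the releases page)
helm install kro oci://ghcr.io/kro-run/kro/kro \
  --namespace kro \
  --create-namespace \
  --version=0.2.1

# The kro controller pod should come up in the kro namespace
kubectl get pods -n kro
```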
With KRO installed alongside ASO, we now have the minimal backbone of an Azure-focused IDP. From this point, our AKS cluster becomes a lightweight, centralized platform orchestrator.
Let’s Create a Simple, Orchestrated Azure Resource Template with KRO and ASO
Now that both ASO and KRO are installed, we can create a small abstraction that provisions a few Azure resources as a single unit. This follows the same idea behind the "golden templates" we explored earlier in this series. Instead of asking developers to stitch together multiple files or understand every Azure resource, we expose a single, opinionated interface that hides all of that complexity.
To keep things simple, we will create an abstraction that sets up a "Resource Group", a "Container Apps Environment" and a "Container App" in an Azure environment. We can think of this as a lightweight, serverless starter kit with built-in best practices for the platform.
So, to build this starter kit, we will use KRO's "ResourceGraphDefinition". This is where we describe the high-level abstraction, the template itself: it is the core API in KRO for defining CRDs that orchestrate multiple resources. It links the ASO resources together and ensures everything is provisioned in the right order.
```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: aca-starterkit
spec:
  schema:
    apiVersion: v1alpha1
    kind: ACAStarterkit
    spec:
      location: string
      image: string
      cpu: string | default="0.25"
      memory: string | default="0.5Gi"
      targetPort: integer | default=80
    status:
      resourceGroupName: ${resourceGroup.status.name}
      appFqdn: ${containerApp.status.configuration.ingress.fqdn}
  resources:
    - id: resourceGroup
      template:
        apiVersion: resources.azure.com/v1api20200601
        kind: ResourceGroup
        metadata:
          name: "${schema.metadata.name}-rg"
        spec:
          location: "${schema.spec.location}"
    - id: containerAppsEnv
      template:
        apiVersion: app.azure.com/v1api20240301
        kind: ManagedEnvironment
        metadata:
          name: "${schema.metadata.name}-env"
        spec:
          owner:
            name: "${resourceGroup.metadata.name}"
          location: "${schema.spec.location}"
          workloadProfiles:
            - name: Consumption
              workloadProfileType: Consumption
    - id: containerApp
      template:
        apiVersion: app.azure.com/v1api20240301
        kind: ContainerApp
        metadata:
          name: "${schema.metadata.name}"
        spec:
          owner:
            name: "${resourceGroup.metadata.name}"
          location: "${schema.spec.location}"
          environmentReference:
            armId: "${containerAppsEnv.status.id}"
          configuration:
            ingress:
              external: true
              targetPort: ${schema.spec.targetPort}
          template:
            containers:
              - name: app
                image: "${schema.spec.image}"
                resources:
                  cpu: ${double(schema.spec.cpu)}
                  memory: "${schema.spec.memory}"
```

As we can see, the "schema" section defines the structure (kind, fields, etc.) of the custom resource we expose to developers. It is a kind of contract between the platform and the developers: it defines what developers can configure. The "status" part of this section simply holds the outputs of the abstraction; it defines what information goes back to developers after deployment.
The "resources" section is where we list the underlying ASO resources that make up the abstraction. Each resource has an "id" and a "template", and KRO analyzes the expressions inside these templates to understand how the resources relate to each other, so we don't need to wire everything together ourselves.
For example, in the "containerApp" resource, we referenced the Managed Environment using the "${containerAppsEnv.status.id}" expression. Similarly, in the "containerAppsEnv" resource, we referenced the Resource Group using the "${resourceGroup.metadata.name}" expression.
From these references, KRO builds a Directed Acyclic Graph (DAG) and computes the topological order. We will see this in action once we apply the definition. You can find all the details about KRO's graph inference here, and you can see all supported ASO resources listed here.
Let’s apply this definition and get the “ACAStarterkit” CRD created.
```shell
kubectl apply -f aca-starterkit.yaml
kubectl get resourcegraphdefinition
```
Now let’s describe it to see the topological order that we mentioned.
```shell
kubectl describe resourcegraphdefinition aca-starterkit
```
As we can see, KRO built the dependency graph for our abstraction and figured out that the resource group must be created first, then the managed environment and then the container app.
Let’s Deploy an Application Using the ACAStarterkit CRD
Since everything is in place, we can now start using this CRD just like any other Kubernetes resource. This is where the DevEx really begins, all the underlying complexity is hidden behind a single, simple interface.
There are many ways to integrate this abstraction into the platform. For example, the platform team could expose this CRD through Backstage by adding it to the template catalog as a starter kit, or bundle it into software boilerplates and golden templates so developers get an end-to-end opinionated experience with just a few clicks and can start experimenting. The key idea remains the same: this CRD becomes a reusable building block inside the IDP.
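As a rough sketch of the Backstage route, a scaffolder template could collect a few form inputs and render an ACAStarterkit manifest; everything here (template name, skeleton path, parameter set) is hypothetical, not taken from an existing template:

```yaml
# Hypothetical Backstage scaffolder template excerpt: collects a few inputs
# and renders a templated ACAStarterkit manifest from a skeleton directory.
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: aca-starterkit-template   # hypothetical name
  title: ACA Starter Kit
spec:
  parameters:
    - title: App details
      properties:
        name:
          type: string
        image:
          type: string
        location:
          type: string
          default: westeurope
  steps:
    - id: render
      name: Render ACAStarterkit manifest
      action: fetch:template        # built-in scaffolder action
      input:
        url: ./skeleton             # would contain a templated ACAStarterkit YAML
        values:
          name: ${{ parameters.name }}
          image: ${{ parameters.image }}
          location: ${{ parameters.location }}
```

A GitOps controller could then sync the rendered manifest to the platform cluster, closing the loop from portal click to provisioned Azure resources.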
Let’s create our first “ACAStarterkit” instance and watch KRO orchestrate the underlying Azure resources for us.
For the sake of simplicity, I will skip Backstage, scaffolding and templating workflows. We already covered those in detail in the earlier parts of this series. Instead, we will apply a minimal “ACAStarterkit” resource directly to the cluster to provision the Azure resources and deploy a simple “quickstart” container image.
```yaml
apiVersion: kro.run/v1alpha1
kind: ACAStarterkit
metadata:
  name: my-hello-world-app
  namespace: default
spec:
  location: westeurope
  image: mcr.microsoft.com/k8se/quickstart:latest
  # cpu: "0.5"      - optional
  # memory: "1Gi"   - optional
  # targetPort: 80  - optional
```
```shell
kubectl apply -f hello-world.yaml
```
After applying this “ACAStarterkit” resource, let’s monitor the deployment by describing it.
```shell
kubectl describe acastarterkit my-hello-world-app
```
If we look at the “Status” section, we can see the overall deployment state, the progress and once provisioning is completed, the output values that we exposed such as “App Fqdn” and “Resource Group Name“.
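Once the status is populated, we can also pull individual outputs directly; the field names below match the status section we defined in the ResourceGraphDefinition earlier:

```shell
# Read the application's FQDN and resource group name from the instance status
kubectl get acastarterkit my-hello-world-app -o jsonpath='{.status.appFqdn}'
kubectl get acastarterkit my-hello-world-app -o jsonpath='{.status.resourceGroupName}'
```

This is handy for wiring the outputs into scripts or surfacing them in a portal.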
If we want more detailed reconciliation progress of the underlying Azure resources, for example if things are taking a bit longer, we can also inspect each resource type individually.
NOTE: Depending on the Azure resource type, provisioning might take some time.
```shell
kubectl get resourcegroups,managedenvironments,containerapps
```
This also shows the status and error messages from ASO as it provisions resources in Azure. We can also see that the resources are created sequentially based on the dependency graph that KRO inferred.
Since all the resources were provisioned by our minimal platform cluster, we can now access the application through the exposed "App Fqdn" output and verify that it's running.
As we can see, our quickstart container app is running on ACA, fully provisioned through the "ACAStarterkit" abstraction in a simple, declarative way.
Wrapping Up
In this article, I tried to show how KRO and ASO work together to lay the foundation for an Azure-focused platform setup. What we built here is the starting point of a lightweight IDP: it's minimal but solid, and it gives us a clear, approachable path that can grow into a full IDP experience over time.
From here, we can extend the abstraction and integrate GitOps, OPA and Backstage to turn this platform foundation into a more complete developer portal and IDP experience.
References
https://azure.github.io/azure-service-operator/
https://kro.run/docs/getting-started/Installation/#installation
https://github.com/Azure/azure-service-operator/blob/main/docs/troubleshooting.md