CTO Chris Aniszczyk on the CNCF push for AI interoperability

On a surface level, AI agents aren’t so different from microservices, according to Chris Aniszczyk, co-founder and CTO of the Cloud Native Computing Foundation (CNCF).

The New Stack spoke with Aniszczyk about the AI and CNCF programs ahead of this year’s KubeCon + CloudNativeCon Europe, set for March 23-26 in Amsterdam.

“If you think about what an agent is at a surface level, it sounds like a microservice,” Aniszczyk tells The New Stack during this week’s episode of The Makers. “In my opinion, it is similar to [a] microservice, but has a little bit different characteristics of how it’s scaled, managed, and so on.”

That’s why being cloud native is central to becoming AI native. Cloud native is about building and running distributed systems that are scalable and resilient. To do that, cloud native uses technologies such as containers, microservices, and orchestrators such as Kubernetes to enable a dynamic model of development, he says.

A variety of CNCF projects support this approach, including Kubernetes, but he also points to gRPC, which facilitates the communication between microservices, and projects like Prometheus and OpenTelemetry, which provide the observability necessary for achieving scalability and resilience. The CNCF is all about providing these pieces to really enable cloud native at scale, he adds.

“Essentially, my view of an AI-native developer is … someone that is using an agent or AI-first approach to software development across the whole software life cycle,” Aniszczyk says. “In order to be AI native, you have to be cloud native by default.”

He adds that it’s no secret that ChatGPT, Claude, and other AI services are all powered by Kubernetes and other CNCF projects behind the scenes.

A conformance program for AI

Last year, the CNCF created a group to work on a new effort to extend its conformance efforts to the world of AI. Conformance refers to the certification process that ensures a software product, such as a Kubernetes distribution, adheres to a specific set of standards and interoperability requirements, as defined by the community.

“With the rise of agentic AI, generative AI, a lot of these folks had to go scale the infrastructure that is required to run things like ChatGPT and a variety of different services out there as part of that scaling process,” Aniszczyk says. “It’s been a little bit complicated, say, to go scale your Kubernetes environments and cloud native environments to meet the demands of generative AI workloads, and a lot of people had to make some changes.”

For example, he points to Kubernetes’ new feature called dynamic resource allocation (DRA), which supports different types of accelerators, such as GPUs and Tensor Processing Units (TPUs), for AI-based workloads within Kubernetes. There are also changes needed at the networking and traffic-routing layer, because AI inference workloads handle traffic a bit differently than a microservice or HTTP-based workload, he adds.
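To make the DRA idea concrete, here is a minimal sketch of how a workload might claim an accelerator through a ResourceClaim rather than a fixed resource limit. The device class name, image, and claim names below are hypothetical placeholders, and the exact API version depends on your Kubernetes release:

```yaml
# Sketch only: a ResourceClaim requesting one device from a
# hypothetical GPU device class, referenced by a pod.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.example.com   # placeholder device class
---
apiVersion: v1
kind: Pod
metadata:
  name: inference-server
spec:
  containers:
  - name: model-server
    image: registry.example.com/inference:latest  # placeholder image
    resources:
      claims:
      - name: gpu          # binds the container to the claim below
  resourceClaims:
  - name: gpu
    resourceClaimName: single-gpu
```

The point of this model is that the driver for a given device class, not a static `nvidia.com/gpu: 1` limit, decides how accelerators are matched and shared, which is what makes it flexible across GPU and TPU hardware.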

The problem is, everyone is handling it differently across the industry, which he notes created “a bit of a mess.”

“Our goal, at least within the CNCF, is to have basically every major provider out there, whether you’re a big cloud or one of these like neo GPU-focused cloud providers, to support this baseline of compatibility,” he says. “It’s all about vendor neutrality and building community across companies and geographies.”

New CNCF incubation programs

Aniszczyk also shares a few of his favorite incubation projects at the CNCF.

Metal³ started at the telecommunications provider Ericsson, which needed to run Kubernetes on bare metal. The company needed a system to provision and manage clusters without using the cloud, Aniszczyk says.

“Metal³ does a fantastic job of basically managing bare metal infrastructure for the purposes of rolling out Kubernetes,” he says.

Aniszczyk notes that the CNCF also has a number of projects that deal with edge deployment, such as KubeEdge, which has graduated from the incubator, and OpenYurt.

“The simplest way I can kind of describe OpenYurt is (to) think of it as a control plane for managing edge-based Kubernetes deployments,” he says. “They created this CRD [Custom Resource Definition] called NodePool that acts as a controller for all that at a high level.”
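For a sense of what that CRD looks like in practice, here is a minimal sketch of an OpenYurt NodePool grouping a set of edge nodes. The pool name is a hypothetical example; consult the OpenYurt docs for the API version your release supports:

```yaml
# Sketch only: an OpenYurt NodePool that groups edge nodes so the
# control plane can manage them as one unit.
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: shanghai-edge   # placeholder pool name
spec:
  type: Edge            # marks this pool as edge (vs. cloud) nodes
```

Nodes are then associated with the pool by label, which lets OpenYurt schedule and reconcile workloads per region even when an edge site is intermittently disconnected from the cloud control plane.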

Tune into the full episode of The Makers to learn more about what the CNCF plans for 2026.
