Kubernetes Gateway API: Migrate from Ingress with Weighted and Header Routing

Ingress served Kubernetes well for years, but its annotation-driven configuration hit a wall when you needed weighted routing or header matching. Every controller interpreted annotations differently, portability was a myth, and anything beyond basic host/path routing required vendor-specific hacks. The Gateway API is the official successor, designed from scratch with expressive routing, role-oriented configuration, and a portable spec that works the same across implementations.

This guide walks through installing the Gateway API CRDs, deploying NGINX Gateway Fabric as the controller, and configuring real routing scenarios: host-based routing, weighted canary splits, and header-based routing. Every command and output here was tested on a live cluster running Kubernetes 1.35.3. If you already have a production-ready HA cluster, you can start immediately.

Current as of April 2026. Verified on Ubuntu 24.04.4 LTS with Kubernetes 1.35.3, Gateway API v1.3.0, NGINX Gateway Fabric 2.5.0

Gateway API Architecture

The Gateway API splits responsibilities into three resource types, each owned by a different persona in the organization. This separation is what makes it fundamentally better than Ingress for multi-team clusters.

GatewayClass is the infrastructure-level resource. It defines which controller handles traffic, similar to how a StorageClass defines which provisioner handles volumes. Cluster operators install and manage GatewayClasses. A cluster might have one GatewayClass for public traffic and another for internal services.

Gateway represents a load balancer instance. It binds to a GatewayClass and declares which ports and protocols to listen on. Platform teams own Gateways, controlling TLS certificates and network policies without touching application routing.

HTTPRoute is where application developers work. It attaches to a Gateway and defines routing rules: hostnames, path matches, header filters, weighted backends, redirects, and rewrites. Developers can modify their own routes without cluster-admin privileges.

This three-layer model means a platform team can grant developers permission to create HTTPRoutes in their namespace while keeping Gateway and GatewayClass locked down. Ingress never had this kind of role separation.
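
That separation is enforced with ordinary Kubernetes RBAC. A sketch of a namespaced Role granting a team full control over HTTPRoutes and nothing else (the team-a namespace is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: httproute-editor
  namespace: team-a
rules:
# Developers may manage HTTPRoutes in their own namespace only.
# No rule covers gateways or gatewayclasses, so those stay locked down.
- apiGroups: ["gateway.networking.k8s.io"]
  resources: ["httproutes"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

Bind it to the team's group with a RoleBinding; Gateway and GatewayClass remain admin-only because no rule grants access to them.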

There is also ReferenceGrant, a fourth resource that controls cross-namespace references. If an HTTPRoute in namespace frontend needs to send traffic to a Service in namespace backend, the backend namespace must create a ReferenceGrant explicitly allowing it. This prevents one team from hijacking another team’s services without permission.
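
A minimal grant for that exact scenario, created in the backend namespace (namespace names as in the example above):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-frontend-httproutes
  namespace: backend   # the namespace that owns the referenced Services
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: frontend   # routes here may now target Services in backend
  to:
  - group: ""   # core API group, i.e. Services
    kind: Service
```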

The API also defines route types beyond HTTP. GRPCRoute handles gRPC traffic with native support for service and method matching. TLSRoute and TCPRoute (currently in the experimental channel) cover non-HTTP protocols. Each route type attaches to the same Gateway resource, so a single load balancer instance can handle HTTP, gRPC, and raw TCP simultaneously.
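
A GRPCRoute follows the same shape as an HTTPRoute but matches on gRPC service and method names. An illustrative sketch (the user-service backend and method names are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: users-grpc
  namespace: default
spec:
  parentRefs:
  - name: main-gateway
  hostnames:
  - "grpc.example.com"
  rules:
  - matches:
    # Route only calls to users.v1.UserService/GetUser
    - method:
        service: users.v1.UserService
        method: GetUser
    backendRefs:
    - name: user-service
      port: 9000
```

Note that gRPC requires HTTP/2, so in practice the parent listener must support it, which for most implementations means an HTTPS listener.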

Prerequisites

  • A running Kubernetes cluster (v1.28 or later). This guide was tested on v1.35.3 built with kubeadm on Ubuntu 24.04
  • kubectl configured with cluster-admin access
  • Helm 3.x installed on your workstation
  • Two or more worker nodes for meaningful traffic distribution tests
  • A CNI plugin installed and working (Cilium, Calico, or Flannel)

Verify your cluster is healthy before starting:

kubectl get nodes

All nodes should show Ready:

NAME          STATUS   ROLES           AGE   VERSION
k8s-master    Ready    control-plane   45d   v1.35.3
k8s-worker1   Ready    <none>          45d   v1.35.3
k8s-worker2   Ready    <none>          45d   v1.35.3

Install Gateway API CRDs

The Gateway API ships as standalone CRDs, separate from any controller. This is an important design decision: the API definition and the controller implementation are decoupled, so you can swap controllers without reinstalling CRDs. Install the standard channel CRDs first, which include GatewayClass, Gateway, HTTPRoute, and ReferenceGrant.

There are two CRD channels available. The standard channel includes GA resources that are stable and production-ready. The experimental channel adds resources like TCPRoute, TLSRoute, and BackendTLSPolicy that are still maturing. For production clusters, stick with the standard channel.

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml

The output confirms all CRDs are created:

customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/grpcroutes.gateway.networking.k8s.io created

Verify the CRDs are registered in the cluster:

kubectl get crds | grep gateway

You should see all five Gateway API resources listed:

gatewayclasses.gateway.networking.k8s.io          2026-04-01T09:12:34Z
gateways.gateway.networking.k8s.io                2026-04-01T09:12:34Z
grpcroutes.gateway.networking.k8s.io              2026-04-01T09:12:35Z
httproutes.gateway.networking.k8s.io              2026-04-01T09:12:35Z
referencegrants.gateway.networking.k8s.io         2026-04-01T09:12:35Z

Install NGINX Gateway Fabric

NGINX Gateway Fabric is NGINX’s official Gateway API implementation. It is actively maintained, supports the full v1.3.0 spec, and uses NGINX as its data plane. Other solid options include Envoy Gateway, Istio, and Cilium (if you are already running Cilium as your CNI), but NGINX Gateway Fabric is a good default for teams familiar with NGINX.

NGINX Gateway Fabric publishes its Helm chart to an OCI registry rather than a classic chart repository, so there is no helm repo add step. Install the controller into its own namespace:

helm install nginx-gateway oci://ghcr.io/nginx/charts/nginx-gateway-fabric \
  --version 2.5.0 \
  --namespace nginx-gateway \
  --create-namespace \
  --set service.type=NodePort

The deployment takes about 30 seconds. Check that the controller pod is running:

kubectl get pods -n nginx-gateway

Both the controller and the NGINX data plane pod should show Running:

NAME                                          READY   STATUS    RESTARTS   AGE
nginx-gateway-nginx-gateway-fabric-6d8b4c77   2/2     Running   0          45s

Confirm the GatewayClass was created and accepted by the controller:

kubectl get gatewayclass

The Accepted condition should be True:

NAME    CONTROLLER                              ACCEPTED   AGE
nginx   gateway.nginx.org/nginx-gateway-fabric   True       52s

Create a Gateway

A Gateway binds to the GatewayClass and opens listeners on specific ports. This Gateway listens on port 80 for HTTP traffic and allows HTTPRoutes from all namespaces.

cat <<'YAML' | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
  namespace: default
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
YAML

Check the Gateway status:

kubectl get gateway main-gateway

The Programmed condition means the data plane is ready to handle traffic:

NAME           CLASS   ADDRESS       PROGRAMMED   AGE
main-gateway   nginx   10.0.1.100    True         8s

The Gateway is now listening. Any HTTPRoute that references main-gateway will be picked up and configured automatically.

For a more detailed look at the Gateway resource, use describe:

kubectl describe gateway main-gateway

The conditions section shows both Accepted and Programmed as True, meaning the controller recognized the Gateway and the data plane is configured:

Status:
  Conditions:
    Type:    Accepted
    Status:  True
    Reason:  Accepted
    Type:    Programmed
    Status:  True
    Reason:  Programmed
  Listeners:
    Name:  http
    Supported Kinds:
      Kind:  HTTPRoute
    Attached Routes:  0
    Conditions:
      Type:    Accepted
      Status:  True

The Attached Routes: 0 line will increment as you add HTTPRoutes in the following sections.

Basic HTTPRoute: Host-Based Routing

Start with a simple scenario: route app.example.com to an NGINX backend. First, deploy a test application.

kubectl create deployment web-app --image=nginx:latest --replicas=2
kubectl expose deployment web-app --port=80

Now create an HTTPRoute that matches the hostname and sends traffic to the service:

cat <<'YAML' | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: default
spec:
  parentRefs:
  - name: main-gateway
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web-app
      port: 80
YAML

Verify the route is accepted:

kubectl get httproute app-route

The output shows the route is bound to the Gateway:

NAME        HOSTNAMES              AGE
app-route   ["app.example.com"]    5s

Find the NodePort assigned to the Gateway service:

kubectl get svc -n nginx-gateway nginx-gateway-nginx-gateway-fabric -o jsonpath='{.spec.ports[0].nodePort}'

Test the route by sending a request with the correct Host header. Replace 10.0.1.50 with one of your node IPs and 31080 with the NodePort returned by the previous command:

curl -s -H "Host: app.example.com" http://10.0.1.50:31080 | head -5

The NGINX welcome page confirms traffic is flowing through the Gateway to the backend:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>

Weighted Routing for Canary Deployments

This is where Gateway API really shines compared to Ingress. Weighted routing splits traffic between multiple backends by percentage, which is exactly what you need for canary deployments. No annotations, no controller-specific hacks.

Deploy two versions of an application. Version 1 runs NGINX, version 2 runs Apache httpd:

kubectl create deployment app-v1 --image=nginx:latest --replicas=2
kubectl expose deployment app-v1 --port=80

kubectl create deployment app-v2 --image=httpd:latest --replicas=2
kubectl expose deployment app-v2 --port=80

Create an HTTPRoute that sends 80% of traffic to v1 and 20% to v2:

cat <<'YAML' | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-route
  namespace: default
spec:
  parentRefs:
  - name: main-gateway
  hostnames:
  - "canary.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: app-v1
      port: 80
      weight: 80
    - name: app-v2
      port: 80
      weight: 20
YAML

Test the traffic split by sending 10 requests and counting which backend responds. NGINX returns "Welcome to nginx!" while httpd returns "It works!":

for i in $(seq 1 10); do
  curl -s -H "Host: canary.example.com" http://10.0.1.50:31080 | grep -o "nginx\|It works"
done

The distribution roughly tracks the 80/20 split. In this run, 7 of 10 requests hit NGINX and 3 hit httpd:

nginx
nginx
nginx
It works
nginx
nginx
nginx
It works
nginx
It works

Small sample sizes will vary, but over hundreds of requests the distribution converges on 80/20. To shift more traffic to v2, just update the weights in the HTTPRoute. When the canary is validated, set v2 to weight 100 and remove v1.

Try doing this with a plain Ingress resource. You would need controller-specific annotations like nginx.ingress.kubernetes.io/canary-weight, and the syntax differs for every Ingress controller. The Gateway API makes it a first-class, portable feature.

Header-Based Routing

Header matching routes requests based on HTTP headers, useful for A/B testing, API versioning, or directing internal debug traffic to a specific backend. The Gateway API supports exact and regular-expression matches on header values.

This HTTPRoute sends requests with X-Version: v2 to app-v2, and everything else to app-v1:

cat <<'YAML' | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: header-route
  namespace: default
spec:
  parentRefs:
  - name: main-gateway
  hostnames:
  - "api.example.com"
  rules:
  - matches:
    - headers:
      - name: X-Version
        value: v2
    backendRefs:
    - name: app-v2
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: app-v1
      port: 80
YAML

Test without any header. The request goes to app-v1 (NGINX):

curl -s -H "Host: api.example.com" http://10.0.1.50:31080 | grep -o "nginx\|It works"

Returns the NGINX backend as expected:

nginx

Now send the same request with the X-Version: v2 header:

curl -s -H "Host: api.example.com" -H "X-Version: v2" http://10.0.1.50:31080 | grep -o "nginx\|It works"

Traffic is routed to app-v2 (httpd):

It works

The routing is deterministic: every request with X-Version: v2 hits app-v2, and every request without it hits app-v1. You can combine header matching with path matching in the same rule for more granular control.

This pattern is particularly useful for API versioning. Instead of maintaining separate URL paths like /v1/ and /v2/, clients send a version header and the Gateway routes accordingly. Backend services stay clean, and you can deprecate old versions by removing the header match rule.
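
Combining conditions is a matter of putting them in the same match entry, where they are ANDed together. An illustrative rule fragment (reusing the app-v2 Service from above) that matches only /api/ requests carrying X-Version: v2:

```yaml
rules:
- matches:
  # Both conditions in one match entry must hold for the rule to fire
  - path:
      type: PathPrefix
      value: /api/
    headers:
    - name: X-Version
      value: v2
  backendRefs:
  - name: app-v2
    port: 80
```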

Path-Based Routing

Beyond hostnames and headers, path matching is the most common routing pattern. The Gateway API supports three path match types: Exact, PathPrefix, and RegularExpression. Here is an HTTPRoute that splits traffic by URL path, sending /api/ requests to a backend API service and everything else to the frontend:

cat <<'YAML' | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: path-route
  namespace: default
spec:
  parentRefs:
  - name: main-gateway
  hostnames:
  - "myapp.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api/
    backendRefs:
    - name: app-v2
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: app-v1
      port: 80
YAML

Match precedence is determined by specificity, not by the order in which rules are listed: the Gateway API spec gives the most specific match priority. The /api/ prefix is more specific than /, so it wins for any request starting with /api/. Requests to /about, /login, or / fall through to the catch-all rule.

Test the path routing:

curl -s -H "Host: myapp.example.com" http://10.0.1.50:31080/ | grep -o "nginx\|It works"
curl -s -H "Host: myapp.example.com" http://10.0.1.50:31080/api/health | grep -o "nginx\|It works"

The root path returns NGINX (app-v1), while /api/health returns httpd (app-v2):

nginx
It works

Migrating from Ingress to Gateway API

If you have existing Ingress resources, migration is straightforward because the concepts map directly. Here is a side-by-side comparison of the same routing rule expressed as Ingress and HTTPRoute.

Ingress (old way):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app
            port:
              number: 80

HTTPRoute (Gateway API):

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: main-gateway
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web-app
      port: 80

The structure is similar, but notice the differences. The Ingress version relies on an annotation for rewrite behavior, which only works with the NGINX Ingress Controller. The HTTPRoute version uses parentRefs to bind to a Gateway instead of ingressClassName. Routing rules are explicit fields, not annotations.

For clusters with many Ingress resources, migrate incrementally. Keep both Ingress and Gateway API running simultaneously (they use different controllers and don't conflict), convert routes one at a time, and decommission the Ingress controller once all routes are migrated. Make sure your etcd backups are current before starting a large migration.

A practical migration strategy looks like this: start with low-traffic, non-critical services. Create the HTTPRoute alongside the existing Ingress, verify traffic flows correctly through the Gateway, then delete the Ingress resource. Once you are confident in the pattern, migrate the remaining services in batches. Keep the Ingress controller running until the last Ingress resource is removed.

One thing that catches people off guard: the NGINX Ingress Controller and NGINX Gateway Fabric are separate projects. Having one installed does not give you the other. They use different configuration models and different NGINX instances. You need to install NGINX Gateway Fabric (or your chosen Gateway controller) even if you already have the NGINX Ingress Controller deployed.

Gateway API vs Ingress: Feature Comparison

Feature                     Ingress                                  Gateway API
Weighted routing (canary)   Annotation-based, controller-specific    Native weight field on backendRefs
Header matching             Limited, annotation-dependent            First-class headers match type
Role-based ownership        Single resource, one RBAC scope          GatewayClass / Gateway / Route separation
Cross-namespace routing     Not supported                            Supported via ReferenceGrant
TLS passthrough             Annotation-based                         Native TLSRoute resource
gRPC routing                Not supported natively                   Dedicated GRPCRoute resource
URL rewrite                 Annotation (varies by controller)        Native URLRewrite filter
Request mirroring           Annotation (few controllers)             Native RequestMirror filter
Portability                 Annotations break across controllers     Spec is portable, same YAML works everywhere
API maturity                GA since Kubernetes 1.19                 GA (v1.0+) since October 2023

Ingress is not going away immediately, but the Kubernetes project has been clear that Gateway API is the future of traffic management. New features are only being added to Gateway API, not Ingress.
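
As one example of those native filters, a URL rewrite is expressed directly in an HTTPRoute rule, with no annotations involved. A sketch that strips an /old prefix before forwarding to the web-app Service from earlier:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: rewrite-route
  namespace: default
spec:
  parentRefs:
  - name: main-gateway
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /old
    filters:
    # Rewrite the matched /old prefix to /new before proxying
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /new
    backendRefs:
    - name: web-app
      port: 80
```

A request for /old/page reaches the backend as /new/page, and the same YAML works on any conformant controller.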

Production Considerations

Before running Gateway API in production, a few things to keep in mind. TLS termination should be configured on the Gateway resource itself using Kubernetes Secrets for certificates. The Gateway listener section supports TLS mode Terminate for offloading and Passthrough for end-to-end encryption.
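
The listener for TLS termination is declared on the Gateway itself. A sketch of an HTTPS listener added to the Gateway from earlier, assuming the certificate lives in a Secret named app-example-com-tls (a hypothetical name; create it with kubectl create secret tls):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
  namespace: default
spec:
  gatewayClassName: nginx
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate          # offload TLS at the Gateway
      certificateRefs:
      - kind: Secret
        name: app-example-com-tls
    allowedRoutes:
      namespaces:
        from: All
```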

For high-availability setups with an external load balancer such as HAProxy, point the load balancer at the Gateway's NodePort on every node. If you are using the LoadBalancer service type in a cloud environment, the cloud provider wires this up automatically.

ReferenceGrant is the resource you need when an HTTPRoute in namespace A needs to reference a backend Service in namespace B. Without it, cross-namespace references are denied by default, which is a deliberate security boundary.

Monitor your Gateway controller like any other critical workload. NGINX Gateway Fabric exposes Prometheus metrics on port 9113 by default. Scrape those alongside your cluster monitoring stack.
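
A scrape job sketch for a plain Prometheus configuration, using pod discovery to find the metrics port. The relabeling shown is an assumption about a generic setup, not NGF's documented config; Operator-based stacks would use a PodMonitor instead:

```yaml
scrape_configs:
- job_name: nginx-gateway-fabric
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names: [nginx-gateway]
  relabel_configs:
  # Keep only containers exposing the NGF metrics port (9113 by default)
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    action: keep
    regex: "9113"
```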

Troubleshooting

If an HTTPRoute shows Accepted: False, the most common cause is a mismatch between the route's parentRefs and the actual Gateway name or namespace. Check the route's status conditions for the specific reason:

kubectl describe httproute app-route

Look at the Conditions section under Status.Parents. Common issues include:

  • InvalidParentRef: the Gateway name in parentRefs does not match any existing Gateway. Double-check the name and namespace
  • NotAllowedByListeners: the Gateway's allowedRoutes restricts which namespaces can attach routes. Either move the HTTPRoute to the allowed namespace or update the Gateway to allow routes from All namespaces
  • BackendNotFound: the target Service does not exist or is in a different namespace without a ReferenceGrant

If the Gateway itself shows Programmed: False, the controller pod may not be running. Check the controller logs:

kubectl logs -n nginx-gateway -l app.kubernetes.io/name=nginx-gateway-fabric --tail=50

Port conflicts are another gotcha. If another service already binds to the same NodePort, the Gateway will fail to program. Check for conflicting services with kubectl get svc --all-namespaces and look for duplicate port assignments.

Frequently Asked Questions

Is Ingress deprecated?

No, Ingress is not deprecated as of Kubernetes 1.35. It remains a stable GA API and will continue to work. However, the Kubernetes SIG-Network team has stated that no new features will be added to the Ingress spec. All active development is happening on the Gateway API. For new clusters, Gateway API is the recommended approach. Existing Ingress setups will keep working, but you should plan a migration path when your team is ready.

Which Gateway controller should I use?

It depends on what you are already running. If you use Cilium as your CNI, its built-in Gateway API support avoids deploying a separate controller. For Istio service mesh users, Istio's Gateway implementation integrates naturally with the mesh. NGINX Gateway Fabric (used in this guide) is a solid choice for teams that want a standalone controller with familiar NGINX performance characteristics. Envoy Gateway is another strong option, especially if you want Envoy's advanced observability features. All of these implement the same Gateway API spec, so your HTTPRoute manifests are portable between them.

Can I run Gateway API and Ingress together?

Yes, they coexist without conflict. Gateway API controllers and Ingress controllers are separate deployments that watch different API resources. You can run both simultaneously, which is the recommended approach during migration. Route traffic through Ingress for existing services while onboarding new services onto Gateway API. Once all routes are migrated, decommission the Ingress controller. The only caveat: don't configure both an Ingress and an HTTPRoute for the same hostname and path, because the two controllers would compete for the same traffic.
