I woke up on March 25th to a Slack message from our security team: “ingress-nginx is EOL as of yesterday. Timeline for migration?”
I had been ignoring this for months. The retirement was announced back in November 2025, but it felt distant. Now it was real. No more CVE patches. No more bug fixes. The clock was ticking.
What Actually Happened
On March 24, 2026, Kubernetes SIG Network and the Security Response Committee officially retired ingress-nginx. The project is done. Container images and Helm charts will stay available (they’re not deleting anything), but there will be no new releases. If a critical vulnerability drops tomorrow, you’re on your own.
This isn’t a deprecation warning you can ignore for three releases. This is “the maintainers have left the building.”
Taking Stock
First thing I did was audit what we actually had running. Across three clusters:
kubectl get ingress -A -o json | jq -r '.items[] | [.metadata.namespace, .metadata.name, (.spec.rules[]?.host // "no-host")] | @tsv' | sort
Output: 6 Ingress resources across 4 namespaces. Two of them had annotations I hadn’t touched in over a year. One had a configuration-snippet that made me wince.
kubectl get ingress -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}' | wc -l
# 6
Not terrible. But some of those had complex configs: rate limiting, custom headers, CORS, SSL redirect logic buried in annotations.
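To figure out which Ingresses need real translation work, it helps to pull every nginx-specific annotation out of the dump. A sketch with jq; the sample JSON stands in for `kubectl get ingress -A -o json` output, and the names in it are made up:

```shell
# Stand-in for: kubectl get ingress -A -o json
ingress_dump='{
  "items": [
    {
      "metadata": {
        "namespace": "prod",
        "name": "api-ingress",
        "annotations": {
          "nginx.ingress.kubernetes.io/limit-rps": "10",
          "kubernetes.io/ingress.class": "nginx"
        }
      }
    }
  ]
}'

# One line per nginx.ingress.kubernetes.io/* annotation: namespace/name <TAB> key
echo "$ingress_dump" | jq -r '
  .items[]
  | .metadata as $m
  | ($m.annotations // {})
  | to_entries[]
  | select(.key | startswith("nginx.ingress.kubernetes.io/"))
  | "\($m.namespace)/\($m.name)\t\(.key)"'
```

Anything this prints is behavior you have to translate by hand; class-selection annotations like `kubernetes.io/ingress.class` are filtered out.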
Picking a Replacement
I already had Gateway API running in two clusters (wrote about that migration previously). For the third cluster, which was older and running on bare metal, I had three options:
- Gateway API with Envoy Gateway - my preference
- Traefik - solid, well maintained
- HAProxy Ingress - if I wanted to stay close to the Ingress resource model
I went with Envoy Gateway for consistency. If you’re already on Traefik or Contour, just stay there. The goal here is getting off ingress-nginx, not starting a religious war about proxies.
Installing Envoy Gateway
helm install eg oci://docker.io/envoyproxy/gateway-helm \
--version v1.3.0 \
--namespace envoy-gateway-system \
--create-namespace
Verify it’s running:
kubectl -n envoy-gateway-system get pods
# NAME                            READY   STATUS    RESTARTS   AGE
# envoy-gateway-5d89f7c5b6-x2k4m  1/1     Running   0          45s
Then create a GatewayClass and Gateway:
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
  namespace: envoy-gateway-system
spec:
  gatewayClassName: eg
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: wildcard-tls
    allowedRoutes:
      namespaces:
        from: All
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
The Actual Migration, Service by Service
Simple Services (4 of 6)
Four of my Ingress resources were straightforward: host-based routing, TLS termination, nothing fancy. These mapped cleanly to HTTPRoute:
Before (Ingress):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 8080
After (HTTPRoute):
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
  - name: main-gateway
    namespace: envoy-gateway-system
  hostnames:
  - api.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: api-svc
      port: 8080
With Gateway API, the HTTP-to-HTTPS redirect is configured once, as a route on the Gateway's HTTP listener, instead of as an annotation on every single Ingress. One less thing to forget.
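Concretely, the redirect is one small catch-all HTTPRoute bound to the HTTP listener. A sketch assuming the main-gateway from earlier; the route name is my own invention:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: https-redirect          # hypothetical name
  namespace: envoy-gateway-system
spec:
  parentRefs:
  - name: main-gateway
    sectionName: http           # attach only to the port-80 listener
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
```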
I applied these four in parallel, tested each one, then deleted the old Ingress resources:
for route in api-route dashboard-route docs-route webhook-route; do
kubectl apply -f "${route}.yaml"
done
# test each endpoint
for host in api.example.com dash.example.com docs.example.com hooks.example.com; do
curl -s -o /dev/null -w "%{http_code} ${host}\n" "https://${host}/healthz"
done
All returned 200. Deleted the old Ingress objects.
The Tricky One: Rate Limiting
One service had rate limiting via nginx annotations:
annotations:
  nginx.ingress.kubernetes.io/limit-rps: "10"
  nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"
With Envoy Gateway, rate limiting requires a BackendTrafficPolicy:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: api-ratelimit
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: public-api-route
  rateLimit:
    type: Global
    global:
      rules:
      - clientSelectors:
        - headers:
          - name: x-forwarded-for
            type: Distinct
        limit:
          requests: 10
          unit: Second
This is more verbose but also more powerful. You can rate limit by header, by source IP, by path. The nginx annotation approach always felt like duct tape. One caveat: Global rate limiting depends on Envoy Gateway's rate limit service being enabled (it's backed by Redis), so budget time for that bit of setup too.
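To illustrate the header option, the same policy shape can key the limit off a request header value instead of distinct client IPs. A sketch only; the `x-api-tier` header is invented for the example:

```yaml
rateLimit:
  type: Global
  global:
    rules:
    # Only requests carrying x-api-tier: free count against this bucket
    - clientSelectors:
      - headers:
        - name: x-api-tier
          value: free
      limit:
        requests: 100
        unit: Minute
```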
The Ugly One: Configuration Snippets
The last service had a configuration-snippet annotation injecting raw nginx config:
annotations:
  nginx.ingress.kubernetes.io/configuration-snippet: |
    more_set_headers "X-Frame-Options: DENY";
    more_set_headers "X-Content-Type-Options: nosniff";
    if ($request_uri ~* "^/old-path") {
      return 301 https://$host/new-path;
    }
This was the one I dreaded. Both pieces translate into HTTPRoute filters: a RequestRedirect filter for the old-path redirect, and a ResponseHeaderModifier filter for the security headers (Envoy Gateway's SecurityPolicy is aimed at auth and CORS, not arbitrary header injection):
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: legacy-redirect
spec:
  parentRefs:
  - name: main-gateway
    namespace: envoy-gateway-system
  hostnames:
  - app.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /old-path
    filters:
    - type: RequestRedirect
      requestRedirect:
        path:
          type: ReplaceFullPath
          replaceFullPath: /new-path
        statusCode: 301
  - matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: ResponseHeaderModifier
      responseHeaderModifier:
        set:
        - name: X-Frame-Options
          value: DENY
        - name: X-Content-Type-Options
          value: nosniff
    backendRefs:
    - name: app-svc
      port: 8080
The redirect is declarative instead of a regex inside an if block, and the headers live on the route that serves the traffic instead of in an opaque nginx snippet.
The Gotcha That Cost Me Two Hours
After migration, one service started returning 503s intermittently. Logs showed the backend was healthy. Turned out the issue was externalTrafficPolicy: Local on the old nginx LoadBalancer Service combined with how Envoy Gateway creates its own Service.
The fix was to make sure the new Envoy Gateway Service had the right externalTrafficPolicy and that my nodes had the correct health check configuration. If you’re on bare metal with MetalLB, double check your L2 advertisement config after swapping controllers.
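The check itself reduces to reading the policy off the Service spec. An offline sketch with jq; against a live cluster the input would come from `kubectl -n envoy-gateway-system get svc <name> -o json`, and the manifest below is a made-up stand-in:

```shell
# Stand-in for the Service JSON kubectl would return
svc_json='{"spec": {"type": "LoadBalancer", "externalTrafficPolicy": "Local"}}'

# externalTrafficPolicy defaults to Cluster when the field is unset
policy=$(echo "$svc_json" | jq -r '.spec.externalTrafficPolicy // "Cluster"')
echo "$policy"
```

`Local` preserves client source IPs but only routes to nodes that actually host an Envoy pod, which is exactly the interaction that bit me.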
kubectl -n envoy-gateway-system get svc
# Check the EXTERNAL-IP and TYPE
kubectl describe svc -n envoy-gateway-system envoy-main-gateway
Cleanup
Once everything was confirmed working for 48 hours, I removed ingress-nginx:
helm uninstall ingress-nginx -n ingress-nginx
kubectl delete namespace ingress-nginx
Then cleaned up the old IngressClass:
kubectl delete ingressclass nginx
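Before that last delete, it's worth proving nothing still references the class. A jq sketch against a captured dump; live, pipe in `kubectl get ingress -A -o json` instead:

```shell
# Stand-in for: kubectl get ingress -A -o json after the migration
dump='{"items": []}'

# Count Ingresses still pinned to the nginx class; must be 0 before deleting it
remaining=$(echo "$dump" | jq '[.items[] | select(.spec.ingressClassName == "nginx")] | length')
echo "$remaining"
```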
DNS Cutover
The part people forget: if your old nginx LoadBalancer had a different external IP than the new Envoy Gateway one, you need to update DNS. I use external-dns with annotation-based filtering, so I just had to make sure the new Gateway Service had the right annotations:
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "*.example.com"
If you’re managing DNS manually, update your A records to point to the new LoadBalancer IP.
What I’d Do Differently
- Start earlier. The November announcement gave us four months. I used about two days of that. Don’t be me.
- Test rate limiting first. The simple services are easy. The complex annotation-heavy ones take 80% of the time.
- Run both controllers in parallel. I did this and it saved me. Keep ingress-nginx running while you validate the new routes. Just don’t delete the old Ingress resources until the new routes are confirmed working.
- Check your monitoring. If you had nginx-specific Prometheus metrics (like nginx_ingress_controller_requests), those are gone now. Envoy Gateway exposes different metrics. Update your dashboards before you lose visibility.
Final Thoughts
Ingress-nginx served the community well for years. But it’s done now, and the longer you wait, the riskier your clusters get. The migration took me a weekend for six services. If you have dozens, start planning now.
The Gateway API is genuinely better. More expressive, more composable, less annotation spaghetti. If this retirement is what finally pushes your team to adopt it, that’s a net positive.
Don’t wait for the first unpatched CVE to make this urgent.