This repository was archived by the owner on Mar 24, 2026. It is now read-only.
What happened:
We have been running ingress-nginx for a while. Recently, ingress became unresponsive, and the cause was traced to the MaxMind GeoIP2 database file downloads. I now download the files to an EFS volume and mount it at the GeoIP directory using a PersistentVolumeClaim. The pods run, but one of them restarts frequently (roughly every 5 minutes), while the other appears stable.
What you expected to happen:
The pods should start and keep running. The restarts cause outages, since all traffic goes through the ingress.
What do you think went wrong?
It seemed the resources might not be sufficient, so I increased them to cpu: 500m and memory: 1Gi and set requests equal to limits so the QoS class is Guaranteed. However, this has not fixed the issue. I also upgraded the ingress controller (Helm chart 4.11.5 to 4.14.0).
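One more thing worth ruling out, given the probe failures in the events further down: the kubelet probes use a 1-second timeout, and /healthz responses can exceed that while the controller is reloading or re-reading the GeoIP databases from EFS. Below is a minimal sketch of relaxed probe settings, assuming the chart's controller.livenessProbe / controller.readinessProbe values; check the defaults for your chart version before applying:

```yaml
controller:
  livenessProbe:
    # timeoutSeconds defaults to 1 in the chart; EFS-backed reads can be slower.
    timeoutSeconds: 5
    periodSeconds: 10
    failureThreshold: 5
  readinessProbe:
    timeoutSeconds: 5
    periodSeconds: 10
    failureThreshold: 3
```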
NGINX Ingress controller version (exec into the pod and run /nginx-ingress-controller --version):
NGINX Ingress controller
Release: v1.14.0
Build: 52c0a83
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.27.1
Kubernetes version (use kubectl version):
Client Version: v1.33.3
Kustomize Version: v5.6.0
Server Version: v1.33.5-eks-3cfe0ce
Environment:
Cloud provider or hardware configuration: Amazon EKS
OS (e.g. from /etc/os-release): Alpine Linux 3.22.2
Kernel (e.g. uname -a): Linux ingress-nginx-controller-fd76f6c5d-sh554 6.12.40-64.114.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Aug 26 05:26:24 UTC 2025 x86_64 Linux
Install tools: Amazon EKS "create cluster"
How was the ingress-nginx-controller installed: helm
helm ls -A | grep -i ingress
ingress-nginx ingress-nginx 139 2025-11-20 19:17:56.531561565 +0000 UTC deployed ingress-nginx-4.14.0 1.14.0
helm -n <ingresscontrollernamespace> get values <helmreleasename>
USER-SUPPLIED VALUES:
controller:
  addHeaders:
    X-City: $asd_city
    X-TimeZone: $asd_timezone
    X-ZipCode: $asd_zipcode
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/name
            operator: In
            values:
            - ingress-nginx
          - key: app.kubernetes.io/instance
            operator: In
            values:
            - ingress-nginx
          - key: app.kubernetes.io/component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  autoscaling:
    enabled: false
    maxReplicas: 4
    minReplicas: 2
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 80
  config:
    allow-snippet-annotations: "true"
    enable-modsecurity: "true"
    enable-owasp-modsecurity-crs: "true"
    geoip2-autoreload-in-minutes: 10080
    http-snippet: |
      real_ip_header X-Forwarded-For;
      real_ip_recursive on;
      set_real_ip_from 10.254.0.0/16;
      set_real_ip_from 103.115.8.0/24;
      map $http_x_zipcode $asd_zipcode {
        default $http_x_zipcode;
        "" $geoip2_postal_code;
      }
      map $http_x_timezone $asd_timezone {
        default $http_x_timezone;
        "" $geoip2_time_zone;
      }
      map $http_x_city $asd_city {
        default $http_x_city;
        "" $geoip2_city;
      }
    log-format-escape-json: "true"
    log-format-upstream: '{ "proxy_protocol_addr": "$proxy_protocol_addr","remote_addr":
      "$remote_addr","remote_port": "$remote_port", "remote_user": "$remote_user","time_local":
      "$time_local","time_iso8601": "$time_iso8601", "status": "$status", "body_bytes_sent":
      "$body_bytes_sent", "http_referer": "$http_referer", "http_user_agent": "$http_user_agent",
      "request_length": "$request_length", "request_time": "$request_time", "proxy_upstream_name":
      "$proxy_upstream_name", "proxy_alternative_upstream_name": "$proxy_alternative_upstream_name",
      "upstream_addr": "$upstream_addr", "upstream_bytes_received": "$upstream_bytes_received",
      "upstream_bytes_sent": "$upstream_bytes_sent", "upstream_cache_status": "$upstream_cache_status",
      "upstream_connect_time": "$upstream_connect_time", "upstream_header_time": "$upstream_header_time",
      "upstream_response_length": "$upstream_response_length", "upstream_response_time":
      "$upstream_response_time", "upstream_status": "$upstream_status", "query_string":
      "$query_string", "bytes_sent": "$bytes_sent", "connection": "$connection", "connection_requests":
      "$connection_requests", "connection_time": "$connection_time", "content_length":
      "$content_length", "content_type": "$content_type", "request_uri": "$request_uri",
      "host": "$host", "hostname": "$hostname", "https": "$https", "is_args": "$is_args",
      "limit_rate": "$limit_rate", "msec": "$msec", "nginx_version": "$nginx_version",
      "pid": "$pid", "pipe": "$pipe", "proxy_protocol_port": "$proxy_protocol_port",
      "proxy_protocol_server_addr": "$proxy_protocol_server_addr", "proxy_protocol_server_port":
      "$proxy_protocol_server_port", "nginx_version": "$nginx_version", "request_body":
      "$request_body", "request_completion": "$request_completion", "request_id":
      "$request_id", "request_method": "$request_method", "scheme": "$scheme", "server_name":
      "$server_name", "server_port": "$server_port", "server_protocol": "$server_protocol",
      "uri": "$uri", "upstream_status": "$upstream_status", "req_id": "$req_id", "namespace":
      "$namespace", "ingress_name": "$ingress_name", "service_name": "$service_name",
      "service_port": "$service_port", "proxy_add_x_forwarded_for": "$proxy_add_x_forwarded_for",
      "geoip_country_code": "$geoip2_country_code", "geoip_country_name": "$geoip2_country_name",
      "geoip_continent_code": "$geoip2_continent_code","geoip_continent_name": "$geoip2_continent_name",
      "geoip_city_country_code": "$geoip2_city_country_code","geoip_city_country_name":
      "$geoip2_city_country_name", "geoip_city": "$geoip2_city","geoip_postal_code":"$geoip2_postal_code","geoip_dma_code":
      "$geoip2_dma_code", "geoip_latitude": "$geoip2_latitude", "geoip_longitude":
      "$geoip2_longitude", "geoip_time_zone": "$geoip2_time_zone","geoip_region_code":
      "$geoip2_region_code","geoip_region_name": "$geoip2_region_name", "geoip_subregion_code":
      "$geoip2_subregion_code","geoip_subregion_name": "$geoip2_subregion_name", "geoip_org":"$geoip2_org","geoip_asn":"$geoip2_asn",
      "request_headers": "$request_headers","response_headers": "$response_headers",
      "query_params": "$query_params" }'
    plugins: parse_nested_details, rewrite, header_filter
    proxy-body-size: 16m
    proxy-buffer-size: 16k
    server-snippet: |
      set $request_headers '';
      set $response_headers '';
      set $query_params '';
    use-geoip2: "true"
    use-gzip: "true"
  extraArgs:
    maxmind-edition-ids: GeoLite2-City,GeoLite2-ASN,GeoLite2-Country
  extraVolumeMounts:
  - mountPath: /etc/nginx/modsecurity/modsecurity.conf
    name: vol-modsecurity
    readOnly: true
    subPath: modsecurity.conf
  - mountPath: /etc/nginx/lua/plugins/shared-code
    name: vol-shared-code
    readOnly: true
  - mountPath: /etc/nginx/lua/plugins/parse_nested_details
    name: vol-parse-nested-details
    readOnly: true
  - mountPath: /etc/nginx/lua/plugins/rewrite
    name: vol-rewrite
    readOnly: true
  - mountPath: /etc/nginx/lua/plugins/header_filter
    name: vol-header-filter
    readOnly: true
  - mountPath: /etc/ingress-controller/geoip
    name: vol-efs-geoip
  extraVolumes:
  - configMap:
      name: modsecurity
    name: vol-modsecurity
  - configMap:
      name: shared-code
    name: vol-shared-code
  - configMap:
      name: parse-nested-details
    name: vol-parse-nested-details
  - configMap:
      name: rewrite
    name: vol-rewrite
  - configMap:
      name: header-filter
    name: vol-header-filter
  - name: vol-efs-geoip
    persistentVolumeClaim:
      claimName: efs-geoip-pvc
  ingressClassResource:
    default: true
    name: nginx
  maxmindLicenseKey: ""
  metrics:
    enabled: true
  proxySetHeaders:
    X-City: $asd_city
    X-Forwarded-For: $proxy_add_x_forwarded_for
    X-TimeZone: $asd_timezone
    X-ZipCode: $asd_zipcode
  replicaCount: "2"
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 1Gi
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    externalTrafficPolicy: Local
  watchIngressWithoutClass: true
defaultBackend:
  autoscaling:
    enabled: false
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 80
  enabled: true
  resources:
    limits:
      cpu: 200m
      memory: 512Mi
    requests:
      cpu: 200m
      memory: 512Mi
environmentType: staging
hostname: ""
Current State of the controller:
kubectl describe ingressclasses
Name: nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.14.0
helm.sh/chart=ingress-nginx-4.14.0
Annotations: ingressclass.kubernetes.io/is-default-class: true
meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
Controller: k8s.io/ingress-nginx
Events:
kubectl -n <ingresscontrollernamespace> get all -A -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/geoip-downloader-1763620045134-2lbg4 0/1 Completed 0 15h 10.254.114.171 ip-10-254-75-251.us-west-2.compute.internal
pod/grafana-59f5fd6866-8vzrr 1/1 Running 0 42d 10.254.220.67 ip-10-254-225-78.us-west-2.compute.internal
pod/ingress-nginx-controller-fd76f6c5d-7kxqp 1/1 Running 0 146m 10.254.243.196 ip-10-254-234-43.us-west-2.compute.internal
pod/ingress-nginx-controller-fd76f6c5d-sh554 1/1 Running 21 (115s ago) 146m 10.254.132.155 ip-10-254-191-230.us-west-2.compute.internal
pod/ingress-nginx-defaultbackend-7447b8db4c-pcgkh 1/1 Running 0 146m 10.254.223.127 ip-10-254-197-192.us-west-2.compute.internal
pod/prometheus-server-c5d6988c6-d57c5 1/1 Running 0 42d 10.254.83.141 ip-10-254-86-166.us-west-2.compute.internal
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/grafana NodePort 172.20.119.21 3000:31622/TCP 296d app.kubernetes.io/name=grafana,app.kubernetes.io/part-of=ingress-nginx
service/ingress-nginx-controller LoadBalancer 172.20.25.76 a3ba15ac2e25d45d99c015052083f908-805e9b713adadc2e.elb.us-west-2.amazonaws.com 80:31112/TCP,443:32665/TCP 621d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission ClusterIP 172.20.161.219 443/TCP 621d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-metrics ClusterIP 172.20.193.209 10254/TCP 621d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-defaultbackend ClusterIP 172.20.174.103 80/TCP 621d app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/prometheus-server NodePort 172.20.67.136 9090:30858/TCP 297d app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/grafana 1/1 1 1 296d grafana grafana/grafana app.kubernetes.io/name=grafana,app.kubernetes.io/part-of=ingress-nginx
deployment.apps/ingress-nginx-controller 2/2 2 2 2d20h controller registry.k8s.io/ingress-nginx/controller:v1.14.0@sha256:e4127065d0317bd11dc64c4dd38dcf7fb1c3d72e468110b4086e636dbaac943d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
deployment.apps/ingress-nginx-defaultbackend 1/1 1 1 621d ingress-nginx-default-backend registry.k8s.io/defaultbackend-amd64:1.5 app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
deployment.apps/prometheus-server 1/1 1 1 297d prometheus prom/prometheus app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=ingress-nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/grafana-59f5fd6866 1 1 1 296d grafana grafana/grafana app.kubernetes.io/name=grafana,app.kubernetes.io/part-of=ingress-nginx,pod-template-hash=59f5fd6866
replicaset.apps/ingress-nginx-controller-5f69b74565 0 0 0 24h controller registry.k8s.io/ingress-nginx/controller:v1.11.5@sha256:a1cbad75b0a7098bf9325132794dddf9eef917e8a7fe246749a4cea7ff6f01eb app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5f69b74565
replicaset.apps/ingress-nginx-controller-64c7d5fb67 0 0 0 22h controller registry.k8s.io/ingress-nginx/controller:v1.11.5@sha256:a1cbad75b0a7098bf9325132794dddf9eef917e8a7fe246749a4cea7ff6f01eb app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=64c7d5fb67
replicaset.apps/ingress-nginx-controller-75dcb8d6f6 0 0 0 17h controller registry.k8s.io/ingress-nginx/controller:v1.11.5@sha256:a1cbad75b0a7098bf9325132794dddf9eef917e8a7fe246749a4cea7ff6f01eb app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=75dcb8d6f6
replicaset.apps/ingress-nginx-controller-764bc6f475 0 0 0 154m controller registry.k8s.io/ingress-nginx/controller:v1.11.5@sha256:a1cbad75b0a7098bf9325132794dddf9eef917e8a7fe246749a4cea7ff6f01eb app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=764bc6f475
replicaset.apps/ingress-nginx-controller-7c56b9bf95 0 0 0 3h11m controller registry.k8s.io/ingress-nginx/controller:v1.11.5@sha256:a1cbad75b0a7098bf9325132794dddf9eef917e8a7fe246749a4cea7ff6f01eb app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7c56b9bf95
replicaset.apps/ingress-nginx-controller-86475d6576 0 0 0 14h controller registry.k8s.io/ingress-nginx/controller:v1.11.5@sha256:a1cbad75b0a7098bf9325132794dddf9eef917e8a7fe246749a4cea7ff6f01eb app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=86475d6576
replicaset.apps/ingress-nginx-controller-9678b8c8b 0 0 0 2d20h controller registry.k8s.io/ingress-nginx/controller:v1.11.5@sha256:a1cbad75b0a7098bf9325132794dddf9eef917e8a7fe246749a4cea7ff6f01eb app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=9678b8c8b
replicaset.apps/ingress-nginx-controller-bcdd69c9c 0 0 0 21h controller registry.k8s.io/ingress-nginx/controller:v1.11.5@sha256:a1cbad75b0a7098bf9325132794dddf9eef917e8a7fe246749a4cea7ff6f01eb app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=bcdd69c9c
replicaset.apps/ingress-nginx-controller-bf96b54b7 0 0 0 3h30m controller registry.k8s.io/ingress-nginx/controller:v1.11.5@sha256:a1cbad75b0a7098bf9325132794dddf9eef917e8a7fe246749a4cea7ff6f01eb app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=bf96b54b7
replicaset.apps/ingress-nginx-controller-fd6b46d65 0 0 0 3h1m controller registry.k8s.io/ingress-nginx/controller:v1.11.5@sha256:a1cbad75b0a7098bf9325132794dddf9eef917e8a7fe246749a4cea7ff6f01eb app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=fd6b46d65
replicaset.apps/ingress-nginx-controller-fd76f6c5d 2 2 2 146m controller registry.k8s.io/ingress-nginx/controller:v1.14.0@sha256:e4127065d0317bd11dc64c4dd38dcf7fb1c3d72e468110b4086e636dbaac943d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=fd76f6c5d
replicaset.apps/ingress-nginx-defaultbackend-56b8646758 0 0 0 209d ingress-nginx-default-backend registry.k8s.io/defaultbackend-amd64:1.5 app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=56b8646758
replicaset.apps/ingress-nginx-defaultbackend-5cb8859b4b 0 0 0 154m ingress-nginx-default-backend registry.k8s.io/defaultbackend-amd64:1.5 app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5cb8859b4b
replicaset.apps/ingress-nginx-defaultbackend-7447b8db4c 1 1 1 146m ingress-nginx-default-backend registry.k8s.io/defaultbackend-amd64:1.5 app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7447b8db4c
replicaset.apps/ingress-nginx-defaultbackend-75f9948c44 0 0 0 607d ingress-nginx-default-backend registry.k8s.io/defaultbackend-amd64:1.5 app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=75f9948c44
replicaset.apps/ingress-nginx-defaultbackend-7749f97c47 0 0 0 3h30m ingress-nginx-default-backend registry.k8s.io/defaultbackend-amd64:1.5 app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7749f97c47
replicaset.apps/ingress-nginx-defaultbackend-7b64948c8f 0 0 0 621d ingress-nginx-default-backend registry.k8s.io/defaultbackend-amd64:1.5 app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7b64948c8f
replicaset.apps/prometheus-server-c5d6988c6 1 1 1 297d prometheus prom/prometheus app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=ingress-nginx,pod-template-hash=c5d6988c6
NAME SCHEDULE TIMEZONE SUSPEND ACTIVE LAST SCHEDULE AGE CONTAINERS IMAGES SELECTOR
cronjob.batch/geoip-downloader 0 2 * * * False 0 15h geoip-downloader curlimages/curl:latest
NAME STATUS COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR
job.batch/geoip-downloader-1763620045134 Complete 1/1 8s 15h geoip-downloader curlimages/curl:latest batch.kubernetes.io/controller-uid=fdaf4f34-af42-43fd-b678-b29868b256cb
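For context, here is a rough sketch of what the geoip-downloader CronJob above might look like. Only the name, schedule, image, and PVC come from the output here; the MaxMind download URL, Secret name, and extraction details are assumptions:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: geoip-downloader
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: geoip-downloader
            image: curlimages/curl:latest
            command: ["/bin/sh", "-c"]
            args:
            - |
              # Hypothetical download loop; the endpoint and secret are assumptions.
              set -e
              for ed in GeoLite2-City GeoLite2-ASN GeoLite2-Country; do
                curl -fsSL "https://download.maxmind.com/app/geoip_download?edition_id=${ed}&license_key=${MAXMIND_LICENSE_KEY}&suffix=tar.gz" -o "/tmp/${ed}.tar.gz"
                tar -xzf "/tmp/${ed}.tar.gz" --strip-components=1 -C /geoip
              done
            env:
            - name: MAXMIND_LICENSE_KEY
              valueFrom:
                secretKeyRef:
                  name: maxmind-license   # assumed Secret name
                  key: license-key
            volumeMounts:
            - name: geoip
              mountPath: /geoip
          volumes:
          - name: geoip
            persistentVolumeClaim:
              claimName: efs-geoip-pvc
```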
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
Name: ingress-nginx-controller-fd76f6c5d-sh554
Namespace: ingress-nginx
Priority: 0
Priority Class Name: default
Service Account: ingress-nginx
Node: ip-10-254-191-230.us-west-2.compute.internal/10.254.191.230
Start Time: Thu, 20 Nov 2025 11:18:59 -0800
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.14.0
helm.sh/chart=ingress-nginx-4.14.0
pod-template-hash=fd76f6c5d
Annotations:
Status: Running
IP: 10.254.132.155
IPs:
IP: 10.254.132.155
Controlled By: ReplicaSet/ingress-nginx-controller-fd76f6c5d
Containers:
controller:
Container ID: containerd://f333d7fd85da9fd3a0443a79370d8fb953d38fa133397ab0b063bb1b5c375ccb
Image: registry.k8s.io/ingress-nginx/controller:v1.14.0@sha256:e4127065d0317bd11dc64c4dd38dcf7fb1c3d72e468110b4086e636dbaac943d
Image ID: registry.k8s.io/ingress-nginx/controller@sha256:e4127065d0317bd11dc64c4dd38dcf7fb1c3d72e468110b4086e636dbaac943d
Ports: 80/TCP, 443/TCP, 10254/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
SeccompProfile: RuntimeDefault
Args:
/nginx-ingress-controller
--default-backend-service=$(POD_NAMESPACE)/ingress-nginx-defaultbackend
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-nginx-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
--watch-ingress-without-class=true
--enable-metrics=true
--maxmind-edition-ids=GeoLite2-City,GeoLite2-ASN,GeoLite2-Country
State: Running
Started: Thu, 20 Nov 2025 13:43:10 -0800
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 20 Nov 2025 13:37:29 -0800
Finished: Thu, 20 Nov 2025 13:43:10 -0800
Ready: False
Restart Count: 21
Limits:
cpu: 500m
memory: 1Gi
Requests:
cpu: 500m
memory: 1Gi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-fd76f6c5d-sh554 (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
AWS_STS_REGIONAL_ENDPOINTS: regional
AWS_DEFAULT_REGION: us-west-2
AWS_REGION: us-west-2
AWS_CONTAINER_CREDENTIALS_FULL_URI: http://169.254.170.23/v1/credentials
AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE: /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
Mounts:
/etc/ingress-controller/geoip from vol-efs-geoip (rw)
/etc/nginx/lua/plugins/header_filter from vol-header-filter (ro)
/etc/nginx/lua/plugins/parse_nested_details from vol-parse-nested-details (ro)
/etc/nginx/lua/plugins/rewrite from vol-rewrite (ro)
/etc/nginx/lua/plugins/shared-code from vol-shared-code (ro)
/etc/nginx/modsecurity/modsecurity.conf from vol-modsecurity (ro,path="modsecurity.conf")
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dll48 (ro)
/var/run/secrets/pods.eks.amazonaws.com/serviceaccount from eks-pod-identity-token (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
eks-pod-identity-token:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 85621
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
vol-modsecurity:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: modsecurity
Optional: false
vol-shared-code:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: shared-code
Optional: false
vol-parse-nested-details:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: parse-nested-details
Optional: false
vol-rewrite:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: rewrite
Optional: false
vol-header-filter:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: header-filter
Optional: false
vol-efs-geoip:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: efs-geoip-pvc
ReadOnly: false
kube-api-access-dll48:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal RELOAD 59m (x2 over 59m) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Normal RELOAD 54m (x2 over 54m) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Normal RELOAD 48m (x2 over 48m) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Normal RELOAD 40m (x2 over 40m) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Normal RELOAD 34m (x2 over 34m) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Normal RELOAD 28m (x2 over 28m) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Warning Unhealthy 25m (x150 over 144m) kubelet Liveness probe failed: Get "http://10.254.132.155:10254/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Normal RELOAD 20m (x2 over 20m) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Warning Unhealthy 19m (x165 over 145m) kubelet Readiness probe failed: Get "http://10.254.132.155:10254/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Normal RELOAD 14m (x2 over 14m) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Normal RELOAD 9m1s (x2 over 9m8s) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Normal Pulled 3m32s (x21 over 137m) kubelet Container image "registry.k8s.io/ingress-nginx/controller:v1.14.0@sha256:e4127065d0317bd11dc64c4dd38dcf7fb1c3d72e468110b4086e636dbaac943d" already present on machine
Normal RELOAD 3m20s (x2 over 3m27s) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Normal Killing 81s (x22 over 141m) kubelet Container controller failed liveness probe, will be restarted
Warning Unhealthy 1s (x546 over 141m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
Anything else we need to know:
This is our dev/staging cluster. The production cluster is not experiencing issues and is at an older version (helm chart 4.11.5 / controller 1.11.5).
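Since the restarts correlate with /healthz timeouts rather than OOM kills, it may help to measure read latency on the EFS-backed GeoIP mount and to time the health endpoint directly. These are hedged diagnostics: the .mmdb file name is inferred from the configured edition IDs, and curl availability depends on the controller image (busybox wget may be the fallback):

```
# Time a full read of a GeoIP database from the EFS mount inside the pod.
kubectl -n ingress-nginx exec ingress-nginx-controller-fd76f6c5d-sh554 -- \
  sh -c 'time cat /etc/ingress-controller/geoip/GeoLite2-City.mmdb > /dev/null'

# Time the health endpoint, bypassing the kubelet's 1s probe timeout.
kubectl -n ingress-nginx exec ingress-nginx-controller-fd76f6c5d-sh554 -- \
  sh -c 'time curl -s -o /dev/null http://127.0.0.1:10254/healthz'
```

If either command takes anywhere near a second, the 1s probe timeout is the likely trigger for the restart loop.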
Thanks.
-Arun Thomas
deat_iit@comcast.net
arun.thomas@allstardirectories.com