Upgrading Kubernetes sounds simple… until your application goes down in production.
I’ve seen teams rush upgrades and end up with broken deployments, failed pods, or unexpected downtime.
Over time, I built a simple checklist that I follow every time I upgrade an Amazon EKS cluster — and it has saved me multiple times.
When you upgrade EKS, you're not just upgrading Kubernetes. You're also moving the control plane, the worker node AMIs, and core add-ons like VPC CNI, CoreDNS, and kube-proxy to new versions. If your application is not prepared, users will feel it.
Before touching anything, I always check the release notes for the target Kubernetes version and whether my manifests still use any deprecated or removed APIs. One small API change can break your entire deployment.
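As a quick pre-flight sketch (assuming `kubectl` access to the cluster), these commands surface the current versions and any clients still calling deprecated APIs:

```shell
# Current control-plane and node versions, to plan the version jump.
kubectl version
kubectl get nodes -o wide

# Since Kubernetes 1.19 the API server flags requests to deprecated APIs;
# any series in this metric means something still calls an API slated for removal.
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis
```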
This is where most issues happen. I always verify that every Deployment runs at least two replicas and has a working readiness probe. If you have only one pod, downtime is guaranteed the moment its node is drained.
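One simple way to sweep for single-replica Deployments across all namespaces (the column names here are my own):

```shell
# List every Deployment with its desired replica count; anything showing 1
# will go down when its node is drained during the upgrade.
kubectl get deployments -A \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,REPLICAS:.spec.replicas'
```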
Your deployment strategy matters a lot.
```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1
```
This ensures:
- No downtime
- New pods start before old ones stop
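A PodDisruptionBudget adds a second safety net during node drains. A minimal sketch, assuming your pods carry an `app=web` label (the names here are hypothetical):

```shell
# The eviction API refuses to drain a node if doing so would drop
# the matched pods below min-available.
kubectl create pdb web-pdb --selector=app=web --min-available=1
```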
I never upgrade all nodes at once. Instead, I upgrade one node group at a time, drain it, and confirm workloads are healthy before moving on to the next. This avoids sudden cluster-wide failures.
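With `eksctl` and managed node groups, one-group-at-a-time looks roughly like this (the cluster name, node group name, and version are placeholders, not values from this post):

```shell
# Upgrade a single managed node group, then verify before touching the next.
eksctl upgrade nodegroup --cluster my-cluster --name workers-a --kubernetes-version 1.29

# Confirm the replacement nodes are Ready on the target version.
kubectl get nodes
```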
During the upgrade, I continuously monitor pod restarts, error rates, and request latency. Even small spikes can indicate problems.
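A lightweight way to keep an eye on things from the terminal while your dashboards do the heavy lifting:

```shell
# Stream pod status changes across all namespaces during the rollout.
kubectl get pods -A --watch

# Sort by restart count to spot crash-looping pods quickly.
kubectl get pods -A --sort-by='.status.containerStatuses[0].restartCount'
```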
Never upgrade without a rollback plan. I always keep the previous manifests in version control and know the exact command to roll back each workload. If something breaks, you should be able to recover fast.
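For Deployments, the fastest rollback is usually the built-in rollout history (the deployment name here is hypothetical):

```shell
# Revert to the previous ReplicaSet and wait until the rollback completes.
kubectl rollout undo deployment/web
kubectl rollout status deployment/web
```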
After following this checklist, my upgrades have gone smoothly instead of turning into incidents. EKS upgrades are not risky if you prepare properly.
Most problems come from skipping basics like readiness checks and rolling updates.
Keep it simple. Test first. Monitor everything.
Written by Adarsh Singh — DevOps Engineer