Kubernetes - How to ensure running pods are terminated before new pods are created for a deployment

Hi, I'm Bacon.

I have 4 replicas of a Deployment running that consume from Kafka and do some processing.

When I deploy a new version of the application, 4 new pods are created before the existing pods are removed. During the termination grace period, the consumers rebalance the work across all 8 pods before the old ones exit, and then rebalance again once the original pods are terminated.

This means the data being worked on during the rebalancing is effectively lost: the work is stateful, so nothing meaningful can be done during this handover window.
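For context, this overlap is inherent to the default update strategy. Spelled out, the implicit defaults look roughly like the sketch below (the Deployment name is hypothetical; 25% is the Kubernetes default for both rollingUpdate fields). Old pods sitting in their termination grace period still hold their Kafka group membership, which is why the group keeps rebalancing throughout the rollout.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-consumer          # hypothetical name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate         # the default strategy type
    rollingUpdate:
      maxSurge: 25%             # extra pods allowed above the desired count
      maxUnavailable: 25%       # pods allowed below the desired count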

The ideal solution would be:

  • The running pods are signalled to terminate
  • They save their current state
  • New pods are started and rebalance the Kafka partitions
  • Initial state is fetched by new pods
  • Profit

I am not worrying about how to save/pass the state at the moment, but I would like to avoid the extra rebalancing, and the resulting loss of in-flight work, that comes from having the new pods created before the old ones have terminated.
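Independent of the ordering, the shutdown half of that wish list can be wired into the pod spec. A minimal sketch, assuming the consumer saves its state when it receives SIGTERM (the image and the drain-file path are hypothetical):

spec:
  template:
    spec:
      terminationGracePeriodSeconds: 120            # time budget for the state save
      containers:
        - name: consumer
          image: registry.example.com/consumer:v2   # hypothetical image
          lifecycle:
            preStop:
              exec:
                # hypothetical drain signal, runs before SIGTERM is sent
                command: ["/bin/sh", "-c", "touch /tmp/draining"]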

I appreciate that this means service can be interrupted if the new version has issues. The consumers are far faster than the incoming data and will catch up; being down for multiple hours would be fine.

What configuration can I set in my Deployment manifest to tell Kubernetes to change the order of its deployment strategy?

Chris

The default update strategy is RollingUpdate, which causes the observed behavior. You can set it to Recreate via spec.strategy.type:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-app
spec:
  replicas: 3
  strategy:
    type: Recreate
[...]

From the docs on that strategy:

All existing Pods are killed before new ones are created
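One caveat when changing a live Deployment: the API rejects rollingUpdate parameters while type is Recreate, so any existing rollingUpdate block must be removed in the same update. After editing, the rollout can be applied and watched with standard kubectl (deployment.yaml stands in for whatever file holds the manifest above):

kubectl apply -f deployment.yaml
kubectl rollout status deployment/some-app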
