I have 4 replicas of a deployment running that consume from Kafka and do some processing.
When I deploy a new version of the application, 4 new pods are created before the existing pods are removed. During the termination grace period, the consumers rebalance the work across all 8 pods before the old ones exit, and then rebalance again once the original pods are terminated.
This means the data being processed during the rebalancing is effectively lost: the work is stateful, so nothing meaningful can be done during this handover period.
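To make the churn concrete, here is an illustrative sketch (not the poster's code, and a simplification of Kafka's real assignors, which use range or sticky strategies rather than plain round-robin): it models partition ownership across 4 consumers, then across 8 when the rolling update briefly doubles the group, and counts how many partitions change owner in the first rebalance alone.

```python
def assign(partitions, consumers):
    """Round-robin partitions across consumers; returns {partition: consumer}."""
    return {p: consumers[i % len(consumers)] for i, p in enumerate(partitions)}

partitions = list(range(12))                           # 12 partitions on the topic
old_pods = [f"old-{i}" for i in range(4)]              # the 4 original replicas
all_pods = old_pods + [f"new-{i}" for i in range(4)]   # rolling update: 8 members

before = assign(partitions, old_pods)
during = assign(partitions, all_pods)                  # first rebalance, at 8 members

# Count partitions whose owner changed in the first rebalance alone;
# a second rebalance moves them again when the old pods terminate.
moved = sum(1 for p in partitions if before[p] != during[p])
print(f"{moved} of {len(partitions)} partitions changed owner mid-rollout")
```

Every partition that moves mid-rollout drops its in-flight stateful work twice: once going to a temporary owner, once coming back after the old pods exit.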
The ideal solution would be for the existing pods to terminate before the new ones are created. I am not worried about how to save or pass the state at the moment, but I would like to avoid the rebalancing, and the resulting loss of output, that comes from having the new pods created before the old ones have terminated.
I appreciate that this means service can be interrupted if the new version has issues. The consumers are far faster than the incoming data and will catch up; being down for multiple hours would be fine.
What configuration can I set in my deployment manifest to signal Kubernetes to change the order of its deployment strategy?
The default update strategy is RollingUpdate, which causes the observed behavior. You can switch to Recreate by setting spec.strategy.type, like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-app
spec:
  replicas: 3
  strategy:
    type: Recreate
  [...]
From the docs on that strategy: "All existing Pods are killed before new ones are created."
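To apply and watch the changed rollout order, something like the following works against a live cluster (a sketch only: the manifest filename and the Deployment name some-app are assumptions carried over from the example above):

```shell
# Apply the updated manifest containing strategy.type: Recreate.
kubectl apply -f deployment.yaml

# Watch the rollout: with Recreate, all old pods go to Terminating
# before any new pods appear, so only one rebalance happens.
kubectl rollout status deployment/some-app
kubectl get pods -l app=some-app --watch
```

Expect a window with zero running pods between old and new, which is the trade-off accepted in the question.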