Upgrading Bitnami Kafka Helm Chart from v23 to v24

#bitnami #kafka #kubernetes #helm #jotthatdown

Introduction

This document walks you through upgrading the Bitnami Kafka Helm chart from version 23 to version 24. The major changes in v24 mean you cannot reuse the existing v23 deployment as-is, so some surgery is required to move its data volumes over to the new release.

The process below is based on a HomeLab setup, so some of the values (release names, namespace, storage sizes) may need to be adjusted for your environment.

Process

  1. Deploy Chart v24 with the following values and command:

    image:
      # Ensure this image tag matches the one used by your v23 deployment
      tag: 3.5.1-debian-11-r7
    externalZookeeper:
      servers: zookeeper.porp-zookeeper.svc.cluster.local
    zookeeper:
      enabled: false
    kraft:
      enabled: false
    broker:
      replicaCount: 3
      persistence:
        size: 1Gi
    controller:
      replicaCount: 0
      persistence:
        size: 1Gi
    listeners:
      # The listeners configuration should match the client and inter-broker authentication mechanisms used by your v23 deployment
      client:
        containerPort: 9092
        protocol: PLAINTEXT
        name: CLIENT
        sslClientAuth: ""
      interbroker:
        containerPort: 9094
        protocol: PLAINTEXT
        name: INTERNAL
        sslClientAuth: ""

Save the values above as values.yaml, then run:

    helm install kafka24 bitnami/kafka -f values.yaml -n porp-kafka --version 24.0.3
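
Before moving on, it is worth checking that the new release came up. Assuming the v24 chart names its broker StatefulSet kafka24-broker (which the PVC names used later suggest), you should see three kafka24-broker pods and three small data-kafka24-broker-N claims:

    kubectl get pods -n porp-kafka
    kubectl get pvc -n porp-kafka
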
  2. Ensure that the Reclaim Policy of all three v23 PersistentVolumes is set to Retain

[Screenshot: reclaim_policy (PersistentVolumes with Reclaim Policy set to Retain)]

If they are not set to Retain, you can simply edit each PersistentVolume and change persistentVolumeReclaimPolicy: Delete to persistentVolumeReclaimPolicy: Retain
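
If you prefer the command line, the same change can be made with a patch (replace <pv-name> with each of the volumes backing the v23 claims):

    kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'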

  3. Delete the data-kafka24-broker-0, data-kafka24-broker-1, and data-kafka24-broker-2 PersistentVolumeClaims (this might require setting their finalizers to null)
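
For example (the finalizers patch is only needed if a claim gets stuck in Terminating):

    kubectl delete pvc data-kafka24-broker-0 data-kafka24-broker-1 data-kafka24-broker-2 -n porp-kafka
    kubectl patch pvc data-kafka24-broker-0 -n porp-kafka -p '{"metadata":{"finalizers":null}}'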

  4. Repeat the following steps for each of your replicas:

    REPLICA=0
    OLD_PVC="data-kafka23-${REPLICA}"
    NEW_PVC="data-kafka24-broker-${REPLICA}"
    PV_NAME=$(kubectl get pvc $OLD_PVC -n porp-kafka -o jsonpath="{.spec.volumeName}")
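    # PV_NAME identifies the PersistentVolume backing the old claim; note it down, it is useful in step 7 when claimRef is removed from these PVs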
    NEW_PVC_MANIFEST_FILE="$NEW_PVC.yaml"
    
    # Create new PVC manifest
    kubectl get pvc $OLD_PVC -n porp-kafka -o json | jq "
      .metadata.name = \"$NEW_PVC\"
      | with_entries(
          select([.key] |
            inside([\"metadata\", \"spec\", \"apiVersion\", \"kind\"]))
        )
      | del(
          .metadata.annotations, .metadata.creationTimestamp,
          .metadata.finalizers, .metadata.resourceVersion,
          .metadata.selfLink, .metadata.uid
        )
      " > $NEW_PVC_MANIFEST_FILE

🚨 At this point ensure that the files were created and validate their contents
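
A quick way to check each generated manifest without creating anything is a client-side dry run:

    kubectl apply --dry-run=client -f data-kafka24-broker-0.yaml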

  5. Delete the v23 StatefulSet
  6. Delete all v23 PersistentVolumeClaims
  7. Edit the v23 PersistentVolumes and remove the claimRef object (they should all show as Available afterwards; see the example commands below)

[Screenshot: pvc_available (PersistentVolumes showing status Available)]
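
Roughly, steps 5 to 7 translate into the commands below. The StatefulSet name kafka23 is an assumption based on the release name used here, so confirm it with kubectl get statefulsets -n porp-kafka, and substitute each PV_NAME recorded in step 4 for <pv-name>:

    # Step 5: delete the old StatefulSet (name assumed to be kafka23, verify first)
    kubectl delete statefulset kafka23 -n porp-kafka
    # Step 6: delete the old PersistentVolumeClaims
    kubectl delete pvc data-kafka23-0 data-kafka23-1 data-kafka23-2 -n porp-kafka
    # Step 7: clear claimRef so each PV returns to Available
    kubectl patch pv <pv-name> -p '{"spec":{"claimRef":null}}'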

  8. Apply the files created during step #4

    kubectl apply -f data-kafka24-broker-0.yaml,data-kafka24-broker-1.yaml,data-kafka24-broker-2.yaml

[Screenshot: new_pvcs (the newly created data-kafka24-broker PersistentVolumeClaims)]

  9. Restart the kafka24-broker StatefulSet
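
A rollout restart is enough here (assuming the broker StatefulSet is named kafka24-broker, matching the PVC names used earlier):

    kubectl rollout restart statefulset kafka24-broker -n porp-kafka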

  10. At this point kafka24 should be connected to the volumes previously used by the old deployment. I validated this by listing the topics and confirming that the topic created on the old deployment is still there:

    $ kafka-topics.sh --bootstrap-server localhost:9092 --list
    porp
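
The listing above was run from inside a broker pod; the Bitnami image ships the Kafka CLI tools on the PATH, so (assuming the pod naming above) something like this works:

    kubectl exec -it kafka24-broker-0 -n porp-kafka -- kafka-topics.sh --bootstrap-server localhost:9092 --list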

🚨 As part of this process, any connection strings that reference the old Kafka deployment will have to be updated to point at the new release
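
For example, with the release names used here the bootstrap address changes roughly as follows (the exact service names depend on the chart's naming, so verify with kubectl get svc -n porp-kafka):

    # Old bootstrap servers (v23 release)
    kafka23.porp-kafka.svc.cluster.local:9092
    # New bootstrap servers (v24 release)
    kafka24.porp-kafka.svc.cluster.local:9092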


Jotted down by JotThatDown