
Upgrade to Gluu Server 4.2#

Overview#

The Gluu Server cannot be upgraded with a simple apt-get upgrade. You will need to either use our in-place upgrade script or explicitly install the new version and export/import your data. Find your existing version below for instructions on upgrading to Gluu Server 4.2.

Pre-requisites#

  • Before upgrading, make sure to back up the Gluu container or LDAP LDIF.
  • Upgrades should always be thoroughly scoped and tested on a development environment first.
  • This upgrade process only covers versions 4.0.x and 4.1.x. To upgrade from an earlier version, first upgrade to 4.0.
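
For example, a minimal container backup sketch (the /opt/gluu-server path assumes a default chroot installation; adjust for your environment):

tar -czf gluu-backup-$(date +%F).tar.gz /opt/gluu-server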

Online Upgrade from 4.x to 4.2#

Note

The upgrade script runs on Python 3, so install Python 3 before running it:

  • On CentOS/RHEL: yum install -y python3
  • On Ubuntu/Debian: apt-get update && apt-get install -y python3

The upgrade script downloads all needed software and applications from the internet. You can perform an online upgrade by following these steps:

  • Download the upgrade script:
wget https://raw.githubusercontent.com/GluuFederation/community-edition-package/master/update/4.2.3/upg4xto423.py
  • Execute the script:
python3 upg4xto423.py

Your upgrade directory will be the current directory. The script creates the directories app and ces_current, and writes the Gluu cacerts.
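
For instance, to keep these artifacts together, run the script from a dedicated directory (the directory name below is only an example):

mkdir -p ~/gluu-upgrade && cd ~/gluu-upgrade
wget https://raw.githubusercontent.com/GluuFederation/community-edition-package/master/update/4.2.3/upg4xto423.py
python3 upg4xto423.py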

Warning

This section is under construction.

Overview#

This guide explains how to upgrade the cloud native edition from one version to another.

Upgrade#

Kustomize#

  • Download pygluu-kubernetes.pyz. This package can also be built manually.

  • Move the settings.json that was used to install 4.1 next to ./pygluu-kubernetes.pyz.

  • Run:

    ./pygluu-kubernetes.pyz upgrade
    

Note

Compared to 4.1, 4.2 has a centralized configMap holding the necessary environment variables for all Gluu services. Hence, the per-service configMaps defined previously, such as oxauth-cm, are no longer used. The upgrade process does not delete these unused configMaps, as a rollback to 4.1 might be needed. You may discard them once you have fully confirmed that your deployment functions, as sketched below.
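
If you do clean them up, a sketch like the following can help (namespace and configMap names vary per deployment; oxauth-cm is only one example):

kubectl get configmaps -n <gluu-namespace>
kubectl delete configmap oxauth-cm -n <gluu-namespace>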

Note

The upgrade method cannot install Couchbase. You may be prompted for Couchbase-related settings, but that is only to update your current or new settings.json.

  1. Add a new bucket named gluu_session.

    If you are using a custom couchbase-cluster.yaml, COUCHBASE_CLUSTER_FILE_OVERRIDE is set to Y inside settings.json. We advise upgrading to the new Couchbase operator and couchbase-server 6.6.0. If you stick with the current operator, create two empty files, couchbase-buckets.yaml and couchbase-ephemeral-buckets.yaml, next to your custom couchbase-cluster.yaml.

    Add the following to couchbase-cluster.yaml under the buckets section:

      buckets:
      - name: gluu_session   #DO NOT CHANGE THIS LINE
        type: ephemeral
        memoryQuota: 100 #<-- Change this if necessary
        replicas: 1
        ioPriority: high
        evictionPolicy: nruEviction
        conflictResolution: seqno
        enableFlush: true
        enableIndexReplica: false
        compressionMode: passive
    

    Apply the following yaml in the couchbase namespace:

    cat <<EOF | kubectl apply -f -
    apiVersion: couchbase.com/v2
    kind: CouchbaseEphemeralBucket
    metadata:
      name: gluu-session
      labels:
        cluster: gluu-couchbase # <--- change this to your cluster name i.e. cbgluu
    spec:
      name: gluu_session
      memoryQuota: 100Mi #<-- Change this if necessary
      replicas: 1
      ioPriority: high
      evictionPolicy: nruEviction
      conflictResolution: seqno
      enableFlush: true
      compressionMode: passive
    EOF
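
    You can confirm the bucket resource was created (resource kind per the Couchbase Autonomous Operator 2.x CRDs; the cbns namespace follows this guide's examples):

    kubectl get couchbaseephemeralbuckets -n cbns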
    
  2. Add a new user in couchbase named gluu.

    1. Inside the Couchbase UI, create a group by going to Security --> ADD GROUP. Name it gluu-group and grant it query_select, query_insert, query_update, and query_delete on the buckets gluu, gluu_session, gluu_token, gluu_cache, and gluu_site.

    2. Inside the Couchbase UI, create a user by going to Security --> ADD USER. Name the user gluu and choose a strong password; remember it, as you will be prompted for it later. Note that this is not the superuser (admin) password. Assign the group gluu-group, created in the previous step, to that user.

    3. Create a secret that will hold the gluu user's password in the couchbase namespace:
    kubectl create secret generic gluu-couchbase-user-password --from-literal=password=P@ssw0rd --namespace cbns
    
    4. Apply the following yaml in the couchbase namespace:
    cat <<EOF | kubectl apply -f -
    apiVersion: couchbase.com/v2
    kind: CouchbaseGroup
    metadata:
      name: gluu-group
      labels:
        cluster: CLUSTERNAME # <--- change this to your cluster name i.e cbgluu
    spec:
      roles:
      - name: query_select
        bucket: gluu
      - name: query_select
        bucket: gluu_site
      - name: query_select
        bucket: gluu_user
      - name: query_select
        bucket: gluu_cache
      - name: query_select
        bucket: gluu_token
      - name: query_select
        bucket: gluu_session
    
      - name: query_update
        bucket: gluu
      - name: query_update
        bucket: gluu_site
      - name: query_update
        bucket: gluu_user
      - name: query_update
        bucket: gluu_cache
      - name: query_update
        bucket: gluu_token
      - name: query_update
        bucket: gluu_session
    
      - name: query_insert
        bucket: gluu
      - name: query_insert
        bucket: gluu_site
      - name: query_insert
        bucket: gluu_user
      - name: query_insert
        bucket: gluu_cache
      - name: query_insert
        bucket: gluu_token
      - name: query_insert
        bucket: gluu_session
    
      - name: query_delete
        bucket: gluu
      - name: query_delete
        bucket: gluu_site
      - name: query_delete
        bucket: gluu_user
      - name: query_delete
        bucket: gluu_cache
      - name: query_delete
        bucket: gluu_token
      - name: query_delete
        bucket: gluu_session
    ---
    apiVersion: couchbase.com/v2
    kind: CouchbaseRoleBinding
    metadata:
      name: gluu-role-binding
    spec:
      subjects:
      - kind: CouchbaseUser
        name: gluu
      roleRef:
        kind: CouchbaseGroup
        name: gluu-group
    ---
    apiVersion: couchbase.com/v2
    kind: CouchbaseUser
    metadata:
      name: gluu
      labels:
        cluster: CLUSTERNAME # <--- change this to your cluster name i.e cbgluu
    spec:
      fullName: "Gluu Cloud Native"
      authDomain: local
      authSecret: gluu-couchbase-user-password
    EOF
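
    To verify that the group, role binding, and user were created (same cbns namespace assumption):

    kubectl get couchbasegroups,couchbaserolebindings,couchbaseusers -n cbns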
    
  3. Run:

    ./pygluu-kubernetes.pyz upgrade
    

Note

There is a new health check in 4.2 which may result in Kubernetes rejecting the update of statefulsets, reporting that multiple health checks are defined. This does not affect the upgrade process itself. It is most often seen in oxtrust; after confirming that most services are up, you may have to kubectl delete -f oxtrust.yaml and re-apply kubectl apply -f oxtrust.yaml to re-initiate the statefulset, as shown below.
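
For example (assuming oxtrust.yaml is the manifest file for your oxtrust statefulset):

kubectl delete -f oxtrust.yaml -n <gluu-namespace>
kubectl apply -f oxtrust.yaml -n <gluu-namespace>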

Note

Compared to 4.1, 4.2 has a centralized configMap holding the necessary environment variables for all Gluu services. Hence, the per-service configMaps defined previously, such as oxauth-cm, are no longer used. The upgrade process does not delete these unused configMaps, as a rollback to 4.1 might be needed. You may discard them once you have fully confirmed that your deployment functions.

Helm#

  1. Copy the following yaml into upgrade.yaml and adjust all entries marked below (this variant assumes LDAP persistence):

    apiVersion: v1
    data:
      DOMAIN: FQDN #<-- Change this to your FQDN
      GLUU_CACHE_TYPE: NATIVE_PERSISTENCE #<-- Change this if necessary
      GLUU_CONFIG_ADAPTER: kubernetes
      GLUU_CONFIG_KUBERNETES_NAMESPACE: gluu  #<-- Change this to Gluus namespace
      GLUU_LDAP_URL: opendj:1636
      GLUU_PERSISTENCE_TYPE: ldap
      GLUU_SECRET_ADAPTER: kubernetes
      GLUU_SECRET_KUBERNETES_NAMESPACE: gluu #<-- Change this to Gluus namespace
    kind: ConfigMap
    metadata:
      labels:
        app: gluu-upgrade
      name: upgrade-cm
    ---
    apiVersion: batch/v1
    kind: Job
    metadata:
      labels:
        app: gluu-upgrade
      name: gluu-upgrade-job
    spec:
      template:
        metadata:
          annotations:
             sidecar.istio.io/inject: "false"                
          labels:
            app: gluu-upgrade
        spec:
          containers:
          - args:
            - --source
            - "4.1"
            - --target
            - "4.2"
            envFrom:
            - configMapRef:
                name: upgrade-cm
            image: gluufederation/upgrade:4.2.3_03
            name: gluu-upgrade-job
          restartPolicy: Never
    
  2. Clone the latest stable manifests:

    git clone --recursive --depth 1 --branch 4.2 https://github.com/GluuFederation/cloud-native-edition && cd cloud-native-edition/pygluu/kubernetes/templates/helm/gluu
    
  3. Modify all images inside the main values.yaml to the latest images for the upgrade target version. Also make sure the other options in your current values.yaml are carried over correctly to the new values.yaml. Move the old settings.json that was used in the 4.1 installation into the directory where pygluu-kubernetes exists and execute the following command:

    ./pygluu-kubernetes.pyz upgrade-values-yaml
    

    Go over your values.yaml and make sure it reflects all current information.
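
    A quick way to double-check the image bumps is to list them (a sketch; the tags used in this guide are the 4.2.3_0x series):

    grep -n "image:" values.yaml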

  4. Inside values.yaml set global.upgrade.enabled to true and global.persistence.enabled to false.

  5. Create a configmap for the 101-ox.ldif file:

    kubectl -n <gluu-namespace> create -f https://raw.githubusercontent.com/GluuFederation/cloud-native-edition/4.2/pygluu/kubernetes/templates/ldap/base/101-ox.yaml
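
    You can verify the configmap exists (the name oxldif is assumed from the volume mount in the step below):

    kubectl get configmap oxldif -n <gluu-namespace>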
    
  6. Delete the oxAuthExpiration index:

    kubectl exec -ti gluu-opendj-0 -n <gluu-namespace> -- /opt/opendj/bin/dsconfig delete-backend-index --backend-name userRoot --index-name oxAuthExpiration --hostName 0.0.0.0 --port 4444 --bindDN 'cn=Directory Manager' --trustAll -f
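
    To confirm the index was removed, list the remaining indexes (same connection options as above; you will be prompted for the bind password):

    kubectl exec -ti gluu-opendj-0 -n <gluu-namespace> -- /opt/opendj/bin/dsconfig list-backend-indexes --backend-name userRoot --hostName 0.0.0.0 --port 4444 --bindDN 'cn=Directory Manager' --trustAll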
    
  7. Mount 101-ox.ldif in the opendj pods. Open the opendj yaml, or edit the statefulset directly: kubectl edit statefulset gluu-opendj -n gluu

      volumes:
      - name: ox-ldif-cm
        configMap:
          name: oxldif
      containers:
        image: gluufederation/opendj:4.2.3_02
        ...
        ...
        volumeMounts:
        - name: ox-ldif-cm
          mountPath: /opt/opendj/config/schema/101-ox.ldif
          subPath: 101-ox.ldif
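
    After editing, make sure the opendj pods restart cleanly:

    kubectl rollout status statefulset gluu-opendj -n <gluu-namespace>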
    
  8. Apply upgrade.yaml

    kubectl create -f upgrade.yaml -n <gluu-namespace>
    

    Wait until the upgrade job finishes, then tail the logs of the upgrade pod, as shown below.
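
    For example (the job name gluu-upgrade-job comes from upgrade.yaml above; the timeout value is an arbitrary choice):

    kubectl logs -f job/gluu-upgrade-job -n <gluu-namespace>
    kubectl wait --for=condition=complete job/gluu-upgrade-job -n <gluu-namespace> --timeout=600s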

  9. Run the Helm upgrade:

    helm upgrade <release-name> . -f ./values.yaml -n <namespace>   
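
    Then watch the pods roll until all services are back up:

    kubectl get pods -n <gluu-namespace> -w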
    

Note

Compared to 4.1, 4.2 has a centralized configMap holding the necessary environment variables for all Gluu services. Hence, the per-service configMaps defined previously, such as oxauth-cm, are no longer used. The upgrade process does not delete these unused configMaps, as a rollback to 4.1 might be needed. You may discard them once you have fully confirmed that your deployment functions.

  1. Copy the following yaml into upgrade.yaml and adjust all entries marked below (this variant assumes Couchbase persistence):

    apiVersion: v1
    data:
      DOMAIN: FQDN #<-- Change this to your FQDN
      GLUU_CACHE_TYPE: NATIVE_PERSISTENCE #<-- Change this if necessary
      GLUU_CONFIG_ADAPTER: kubernetes
      GLUU_CONFIG_KUBERNETES_NAMESPACE: gluu  #<-- Change this to Gluus namespace
      GLUU_COUCHBASE_CERT_FILE: /etc/certs/couchbase.crt
      GLUU_COUCHBASE_PASSWORD_FILE: /etc/gluu/conf/couchbase_password #<-- super user password
      GLUU_COUCHBASE_URL: cbgluu.cbns.svc.cluster.local #<-- Change this if necessary
      GLUU_COUCHBASE_USER: admin #<-- Change the superuser if necessary
      GLUU_COUCHBASE_BUCKET_PREFIX: gluu #<-- Change if necessary
      GLUU_PERSISTENCE_TYPE: couchbase
      GLUU_SECRET_ADAPTER: kubernetes
      GLUU_SECRET_KUBERNETES_NAMESPACE: gluu #<-- Change this to Gluus namespace
    kind: ConfigMap
    metadata:
      labels:
        app: gluu-upgrade
      name: upgrade-cm
    ---
    apiVersion: batch/v1
    kind: Job
    metadata:
      labels:
        app: gluu-upgrade
      name: gluu-upgrade-job
    spec:
      template:
        metadata:
          annotations:
             sidecar.istio.io/inject: "false"              
          labels:
            app: gluu-upgrade
        spec:
          containers:
          - args:
            - --source
            - "4.1"
            - --target
            - "4.2"
            envFrom:
            - configMapRef:
                name: upgrade-cm
            image: gluufederation/upgrade:4.2.3_03
            name: gluu-upgrade-job                 
            volumeMounts:
            - mountPath: /etc/gluu/conf/couchbase_password
              name: cb-pass
              subPath: couchbase_password
            - mountPath: /etc/certs/couchbase.crt
              name: cb-crt
              subPath: couchbase.crt
          restartPolicy: Never
          volumes:
          - name: cb-pass
            secret:
              secretName: cb-pass #<-- Change this to the secret name holding couchbase superuser pass
          - name: cb-crt
            secret:
              secretName: cb-crt #<-- Change this to the secret name holding couchbase cert
    
  2. Add a new bucket named gluu_session.

    Add the following to couchbase-cluster.yaml under the buckets section:

      buckets:
      - name: gluu_session   #DO NOT CHANGE THIS LINE
        type: ephemeral
        memoryQuota: 100 #<-- Change this if necessary
        replicas: 1
        ioPriority: high
        evictionPolicy: nruEviction
        conflictResolution: seqno
        enableFlush: true
        enableIndexReplica: false
        compressionMode: passive
    

    Apply the following yaml in the couchbase namespace:

    cat <<EOF | kubectl apply -f -
    apiVersion: couchbase.com/v2
    kind: CouchbaseEphemeralBucket
    metadata:
      name: gluu-session
      labels:
        cluster: gluu-couchbase # <--- change this to your cluster name i.e. cbgluu
    spec:
      name: gluu_session
      memoryQuota: 100Mi #<-- Change this if necessary
      replicas: 1
      ioPriority: high
      evictionPolicy: nruEviction
      conflictResolution: seqno
      enableFlush: true
      compressionMode: passive
    EOF
    
  3. Add a new user in couchbase named gluu.

    1. Inside the Couchbase UI, create a group by going to Security --> ADD GROUP. Name it gluu-group and grant it query_select, query_insert, query_update, and query_delete on the buckets gluu, gluu_session, gluu_token, gluu_cache, and gluu_site.

    2. Inside the Couchbase UI, create a user by going to Security --> ADD USER. Name the user gluu and choose a strong password; remember it, as you will need it later. Assign the group gluu-group, created in the previous step, to that user.

    3. Create a secret that will hold the gluu user's password in the couchbase namespace:
    kubectl create secret generic gluu-couchbase-user-password --from-literal=password=P@ssw0rd --namespace cbns
    
    4. Apply the following yaml in the couchbase namespace:
    cat <<EOF | kubectl apply -f -
    apiVersion: couchbase.com/v2
    kind: CouchbaseGroup
    metadata:
      name: gluu-group
      labels:
        cluster: CLUSTERNAME # <--- change this to your cluster name i.e cbgluu
    spec:
      roles:
      - name: query_select
        bucket: gluu
      - name: query_select
        bucket: gluu_site
      - name: query_select
        bucket: gluu_user
      - name: query_select
        bucket: gluu_cache
      - name: query_select
        bucket: gluu_token
      - name: query_select
        bucket: gluu_session
    
      - name: query_update
        bucket: gluu
      - name: query_update
        bucket: gluu_site
      - name: query_update
        bucket: gluu_user
      - name: query_update
        bucket: gluu_cache
      - name: query_update
        bucket: gluu_token
      - name: query_update
        bucket: gluu_session
    
      - name: query_insert
        bucket: gluu
      - name: query_insert
        bucket: gluu_site
      - name: query_insert
        bucket: gluu_user
      - name: query_insert
        bucket: gluu_cache
      - name: query_insert
        bucket: gluu_token
      - name: query_insert
        bucket: gluu_session
    
      - name: query_delete
        bucket: gluu
      - name: query_delete
        bucket: gluu_site
      - name: query_delete
        bucket: gluu_user
      - name: query_delete
        bucket: gluu_cache
      - name: query_delete
        bucket: gluu_token
      - name: query_delete
        bucket: gluu_session
    ---
    apiVersion: couchbase.com/v2
    kind: CouchbaseRoleBinding
    metadata:
      name: gluu-role-binding
    spec:
      subjects:
      - kind: CouchbaseUser
        name: gluu
      roleRef:
        kind: CouchbaseGroup
        name: gluu-group
    ---
    apiVersion: couchbase.com/v2
    kind: CouchbaseUser
    metadata:
      name: gluu
      labels:
        cluster: CLUSTERNAME # <--- change this to your cluster name i.e cbgluu
    spec:
      fullName: "Gluu Cloud Native"
      authDomain: local
      authSecret: gluu-couchbase-user-password
    EOF
    
  4. Clone the latest stable manifests:

    git clone --recursive --depth 1 --branch 4.2 https://github.com/GluuFederation/cloud-native-edition && cd cloud-native-edition/pygluu/kubernetes/templates/helm/gluu
    
  5. Modify all images inside the main values.yaml to the latest images for the upgrade target version. Move the old settings.json that was used in the 4.1 installation into the directory where pygluu-kubernetes exists and execute the following command:

    ./pygluu-kubernetes.pyz upgrade-values-yaml
    

    Go over your values.yaml and make sure it reflects all current information. For example, make sure your Couchbase URL and crt are filled in and correct. Also make sure that your Couchbase user and password are the new ones created in a previous step, and that the Couchbase superuser and superuser password are filled in correctly.
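
    A quick way to eyeball the Couchbase-related entries (a sketch; exact key names depend on the chart's values.yaml):

    grep -n -i couchbase values.yaml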

  6. Inside values.yaml set global.upgrade.enabled to true and global.persistence.enabled to false.

  7. Apply upgrade.yaml

    kubectl create -f upgrade.yaml -n <gluu-namespace>
    

    Wait until the upgrade job finishes, then tail the logs of the upgrade pod.

  8. Run the Helm upgrade:

    helm upgrade <release-name> . -f ./values.yaml -n <namespace>   
    

Note

Compared to 4.1, 4.2 has a centralized configMap holding the necessary environment variables for all Gluu services. Hence, the per-service configMaps defined previously, such as oxauth-cm, are no longer used. The upgrade process does not delete these unused configMaps, as a rollback to 4.1 might be needed. You may discard them once you have fully confirmed that your deployment functions.

  1. Copy the following yaml into upgrade.yaml and adjust all entries marked below (this variant assumes hybrid LDAP/Couchbase persistence):

    apiVersion: v1
    data:
      DOMAIN: FQDN #<-- Change this to your FQDN
      GLUU_CACHE_TYPE: NATIVE_PERSISTENCE #<-- Change this if necessary
      GLUU_CONFIG_ADAPTER: kubernetes
      GLUU_CONFIG_KUBERNETES_NAMESPACE: gluu  #<-- Change this to Gluus namespace
      GLUU_COUCHBASE_CERT_FILE: /etc/certs/couchbase.crt
      GLUU_COUCHBASE_PASSWORD_FILE: /etc/gluu/conf/couchbase_password #<-- super user password
      GLUU_COUCHBASE_URL: cbgluu.cbns.svc.cluster.local #<-- Change this if necessary
      GLUU_COUCHBASE_USER: admin #<-- Change this if necessary
      GLUU_COUCHBASE_BUCKET_PREFIX: gluu #<-- Change if necessary
      GLUU_LDAP_URL: opendj:1636
      GLUU_PERSISTENCE_LDAP_MAPPING: "default" #<-- Change this if needed
      GLUU_PERSISTENCE_TYPE: couchbase
      GLUU_SECRET_ADAPTER: kubernetes
      GLUU_SECRET_KUBERNETES_NAMESPACE: gluu #<-- Change this to Gluus namespace
    kind: ConfigMap
    metadata:
      labels:
        app: gluu-upgrade
      name: upgrade-cm
    ---
    apiVersion: batch/v1
    kind: Job
    metadata:
      labels:
        app: gluu-upgrade
      name: gluu-upgrade-job
    spec:
      template:
        metadata:
          annotations:
             sidecar.istio.io/inject: "false"                      
          labels:
            app: gluu-upgrade
        spec:
          containers:
          - args:
            - --source
            - "4.1"
            - --target
            - "4.2"
            envFrom:
            - configMapRef:
                name: upgrade-cm
            image: gluufederation/upgrade:4.2.3_03
            name: gluu-upgrade-job                    
            volumeMounts:
            - mountPath: /etc/gluu/conf/couchbase_password
              name: cb-pass
              subPath: couchbase_password
            - mountPath: /etc/certs/couchbase.crt
              name: cb-crt
              subPath: couchbase.crt
          restartPolicy: Never
          volumes:
          - name: cb-pass
            secret:
              secretName: cb-pass #<-- Change this to the secret name holding couchbase pass
          - name: cb-crt
            secret:
              secretName: cb-crt #<-- Change this to the secret name holding couchbase cert
    
  2. Add a new bucket named gluu_session.

    Add the following to couchbase-cluster.yaml under the buckets section:

      buckets:
      - name: gluu_session   #DO NOT CHANGE THIS LINE
        type: ephemeral
        memoryQuota: 100 #<-- Change this if necessary
        replicas: 1
        ioPriority: high
        evictionPolicy: nruEviction
        conflictResolution: seqno
        enableFlush: true
        enableIndexReplica: false
        compressionMode: passive
    

    Apply the following yaml in the couchbase namespace:

    cat <<EOF | kubectl apply -f -
    apiVersion: couchbase.com/v2
    kind: CouchbaseEphemeralBucket
    metadata:
      name: gluu-session
      labels:
        cluster: gluu-couchbase # <--- change this to your cluster name i.e. cbgluu
    spec:
      name: gluu_session
      memoryQuota: 100Mi #<-- Change this if necessary
      replicas: 1
      ioPriority: high
      evictionPolicy: nruEviction
      conflictResolution: seqno
      enableFlush: true
      compressionMode: passive
    EOF
    
  3. Add a new user in couchbase named gluu.

    1. Inside the Couchbase UI, create a group by going to Security --> ADD GROUP. Name it gluu-group and grant it query_select, query_insert, query_update, and query_delete on the buckets gluu, gluu_session, gluu_token, gluu_cache, and gluu_site.

    2. Inside the Couchbase UI, create a user by going to Security --> ADD USER. Name the user gluu and choose a strong password; remember it, as you will need it later. Assign the group gluu-group, created in the previous step, to that user.

    3. Create a secret that will hold the gluu user's password in the couchbase namespace:
    kubectl create secret generic gluu-couchbase-user-password --from-literal=password=P@ssw0rd --namespace cbns
    
    4. Apply the following yaml in the couchbase namespace:
    cat <<EOF | kubectl apply -f -
    apiVersion: couchbase.com/v2
    kind: CouchbaseGroup
    metadata:
      name: gluu-group
      labels:
        cluster: CLUSTERNAME # <--- change this to your cluster name i.e cbgluu
    spec:
      roles:
      - name: query_select
        bucket: gluu
      - name: query_select
        bucket: gluu_site
      - name: query_select
        bucket: gluu_user
      - name: query_select
        bucket: gluu_cache
      - name: query_select
        bucket: gluu_token
      - name: query_select
        bucket: gluu_session
    
      - name: query_update
        bucket: gluu
      - name: query_update
        bucket: gluu_site
      - name: query_update
        bucket: gluu_user
      - name: query_update
        bucket: gluu_cache
      - name: query_update
        bucket: gluu_token
      - name: query_update
        bucket: gluu_session
    
      - name: query_insert
        bucket: gluu
      - name: query_insert
        bucket: gluu_site
      - name: query_insert
        bucket: gluu_user
      - name: query_insert
        bucket: gluu_cache
      - name: query_insert
        bucket: gluu_token
      - name: query_insert
        bucket: gluu_session
    
      - name: query_delete
        bucket: gluu
      - name: query_delete
        bucket: gluu_site
      - name: query_delete
        bucket: gluu_user
      - name: query_delete
        bucket: gluu_cache
      - name: query_delete
        bucket: gluu_token
      - name: query_delete
        bucket: gluu_session
    ---
    apiVersion: couchbase.com/v2
    kind: CouchbaseRoleBinding
    metadata:
      name: gluu-role-binding
    spec:
      subjects:
      - kind: CouchbaseUser
        name: gluu
      roleRef:
        kind: CouchbaseGroup
        name: gluu-group
    ---
    apiVersion: couchbase.com/v2
    kind: CouchbaseUser
    metadata:
      name: gluu
      labels:
        cluster: CLUSTERNAME # <--- change this to your cluster name i.e cbgluu
    spec:
      fullName: "Gluu Cloud Native"
      authDomain: local
      authSecret: gluu-couchbase-user-password
    EOF
    
  4. Clone the latest stable manifests:

    git clone --recursive --depth 1 --branch 4.2 https://github.com/GluuFederation/cloud-native-edition && cd cloud-native-edition/pygluu/kubernetes/templates/helm/gluu
    
  5. Modify all images inside the main values.yaml to the latest images for the upgrade target version. Move the old settings.json that was used in the 4.1 installation into the directory where pygluu-kubernetes exists and execute the following command:

    ./pygluu-kubernetes.pyz upgrade-values-yaml
    

    Go over your values.yaml and make sure it reflects all current information. For example, make sure your Couchbase URL and crt are filled in and correct. Also make sure that your Couchbase user and password are the new ones created in a previous step, and that the Couchbase superuser and superuser password are filled in correctly.


  6. Inside values.yaml set global.upgrade.enabled to true and global.persistence.enabled to false.

  7. Create a configmap for the 101-ox.ldif file:

    kubectl -n <gluu-namespace> create -f https://raw.githubusercontent.com/GluuFederation/cloud-native-edition/v1.2.6/pygluu/kubernetes/templates/ldap/base/101-ox.yaml
    
  8. Delete the oxAuthExpiration index:

    kubectl exec -ti gluu-opendj-0 -n <gluu-namespace> -- /opt/opendj/bin/dsconfig delete-backend-index --backend-name userRoot --index-name oxAuthExpiration --hostName 0.0.0.0 --port 4444 --bindDN 'cn=Directory Manager' --trustAll -f
    
  9. Mount 101-ox.ldif in the opendj pods. Open the opendj yaml, or edit the statefulset directly: kubectl edit statefulset opendj -n gluu

      volumes:
      - name: ox-ldif-cm
        configMap:
          name: oxldif
      containers:
        image: gluufederation/opendj:4.2.3_02
        ...
        ...
        volumeMounts:
        - name: ox-ldif-cm
          mountPath: /opt/opendj/config/schema/101-ox.ldif
          subPath: 101-ox.ldif
    
  10. Apply upgrade.yaml

    kubectl create -f upgrade.yaml -n <gluu-namespace>
    

    Wait until the upgrade job finishes, then tail the logs of the upgrade pod.

  11. Run the Helm upgrade:

    helm upgrade <release-name> . -f ./values.yaml -n <namespace>   
    

Note

Compared to 4.1, 4.2 has a centralized configMap holding the necessary environment variables for all Gluu services. Hence, the per-service configMaps defined previously, such as oxauth-cm, are no longer used. The upgrade process does not delete these unused configMaps, as a rollback to 4.1 might be needed. You may discard them once you have fully confirmed that your deployment functions.

Exporting Data#

Note

This step is not needed.

  1. Make sure to back up existing LDAP data.

  2. Set an environment variable as a placeholder for the LDAP server password (for later use):

    export LDAP_PASSWD=YOUR_PASSWORD_HERE
    
  3. Assuming the existing LDAP container, called ldap, has data, export data from each backend:

    1. Export o=gluu

      kubectl exec -ti ldap -- /opt/opendj/bin/ldapsearch \
          -Z \
          -X \
          -D "cn=directory manager" \
          -w $LDAP_PASSWD \
          -p 1636 \
          -b "o=gluu" \
          -s sub \
          'objectClass=*' > gluu.ldif
      
    2. Export o=site

      kubectl exec -ti ldap -- /opt/opendj/bin/ldapsearch \
          -Z \
          -X \
          -D "cn=directory manager" \
          -w $LDAP_PASSWD \
          -p 1636 \
          -b "o=site" \
          -s sub \
          'objectClass=*' > site.ldif
      
    3. Export o=metric

      kubectl exec -ti ldap -- /opt/opendj/bin/ldapsearch \
          -Z \
          -X \
          -D "cn=directory manager" \
          -w $LDAP_PASSWD \
          -p 1636 \
          -b "o=metric" \
          -s sub \
          'objectClass=*' > metric.ldif
      
  4. Unset the LDAP_PASSWD environment variable:
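
    unset LDAP_PASSWD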