Overview#

This documentation demonstrates how to upgrade a Kubernetes setup of Gluu >=4.2 with LDAP persistence to Gluu 4.5 with PostgreSQL.

Prerequisites#

Assuming Gluu 4.2 is already installed and running, do the following steps:

  1. Scale down the OpenDJ replicas to 1 pod (see the example after this list).
  2. Back up the persistence volumes, as the upgrade process is irreversible.
  3. Back up the existing values.yaml used in the Gluu 4.2 installation as values-4.2.yaml.
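
For example, assuming the StatefulSet name and namespace placeholders below match your setup:

    kubectl -n <namespace> scale statefulset <opendj-sts-name> --replicas=1
    cp values.yaml values-4.2.yaml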

Additional steps are required if using LDAP/OpenDJ with multiCluster enabled (for example WEST and EAST regions):

  1. Upgrade only one region, e.g. WEST.
  2. Disable OpenDJ replication between WEST and EAST.
  3. Disable traffic to the disconnected region, e.g. EAST.

How to upgrade and migrate#

Step 1: Upgrading Gluu 4.2 to 4.5 with OpenDJ/LDAP as persistence#

  1. Change the ownership of the OpenDJ filesystem:

    kubectl exec <opendj-pod-name> -n <namespace> -- chown -R 1000:root /opt/opendj
    

    Configmaps are mounted as read-only files, so if you have any configmap mounted under /opt/opendj, add its mount path to the command below so that it is skipped by chown:

    kubectl exec <opendj-pod-name> -n <namespace> -- sh -c 'find /opt/opendj -path "/path/of/file/mounted" -prune -o -exec chown 1000:root {} +'
    
  2. If using any custom schema that contains the gluuCustomPerson objectClass, for example:

    # example of 102-my-customAttributes.ldif file
    dn: cn=schema
    objectClass: top
    objectClass: ldapSubentry
    objectClass: subschema
    cn: schema
    attributeTypes: ( 1.3.6.1.4.1.48710.1.3.1400 NAME 'customTest'
      DESC 'Custom Attribute' 
      EQUALITY caseIgnoreMatch 
      SUBSTR caseIgnoreSubstringsMatch 
      SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 
      X-ORIGIN 'Gluu custom attribute' )
    objectClasses: ( 1.3.6.1.4.1.48710.1.4.101 NAME 'gluuCustomPerson'
      SUP ( top )
      AUXILIARY
      MAY ( customTest $ telephoneNumber $ mobile $ carLicense $ title ) )
    

    Remove the gluuCustomPerson objectClass definition, as it's included in Gluu 4.5.x by default:

    # example of 102-my-customAttributes.ldif file
    dn: cn=schema
    objectClass: top
    objectClass: ldapSubentry
    objectClass: subschema
    cn: schema
    attributeTypes: ( 1.3.6.1.4.1.48710.1.3.1400 NAME 'customTest'
      DESC 'Custom Attribute' 
      EQUALITY caseIgnoreMatch 
      SUBSTR caseIgnoreSubstringsMatch 
      SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 
      X-ORIGIN 'Gluu custom attribute' )
    

    Create a configmap with the new custom schema:

    kubectl -n <namespace> create cm my-custom-schema --from-file=102-my-customAttributes.ldif #adjust the file name as needed
    

    The new custom schema needs to be mounted under the /opt/opendj/config/schema directory, as shown in the next step.

  3. Edit the manifest of the current OpenDJ statefulset:

    kubectl edit sts <opendj-sts-name> -n <namespace>
    

    Update the image tag to 4.5.5-x and add a new environment variable:

    # uncomment volumes if using custom schema
    # volumes:
    #  - name: my-custom-schema
    #    configMap:
    #      name: my-custom-schema
    containers:
      - image: gluufederation/opendj:4.5.5-1
        env:
          - name: GLUU_LDAP_AUTO_REPLICATE
            value: "false"
        # uncomment volumeMounts if using a custom schema
        # volumeMounts:
        #  - name: my-custom-schema
        #    mountPath: /opt/opendj/config/schema/102-my-customAttributes.ldif # adjust the name according to your setup
        #    subPath: 102-my-customAttributes.ldif # adjust the name according to your setup
    

    Save the changes and wait until the OpenDJ pod gets terminated, re-deployed, and running.
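
    For example, you can watch the rollout until it finishes:

    kubectl -n <namespace> rollout status statefulset/<opendj-sts-name>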

  4. Make sure that the completed gluu-config and gluu-persistence jobs are deleted.
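
    For example (assuming the default job names; adjust them if yours differ):

    kubectl -n <namespace> get jobs
    kubectl -n <namespace> delete job gluu-config gluu-persistence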

  5. Create gluu-upgrade-42.yaml file:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: gluu-upgrade-42
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "false"
        spec:
          restartPolicy: Never
          imagePullSecrets:
            - name: regcred
          volumes: []
          containers:
            - name: upgrade-42
              image: gluufederation/upgrade:4.5.5-1
              volumeMounts: []
              envFrom:
                - configMapRef:
                    name: gluu-config-cm # adjust the name according to your setup
              env: []
              args:
                - --source=4.2
                - --target=4.5    
    
  6. Apply the job to upgrade the OpenDJ entries:

    kubectl -n <namespace> apply -f gluu-upgrade-42.yaml
    
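    You can follow the job logs to monitor its progress, for example:

    kubectl -n <namespace> logs -f job/gluu-upgrade-42
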
  7. Wait until the job is completed successfully, and then delete the job:

    kubectl -n <namespace> delete -f gluu-upgrade-42.yaml
    

Step 2: Migrating the OpenDJ entries to Postgres#

  1. Export the entries of each tree (o=gluu, o=site, o=metric) as an .ldif file:

    mkdir -p custom_ldif
    kubectl -n <namespace> exec -i <opendj-pod> -- /opt/opendj/bin/ldapsearch -D "cn=directory manager" -p 1636 --useSSL -w <ldap-password> --trustAll -b "o=gluu" -s sub "objectClass=*" > custom_ldif/01_gluu.ldif

    kubectl -n <namespace> exec -i <opendj-pod> -- /opt/opendj/bin/ldapsearch -D "cn=directory manager" -p 1636 --useSSL -w <ldap-password> --trustAll -b "o=site" -s sub "objectClass=*" > custom_ldif/02_site.ldif

    kubectl -n <namespace> exec -i <opendj-pod> -- /opt/opendj/bin/ldapsearch -D "cn=directory manager" -p 1636 --useSSL -w <ldap-password> --trustAll -b "o=metric" -s sub "objectClass=*" > custom_ldif/03_metric.ldif
    
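    Optionally, verify that the exported files are non-empty and start with valid LDIF entries:

    ls -lh custom_ldif/
    head -n 5 custom_ldif/01_gluu.ldif
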
  2. Create configmaps for .ldif files.

    • If each .ldif file is smaller than 1MB:

    kubectl -n <namespace> create cm custom-gluu-ldif --from-file=custom_ldif/01_gluu.ldif
    kubectl -n <namespace> create cm custom-site-ldif --from-file=custom_ldif/02_site.ldif
    kubectl -n <namespace> create cm custom-metric-ldif --from-file=custom_ldif/03_metric.ldif
    
    The job will have a yaml configuration that mounts these 3 configmaps.

    • If each .ldif file is larger than 1MB:

      1. Create a file named mycustomldif.sh that downloads the 3 .ldif files from wherever you host them:

       #!/bin/sh
       # This script pulls the .ldif files from a remote location
       # and places them where the persistence job expects them.
       mkdir -p /app/custom_ldif
       wget -O /app/custom_ldif/01_gluu.ldif https://<ldif-file-location>/01_gluu.ldif
       wget -O /app/custom_ldif/02_site.ldif https://<ldif-file-location>/02_site.ldif
       wget -O /app/custom_ldif/03_metric.ldif https://<ldif-file-location>/03_metric.ldif

      2. Create a configmap that contains the mycustomldif.sh script:

      kubectl -n <namespace> create cm my-custom-ldif --from-file=mycustomldif.sh
      

      The job will have a yaml configuration that mounts this single configmap.

  3. If using a custom schema in the OpenDJ installation, for example:

    # example of 102-my-customAttributes.ldif file
    dn: cn=schema
    objectClass: top
    objectClass: ldapSubentry
    objectClass: subschema
    cn: schema
    attributeTypes: ( 1.3.6.1.4.1.48710.1.3.1400 NAME 'customTest'
      DESC 'Custom Attribute' 
      EQUALITY caseIgnoreMatch 
      SUBSTR caseIgnoreSubstringsMatch 
      SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 
      X-ORIGIN 'Gluu custom attribute' )
    

    You will need to convert it to the format used by the new setup.

    1. Obtain the default custom_schema.json file.

    2. Add your custom attributes (some contents are omitted):

      "attributeTypes": [
        {
          "desc": "Custom Attribute",
          "equality": "caseIgnoreMatch",
          "names": [
            "customTest"
          ],
          "oid": "oxAttribute", 
          "substr": "caseIgnoreSubstringsMatch",
          "syntax": "1.3.6.1.4.1.1466.115.121.1.15",
          "x_origin": "Gluu custom attribute"
        }
      ]
      
    3. Add the custom attributes into objectClasses (some contents are omitted):

      "objectClasses": [
        {
          "kind": "AUXILIARY", 
          "may": [
            "customTest"
          ], 
          "names": [
            "gluuCustomPerson"
          ], 
          "oid": "oxObjectClass", 
          "sup": [
            "top"
          ], 
          "x_origin": "Gluu - Custom person objectclass",
          "sql": {"ignore": true}
        }
      ]
      
    4. Create a configmap that has the modified custom_schema.json file:

      kubectl -n <namespace> create cm custom-schema-json --from-file=custom_schema.json
      

    Note

    The new customTest column will be created under the gluuPerson table only if the table does not exist. Otherwise, you may need to create the column manually.
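
    If you do need to add the column manually later, a statement along the following lines can be used (a sketch; the VARCHAR(64) column type is an assumption and should match your attribute definition):

    psql -h <postgres-host> -U gluu -d gluu -c 'ALTER TABLE "gluuPerson" ADD COLUMN IF NOT EXISTS "customTest" VARCHAR(64);'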

  4. Prepare the Postgres database for migration. You should have a production-ready Postgres database; we will use the Bitnami PostgreSQL Helm chart for this example:

    kubectl create ns postgres
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install postgresql --set auth.postgresPassword=<postgres-root-password>,auth.database=gluu,auth.username=gluu,auth.password=<postgres-user-password> bitnami/postgresql -n postgres
    

    Take note of the values above, as we will need them in the next sections.

    Migrating entries from .ldif files may take a while, so we will migrate them offline using a separate k8s job.

    1. Create a sql_password file to store the password for the Postgres user and save it into a secret:

      kubectl -n <namespace> create secret generic offline-sql-pass --from-file=sql_password
      
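
      For example, the sql_password file can be created beforehand like this (the value is a placeholder):

      echo -n '<postgres-user-password>' > sql_password
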
    2. Create offline-persistence-load.yaml:

      apiVersion: batch/v1
      kind: Job
      metadata:
        name: offline-persistence-load
      spec:
        template:
          metadata:
            annotations:
              sidecar.istio.io/inject: "false"
          spec:
            restartPolicy: Never
            imagePullSecrets:
              - name: regcred
            volumes:
              - name: custom-gluu-ldif
                configMap:
                  name: custom-gluu-ldif
              - name: custom-site-ldif
                configMap:
                  name: custom-site-ldif
              - name: custom-metric-ldif
                configMap:
                  name: custom-metric-ldif
              - name: sql-pass
                secret:
                  secretName: offline-sql-pass # adjust the value according to your setup
              # uncomment if using modified custom_schema.json
              # - name: custom-schema-json
              #   configMap:
              #     name: custom-schema-json
            containers:
              - name: offline-persistence-load
                image: gluufederation/persistence:4.5.5-1
                volumeMounts:
                  - name: custom-gluu-ldif
                    mountPath: /app/custom_ldif/01_gluu.ldif
                    subPath: 01_gluu.ldif
                  - name: custom-site-ldif
                    mountPath: /app/custom_ldif/02_site.ldif
                    subPath: 02_site.ldif
                  - name: custom-metric-ldif
                    mountPath: /app/custom_ldif/03_metric.ldif
                    subPath: 03_metric.ldif
                  - name: sql-pass
                    mountPath: "/etc/gluu/conf/sql_password"
                    subPath: sql_password
                  # uncomment if using modified custom_schema.json
                  # - name: custom-schema-json
                  #   mountPath: "/app/static/custom_schema.json"
                  #   subPath: custom_schema.json
                envFrom:
                  - configMapRef:
                      name: gluu-config-cm # adjust the name according to your setup
                env:
                  - name: GLUU_PERSISTENCE_IMPORT_BUILTIN_LDIF
                    value: "false" # [DONT CHANGE] skip builtin LDIF files generated by the image container
                  - name: GLUU_PERSISTENCE_TYPE
                    value: "sql" # [DONT CHANGE]
                  - name: GLUU_SQL_DB_DIALECT
                    value: "pgsql" # [DONT CHANGE]
                  - name: GLUU_SQL_DB_NAME
                    value: "gluu" # adjust according to your setup
                  - name: GLUU_SQL_DB_HOST
                    value: "postgresql.postgres.svc.cluster.local" # adjust according to your setup
                  - name: GLUU_SQL_DB_PORT
                    value: "5432" # adjust according to your setup
                  - name: GLUU_SQL_DB_USER
                    value: "gluu" # adjust according to your setup
                  - name: GLUU_SQL_DB_SCHEMA
                    value: "public" # [default value] adjust according to your setup
      
    3. If the .ldif files are larger than 1MB, mount the single configmap instead of the 3 configmaps, as shown below:

        volumes:
          - name: my-custom-ldif
            configMap:
              defaultMode: 493
              name: my-custom-ldif
        containers: 
          - name: offline-persistence-load
            command:
            - tini
            - -g
            - --
            - /bin/sh
            - -c
            - |
              /tmp/mycustomldif.sh
              /app/scripts/entrypoint.sh
            image: gluufederation/persistence:4.5.5-1
            volumeMounts:
              - name: my-custom-ldif
                mountPath: /tmp/mycustomldif.sh
                subPath: mycustomldif.sh
      
  5. Deploy the job:

    kubectl -n <namespace> apply -f offline-persistence-load.yaml
    
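
    You can follow the job logs to monitor the data import, for example:

    kubectl -n <namespace> logs -f job/offline-persistence-load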
  6. Make sure the job ran without errors before proceeding to the next step. If there were no errors, the job and secret can be deleted safely:

    kubectl -n <namespace> delete secret offline-sql-pass
    kubectl -n <namespace> delete job offline-persistence-load
    

Step 3: Switching from OpenDJ to Postgres#

  1. Get the new values.yaml for the Gluu 4.5 installation.
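
    For example, assuming the gluu Helm repository is already configured (the same gluu/gluu chart is used in the upgrade command later), you can pull the default chart values:

    helm repo update
    helm show values gluu/gluu > values.yaml # optionally pin a chart version with --version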

  2. Compare values-4.2.yaml with the new values.yaml, and then modify values.yaml accordingly.

  3. Switch the persistence from OpenDJ to Postgres by adding the following parameters to the existing values.yaml:

    global:
      gluuPersistenceType: sql
      upgrade:
        enabled: false
      opendj:
        enabled: false  
    config:
      configmap:
        cnSqlDbName: gluu # adjust according to your setup
        cnSqlDbPort: 5432 # adjust according to your setup
        cnSqlDbDialect: pgsql
        cnSqlDbHost: postgresql.postgres.svc # adjust according to your setup
        cnSqlDbUser: gluu # adjust according to your setup
        cnSqlDbTimezone: UTC
        cnSqldbUserPassword: <postgres-user-password> # adjust according to your setup
    
  4. Run the helm upgrade:

    helm upgrade <gluu-release-name> gluu/gluu -n <namespace> -f values.yaml

  5. Make sure the cluster is functioning after the migration.
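
    For example, verify that all pods are running and that the OpenID Connect discovery endpoint responds (the FQDN is a placeholder):

    kubectl -n <namespace> get pods
    curl -k https://<your-gluu-fqdn>/.well-known/openid-configuration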

Known Issues#

  1. Gluu 4.2 uses the deprecated v1beta1 Ingress API version, so when upgrading you'll receive the following error:

    ensure CRDs are installed first, resource mapping not found for name: "gluu-nginx-ingress-casa" namespace: "" from "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
    

    You can resolve this Ingress API version incompatibility using the mapkubeapis helm plugin:

    helm mapkubeapis <gluu-release-name> -n <namespace>
    
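    If the plugin is not installed yet, it can be installed from the upstream helm-mapkubeapis repository:

    helm plugin install https://github.com/helm/helm-mapkubeapis
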
  2. During the upgrade from >=4.2 to 4.5, if you didn't delete the jobs as instructed, the helm command throws the following message:

    Error: UPGRADE FAILED: cannot patch "gluu-config" with kind Job: Job.batch "gluu-config" is invalid: spec.template: 
    Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"config-job", GenerateName:"", Namespace:""
    

    The upgrade itself still runs, though.

    If you face this, set global.upgrade.enabled: false and rerun the helm upgrade command, so that helm registers the upgrade as successful.

  3. Interception scripts are not upgraded automatically. They need to be upgraded manually.