
Attention

All Linux assets, packages, and binaries require a support contract for access. Contact sales@gluu.org for more information. For free up-to-date binaries, check out the latest releases at The Linux Foundation Janssen Project, the new upstream open-source project.

The Kubernetes recipes#

Getting Started with Kubernetes#

The Kubernetes deployment of the Gluu Server, also called Cloud Native (CN) Edition, requires some special considerations compared to other deployments. This page details the installation and initial configuration of a CN deployment. More advanced configuration details are available on the appropriate pages throughout the Gluu documentation.

Requirements for accessing docker images and assets#

  1. Contact sales@gluu.org for credentials (username and password/token) to access and pull our docker images. Existing customers should have received the credentials already.

  2. Create a secret to access and pull images from the Docker Hub repo. The secret must live in the same namespace (create the namespace if it doesn't exist yet).

    kubectl create namespace <namespace>
    

    If you're planning to use Istio, set the label as well:

    kubectl label namespace <namespace> istio-injection=enabled
    

    Afterwards, create the required secrets (in this example, regcred is the name of the secret):

    kubectl -n <namespace> create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<username> --docker-password=<password/token>
    
  3. If you are using pygluu-kubernetes.pyz, the tool that parses values.yaml, you can skip the next step. However, we recommend installing with Helm manually.

  4. Inject the secret name in your values.yaml at image.pullSecrets for each service. For example:

    ...
    ...
    oxauth:
      image:
        # -- Image pullPolicy to use for deploying.
        pullPolicy: IfNotPresent
        # -- Image to use for deploying.
        repository: gluufederation/oxauth
        # -- Image tag to use for deploying.
        tag: 4.5.3-1
        # -- Image Pull Secrets
        pullSecrets:
          - name: regcred
    

System Requirements for cloud deployments#

Note

For local deployments like Minikube and MicroK8s, or cloud installations for demoing Gluu, the resources may be set to the minimum; 8GB RAM, 4 CPUs, and 50GB of disk in total are enough to run all services.

Please calculate the minimum required resources as per the services deployed. The following table contains the default recommended resources to start with. Depending on the use of each service, the resources may be increased or decreased; an example override follows the table.

| Service | CPU Unit | RAM | Disk Space | Processor Type | Required |
|---------|----------|-----|------------|----------------|----------|
| oxAuth | 2.5 | 2.5GB | N/A | 64 Bit | Yes |
| LDAP | 1.5 | 2GB | 10GB | 64 Bit | If using hybrid or LDAP for persistence |
| Couchbase | - | - | - | - | If using hybrid or Couchbase for persistence |
| FIDO2 | 0.5 | 0.5GB | N/A | 64 Bit | No |
| SCIM | 1.0 | 1.0GB | N/A | 64 Bit | No |
| config - job | 0.5 | 0.5GB | N/A | 64 Bit | Yes on fresh installs |
| Jackrabbit | 1.5 | 1GB | 10GB | 64 Bit | Yes |
| persistence - job | 0.5 | 0.5GB | N/A | 64 Bit | Yes on fresh installs |
| oxTrust | 1.0 | 1.0GB | N/A | 64 Bit | No |
| oxShibboleth | 1.0 | 1.0GB | N/A | 64 Bit | No |
| oxPassport | 0.7 | 0.9GB | N/A | 64 Bit | No |
| oxd-server | 1 | 0.4GB | N/A | 64 Bit | No |
| NGINX | 1 | 1GB | N/A | 64 Bit | Yes if not ALB |
| key-rotation | 0.3 | 0.3GB | N/A | 64 Bit | No |
| cr-rotate | 0.2 | 0.2GB | N/A | 64 Bit | No |
| CASA | 0.5 | 0.5GB | N/A | 64 Bit | No |
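
If a service needs more or less than the defaults above, its resources can be overridden in values.yaml. A minimal sketch for oxauth follows; the numbers are illustrative, and it assumes the oxauth subchart exposes a resources block with the same limits/requests layout shown for config.resources in the reference at the bottom of this page:

oxauth:
  resources:
    limits:
      cpu: 2500m     # 2.5 CPU units, matching the table above
      memory: 2500Mi # ~2.5GB RAM
    requests:
      cpu: 2500m
      memory: 2500Mi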
  1. Configure a cloud or local Kubernetes cluster:

Amazon Web Services (AWS) - EKS#

Setup Cluster#

  • Follow this guide to install a cluster with worker nodes. Please make sure that you have all the IAM policies for the AWS user that will be creating the cluster and volumes.

  • To be able to attach volumes to your pods, you need to install the Amazon EBS CSI driver.
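
    One way to install the driver is as an EKS managed add-on via eksctl. This is a minimal sketch, not the only method; it assumes an IAM role for the driver's service account has already been created, and the cluster, account, and role names are placeholders:

    eksctl create addon --name aws-ebs-csi-driver --cluster <cluster-name> --service-account-role-arn arn:aws:iam::<account-id>:role/<ebs-csi-driver-role> --force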

Requirements#

  • The above guide should also walk you through installing kubectl, aws-iam-authenticator, and the AWS CLI on the VM from which you will manage your cluster and nodes. Check to make sure:

    aws-iam-authenticator help
    aws --version
    kubectl version
    
  • Optional[alpha]: If using Istio, please install it prior to installing Gluu. You may use any installation method Istio supports. If you have installed the Istio ingress, a load balancer will have been created; save its address for use later during installation.

Note

The default AWS deployment will install a classic load balancer with an IP that is not static. Don't worry about the IP changing: all pods will be updated automatically by our script when the load balancer IP changes. However, when deploying in production, DO NOT use our script. Instead, assign a CNAME record for the load balancer DNS name, or use Amazon Route 53 to create a hosted zone. More details are in this AWS guide.

Warning

In recent releases, we have noticed that the ALB does not work properly with the oxTrust admin UI. Functions such as access and cache refresh do not work. There is an open issue; the root cause is that the ALB does not support rewrites.

GCE (Google Compute Engine) - GKE#

Setup Cluster#

  1. Install gcloud

  2. Install kubectl using the gcloud components install kubectl command

  3. Create a cluster using a command such as the following example:

    gcloud container clusters create exploringgluu --num-nodes 2 --machine-type e2-highcpu-8 --zone us-west1-a
    

    where exploringgluu is the name chosen for the cluster and us-west1-a is the zone in which the cluster resources live.

  4. Configure kubectl to use the cluster:

    gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE_NAME
    

    where CLUSTER_NAME is the name you chose for the cluster and ZONE_NAME is the name of the zone where the cluster resources live.

  5. Afterwards, run kubectl cluster-info to check whether kubectl is ready to interact with the cluster, and make sure you are authenticated using one of the several supported methods.

  6. Optional[alpha]: If using Istio, please install it prior to installing Gluu. You may use any installation method Istio supports. If you have installed the Istio ingress, a load balancer will have been created; save its IP for use later during installation.

DigitalOcean Kubernetes (DOKS)#

Setup Cluster#

  • Follow this guide to create a DigitalOcean Kubernetes (DOKS) cluster and connect to it; a minimal doctl sketch follows this list.

  • Optional[alpha]: If using Istio, please install it prior to installing Gluu. You may use any installation method Istio supports. If you have installed the Istio ingress, a load balancer will have been created; save its IP for use later during installation.
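
    As a minimal sketch, a DOKS cluster can also be created with the doctl CLI (assuming it is installed and authenticated; the cluster name, node size, and region below are placeholders):

    doctl kubernetes cluster create <cluster-name> --count 2 --size s-4vcpu-8gb --region nyc1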

Azure - AKS#

Warning

Pending

Requirements#

  • Follow this guide to install the Azure CLI on the VM that will be managing the cluster and nodes. Check to make sure it works.

  • Follow this section to create the resource group for the AKS setup.

  • Follow this section to create the AKS cluster.

  • Follow this section to connect to the AKS cluster; a combined az example follows this list.

  • Optional[alpha]: If using Istio, please install it prior to installing Gluu. You may use any installation method Istio supports. If you have installed the Istio ingress, a load balancer will have been created; save its IP for use later during installation.
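
    As a combined az sketch covering the three sections above (the resource group, cluster name, and location are placeholders):

    az group create --name <resource-group> --location <location>
    az aks create --resource-group <resource-group> --name <cluster-name> --node-count 2 --generate-ssh-keys
    az aks get-credentials --resource-group <resource-group> --name <cluster-name>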

Minikube#

Requirements#

  1. Install minikube.

  2. Install kubectl.

  3. Create cluster:

    minikube start
    
  4. Configure kubectl to use the cluster:

    kubectl config use-context minikube
    
  5. Enable ingress on minikube

    minikube addons enable ingress
    
  6. Optional[alpha]: If using Istio, please install it prior to installing Gluu. You may use any installation method Istio supports. Please note that at the moment Istio ingress is not supported with Minikube.

MicroK8s#

Requirements#

  1. Install MicroK8s

  2. Make sure all required ports are open for MicroK8s

  3. Enable helm3, hostpath-storage, and dns:

    sudo microk8s.enable dns
    sudo microk8s.enable hostpath-storage
    sudo microk8s.enable helm3 
    

    Make aliases for kubectl and helm3:

    sudo snap alias microk8s.kubectl kubectl
    sudo snap alias microk8s.helm3 helm
    
  4. Optional: If using nginx ingress, please enable it.

    sudo microk8s.enable ingress
    

    Note

    If using the self-signed SSL certificate and key generated by the installer, enable ingress after the config job is finished. This skips the creation of the SSL certificate and key that ingress would otherwise generate.

  5. Optional[alpha]: If using Istio please enable it.

    sudo microk8s.enable community
    sudo microk8s.enable istio
    

    Note

    The Istio ingress gateway service is deployed as a LoadBalancer type, which requires an external IP. Some cloud providers have their own load balancer that can assign an external IP for the Istio ingress gateway. If there is no external load balancer (as on a bare-metal VM), an alternative is to install MetalLB:

    sudo microk8s.enable metallb
    

    This command will prompt for IP address pool. Refer to metallb addons docs for details.

  2. Install using one of the following:

Install Gluu using Helm#

Prerequisites#

  • Kubernetes >= 1.19.x
  • Persistent volume provisioner support in the underlying infrastructure
  • Install Helm3 (if not installed yet)
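
If Helm 3 is not installed yet, one common way is the Helm project's official installer script:

    curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash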

Quickstart#

  1. Download pygluu-kubernetes.pyz. This package can be built manually.

  2. Optional: If using Couchbase as the persistence backend, download the Couchbase kubernetes operator package for Linux and place it in the same directory as pygluu-kubernetes.pyz.

  3. Run:

./pygluu-kubernetes.pyz helm-install

Installing Gluu using Helm manually#

  1. Optional if not using Istio ingress: Install the NGINX-Ingress Helm chart.

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo add stable https://charts.helm.sh/stable
    helm repo update
    helm install <nginx-release-name> ingress-nginx/ingress-nginx --namespace=<nginx-namespace>
    • If the FQDN for Gluu, i.e. demoexample.gluu.org, is registered and globally resolvable, forward it to the load balancer address created in the previous step by NGINX-Ingress. A record can be added on most cloud providers to forward the domain to the load balancer. For example, on AWS assign a CNAME record for the load balancer DNS name, or use Amazon Route 53 to create a hosted zone. More details in this AWS guide. Another example on GCE.

    • If the FQDN is not registered, acquire the load balancer IP on GCE or Azure using:

      kubectl get svc <release-name>-nginx-ingress-controller --output jsonpath='{.status.loadBalancer.ingress[0].ip}'

      On AWS, get the load balancer address using:

      kubectl -n ingress-nginx get svc ingress-nginx --output jsonpath='{.status.loadBalancer.ingress[0].hostname}'

    • If deploying on the cloud make sure to take a look at the Helm cloud-specific notes before continuing.

    • EKS

    • GKE

    • If deploying locally make sure to take a look at the helm-specific notes below before continuing.

    • Minikube

    • MicroK8s
  2. Optional: If using PostgreSQL as the persistence backend, note that in a production environment a production-grade PostgreSQL server should be used, such as Cloud SQL in GCP or Amazon RDS in AWS.

    For testing purposes, you can deploy it on your Kubernetes cluster using the following commands:

    helm install my-release --set auth.postgresPassword=Test1234#,auth.database=gluu -n gluu oci://registry-1.docker.io/bitnamicharts/postgresql
    

    Add the following yaml snippet to your override-values.yaml file:

    global:
      gluuPersistenceType: sql
    config:
      configmap:
        cnSqlDbName: gluu
        cnSqlDbPort: 5432
        cnSqlDbDialect: pgsql
        cnSqlDbHost: my-release-postgresql.gluu.svc
        cnSqlDbUser: postgres
        cnSqlDbTimezone: UTC
        cnSqldbUserPassword: Test1234#
    

  3. Optional: If using Couchbase as the persistence backend:

    1. Download pygluu-kubernetes.pyz. This package can be built manually.

    2. Download the couchbase kubernetes operator package for Linux and place it in the same directory as pygluu-kubernetes.pyz

    3. Run:

    ./pygluu-kubernetes.pyz couchbase-install
    
    4. Open the settings.json file generated in the previous step and copy the values of COUCHBASE_URL and COUCHBASE_USER to global.gluuCouchbaseUrl and global.gluuCouchbaseUser in values.yaml, respectively.
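
    For example, with illustrative values copied over from settings.json (your COUCHBASE_URL and COUCHBASE_USER will differ):

    global:
      gluuCouchbaseUrl: cbgluu.cbns.svc.cluster.local
      gluuCouchbaseUser: gluu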
  4. Create your override-values.yaml and execute:

 helm repo add gluu https://gluufederation.github.io/cloud-native-edition/pygluu/kubernetes/templates/helm
 helm repo update
 helm install gluu gluu/gluu -n <namespace> --version=1.7.x -f override-values.yaml
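
To verify that the deployment is coming up, the pods can be watched in the namespace chosen above (a generic kubectl check, not specific to the chart):

 kubectl -n <namespace> get pods -w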

EKS Helm notes#

Required changes to the values.yaml#

Inside the global values.yaml, change the keys marked with CHANGE-THIS to the appropriate values:

#global values to be used across charts
global:
  storageClass:
    provisioner: kubernetes.io/aws-ebs
  domain: demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
  isDomainRegistered: "false" # CHANGE-THIS  "true" or "false" to specify if the domain above is registered or not.    
nginx-ingress:
  ingress:
    enabled: true
    path: /
    hosts:
      - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
    tls:
      - secretName: tls-certificate
        hosts:
          - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
config:
  configmap:
    lbAddr: "" #CHANGE-THIS to the address received in the previous step, e.g. axx-109xx52.us-west-2.elb.amazonaws.com

Tweak the optional parameters in values.yaml to fit the setup needed.

GKE Helm notes#

Required changes to the values.yaml#

Inside the global values.yaml, change the keys marked with CHANGE-THIS to the appropriate values:

#global values to be used across charts
global:
  storageClass:
    provisioner: kubernetes.io/gce-pd
  domain: demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
  # Networking configs
  lbIp: "" #CHANGE-THIS  to the IP received from the previous step
  isDomainRegistered: "false" # CHANGE-THIS  "true" or "false" to specify if the domain above is registered or not.
nginx-ingress:
  ingress:
    enabled: true
    path: /
    hosts:
      - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
    tls:
      - secretName: tls-certificate
        hosts:
          - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu

Tweak the optional parameters in values.yaml to fit the setup needed.

Minikube Helm notes#

Required changes to the values.yaml#

Inside the global values.yaml, change the keys marked with CHANGE-THIS to the appropriate values:

#global values to be used across charts
global:
  storageClass:
    provisioner: k8s.io/minikube-hostpath
  domain: demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
  lbIp: "" #CHANGE-THIS  to the IP of minikube <minikube ip>

nginx-ingress:
  ingress:
    enabled: true
    path: /
    hosts:
      - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
    tls:
      - secretName: tls-certificate
        hosts:
          - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu

Tweak the optional parameters in values.yaml to fit the setup needed.

  • Map Gluu's FQDN in the /etc/hosts file to the Minikube IP as shown below.

    ##
    # Host Database
    #
    # localhost is used to configure the loopback interface
    # when the system is booting.  Do not change this entry.
    ##
    192.168.99.100  demoexample.gluu.org #minikube IP and example domain
    127.0.0.1   localhost
    255.255.255.255 broadcasthost
    ::1             localhost
    

MicroK8s Helm notes#

Required changes to the values.yaml#

Inside the global values.yaml, change the keys marked with CHANGE-THIS to the appropriate values:

#global values to be used across charts
global:
  storageClass:
    provisioner: microk8s.io/hostpath
  domain: demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
  lbIp: "" #CHANGE-THIS  to the IP of the microk8s VM

nginx-ingress:
  ingress:
    enabled: true
    path: /
    hosts:
      - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
    tls:
      - secretName: tls-certificate
        hosts:
          - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu

Tweak the optional parameters in values.yaml to fit the setup needed.

  • Map Gluu's FQDN in the /etc/hosts file to the MicroK8s VM IP as shown below.

    ##
    # Host Database
    #
    # localhost is used to configure the loopback interface
    # when the system is booting.  Do not change this entry.
    ##
    192.168.99.100    demoexample.gluu.org #microk8s IP and example domain
    127.0.0.1 localhost
    255.255.255.255   broadcasthost
    ::1             localhost

Uninstalling the Chart#

To uninstall/delete the my-release deployment:

helm delete <my-release>

If the release name was not defined during installation, look it up by running helm ls, then delete the release using the previous command with that name.
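
For example, assuming the release gluu was installed into the gluu namespace as in the manual Helm steps above:

helm ls -n gluu
helm delete gluu -n gluu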

Configuration#

Key Type Default Description
global object {"alb":{"ingress":{"additionalAnnotations":{"alb.ingress.kubernetes.io/auth-session-cookie":"custom-cookie","alb.ingress.kubernetes.io/certificate-arn":"arn:aws:acm:us-west-2:xxxx:certificate/xxxxxx","alb.ingress.kubernetes.io/scheme":"internet-facing","kubernetes.io/ingress.class":"alb"},"additionalLabels":{},"adminUiEnabled":true,"authServerEnabled":true,"casaEnabled":false,"enabled":false,"fido2ConfigEnabled":false,"fido2Enabled":false,"openidConfigEnabled":true,"passportEnabled":false,"scimConfigEnabled":false,"scimEnabled":false,"shibEnabled":false,"u2fConfigEnabled":true,"uma2ConfigEnabled":true,"webdiscoveryEnabled":true,"webfingerEnabled":true}},"azureStorageAccountType":"Standard_LRS","azureStorageKind":"Managed","cloud":{"testEnviroment":false},"cnGoogleApplicationCredentials":"/etc/gluu/conf/google-credentials.json","config":{"enabled":true},"configAdapterName":"kubernetes","configSecretAdapter":"kubernetes","cr-rotate":{"enabled":false},"domain":"demoexample.gluu.org","fido2":{"appLoggers":{"fido2LogLevel":"INFO","fido2LogTarget":"STDOUT","persistenceLogLevel":"INFO","persistenceLogTarget":"FILE"},"enabled":false},"gcePdStorageType":"pd-standard","gluuJackrabbitCluster":"true","gluuPersistenceType":"couchbase","isDomainRegistered":"false","istio":{"additionalAnnotations":{},"additionalLabels":{},"enabled":false,"ingress":false,"namespace":"istio-system"},"jackrabbit":{"enabled":true},"lbIp":"","ldapServiceName":"opendj","nginx-ingress":{"enabled":true},"opendj":{"enabled":true},"oxauth":{"appLoggers":{"auditStatsLogLevel":"INFO","auditStatsLogTarget":"FILE","authLogLevel":"INFO","authLogTarget":"STDOUT","cleanerLogLevel":"INFO","cleanerLogTarget":"FILE","httpLogLevel":"INFO","httpLogTarget":"FILE","ldapStatsLogLevel":"INFO","ldapStatsLogTarget":"FILE","persistenceDurationLogLevel":"INFO","persistenceDurationLogTarget":"FILE","persistenceLogLevel":"INFO","persistenceLogTarget":"FILE","scriptLogLevel":"INFO","scriptLogTarget":"FILE"},"enabled":true},"oxauth-key-rotation":{"enabled":false},"oxd-server":{"appLoggers":{"oxdServerLogLevel":"INFO","oxdServerLogTarget":"STDOUT"},"enabled":false},"oxshibboleth":{"appLoggers":{"auditStatsLogLevel":"INFO","auditStatsLogTarget":"FILE","consentAuditLogLevel":"INFO","consentAuditLogTarget":"FILE","idpLogLevel":"INFO","idpLogTarget":"STDOUT","scriptLogLevel":"INFO","scriptLogTarget":"FILE"},"enabled":false},"oxtrust":{"appLoggers":{"apachehcLogLevel":"INFO","apachehcLogTarget":"FILE","auditStatsLogLevel":"INFO","auditStatsLogTarget":"FILE","cacheRefreshLogLevel":"INFO","cacheRefreshLogTarget":"FILE","cacheRefreshPythonLogLevel":"INFO","cacheRefreshPythonLogTarget":"FILE","cleanerLogLevel":"INFO","cleanerLogTarget":"FILE","httpLogLevel":"INFO","httpLogTarget":"FILE","ldapStatsLogLevel":"INFO","ldapStatsLogTarget":"FILE","oxtrustLogLevel":"INFO","oxtrustLogTarget":"STDOUT","persistenceDurationLogLevel":"INFO","persistenceDurationLogTarget":"FILE","persistenceLogLevel":"INFO","persistenceLogTarget":"FILE","scriptLogLevel":"INFO","scriptLogTarget":"FILE","velocityLogLevel":"INFO","velocityLogTarget":"FILE"},"enabled":true},"persistence":{"enabled":true},"scim":{"appLoggers":{"persistenceDurationLogLevel":"INFO","persistenceDurationLogTarget":"FILE","persistenceLogLevel":"INFO","persistenceLogTarget":"FILE","scimLogLevel":"INFO","scimLogTarget":"STDOUT","scriptLogLevel":"INFO","scriptLogTarget":"FILE"},"enabled":false},"storageClass":{"allowVolumeExpansion":true,"allowedTopologies":[],"mountOptions":["debug"],"parameters":{},"provisioner":"microk8s.io/hostpath","reclaimPolicy":"Retain","volumeBindingMode":"WaitForFirstConsumer"},"upgrade":{"enabled":false,"image":{"repository":"gluufederation/upgrade","tag":"4.4.0-1"},"sourceVersion":"4.4","targetVersion":"4.4"},"usrEnvs":{"normal":{},"secret":{}}} Parameters used globally across all services helm charts.
global.alb.ingress.additionalAnnotations object {"alb.ingress.kubernetes.io/auth-session-cookie":"custom-cookie","alb.ingress.kubernetes.io/certificate-arn":"arn:aws:acm:us-west-2:xxxx:certificate/xxxxxx","alb.ingress.kubernetes.io/scheme":"internet-facing","kubernetes.io/ingress.class":"alb"} Additional annotations that will be added across all ingress definitions in the format of
global.alb.ingress.additionalLabels object {} Additional labels that will be added across all ingress definitions in the format of
global.alb.ingress.adminUiEnabled bool true Enable Admin UI endpoints /identity
global.alb.ingress.authServerEnabled bool true Enable Auth server endpoints /oxauth
global.alb.ingress.casaEnabled bool false Enable casa endpoints /casa
global.alb.ingress.fido2ConfigEnabled bool false Enable endpoint /.well-known/fido2-configuration
global.alb.ingress.fido2Enabled bool false Enable all fido2 endpoints /fido2
global.alb.ingress.openidConfigEnabled bool true Enable endpoint /.well-known/openid-configuration
global.alb.ingress.passportEnabled bool false Enable passport /passport
global.alb.ingress.scimConfigEnabled bool false Enable endpoint /.well-known/scim-configuration
global.alb.ingress.scimEnabled bool false Enable SCIM endpoints /scim
global.alb.ingress.shibEnabled bool false Enable oxshibboleth endpoints /idp
global.alb.ingress.u2fConfigEnabled bool true Enable endpoint /.well-known/fido-configuration
global.alb.ingress.uma2ConfigEnabled bool true Enable endpoint /.well-known/uma2-configuration
global.alb.ingress.webdiscoveryEnabled bool true Enable endpoint /.well-known/simple-web-discovery
global.alb.ingress.webfingerEnabled bool true Enable endpoint /.well-known/webfinger
global.azureStorageAccountType string "Standard_LRS" Volume storage type if using Azure disks.
global.azureStorageKind string "Managed" Azure storage kind if using Azure disks
global.cloud.testEnviroment bool false Boolean flag if enabled will strip resources requests and limits from all services.
global.cnGoogleApplicationCredentials string "/etc/gluu/conf/google-credentials.json" Base64 encoded service account. The sa must have roles/secretmanager.admin to use Google secrets and roles/spanner.databaseUser to use Spanner.
global.config.enabled bool true Boolean flag to enable/disable the configuration chart. This normally should never be false
global.configAdapterName string "kubernetes" The config backend adapter that will hold Gluu configuration layer. google
global.configSecretAdapter string "kubernetes" The config backend adapter that will hold Gluu secret layer. google
global.cr-rotate.enabled bool false Boolean flag to enable/disable the cr-rotate chart.
global.domain string "demoexample.gluu.org" Fully qualified domain name to be used for Gluu installation. This address will be used to reach Gluu services.
global.fido2.appLoggers object {"fido2LogLevel":"INFO","fido2LogTarget":"STDOUT","persistenceLogLevel":"INFO","persistenceLogTarget":"FILE"} App loggers can be configured to define where the logs will be redirected to and the level of each in which it should be displayed. log levels are "OFF", "FATAL", "ERROR", "WARN", "INFO", "DEBUG", "TRACE" Targets are "STDOUT" and "FILE"
global.fido2.appLoggers.fido2LogLevel string "INFO" fido2.log level
global.fido2.appLoggers.fido2LogTarget string "STDOUT" fido2.log target
global.fido2.appLoggers.persistenceLogLevel string "INFO" fido2_persistence.log level
global.fido2.appLoggers.persistenceLogTarget string "FILE" fido2_persistence.log target
global.fido2.enabled bool false Boolean flag to enable/disable the fido2 chart.
global.gcePdStorageType string "pd-standard" GCE storage kind if using Google disks
global.gluuJackrabbitCluster string "true" Boolean flag if enabled will enable jackrabbit in cluster mode with Postgres.
global.gluuPersistenceType string "couchbase" Persistence backend to run Gluu with ldap
global.isDomainRegistered string "false" Boolean flag to enable mapping global.lbIp to global.fqdn inside pods on clouds that provide a static IP for the load balancer. On clouds that provide only an address for the LB, this flag will enable a script that actively scans config.configmap.lbAddr and updates the hosts file inside the pods automatically.
global.istio.additionalAnnotations object {} Additional annotations that will be added across the gateway in the format of
global.istio.additionalLabels object {} Additional labels that will be added across the gateway in the format of
global.istio.enabled bool false Boolean flag that enables using istio gateway for Gluu. This assumes istio ingress is installed and hence the LB is available.
global.istio.ingress bool false Boolean flag that enables using istio side cars with Gluu services.
global.istio.namespace string "istio-system" The namespace istio is deployed in. This is normally istio-system.
global.jackrabbit.enabled bool true Boolean flag to enable/disable the jackrabbit chart. For more information on how it is used inside Gluu https://gluu.org/docs/gluu-server/4.2/installation-guide/install-kubernetes/#working-with-jackrabbit. If disabled oxShibboleth cannot be run.
global.lbIp string "" The Loadbalancer IP created by nginx or istio on clouds that provide static IPs. This is not needed if global.domain is globally resolvable.
global.ldapServiceName string "opendj" Name of the OpenDJ service. Please keep it as default.
global.nginx-ingress.enabled bool true Boolean flag to enable/disable the nginx-ingress definitions chart.
global.opendj.enabled bool true Boolean flag to enable/disable the OpenDJ chart.
global.oxauth-key-rotation.enabled bool false Boolean flag to enable/disable the oxauth-server-key rotation cronjob chart.
global.oxauth.appLoggers object {"auditStatsLogLevel":"INFO","auditStatsLogTarget":"FILE","authLogLevel":"INFO","authLogTarget":"STDOUT","cleanerLogLevel":"INFO","cleanerLogTarget":"FILE","httpLogLevel":"INFO","httpLogTarget":"FILE","ldapStatsLogLevel":"INFO","ldapStatsLogTarget":"FILE","persistenceDurationLogLevel":"INFO","persistenceDurationLogTarget":"FILE","persistenceLogLevel":"INFO","persistenceLogTarget":"FILE","scriptLogLevel":"INFO","scriptLogTarget":"FILE"} App loggers can be configured to define where the logs will be redirected to and the level of each in which it should be displayed. log levels are "OFF", "FATAL", "ERROR", "WARN", "INFO", "DEBUG", "TRACE" Targets are "STDOUT" and "FILE"
global.oxauth.appLoggers.auditStatsLogLevel string "INFO" oxauth_audit.log level
global.oxauth.appLoggers.auditStatsLogTarget string "FILE" oxauth_audit.log target
global.oxauth.appLoggers.authLogLevel string "INFO" oxauth.log level
global.oxauth.appLoggers.authLogTarget string "STDOUT" oxauth.log target
global.oxauth.appLoggers.cleanerLogLevel string "INFO" cleaner log level
global.oxauth.appLoggers.cleanerLogTarget string "FILE" cleaner log target
global.oxauth.appLoggers.httpLogLevel string "INFO" http_request_response.log level
global.oxauth.appLoggers.httpLogTarget string "FILE" http_request_response.log target
global.oxauth.appLoggers.ldapStatsLogLevel string "INFO" oxauth_persistence_ldap_statistics.log level
global.oxauth.appLoggers.ldapStatsLogTarget string "FILE" oxauth_persistence_ldap_statistics.log target
global.oxauth.appLoggers.persistenceDurationLogLevel string "INFO" oxauth_persistence_duration.log level
global.oxauth.appLoggers.persistenceDurationLogTarget string "FILE" oxauth_persistence_duration.log target
global.oxauth.appLoggers.persistenceLogLevel string "INFO" oxauth_persistence.log level
global.oxauth.appLoggers.persistenceLogTarget string "FILE" oxauth_persistence.log target
global.oxauth.appLoggers.scriptLogLevel string "INFO" oxauth_script.log level
global.oxauth.appLoggers.scriptLogTarget string "FILE" oxauth_script.log target
global.oxauth.enabled bool true Boolean flag to enable/disable oxauth chart. You should never set this to false.
global.oxd-server.appLoggers object {"oxdServerLogLevel":"INFO","oxdServerLogTarget":"STDOUT"} App loggers can be configured to define where the logs will be redirected to and the level of each in which it should be displayed. log levels are "OFF", "FATAL", "ERROR", "WARN", "INFO", "DEBUG", "TRACE" Targets are "STDOUT" and "FILE"
global.oxd-server.appLoggers.oxdServerLogLevel string "INFO" oxd-server.log level
global.oxd-server.appLoggers.oxdServerLogTarget string "STDOUT" oxd-server.log target
global.oxd-server.enabled bool false Boolean flag to enable/disable the oxd-server chart.
global.oxshibboleth.appLoggers object {"auditStatsLogLevel":"INFO","auditStatsLogTarget":"FILE","consentAuditLogLevel":"INFO","consentAuditLogTarget":"FILE","idpLogLevel":"INFO","idpLogTarget":"STDOUT","scriptLogLevel":"INFO","scriptLogTarget":"FILE"} App loggers can be configured to define where the logs will be redirected to and the level of each in which it should be displayed. log levels are "OFF", "FATAL", "ERROR", "WARN", "INFO", "DEBUG", "TRACE" Targets are "STDOUT" and "FILE"
global.oxshibboleth.appLoggers.auditStatsLogLevel string "INFO" idp-audit.log level
global.oxshibboleth.appLoggers.auditStatsLogTarget string "FILE" idp-audit.log target
global.oxshibboleth.appLoggers.consentAuditLogLevel string "INFO" idp-consent-audit.log level
global.oxshibboleth.appLoggers.consentAuditLogTarget string "FILE" idp-consent-audit.log target
global.oxshibboleth.appLoggers.idpLogLevel string "INFO" idp-process.log level
global.oxshibboleth.appLoggers.idpLogTarget string "STDOUT" idp-process.log target
global.oxshibboleth.appLoggers.scriptLogLevel string "INFO" idp script.log level
global.oxshibboleth.appLoggers.scriptLogTarget string "FILE" idp script.log target
global.oxshibboleth.enabled bool false Boolean flag to enable/disable the oxShibboleth chart.
global.oxtrust.appLoggers object {"apachehcLogLevel":"INFO","apachehcLogTarget":"FILE","auditStatsLogLevel":"INFO","auditStatsLogTarget":"FILE","cacheRefreshLogLevel":"INFO","cacheRefreshLogTarget":"FILE","cacheRefreshPythonLogLevel":"INFO","cacheRefreshPythonLogTarget":"FILE","cleanerLogLevel":"INFO","cleanerLogTarget":"FILE","httpLogLevel":"INFO","httpLogTarget":"FILE","ldapStatsLogLevel":"INFO","ldapStatsLogTarget":"FILE","oxtrustLogLevel":"INFO","oxtrustLogTarget":"STDOUT","persistenceDurationLogLevel":"INFO","persistenceDurationLogTarget":"FILE","persistenceLogLevel":"INFO","persistenceLogTarget":"FILE","scriptLogLevel":"INFO","scriptLogTarget":"FILE","velocityLogLevel":"INFO","velocityLogTarget":"FILE"} App loggers can be configured to define where the logs will be redirected to and the level of each in which it should be displayed. log levels are "OFF", "FATAL", "ERROR", "WARN", "INFO", "DEBUG", "TRACE" Targets are "STDOUT" and "FILE"
global.oxtrust.appLoggers.apachehcLogLevel string "INFO" apachehc log level
global.oxtrust.appLoggers.apachehcLogTarget string "FILE" apachehc log target
global.oxtrust.appLoggers.auditStatsLogLevel string "INFO" oxtrust_audit.log level
global.oxtrust.appLoggers.auditStatsLogTarget string "FILE" oxtrust_audit.log target
global.oxtrust.appLoggers.cacheRefreshLogLevel string "INFO" cache refresh log level
global.oxtrust.appLoggers.cacheRefreshLogTarget string "FILE" cache refresh log target
global.oxtrust.appLoggers.cacheRefreshPythonLogLevel string "INFO" cache refresh python log level
global.oxtrust.appLoggers.cacheRefreshPythonLogTarget string "FILE" cache refresh python log target
global.oxtrust.appLoggers.cleanerLogLevel string "INFO" cleaner log level
global.oxtrust.appLoggers.cleanerLogTarget string "FILE" cleaner log target
global.oxtrust.appLoggers.httpLogLevel string "INFO" http_request_response.log level
global.oxtrust.appLoggers.httpLogTarget string "FILE" http_request_response.log target
global.oxtrust.appLoggers.ldapStatsLogLevel string "INFO" oxtrust_persistence_ldap_statistics.log level
global.oxtrust.appLoggers.ldapStatsLogTarget string "FILE" oxtrust_persistence_ldap_statistics.log target
global.oxtrust.appLoggers.oxtrustLogLevel string "INFO" oxtrust.log level
global.oxtrust.appLoggers.oxtrustLogTarget string "STDOUT" oxtrust.log target
global.oxtrust.appLoggers.persistenceDurationLogLevel string "INFO" oxtrust_persistence_duration.log level
global.oxtrust.appLoggers.persistenceDurationLogTarget string "FILE" oxtrust_persistence_duration.log target
global.oxtrust.appLoggers.persistenceLogLevel string "INFO" oxtrust_persistence.log level
global.oxtrust.appLoggers.persistenceLogTarget string "FILE" oxtrust_persistence.log target
global.oxtrust.appLoggers.scriptLogLevel string "INFO" oxtrust_script.log level
global.oxtrust.appLoggers.scriptLogTarget string "FILE" oxtrust_script.log target
global.oxtrust.appLoggers.velocityLogLevel string "INFO" velocity log level
global.oxtrust.appLoggers.velocityLogTarget string "FILE" velocity log target
global.oxtrust.enabled bool true Boolean flag to enable/disable the oxtrust chart.
global.persistence.enabled bool true Boolean flag to enable/disable the persistence chart.
global.scim.appLoggers object {"persistenceDurationLogLevel":"INFO","persistenceDurationLogTarget":"FILE","persistenceLogLevel":"INFO","persistenceLogTarget":"FILE","scimLogLevel":"INFO","scimLogTarget":"STDOUT","scriptLogLevel":"INFO","scriptLogTarget":"FILE"} App loggers can be configured to define where the logs will be redirected to and the level of each in which it should be displayed. log levels are "OFF", "FATAL", "ERROR", "WARN", "INFO", "DEBUG", "TRACE" Targets are "STDOUT" and "FILE"
global.scim.appLoggers.persistenceDurationLogLevel string "INFO" scim_persistence_duration.log level
global.scim.appLoggers.persistenceDurationLogTarget string "FILE" scim_persistence_duration.log target
global.scim.appLoggers.persistenceLogLevel string "INFO" scim_persistence.log level
global.scim.appLoggers.persistenceLogTarget string "FILE" scim_persistence.log target
global.scim.appLoggers.scimLogLevel string "INFO" scim.log level
global.scim.appLoggers.scimLogTarget string "STDOUT" scim.log target
global.scim.appLoggers.scriptLogLevel string "INFO" scim_script.log level
global.scim.appLoggers.scriptLogTarget string "FILE" scim_script.log target
global.scim.enabled bool false Boolean flag to enable/disable the SCIM chart.
global.storageClass object {"allowVolumeExpansion":true,"allowedTopologies":[],"mountOptions":["debug"],"parameters":{},"provisioner":"microk8s.io/hostpath","reclaimPolicy":"Retain","volumeBindingMode":"WaitForFirstConsumer"} StorageClass section for Jackrabbit and OpenDJ charts. This is not currently used by the openbanking distribution. You may specify custom parameters as needed.
global.storageClass.parameters object {} parameters:
global.upgrade.enabled bool false Boolean flag used when running upgrading through versions command.
global.upgrade.image.repository string "gluufederation/upgrade" Image to use for deploying.
global.upgrade.image.tag string "4.4.0-1" Image tag to use for deploying.
global.upgrade.sourceVersion string "4.4" Source version currently running. This is normally one minor version down. The step should only be one minor version per upgrade
global.upgrade.targetVersion string "4.4" Target version currently running. This is normally one minor version up. The step should only be one minor version per upgrade
global.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service. Envs defined in global.usrEnvs will be globally available to all services
global.usrEnvs.normal object {} Add custom normal envs to the service. variable1: value1
global.usrEnvs.secret object {} Add custom secret envs to the service. variable1: value1
Key Type Default Description
config object {"additionalAnnotations":{},"additionalLabels":{},"adminPass":"P@ssw0rd","city":"Austin","configmap":{"cnConfigGoogleSecretNamePrefix":"gluu","cnConfigGoogleSecretVersionId":"latest","cnGoogleProjectId":"google-project-to-save-config-and-secrets-to","cnGoogleSecretManagerPassPhrase":"Test1234#","cnGoogleServiceAccount":"SWFtTm90YVNlcnZpY2VBY2NvdW50Q2hhbmdlTWV0b09uZQo=","cnGoogleSpannerDatabaseId":"","cnGoogleSpannerInstanceId":"","cnSecretGoogleSecretNamePrefix":"gluu","cnSecretGoogleSecretVersionId":"latest","cnSqlDbDialect":"mysql","cnSqlDbHost":"my-release-mysql.default.svc.cluster.local","cnSqlDbName":"gluu","cnSqlDbPort":3306,"cnSqlDbTimezone":"UTC","cnSqlDbUser":"gluu","cnSqlPasswordFile":"/etc/gluu/conf/sql_password","cnSqldbUserPassword":"Test1234#","containerMetadataName":"kubernetes","gluuCacheType":"NATIVE_PERSISTENCE","gluuCasaEnabled":false,"gluuCouchbaseBucketPrefix":"gluu","gluuCouchbaseCertFile":"/etc/certs/couchbase.crt","gluuCouchbaseCrt":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURlakNDQW1LZ0F3SUJBZ0lKQUwyem5UWlREUHFNTUEwR0NTcUdTSWIzRFFFQkN3VUFNQzB4S3pBcEJnTlYKQkFNTUlpb3VZMkpuYkhWMUxtUmxabUYxYkhRdWMzWmpMbU5zZFhOMFpYSXViRzlqWVd3d0hoY05NakF3TWpBMQpNRGt4T1RVeFdoY05NekF3TWpBeU1Ea3hPVFV4V2pBdE1Tc3dLUVlEVlFRRERDSXFMbU5pWjJ4MWRTNWtaV1poCmRXeDBMbk4yWXk1amJIVnpkR1Z5TG14dlkyRnNNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUIKQ2dLQ0FRRUFycmQ5T3lvSnRsVzhnNW5nWlJtL2FKWjJ2eUtubGU3dVFIUEw4Q2RJa1RNdjB0eHZhR1B5UkNQQgo3RE00RTFkLzhMaU5takdZZk41QjZjWjlRUmNCaG1VNmFyUDRKZUZ3c0x0cTFGT3MxaDlmWGo3d3NzcTYrYmlkCjV6Umw3UEE0YmdvOXVkUVRzU1UrWDJUUVRDc0dxVVVPWExrZ3NCMjI0RDNsdkFCbmZOeHcvYnFQa2ZCQTFxVzYKVXpxellMdHN6WE5GY0dQMFhtU3c4WjJuaFhhUGlva2pPT2dyMkMrbVFZK0htQ2xGUWRpd2g2ZjBYR0V0STMrKwoyMStTejdXRkF6RlFBVUp2MHIvZnk4TDRXZzh1YysvalgwTGQrc2NoQTlNQjh3YmJORUp2ZjNMOGZ5QjZ0cTd2CjF4b0FnL0g0S1dJaHdqSEN0dFVnWU1oU0xWV3UrUUlEQVFBQm80R2NNSUdaTUIwR0ExVWREZ1FXQkJTWmQxWU0KVGNIRVZjSENNUmp6ejczZitEVmxxREJkQmdOVkhTTUVWakJVZ0JTWmQxWU1UY0hFVmNIQ01Sanp6NzNmK0RWbApxS0V4cEM4d0xURXJNQ2tHQTFVRUF3d2lLaTVqWW1kc2RYVXVaR1ZtWVhWc2RDNXpkbU11WTJ4MWMzUmxjaTVzCmIyTmhiSUlKQUwyem5UWlREUHFNTUF3R0ExVWRFd1FGTUFNQkFmOHdDd1lEVlIwUEJBUURBZ0VHTUEwR0NTcUcKU0liM0RRRUJDd1VBQTRJQkFRQk9meTVWSHlKZCtWUTBXaUQ1aSs2cmhidGNpSmtFN0YwWVVVZnJ6UFN2YWVFWQp2NElVWStWOC9UNnE4Mk9vVWU1eCtvS2dzbFBsL01nZEg2SW9CRnVtaUFqek14RTdUYUhHcXJ5dk13Qk5IKzB5CnhadG9mSnFXQzhGeUlwTVFHTEs0RVBGd3VHRlJnazZMRGR2ZEN5NVdxWW1MQWdBZVh5VWNaNnlHYkdMTjRPUDUKZTFiaEFiLzRXWXRxRHVydFJrWjNEejlZcis4VWNCVTRLT005OHBZN05aaXFmKzlCZVkvOEhZaVQ2Q0RRWWgyTgoyK0VWRFBHcFE4UkVsRThhN1ZLL29MemlOaXFyRjllNDV1OU1KdjM1ZktmNUJjK2FKdWduTGcwaUZUYmNaT1prCkpuYkUvUENIUDZFWmxLaEFiZUdnendtS1dDbTZTL3g0TklRK2JtMmoKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=","gluuCouchbaseIndexNumReplica":0,"gluuCouchbasePass":"P@ssw0rd","gluuCouchbasePassFile":"/etc/gluu/conf/couchbase_password","gluuCouchbaseSuperUser":"admin","gluuCouchbaseSuperUserPass":"P@ssw0rd","gluuCouchbaseSuperUserPassFile":"/etc/gluu/conf/couchbase_superuser_password","gluuCouchbaseUrl":"cbgluu.default.svc.cluster.local","gluuCouchbaseUser":"gluu","gluuDocumentStoreType":"DB","gluuJackrabbitAdminId":"admin","gluuJackrabbitAdminIdFile":"/etc/gluu/conf/jackrabbit_admin_id","gluuJackrabbitAdminPassFile":"/etc/gluu/conf/jackrabbit_admin_password","gluuJackrabbitPostgresDatabaseName":"jackrabbit","gluuJackrabbitPostgresHost":"postgresql.postgres.svc.cluster.local","gluuJackrabbitPostgresPasswordFile":"/etc/gluu/conf/postgres_password","gluuJackrabbitPostgresPort":5432,"gluuJackrabbitPostgresUser":"jackrabbit","gluuJackrabbitSyncInterval":300,"gluuJackrabbitUrl":"http://jackrabbit:8080","gluuLdapUrl":"opendj:1636","gluuMaxRamPercent":"75.0","gluuOxauthBackend":"oxauth:8080","gluuOxdAdminCertCn":"oxd-server","gluuOxdApplicationCertCn":"oxd-server","gluuOxdBindIpAddresses":"*","gluuOxdServerUrl":"oxd-server:8443","gluuOxtrustApiEnabled":false,"gluuOxtrustApiTestMode":false,"gluuOxtrustBackend":"oxtrust:8080","gluuOxtrustConfigGeneration":true,"gluuPassportEnabled":false,"gluuPassportFailureRedirectUrl":"","gluuPersistenceLdapMapping":"default","gluuRedisSentinelGroup":"","gluuRedisSslTruststore":"","gluuRedisType":"STANDALONE","gluuRedisUrl":"redis:6379","gluuRedisUseSsl":"false","gluuSamlEnabled":false,"gluuScimProtectionMode":"OAUTH","gluuSyncCasaManifests":false,"gluuSyncShibManifests":false,"lbAddr":""},"countryCode":"US","dnsConfig":{},"dnsPolicy":"","email":"support@gluu.com","image":{"pullSecrets":[],"repository":"gluufederation/config-init","tag":"4.4.0-1"},"ldapPass":"P@ssw0rd","migration":{"enabled":false,"migrationDataFormat":"ldif","migrationDir":"/ce-migration"},"orgName":"Gluu","redisPass":"P@assw0rd","resources":{"limits":{"cpu":"300m","memory":"300Mi"},"requests":{"cpu":"300m","memory":"300Mi"}},"state":"TX","usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} Configuration parameters for setup and initial configuration secret and config layers used by Gluu services.
config.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
config.additionalLabels object {} Additional labels that will be added across all resources definitions in the format of
config.adminPass string "P@ssw0rd" Admin password to log in to the UI.
config.city string "Austin" City. Used for certificate creation.
config.configmap.cnConfigGoogleSecretNamePrefix string "gluu" Prefix for Gluu configuration secret in Google Secret Manager. Defaults to gluu. If left intact gluu-configuration secret will be created. Used only when global.configAdapterName and global.configSecretAdapter is set to google.
config.configmap.cnConfigGoogleSecretVersionId string "latest" Secret version to be used for configuration. Defaults to latest and should normally always stay that way. Used only when global.configAdapterName and global.configSecretAdapter is set to google.
config.configmap.cnGoogleProjectId string "google-project-to-save-config-and-secrets-to" Project id of the google project the secret manager belongs to. Used only when global.configAdapterName and global.configSecretAdapter is set to google.
config.configmap.cnGoogleSecretManagerPassPhrase string "Test1234#" Passphrase for Gluu secret in Google Secret Manager. This is used for encrypting and decrypting data from the Google Secret Manager. Used only when global.configAdapterName and global.configSecretAdapter is set to google.
config.configmap.cnGoogleServiceAccount string "SWFtTm90YVNlcnZpY2VBY2NvdW50Q2hhbmdlTWV0b09uZQo=" Service account with roles roles/secretmanager.admin base64 encoded string. This is used often inside the services to reach the configuration layer. Used only when global.configAdapterName and global.configSecretAdapter is set to google.
config.configmap.cnGoogleSpannerDatabaseId string "" Google Spanner Database ID. Used only when global.gluuPersistenceType is spanner.
config.configmap.cnGoogleSpannerInstanceId string "" Google Spanner ID. Used only when global.gluuPersistenceType is spanner.
config.configmap.cnSecretGoogleSecretNamePrefix string "gluu" Prefix for Gluu secret in Google Secret Manager. Defaults to gluu. If left intact, gluu-secret will be created. Used only when global.configAdapterName and global.configSecretAdapter is set to google.
config.configmap.cnSecretGoogleSecretVersionId string "latest" Secret version to be used for secret configuration. Defaults to latest and should normally always stay that way. Used only when global.configAdapterName and global.configSecretAdapter is set to google.
config.configmap.cnSqlDbDialect string "mysql" SQL database dialect. mysql or pgsql
config.configmap.cnSqlDbHost string "my-release-mysql.default.svc.cluster.local" SQL database host uri.
config.configmap.cnSqlDbName string "gluu" SQL database name.
config.configmap.cnSqlDbPort int 3306 SQL database port.
config.configmap.cnSqlDbTimezone string "UTC" SQL database timezone.
config.configmap.cnSqlDbUser string "gluu" SQL database username.
config.configmap.cnSqlPasswordFile string "/etc/gluu/conf/sql_password" SQL password file holding password from config.configmap.cnSqldbUserPassword .
config.configmap.cnSqldbUserPassword string "Test1234#" SQL password injected as config.configmap.cnSqlPasswordFile .
config.configmap.gluuCacheType string "NATIVE_PERSISTENCE" Cache type. NATIVE_PERSISTENCE, REDIS, or IN_MEMORY. Defaults to NATIVE_PERSISTENCE.
config.configmap.gluuCasaEnabled bool false Enable Casa flag .
config.configmap.gluuCouchbaseBucketPrefix string "gluu" The prefix of couchbase buckets. This helps with separation in between different environments and allows for the same couchbase cluster to be used by different setups of Gluu.
config.configmap.gluuCouchbaseCertFile string "/etc/certs/couchbase.crt" Location of couchbase.crt used by Couchbase SDK for tls termination. The file path must end with couchbase.crt. In mTLS setups this is not required.
config.configmap.gluuCouchbaseCrt string "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURlakNDQW1LZ0F3SUJBZ0lKQUwyem5UWlREUHFNTUEwR0NTcUdTSWIzRFFFQkN3VUFNQzB4S3pBcEJnTlYKQkFNTUlpb3VZMkpuYkhWMUxtUmxabUYxYkhRdWMzWmpMbU5zZFhOMFpYSXViRzlqWVd3d0hoY05NakF3TWpBMQpNRGt4T1RVeFdoY05NekF3TWpBeU1Ea3hPVFV4V2pBdE1Tc3dLUVlEVlFRRERDSXFMbU5pWjJ4MWRTNWtaV1poCmRXeDBMbk4yWXk1amJIVnpkR1Z5TG14dlkyRnNNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUIKQ2dLQ0FRRUFycmQ5T3lvSnRsVzhnNW5nWlJtL2FKWjJ2eUtubGU3dVFIUEw4Q2RJa1RNdjB0eHZhR1B5UkNQQgo3RE00RTFkLzhMaU5takdZZk41QjZjWjlRUmNCaG1VNmFyUDRKZUZ3c0x0cTFGT3MxaDlmWGo3d3NzcTYrYmlkCjV6Umw3UEE0YmdvOXVkUVRzU1UrWDJUUVRDc0dxVVVPWExrZ3NCMjI0RDNsdkFCbmZOeHcvYnFQa2ZCQTFxVzYKVXpxellMdHN6WE5GY0dQMFhtU3c4WjJuaFhhUGlva2pPT2dyMkMrbVFZK0htQ2xGUWRpd2g2ZjBYR0V0STMrKwoyMStTejdXRkF6RlFBVUp2MHIvZnk4TDRXZzh1YysvalgwTGQrc2NoQTlNQjh3YmJORUp2ZjNMOGZ5QjZ0cTd2CjF4b0FnL0g0S1dJaHdqSEN0dFVnWU1oU0xWV3UrUUlEQVFBQm80R2NNSUdaTUIwR0ExVWREZ1FXQkJTWmQxWU0KVGNIRVZjSENNUmp6ejczZitEVmxxREJkQmdOVkhTTUVWakJVZ0JTWmQxWU1UY0hFVmNIQ01Sanp6NzNmK0RWbApxS0V4cEM4d0xURXJNQ2tHQTFVRUF3d2lLaTVqWW1kc2RYVXVaR1ZtWVhWc2RDNXpkbU11WTJ4MWMzUmxjaTVzCmIyTmhiSUlKQUwyem5UWlREUHFNTUF3R0ExVWRFd1FGTUFNQkFmOHdDd1lEVlIwUEJBUURBZ0VHTUEwR0NTcUcKU0liM0RRRUJDd1VBQTRJQkFRQk9meTVWSHlKZCtWUTBXaUQ1aSs2cmhidGNpSmtFN0YwWVVVZnJ6UFN2YWVFWQp2NElVWStWOC9UNnE4Mk9vVWU1eCtvS2dzbFBsL01nZEg2SW9CRnVtaUFqek14RTdUYUhHcXJ5dk13Qk5IKzB5CnhadG9mSnFXQzhGeUlwTVFHTEs0RVBGd3VHRlJnazZMRGR2ZEN5NVdxWW1MQWdBZVh5VWNaNnlHYkdMTjRPUDUKZTFiaEFiLzRXWXRxRHVydFJrWjNEejlZcis4VWNCVTRLT005OHBZN05aaXFmKzlCZVkvOEhZaVQ2Q0RRWWgyTgoyK0VWRFBHcFE4UkVsRThhN1ZLL29MemlOaXFyRjllNDV1OU1KdjM1ZktmNUJjK2FKdWduTGcwaUZUYmNaT1prCkpuYkUvUENIUDZFWmxLaEFiZUdnendtS1dDbTZTL3g0TklRK2JtMmoKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=" Couchbase certificate authority string. This must be encoded using base64. This can also be found in your couchbase UI Security > Root Certificate. In mTLS setups this is not required.
config.configmap.gluuCouchbaseIndexNumReplica int 0 The number of replicas per index created. Please note that the number of index nodes must be one greater than the number of index replicas. That means if your couchbase cluster only has 2 index nodes you cannot place the number of replicas to be higher than 1.
config.configmap.gluuCouchbasePass string "P@ssw0rd" Couchbase password for the restricted user config.configmap.gluuCouchbaseUser that is often used inside the services. The password must contain one digit, one uppercase letter, one lower case letter and one symbol .
config.configmap.gluuCouchbasePassFile string "/etc/gluu/conf/couchbase_password" The location of the Couchbase restricted user config.configmap.gluuCouchbaseUser password. The file path must end with couchbase_password
config.configmap.gluuCouchbaseSuperUser string "admin" The Couchbase super user (admin) user name. This user is used during initialization only.
config.configmap.gluuCouchbaseSuperUserPass string "P@ssw0rd" Couchbase password for the super user config.configmap.gluuCouchbaseSuperUser that is used during the initialization process. The password must contain one digit, one uppercase letter, one lower case letter and one symbol
config.configmap.gluuCouchbaseSuperUserPassFile string "/etc/gluu/conf/couchbase_superuser_password" The location of the Couchbase restricted user config.configmap.gluuCouchbaseSuperUser password. The file path must end with couchbase_superuser_password.
config.configmap.gluuCouchbaseUrl string "cbgluu.default.svc.cluster.local" Couchbase URL. Used only when global.gluuPersistenceType is hybrid or couchbase. This should be in FQDN format for either remote or local Couchbase clusters. The address can be an internal address inside the kubernetes cluster
config.configmap.gluuCouchbaseUser string "gluu" Couchbase restricted user. Used only when global.gluuPersistenceType is hybrid or couchbase.
config.configmap.gluuDocumentStoreType string "DB" Document store type to use for shibboleth files JCA or LOCAL. Note that if JCA is selected Apache Jackrabbit will be used. Jackrabbit also enables loading custom files across all services easily.
config.configmap.gluuJackrabbitAdminId string "admin" Jackrabbit admin uid.
config.configmap.gluuJackrabbitAdminIdFile string "/etc/gluu/conf/jackrabbit_admin_id" The location of the Jackrabbit admin uid config.gluuJackrabbitAdminId. The file path must end with jackrabbit_admin_id.
config.configmap.gluuJackrabbitAdminPassFile string "/etc/gluu/conf/jackrabbit_admin_password" The location of the Jackrabbit admin password jackrabbit.secrets.gluuJackrabbitAdminPassword. The file path must end with jackrabbit_admin_password.
config.configmap.gluuJackrabbitPostgresDatabaseName string "jackrabbit" Jackrabbit postgres database name.
config.configmap.gluuJackrabbitPostgresHost string "postgresql.postgres.svc.cluster.local" Postgres url
config.configmap.gluuJackrabbitPostgresPasswordFile string "/etc/gluu/conf/postgres_password" The location of the Jackrabbit postgres password file jackrabbit.secrets.gluuJackrabbitPostgresPassword. The file path must end with postgres_password.
config.configmap.gluuJackrabbitPostgresPort int 5432 Jackrabbit Postgres port
config.configmap.gluuJackrabbitPostgresUser string "jackrabbit" Jackrabbit Postgres uid
config.configmap.gluuJackrabbitSyncInterval int 300 Interval between files sync (default to 300 seconds).
config.configmap.gluuJackrabbitUrl string "http://jackrabbit:8080" Jackrabbit internal url. Normally left as default.
config.configmap.gluuLdapUrl string "opendj:1636" OpenDJ internal address. Leave as default. Used when global.gluuPersistenceType is set to ldap.
config.configmap.gluuMaxRamPercent string "75.0" Value passed to Java option -XX:MaxRAMPercentage
config.configmap.gluuOxauthBackend string "oxauth:8080" oxAuth internal address. Leave as default.
config.configmap.gluuOxdAdminCertCn string "oxd-server" OXD server OAuth client admin certificate common name. This should be left to the default value oxd-server.
config.configmap.gluuOxdApplicationCertCn string "oxd-server" OXD server OAuth client application certificate common name. This should be left to the default value oxd-server.
config.configmap.gluuOxdBindIpAddresses string "*" OXD server bind address. This limits what ip ranges can access the client-api. This should be left as * and controlled by a NetworkPolicy
config.configmap.gluuOxdServerUrl string "oxd-server:8443" OXD server Oauth client address. This should be left intact in kubernetes as it uses the internal address format.
config.configmap.gluuOxtrustApiEnabled bool false Enable oxTrust API
config.configmap.gluuOxtrustApiTestMode bool false Enable oxTrust API testmode
config.configmap.gluuOxtrustBackend string "oxtrust:8080" oxTrust internal address. Leave as default.
config.configmap.gluuOxtrustConfigGeneration bool true Whether to generate oxShibboleth configuration or not (default to true).
config.configmap.gluuPassportEnabled bool false Boolean flag to enable/disable passport chart
config.configmap.gluuPassportFailureRedirectUrl string "" TEMP KEY TO BE REMOVED IN 4.4 which allows passport failure redirect url to be specified.
config.configmap.gluuPersistenceLdapMapping string "default" Specify data that should be saved in LDAP (one of default, user, cache, site, token, or session; default to default). Note this environment only takes effect when global.gluuPersistenceType is set to hybrid.
config.configmap.gluuRedisSentinelGroup string "" Redis Sentinel Group. Often set when config.configmap.gluuRedisType is set to SENTINEL. Can be used when config.configmap.gluuCacheType is set to REDIS.
config.configmap.gluuRedisSslTruststore string "" Redis SSL truststore. Optional. Can be used when config.configmap.gluuCacheType is set to REDIS.
config.configmap.gluuRedisType string "STANDALONE" Redis service type. STANDALONE or CLUSTER. Can be used when config.configmap.gluuCacheType is set to REDIS.
config.configmap.gluuRedisUrl string "redis:6379" Redis URL and port number, in host:port form. Can be used when config.configmap.gluuCacheType is set to REDIS.
config.configmap.gluuRedisUseSsl string "false" Boolean to use SSL in Redis. Can be used when config.configmap.gluuCacheType is set to REDIS.
config.configmap.gluuSamlEnabled bool false Enable SAML-related features; UI menu, etc.
config.configmap.gluuScimProtectionMode string "OAUTH" SCIM protection mode OAUTH
config.configmap.gluuSyncCasaManifests bool false Activate manual Casa files sync - deprecated
config.configmap.gluuSyncShibManifests bool false Activate manual Shib files sync - deprecated
config.configmap.lbAddr string "" Loadbalancer address for AWS if the FQDN is not registered.
config.countryCode string "US" Country code. Used for certificate creation.
config.dnsConfig object {} Add custom dns config
config.dnsPolicy string "" Add custom dns policy
config.email string "support@gluu.com" Email address of the administrator usually. Used for certificate creation.
config.image.pullSecrets list [] Image Pull Secrets
config.image.repository string "gluufederation/config-init" Image to use for deploying.
config.image.tag string "4.4.0-1" Image tag to use for deploying.
config.ldapPass string "P@ssw0rd" LDAP admin password if OpenDJ is used for persistence.
config.migration object {"enabled":false,"migrationDataFormat":"ldif","migrationDir":"/ce-migration"} CE to CN Migration section
config.migration.enabled bool false Boolean flag to enable migration from CE
config.migration.migrationDataFormat string "ldif" migration data-format depending on persistence backend. Supported data formats are ldif, couchbase+json, spanner+avro, postgresql+json, and mysql+json.
config.migration.migrationDir string "/ce-migration" Directory holding all migration files
config.orgName string "Gluu" Organization name. Used for certificate creation.
config.redisPass string "P@assw0rd" Redis admin password if config.configmap.gluuCacheType is set to REDIS.
config.resources object {"limits":{"cpu":"300m","memory":"300Mi"},"requests":{"cpu":"300m","memory":"300Mi"}} Resource specs.
config.resources.limits.cpu string "300m" CPU limit.
config.resources.limits.memory string "300Mi" Memory limit.
config.resources.requests.cpu string "300m" CPU request.
config.resources.requests.memory string "300Mi" Memory request.
config.state string "TX" State code. Used for certificate creation.
config.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service.
config.usrEnvs.normal object {} Add custom normal envs to the service. variable1: value1
config.usrEnvs.secret object {} Add custom secret envs to the service. variable1: value1
config.volumeMounts list [] Configure any additional volumesMounts that need to be attached to the containers
config.volumes list [] Configure any additional volumes that need to be attached to the pod
Key Type Default Description
nginx-ingress object {"certManager":{"certificate":{"enabled":false,"issuerGroup":"cert-manager.io","issuerKind":"ClusterIssuer","issuerName":""}},"ingress":{"additionalAnnotations":{"kubernetes.io/ingress.class":"nginx"},"additionalLabels":{},"adminUiAdditionalAnnotations":{},"adminUiEnabled":true,"adminUiLabels":{},"authServerAdditionalAnnotations":{},"authServerEnabled":true,"authServerLabels":{},"casaAdditionalAnnotations":{},"casaEnabled":false,"casaLabels":{},"deviceCodeAdditionalAnnotations":{},"deviceCodeEnabled":true,"deviceCodeLabels":{},"enabled":true,"fido2ConfigAdditionalAnnotations":{},"fido2ConfigEnabled":false,"fido2ConfigLabels":{},"fido2Enabled":false,"fido2Labels":{},"firebaseMessagingAdditionalAnnotations":{},"firebaseMessagingEnabled":true,"firebaseMessagingLabels":{},"hosts":["demoexample.gluu.org"],"legacy":false,"openidAdditionalAnnotations":{},"openidConfigEnabled":true,"openidConfigLabels":{},"passportAdditionalAnnotations":{},"passportEnabled":false,"passportLabels":{},"path":"/","scimAdditionalAnnotations":{},"scimConfigAdditionalAnnotations":{},"scimConfigEnabled":false,"scimConfigLabels":{},"scimEnabled":false,"scimLabels":{},"shibAdditionalAnnotations":{},"shibEnabled":false,"shibLabels":{},"tls":[{"hosts":["demoexample.gluu.org"],"secretName":"tls-certificate"}],"u2fAdditionalAnnotations":{},"u2fConfigEnabled":true,"u2fConfigLabels":{},"uma2AdditionalAnnotations":{},"uma2ConfigEnabled":true,"uma2ConfigLabels":{},"webdiscoveryAdditionalAnnotations":{},"webdiscoveryEnabled":true,"webdiscoveryLabels":{},"webfingerAdditionalAnnotations":{},"webfingerEnabled":true,"webfingerLabels":{}}} Nginx ingress definitions chart
nginx-ingress.ingress.additionalAnnotations object {"kubernetes.io/ingress.class":"nginx"} Additional annotations that will be added across all ingress definitions, in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. The key app is reserved. Useful client-certificate authentication annotations include nginx.ingress.kubernetes.io/auth-tls-verify-client: "optional" (enable client certificate authentication), nginx.ingress.kubernetes.io/auth-tls-secret: "gluu/tls-certificate" (the secret containing the trusted CA certificates), nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1" (the verification depth in the client certificate chain), and nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true" (whether certificates are passed to the upstream server). See the example after this table.
nginx-ingress.ingress.additionalAnnotations."kubernetes.io/ingress.class" string "nginx" Required annotation. Use kubernetes.io/ingress.class: "public" for microk8s.
nginx-ingress.ingress.additionalLabels object {} Additional labels that will be added across all ingress definitions.
nginx-ingress.ingress.adminUiAdditionalAnnotations object {} Admin UI ingress resource additional annotations.
nginx-ingress.ingress.adminUiEnabled bool true Enable Admin UI endpoints /identity
nginx-ingress.ingress.adminUiLabels object {} Admin UI ingress resource labels. key app is taken.
nginx-ingress.ingress.authServerAdditionalAnnotations object {} Auth server ingress resource additional annotations.
nginx-ingress.ingress.authServerEnabled bool true Enable Auth server endpoints /oxauth
nginx-ingress.ingress.authServerLabels object {} Auth server config ingress resource labels. key app is taken
nginx-ingress.ingress.casaAdditionalAnnotations object {} Casa ingress resource additional annotations.
nginx-ingress.ingress.casaEnabled bool false Enable casa endpoints /casa
nginx-ingress.ingress.casaLabels object {} Casa ingress resource labels. key app is taken
nginx-ingress.ingress.deviceCodeAdditionalAnnotations object {} device-code ingress resource additional annotations.
nginx-ingress.ingress.deviceCodeEnabled bool true Enable endpoint /device-code
nginx-ingress.ingress.deviceCodeLabels object {} device-code ingress resource labels. key app is taken
nginx-ingress.ingress.fido2ConfigAdditionalAnnotations object {} fido2 config ingress resource additional annotations.
nginx-ingress.ingress.fido2ConfigEnabled bool false Enable endpoint /.well-known/fido2-configuration
nginx-ingress.ingress.fido2ConfigLabels object {} fido2 config ingress resource labels. key app is taken
nginx-ingress.ingress.fido2Enabled bool false Enable all fido2 endpoints
nginx-ingress.ingress.fido2Labels object {} fido2 ingress resource labels. key app is taken
nginx-ingress.ingress.firebaseMessagingAdditionalAnnotations object {} Firebase Messaging ingress resource additional annotations.
nginx-ingress.ingress.firebaseMessagingEnabled bool true Enable endpoint /firebase-messaging-sw.js
nginx-ingress.ingress.firebaseMessagingLabels object {} Firebase Messaging ingress resource labels. key app is taken
nginx-ingress.ingress.legacy bool false Enable use of legacy API version networking.k8s.io/v1beta1 to support kubernetes 1.18. This flag should be removed in the next version release along with nginx-ingress/templates/ingress-legacy.yaml.
nginx-ingress.ingress.openidAdditionalAnnotations object {} openid-configuration ingress resource additional annotations.
nginx-ingress.ingress.openidConfigEnabled bool true Enable endpoint /.well-known/openid-configuration
nginx-ingress.ingress.openidConfigLabels object {} openid-configuration ingress resource labels. key app is taken
nginx-ingress.ingress.passportAdditionalAnnotations object {} passport ingress resource additional annotations.
nginx-ingress.ingress.passportEnabled bool false Enable passport endpoints /idp
nginx-ingress.ingress.passportLabels object {} passport ingress resource labels. key app is taken.
nginx-ingress.ingress.scimAdditionalAnnotations object {} SCIM ingress resource additional annotations.
nginx-ingress.ingress.scimConfigAdditionalAnnotations object {} SCIM config ingress resource additional annotations.
nginx-ingress.ingress.scimConfigEnabled bool false Enable endpoint /.well-known/scim-configuration
nginx-ingress.ingress.scimConfigLabels object {} SCIM config ingress resource labels. key app is taken
nginx-ingress.ingress.scimEnabled bool false Enable SCIM endpoints /scim
nginx-ingress.ingress.scimLabels object {} SCIM ingress resource labels. key app is taken
nginx-ingress.ingress.shibAdditionalAnnotations object {} shibboleth ingress resource additional annotations.
nginx-ingress.ingress.shibEnabled bool false Enable shibboleth endpoints /idp
nginx-ingress.ingress.shibLabels object {} shibboleth ingress resource labels. key app is taken.
nginx-ingress.ingress.u2fAdditionalAnnotations object {} u2f config ingress resource additional annotations.
nginx-ingress.ingress.u2fConfigEnabled bool true Enable endpoint /.well-known/fido-configuration
nginx-ingress.ingress.u2fConfigLabels object {} u2f config ingress resource labels. key app is taken
nginx-ingress.ingress.uma2AdditionalAnnotations object {} uma2 config ingress resource additional annotations.
nginx-ingress.ingress.uma2ConfigEnabled bool true Enable endpoint /.well-known/uma2-configuration
nginx-ingress.ingress.uma2ConfigLabels object {} uma 2 config ingress resource labels. key app is taken
nginx-ingress.ingress.webdiscoveryAdditionalAnnotations object {} webdiscovery ingress resource additional annotations.
nginx-ingress.ingress.webdiscoveryEnabled bool true Enable endpoint /.well-known/simple-web-discovery
nginx-ingress.ingress.webdiscoveryLabels object {} webdiscovery ingress resource labels. key app is taken
nginx-ingress.ingress.webfingerAdditionalAnnotations object {} webfinger ingress resource additional annotations.
nginx-ingress.ingress.webfingerEnabled bool true Enable endpoint /.well-known/webfinger
nginx-ingress.ingress.webfingerLabels object {} webfinger ingress resource labels. key app is taken
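
For example, the client-certificate annotations described above can be combined in values.yaml as follows (a sketch; the issuer name and secret location are placeholders):

    nginx-ingress:
      ingress:
        additionalAnnotations:
          # Issue certificates with cert-manager (placeholder issuer name)
          cert-manager.io/issuer: "letsencrypt-prod"
          # Enable client certificate authentication
          nginx.ingress.kubernetes.io/auth-tls-verify-client: "optional"
          # Secret containing the trusted CA certificates
          nginx.ingress.kubernetes.io/auth-tls-secret: "gluu/tls-certificate"
          # Verification depth in the client certificate chain
          nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
          # Pass the certificate to the upstream server
          nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
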
Key Type Default Description
jackrabbit object {"additionalAnnotations":{},"additionalLabels":{},"clusterId":"","dnsConfig":{},"dnsPolicy":"","hpa":{"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50},"image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/jackrabbit","tag":"4.4.0-1"},"livenessProbe":{"initialDelaySeconds":25,"periodSeconds":25,"tcpSocket":{"port":"http-jackrabbit"},"timeoutSeconds":5},"readinessProbe":{"initialDelaySeconds":30,"periodSeconds":30,"tcpSocket":{"port":"http-jackrabbit"},"timeoutSeconds":5},"replicas":1,"resources":{"limits":{"cpu":"1500m","memory":"1000Mi"},"requests":{"cpu":"1500m","memory":"1000Mi"}},"secrets":{"gluuJackrabbitAdminPass":"Test1234#","gluuJackrabbitPostgresPass":"P@ssw0rd"},"service":{"jackRabbitServiceName":"jackrabbit","name":"http-jackrabbit","port":8080},"storage":{"size":"5Gi"},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} Jackrabbit Oak is a complementary implementation of the JCR specification. It is an effort to implement a scalable and performant hierarchical content repository for use as the foundation of modern world-class web sites and other demanding content applications https://jackrabbit.apache.org/jcr/index.html
jackrabbit.additionalAnnotations object {} Additional annotations that will be added across the gateway.
jackrabbit.additionalLabels object {} Additional labels that will be added across the gateway.
jackrabbit.clusterId string "" This ID must be unique to each Kubernetes cluster in a multi-cluster setup (e.g. west, east, south, north, region). If left empty it will be randomly generated.
jackrabbit.dnsConfig object {} Add custom dns config
jackrabbit.dnsPolicy string "" Add custom dns policy
jackrabbit.hpa object {"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50} Configure the HorizontalPodAutoscaler
jackrabbit.hpa.behavior object {} Scaling Policies
jackrabbit.hpa.metrics list [] metrics if targetCPUUtilizationPercentage is not set
jackrabbit.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
jackrabbit.image.pullSecrets list [] Image Pull Secrets
jackrabbit.image.repository string "gluufederation/jackrabbit" Image to use for deploying.
jackrabbit.image.tag string "4.4.0-1" Image tag to use for deploying.
jackrabbit.livenessProbe object {"initialDelaySeconds":25,"periodSeconds":25,"tcpSocket":{"port":"http-jackrabbit"},"timeoutSeconds":5} Configure the liveness healthcheck for the Jackrabbit if needed.
jackrabbit.livenessProbe.tcpSocket object {"port":"http-jackrabbit"} Executes tcp healthcheck.
jackrabbit.readinessProbe object {"initialDelaySeconds":30,"periodSeconds":30,"tcpSocket":{"port":"http-jackrabbit"},"timeoutSeconds":5} Configure the readiness healthcheck for the Jackrabbit if needed.
jackrabbit.readinessProbe.tcpSocket object {"port":"http-jackrabbit"} Executes tcp healthcheck.
jackrabbit.replicas int 1 Service replica number.
jackrabbit.resources object {"limits":{"cpu":"1500m","memory":"1000Mi"},"requests":{"cpu":"1500m","memory":"1000Mi"}} Resource specs.
jackrabbit.resources.limits.cpu string "1500m" CPU limit.
jackrabbit.resources.limits.memory string "1000Mi" Memory limit.
jackrabbit.resources.requests.cpu string "1500m" CPU request.
jackrabbit.resources.requests.memory string "1000Mi" Memory request.
jackrabbit.secrets.gluuJackrabbitAdminPass string "Test1234#" Jackrabbit admin uid password (see the override example after this table)
jackrabbit.secrets.gluuJackrabbitPostgresPass string "P@ssw0rd" Jackrabbit Postgres uid password
jackrabbit.service.jackRabbitServiceName string "jackrabbit" Name of the Jackrabbit service. Please keep it as default.
jackrabbit.service.name string "http-jackrabbit" The name of the jackrabbit port within the jackrabbit service. Please keep it as default.
jackrabbit.service.port int 8080 Port of the jackrabbit service. Please keep it as default.
jackrabbit.storage.size string "5Gi" Jackrabbit volume size
jackrabbit.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
jackrabbit.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
jackrabbit.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
jackrabbit.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
jackrabbit.volumes list [] Configure any additional volumes that need to be attached to the pod
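
For example, the default Jackrabbit passwords and volume size above can be overridden in values.yaml (a sketch; the replacement values are placeholders):

    jackrabbit:
      secrets:
        gluuJackrabbitAdminPass: <strong-admin-password>
        gluuJackrabbitPostgresPass: <strong-postgres-password>
      storage:
        size: 10Gi
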
Key Type Default Description
opendj object {"additionalAnnotations":{},"additionalLabels":{},"backup":{"cronJobSchedule":"*/59 * * * *","enabled":true},"dnsConfig":{},"dnsPolicy":"","hpa":{"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50},"image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/opendj","tag":"4.4.0-1"},"livenessProbe":{"exec":{"command":["python3","/app/scripts/healthcheck.py"]},"failureThreshold":20,"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5},"multiCluster":{"clusterId":"","enabled":false,"namespaceIntId":0,"replicaCount":1,"serfAdvertiseAddrSuffix":"regional.gluu.org","serfKey":"Z51b6PgKU1MZ75NCZOTGGoc0LP2OF3qvF6sjxHyQCYk=","serfPeers":["gluu-opendj-regional-0-regional.gluu.org:30946","gluu-opendj-regional-0-regional.gluu.org:31946"]},"persistence":{"size":"5Gi"},"ports":{"tcp-admin":{"nodePort":"","port":4444,"protocol":"TCP","targetPort":4444},"tcp-ldap":{"nodePort":"","port":1389,"protocol":"TCP","targetPort":1389},"tcp-ldaps":{"nodePort":"","port":1636,"protocol":"TCP","targetPort":1636},"tcp-repl":{"nodePort":"","port":8989,"protocol":"TCP","targetPort":8989},"tcp-serf":{"nodePort":"","port":7946,"protocol":"TCP","targetPort":7946},"udp-serf":{"nodePort":"","port":7946,"protocol":"UDP","targetPort":7946}},"readinessProbe":{"failureThreshold":20,"initialDelaySeconds":60,"periodSeconds":25,"tcpSocket":{"port":1636},"timeoutSeconds":5},"replicas":1,"resources":{"limits":{"cpu":"1500m","memory":"2000Mi"},"requests":{"cpu":"1500m","memory":"2000Mi"}},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} OpenDJ is a directory server which implements a wide range of Lightweight Directory Access Protocol and related standards, including full compliance with LDAPv3 but also support for Directory Service Markup Language (DSMLv2).Written in Java, OpenDJ offers multi-master replication, access control, and many extensions.
opendj.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
opendj.additionalLabels object {} Additional labels that will be added across all resource definitions.
opendj.backup object {"cronJobSchedule":"*/59 * * * *","enabled":true} Configure ldap backup cronjob
opendj.dnsConfig object {} Add custom dns config
opendj.dnsPolicy string "" Add custom dns policy
opendj.hpa object {"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50} Configure the HorizontalPodAutoscaler
opendj.hpa.behavior object {} Scaling Policies
opendj.hpa.metrics list [] metrics if targetCPUUtilizationPercentage is not set
opendj.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
opendj.image.pullSecrets list [] Image Pull Secrets
opendj.image.repository string "gluufederation/opendj" Image to use for deploying.
opendj.image.tag string "4.4.0-1" Image tag to use for deploying.
opendj.livenessProbe object {"exec":{"command":["python3","/app/scripts/healthcheck.py"]},"failureThreshold":20,"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5} Configure the liveness healthcheck for OpenDJ if needed. https://github.com/GluuFederation/docker-opendj/blob/4.4/scripts/healthcheck.py
opendj.livenessProbe.exec object {"command":["python3","/app/scripts/healthcheck.py"]} Executes the python3 healthcheck.
opendj.multiCluster.clusterId string "" This ID must be unique to each Kubernetes cluster in a multi-cluster setup (e.g. west, east, south, north, region). If left empty it will be randomly generated.
opendj.multiCluster.enabled bool false Enable OpenDJ multiCluster mode. This flag enables loading keys under opendj.multiCluster
opendj.multiCluster.namespaceIntId int 0 Namespace int id. This id needs to be a unique number 0-9 per gluu installation per namespace. Used when gluu is installed in the same kubernetes cluster more than once.
opendj.multiCluster.replicaCount int 1 The number of OpenDJ non-scalable statefulsets to create. Each pod created must be resolvable, as it follows the pattern RELEASE-NAME-opendj-regional-{{statefulset pod number}}-{{ $.Values.multiCluster.serfAdvertiseAddrSuffix }}. If set to 1 with a release name of gluu, the address of the pod would be gluu-opendj-regional-0-regional.gluu.org. See the example after this table.
opendj.multiCluster.serfAdvertiseAddrSuffix string "regional.gluu.org" OpenDJ Serf advertise address for the cluster
opendj.multiCluster.serfKey string "Z51b6PgKU1MZ75NCZOTGGoc0LP2OF3qvF6sjxHyQCYk=" Serf key. This key will automatically sync across clusters.
opendj.multiCluster.serfPeers list ["gluu-opendj-regional-0-regional.gluu.org:30946","gluu-opendj-regional-0-regional.gluu.org:31946"] Serf peer addresses. One per cluster.
opendj.persistence.size string "5Gi" OpenDJ volume size
opendj.ports object {"tcp-admin":{"nodePort":"","port":4444,"protocol":"TCP","targetPort":4444},"tcp-ldap":{"nodePort":"","port":1389,"protocol":"TCP","targetPort":1389},"tcp-ldaps":{"nodePort":"","port":1636,"protocol":"TCP","targetPort":1636},"tcp-repl":{"nodePort":"","port":8989,"protocol":"TCP","targetPort":8989},"tcp-serf":{"nodePort":"","port":7946,"protocol":"TCP","targetPort":7946},"udp-serf":{"nodePort":"","port":7946,"protocol":"UDP","targetPort":7946}} servicePorts values used in StatefulSet container
opendj.readinessProbe object {"failureThreshold":20,"initialDelaySeconds":60,"periodSeconds":25,"tcpSocket":{"port":1636},"timeoutSeconds":5} Configure the readiness healthcheck for OpenDJ if needed. https://github.com/GluuFederation/docker-opendj/blob/4.4/scripts/healthcheck.py
opendj.replicas int 1 Service replica number.
opendj.resources object {"limits":{"cpu":"1500m","memory":"2000Mi"},"requests":{"cpu":"1500m","memory":"2000Mi"}} Resource specs.
opendj.resources.limits.cpu string "1500m" CPU limit.
opendj.resources.limits.memory string "2000Mi" Memory limit.
opendj.resources.requests.cpu string "1500m" CPU request.
opendj.resources.requests.memory string "2000Mi" Memory request.
opendj.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
opendj.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
opendj.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
opendj.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
opendj.volumes list [] Configure any additional volumes that need to be attached to the pod
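
To illustrate the multiCluster keys above, each cluster in a two-cluster replication setup might carry values like the following (a sketch; the suffix, key, and peer addresses are the documented defaults, and the clusterId is an example):

    opendj:
      multiCluster:
        enabled: true
        clusterId: "west"          # must be unique per cluster, e.g. "east" on the peer
        namespaceIntId: 0
        replicaCount: 1
        serfAdvertiseAddrSuffix: "regional.gluu.org"
        serfKey: "Z51b6PgKU1MZ75NCZOTGGoc0LP2OF3qvF6sjxHyQCYk="
        serfPeers:
          - "gluu-opendj-regional-0-regional.gluu.org:30946"
          - "gluu-opendj-regional-0-regional.gluu.org:31946"
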
Key Type Default Description
persistence object {"additionalAnnotations":{},"additionalLabels":{},"dnsConfig":{},"dnsPolicy":"","image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/persistence","tag":"4.4.0-1"},"resources":{"limits":{"cpu":"300m","memory":"300Mi"},"requests":{"cpu":"300m","memory":"300Mi"}},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} Job to generate data and initial config for Gluu Server persistence layer.
persistence.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
persistence.additionalLabels object {} Additional labels that will be added across all resource definitions.
persistence.dnsConfig object {} Add custom dns config
persistence.dnsPolicy string "" Add custom dns policy
persistence.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
persistence.image.pullSecrets list [] Image Pull Secrets
persistence.image.repository string "gluufederation/persistence" Image to use for deploying.
persistence.image.tag string "4.4.0-1" Image tag to use for deploying.
persistence.resources object {"limits":{"cpu":"300m","memory":"300Mi"},"requests":{"cpu":"300m","memory":"300Mi"}} Resource specs.
persistence.resources.limits.cpu string "300m" CPU limit
persistence.resources.limits.memory string "300Mi" Memory limit.
persistence.resources.requests.cpu string "300m" CPU request.
persistence.resources.requests.memory string "300Mi" Memory request.
persistence.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
persistence.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
persistence.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
persistence.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
persistence.volumes list [] Configure any additional volumes that need to be attached to the pod
Key Type Default Description
oxauth object {"additionalAnnotations":{},"additionalLabels":{},"dnsConfig":{},"dnsPolicy":"","hpa":{"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50},"image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/oxauth","tag":"4.4.0-1"},"livenessProbe":{"exec":{"command":["python3","/app/scripts/healthcheck.py"]},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5},"readinessProbe":{"exec":{"command":["python3","/app/scripts/healthcheck.py"]},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5},"replicas":1,"resources":{"limits":{"cpu":"2500m","memory":"2500Mi"},"requests":{"cpu":"2500m","memory":"2500Mi"}},"service":{"name":"http-oxauth","oxAuthServiceName":"oxauth","port":8080},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} OAuth Authorization Server, the OpenID Connect Provider, the UMA Authorization Server--this is the main Internet facing component of Gluu. It's the service that returns tokens, JWT's and identity assertions. This service must be Internet facing.
oxauth.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
oxauth.additionalLabels object {} Additional labels that will be added across all resource definitions.
oxauth.dnsConfig object {} Add custom dns config
oxauth.dnsPolicy string "" Add custom dns policy
oxauth.hpa object {"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50} Configure the HorizontalPodAutoscaler (see the example after this table)
oxauth.hpa.behavior object {} Scaling Policies
oxauth.hpa.metrics list [] metrics if targetCPUUtilizationPercentage is not set
oxauth.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
oxauth.image.pullSecrets list [] Image Pull Secrets
oxauth.image.repository string "gluufederation/oxauth" Image to use for deploying.
oxauth.image.tag string "4.4.0-1" Image tag to use for deploying.
oxauth.livenessProbe object {"exec":{"command":["python3","/app/scripts/healthcheck.py"]},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5} Configure the liveness healthcheck for the auth server if needed.
oxauth.livenessProbe.exec object {"command":["python3","/app/scripts/healthcheck.py"]} Executes the python3 healthcheck. https://github.com/GluuFederation/docker-oxauth/blob/4.4/scripts/healthcheck.py
oxauth.readinessProbe object {"exec":{"command":["python3","/app/scripts/healthcheck.py"]},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5} Configure the readiness healthcheck for the auth server if needed. https://github.com/GluuFederation/docker-oxauth/blob/4.4/scripts/healthcheck.py
oxauth.replicas int 1 Service replica number.
oxauth.resources object {"limits":{"cpu":"2500m","memory":"2500Mi"},"requests":{"cpu":"2500m","memory":"2500Mi"}} Resource specs.
oxauth.resources.limits.cpu string "2500m" CPU limit.
oxauth.resources.limits.memory string "2500Mi" Memory limit.
oxauth.resources.requests.cpu string "2500m" CPU request.
oxauth.resources.requests.memory string "2500Mi" Memory request.
oxauth.service.name string "http-oxauth" The name of the oxauth port within the oxauth service. Please keep it as default.
oxauth.service.oxAuthServiceName string "oxauth" Name of the oxauth service. Please keep it as default.
oxauth.service.port int 8080 Port of the oxauth service. Please keep it as default.
oxauth.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
oxauth.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
oxauth.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
oxauth.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
oxauth.volumes list [] Configure any additional volumes that need to be attached to the pod
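
The hpa block above can also scale on metrics other than the default CPU target. A sketch, assuming the chart passes the metrics list through to a standard autoscaling/v2 HorizontalPodAutoscaler (the memory target shown is an example, not a chart default):

    oxauth:
      hpa:
        enabled: true
        minReplicas: 1
        maxReplicas: 10
        # When metrics is non-empty it is used instead of targetCPUUtilizationPercentage
        metrics:
          - type: Resource
            resource:
              name: memory
              target:
                type: Utilization
                averageUtilization: 70
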
Key Type Default Description
oxtrust object {"additionalAnnotations":{},"additionalLabels":{},"dnsConfig":{},"dnsPolicy":"","hpa":{"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50},"image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/oxtrust","tag":"4.4.0-1"},"livenessProbe":{"exec":{"command":["python3","/app/scripts/healthcheck.py"]},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5},"readinessProbe":{"exec":{"command":["python3","/app/scripts/healthcheck.py"]},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5},"replicas":1,"resources":{"limits":{"cpu":"2500m","memory":"2500Mi"},"requests":{"cpu":"2500m","memory":"2500Mi"}},"service":{"clusterIp":"None","name":"http-oxtrust","oxTrustServiceName":"oxtrust","port":8080},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} Gluu Admin UI. This shouldn't be internet facing.
oxtrust.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
oxtrust.additionalLabels object {} Additional labels that will be added across all resource definitions.
oxtrust.dnsConfig object {} Add custom dns config
oxtrust.dnsPolicy string "" Add custom dns policy
oxtrust.hpa object {"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50} Configure the HorizontalPodAutoscaler
oxtrust.hpa.behavior object {} Scaling Policies
oxtrust.hpa.metrics list [] metrics if targetCPUUtilizationPercentage is not set
oxtrust.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
oxtrust.image.pullSecrets list [] Image Pull Secrets
oxtrust.image.repository string "gluufederation/oxtrust" Image to use for deploying.
oxtrust.image.tag string "4.4.0-1" Image tag to use for deploying.
oxtrust.livenessProbe object {"exec":{"command":["python3","/app/scripts/healthcheck.py"]},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5} Configure the liveness healthcheck for oxTrust if needed.
oxtrust.livenessProbe.exec object {"command":["python3","/app/scripts/healthcheck.py"]} Executes the python3 healthcheck. https://github.com/GluuFederation/docker-oxauth/blob/4.4/scripts/healthcheck.py
oxtrust.readinessProbe object {"exec":{"command":["python3","/app/scripts/healthcheck.py"]},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5} Configure the readiness healthcheck for oxTrust if needed. https://github.com/GluuFederation/docker-oxauth/blob/4.4/scripts/healthcheck.py
oxtrust.replicas int 1 Service replica number.
oxtrust.resources object {"limits":{"cpu":"2500m","memory":"2500Mi"},"requests":{"cpu":"2500m","memory":"2500Mi"}} Resource specs.
oxtrust.resources.limits.cpu string "2500m" CPU limit.
oxtrust.resources.limits.memory string "2500Mi" Memory limit.
oxtrust.resources.requests.cpu string "2500m" CPU request.
oxtrust.resources.requests.memory string "2500Mi" Memory request.
oxtrust.service.name string "http-oxtrust" The name of the oxtrust port within the oxtrust service. Please keep it as default.
oxtrust.service.oxTrustServiceName string "oxtrust" Name of the oxtrust service. Please keep it as default.
oxtrust.service.port int 8080 Port of the oxtrust service. Please keep it as default.
oxtrust.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
oxtrust.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
oxtrust.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
oxtrust.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
oxtrust.volumes list [] Configure any additional volumes that need to be attached to the pod
Key Type Default Description
fido2 object {"additionalAnnotations":{},"additionalLabels":{},"dnsConfig":{},"dnsPolicy":"","hpa":{"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50},"image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/fido2","tag":"4.4.0-1"},"livenessProbe":{"httpGet":{"path":"/fido2/restv1/fido2/configuration","port":"http-fido2"},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5},"readinessProbe":{"httpGet":{"path":"/fido2/restv1/fido2/configuration","port":"http-fido2"},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5},"replicas":1,"resources":{"limits":{"cpu":"500m","memory":"500Mi"},"requests":{"cpu":"500m","memory":"500Mi"}},"service":{"fido2ServiceName":"fido2","name":"http-fido2","port":8080},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} FIDO 2.0 (FIDO2) is an open authentication standard that enables leveraging common devices to authenticate to online services in both mobile and desktop environments.
fido2.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
fido2.additionalLabels object {} Additional labels that will be added across all resource definitions.
fido2.dnsConfig object {} Add custom dns config
fido2.dnsPolicy string "" Add custom dns policy
fido2.hpa object {"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50} Configure the HorizontalPodAutoscaler
fido2.hpa.behavior object {} Scaling Policies
fido2.hpa.metrics list [] metrics if targetCPUUtilizationPercentage is not set
fido2.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
fido2.image.pullSecrets list [] Image Pull Secrets
fido2.image.repository string "gluufederation/fido2" Image to use for deploying.
fido2.image.tag string "4.4.0-1" Image tag to use for deploying.
fido2.livenessProbe object {"httpGet":{"path":"/fido2/restv1/fido2/configuration","port":"http-fido2"},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5} Configure the liveness healthcheck for the fido2 if needed.
fido2.livenessProbe.httpGet object {"path":"/fido2/restv1/fido2/configuration","port":"http-fido2"} http liveness probe endpoint
fido2.readinessProbe object {"httpGet":{"path":"/fido2/restv1/fido2/configuration","port":"http-fido2"},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5} Configure the readiness healthcheck for the fido2 if needed.
fido2.replicas int 1 Service replica number.
fido2.resources object {"limits":{"cpu":"500m","memory":"500Mi"},"requests":{"cpu":"500m","memory":"500Mi"}} Resource specs.
fido2.resources.limits.cpu string "500m" CPU limit.
fido2.resources.limits.memory string "500Mi" Memory limit.
fido2.resources.requests.cpu string "500m" CPU request.
fido2.resources.requests.memory string "500Mi" Memory request.
fido2.service.fido2ServiceName string "fido2" Name of the fido2 service. Please keep it as default.
fido2.service.name string "http-fido2" The name of the fido2 port within the fido2 service. Please keep it as default.
fido2.service.port int 8080 Port of the fido2 service. Please keep it as default.
fido2.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
fido2.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
fido2.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
fido2.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
fido2.volumes list [] Configure any additional volumes that need to be attached to the pod
Key Type Default Description
scim object {"additionalAnnotations":{},"additionalLabels":{},"dnsConfig":{},"dnsPolicy":"","hpa":{"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50},"image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/scim","tag":"4.4.0-1"},"livenessProbe":{"httpGet":{"path":"/scim/restv1/scim/v2/ServiceProviderConfig","port":8080},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5},"readinessProbe":{"httpGet":{"path":"/scim/restv1/scim/v2/ServiceProviderConfig","port":8080},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5},"replicas":1,"resources":{"limits":{"cpu":"1000m","memory":"1000Mi"},"requests":{"cpu":"1000m","memory":"1000Mi"}},"service":{"name":"http-scim","port":8080,"scimServiceName":"scim"},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} System for Cross-domain Identity Management (SCIM) version 2.0
scim.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
scim.additionalLabels object {} Additional labels that will be added across all resource definitions.
scim.dnsConfig object {} Add custom dns config
scim.dnsPolicy string "" Add custom dns policy
scim.hpa object {"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50} Configure the HorizontalPodAutoscaler
scim.hpa.behavior object {} Scaling Policies
scim.hpa.metrics list [] metrics if targetCPUUtilizationPercentage is not set
scim.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
scim.image.pullSecrets list [] Image Pull Secrets
scim.image.repository string "gluufederation/scim" Image to use for deploying.
scim.image.tag string "4.4.0-1" Image tag to use for deploying.
scim.livenessProbe object {"httpGet":{"path":"/scim/restv1/scim/v2/ServiceProviderConfig","port":8080},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5} Configure the liveness healthcheck for SCIM if needed.
scim.livenessProbe.httpGet.path string "/scim/restv1/scim/v2/ServiceProviderConfig" http liveness probe endpoint
scim.readinessProbe object {"httpGet":{"path":"/scim/restv1/scim/v2/ServiceProviderConfig","port":8080},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5} Configure the readiness healthcheck for the SCIM if needed.
scim.readinessProbe.httpGet.path string "/scim/restv1/scim/v2/ServiceProviderConfig" http readiness probe endpoint
scim.replicas int 1 Service replica number.
scim.resources.limits.cpu string "1000m" CPU limit.
scim.resources.limits.memory string "1000Mi" Memory limit.
scim.resources.requests.cpu string "1000m" CPU request.
scim.resources.requests.memory string "1000Mi" Memory request.
scim.service.name string "http-scim" The name of the scim port within the scim service. Please keep it as default.
scim.service.port int 8080 Port of the scim service. Please keep it as default.
scim.service.scimServiceName string "scim" Name of the scim service. Please keep it as default.
scim.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
scim.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
scim.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
scim.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
scim.volumes list [] Configure any additional volumes that need to be attached to the pod
Key Type Default Description
oxd-server object {"additionalAnnotations":{},"additionalLabels":{},"dnsConfig":{},"dnsPolicy":"","hpa":{"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50},"image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/oxd-server","tag":"4.4.0-1"},"livenessProbe":{"exec":{"command":["curl","-k","https://localhost:8443/health-check"]},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5},"readinessProbe":{"exec":{"command":["curl","-k","https://localhost:8443/health-check"]},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5},"replicas":1,"resources":{"limits":{"cpu":"1000m","memory":"400Mi"},"requests":{"cpu":"1000m","memory":"400Mi"}},"service":{"oxdServerServiceName":"oxd-server"},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} Middleware API to help application developers call an OAuth, OpenID or UMA server. You may wonder why this is necessary. It makes it easier for client developers to use OpenID signing and encryption features, without becoming crypto experts. This API provides some high level endpoints to do some of the heavy lifting.
oxd-server.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
oxd-server.additionalLabels object {} Additional labels that will be added across all resource definitions.
oxd-server.dnsConfig object {} Add custom dns config
oxd-server.dnsPolicy string "" Add custom dns policy
oxd-server.hpa object {"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50} Configure the HorizontalPodAutoscaler
oxd-server.hpa.behavior object {} Scaling Policies
oxd-server.hpa.metrics list [] metrics if targetCPUUtilizationPercentage is not set
oxd-server.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
oxd-server.image.pullSecrets list [] Image Pull Secrets
oxd-server.image.repository string "gluufederation/oxd-server" Image to use for deploying.
oxd-server.image.tag string "4.4.0-1" Image tag to use for deploying.
oxd-server.livenessProbe object {"exec":{"command":["curl","-k","https://localhost:8443/health-check"]},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5} Configure the liveness healthcheck for the oxd server if needed.
oxd-server.livenessProbe.exec object {"command":["curl","-k","https://localhost:8443/health-check"]} Executes a curl request against the oxd health-check endpoint.
oxd-server.readinessProbe object {"exec":{"command":["curl","-k","https://localhost:8443/health-check"]},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5} Configure the readiness healthcheck for the oxd server if needed.
oxd-server.replicas int 1 Service replica number.
oxd-server.resources object {"limits":{"cpu":"1000m","memory":"400Mi"},"requests":{"cpu":"1000m","memory":"400Mi"}} Resource specs.
oxd-server.resources.limits.cpu string "1000m" CPU limit.
oxd-server.resources.limits.memory string "400Mi" Memory limit.
oxd-server.resources.requests.cpu string "1000m" CPU request.
oxd-server.resources.requests.memory string "400Mi" Memory request.
oxd-server.service.oxdServerServiceName string "oxd-server" Name of the OXD server service. This must match config.configMap.gluuOxdApplicationCertCn (see the example after this table). Please keep it as default.
oxd-server.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
oxd-server.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
oxd-server.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
oxd-server.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
oxd-server.volumes list [] Configure any additional volumes that need to be attached to the pod
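
Because oxdServerServiceName must match the certificate CN set in the config chart (see above), the two values should always be changed together. A sketch keeping the recommended default (the config.configmap path follows the convention used elsewhere in this chart):

    oxd-server:
      service:
        oxdServerServiceName: oxd-server
    config:
      configmap:
        # Must match oxd-server.service.oxdServerServiceName
        gluuOxdApplicationCertCn: oxd-server
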
Key Type Default Description
casa object {"additionalAnnotations":{},"additionalLabels":{},"dnsConfig":{},"dnsPolicy":"","hpa":{"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50},"image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/casa","tag":"4.4.0-1"},"livenessProbe":{"httpGet":{"path":"/casa/health-check","port":"http-casa"},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5},"readinessProbe":{"httpGet":{"path":"/casa/health-check","port":"http-casa"},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5},"replicas":1,"resources":{"limits":{"cpu":"500m","memory":"500Mi"},"requests":{"cpu":"500m","memory":"500Mi"}},"service":{"casaServiceName":"casa","name":"http-casa","port":8080},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} Gluu Casa ("Casa") is a self-service web portal for end-users to manage authentication and authorization preferences for their account in a Gluu Server.
casa.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
casa.additionalLabels object {} Additional labels that will be added across all resource definitions.
casa.dnsConfig object {} Add custom dns config
casa.dnsPolicy string "" Add custom dns policy
casa.hpa object {"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50} Configure the HorizontalPodAutoscaler
casa.hpa.behavior object {} Scaling Policies
casa.hpa.metrics list [] metrics if targetCPUUtilizationPercentage is not set
casa.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
casa.image.pullSecrets list [] Image Pull Secrets
casa.image.repository string "gluufederation/casa" Image to use for deploying.
casa.image.tag string "4.4.0-1" Image tag to use for deploying.
casa.livenessProbe object {"httpGet":{"path":"/casa/health-check","port":"http-casa"},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5} Configure the liveness healthcheck for casa if needed.
casa.livenessProbe.httpGet.path string "/casa/health-check" http liveness probe endpoint
casa.readinessProbe object {"httpGet":{"path":"/casa/health-check","port":"http-casa"},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5} Configure the readiness healthcheck for the casa if needed.
casa.readinessProbe.httpGet.path string "/casa/health-check" http readiness probe endpoint
casa.replicas int 1 Service replica number.
casa.resources object {"limits":{"cpu":"500m","memory":"500Mi"},"requests":{"cpu":"500m","memory":"500Mi"}} Resource specs.
casa.resources.limits.cpu string "500m" CPU limit.
casa.resources.limits.memory string "500Mi" Memory limit.
casa.resources.requests.cpu string "500m" CPU request.
casa.resources.requests.memory string "500Mi" Memory request.
casa.service.casaServiceName string "casa" Name of the casa service. Please keep it as default.
casa.service.name string "http-casa" The name of the casa port within the casa service. Please keep it as default.
casa.service.port int 8080 Port of the casa service. Please keep it as default.
casa.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
casa.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
casa.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
casa.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
casa.volumes list [] Configure any additional volumes that need to be attached to the pod
Key Type Default Description
oxpassport object {"additionalAnnotations":{},"additionalLabels":{},"dnsConfig":{},"dnsPolicy":"","hpa":{"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50},"image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/oxpassport","tag":"4.4.0-1"},"livenessProbe":{"failureThreshold":20,"httpGet":{"path":"/passport/health-check","port":"http-passport"},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5},"readinessProbe":{"failureThreshold":20,"httpGet":{"path":"/passport/health-check","port":"http-passport"},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5},"replicas":1,"resources":{"limits":{"cpu":"700m","memory":"900Mi"},"requests":{"cpu":"700m","memory":"900Mi"}},"service":{"name":"http-passport","oxPassportServiceName":"oxpassport","port":8090},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} Gluu interface to Passport.js to support social login and inbound identity.
oxpassport.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
oxpassport.additionalLabels object {} Additional labels that will be added across all resource definitions.
oxpassport.dnsConfig object {} Add custom dns config
oxpassport.dnsPolicy string "" Add custom dns policy
oxpassport.hpa object {"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50} Configure the HorizontalPodAutoscaler
oxpassport.hpa.behavior object {} Scaling Policies
oxpassport.hpa.metrics list [] metrics if targetCPUUtilizationPercentage is not set
oxpassport.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
oxpassport.image.pullSecrets list [] Image Pull Secrets
oxpassport.image.repository string "gluufederation/oxpassport" Image to use for deploying.
oxpassport.image.tag string "4.4.0-1" Image tag to use for deploying.
oxpassport.livenessProbe object {"failureThreshold":20,"httpGet":{"path":"/passport/health-check","port":"http-passport"},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5} Configure the liveness healthcheck for oxPassport if needed.
oxpassport.livenessProbe.httpGet.path string "/passport/health-check" http liveness probe endpoint
oxpassport.readinessProbe object {"failureThreshold":20,"httpGet":{"path":"/passport/health-check","port":"http-passport"},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5} Configure the readiness healthcheck for the oxPassport if needed.
oxpassport.readinessProbe.httpGet.path string "/passport/health-check" http readiness probe endpoint
oxpassport.replicas int 1 Service replica number
oxpassport.resources object {"limits":{"cpu":"700m","memory":"900Mi"},"requests":{"cpu":"700m","memory":"900Mi"}} Resource specs.
oxpassport.resources.limits.cpu string "700m" CPU limit.
oxpassport.resources.limits.memory string "900Mi" Memory limit.
oxpassport.resources.requests.cpu string "700m" CPU request.
oxpassport.resources.requests.memory string "900Mi" Memory request.
oxpassport.service.name string "http-passport" The name of the oxPassport port within the oxPassport service. Please keep it as default.
oxpassport.service.oxPassportServiceName string "oxpassport" Name of the oxPassport service. Please keep it as default.
oxpassport.service.port int 8090 Port of the oxPassport service. Please keep it as default.
oxpassport.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
oxpassport.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
oxpassport.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
oxpassport.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
oxpassport.volumes list [] Configure any additional volumes that need to be attached to the pod
Key Type Default Description
oxshibboleth object {"additionalAnnotations":{},"additionalLabels":{},"dnsConfig":{},"dnsPolicy":"","hpa":{"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50},"image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/oxshibboleth","tag":"4.4.0-1"},"livenessProbe":{"httpGet":{"path":"/idp","port":"http-oxshib"},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5},"readinessProbe":{"httpGet":{"path":"/idp","port":"http-oxshib"},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5},"replicas":1,"resources":{"limits":{"cpu":"1000m","memory":"1000Mi"},"requests":{"cpu":"1000m","memory":"1000Mi"}},"service":{"name":"http-oxshib","oxShibbolethServiceName":"oxshibboleth","port":8080},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} Shibboleth project for the Gluu Server's SAML IDP functionality.
oxshibboleth.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
oxshibboleth.additionalLabels object {} Additional labels that will be added across all resource definitions.
oxshibboleth.dnsConfig object {} Add custom dns config
oxshibboleth.dnsPolicy string "" Add custom dns policy
oxshibboleth.hpa object {"behavior":{},"enabled":true,"maxReplicas":10,"metrics":[],"minReplicas":1,"targetCPUUtilizationPercentage":50} Configure the HorizontalPodAutoscaler
oxshibboleth.hpa.behavior object {} Scaling Policies
oxshibboleth.hpa.metrics list [] metrics if targetCPUUtilizationPercentage is not set
oxshibboleth.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
oxshibboleth.image.pullSecrets list [] Image Pull Secrets
oxshibboleth.image.repository string "gluufederation/oxshibboleth" Image to use for deploying.
oxshibboleth.image.tag string "4.4.0-1" Image tag to use for deploying.
oxshibboleth.livenessProbe object {"httpGet":{"path":"/idp","port":"http-oxshib"},"initialDelaySeconds":30,"periodSeconds":30,"timeoutSeconds":5} Configure the liveness healthcheck for the oxShibboleth if needed.
oxshibboleth.livenessProbe.httpGet.path string "/idp" http liveness probe endpoint
oxshibboleth.readinessProbe object {"httpGet":{"path":"/idp","port":"http-oxshib"},"initialDelaySeconds":25,"periodSeconds":25,"timeoutSeconds":5} Configure the readiness healthcheck for the oxShibboleth if needed.
oxshibboleth.readinessProbe.httpGet.path string "/idp" http readiness probe endpoint
oxshibboleth.replicas int 1 Service replica number.
oxshibboleth.resources object {"limits":{"cpu":"1000m","memory":"1000Mi"},"requests":{"cpu":"1000m","memory":"1000Mi"}} Resource specs.
oxshibboleth.resources.limits.cpu string "1000m" CPU limit.
oxshibboleth.resources.limits.memory string "1000Mi" Memory limit.
oxshibboleth.resources.requests.cpu string "1000m" CPU request.
oxshibboleth.resources.requests.memory string "1000Mi" Memory request.
oxshibboleth.service.name string "http-oxshib" The name of the oxShibboleth port within the oxShibboleth service. Please keep it as default.
oxshibboleth.service.oxShibbolethServiceName string "oxshibboleth" Name of the oxShibboleth service. Please keep it as default.
oxshibboleth.service.port int 8080 Port of the oxShibboleth service. Please keep it as default.
oxshibboleth.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
oxshibboleth.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
oxshibboleth.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
oxshibboleth.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
oxshibboleth.volumes list [] Configure any additional volumes that need to be attached to the pod
Key Type Default Description
cr-rotate object {"additionalAnnotations":{},"additionalLabels":{},"dnsConfig":{},"dnsPolicy":"","image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/cr-rotate","tag":"4.4.0-1"},"resources":{"limits":{"cpu":"200m","memory":"200Mi"},"requests":{"cpu":"200m","memory":"200Mi"}},"service":{"crRotateServiceName":"cr-rotate","name":"http-cr-rotate","port":8084},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} CacheRefreshRotation is a special container to monitor cache refresh on oxTrust containers. This may be deprecated.
cr-rotate.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
cr-rotate.additionalLabels object {} Additional labels that will be added across all resource definitions.
cr-rotate.dnsConfig object {} Add custom dns config
cr-rotate.dnsPolicy string "" Add custom dns policy
cr-rotate.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
cr-rotate.image.pullSecrets list [] Image Pull Secrets
cr-rotate.image.repository string "gluufederation/cr-rotate" Image to use for deploying.
cr-rotate.image.tag string "4.4.0-1" Image tag to use for deploying.
cr-rotate.resources object {"limits":{"cpu":"200m","memory":"200Mi"},"requests":{"cpu":"200m","memory":"200Mi"}} Resource specs.
cr-rotate.resources.limits.cpu string "200m" CPU limit.
cr-rotate.resources.limits.memory string "200Mi" Memory limit.
cr-rotate.resources.requests.cpu string "200m" CPU request.
cr-rotate.resources.requests.memory string "200Mi" Memory request.
cr-rotate.service.crRotateServiceName string "cr-rotate" Name of the cr-rotate service. Please keep it as default.
cr-rotate.service.name string "http-cr-rotate" The name of the cr-rotate port within the cr-rotate service. Please keep it as default.
cr-rotate.service.port int 8084 Port of the cr-rotate service. Please keep it as default.
cr-rotate.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
cr-rotate.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
cr-rotate.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
cr-rotate.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
cr-rotate.volumes list [] Configure any additional volumes that need to be attached to the pod
Key Type Default Description
oxauth-key-rotation object {"additionalAnnotations":{},"additionalLabels":{},"dnsConfig":{},"dnsPolicy":"","image":{"pullPolicy":"IfNotPresent","pullSecrets":[],"repository":"gluufederation/certmanager","tag":"4.4.0-1"},"keysLife":48,"resources":{"limits":{"cpu":"300m","memory":"300Mi"},"requests":{"cpu":"300m","memory":"300Mi"}},"usrEnvs":{"normal":{},"secret":{}},"volumeMounts":[],"volumes":[]} Responsible for regenerating auth keys every x hours (see keysLife)
oxauth-key-rotation.additionalAnnotations object {} Additional annotations that will be added across all resources in the format of {cert-manager.io/issuer: "letsencrypt-prod"}. key app is taken
oxauth-key-rotation.additionalLabels object {} Additional labels that will be added across all resource definitions.
oxauth-key-rotation.dnsConfig object {} Add custom dns config
oxauth-key-rotation.dnsPolicy string "" Add custom dns policy
oxauth-key-rotation.image.pullPolicy string "IfNotPresent" Image pullPolicy to use for deploying.
oxauth-key-rotation.image.pullSecrets list [] Image Pull Secrets
oxauth-key-rotation.image.repository string "gluufederation/certmanager" Image to use for deploying.
oxauth-key-rotation.image.tag string "4.4.0-1" Image tag to use for deploying.
oxauth-key-rotation.keysLife int 48 Auth server key rotation keys life in hours
oxauth-key-rotation.resources object {"limits":{"cpu":"300m","memory":"300Mi"},"requests":{"cpu":"300m","memory":"300Mi"}} Resource specs.
oxauth-key-rotation.resources.limits.cpu string "300m" CPU limit.
oxauth-key-rotation.resources.limits.memory string "300Mi" Memory limit.
oxauth-key-rotation.resources.requests.cpu string "300m" CPU request.
oxauth-key-rotation.resources.requests.memory string "300Mi" Memory request.
oxauth-key-rotation.usrEnvs object {"normal":{},"secret":{}} Add custom normal and secret envs to the service
oxauth-key-rotation.usrEnvs.normal object {} Add custom normal envs to the service variable1: value1
oxauth-key-rotation.usrEnvs.secret object {} Add custom secret envs to the service variable1: value1
oxauth-key-rotation.volumeMounts list [] Configure any additional volumeMounts that need to be attached to the containers
oxauth-key-rotation.volumes list [] Configure any additional volumes that need to be attached to the pod
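
For example, a values.yaml fragment that rotates the auth keys every 24 hours instead of the default 48 might look like this sketch, built from the keys documented above:

oxauth-key-rotation:
  # rotate auth server keys every 24 hours
  keysLife: 24
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/certmanager
    tag: 4.4.0-1
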

Instructions on how to install different services#

Enabling the following services automatically installs the corresponding chart. To enable or disable them, set true or false in the persistence configs as shown below.

config:
  configmap:
    # Auto install other services. If enabled the respective service chart will be installed
    gluuPassportEnabled: false
    gluuCasaEnabled: false
    gluuRadiusEnabled: false
    gluuSamlEnabled: false
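
Afterwards, apply the change by upgrading the Helm release, for example (release name and chart version are placeholders, following the same pattern used elsewhere in this document):

helm -n <namespace> upgrade <release-name> gluu/gluu -f values.yaml --version <version>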

CASA#

  • Casa depends on oxd-server. To install Casa, oxd-server must be enabled.

Other optional services#

Other optional services, like key-rotation and cr-rotate, are enabled by setting their corresponding values to true under the global block.

For example, to enable cr-rotate, set:

global:
  cr-rotate:
    enabled: true

Install Gluu using the GUI installer#

Warning

The GUI installer is currently alpha. Please report any bugs found by opening an issue.

  1. Create the GUI installer job

    cat <<EOF | kubectl apply -f -
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: cloud-native-installer
      labels:
        APP_NAME: cloud-native-installer
    spec:
      template:
        metadata:
          labels:
            APP_NAME: cloud-native-installer
        spec:
          restartPolicy: Never
          containers:
            - name: cloud-native-installer
              image: gluufederation/cloud-native:4.4.0_dev
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: cloud-native-installer
    spec:
      type: LoadBalancer
      selector:
        APP_NAME: cloud-native-installer # must match the Job's pod template labels
      ports:
        - name: http
          port: 80
          targetPort: 5000           
    EOF
    
  2. Grab the LoadBalancer address, IP, or NodePort, then follow the installation setup.

    If the cluster's load balancer exposes a hostname (e.g. AWS):

    kubectl -n default get svc cloud-native-installer --output jsonpath='{.status.loadBalancer.ingress[0].hostname}'
    
    If it exposes an IP (e.g. GKE, AKS, or Digital Ocean):

    kubectl -n default get svc cloud-native-installer --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
    
    On Microk8s:

    1. Get the IP of the Microk8s VM.

    2. Get the NodePort of the GUI installer service:

    kubectl -n default get svc cloud-native-installer
    
    On Minikube:

    1. Get the IP of the Minikube VM:

    minikube ip
    
    2. Get the NodePort of the GUI installer service:

    kubectl -n default get svc cloud-native-installer
    
  3. Head to the address from the previous step to start the installation.

settings.json parameters file contents#

This is the main parameter file used with the pygluu-kubernetes.pyz cloud native edition installer.

Note

Please generate this file using pygluu-kubernetes.pyz generate-settings.
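
For example, from the directory containing the installer:

./pygluu-kubernetes.pyz generate-settings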

Parameter Description Options
ACCEPT_GLUU_LICENSE Accept the License "Y" or "N"
TEST_ENVIRONMENT Allows installation with no resources limits and requests defined. "Y" or "N"
ADMIN_PW oxTrust admin password. Minimum 6 characters: 1 uppercase, 1 lowercase, 1 digit, and 1 special character "P@ssw0rd"
GLUU_VERSION Gluu version to be installed "4.2"
GLUU_UPGRADE_TARGET_VERSION Gluu upgrade version "4.2"
GLUU_HELM_RELEASE_NAME Gluu Helm release name "<name>"
NGINX_INGRESS_NAMESPACE Nginx namespace "<name>"
NGINX_INGRESS_RELEASE_NAME Nginx Helm release name "<name>"
USE_ISTIO Enable use of Istio. This will inject sidecars in Gluu pods. "Y" or "N"
USE_ISTIO_INGRESS Enable Istio ingress. "Y" or "N"
ISTIO_SYSTEM_NAMESPACE Istio system namespace "<name>"
POSTGRES_NAMESPACE Postgres namespace - Gluu Gateway "<name>"
POSTGRES_URL Postgres URL (can be local or remote) - Gluu Gateway i.e "<servicename>.<namespace>.svc.cluster.local"
NODES_IPS List of kubernetes cluster node ips ["<ip>", "<ip2>", "<ip3>"]
NODES_ZONES List of kubernetes cluster node zones ["<node1_zone>", "<node2_zone>", "<node3_zone>"]
NODES_NAMES List of kubernetes cluster node names ["<node1_name>", "<node2_name>", "<node3_name>"]
NODE_SSH_KEY nodes ssh key path location "<pathtosshkey>"
HOST_EXT_IP Minikube or Microk8s vm ip "<ip>"
VERIFY_EXT_IP Verify the Minikube or Microk8s vm ip placed "Y" or "N"
AWS_LB_TYPE AWS loadbalancer type "" , "clb" or "nlb"
USE_ARN Use ssl provided from ACM AWS "", "Y" or "N"
VPC_CIDR VPC CIDR in use for the Kubernetes cluster "", i.e 192.168.1.116
ARN_AWS_IAM The arn string "" or "<arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX>"
LB_ADD AWS loadbalancer address "<loadbalancer_address>"
DEPLOYMENT_ARCH Deployment architecture "microk8s", "minikube", "eks", "gke", "aks", "do" or "local"
PERSISTENCE_BACKEND Backend persistence type "ldap", "couchbase" or "hybrid"
REDIS_URL Redis url with port. Used when Redis is deployed for Cache. i.e "redis:6379", "clustercfg.testing-redis.icrbdv.euc1.cache.amazonaws.com:6379"
REDIS_TYPE Type of Redis deployed "SHARDED", "STANDALONE", "CLUSTER", or "SENTINEL"
REDIS_PW Redis password, if used. This may be empty; if not, choose a long password. i.e "", "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURUakNDQWphZ0F3SUJBZ0lVV2Y0TExEb"
REDIS_USE_SSL Redis SSL use "false" or "true"
REDIS_SSL_TRUSTSTORE Redis SSL truststore. If using cloud provider services this is left empty. i.e "", "/etc/myredis.pem"
REDIS_SENTINEL_GROUP Redis Sentinel group i.e ""
REDIS_NAMESPACE Redis Namespace if Redis is to be installed i.e "gluu-redis-cluster"
INSTALL_REDIS Install Redis "Y" or "N"
INSTALL_POSTGRES Install postgres used by Jackrabbit. This option is used in test mode. "Y" or "N"
INSTALL_JACKRABBIT Install Jackrabbit "Y" or "N"
JACKRABBIT_STORAGE_SIZE Jackrabbit volume storage size "" i.e "4Gi"
JACKRABBIT_URL http:// url for Jackrabbit i.e "http://jackrabbit:8080"
JACKRABBIT_ADMIN_ID Jackrabbit admin ID i.e "admin"
JACKRABBIT_ADMIN_PASSWORD Jackrabbit admin password i.e "admin"
JACKRABBIT_CLUSTER Jackrabbit Cluster mode "N" or "Y"
JACKRABBIT_PG_USER Jackrabbit postgres username i.e "jackrabbit"
JACKRABBIT_PG_PASSWORD Jackrabbit postgres password i.e "jackrabbit"
JACKRABBIT_DATABASE Jackrabbit postgres database name i.e "jackrabbit"
INSTALL_COUCHBASE Install couchbase "Y" or "N"
COUCHBASE_NAMESPACE Couchbase namespace "<name>"
COUCHBASE_VOLUME_TYPE Persistence Volume type "io1","ps-ssd", "Premium_LRS"
COUCHBASE_CLUSTER_NAME Couchbase cluster name "<name>"
COUCHBASE_URL Couchbase internal address to the cluster "" or i.e "<clustername>.<namespace>.svc.cluster.local"
COUCHBASE_USER Couchbase username "" or i.e "gluu"
COUCHBASE_BUCKET_PREFIX Prefix for Couchbase buckets gluu
COUCHBASE_PASSWORD Couchbase password. Minimum 6 characters: 1 uppercase, 1 lowercase, 1 digit, and 1 special character "P@ssw0rd"
COUCHBASE_SUPERUSER Couchbase superuser username "" or i.e "admin"
COUCHBASE_SUPERUSER_PASSWORD Couchbase superuser password. Minimum 6 characters: 1 uppercase, 1 lowercase, 1 digit, and 1 special character "P@ssw0rd"
COUCHBASE_CRT Couchbase CA certification "" or i.e <crt content not encoded>
COUCHBASE_CN Couchbase certificate common name ""
COUCHBASE_INDEX_NUM_REPLICA Couchbase number of replicas per index 0
COUCHBASE_SUBJECT_ALT_NAME Couchbase SAN "" or i.e "cb.gluu.org"
COUCHBASE_CLUSTER_FILE_OVERRIDE Override couchbase-cluster.yaml with a custom couchbase-cluster.yaml "Y" or "N"
COUCHBASE_USE_LOW_RESOURCES Use very low resources for Couchbase deployment. For demo purposes "Y" or "N"
COUCHBASE_DATA_NODES Number of Couchbase data nodes "" or i.e "4"
COUCHBASE_QUERY_NODES Number of Couchbase query nodes "" or i.e "3"
COUCHBASE_INDEX_NODES Number of Couchbase index nodes "" or i.e "3"
COUCHBASE_SEARCH_EVENTING_ANALYTICS_NODES Number of Couchbase search, eventing and analytics nodes "" or i.e "2"
COUCHBASE_GENERAL_STORAGE Couchbase general storage size "" or i.e "2"
COUCHBASE_DATA_STORAGE Couchbase data storage size "" or i.e "5Gi"
COUCHBASE_INDEX_STORAGE Couchbase index storage size "" or i.e "5Gi"
COUCHBASE_QUERY_STORAGE Couchbase query storage size "" or i.e "5Gi"
COUCHBASE_ANALYTICS_STORAGE Couchbase search, eventing and analytics storage size "" or i.e "5Gi"
COUCHBASE_INCR_BACKUP_SCHEDULE Couchbase incremental backup schedule i.e "*/30 * * * *"
COUCHBASE_FULL_BACKUP_SCHEDULE Couchbase full backup schedule i.e "0 2 * * 6"
COUCHBASE_BACKUP_RETENTION_TIME Couchbase time to retain backups in s, m or h i.e "168h"
COUCHBASE_BACKUP_STORAGE_SIZE Couchbase backup storage size i.e "20Gi"
NUMBER_OF_EXPECTED_USERS Number of expected users [couchbase-resource-calc-alpha] "" or i.e "1000000"
EXPECTED_TRANSACTIONS_PER_SEC Expected transactions per second [couchbase-resource-calc-alpha] "" or i.e "2000"
USING_CODE_FLOW If using code flow [couchbase-resource-calc-alpha] "", "Y" or "N"
USING_SCIM_FLOW If using SCIM flow [couchbase-resource-calc-alpha] "", "Y" or "N"
USING_RESOURCE_OWNER_PASSWORD_CRED_GRANT_FLOW If using password flow [couchbase-resource-calc-alpha] "", "Y" or "N"
DEPLOY_MULTI_CLUSTER Deploying a Multi-cluster [alpha] "Y" or "N"
HYBRID_LDAP_HELD_DATA Type of data to be held in LDAP with a hybrid installation of couchbase and LDAP "", "default", "user", "site", "cache" or "token"
LDAP_JACKRABBIT_VOLUME LDAP/Jackrabbit Volume type "", "io1","ps-ssd", "Premium_LRS"
APP_VOLUME_TYPE Volume type for LDAP persistence. See the APP_VOLUME_TYPE-options section below.
LDAP_STATIC_VOLUME_ID LDAP static volume id (AWS EKS) "" or "<static-volume-id>"
LDAP_STATIC_DISK_URI LDAP static disk uri (GCE GKE or Azure) "" or "<disk-uri>"
LDAP_BACKUP_SCHEDULE LDAP back up cron job frequency i.e "*/30 * * * *"
GLUU_CACHE_TYPE Cache type to be used "IN_MEMORY", "REDIS" or "NATIVE_PERSISTENCE"
GLUU_NAMESPACE Namespace to deploy Gluu in "<name>"
GLUU_FQDN Gluu FQDN "<FQDN>" i.e "demoexample.gluu.org"
COUNTRY_CODE Gluu country code "<country code>" i.e "US"
STATE Gluu state "<state>" i.e "TX"
EMAIL Gluu email "<email>" i.e "support@gluu.org"
CITY Gluu city "<city>" i.e "Austin"
ORG_NAME Gluu organization name "<org-name>" i.e "Gluu"
LDAP_PW LDAP admin password. Minimum 6 characters: 1 uppercase, 1 lowercase, 1 digit, and 1 special character "P@ssw0rd"
GMAIL_ACCOUNT Gmail account for GKE installation "" or "<gmail>"
GOOGLE_NODE_HOME_DIR User node home directory, used if the hosts volume is used "Y" or "N"
IS_GLUU_FQDN_REGISTERED Is Gluu FQDN globally resolvable "Y" or "N"
OXD_APPLICATION_KEYSTORE_CN OXD application keystore common name "<name>" i.e "oxd_server"
OXD_ADMIN_KEYSTORE_CN OXD admin keystore common name "<name>" i.e "oxd_server"
LDAP_STORAGE_SIZE LDAP volume storage size "" i.e "4Gi"
OXAUTH_KEYS_LIFE oxAuth Key life span in hours 48
FIDO2_REPLICAS Number of FIDO2 replicas min "1"
SCIM_REPLICAS Number of SCIM replicas min "1"
OXAUTH_REPLICAS Number of oxAuth replicas min "1"
OXTRUST_REPLICAS Number of oxTrust replicas min "1"
LDAP_REPLICAS Number of LDAP replicas min "1"
OXSHIBBOLETH_REPLICAS Number of oxShibboleth replicas min "1"
OXPASSPORT_REPLICAS Number of oxPassport replicas min "1"
OXD_SERVER_REPLICAS Number of oxdServer replicas min "1"
CASA_REPLICAS Number of Casa replicas min "1"
ENABLE_OXTRUST_API Enable oxTrust-api "Y" or "N"
ENABLE_OXTRUST_TEST_MODE Enable oxTrust Test Mode "Y" or "N"
ENABLE_CACHE_REFRESH Enable cache refresh rotate installation "Y" or "N"
ENABLE_OXD Enable oxd server installation "Y" or "N"
ENABLE_OXPASSPORT Enable oxPassport installation "Y" or "N"
ENABLE_OXSHIBBOLETH Enable oxShibboleth installation "Y" or "N"
ENABLE_CASA Enable Casa installation "Y" or "N"
ENABLE_FIDO2 Enable Fido2 installation "Y" or "N"
ENABLE_SCIM Enable SCIM installation "Y" or "N"
ENABLE_OXAUTH_KEY_ROTATE Enable key rotate installation "Y" or "N"
ENABLE_OXTRUST_API_BOOLEAN Used by pygluu-kubernetes "false"
ENABLE_OXTRUST_TEST_MODE_BOOLEAN Used by pygluu-kubernetes "false"
ENABLE_RADIUS_BOOLEAN Used by pygluu-kubernetes "false"
ENABLE_OXPASSPORT_BOOLEAN Used by pygluu-kubernetes "false"
ENABLE_CASA_BOOLEAN Used by pygluu-kubernetes "false"
ENABLE_SAML_BOOLEAN Used by pygluu-kubernetes "false"
ENABLED_SERVICES_LIST Used by pygluu-kubernetes. List of all enabled services "[]"
EDIT_IMAGE_NAMES_TAGS Manually place the image source and tag "Y" or "N"
JACKRABBIT_IMAGE_NAME Jackrabbit image repository name i.e "gluufederation/jackrabbit"
JACKRABBIT_IMAGE_TAG Jackrabbit image tag i.e "4.4.0-1"
CASA_IMAGE_NAME Casa image repository name i.e "gluufederation/casa"
CASA_IMAGE_TAG Casa image tag i.e "4.4.0-1"
CONFIG_IMAGE_NAME Config image repository name i.e "gluufederation/config-init"
CONFIG_IMAGE_TAG Config image tag i.e "4.4.0-1"
CACHE_REFRESH_ROTATE_IMAGE_NAME Cache refresh image repository name i.e "gluufederation/cr-rotate"
CACHE_REFRESH_ROTATE_IMAGE_TAG Cache refresh image tag i.e "4.4.0-1"
CERT_MANAGER_IMAGE_NAME Gluu's Certificate management image repository name i.e "gluufederation/certmanager"
CERT_MANAGER_IMAGE_TAG Gluu's Certificate management image tag i.e "4.4.0-1"
LDAP_IMAGE_NAME LDAP image repository name i.e "gluufederation/opendj"
LDAP_IMAGE_TAG LDAP image tag i.e "4.4.0-1"
OXAUTH_IMAGE_NAME oxAuth image repository name i.e "gluufederation/oxauth"
OXAUTH_IMAGE_TAG oxAuth image tag i.e "4.4.0-1"
OXD_IMAGE_NAME oxd image repository name i.e "gluufederation/oxd-server"
OXD_IMAGE_TAG oxd image tag i.e "4.4.0-1"
OXPASSPORT_IMAGE_NAME oxPassport image repository name i.e "gluufederation/oxpassport"
OXPASSPORT_IMAGE_TAG oxPassport image tag i.e "4.4.0-1"
FIDO2_IMAGE_NAME FIDO2 image repository name i.e "gluufederation/fido2"
FIDO2_IMAGE_TAG FIDO2 image tag i.e "4.4.0-1"
SCIM_IMAGE_NAME SCIM image repository name i.e "gluufederation/scim"
SCIM_IMAGE_TAG SCIM image tag i.e "4.4.0-1"
OXSHIBBOLETH_IMAGE_NAME oxShibboleth image repository name i.e "gluufederation/oxshibboleth"
OXSHIBBOLETH_IMAGE_TAG oxShibboleth image tag i.e "4.4.0-1"
OXTRUST_IMAGE_NAME oxTrust image repository name i.e "gluufederation/oxtrust"
OXTRUST_IMAGE_TAG oxTrust image tag i.e "4.4.0-1"
PERSISTENCE_IMAGE_NAME Persistence image repository name i.e "gluufederation/persistence"
PERSISTENCE_IMAGE_TAG Persistence image tag i.e "4.4.0-1"
UPGRADE_IMAGE_NAME Gluu upgrade image repository name i.e "gluufederation/upgrade"
UPGRADE_IMAGE_TAG Gluu upgrade image tag i.e "4.4.0-1"
CONFIRM_PARAMS Confirm using above options "Y" or "N"
GLUU_LDAP_MULTI_CLUSTER HELM-ALPHA-FEATURE-DEPRECATED: Enable LDAP multi-cluster environment "Y" or "N"
GLUU_LDAP_SERF_PORT HELM-ALPHA-FEATURE-DEPRECATED: Serf UDP and TCP port i.e 30946
GLUU_LDAP_ADVERTISE_ADDRESS HELM-ALPHA-FEATURE-DEPRECATED: LDAP pod advertise address i.e "demoexample.gluu.org:30946"
GLUU_LDAP_ADVERTISE_ADMIN_PORT HELM-ALPHA-FEATURE-DEPRECATED: LDAP serf advertise admin port i.e 30444
GLUU_LDAP_ADVERTISE_LDAPS_PORT HELM-ALPHA-FEATURE-DEPRECATED: LDAP serf advertise LDAPS port i.e 30636
GLUU_LDAP_ADVERTISE_REPLICATION_PORT HELM-ALPHA-FEATURE-DEPRECATED: LDAP serf advertise replication port i.e 30989
GLUU_LDAP_SECONDARY_CLUSTER HELM-ALPHA-FEATURE-DEPRECATED: Whether this is a secondary cluster, i.e. not the first Kubernetes cluster "Y" or "N"
GLUU_LDAP_SERF_PEERS HELM-ALPHA-FEATURE-DEPRECATED: All OpenDJ serf advertised addresses. These must be resolvable ["firstldap.gluu.org:30946", "secondldap.gluu.org:31946"]
GLUU_INSTALL_SQL Install the SQL server locally. Used in test mode; in production, connect to a production SQL server. "Y" or "N"
GLUU_SQL_DB_DIALECT SQL database dialect: MySQL or PostgreSQL "mysql" or "pgsql"
GLUU_SQL_DB_NAMESPACE The namespace the sql server was installed into "<name>"
GLUU_SQL_DB_HOST SQL database host uri. "" or i.e "<service>.<namespace>.svc.cluster.local" or cloud url
GLUU_SQL_DB_PORT SQL database port. "" i.e 3306
GLUU_SQL_DB_NAME SQL database name. i.e "gluu"
GLUU_SQL_DB_USER SQL database username. i.e "gluu"
GLUU_SQL_DB_PASSWORD SQL password i.e "P@ssw0rd"
GOOGLE_SERVICE_ACCOUNT_BASE64 Base64 encoded service account. The sa must have roles/secretmanager.admin to use Google secrets and roles/spanner.databaseUser to use Spanner. i.e "SWFtTm90YVNlcnZpY2VBY2NvdW50Q2hhbmdlTWV0b09uZQo="
USE_GOOGLE_SECRET_MANAGER Use Google Secret Manager as the secret and config layer instead of kubernetes Secrets and ConfigMap "Y" or "N"
GOOGLE_SPANNER_INSTANCE_ID Google Spanner Instance ID i.e ""
GOOGLE_SPANNER_DATABASE_ID Google Spanner Database ID i.e ""
GOOGLE_PROJECT_ID Project id of the google project the secret manager and/or spanner instance belongs to i.e "google-project-to-save-config-and-secrets-to"
MIGRATION_ENABLED Boolean flag to enable migration from CE "Y" or "N"
MIGRATION_DIR Directory holding all migration files "/ce-migration"
MIGRATION_DATA_FORMAT migration data-format depending on persistence backend. "ldif", "couchbase+json", "spanner+avro", "postgresql+json", "mysql+json"
GLUU_SCIM_PROTECTION_MODE SCIM protection mode OAUTH,TEST,UMA "OAUTH", "TEST", "UMA"
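
For illustration, a minimal settings.json fragment assembled from the options above might look like the following sketch. The generated file contains many more keys; all values here are example placeholders:

{
  "ACCEPT_GLUU_LICENSE": "Y",
  "GLUU_VERSION": "4.2",
  "GLUU_NAMESPACE": "gluu",
  "GLUU_FQDN": "demoexample.gluu.org",
  "COUNTRY_CODE": "US",
  "STATE": "TX",
  "CITY": "Austin",
  "EMAIL": "support@gluu.org",
  "ORG_NAME": "Gluu",
  "DEPLOYMENT_ARCH": "eks",
  "PERSISTENCE_BACKEND": "ldap",
  "GLUU_CACHE_TYPE": "NATIVE_PERSISTENCE",
  "CONFIRM_PARAMS": "Y"
}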

APP_VOLUME_TYPE-options#

APP_VOLUME_TYPE="" but if PERSISTENCE_BACKEND is OpenDJ options are :

Options Deployment Architecture Volume Type
1 Microk8s volumes on host
2 Minikube volumes on host
6 EKS volumes on host
7 EKS EBS volumes dynamically provisioned
8 EKS EBS volumes statically provisioned
11 GKE volumes on host
12 GKE Persistent Disk dynamically provisioned
13 GKE Persistent Disk statically provisioned
16 Azure volumes on host
17 Azure Persistent Disk dynamically provisioned
18 Azure Persistent Disk statically provisioned
21 Digital Ocean volumes on host
22 Digital Ocean Persistent Disk dynamically provisioned
23 Digital Ocean Persistent Disk statically provisioned
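
For example, an EKS deployment using dynamically provisioned EBS volumes for OpenDJ corresponds to option 7. Assuming the installer stores the choice as its numeric value (an assumption), the relevant settings.json entries would be:

{
  "DEPLOYMENT_ARCH": "eks",
  "PERSISTENCE_BACKEND": "ldap",
  "APP_VOLUME_TYPE": 7
}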

Use Couchbase solely as the persistence layer#

Requirements#

  • If you are installing on microk8s or minikube, please ignore the notes below, as a low-resource couchbase-cluster.yaml will be applied automatically. However, the VM being used must have at least 8GB RAM and 2 CPUs available.

  • At minimum, an m5.xlarge EKS cluster with 3 nodes, or an n2-standard-4 GKE cluster with 3 nodes. We advise contacting Gluu regarding production setups.

  • Install the Couchbase Operator for Linux. Version 2.1.0 is recommended, but version 2.0.3 is also supported. Place the tar.gz file inside the same directory as pygluu-kubernetes.pyz.

  • A modified couchbase/couchbase-cluster.yaml will be generated; in production, this file will likely need further modification.

  • To override the generated couchbase-cluster.yaml, place your file inside the /couchbase folder after running ./pygluu-kubernetes.pyz. More information on the properties of couchbase-cluster.yaml is available in the Couchbase Operator documentation.

Note

Please note that the couchbase/couchbase-cluster.yaml file must include at least three defined spec.servers with the labels couchbase_services: index, couchbase_services: data, and couchbase_services: analytics.

If you wish to get started quickly, just change the values of spec.servers.name and spec.servers.serverGroups inside couchbase/couchbase-cluster.yaml to the zones of your EKS nodes and continue.

  • Run ./pygluu-kubernetes.pyz install-couchbase and follow the prompts to install Couchbase for use with Gluu.

Use remote Couchbase as the persistence layer#

  • Install Couchbase version 6.x.

  • Obtain the public DNS or FQDN of the Couchbase node.

  • Head to the FQDN of the Couchbase node to set up your Couchbase cluster. When setting up, please use the FQDN as the hostname of the new cluster.

  • The Couchbase base URL, user, and password will be needed for installation when running pygluu-kubernetes.pyz.

How to expand EBS volumes#

  1. Make sure the StorageClass used in your deployment has allowVolumeExpansion set to true. If you used our EBS volume deployment strategy, this property has already been set for you.

  2. Edit your persistent volume claim using kubectl edit pvc <claim-name> -n <namespace> and increase the value of storage: as needed. Verify the volume expanded by checking kubectl get pvc <claim-name> -n <namespace> (see the sketch after this list).

  3. Restart the associated services.
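
A minimal sketch of steps 1-2 using kubectl directly; the StorageClass and claim names are hypothetical:

kubectl get sc <storageclass-name> --output jsonpath='{.allowVolumeExpansion}'

kubectl -n <namespace> patch pvc <claim-name> --type merge --patch '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'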

Scaling pods#

Note

When using Microk8s, substitute kubectl with microk8s.kubectl in the below commands.

To scale pods, run the following command:

kubectl scale --replicas=<number> <resource> <name>

Here, <resource> can be deployment or statefulset, and <name> is the resource name.

Examples:

  • Scaling oxAuth:

    kubectl scale --replicas=2 deployment oxauth
    
  • Scaling oxTrust:

    kubectl scale --replicas=2 statefulset oxtrust
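
  • Scaling oxAuth on Microk8s, substituting microk8s.kubectl as described in the note above:

    microk8s.kubectl scale --replicas=2 deployment oxauth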
    

Working with Jackrabbit#

Warning

Jackrabbit has been deprecated since Gluu v4.5.2. To customize public pages, it is recommended to use ConfigMaps directly.

Services Folder / File Jackrabbit Repository Method
oxAuth /opt/gluu/jetty/oxauth/custom /repository/default/opt/gluu/jetty/oxauth/custom PULL from Jackrabbit
oxTrust /opt/gluu/jetty/identity/custom /repository/default/opt/gluu/jetty/identity/custom PULL from Jackrabbit
Casa /opt/gluu/jetty/casa /repository/default/opt/gluu/jetty/casa PULL from Jackrabbit

The above means that Jackrabbit will maintain the source folder on all replicas of a service. If a custom file is pushed to /opt/gluu/jetty/oxauth/custom on one replica, all other replicas will receive it.

oxTrust --> Jackrabbit --> oxShibboleth#

Info

Gluu v4.5.2 introduces a persistence-based document store to distribute the Shibboleth config files generated by oxTrust to oxShibboleth.

Services Folder / File Jackrabbit Repository Method
oxTrust /opt/shibboleth-idp /repository/default/opt/shibboleth-idp PUSH to Jackrabbit
oxShibboleth /opt/shibboleth-idp /repository/default/opt/shibboleth-idp PULL from Jackrabbit

oxAuth --> Jackrabbit --> Casa#

Info

Since Gluu v4.5.2, the /etc/certs/otp_configuration.json and /etc/certs/super_gluu_creds.json files shared by oxAuth and Casa are synchronized via secrets instead of Jackrabbit.

Services Folder / File Jackrabbit Repository Method
oxAuth /etc/certs/otp_configuration.json N/A PUSH to secrets
oxAuth /etc/certs/super_gluu_creds.json N/A PUSH to secrets
Casa /etc/certs/otp_configuration.json N/A PULL from secrets
Casa /etc/certs/super_gluu_creds.json N/A PULL from secrets

[Diagram: secret-based synchronization between oxAuth and Casa]

Note

You can use any client to connect to Jackrabbit. We assume Gluu is installed in the gluu namespace.

  1. Port-forward Jackrabbit to localhost on port 8080:

        kubectl port-forward jackrabbit-0 --namespace gluu 8080:8080
    
  2. Optional: if your managing VM is in the cloud, you must forward the connection to the Mac, Linux, or Windows computer you are working from:

        ssh -i <key.pem> -L 8080:localhost:8080 user-of-managing-vm@ip-of-managing-vm
    
  3. Use any file manager to connect to Jackrabbit. Here are some examples:

    Linux: open a file manager, such as Nautilus, and find Connect to Server. Enter the address dav://localhost:8080/repository/default. By default, the username and password are admin, unless changed in /etc/gluu/conf/jackrabbit_admin_password inside the pod.

    Windows: install a WebDAV client such as WinSCP. Connect using the Jackrabbit address, which should be http://localhost:8080/repository/default. By default, the username and password are admin, unless changed in /etc/gluu/conf/jackrabbit_admin_password inside the pod.

    macOS: open Finder, choose Go, then Connect to Server, and enter the address http://localhost:8080/repository/default. By default, the username and password are admin, unless changed in /etc/gluu/conf/jackrabbit_admin_password inside the pod.

Warning

The following steps are meant for quick testing with Jackrabbit and should be avoided in production.

  1. Log in to the Jackrabbit container, for example: kubectl -n gluu exec -ti jackrabbit-0 -- sh.

  2. Go to the /opt/webdav directory and create any files or directories there.

  3. Run python3 /app/scripts/jca_sync.py.

Working with Persistence Document Store#

Info

The persistence-based document store (called DB) was introduced in Gluu v4.5.2.

One of the main purposes of the DB document store is to replace Jackrabbit (JCA) for distributing files across the pods, i.e. copying Shibboleth files generated by oxTrust to oxShibboleth (see the table below):

oxTrust --> persistence --> oxShibboleth#

Services Folder / File Method
oxTrust /opt/shibboleth-idp PUSH to persistence
oxShibboleth /opt/shibboleth-idp PULL from persistence

Migrating from Jackrabbit#

Steps to migrate from the Jackrabbit (JCA) to the Persistence (DB) document store in an existing installation:

  1. Change the value of gluuDocumentStoreType in values.yaml, for example:

    config:
      configmap:
        # previously set to JCA
        gluuDocumentStoreType: DB
    

    Afterwards, upgrade the Helm chart to the newest version, for example: helm -n <namespace> upgrade <release-name> gluu/gluu -f values.yaml --version <version>.

  2. Check GLUU_DOCUMENT_STORE_TYPE env var in configmaps:

    kubectl -n <namespace> get cm <release-name>-config-cm --template={{.data.GLUU_DOCUMENT_STORE_TYPE}}
    

    If the value is set to JCA, change it to DB by running the following command:

    kubectl -n <namespace> patch cm <release-name>-config-cm --type json --patch '[{"op": "replace", "path": "/data/GLUU_DOCUMENT_STORE_TYPE", "value": "DB"}]'
    
  3. Check the selected document store in the oxTrust UI by navigating to the Configuration > JSON Configuration > Store Provider Configuration page. Change the value of the Document Store Type form field from JCA to DB if needed and save the configuration.

  4. Rollout-restart all deployments and statefulsets to force updates (see the example below).
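
    For example, assuming all Gluu workloads run in a single namespace, the following restarts every deployment and statefulset in it:

    kubectl -n <namespace> rollout restart deployment
    
    kubectl -n <namespace> rollout restart statefulset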

Info

If Jackrabbit was previously used for distributing custom pages, switch to the ConfigMaps approach instead.

Build pygluu-kubernetes installer#

Overview#

pygluu-kubernetes.pyz is periodically released and does not need to be built manually. However, the process of building the installer package is listed below.

Build pygluu-kubernetes.pyz manually#

Prerequisites#

  1. Python 3.6+.
  2. Python pip3 package.

Installation#

Standard Python package#

  1. Create a virtual environment and activate it:

    python3 -m venv .venv
    source .venv/bin/activate
    
  2. Install the package:

    make install
    

    This command will install an executable called pygluu-kubernetes available in the virtual environment PATH.

Python zipapp#

  1. Install shiv using pip3:

    pip3 install shiv
    
  2. Install the package:

    make zipapp
    

    This command will generate an executable called pygluu-kubernetes.pyz under the same directory.

Architectural diagram of all Gluu services#

[Diagram: architecture of all Gluu services]

Network traffic between Gluu services#

  1. Database Access: all Gluu services require access to the database.

  2. Pod-2-Pod Communication: Gluu services communicate with each other as depicted.

  3. External/Internet Communication:

    • oxAuth: should be publicly accessible.

    • Rest of the pods: we recommend keeping only the .well-known endpoints public and protecting the rest (see the sketch below).
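
As an illustration only (not the chart's own ingress templates), an Ingress that exposes just the OpenID discovery endpoint could look like the following sketch; the host, service name, port, and ingress class are assumptions to adapt to your deployment:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oxauth-well-known
  namespace: gluu
spec:
  ingressClassName: nginx          # assumed ingress class
  rules:
    - host: demoexample.gluu.org   # assumed FQDN
      http:
        paths:
          - path: /.well-known/openid-configuration
            pathType: Exact
            backend:
              service:
                name: oxauth       # assumed oxAuth service name
                port:
                  number: 8080     # assumed oxAuth service port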

Architectural diagram of oxPassport#

[Diagram: oxPassport architecture]

Architectural diagram of Casa#

[Diagram: Casa architecture]

Architectural diagram of SCIM#

[Diagram: SCIM architecture]

Minimum Couchbase System Requirements for cloud deployments#

Note

Couchbase needs optimization in a production environment and must be tested to suit the organizational needs.

NAME # of nodes RAM(GiB) Disk Space CPU Total RAM(GiB) Total CPU
Couchbase Index 1 3 5Gi 1 3 1
Couchbase Query 1 - 5Gi 1 - 1
Couchbase Data 1 3 5Gi 1 3 1
Couchbase Search, Eventing and Analytics 1 2 5Gi 1 2 1
Grand Total: 7-8 GiB RAM (8 GiB if the query pod is allocated 1 GiB), 20Gi disk space, 4 CPUs