
The Kubernetes recipes#

  1. If deploying on the cloud, make sure to take a look at the cloud-specific notes before continuing.

    Warning

    If deploying locally, make sure to take a look at the specific notes below before continuing.

  2. Install using one of the following:

Amazon Web Services (AWS) - EKS#

Setup Cluster#

  • Follow this guide to install a cluster with worker nodes. Please make sure that you have all the IAM policies for the AWS user that will be creating the cluster and volumes.

Requirements#

  • The above guide should also walk you through installing kubectl, aws-iam-authenticator and the aws CLI on the VM you will be managing your cluster and nodes from. Verify the installations:
    aws-iam-authenticator help
    aws --version
    kubectl version
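As a convenience, the checks above can be wrapped in a small helper that reports any tool missing from PATH. This is a sketch, not part of the guide's tooling; the tool names are simply the ones this guide assumes.

```shell
#!/bin/sh
# require: report whether the named executable is on PATH.
require() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing required tool: $1" >&2
    return 1
  fi
}

# Check the tools this guide assumes on the management VM:
#   require kubectl && require aws-iam-authenticator && require aws
# Demonstration with a binary that is always present:
require sh
```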
    

Note

The default AWS deployment installs a classic load balancer with an IP that is not static. Don't worry about the IP changing; all pods will be updated automatically by our script when the load balancer's IP changes. However, when deploying in production, DO NOT use our script. Instead, assign a CNAME record for the LoadBalancer DNS name, or use Amazon Route 53 to create a hosted zone. More details in this AWS guide.
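For the production path described above, the CNAME can be created with the AWS CLI's `route53 change-resource-record-sets` command and a change batch along these lines. This is a sketch: the record name, TTL, and ELB DNS name are placeholders to substitute with your own values.

```json
{
  "Comment": "Point the Gluu FQDN at the classic load balancer",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "demoexample.gluu.org",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "axx-109xx52.us-west-2.elb.amazonaws.com" }
        ]
      }
    }
  ]
}
```

Saved as e.g. `cname.json`, it can be applied with `aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch file://cname.json`.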

GCE (Google Cloud Engine) - GKE#

Setup Cluster#

  1. Install gcloud

  2. Install kubectl using the gcloud components install kubectl command

  3. Create a cluster using a command such as the following:

    gcloud container clusters create exploringgluu --num-nodes 2 --machine-type n1-standard-2 --zone us-west1-a --additional-zones us-west1-b,us-west1-c
    

    where exploringgluu is the name chosen for the cluster and us-west1-a is the zone where the cluster resources live (with us-west1-b and us-west1-c as additional zones).

  4. Configure kubectl to use the cluster:

    gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE_NAME
    

    where CLUSTER_NAME is the name you chose for the cluster and ZONE_NAME is the name of the zone where the cluster resources live.

    Afterwards run kubectl cluster-info to check whether kubectl is ready to interact with the cluster.

  5. If no connection is made to the Google Cloud Console with a Google account, calls to the API will fail. Either connect to the Google Cloud Console using an associated Google account and run any kubectl command such as kubectl get pod, or create a service account using a JSON key file.
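The service-account route in step 5 can be sketched as follows. This assumes gcloud is already installed and authenticated with sufficient IAM permissions; the project and account names in the usage example are hypothetical.

```shell
#!/bin/sh
# Sketch: create a GCP service account and use its JSON key for API access.
create_gke_sa() {
  PROJECT_ID="$1"; SA_NAME="$2"
  # Create the service account and download a JSON key for it.
  gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"
  gcloud iam service-accounts keys create sa-key.json \
    --iam-account "$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com"
  # Authenticate subsequent gcloud/kubectl calls with the key file
  # instead of a personal Google account.
  gcloud auth activate-service-account --key-file sa-key.json
}

# Usage (requires an authenticated gcloud with IAM permissions):
#   create_gke_sa my-gcp-project gluu-deployer
```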

Azure - AKS#

Warning

Pending

Requirements#

  • Follow this guide to install Azure CLI on the VM that will be managing the cluster and nodes. Check to make sure.

  • Follow this section to create the resource group for the AKS setup.

  • Follow this section to create the AKS cluster

  • Follow this section to connect to the AKS cluster

Minikube#

Requirements#

  1. Install minikube.

  2. Install kubectl.

  3. Create cluster:

    minikube start
    
  4. Configure kubectl to use the cluster:

    kubectl config use-context minikube
    
  5. Enable ingress on minikube:

    minikube addons enable ingress
    

MicroK8s#

Requirements#

  1. Install MicroK8s

  2. Make sure all ports are open for microk8s

  3. Enable helm3, storage, ingress and dns.

    sudo microk8s.enable helm3 storage ingress dns
    

Install Gluu using pygluu-kubernetes with Kustomize#

  1. Download pygluu-kubernetes.pyz. This package can be built manually.

  2. Optional: if using Couchbase as the persistence backend, download the Couchbase Kubernetes operator package for Linux and place it in the same directory as pygluu-kubernetes.pyz.

  3. Run:

    ./pygluu-kubernetes.pyz install
    

Note

Prompts will ask for the rest of the information needed. During the execution of pygluu-kubernetes.pyz you may generate the manifests (yaml files) and continue to deployment, or just generate the manifests. pygluu-kubernetes.pyz outputs a file called settings.json holding all the parameters. More information about this file and the vars it holds is below, but please don't create this file manually; the script can generate it using pygluu-kubernetes.pyz generate-settings.

settings.json parameters file contents#

Note

Please generate this file using pygluu-kubernetes.pyz generate-settings.
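For orientation only, a generated settings.json for a minimal MicroK8s demo deployment might begin like the excerpt below. Every key comes from the table that follows, but the values shown are illustrative, the file is incomplete, and the real file should always come from generate-settings.

```json
{
  "ACCEPT_GLUU_LICENSE": "Y",
  "GLUU_VERSION": "4.1",
  "DEPLOYMENT_ARCH": "microk8s",
  "PERSISTENCE_BACKEND": "ldap",
  "GLUU_CACHE_TYPE": "NATIVE_PERSISTENCE",
  "GLUU_NAMESPACE": "gluu",
  "GLUU_FQDN": "demoexample.gluu.org",
  "COUNTRY_CODE": "US",
  "STATE": "TX",
  "CITY": "Austin",
  "EMAIL": "support@gluu.org",
  "ORG_NAME": "Gluu",
  "HOST_EXT_IP": "192.168.99.100",
  "CONFIRM_PARAMS": "N"
}
```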

Parameter Description Options
ACCEPT_GLUU_LICENSE Accept the License "Y" or "N"
GLUU_VERSION Gluu version to be installed "4.0" or "4.1"
GLUU_HELM_RELEASE_NAME Gluu Helm release name "<name>"
NGINX_INGRESS_RELEASE_NAME Nginx Ingress release name "<name>"
NODES_IPS List of kubernetes cluster node ips ["<ip>", "<ip2>", "<ip3>"]
NODES_ZONES List of kubernetes cluster node zones ["<node1_zone>", "<node2_zone>", "<node3_zone>"]
NODES_NAMES List of kubernetes cluster node names ["<node1_name>", "<node2_name>", "<node3_name>"]
NODE_SSH_KEY nodes ssh key path location "<pathtosshkey>"
HOST_EXT_IP Minikube or Microk8s vm ip "<ip>"
VERIFY_EXT_IP Verify the Minikube or Microk8s vm ip placed "Y" or "N"
AWS_LB_TYPE AWS loadbalancer type "" , "clb" or "nlb"
USE_ARN Use ssl provided from ACM AWS "", "Y" or "N"
ARN_AWS_IAM The arn string "" or "<arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX>"
LB_ADD AWS loadbalancer address "<loadbalancer_address>"
DEPLOYMENT_ARCH Deployment architecture "microk8s", "minikube", "eks", "gke" or "aks"
PERSISTENCE_BACKEND Backend persistence type "ldap", "couchbase" or "hybrid"
REDIS_URL Redis url with port. Used when Redis is deployed for Cache. i.e "redis:6379", "clustercfg.testing-redis.icrbdv.euc1.cache.amazonaws.com:6379"
REDIS_TYPE Type of Redis deployed "SHARDED", "STANDALONE", "CLUSTER", or "SENTINEL"
REDIS_PW Redis Password if used. This may be empty. If not choose a long password. i.e "", "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURUakNDQWphZ0F3SUJBZ0lVV2Y0TExEb"
REDIS_USE_SSL Redis SSL use "false" or "true"
REDIS_SSL_TRUSTSTORE Redis SSL truststore. If using cloud provider services this is left empty. i.e "", "/etc/myredis.pem"
REDIS_SENTINEL_GROUP Redis Sentinel group i.e ""
INSTALL_COUCHBASE Install couchbase "Y" or "N"
COUCHBASE_NAMESPACE Couchbase namespace "<name>"
COUCHBASE_VOLUME_TYPE Persistence Volume type "io1","ps-ssd", "Premium_LRS"
COUCHBASE_CLUSTER_NAME Couchbase cluster name "<name>"
COUCHBASE_FQDN Couchbase FQDN "" or i.e "<clustername>.<namespace>.gluu.org"
COUCHBASE_URL Couchbase internal address to the cluster "" or i.e "<clustername>.<namespace>.svc.cluster.local"
COUCHBASE_USER Couchbase username "" or i.e "admin"
COUCHBASE_CRT Couchbase CA certification "" or i.e <crt content not encoded>
COUCHBASE_CN Couchbase certificate common name ""
COUCHBASE_SUBJECT_ALT_NAME Couchbase SAN "" or i.e "cb.gluu.org"
COUCHBASE_CLUSTER_FILE_OVERRIDE Override couchbase-cluster.yaml with a custom couchbase-cluster.yaml "Y" or "N"
COUCHBASE_USE_LOW_RESOURCES Use very low resources for Couchbase deployment. For demo purposes "Y" or "N"
COUCHBASE_DATA_NODES Number of Couchbase data nodes "" or i.e "4"
COUCHBASE_QUERY_NODES Number of Couchbase query nodes "" or i.e "3"
COUCHBASE_INDEX_NODES Number of Couchbase index nodes "" or i.e "3"
COUCHBASE_SEARCH_EVENTING_ANALYTICS_NODES Number of Couchbase search, eventing and analytics nodes "" or i.e "2"
COUCHBASE_GENERAL_STORAGE Couchbase general storage size "" or i.e "2"
COUCHBASE_DATA_STORAGE Couchbase data storage size "" or i.e "5Gi"
COUCHBASE_INDEX_STORAGE Couchbase index storage size "" or i.e "5Gi"
COUCHBASE_QUERY_STORAGE Couchbase query storage size "" or i.e "5Gi"
COUCHBASE_ANALYTICS_STORAGE Couchbase search, eventing and analytics storage size "" or i.e "5Gi"
COUCHBASE_BACKUP_SCHEDULE Couchbase back up cron job frequency i.e "*/30 * * * *"
COUCHBASE_BACKUP_RESTORE_POINTS Couchbase number of backups to keep i.e 3
NUMBER_OF_EXPECTED_USERS Number of expected users [couchbase-resource-calc-alpha] "" or i.e "1000000"
EXPECTED_TRANSACTIONS_PER_SEC Expected transactions per second [couchbase-resource-calc-alpha] "" or i.e "2000"
USING_CODE_FLOW If using code flow [couchbase-resource-calc-alpha] "", "Y" or "N"
USING_SCIM_FLOW If using SCIM flow [couchbase-resource-calc-alpha] "", "Y" or "N"
USING_RESOURCE_OWNER_PASSWORD_CRED_GRANT_FLOW If using password flow [couchbase-resource-calc-alpha] "", "Y" or "N"
DEPLOY_MULTI_CLUSTER Deploying a Multi-cluster [alpha] "Y" or "N"
HYBRID_LDAP_HELD_DATA Type of data to be held in LDAP with a hybrid installation of couchbase and LDAP "", "default", "user", "site", "cache" or "token"
LDAP_VOLUME LDAP Volume type "", "io1","ps-ssd", "Premium_LRS"
LDAP_VOLUME_TYPE Volume type for LDAP persistence options
LDAP_STATIC_VOLUME_ID LDAP static volume id (AWS EKS) "" or "<static-volume-id>"
LDAP_STATIC_DISK_URI LDAP static disk uri (GCE GKE or Azure) "" or "<disk-uri>"
LDAP_BACKUP_SCHEDULE LDAP back up cron job frequency i.e "*/30 * * * *"
GLUU_CACHE_TYPE Cache type to be used "IN_MEMORY", "REDIS" or "NATIVE_PERSISTENCE"
GLUU_NAMESPACE Namespace to deploy Gluu in "<name>"
GLUU_FQDN Gluu FQDN "<FQDN>" i.e "demoexample.gluu.org"
COUNTRY_CODE Gluu country code "<country code>" i.e "US"
STATE Gluu state "<state>" i.e "TX"
EMAIL Gluu email "<email>" i.e "support@gluu.org"
CITY Gluu city "<city>" i.e "Austin"
ORG_NAME Gluu organization name "<org-name>" i.e "Gluu"
GMAIL_ACCOUNT Gmail account for GKE installation "" or "<gmail>" i.e
GOOGLE_NODE_HOME_DIR User node home directory, used if the hosts volume is used "Y" or "N"
IS_GLUU_FQDN_REGISTERED Is Gluu FQDN globally resolvable "Y" or "N"
OXD_APPLICATION_KEYSTORE_CN OXD application keystore common name "<name>" i.e "oxd_server"
OXD_ADMIN_KEYSTORE_CN OXD admin keystore common name "<name>" i.e "oxd_server"
LDAP_STORAGE_SIZE LDAP volume storage size "" i.e "4Gi"
OXAUTH_REPLICAS Number of oxAuth replicas min "1"
OXTRUST_REPLICAS Number of oxTrust replicas min "1"
LDAP_REPLICAS Number of LDAP replicas min "1"
OXSHIBBOLETH_REPLICAS Number of oxShibboleth replicas min "1"
OXPASSPORT_REPLICAS Number of oxPassport replicas min "1"
OXD_SERVER_REPLICAS Number of oxdServer replicas min "1"
CASA_REPLICAS Number of Casa replicas [alpha] min "1"
RADIUS_REPLICAS Number of Radius replica min "1"
ENABLE_OXTRUST_API Enable oxTrust-api "Y" or "N"
ENABLE_OXTRUST_TEST_MODE Enable oxTrust Test Mode "Y" or "N"
ENABLE_CACHE_REFRESH Enable cache refresh rotate installation "Y" or "N"
ENABLE_OXD Enable oxd server installation "Y" or "N"
ENABLE_RADIUS Enable Radius installation "Y" or "N"
ENABLE_OXPASSPORT Enable oxPassport installation "Y" or "N"
ENABLE_OXSHIBBOLETH Enable oxShibboleth installation "Y" or "N"
ENABLE_CASA Enable Casa installation [alpha] "Y" or "N"
ENABLE_KEY_ROTATE Enable key rotate installation "Y" or "N"
ENABLE_OXTRUST_API_BOOLEAN Used by pygluu-kubernetes "false"
ENABLE_OXTRUST_TEST_MODE_BOOLEAN Used by pygluu-kubernetes "false"
ENABLE_RADIUS_BOOLEAN Used by pygluu-kubernetes "false"
ENABLE_OXPASSPORT_BOOLEAN Used by pygluu-kubernetes "false"
ENABLE_CASA_BOOLEAN Used by pygluu-kubernetes "false"
ENABLE_SAML_BOOLEAN Used by pygluu-kubernetes "false"
EDIT_IMAGE_NAMES_TAGS Manually place the image source and tag "Y" or "N"
CASA_IMAGE_NAME Casa image repository name i.e "gluufederation/casa"
CASA_IMAGE_TAG Casa image tag i.e "4.1.0_01"
CONFIG_IMAGE_NAME Config image repository name i.e "gluufederation/config-init"
CONFIG_IMAGE_TAG Config image tag i.e "4.1.0_01"
CACHE_REFRESH_ROTATE_IMAGE_NAME Cache refresh image repository name i.e "gluufederation/cr-rotate"
CACHE_REFRESH_ROTATE_IMAGE_TAG Cache refresh image tag i.e "4.1.0_01"
KEY_ROTATE_IMAGE_NAME Key rotate image repository name i.e "gluufederation/key-rotation"
KEY_ROTATE_IMAGE_TAG Key rotate image tag i.e "4.1.0_01"
LDAP_IMAGE_NAME LDAP image repository name i.e "gluufederation/wrends"
LDAP_IMAGE_TAG LDAP image tag i.e "4.1.0_01"
OXAUTH_IMAGE_NAME oxAuth image repository name i.e "gluufederation/oxauth"
OXAUTH_IMAGE_TAG oxAuth image tag i.e "4.1.0_01"
OXD_IMAGE_NAME oxd image repository name i.e "gluufederation/oxd-server"
OXD_IMAGE_TAG oxd image tag i.e "4.1.0_01"
OXPASSPORT_IMAGE_NAME oxPassport image repository name i.e "gluufederation/oxpassport"
OXPASSPORT_IMAGE_TAG oxPassport image tag i.e "4.1.0_01"
OXSHIBBOLETH_IMAGE_NAME oxShibboleth image repository name i.e "gluufederation/oxshibboleth"
OXSHIBBOLETH_IMAGE_TAG oxShibboleth image tag i.e "4.1.0_01"
OXTRUST_IMAGE_NAME oxTrust image repository name i.e "gluufederation/oxtrust"
OXTRUST_IMAGE_TAG oxTrust image tag i.e "4.1.0_01"
PERSISTENCE_IMAGE_NAME Persistence image repository name i.e "gluufederation/persistence"
PERSISTENCE_IMAGE_TAG Persistence image tag i.e "4.1.0_01"
RADIUS_IMAGE_NAME Radius image repository name i.e "gluufederation/radius"
RADIUS_IMAGE_TAG Radius image tag i.e "4.1.0_01"
UPGRADE_IMAGE_NAME Gluu upgrade image repository name i.e "gluufederation/upgrade"
UPGRADE_IMAGE_TAG Gluu upgrade image tag i.e "4.1.0_01"
CONFIRM_PARAMS Confirm using above options "Y" or "N"

LDAP_VOLUME_TYPE-options#

LDAP_VOLUME_TYPE defaults to "", but if PERSISTENCE_BACKEND is WrenDS, the options are:

Options Deployment Architecture Volume Type
1 Microk8s LDAP volumes on host
2 Minikube LDAP volumes on host
6 EKS LDAP volumes on host
7 EKS LDAP EBS volumes dynamically provisioned
8 EKS LDAP EBS volumes statically provisioned
11 GKE LDAP volumes on host
12 GKE LDAP Persistent Disk dynamically provisioned
13 GKE LDAP Persistent Disk statically provisioned
16 Azure LDAP volumes on host
17 Azure LDAP Persistent Disk dynamically provisioned
18 Azure LDAP Persistent Disk statically provisioned

Uninstall Gluu using Kustomize#

  1. Run:

    ./pygluu-kubernetes.pyz uninstall
    

Install Gluu using Helm#

Prerequisites#

  • Kubernetes 1.x
  • Persistent volume provisioner support in the underlying infrastructure
  • Install Helm3

Quickstart#

  1. Download pygluu-kubernetes.pyz. This package can be built manually.

  2. Optional: if using Couchbase as the persistence backend, download the Couchbase Kubernetes operator package for Linux and place it in the same directory as pygluu-kubernetes.pyz.

  3. Run:

./pygluu-kubernetes.pyz helm-install

Installing Gluu using Helm manually#

  1. Install nginx-ingress Helm Chart.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
helm install <nginx-release-name> stable/nginx-ingress --namespace=<nginx-namespace>
    • If the FQDN for Gluu, i.e demoexample.gluu.org, is registered and globally resolvable, forward it to the load balancer's address created in the previous step by nginx-ingress. A record can be added on most cloud providers to forward the domain to the load balancer. For example, on AWS assign a CNAME record for the LoadBalancer DNS name, or use Amazon Route 53 to create a hosted zone. More details in this AWS guide. Another example on GCE.

    • If the FQDN is not registered, acquire the load balancer's IP if on GCE or Azure using kubectl get svc <release-name>-nginx-ingress-controller --output jsonpath='{.status.loadBalancer.ingress[0].ip}', and if on AWS get the load balancer's address using kubectl -n ingress-nginx get svc ingress-nginx --output jsonpath='{.status.loadBalancer.ingress[0].hostname}'.

    • If deploying on the cloud make sure to take a look at the helm cloud specific notes before continuing.

    • EKS

    • GKE

    • If deploying locally make sure to take a look at the helm specific notes below before continuing.

    • Minikube

    • MicroK8s
  2. Optional: if using Couchbase as the persistence backend:

    1. Download pygluu-kubernetes.pyz. This package can be built manually.

    2. Download the couchbase kubernetes operator package for linux and place it in the same directory as pygluu-kubernetes.pyz.

    3. Run:

    ./pygluu-kubernetes.pyz couchbase-install
    
    4. Open the settings.json file generated from the previous step and copy the values of COUCHBASE_URL and COUCHBASE_USER to global.gluuCouchbaseUrl and global.gluuCouchbaseUser in values.yaml, respectively.

  3. Make sure you are in the same directory as the values.yaml file and run:

helm install <release-name> -f values.yaml -n <namespace> .

EKS helm notes#

Required changes to the values.yaml#

Inside the global values.yaml change the keys marked with CHANGE-THIS to the appropriate values:

#global values to be used across charts
global:
  provisioner: kubernetes.io/aws-ebs #CHANGE-THIS
  lbAddr: "" #CHANGE-THIS to the address received in the previous step i.e axx-109xx52.us-west-2.elb.amazonaws.com
  domain: demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
  isDomainRegistered: "false" # CHANGE-THIS  "true" or "false" to specify if the domain above is registered or not.

nginx:
  ingress:
    enabled: true
    path: /
    hosts:
      - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
    tls:
      - secretName: tls-certificate
        hosts:
          - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu

Tweak the optional parameters in values.yaml to fit the setup needed.

GKE helm notes#

Required changes to the values.yaml#

Inside the global values.yaml change the keys marked with CHANGE-THIS to the appropriate values:

#global values to be used across charts
global:
  provisioner: kubernetes.io/gce-pd #CHANGE-THIS
  lbAddr: ""
  domain: demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
    # Networking configs
  nginxIp: "" #CHANGE-THIS to the IP received from the previous step
  isDomainRegistered: "false" # CHANGE-THIS  "true" or "false" to specify if the domain above is registered or not.
nginx:
  ingress:
    enabled: true
    path: /
    hosts:
      - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
    tls:
      - secretName: tls-certificate
        hosts:
          - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu

Tweak the optional parameters in values.yaml to fit the setup needed.

Minikube helm notes#

Required changes to the values.yaml#

Inside the global values.yaml change the keys marked with CHANGE-THIS to the appropriate values:

#global values to be used across charts
global:
  provisioner: k8s.io/minikube-hostpath #CHANGE-THIS
  lbAddr: ""
  domain: demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
  nginxIp: "" #CHANGE-THIS  to the IP of minikube <minikube ip>

nginx:
  ingress:
    enabled: true
    path: /
    hosts:
      - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
    tls:
      - secretName: tls-certificate
        hosts:
          - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu

Tweak the optional parameters in values.yaml to fit the setup needed.

  • Map Gluu's FQDN in the /etc/hosts file to the minikube IP as shown below.

    ##
    # Host Database
    #
    # localhost is used to configure the loopback interface
    # when the system is booting.  Do not change this entry.
    ##
    192.168.99.100  demoexample.gluu.org #minikube IP and example domain
    127.0.0.1   localhost
    255.255.255.255 broadcasthost
    ::1             localhost
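A small helper can add that mapping idempotently. This is a sketch: HOSTS_FILE is parameterized only so it can be tried against a scratch file before touching the real /etc/hosts, and the IP and domain in the usage example are the example values above.

```shell
#!/bin/sh
# Append "<ip> <fqdn>" to the hosts file unless the FQDN is already mapped.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"

map_fqdn() {
  ip="$1"; fqdn="$2"
  if grep -q "[[:space:]]$fqdn" "$HOSTS_FILE" 2>/dev/null; then
    echo "$fqdn already mapped in $HOSTS_FILE"
  else
    printf '%s\t%s\n' "$ip" "$fqdn" >> "$HOSTS_FILE"
  fi
}

# Example: map the minikube IP to the demo domain.
#   map_fqdn "$(minikube ip)" demoexample.gluu.org
```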
    

Microk8s helm notes#

Required changes to the values.yaml#

Inside the global values.yaml change the keys marked with CHANGE-THIS to the appropriate values:

#global values to be used across charts
global:
  provisioner: microk8s.io/hostpath #CHANGE-THIS
  lbAddr: ""
  domain: demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
  nginxIp: "" #CHANGE-THIS  to the IP of the microk8s vm

nginx:
  ingress:
    enabled: true
    path: /
    hosts:
      - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
    tls:
      - secretName: tls-certificate
        hosts:
          - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu

Tweak the optional parameters in values.yaml to fit the setup needed.

  • Map Gluu's FQDN in the /etc/hosts file to the microk8s vm IP as shown below.
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
192.168.99.100    demoexample.gluu.org #microk8s IP and example domain
127.0.0.1 localhost
255.255.255.255   broadcasthost
::1             localhost

Uninstalling the Chart#

To uninstall/delete the my-release deployment:

helm delete <my-release>

If the release name was not defined during installation, it can be found by running helm ls and then deleted using the previous command with that release name.

Configuration#

Parameter Description Default
global.cloud.enabled Whether to enable cloud provisioning. false
global.provisioner Which cloud provisioner to use when deploying k8s.io/minikube-hostpath
global.ldapServiceName ldap service name. Used to connect other services to ldap opendj
global.nginxIp IP address to be used with a FQDN 192.168.99.100 (for minikube)
global.oxAuthServiceName oxauth service name - should not be changed oxauth
global.oxTrustServiceName oxtrust service name - should not be changed oxtrust
global.domain DNS domain name demoexample.gluu.org
global.isDomainRegistered Whether the domain to be used is registered or not false
global.gluuLdapUrl wrends/ldap server url. Port and service name of opendj server - should not be changed opendj:1636
global.gluuMaxFraction Controls how much of total RAM is up for grabs in containers running Java apps 1
global.configAdapterName The config backend adapter Kubernetes
global.configSecretAdapter The secrets adapter Kubernetes
global.gluuPersistenceType Which database backend to use ldap
global.gluuCouchbaseUrl Couchbase URL. Used only when global.gluuPersistenceType is hybrid or couchbase cbgluu.cbns.svc.cluster.local
global.gluuCouchbaseUser Couchbase user. Used only when global.gluuPersistenceType is hybrid or couchbase admin
global.gluuCouchbasePassFile Location of couchbase_password file /etc/gluu/conf/couchbase_password
global.gluuCouchbaseCertFile Location of couchbase.crt used by cb for tls termination /etc/gluu/conf/couchbase.crt
global.gluuRedisUrl Redis url with port. Used when Redis is deployed for Cache. redis:6379
global.gluuRedisType Type of Redis deployed. "SHARDED", "STANDALONE", "CLUSTER", or "SENTINEL"
global.gluuRedisUseSsl Redis SSL use "false" or "true"
global.gluuRedisSslTruststore Redis SSL truststore. If using cloud provider services this is left empty. ``
global.gluuRedisSentinelGroup Redis Sentinel group ``
global.oxshibboleth.enabled Whether to allow installation of oxshibboleth chart false
global.key-rotation.enabled Allow key rotation false
global.cr-rotate.enabled Allow cache rotation deployment false
global.radius.enabled Enabled radius installation false
global.redis.enabled Whether to allow installation of redis chart. false
global.oxtrust.enabled Allow installation of oxtrust true
global.nginx.enabled Allow installation of nginx. Should be allowed unless another nginx is being deployed true
global.config.enabled Either to install config chart or not. true
config.orgName Organisation Name Gluu
config.email Email to be registered with ssl support@gluu.org
config.adminPass Admin password to log in to the UI P@ssw0rd
config.domain FQDN demoexample.gluu.org
config.countryCode Country code of where the Org is located US
config.state State TX
config.ldapType Type of LDAP server to use. opendj
global.oxauth.enabled Whether to allow installation of oxauth subchart. Should be left as true true
global.opendj.enabled Allow installation of ldap Should left as true true
global.gluuCacheType Options REDIS or NATIVE_PERSISTENCE If REDIS is used redis chart must be enabled and gluuRedisEnabled config set to true NATIVE_PERSISTENCE
opendj.gluuRedisEnabled Used if cache type is redis false
global.persistence.enabled Whether to enable persistence layer. Must ALWAYS remain true true
persistence.configmap.gluuCasaEnabled Enable auto install of casa chart/service while installing Gluu server chart false
persistence.configmap.gluuPassportEnabled Auto install passport service chart false
persistence.configmap.gluuRadiusEnabled Auto install radius service chart false
persistence.configmap.gluuSamlEnabled Auto enable SAML in oxshibboleth. This should be true whether or not oxshibboleth is installed. true
oxd-server.enabled Enable or disable installation of OXD server false
oxd-server.configmap.adminKeystorePassword Admin keystore password examplePass
oxd-server.configmap.applicationKeystorePassword Password used to decrypt the keystore examplePass
nginx.ingress.enabled Set routing rules to different services true
nginx.ingress.hosts Gluu FQDN demoexample.gluu.org

Persistence#

Note

To enable oxTrust API support and/or oxTrust TEST_MODE, set gluuOxtrustApiEnabled and gluuOxtrustApiTestMode to true respectively.

# persistence layer
persistence:
  configmap:
     gluuOxtrustApiEnabled: true

Similarly, to enable oxTrust TEST_MODE, set the variable gluuOxtrustApiTestMode in the same persistence config to true:

# persistence layer
persistence:
  configmap:
     gluuOxtrustApiTestMode: true

Instructions on how to install different services#

Enabling any of the following services automatically installs the corresponding chart. To enable/disable them, set true or false in the persistence configs as shown below.

# persistence layer
persistence:
  enabled: true
  configmap:
    # Auto install other services. If enabled the respective service chart will be installed
    gluuPassportEnabled: false
    gluuCasaEnabled: false
    gluuRadiusEnabled: false
    gluuSamlEnabled: false

oxd-server#

NOTE: If these two passwords are not provided, oxd-server will fail to start.
NOTE: For these passwords, stick to letters and digits only.

oxd-server:
  configmap:
    adminKeystorePassword: admin-example-password
    applicationKeystorePassword: app-example-pass

Casa#

  • Casa is dependent on oxd-server. To install it, oxd-server must be enabled.

Redis#

To enable usage of Redis, change the following values.

opendj:
  # options REDIS/NATIVE_PERSISTENCE
  gluuCacheType: REDIS
  # options true/false : must be enabled if cache type is REDIS
  gluuRedisEnabled: true

# redis should be enabled only when cacheType is REDIS
global:
  redis:
    enabled: true

Other optional services#

Other optional services like key-rotation, cr-rotation, and radius are enabled by setting their corresponding values to true under the global block.

For example, to enable cr-rotate set

global:
  cr-rotate:
    enabled: true

Use Couchbase solely as the persistence layer#

Requirements#

  • If you are installing on microk8s or minikube, please ignore the notes below, as a low-resource couchbase-cluster.yaml will be applied automatically. However, the VM being used must have at least 8GB RAM and 2 CPUs available.

  • An m5.xlarge EKS cluster with 3 nodes at the minimum, or an n2-standard-4 GKE cluster with 3 nodes. We advise contacting Gluu regarding production setups.

  • Install couchbase kubernetes and place the tar.gz file inside the same directory as the pygluu-kubernetes.pyz.

  • A modified couchbase/couchbase-cluster.yaml will be generated; in production it is likely that this file will need further modification.

  • To override the couchbase-cluster.yaml, place the file inside the /couchbase folder after running ./pygluu-kubernetes.pyz. More information on the properties of couchbase-cluster.yaml.

Note

Please note the couchbase/couchbase-cluster.yaml file must include at least three defined spec.servers with the labels couchbase_services: index, couchbase_services: data and couchbase_services: analytics

If you wish to get started fast, just change the values of spec.servers.name and spec.servers.serverGroups inside couchbase/couchbase-cluster.yaml to the zones of your EKS nodes and continue.

  • Run ./pygluu-kubernetes.pyz install-couchbase and follow the prompts to install couchbase solely with Gluu.

Use remote Couchbase as the persistence layer#

  • Install couchbase

  • Obtain the Public DNS or FQDN of the couchbase node.

  • Head to the FQDN of the couchbase node to set up your couchbase cluster. When setting up, please use the FQDN as the hostname of the new cluster.

  • The Couchbase URL base, user, and password will be needed during installation when running pygluu-kubernetes.pyz.

How to expand EBS volumes#

  1. Make sure the StorageClass used in your deployment has allowVolumeExpansion set to true. If you have used our EBS volume deployment strategy, you will find that this property has already been set for you.

  2. Edit your persistent volume claim using kubectl edit pvc <claim-name> -n <namespace> and increase the value of storage: to the size needed. Verify that the volume expanded by checking kubectl get pvc <claim-name> -n <namespace>.

  3. Restart the associated services
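As a non-interactive alternative to kubectl edit in step 2, the same change can be applied as a merge patch. This is a sketch; the claim name, namespace, and size in the usage example are hypothetical.

```shell
#!/bin/sh
# Sketch: non-interactively raise a PVC's requested storage.
expand_pvc() {
  claim="$1"; ns="$2"; size="$3"
  # Raise the requested size; the storage driver expands the backing
  # volume as long as allowVolumeExpansion is true on the StorageClass.
  kubectl patch pvc "$claim" -n "$ns" --type merge \
    -p "{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"$size\"}}}}"
  # Confirm the new capacity once the expansion finishes.
  kubectl get pvc "$claim" -n "$ns"
}

# Usage:
#   expand_pvc opendj-pvc gluu 8Gi
```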

Scaling pods#

Note

When using MicroK8s, substitute kubectl with microk8s.kubectl in the below commands.

To scale pods, run the following command:

kubectl scale --replicas=<number> <resource> <name>

In this case, <resource> could be Deployment or Statefulset and <name> is the resource name.

Examples:

  • Scaling oxAuth:

    kubectl scale --replicas=2 deployment oxauth
    
  • Scaling oxTrust:

    kubectl scale --replicas=2 statefulset oxtrust
    

Build pygluu-kubernetes installer#

Overview#

pygluu-kubernetes.pyz is periodically released and does not need to be built manually. However, the process of building the installer package is listed below.

Build pygluu-kubernetes.pyz manually#

Prerequisites#

  1. Python 3.6+.
  2. Python pip3 package.

Installation#

Standard Python package#

  1. Create virtual environment and activate:

    python3 -m venv .venv
    source .venv/bin/activate
    
  2. Install the package:

    make install
    

    This command installs an executable called pygluu-kubernetes into the virtual environment's PATH.

Python zipapp#

  1. Install shiv using pip3:

    pip3 install shiv
    
  2. Install the package:

    make zipapp
    

    This command generates an executable called pygluu-kubernetes.pyz in the same directory.

Known bug#

  • Bug at line 101 of /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/kubernetes/client/models/v1beta1_custom_resource_definition_status.py, in conditions. The error will look similar to the following:
  File "/root/.shiv/pygluu-kubernetes_3e5bddf4d309be28790a1b035ab5d72d0b9f33dfaade59da1bb9ec0bcd0165a4/site-packages/kubernetes/client/models/v1beta1_custom_resource_definition_status.py", line 54, in __init__
  self.conditions = conditions
File "/root/.shiv/pygluu-kubernetes_3e5bddf4d309be28790a1b035ab5d72d0b9f33dfaade59da1bb9ec0bcd0165a4/site-packages/kubernetes/client/models/v1beta1_custom_resource_definition_status.py", line 101, in conditions
  ValueError: Invalid value for `conditions`, must not be `None`

To fix this error, just rerun the installation command ./pygluu-kubernetes.pyz <command> again.

Note

Another way to circumvent this bug is to build the Python Kubernetes client manually, as detailed below.

    git clone --recursive https://github.com/kubernetes-client/python.git
    cd python
    git checkout release-11.0
    sed 's/raise ValueError("Invalid value for `conditions`, must not be `None`")/pass/g' ./kubernetes/client/models/v1beta1_custom_resource_definition_status.py > tmpfile.py && mv tmpfile.py ./kubernetes/client/models/v1beta1_custom_resource_definition_status.py
    sudo python3 setup.py install

Now remove the line requiring the Python Kubernetes client from the pygluu-kubernetes setup.py file:

sed '/kubernetes>=11.0.0b2/d' ./setup.py > tmpfile.py && mv tmpfile.py setup.py

Then build pygluu-kubernetes manually as described above.