Docker Installation#
Overview#
This guide provides instructions for deploying the Gluu Server on a single node VM using Docker.
Prerequisites#
For Docker deployments, provision a VM with:
Linux users#
- The minimum system requirements, as described in the VM Preparation Guide.
- Docker installed.
Mac users#
- The minimum system requirements for Docker for Mac.
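Before continuing, it can help to confirm the Docker installation works; a minimal sanity check (standard Docker CLI commands, nothing Gluu-specific):

# verify the Docker client and daemon are reachable
docker version
docker info

# optionally run a throwaway container to confirm images can be pulled and started
docker run --rm hello-world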
Instructions#
Obtain files for deployment#
Download the latest pygluu-compose-linux-amd64.pyz (or pygluu-compose-macos-amd64.pyz for Mac users) file from the Releases page and save it as pygluu-compose.pyz.

Note
pygluu-compose.pyz requires Python 3.6+ (and the python3-distutils package if Ubuntu/Debian is used).

Make sure to set the downloaded pygluu-compose.pyz file as executable:

chmod +x pygluu-compose.pyz
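If unsure whether the Python requirement from the note above is met, a quick check (the package install line applies to Ubuntu/Debian only, per the note):

# confirm Python 3.6+ is available
python3 --version

# Ubuntu/Debian only: install python3-distutils if it is missing
sudo apt-get install -y python3-distutils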
Run the following command to generate manifests for deployment:
./pygluu-compose.pyz init
The generated files are similar to the example below:
tree .
.
├── couchbase.crt
├── couchbase_password
├── couchbase_superuser_password
├── docker-compose.yml
├── gcp_kms_creds.json
├── gcp_kms_stanza.hcl
├── jackrabbit_admin_password
├── job.persistence.yml
├── pygluu-compose.pyz
├── svc.casa.yml
├── svc.cr_rotate.yml
├── svc.fido2.yml
├── svc.jackrabbit.yml
├── svc.ldap.yml
├── svc.oxauth.yml
├── svc.oxd_server.yml
├── svc.oxpassport.yml
├── svc.oxshibboleth.yml
├── svc.oxtrust.yml
├── svc.radius.yml
├── svc.redis.yml
├── svc.scim.yml
├── svc.vault_autounseal.yml
├── vault_gluu_policy.hcl
├── vault_role_id.txt
└── vault_secret_id.txt
Proceed to the deployment section for a basic Gluu Server setup, or read the customizing section for advanced setup.
Customizing installation#
Choose services#
The following services are available during deployment:
Service | Setting Name | Mandatory | Enabled by default |
---|---|---|---|
consul | - | yes | always |
registrator | - | yes | always |
vault | - | yes | always |
nginx | - | yes | always |
persistence | JOB_PERSISTENCE | no | yes |
oxauth | SVC_OXAUTH | no | yes |
oxtrust | SVC_OXTRUST | no | yes |
ldap | SVC_LDAP | no | yes |
oxpassport | SVC_OXPASSPORT | no | no |
oxshibboleth | SVC_OXSHIBBOLETH | no | no |
redis | SVC_REDIS | no | no |
radius | SVC_RADIUS | no | no |
vault auto-unseal | SVC_VAULT_AUTOUNSEAL | no | no |
oxd_server | SVC_OXD_SERVER | no | no |
cr_rotate | SVC_CR_ROTATE | no | no |
casa | SVC_CASA | no | no |
scim | SVC_SCIM | no | no |
fido2 | SVC_FIDO2 | no | no |
jackrabbit | SVC_JACKRABBIT | no | no |
To enable/disable the non-mandatory services listed above, create a file called settings.py and set the value to True to enable or False to disable the service. For example:

# enable ldap service
SVC_LDAP = True
# disable passport service
SVC_OXPASSPORT = False

Any services not specified in settings.py will follow the default settings.
To override manifests (e.g. to change the oxAuth service definition), add ENABLE_OVERRIDE = True in settings.py, for example:

ENABLE_OVERRIDE = True

Then define overrides in docker-compose.override.yml (create the file if it does not exist):

version: "2.4"
services:
  oxauth:
    container_name: my-oxauth

If docker-compose.override.yml exists, this file will be added as the last Compose file. For reference on multiple Compose files, please take a look at https://docs.docker.com/compose/extends/#multiple-compose-files.
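To preview how the override will be merged before deploying, Docker Compose can render the combined configuration; a quick check (assuming the docker-compose CLI is installed and the command is run from the working directory):

# render the effective configuration after merging the base and override files
docker-compose -f docker-compose.yml -f docker-compose.override.yml config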
Choose persistence backends#
Supported backends are LDAP, Couchbase, or a mix of both (hybrid). The following settings control which persistence backend is selected:

- PERSISTENCE_TYPE: choose one of ldap, couchbase, or hybrid (the default is ldap)
- PERSISTENCE_LDAP_MAPPING: choose one of default, user, site, cache, token, or session (the default is default)
To choose a persistence backend, create a file called settings.py (if it wasn't created in the previous step) and set the corresponding options as seen above. For example:
# Couchbase will be selected
PERSISTENCE_TYPE = "couchbase"
# store user mapping in LDAP
PERSISTENCE_LDAP_MAPPING = "user"
# Couchbase user (has access to read and write data, read buckets, etc)
COUCHBASE_USER = "admin"
# optional, Couchbase superuser (has access to create buckets, etc)
COUCHBASE_SUPERUSER = ""
# Couchbase bucket prefix
COUCHBASE_BUCKET_PREFIX = "gluu"
# Host/IP address of Couchbase server; omit the port
COUCHBASE_URL = "192.168.100.4"
If couchbase or hybrid is selected, there are additional steps required to satisfy dependencies:

- put the Couchbase cluster certificate into the couchbase.crt file (see the sketch after this list)
- put the Couchbase password into the couchbase_password file
- the Couchbase cluster must have data, index, and query services at minimum
- if COUCHBASE_URL is set to a hostname, make sure it can be resolved by DNS query; alternatively, add the extra host into the docker-compose.override.yml file, for example:

services:
  oxauth:
    extra_hosts:
      - "${COUCHBASE_HOSTNAME}:${COUCHBASE_IP}"
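A minimal sketch of the first two items above, assuming the cluster certificate was already exported to /path/to/couchbase-cluster.crt (a placeholder path) and the commands are run from the directory containing pygluu-compose.pyz:

# copy the Couchbase cluster certificate into the expected file
cp /path/to/couchbase-cluster.crt couchbase.crt

# write the Couchbase password into the expected file (echo -n avoids a trailing newline)
echo -n 'MyCouchbasePassword' > couchbase_password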
Set up Vault auto-unseal#
Enable Vault auto-unseal with the GCP KMS API by specifying it in settings.py:
# settings.py
# enable Vault auto-unseal with GCP KMS API
SVC_VAULT_AUTOUNSEAL = True
Obtain a GCP KMS credentials JSON file and save it as gcp_kms_creds.json in the same directory where pygluu-compose.pyz is located. The file looks similar to this:
{
"type": "service_account",
"project_id": "project",
"private_key_id": "1234abcd",
"private_key": "-----BEGIN PRIVATE KEY-----\nabcdEFGH==\n-----END PRIVATE KEY-----\n",
"client_email": "sa@project.iam.gserviceaccount.com",
"client_id": "1234567890",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/sa%40project.iam.gserviceaccount.com"
}
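One way to obtain such a file, assuming an authenticated gcloud CLI and a service account (sa@project.iam.gserviceaccount.com is a placeholder) that already has the required Cloud KMS permissions:

# create and download a JSON key for the service account
gcloud iam service-accounts keys create gcp_kms_creds.json \
    --iam-account=sa@project.iam.gserviceaccount.com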
Afterwards, create gcp_kms_stanza.hcl in the same directory where pygluu-compose.pyz is located; for example:
seal "gcpckms" {
credentials = "/vault/config/creds.json"
project = "vault-project-1234"
region = "us-east1"
key_ring = "vault-keyring"
crypto_key = "vault-key"
}
Note
Adjust the contents of gcp_kms_stanza.hcl to match your GCP project, but keep the credentials value as-is, because that path is mapped in the svc.vault_autounseal.yml file.
Choose Document Storage#
There are 2 types of supported document storage:

- LOCAL

  This is the default document store (it uses the container's filesystem).

- JCA

  This document store uses the jackrabbit service. To enable the JCA document store, modify settings.py as seen below:

  SVC_JACKRABBIT = True
  DOCUMENT_STORE_TYPE = "JCA"

  By default, the jackrabbit service will add its own user/password credentials (the defaults are admin for both the username and the password). To change the credentials, add the following config in settings.py:

  # change the username
  JACKRABBIT_USER = "my-jackrabbit-user"

  Modify the jackrabbit_admin_password file:

  my-jackrabbit-password
If users need to modify the credentials after the service is running, there are a few steps that need to be done:

- Change the credentials via the oxTrust UI (Configuration > JSON Configuration > Store Provider Configuration menu):
  - adjust the User Id form field
  - adjust the Password form field (use the plaintext password, i.e. my-jackrabbit-password)
  - submit the form
- Change the JACKRABBIT_USER config in settings.py.
- Change the password in the jackrabbit_admin_password file.
- Re-deploy the oxtrust and jackrabbit services (see the sketch after this list).
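A sketch of the re-deploy step, assuming that re-running the deployment command recreates the services whose configuration changed (standard Compose behavior; verify against your setup):

# after updating settings.py and jackrabbit_admin_password,
# re-run the deployment so oxtrust and jackrabbit pick up the new credentials
./pygluu-compose.pyz up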
Cache Refresh IP Rotation#
Configuring LDAP Synchronization (Cache Refresh) requires a static IP address for oxTrust, but in the container world, IP addresses are assigned dynamically.
By enabling the cr_rotate service, the required IP address of oxTrust can be discovered and configured in persistence:
# enable cache refresh IP rotation
SVC_CR_ROTATE = True
Note that users still need to enable and configure Cache Refresh.
SAML#
Add the following config in settings.py:
# enable oxshibboleth service
SVC_OXSHIBBOLETH = True
# enable SAML support (including the sidebar menu on oxTrust)
SAML_ENABLED = True
Note
- The SAML_ENABLED config will take effect only on the persistence service, which is run on initial deployment. Alternatively, users can enable/disable the support using the oxTrust UI (Configuration > Organization Configuration menu).
- The oxtrust and jackrabbit services must be enabled (see the Choose Document Storage section above for configuring the jackrabbit service).
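Putting the notes above together, a sketch of the settings.py entries for a SAML-enabled deployment (the last two lines assume the jackrabbit service is configured with the JCA document store as described in the Choose Document Storage section):

# enable oxshibboleth service and SAML support
SVC_OXSHIBBOLETH = True
SAML_ENABLED = True

# jackrabbit must be enabled explicitly (oxtrust is enabled by default)
SVC_JACKRABBIT = True
DOCUMENT_STORE_TYPE = "JCA"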
SCIM#
Add the following config in settings.py:
# enable scim service
SVC_SCIM = True
# enable SCIM support (including the sidebar menu on oxTrust) and required custom scripts
SCIM_ENABLED = True
Note
The SCIM_ENABLED config will take effect only on the persistence service, which is run on initial deployment.
Alternatively, users can enable/disable the support using the oxTrust UI (Configuration > Organization Configuration menu).
Passport#
Add the following config in settings.py:
# enable passport service
SVC_OXPASSPORT = True
# enable Passport support (including the sidebar menu on oxTrust) and required custom scripts
PASSPORT_ENABLED = True
Note
The PASSPORT_ENABLED config will take effect only on the persistence service, which is run on initial deployment.
Alternatively, users can enable/disable the support using the oxTrust UI (Configuration > Organization Configuration menu).
Gluu Radius#
Add the following config in settings.py:
# enable radius service
SVC_RADIUS = True
# enable Gluu Radius support (including the sidebar menu on oxTrust) and required custom scripts
RADIUS_ENABLED = True
Note
The RADIUS_ENABLED config will take effect only on the persistence service, which is run on initial deployment.
Alternatively, users can enable/disable the support using the oxTrust UI (Configuration > Organization Configuration menu).
Casa#
Add the following config in settings.py:
# enable casa service
SVC_CASA = True
# enable required custom scripts
CASA_ENABLED = True
Note
- The CASA_ENABLED config will take effect only on the persistence service, which is run on initial deployment.
- The oxd_server and jackrabbit services must be enabled (see the Choose Document Storage section above for configuring the jackrabbit service).
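Putting the notes above together, a sketch of the settings.py entries for a Casa-enabled deployment (the last two lines assume the jackrabbit service is configured with the JCA document store as described in the Choose Document Storage section):

# enable casa service and its required custom scripts
SVC_CASA = True
CASA_ENABLED = True

# casa requires the oxd_server and jackrabbit services
SVC_OXD_SERVER = True
SVC_JACKRABBIT = True
DOCUMENT_STORE_TYPE = "JCA"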
Deploy the Gluu Server#
Run the following command to install the Gluu Server:
./pygluu-compose.pyz up
Running the command above will show the deployment process:
[I] Attempting to gather external IP address
[I] Using 192.168.100.4 as external IP address
Note
The pygluu-compose.pyz up command will try to detect the external IP address of the host.
In the example above, 192.168.100.4 is detected automatically.
If the detected IP is incorrect, stop the current process and set the IP address explicitly in settings.py (create the file if it does not exist).
# settings.py
HOST_IP = "192.168.100.10" # set the external IP address explicitly
Re-run the pygluu-compose.pyz up
command to load new settings.
Creating consul ... done
Creating vault ... done
The consul
and vault
services are required to provide config and secret layers used by the rest of Gluu Server services.
[I] Checking Vault status
[W] Unable to get seal status in Vault; retrying ...
[I] Initializing Vault with 1 recovery key and token
[I] Vault recovery key and root token saved to vault_key_token.txt
[I] Unsealing Vault manually
[I] Creating Vault policy for Gluu
[I] Enabling Vault AppRole auth
On initial deployment, since Vault has not been configured yet, pygluu-compose.pyz will generate a root token and recovery key to interact with the Vault API, saved as vault_key_token.txt (secure this file, as it contains the recovery key and root token).
pygluu-compose.pyz will also set up a Vault AppRole for interaction between the other services and Vault. Note that by enabling AppRole, vault_role_id.txt and vault_secret_id.txt files will be created under the working directory.
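To verify the generated AppRole credentials, one option is a test login (a sketch, assuming the Vault CLI inside the vault container can reach the local Vault server and that the AppRole auth method is mounted at the default auth/approle path):

# read the AppRole credentials generated in the working directory
ROLE_ID=$(cat vault_role_id.txt)
SECRET_ID=$(cat vault_secret_id.txt)

# perform a test login; a client token is returned on success
docker exec -it vault vault write auth/approle/login \
    role_id="$ROLE_ID" secret_id="$SECRET_ID"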
[I] Attempting to gather FQDN from Consul
[W] Unable to get FQDN from Consul; retrying ...
[W] Unable to get FQDN from Consul; retrying ...
[W] Unable to get FQDN from Consul; retrying ...
Enter hostname [demoexample.gluu.org]:
Enter country code [US]:
Enter state [TX]:
Enter city [Austin]:
Enter oxTrust admin password: ***********
Repeat password: ***********
Enter LDAP admin password: ***********
Repeat password: ***********
Enter email [support@demoexample.gluu.org]:
Enter organization [Gluu]:
After consul and vault have been deployed, the next step is getting the config from consul. If there's no existing config, the user will be prompted for a series of config values, as seen above.
Note
When prompted for the hostname (which will be used as the https://<hostname> address), using a public FQDN is highly recommended.
If there's no way to use a public FQDN, map the VM IP address and the FQDN in the /etc/hosts file.
# /etc/hosts
192.168.100.4 demoexample.gluu.org
Wait a few seconds and the deployment will continue with the rest of the process.
[I] Launching Gluu Server .........................................
[I] Gluu Server installed successfully; please visit https://demoexample.gluu.org
See the checking the deployment logs section below on how to track the progress.
Checking the deployment logs#
The deployment process may take some time. You can keep track of the deployment by using the following command:
./pygluu-compose.pyz logs -f
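To follow a single container instead of the whole deployment, plain Docker commands also work; for example (container names match the service names shown earlier, e.g. consul, vault, ldap, oxauth, unless overridden):

# follow the logs of one container only
docker logs -f oxauth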
Uninstall the Gluu Server#
Run the following command to delete all objects during the deployment:
./pygluu-compose.pyz down
FAQ#
How to use ldapsearch#
docker exec -ti ldap /opt/opendj/bin/ldapsearch \
-h localhost \
-p 1636 \
-Z \
-X \
-D "cn=directory manager" \
-w $LDAP_PASSWORD \
-b "o=gluu" \
-s base \
"objectClass=*"
where $LDAP_PASSWORD is the LDAP admin password given during the installation process.
How to unseal Vault#
There are several ways to unseal Vault:

- Use auto-unseal.
- Re-run the pygluu-compose.pyz up command.
- Quick manual unseal:
  - Get the unseal key from the vault_key_token.txt file.
  - Log in to the Vault container: docker exec -it vault sh.
  - Run the vault operator unseal command (a prompt will appear). Enter the unseal key.
  - Wait a few seconds for the containers to get back to work.
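A condensed sketch of the quick manual unseal, run from the directory containing vault_key_token.txt:

# show the unseal (recovery) key saved at initialization
cat vault_key_token.txt

# run the unseal command inside the Vault container; paste the key at the prompt
docker exec -it vault vault operator unseal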