Element Server Suite Documentation LTS 24.10

Having trouble? Log in to your EMS Account and use the contact form to raise a ticket with Support. See the Support page for more details.

Introduction to Element Server Suite

Element Server Suite provides an enterprise-grade secure communications platform. It can be deployed to either your own environment or in our Element Cloud. Element Server Suite includes the Element Matrix Server, which provides a host of security and privacy features, including:

Further, we also offer Enterprise Support, giving you access to experts in federated, secure communications. This should give you confidence to deploy our platform for your most critical secure communications needs.

Given the flexibility afforded by this platform, there are a number of moving parts to configure. This documentation will step you through configuring and deploying Element Enterprise On-Premise.

The first question you'll face is how you want to deploy!

Deploying Element Server Suite

Support for Standalone and Kubernetes deployments.

Element Enterprise On-Premise can be deployed either to a full Kubernetes (a lightweight container orchestration platform) installation or onto a standalone server based on a single-node Kubernetes installation.

One key benefit of going with a full Kubernetes installation is that you can add more resources and nodes to a cluster as you need them, whereas you are capped at one node with our standalone server.

In the case of our standalone server installation, we deploy to microk8s (a smaller lightweight distribution of Kubernetes), which we then use for deploying our Element application.

Versions

Element Server Suite comes in two subscription tiers with differing feature sets. You can register for a trial of Enterprise Edition by visiting here.

Components

Element Server Suite comprises the following components:

Core Components

Optional Components

VOIP
Monitoring
Bridges

Architecture

This document gives an overview of our secure communications platform architecture: ESS Proposed Kubernetes Deployment Architecture.png


Requirements and Recommendations

Software

Element Enterprise Server

Element Server Suite Download Page

To download the installer you require an Element Server Suite subscription tied to your EMS Account. If you are already logged in, click the link above to access the download page; otherwise log in and then click the Your Account button found in the top-right of the page. Select Downloads under the On-Premise section.

It is highly recommended that you stay on the latest LTS version; by default, only LTS releases will be displayed. However, you can untick the Show LTS Only toggle to see our monthly releases.

For each release you will see download options for the installer, the airgapped package (if your subscription allows) and Element Desktop MSIs:

Once downloaded, copy the installer binary (and the airgapped package if needed) to the machine from which you will run the installer. Remember to ensure you've followed the Requirements and Recommendations page for your environment, and specifically the Operating System specific Prerequisites for your intended deployment method (Standalone or Kubernetes).

ESS Subscription Credentials

As part of the deployment process, you must supply your ESS Subscription credentials.

You can access your Store Username and Token by visiting https://ems.element.io/on-premise/subscriptions, where you will see a page with all of your subscriptions.

token-rotation1.png

For your subscription credentials, click on the View Tokens button. On this page, click Rotate and you will be presented with a new token.

Operating System

The installer binary requires a Linux OS to run, supported platforms include:

Please note that Ubuntu 24.04 LTS is only supported on ESS LTS 24.10 and later. For earlier versions, while configuration can be generated, deployment will fail.

LTS ESS Version | Supported Ubuntu Versions | Supported Enterprise Linux (RHEL, Rocky, etc.) | Python Version Requirements
23.10           | 20.04, 22.04              | 8, 9                                           | Python 3.8-3.10
24.04           | 20.04, 22.04              | 8, 9                                           | Python 3.9-3.11
24.10           | 22.04, 24.04              | 8, 9                                           | Python 3.10-3.12

Element Server Suite 24.04 currently only supports up to Python 3.11

For installation in Standalone mode, i.e. onto the host itself, only the above OSes are supported. For an installation into a Kubernetes environment, make sure you have a Kubernetes platform deployed that you can access from the host running the installer.

Network Requirements

Element Enterprise Server needs to bind and serve content over:

microk8s specifically will need to bind and serve content over:

For more information, see https://microk8s.io/docs/ports.

In a default Ubuntu installation, these ports are allowed through the host firewall; you will still need to ensure these ports are passed through any other firewalls on your network.

For RHEL instances with firewalld enabled, the installer will take care of opening these ports for you.
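As a hedged illustration (check the full port list above and in the microk8s docs for your deployment), you can verify or open the web-facing ports mentioned throughout this documentation (80, 443 and 8443) as follows:

# Ubuntu with ufw enabled: allow the web-facing ports used by ESS
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 8443/tcp
sudo ufw status

# RHEL with firewalld: the installer opens ports for you, but you can verify with
sudo firewall-cmd --list-ports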

Further, you need to make sure that your host is able to access the following hosts on the internet:

In addition, you will also need to make sure that your host can access your distribution's package repositories. As these hostnames can vary, it is beyond the scope of this documentation to enumerate them.

Hardware

Regardless of whether you pick the standalone server or Kubernetes deployment, you will need a base level of hardware to support the application. The general guidance for server requirements is dependent on your Federation settings:

The installer binary requires support for the x86_64 architecture. Note that for Standalone deployments, hosts will need 2 GiB of memory to run both the OS and microk8s, and should have at least 50 GB of free space in /var.

Component-Level Requirements

Please note that the values below are indicative and may vary significantly depending on your setup: the volume of federation traffic, active usage, bridged use cases, integrations enabled, etc. For each profile below:

Synapse Homeserver

The installer comes with default installation profiles which configure workers depending on your setup.

Federation | 1 - 500 Users        | 501 - 2500 Users     | 2501 - 10,000 Users
Closed     | 2 CPU, 2000 MiB RAM  | 6 CPU, 5650 MiB RAM  | 10 CPU, 8150 MiB RAM
Limited    | 2 CPU, 2000 MiB RAM  | 6 CPU, 5650 MiB RAM  | 10 CPU, 8150 MiB RAM
Open       | 5 CPU, 4500 MiB RAM  | 9 CPU, 8150 MiB RAM  | 15 CPU, 11650 MiB RAM

Synapse Postgres Server

The Synapse PostgreSQL server will require the following resources:

Federation | 1 - 500 Users      | 501 - 2500 Users   | 2501 - 10,000 Users
Closed     | 1 CPU, 4 GiB RAM   | 2 CPU, 12 GiB RAM  | 4 CPU, 16 GiB RAM
Limited    | 2 CPU, 6 GiB RAM   | 4 CPU, 18 GiB RAM  | 8 CPU, 28 GiB RAM
Open       | 3 CPU, 8 GiB RAM   | 5 CPU, 24 GiB RAM  | 10 CPU, 32 GiB RAM

Operator & Updater

The Updater memory usage remains at 256Mi. At least 1 CPU should be provisioned for the operator and the updater.

The Operator memory usage scales linearly with the number of integrations you deploy with ESS. Its memory usage will remain low, but might spike up to 256Mi × the number of integrations during deployment and configuration changes.

Synapse Media

The disk usage to expect after a year can be calculated using the following formula:

Media retention can be configured with the configuration option in Synapse/Config/Data Retention of the installer.

Postgres DB size

The disk usage to expect after a year can be calculated using the following formula:

Environment

For each of the components you choose to deploy (excluding postgresql, groupsync and prometheus), you must provide a hostname on your network that meets the following criteria:

It is possible to deploy Element Enterprise On-Premise with self-signed certificates and without proper DNS in place, but this is not ideal as the mobile clients and federation do not work with self-signed certificates.

In addition to hostnames for each component, you will also need a hostname and PEM encoded certificate key/cert pair for your base domain. If we were deploying a domain called example.com and wanted to deploy all of the software, we would have the following hostnames in our environment that needed to meet the above criteria:

Wildcard certificates do work with our application, so it would be possible to have a single certificate that validates both *.example.com and example.com for the above scenario. It is key to include both the base domain and the wildcard in the same certificate in order for this to work.
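As an illustrative check (assuming your certificate is saved as cert.pem), you can confirm that a single certificate covers both the base domain and the wildcard by inspecting its Subject Alternative Names:

# Both example.com and *.example.com should appear in the output
openssl x509 -in cert.pem -noout -text | grep -A1 "Subject Alternative Name"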

Further, if you want to do voice or video across network boundaries (i.e. between people not on the same local network), you will need a TURN server. If you already have one, you do not have to set up coturn. If you do not already have a TURN server, you will want to set up coturn (our installer can do this for you), and if your server is behind NAT, you will need an external IP in order for coturn to work.

Standalone Environment Prerequisites

Before beginning a standalone installation, there are a few things that must be prepared to ensure a successful deployment and functioning installation.

Server Minimum Requirements

It is crucial that your storage provider supports fsync for data integrity.

Check out the ESS Sizing Calculator for further guidance which you can tailor to your specific desired configuration.

Kernel Modules

While the supported operating systems listed above should have this enabled already, please note that microk8s requires the kernel module nf_conntrack to be enabled.

if ! grep nf_conntrack /proc/modules; then
    echo "nf_conntrack" | sudo tee --append /etc/modules
    sudo modprobe nf_conntrack
fi
Network Proxy

If your environment requires proxy access to get to the internet, you will need to make the following changes to your operating system configuration to enable our installer to access the resources it needs over the internet.

Ubuntu Specific Steps

If your company's proxy is http://corporate.proxy:3128, you would edit /etc/environment and add the following lines:

HTTPS_PROXY=http://corporate.proxy:3128
HTTP_PROXY=http://corporate.proxy:3128
https_proxy=http://corporate.proxy:3128
http_proxy=http://corporate.proxy:3128
NO_PROXY=10.1.0.0/16,10.152.183.0/24,127.0.0.1,*.svc
no_proxy=10.1.0.0/16,10.152.183.0/24,127.0.0.1,*.svc

The IP Ranges specified to NO_PROXY and no_proxy are specific to the microk8s cluster and prevent microk8s traffic from going over the proxy.

Enterprise Linux Specific Steps

If your company's proxy is http://corporate.proxy:3128, you would edit /etc/profile.d/http_proxy.sh and add the following lines:

export HTTP_PROXY=http://corporate.proxy:3128
export HTTPS_PROXY=http://corporate.proxy:3128
export http_proxy=http://corporate.proxy:3128
export https_proxy=http://corporate.proxy:3128
export NO_PROXY=10.1.0.0/16,10.152.183.0/24,127.0.0.1,localhost,*.svc
export no_proxy=10.1.0.0/16,10.152.183.0/24,127.0.0.1,localhost,*.svc

The IP Ranges specified to NO_PROXY and no_proxy are specific to the microk8s cluster and prevent microk8s traffic from going over the proxy.

Once your OS specific steps are complete, you will need to log out and back in for the environment variables to be re-read after setting them. If you already have microk8s running, you will need to run the following to have microk8s reload the new environment variables:

microk8s.stop
microk8s.start

If you need to use an authenticated proxy, then the URL schema for both EL and Ubuntu is as follows:

protocol://user:password@host:port

So if your proxy is corporate.proxy and listens on port 3128 without SSL, and requires a username of bob and a password of inmye1em3nt, then your URL would be formatted:

http://bob:inmye1em3nt@corporate.proxy:3128

For further help with proxies, we suggest that you contact your proxy administrator or operating system vendor.

PostgreSQL

The installation requires that you have a PostgreSQL database; if you do not already have one, the standalone installer will set up PostgreSQL on your behalf.

If you already have PostgreSQL, the installation requires that the database is set up with a locale of C and uses UTF8 encoding.

See Synapse Postgres Setup Docs for further details.
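If you are providing your own PostgreSQL server, the following is a minimal sketch of creating a suitable database (the user, database name and password are placeholders, mirroring the AWS RDS example later in this documentation):

sudo -u postgres psql <<'EOF'
-- Locale C and UTF8 encoding, as required by Synapse
CREATE USER synapse_user WITH PASSWORD 'your_password';
CREATE DATABASE synapse WITH ENCODING='UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0 OWNER=synapse_user;
GRANT ALL PRIVILEGES ON DATABASE synapse TO synapse_user;
EOF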

Once set up, or if you have this already, make note of the database name, user, and password, as you will need these when configuring ESS via the installer GUI.

Kubernetes Environment Prerequisites

Before beginning the installation of a Kubernetes deployment, there are a few things that must be prepared to ensure a successful deployment and functioning installation.

PostgreSQL

Before you can begin with the installation you must have a PostgreSQL database instance available. The installer does not manage databases itself.

The database you use must be set to a locale of C and use UTF8 encoding.

Look at the Synapse Postgres Setup Docs for further details as they relate to Synapse. If the locale / encoding are incorrect, Synapse will fail to initialize the database and get stuck in a CrashLoopBackOff cycle.
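To check an existing database before installation, a quick hedged sketch (placeholders in angle brackets):

# Both commands should return C and UTF8 respectively
psql -h <database-host> -U <username> -d <database> \
  -c "SHOW lc_collate;" -c "SHOW server_encoding;"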

Please make note of the database hostname, database name, user, and password as you will need these to begin the installation. For testing and evaluation purposes, you can deploy PostgreSQL to k8s before you begin the installation process:

Kubernetes PostgreSQL Quick Start Example

For testing and evaluation purposes only - Element cannot guarantee production readiness with these sample configurations.


Requires Helm installed locally


If you do not have a database present, it is possible to deploy PostgreSQL to your Kubernetes cluster.


This is great for testing and can work great in a production environment, but only for those with a high degree of comfort with PostgreSQL as well as the trade offs involved with k8s-managed databases.


There are many different ways to do this depending on your organization's preferences - as long as it can create an instance / database with the required locale and encoding it will work just fine. For a simple non-production deployment, we will demonstrate deploying the bitnami/postgresql chart into your cluster using Helm.


You can add the bitnami repo with a few commands:


helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/postgresql
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
bitnami/postgresql      12.5.7          15.3.0          PostgreSQL (Postgres) is an open source object-...
bitnami/postgresql-ha   11.7.5          15.3.0          This PostgreSQL cluster solution includes the P...

Next, you'll need to create a values.yaml file to configure your PostgreSQL instance. This example is enough to get started, but please consult the chart's README and values.yaml for a list of full parameters and options.


auth:
  # This is the necessary configuration you will need for the Installer, minus the hostname
  database: "synapse"

  username: "synapse"
  password: "PleaseChangeMe!"

primary:
  initdb:
    # This ensures that the initial database will be created with the proper collation settings
    args: "--lc-collate=C --lc-ctype=C"

  persistence:
    enabled: true
    # Set this value if you need to use a non-default StorageClass for your database's PVC
    # storageClass: ""
    size: 20Gi


  # Optional - resource requests / requirements
  # These are sufficient for a 10 - 20 user server
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      memory: 2Gi

This example values.yaml file is enough to get you started for testing purposes, but things such as TLS configuration, backups, HA and maintenance tasks are outside of the scope of the installer and this document.


Next, pick a namespace to deploy it to - this can be the same as the Installer's target namespace if you desire. For this example we'll use the postgresql namespace.


Then it's just a single Helm command to install:


# format:
# helm install --create-namespace -n <namespace> <helm-release-name> <repo/chart> -f <values file> (-f <additional values file>)

helm install --create-namespace -n postgresql postgresql bitnami/postgresql -f values.yaml 

Which should output something like this when it is successful:


-- snip --

PostgreSQL can be accessed via port 5432 on the following DNS names from within your cluster:

    postgresql.postgresql.svc.cluster.local - Read/Write connection
-- snip --

This is telling us that postgresql.postgresql.svc.cluster.local will be our hostname for PostgreSQL connections, which is the remaining bit of configuration required for the Installer in addition to the database/username/password set in values.yaml. This will differ depending on what namespace you deploy to, so be sure to check everything over.


If needed, this output can be re-displayed with helm get notes -n <namespace> <release name>, which for this example would be helm get notes -n postgresql postgresql.
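To sanity-check connectivity, you can launch a throwaway client pod; the image tag matches the helm search output above, and the credentials are those set in values.yaml (adjust to your own values):

kubectl run psql-client --rm -it --restart=Never -n postgresql \
  --image=docker.io/bitnami/postgresql:15.3.0 \
  --env="PGPASSWORD=PleaseChangeMe!" \
  --command -- psql -h postgresql.postgresql.svc.cluster.local -U synapse -d synapse -c "SELECT version();"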

How to set up AWS RDS for Synapse

1. Create a database instance with engine type PostgreSQL.

2. From a host that has access to the database host, install the postgresql-client:

sudo apt update
sudo apt install postgresql-client

3. Connect to the RDS PostgreSQL instance using:

psql -h <rds-endpoint> -p <port> -U <username>

4. Create the synapse and integrator databases using the following:

CREATE USER synapse_user WITH PASSWORD 'your_password';

CREATE DATABASE synapse WITH ENCODING='UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0 OWNER=synapse_user;
GRANT ALL PRIVILEGES ON DATABASE synapse TO synapse_user;

CREATE DATABASE integrator WITH ENCODING='UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0 OWNER=synapse_user;
GRANT ALL PRIVILEGES ON DATABASE integrator TO synapse_user;
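You can then verify that both databases were created with the required encoding and collation (a quick hedged check using the same placeholders as above):

psql -h <rds-endpoint> -p <port> -U synapse_user -d synapse -c \
  "SELECT datname, pg_encoding_to_char(encoding) AS encoding, datcollate, datctype FROM pg_database WHERE datname IN ('synapse', 'integrator');"
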
Kubernetes Ingress Controller

The installer does not manage cluster Ingress capabilities since this is typically a cluster-wide concern - you must have this available prior to installation. Without a working Ingress Controller, you will be unable to route traffic to your services without manual configuration.

If you do not have an Ingress Controller deployed please see Kubernetes Installations - Quick Start - Deploying ingress-nginx to Kubernetes for information on how to set up a bare-bones ingress-nginx installation to your cluster.

Kubernetes Ingress (nginx) Quick Start Example

For testing and evaluation purposes only - Element cannot guarantee production readiness with these sample configurations.


Requires Helm installed locally


Similar to the PostgreSQL quick start example, this requires Helm


The kubernetes/ingress-nginx chart is an easy way to get a cluster outfitted with Ingress capabilities.


In an environment where LoadBalancer services are handled transparently, such as a simple test k3s environment with svclb enabled, only a minimal amount of configuration is required.


This example values.yaml file will create an IngressClass named nginx that will be used by default for any Ingress objects in the cluster.


controller:
  ingressClassResource:
    name: nginx
    default: true
    enabled: true
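With that values.yaml in place, installation follows the same Helm pattern as the PostgreSQL example (the chart repository URL is the one published by the kubernetes/ingress-nginx project; the namespace and release name are just examples):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install --create-namespace -n ingress-nginx ingress-nginx ingress-nginx/ingress-nginx -f values.yaml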

However, depending on your cloud provider / vendor (e.g. AWS ALB, Google Cloud Load Balancing, etc.), the configuration for this can vary widely. There are several example configurations for many cloud providers in the chart's README.


You can see what your resulting HTTP / HTTPS IP address for this ingress controller will be by examining the service it creates - for example, in my test environment I have an installed release of the ingress-nginx chart called k3s under the ingress-nginx namespace, so I can run the following:


# format:
# kubectl get service -n <namespace> <release-name>-ingress-nginx-controller
$ kubectl get service -n ingress-nginx k3s-ingress-nginx-controller

NAME                                   TYPE           CLUSTER-IP      EXTERNAL-IP                                               PORT(S)                      AGE
k3s-ingress-nginx-controller            LoadBalancer   10.43.254.210   192.168.1.129                                             80:30634/TCP,443:31500/TCP   79d

The value of EXTERNAL-IP will be the address that you'll need your DNS to point to (either locally via /etc/hosts or LAN / WAN DNS configuration) to access your installer-provisioned services.

Use an existing Ingress Controller

If you have an Ingress Controller deployed already and it is set to the default class for the cluster, you shouldn't have to do anything else.

If you're unsure you can see which providers are available in your cluster with the following command:

$ kubectl get IngressClass
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       40d

And you can check to see whether an IngressClass is set to default using kubectl, for example:

$ kubectl describe IngressClass nginx
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.1.1
              argocd.argoproj.io/instance=ingress-nginx
              helm.sh/chart=ingress-nginx-4.0.17
Annotations:  ingressclass.kubernetes.io/is-default-class: true
Controller:   k8s.io/ingress-nginx
Events:       <none>

In this example cluster there is only an nginx IngressClass and it is already default, but depending on the cluster you are deploying to this may be something you must manually set.
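If your cluster has an IngressClass that is not yet marked as default, a hedged one-liner to set it (using the nginx class from the example above):

kubectl annotate ingressclass nginx ingressclass.kubernetes.io/is-default-class="true" --overwrite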

Airgapped Environments

An airgapped environment is any environment in which the running hosts will not have access to the greater Internet. As such these hosts will be unable to get access to the required software from Element and will also be unable to share telemetry data back with Element.

Your airgapped machine will still require access to your distribution's Linux package repositories from within the airgapped environment. If using Red Hat Enterprise Linux, you will also need access to the EPEL repository in your airgapped environment.

If you are going to be installing into an airgapped environment, you will need a subscription that includes airgapped access; you can then download the airgapped dependencies file element-enterprise-installer-airgapped-<version>-gui.tar.gz, a ~6GB archive that will need to be transferred to your airgapped environment.

Extract the archive using tar -xzvf element-enterprise-installer-airgapped-<version>-gui.tar.gz so that you have an airgapped directory. Once complete, your host will be set up for airgapped installation and ready for when you need to point the installer to that directory during installation.

For Kubernetes deployments, please note that once the image upload has been done, you will need to copy the airgapped/images/images_digests.yml file to the same path on the machine which will be used to render or deploy Element services. This ensures the new image digests are used correctly in the Kubernetes manifests used for deployment.
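For example, copying the digests file to the deployment machine might look like the following (the hostname and destination path are placeholders):

scp airgapped/images/images_digests.yml <user>@<deploy-host>:<path-to>/airgapped/images/images_digests.yml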

ESS Sizing Calculator

Use this tool to understand the required / recommended resources for your desired ESS configuration.

Disclaimer: This tool is intended for guidance only and should be used alongside expert judgment to ensure accurate results. For professional assistance with sizing, please contact our team to arrange a sizing workshop.

The calculator is an interactive tool: you toggle each component of your deployment on or off and it reports the resulting resource requirements. The components you can select, with their image names, are:

Category                    | Component                      | imageName
Base                        | Standalone                     | microk8s
Base                        | Admin UI                       | synapseAdminUI
Base                        | Element Web Client             | elementWeb
Base                        | Well-Known Webserver           | wellknownDelegation
Base                        | Synapse                        | synapse
VOIP                        | Jitsi                          | jitsi
VOIP                        | Element Call                   | elementCall, livekit
ElementX                    | Matrix Authentication Service  | matrixAuthenticationService
ElementX                    | Sliding Sync                   | slidingSync
Auditing                    | AuditBot                       | pipe
Auditing                    | AdminBot                       | pipe
Data Sovereignty & Security | Identity Server                | sydent
Data Sovereignty & Security | Secure Border Gateway          | secureBorderGateway
Data Sovereignty & Security | Matrix Content Scanner         | matrixContentScanner
Data Sovereignty & Security | Push Gateway                   | sygnal
Integrations                | Webhook Integrations           | hookshot
Integrations                | GroupSync                      | groupsync
Integrations                | Integrator                     | integrator
Bridges                     | SIP Bridge                     | sipbridge
Bridges                     | XMPP Bridge                    | bifrost
Bridges                     | IRC Bridge                     | ircbridge
Bridges                     | Telegram Bridge                | mautrixTelegram
Bridges                     | Skype Bridge                   | skypeForBusinessBridge
Bridges                     | WhatsApp Bridge                | mautrixWhatsapp

For your selection, the tool reports Minimum Resources and TOTAL figures for vCPU (Cores) and Memory (MiB), along with a per-component Resource Breakdown that also covers Postgres in Cluster, Operator + Updater and microk8s.

Preparing Element Server Suite PoC

Please reach out to our Element Sales Team if you want to run a Proof of Concept of Element Server Suite.

Note: This guide is for running Proof of Concepts. We don't aim to show every feature here; we want to get you up and running as quickly as possible. This guide currently focuses on connected standalone installations. There are scenarios not covered by this guide, including installing into airgapped / disconnected environments and testing our cloud-based offering.

A Proof of Concept is done in preparation for a subscription sale, with the goal of demonstrating the required capabilities.

Create an account on element.io

Please create an account on element.io. We will enable this account as part of the PoC process and grant you access to the Element Server Suite software packages.

Communication via matrix room

The account team will create a Matrix room to improve communication and invite you. To do this, we will need your Matrix ID (MXID).

If you don't already have an MXID, you can create one here by signing up. This will create an account on matrix.org; you can authenticate via several identity providers.

When you have an MXID, we recommend adding it to your EMS Account via Your Account, Account. You should then send it to the account team so they can add you to the room. You can use the Element Web Client that you used to create the account, or install one of the Element mobile apps from the App Store or Play Store.

PoC preparation

Element Server Suite can be installed in a Kubernetes Cluster or as a standalone installation on top of an operating system (RHEL 8/9 or Ubuntu 20.04/22.04). Most Proof-of-Concept installations select the Standalone Installation on top of a VM, which we recommend for speed and ease of operation.

Click here for an overview of the Element Server Suite. Here is the link detailing the installation process.

Preparation of the VM and Ports

Please set up a VM with 8 vCPUs, 32 GB RAM and 100 GB storage. If this sounds like a lot of resources, the requirements do in fact vary and can be scaled down later if required. Install Ubuntu 20.04 LTS or RHEL 8. Update the system to the latest available patches and create a user to be used for maintaining the Element Server Suite. You can read more about requirements here.

You will need to be able to reach the VM on Ports 80, 443 and 8443.

DNS Names and Certificates

You need to select a base domain for the Server. This can differ from the base domain of the matrix IDs but is often the same. Read more about this in the section on Matrix IDs and Well Known delegation below.

Suppose you have chosen eng.acme.com. The following DNS entries must be prepared and must point to the external IP of the VM.

This results in the following hostnames for you:

Optional for Monitoring and Integrations:

Optional for Video Chat with Jitsi:

Optional for Video Chat with Element Call:

Optional for Element X support:

Optional for the Admin / Audit functionality:

We require certificates for all of these hostnames, including the base domain, to enable SSL/TLS encryption. The quick and easy way is to use the embedded Let's Encrypt support; this is only available in a connected environment. Alternatively, you can provide and use your own certificates.

Matrix IDs & Well-Known delegation

Matrix IDs have the following format:

@USER:SERVER

In our example case the Matrix server is matrix.eng.acme.com. If a user Tom Maier has the username tmaier in your LDAP, this would lead to the MXID @tmaier:matrix.eng.acme.com. This is often not desired, as we like to keep MXIDs short. It is more elegant to drop the "matrix" hostname from the MXIDs, so Tom's MXID would look like this: @tmaier:eng.acme.com.

In order to be able to offer Matrix IDs with the base domain, we recommend setting up a reverse proxy on eng.acme.com which forwards https://eng.acme.com/.well-known/matrix/ to the Matrix/Synapse server on https://matrix.eng.acme.com/.well-known/matrix. Alternatively, you could shorten the hostname part of your MXIDs even further to acme.com; this would require you to put the reverse proxy on acme.com.

The configuration on your Apache web server should be similar to this:

    ProxyPass               /.well-known/matrix/ https://matrix.eng.acme.com/.well-known/matrix/
    ProxyPassReverse        /.well-known/matrix/ https://matrix.eng.acme.com/.well-known/matrix/
    ProxyPreserveHost On
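Once the reverse proxy is in place, you can verify the delegation from any machine with internet access; a correctly configured setup would be expected to return an m.server entry pointing at your Matrix server:

# Expected to return JSON containing "m.server": "matrix.eng.acme.com:443" (or similar)
curl https://eng.acme.com/.well-known/matrix/server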

More about well-known and MXIDs can be found in our Upstream Documentation here and here. Further configurations can be made using the well-known mechanism. An example is documented here.

Authentication and Postgres DB

The quickest setup uses local authentication and users only, which is what we recommend in a Proof-of-Concept situation. In this case, user accounts are created in the local PostgreSQL DB (recommended only up to 300 users) through our Admin UI, or through API scripts for automation. We support many mechanisms for authentication, such as LDAP, SAML2 and OIDC. We recommend configuring these as a second step, and only if required.

You have the option to use an internal or external PostgreSQL DB. We recommend using the internal PostgreSQL DB for Proof-of-Concept installations. The internal PostgreSQL DB is only available when opting for the Standalone Installation on top of an operating system. You will need an external PostgreSQL DB when installing into an existing Kubernetes cluster.

Checklist before starting the installation

Please prepare the above items before starting the installation. Make sure you have:

Don't hesitate to reach out to your Element Sales Team for support. We are here to guide you.

Installing Element Server Suite

First-Time Installation

Make sure you've read the Requirements and Recommendations page so your environment is ready for installation.

Running the Installer

Once the binary is on the device you wish to run the installer from, make it executable using chmod +x, then run it to begin:

chmod +x ./element-installer-enterprise-edition-YY.MM.00-gui.bin
./element-installer-enterprise-edition-YY.MM.00-gui.bin

Kubernetes Deployment Note

If you are performing a Kubernetes deployment and have multiple kubernetes clusters configured in your kubeconfig, you will have to export the K8S_AUTH_CONTEXT variable before running the installer, as per the Operating System notes from the Requirements and Recommendations page:

export K8S_AUTH_CONTEXT=kube_context_name
./element-installer-enterprise-edition-YY.MM.00-gui.bin

With the installer running you will need to open a web browser and browse to one of the presented IPs. You may need to open port 8443 in your firewall to be able to access this address from a different machine. If you are unable to open port 8443 or you are having difficulty connecting from a different machine, you may want to try ssh port forwarding in which you would run:

ssh <host> -L 8443:127.0.0.1:8443

Replacing host with the IP address or hostname of the machine that is running the installer. At this point, with ssh connected in this manner, you should be able to use the https://127.0.0.1:8443 link which will then forward that request to the installer box via ssh.

Upon loading this address for the first time, you may be greeted with a message informing you that your connection isn't private; this is due to the installer initially using a self-signed certificate. Once you have completed deployment, the installer will use a certificate you specify or the certificate supplied for the admin domain in the Domains Section.

To proceed, click 'Advanced' then 'Continue', exact wording may vary across browsers.

The Installer

With the installer running, you will initially be presented with a 'Welcome to Element!' screen; from here you can click the 'Let's Go!' button to start configuring your ESS deployment. The installer has a number of sections to work through to build your configuration before starting deployment; each section and what you can configure is detailed below.

You can click on any section's header, or the provided link below it, to visit that section's detailed breakdown page, which runs through what each specific option in that section does - however, do please note that not all setups will require changing the default settings.

Host Section.

The first section of the ESS installer GUI is the Host section, here you will configure essential details of how ESS will be installed including; deployment type; subscription credentials; PostgreSQL to use; and whether or not your setup is airgapped.

host_page1.png

For detailed guidance / details on each config option, check the Detailed Section Overview. Specifically for airgapped deployments, see the Airgapped notes.

Standalone Deployment

Ensure Standalone is selected, then if you are using LetsEncrypt for your certificates, you will want to make sure that you select Setup Cert Manager and enter an email address for LetsEncrypt to associate with your certificates. If you are using custom certificates or electing to manage SSL certificates yourself, then you will want to select Skip Cert Manager.

Provide your EMS Image Store Username and Token associated with your subscription, which you can find at https://ems.element.io/on-premise/subscriptions.

By default, microk8s will set up persistent volumes in /data/element-deployment and will allow 20GB of space to do this; ESS will configure the default DNS resolvers to Google (8.8.8.8 and 8.8.4.4); and a PostgreSQL database will be created for you. These defaults are suitable for most setups; however, change them as needed, e.g. if you need to use your company's DNS servers. If you elect to set up your own PostgreSQL database, make sure it is configured per the Requirements and Recommendations.

Kubernetes Deployment

Ensure Kubernetes Application is selected, then specify the Kubernetes context name you are deploying into. You can use kubectl config view to see which contexts you have access to. You can opt to skip the updater setup or the operator setup, but unless you know why you are doing that, you should leave those options at their defaults.

Provide your EMS Image Store Username and Token associated with your subscription, which you can find at https://ems.element.io/on-premise/subscriptions.

Airgapped

If you are installing in an airgapped environment, you'll either need to authenticate against your own container repository or download the airgapped package alongside the gui installer. If you choose our airgapped package, extract this somewhere on your system and enter the path to the extracted directory.

user@airgapped:~$ cd /home/user/Downloads/
user@airgapped:~/Downloads$ ls -l
total 7801028
-rwxr-xr-x 1 user user  129101654 Nov  7 15:51 element-installer-enterprise-edition-<version>-gui.bin
-rw-r--r-- 1 user user 7859142151 Nov 11 16:33 element-installer-enterprise-edition-airgapped-<version>-gui.tar.gz
user@airgapped:~/Downloads$ tar xf element-installer-enterprise-edition-airgapped-<version>-gui.tar.gz
user@airgapped:~/Downloads$ cd airgapped/
user@airgapped:~/Downloads/airgapped$ pwd
/home/user/Downloads/airgapped

If you are installing in standalone mode, ensure your system has a default gateway configured, even if it's not used. This is required by microk8s. See https://microk8s.io/docs/install-offline for additional details.

Domains Section.

The second section of the ESS installer GUI is the Domains section, here you will configure the fully-qualified domain names for each of the main components that will be deployed by ESS.

On this page, we specify the domains for our installation. In this example, we have a domain name of example.com, which means our MXIDs would look like @username:example.com.

The domain page performs a check to ensure that the host names provided resolve. Once you get green checks across the board, you can click continue.

For detailed guidance / details on each config option, check the Detailed Section Overview

Certificates Section.

The third section of the ESS installer GUI is the Certificates section, here you will configure the certificates to use for each previously specified domain name.

If you are already serving content on your base domain, please read the Well-Known Delegation notes specifically to understand how you should configure this component's certificates.

If you wish to use your own certificates, they must be in PEM encoded format. For detailed guidance / details on each config option, check the Detailed Section Overview

Database Section.

The fourth section of the ESS installer GUI is the Database section, here you will provide the configuration of the PostgreSQL database you will be using for Synapse.

If you're running in Standalone mode and opted for the installer-deployed PostgreSQL, you will not see this section.

database.png

Make sure you've read the Requirements and Recommendations page so your environment is ready for installation. Specifically for PostgreSQL, ensure you have followed the guidance specific to your deployment:

On this page you simply need to specify the database name, the database host name, the port to connect to, the SSL mode to use, and finally, the username and password to connect with. Once you have completed this section, simply click continue.

For Standalone Deployments, if your database is installed on the same server you are installing ESS to, ensure that the server's public IP address is used. As the container is not sharing the host network namespace, entering 127.0.0.1 will resolve to the container itself and cause the installation to fail.
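A hedged sketch for finding the address to enter instead of 127.0.0.1, and confirming PostgreSQL is reachable on it:

# Show the host's IP addresses; use the LAN/public address in the installer
hostname -I

# PostgreSQL must also be listening on that address; check postgresql.conf, e.g.
# listen_addresses = '*'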

For detailed guidance / details on each config option, check the Detailed Database Section Overview

Media Section.

The fifth section of the ESS installer GUI is the Media section, here you will configure where media will be saved as well as the maximum media upload size.

You can opt to use either a Persistent Volume Claim (default) or an S3 bucket. Selecting S3 will require you to provide your S3 connection details and authentication credentials. You will also be able to adjust the maximum media upload size for your homeserver.

For detailed guidance / details on each config option, check the Detailed Media Section Overview

Cluster Section.

The sixth section of the ESS installer GUI is the Cluster section, here you will configure settings specific to the cluster on which the Element deployment will run.

On standard setups, no options need configuring here so you can click continue.

For setups where, in the Certificates section, you uploaded certificates signed by your own private Certificate Authority, you will need to upload its certificate in PEM encoded format. This should be a full chain certificate, like those uploaded in the Certificates section, including the Root Certificate Authority as well as any Intermediate Certificate Authorities.

If you are in an environment where you have self-signed certificates, you will want to disable TLS verification by clicking Advanced, then scrolling down and unchecking Verify TLS. Please bear in mind that disabling TLS verification and using self-signed certificates is not recommended for production deployments.

If your hostnames are not DNS resolvable, you need to use host aliases, which can be set up here. Click "Advanced" and scroll down to the "Host Aliases" section in "k8s". In here, click "Add Host Aliases" and then specify an IP and the hostnames that resolve to that IP:

hostaliases.png

For detailed guidance / details on each config option, check the Detailed Cluster Section Overview

Kubernetes Deployment

If you are not using OpenShift, you will need to set Force UID GID and Set Sec Comp to Enable under the section Security Context so that it looks like:

seccontext-enable.png

If you are using OpenShift, you should leave the values of Force UID GID and Set Sec Comp set to Auto.

Synapse Section.

The seventh section of the ESS installer GUI is the Synapse section, here you will configure settings specific to your homeserver.

synapse_page.png

While there are lots of options that can be configured in this section, it is generally recommended to complete the first-time setup before toggling on additional features, e.g. Delegated Authentication, Data Retention, etc.

Re-running the installer and configuring these individually after first-time setup is recommended, to make troubleshooting easier should something in this section be misconfigured.

Generally speaking, for first-time setup the default options here can be left as-is, as they can be altered as needed post-deployment. Simply click continue to advance, however see below for details on some options you may wish to alter.

The first setting that you will come to is our built-in performance profiles. Select the appropriate answers for Monthly Active Users and Federation Type to apply our best practices, based on years of running Matrix homeservers.

Setting Monthly Active Users (aka MAU) and Federation Type within the Profile section does not directly set the maximum monthly active users or open/close Federation. These options simply auto-configure the number of underlying pods deployed to handle the advised values.

You will be able to directly configure your desired maximum MAU and Federation in dedicated sections.

The next setting that you will see is whether you want to auto accept invites. The default of Manual will fit most use cases, but you are welcome to change this value.

The next setting is the maximum number of monthly active users (MAU) that you have purchased for your server. Your server will not allow you to go past this value. If you set this higher than your purchased MAU and you go over your purchased MAU, you will need to true up with Element to cover the cost of the unpaid users.

The next setting concerns registration. A server with open registration on the open internet can become a target, so we default to closed registration. You will notice that there is a setting called Custom and this requires explicit custom settings in the additional configuration section. Unless instructed by Element, you will not need the Custom option and should instead pick Closed or Open depending on your needs.

After this, you will see that the installer has generated a random admin password for you. You will want to use the eye icon to view the password and copy this down as you will use this with the user onprem-admin-donotdelete to log into the admin panel after installation.

synapse_page2.png

Continuing, we see telemetry. You should leave this enabled as you are required to report MAU to Element. In the event that you are installing into an environment without internet access, you may disable this so that it does not continue trying to talk to Element. That said, you are still required to generate an MAU report at regular intervals and share that with Element.

For more information on the data that Element collects, please see: What Telemetry Data is Collected by Element?

As mentioned above, there are a lot of options that can be configured here, it is recommended to run through the detailed guidance / details on each config option available on the Detailed Synapse Section Overview

Delegated Auth.

A sub-section of the Synapse section is Delegated Authentication, which allows deferring to OIDC, SAML and LDAP Identity Providers for authentication.

It is not recommended to set this up on first-time install, however should you wish please refer to the dedicated Detailed Delegated Auth Section Overview page.

Federation.

A sub-section of the Synapse section is Federation, found under Advanced, which allows configuration of how your homeserver should federate with other homeservers.

It is not recommended to set this up on first-time install, however should you wish please refer to the dedicated Detailed Federation Section Overview page.

Element Web Section.

The eighth section of the ESS installer GUI is the Element Web section, here you can configure settings specific to the deployed Element Web client.

For almost all setups, nothing needs to be configured; simply click continue.

For airgapped environments, you should click Advanced then enable Use Own URL for Sharing Links.

For detailed guidance / details on each config option, check the Detailed Section Overview

Homeserver Admin Section.

The ninth section of the ESS installer GUI is the Homeserver Admin section, here you can configure settings specific to the deployed Admin Console.

Unless advised by Element, you will not need to configure anything in this section; you will be able to access the homeserver admin via the admin domain specified in the Domains section, logging in with the built-in default Synapse Admin user onprem-admin-donotdelete, whose password is defined in the Synapse section.

If you have enabled Delegated Authentication, the built-in Synapse Admin user onprem-admin-donotdelete will be unable to login unless Allow Local Users Login has been set to Enabled.

See the Delegated Authentication notes for how to promote a user from your Identity Provider to Synapse Admin.

For detailed guidance / details on each config option, check the Detailed Section Overview

Integrator Section.

The final section of the ESS installer GUI when running for the first-time is the Integrator section, here you can configure settings specific to the integrator which is used to send messages to external services.

On first-time setup, only PostgreSQL will need to be configured, and only for Standalone Deployments where you are using an external PostgreSQL, or Kubernetes Deployments where an external PostgreSQL is required.

For Standalone Deployments where the installer is deploying PostgreSQL for you, you will not need to configure anything.

For detailed guidance / details on each config option, check the Detailed Section Overview

The Installation Screen

After all sections you will finally be ready to begin the installation, simply click Install to begin.

installscreen.png

Depending on your OS setup, you may notice the installer hang, or it may directly ask for a password. Simply go back to the terminal where you are running the installer; you will see that you are being asked for the sudo password:

installstart1.png

sudoask.png

Provide your sudo password and the installation will continue. You will know the installer has finished when you see the Play Recap; as long as nothing failed, the install was a success.

For Standalone Deployments, when running the installer for the first time, you will be prompted to log out and back in again to allow Linux group membership changes to be refreshed. It is advised to simply cancel the running installer (CTRL + C) then reboot, e.g. sudo reboot now. Then re-run the installer, return to the Installation Screen and click Install again. You will only have to perform this step once per server.

Verifying Your Installation

Once the installation has finished, it can take as much as 15 minutes on a first run for everything to be configured and set up. You can use:

watch kubectl get pods -n element-onprem

This will show the status of all pods; simply wait until all pods have come up and stabilised, showing as Ready. You can also keep track of the Current Deployment Status on the Installation Screen, which will reflect when everything is fully ready.
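As an alternative to watching, a hedged one-liner that blocks until all pods report Ready, with a 15-minute timeout matching the guidance above:

kubectl wait --for=condition=Ready pods --all -n element-onprem --timeout=900s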

What's Next?

Once your installation has been verified you should stop the running installer with CTRL + C then re-run it. You should notice that, instead of an IP, you are given a URL matching the Synapse Admin domain you configured in the Domains section, but on port 8443.

When the installer detects a successful installation, it will change from the first-time run interface to the Admin Console UI. Here you can:

Check out the Post-Installation Essentials for additional information and resources.

Core Component Sections

You have already run through all these sections; however, you may wish to dive deeper into each to fine-tune your configuration as required. You can find detailed breakdowns of each config option for these sections in the Installation of Core Components chapter, as well as more advanced options detailed within the Advanced Configuration chapter.

The Integrations Section

This new section allows you to install new integrations to your deployment, you can find detailed installation instructions for each integration in the Integrations chapter.

You can find a full list of integrations available from the Introduction to Element Server Suite page.

Reconfiguring an existing Installation

Simply re-run the installer and run through any sections where you wish to adjust your config. Make sure to hit Save at the bottom of any changed sections, then hit Deploy and Start Deployment.

Upgrading an existing Installation

First, before downloading a new version of the installer, it is important to check all upgrade notes that may affect you (any since the version you are currently on). You can check all upgrade notes specific to an LTS from its associated book's ESS LTS YY.MM Change Logs and Upgrade Notes page, i.e. from this book (LTS 24.10) see ESS LTS 24.10 Change Logs and Upgrade Notes.

If upgrading from an older LTS to a newer one, it is highly recommended to first upgrade to the latest version of the LTS you are currently running. Then perform another upgrade to the latest version of the next LTS.

Next, download the latest version of the installer, transfer it to the device where your .element-enterprise-server configuration exists and make it executable using chmod +x.

When you first run a new version of the installer, your config may be upgraded. It is highly recommended to make a backup of your config directory. See Where are the Installer Configuration Files for more information.

Once the config upgrade is complete, you will be able to access the installer UI. Simply go through all sections within the installer, re-confirm all options (making sure to save any changes / click save on any pages where it is not greyed out), then hit Deploy.

Performing upgrades with GroupSync installed

If you have the GroupSync integration installed, please ensure you enable Dry Run mode.

Once deployment is complete, you can confirm via the GroupSync pod logs that everything is running as expected:

# Confirm the GroupSync Pod Name
kubectl get pods -n element-onprem | grep group

# Replace POD_NAME in the command below
kubectl logs POD_NAME -n element-onprem

If everything looks as expected, please re-deploy with Dry Run disabled to resume GroupSync functionality.

Post-Installation Essentials

End-User Documentation

After completing the installation you can share our User Guide PDF to help orient and onboard your users to Element! Or visit the Element Support book.

Where are the Installer Configuration Files

Everything that you have configured via the Element Server Suite installer is saved to configuration files placed in the .element-enterprise-server directory, found in the home directory of the user who ran the installer. In this directory, you will find a subdirectory called config that contains the actual configuration files - keep these backed up.
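A minimal backup sketch, assuming the installer was run as the current user:

tar -czf ess-config-backup-$(date +%F).tar.gz -C ~ .element-enterprise-server/config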

Running the Installer unattended

It is possible to run the installer without using the GUI provided that you have a valid set of configuration files in the .element-enterprise-server/config directory.

Using this method, you could use the GUI as a configuration editor and then take the resulting configuration and modify it as needed for further installations.

This method also makes it possible to set things up once and then run future updates without having to use the GUI.

See the Running the installer unattended section from the Automating ESS Deployment doc.

Manually creating your first user

It is highly recommended to use the Admin Console to create new users, you can see the Using the Admin Tab page for more details, specifically the Adding Users section.

However you can also create users from your terminal, by running the following command:

$ kubectl --namespace element-onprem exec --stdin --tty \
    first-element-deployment-synapse-main-0 \
    -- register_new_matrix_user --config /config/rendered/instance.yaml

New user localpart: your_username
Password: 
Confirm password: 
Make admin [no]: yes
Sending registration request...
Success!

Make sure to enter yes on Make admin if you wish to use this user on the installer or standalone Admin page.

Please note, you should be using the Admin page or the Synapse Admin API instead of kubectl/register_new_matrix_user to create subsequent users.

Standalone Deployment microk8s Specifics

Cleaning up images cache

The installer, from version 24.02, comes with the tool crictl, which lets you interact with the microk8s containerd daemon.

After upgrading, once all pods are running, you might want to run the following command to clean up old images:

~/.element-enterprise-server/installer/.install-env/bin/crictl -r unix:///var/snap/microk8s/common/run/containerd.sock rmi --prune

Upgrading microk8s

Prior to versions 24.04.05

Upgrading microk8s relies on uninstalling, rebooting the machine, and reinstalling ESS on the new version. It thus involves downtime.

To upgrade microk8s, please run the installer with: ./<installer>.bin --upgrade-cluster.

The machine will reboot during the process. Once it has rebooted, log in as the same user and run: ./<installer>.bin unattended. ESS will be reinstalled on the upgraded microk8s cluster.

After versions 24.04.05

microk8s will be upgraded gracefully and automatically when the new installer is used. The upgrade involves upgrading the addons, and might involve a downtime of a couple of minutes while it runs.

Upgrading an existing Installation

See Upgrading an existing Installation from the Installing Element Server Suite page for details.

Installation of Core Components


Host Section


The first section of the ESS installer GUI is the Host section, here you will configure essential details of how ESS will be installed including; deployment type; subscription credentials; PostgreSQL to use; and whether or not your setup is airgapped.

Settings configured via the UI in this section will mainly be saved to your cluster.yml. If performing a Kubernetes deployment, you will also be able to configure Host Admin settings, which will save configuration into both internal.yml and deployment.yml.

Depending on your environment you will need to select either Standalone or Kubernetes Application. Standalone will install microk8s locally on your machine, and deploy to it so all pods are running locally on the host machine. Kubernetes Application will deploy to your Kubernetes infrastructure in a context you will need to have already set up via your kube config.

Deployment (Standalone)

Install

Config Example
spec:
  connectivity:
    dockerhub:
      password: example
      username: example
  install:
    emsImageStore:
      password: example
      username: example
    webhooks:
      caPassphrase: example
    # Options unique to selecting Standalone
    certManager:
      adminEmail: example@example.com
    microk8s:
      dnsResolvers:
        - 8.8.8.8
        - 8.8.4.4
      postgresInCluster:
        hostPath: /data/postgres
        passwordsSeed: example
    operatorUpdaterDebugLogs: false
    useLegacyAuth: false

An example of the cluster.yml config generated when selecting Standalone; note that no specific flag within the config selects between Standalone and Kubernetes. If you choose to manually configure ESS, bypassing the GUI, ensure only config options specific to how you wish to deploy are provided.

Select your deployment type here, if you've jumped ahead you should first read our Introduction to Element Server Suite and then see our Requirements and Recommendations which details the environment specifics needed for each deployment type.

Debug Logging

Config Example
spec:
  install:
    operatorUpdaterDebugLogs: false

Enabling this option will run the operator and updater with debug logging. You should leave this disabled unless you are experiencing issues.

Legacy Auth

Config Example
spec:
  install:
    useLegacyAuth: false

Disabled by default, unless upgrading from a previous LTS version lacking MAS support. Migrating to MAS from legacy authentication is not currently supported.

New to LTS 24.10, authentication by default uses the Matrix Authentication Service (MAS). This configurable option allows you to disable the use of MAS and revert back to the legacy authentication offered in previous versions of ESS.

Once you have deployed for the first time, you cannot enable or disable Legacy Auth. If you require SAML delegated authentication, or wish to use the GroupSync integration, ensure you enable Legacy Auth prior to deployment.

Cert Manager

Config Example
spec:
  install:
    # certManager: {} # When 'Skip Cert Manager' selected
    certManager:
      adminEmail: example@example.com

You should keep this enabled if you will be using Let's Encrypt to verify your domain and generate your certificates; simply provide the email address to which certificate expiry notices will be sent.

If you plan to upload your own certificates, or they will be Externally Managed, you should select Skip Cert Manager.

EMS Image Store

Config Example
spec:
  install:
    emsImageStore:
      password: token
      username: test

Here you will need to provide your EMS Image Store Username and Token associated with your subscription, which you can find at https://ems.element.io/on-premise/subscriptions.

If you forget your token and hit 'Refresh' in the EMS Control Panel, you will need to ensure you redeploy your instance with the new token - otherwise subsequent deployments will fail.

MicroK8s

Config Example
spec:
  install:
    microk8s:
      persistentVolumesPath: /data/element-deployment
      registrySize: 25Gi

It is unlikely you will need to adjust these values; it is highly recommended to leave them at their defaults.

If you encounter a requirement to clean up your images cache, see the Cleaning up images cache section from the Post-Installation Essentials page.

DNS Resolvers

Config Example
spec:
  install:
    microk8s:
      dnsResolvers:
        - 8.8.8.8
        - 8.8.4.4

Defaulting to 8.8.8.8 and 8.8.4.4, the DNS server IPs set here will be used by all deployed pods. Click Add more DNS Resolvers to add additional entries as required.

Nginx Extra Configuration

Config Example
spec:
  install:
    microk8s:
      # Not present when disabled
      nginxExtraConfiguration:
        custom-http-errors: '"404"'
        server-snippet: >-
          error_page 404 /404.html; location = /404.html { internal; return 200
          "<p>Hello World!</p>"; }

As linked via the ESS installer GUI, see the Ingress-Nginx Controller ConfigMaps documentation for the options that can be configured.

Example

The below example is for demonstration purposes only, you should follow the linked guidance before adding extra configuration.

For example, if you wanted to replace the standard 404 error page, you could do this using both custom-http-errors and server-snippet. To configure via the installer, simply specify custom-http-errors as the Name and click Add to Nginx Extra Configuration, then provide the required value in the newly created field:

Repeat for server-snippet:

As noted above, this example only demonstrates how to configure the Nginx Extra Configuration; it is not recommended to use this config as-is. Ideally your web server should manage traffic that would otherwise result in a 404 being served by ESS.

PostgreSQL in Cluster

Config Example
spec:
  install:
    microk8s:
      # postgresInCluster: {} # If 'External PostgreSQL Server' selected
      postgresInCluster:
        hostPath: /data/postgres
        passwordsSeed: example

Only available in Standalone deployments, this option has the installer deploy PostgreSQL for you, removing the requirement to configure PostgreSQL connection and authentication credentials in later parts of the installer. It is highly recommended to keep the default settings if you opt for this approach.

If you already have an external PostgreSQL server you wish to use, make sure you have followed the PostgreSQL Standalone Environment Prerequisites detailed on the Requirements and Recommendations page. Selecting this option will present an additional Database section in the installer process.

Internal Webhooks

Config Example
spec:
  install:
    webhooks:
      caPassphrase: YpiNQMMzBjalfVPQqxcxO4e211YFR5

You should not need to change this; a unique CA passphrase will be generated on first run of the installer and is used by the internal CA to self-sign certificates.

Deployment (Kubernetes Application)

Install

Config Example
spec:
  connectivity:
    dockerhub:
      password: example
      username: example
  install:
    emsImageStore:
      password: example
      username: example
    webhooks:
      caPassphrase: example
    # Options unique to selecting Kubernetes Application
    clusterDeployment: true
    kubeContextName: example
    namespaces: {}
    skipElementCrdsSetup: false
    skipOperatorSetup: false
    skipUpdaterSetup: false
    operatorUpdaterDebugLogs: false
    useLegacyAuth: false

An example of the cluster.yml config generated when selecting Kubernetes Application. Note that no specific flag is used within the config to select between Standalone and Kubernetes; if you choose to manually configure ESS, bypassing the GUI, ensure only config options specific to how you wish to deploy are provided.

Select your deployment type here. If you've jumped ahead, you should first read our Introduction to Element Server Suite and then see our Requirements and Recommendations, which detail the environment specifics needed for each deployment type.

Cluster Deployment

Config Example
spec:
  install:
    clusterDeployment: true

Deploy the operator & the updater using Cluster Roles.

Kube Context Name

Config Example
spec:
  install:
    kubeContextName: example

The name of the Kubernetes context you have already set up that ESS should be deployed into.

Debug Logging

Config Example
spec:
  install:
    operatorUpdaterDebugLogs: false

Enabling this option will run the operator and updater with debug logging. Leave this disabled unless you are experiencing issues.

Legacy Auth

Config Example
spec:
  install:
    useLegacyAuth: false

Disabled by default, unless upgrading from a previous LTS version lacking MAS support. Migrating to MAS from legacy authentication is not currently supported.

New to LTS 24.10, authentication by default uses the Matrix Authentication Service (MAS). This configurable option allows you to disable the use of MAS and revert back to the legacy authentication offered in previous versions of ESS.

Once you have deployed for the first time, you cannot enable or disable Legacy Auth. If you require SAML delegated authentication, or wish to use the GroupSync integration, ensure you enable Legacy Auth prior to deployment.

Skip Setup Options

Config Example
spec:
  install:
    skipElementCrdsSetup: false
    skipOperatorSetup: false
    skipUpdaterSetup: false

Selecting these will allow you to skip the setup of the Element CRDs, Operator and Updater as required.

EMS Image Store

Config Example
spec:
  install:
    emsImageStore:
      password: token
      username: test

Here you will need to provide your EMS Image Store Username and Token associated with your subscription, which you can find at https://ems.element.io/on-premise/subscriptions.

If you forget your token and hit 'Refresh' in the EMS Control Panel, you will need to ensure you redeploy your instance with the new token - otherwise subsequent deployments will fail.

Namespaces

Config Example
spec:
  install:
    # namespaces: {} # When left as default namespaces
    # namespaces: # When `Create Namespaces` is disabled
    #   createNamespaces: false
    namespaces: # When custom namespaces are provided
      elementDeployment: element-example # Omit any that should remain as default
      operator: operator-example
      updater: updater-example

Allows you to specify the namespaces you wish to deploy into, with the additional option to create them if they don't exist.
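
If Create Namespaces is disabled and you manage them yourself, you can create the namespaces ahead of time; a minimal sketch using the example names above:

kubectl create namespace element-example
kubectl create namespace operator-example
kubectl create namespace updater-example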

Namespace-scoped Deployments

Namespace-scoped deployments in Kubernetes offer a way to organize and manage resources within specific namespaces rather than globally across the entire cluster.

Preparing the Cluster

Installing the Helm Chart Repositories

The first step is to start on a machine with helm v3 installed and configured against your Kubernetes cluster, then pull down the two charts that you will need.

First, let's add the element-updater repository to helm:

helm repo add element-updater https://registry.element.io/helm/element-updater --username ems_image_store_username --password 'ems_image_store_token'

Replace ems_image_store_username and ems_image_store_token with the values provided to you by Element.

Secondly, let's add the element-operator repository to helm:

helm repo add element-operator https://registry.element.io/helm/element-operator --username ems_image_store_username --password 'ems_image_store_token'

Replace ems_image_store_username and ems_image_store_token with the values provided to you by Element.

Now that we have the repositories configured, we can verify this by:

helm repo list

and you should see the following in that output:

NAME                    URL                                               
element-operator        https://registry.element.io/helm/element-operator
element-updater         https://registry.element.io/helm/element-updater

Deploy the CRDs

Write the following values.yaml file:

clusterDeployment: true
deployCrds: true
deployCrdRoles: true
deployManager: false

To install the CRDs with the helm charts, simply run:

helm install element-updater element-updater/element-updater -f values.yaml
helm install element-operator element-operator/element-operator -f values.yaml

Now at this point, you should have the following CRDs available:

[user@helm ~]$  kubectl get crds | grep element.io
elementwebs.matrix.element.io                         2023-10-11T13:23:14Z
wellknowndelegations.matrix.element.io                2023-10-11T13:23:14Z
elementcalls.matrix.element.io                        2023-10-11T13:23:14Z
hydrogens.matrix.element.io                           2023-10-11T13:23:14Z
mautrixtelegrams.matrix.element.io                    2023-10-11T13:23:14Z
sydents.matrix.element.io                             2023-10-11T13:23:14Z
synapseusers.matrix.element.io                        2023-10-11T13:23:14Z
bifrosts.matrix.element.io                            2023-10-11T13:23:14Z
lowbandwidths.matrix.element.io                       2023-10-11T13:23:14Z
synapsemoduleconfigs.matrix.element.io                2023-10-11T13:23:14Z
matrixauthenticationservices.matrix.element.io        2023-10-11T13:23:14Z
ircbridges.matrix.element.io                          2023-10-11T13:23:14Z
slidingsyncs.matrix.element.io                        2023-10-11T13:23:14Z
securebordergateways.matrix.element.io                2023-10-11T13:23:14Z
hookshots.matrix.element.io                           2023-10-11T13:23:14Z
matrixcontentscanners.matrix.element.io               2023-10-11T13:23:14Z
sygnals.matrix.element.io                             2023-10-11T13:23:14Z
sipbridges.matrix.element.io                          2023-10-11T13:23:14Z
livekits.matrix.element.io                            2023-10-11T13:23:14Z
integrators.matrix.element.io                         2023-10-11T13:23:14Z
jitsis.matrix.element.io                              2023-10-11T13:23:14Z
mautrixwhatsapps.matrix.element.io                    2023-11-15T09:03:48Z
synapseadminuis.matrix.element.io                     2023-10-11T13:23:14Z
synapses.matrix.element.io                            2023-10-11T13:23:14Z
groupsyncs.matrix.element.io                          2023-10-11T13:23:14Z
pipes.matrix.element.io                               2023-10-11T13:23:14Z
elementdeployments.matrix.element.io                  2023-10-11T13:34:25Z
chatterboxes.matrix.element.io                        2023-11-21T15:55:59Z

Namespace-scoped role

In the namespace where the ESS deployment will happen, create the following role and role bindings to give a user permissions to deploy ESS:
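
The exact manifests may vary by release; as an illustrative sketch only (all names are hypothetical), a namespace-scoped Role and RoleBinding granting access to the matrix.element.io resources might look like:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ess-deployer # Hypothetical name
  namespace: element-onprem # The namespace ESS will deploy into
rules:
  # Manage the Element CRD instances installed above
  - apiGroups: ["matrix.element.io"]
    resources: ["*"]
    verbs: ["*"]
  # Manage the core resources the deployment creates
  - apiGroups: ["", "apps", "batch", "networking.k8s.io"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ess-deployer
  namespace: element-onprem
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ess-deployer
subjects:
  - kind: User
    name: ess-user # Hypothetical user being granted deploy permissions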

Once your cluster is prepared, you can set up your namespace-scoped deployment by configuring these settings:

Internal Webhooks

Config Example
spec:
  install:
    webhooks:
      caPassphrase: YpiNQMMzBjalfVPQqxcxO4e211YFR5

Connectivity

Config Example
spec:
  connectivity:

Connected

Config Example
spec:
  connectivity:
    # dockerhub: {} # When Username & Password is disabled per default
    dockerhub:
      password: password
      username: test

Connected means the installer will use the previously provided EMS Image Store credentials to pull the required pod images as part of deployment. Optionally, you can specify DockerHub credentials to reduce potential rate limiting.

Airgapped

Config Example
spec:
  connectivity:
    airgapped:
      localRegistry: localhost:32000
      sourceDirectory: /home/ubuntu/airgapped/
      # uploadCredentials not present if `Target an Existing Local Image Registry` selected
      # uploadCredentials: {} # If 'Upload without Authentication'
      uploadCredentials:
        password: example
        username: example

An airgapped environment is any environment in which the running hosts do not have access to the wider internet. This means these hosts are unable to fetch the required software from Element, and are also unable to share telemetry data back with Element.

Selecting Airgapped means the installer will rely on images stored in a registry local to your environment. By default the installer will host this registry itself, uploading images found within the specified Source Directory; alternatively, you can specify a registry already present in your environment.

Getting setup within an Airgapped environment

Alongside each Installer binary available for download, for those customers with airgapped permissions, is an equivalent airgapped package element-enterprise-installer-airgapped-<version>-gui.tar.gz. Download and copy this archive to the machine running the installer, then use tar -xzvf element-enterprise-installer-airgapped-<version>-gui.tar.gz to extract its contents. You should see a folder airgapped with the following directories within:

Copy the full path of the root airgapped folder, for instance /home/ubuntu/airgapped, and paste it into the Source Directory field. Should you ever update the ESS installer binary, ensure you delete and replace this airgapped folder with its updated equivalent.

Your airgapped machine will still require access to Linux package repositories from within your airgapped environment, depending on your OS. If using Red Hat Enterprise Linux, you will also need access to the EPEL repository in your airgapped environment.

Host Admin

Config Example

The Host Admin section allows you to configure the domain name and certificates used when serving the ESS installer GUI directly on the host - changes here will take effect the next time you run the installer.

Installation of Core Components

Domains Section


The second section of the ESS installer GUI is the Domains section, here you will configure the fully-qualified domain names for each of the main components that will be deployed by ESS.

The domain names configured via the UI in this section will be saved to your deployment.yml under each of the components' k8s: ingress: configuration.

This section covers all domain names used by the main components present in the installer; additional domains may be required when enabling specific integrations - you will specify integration-specific domain names on each respective integration's page.

Config Example
spec:
  components:
    elementWeb:
      k8s:
        ingress:
          fqdn: element.example.com
    integrator:
      k8s:
        ingress:
          fqdn: integrator.example.com
    matrixAuthenticationService:
      k8s:
        ingress:
          fqdn: mas.example.com
    synapse:
      k8s:
        ingress:
          fqdn: synapse.example.com
    synapseAdmin:
      k8s:
        ingress:
          fqdn: admin.example.com
  global:
    config:
      domainName: example.com

Simply provide the base domain name for your deployment, then the sub-domains to use for Synapse (Matrix Homeserver), Element Web (Hosted Matrix Client), Synapse Admin (Hosted Admin Console) and Integrator.
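
Before deploying, you may wish to confirm each configured FQDN resolves to your server; a quick sketch using dig with the example domains above:

for d in example.com element.example.com synapse.example.com admin.example.com integrator.example.com mas.example.com; do
  dig +short "$d"
done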

Changing your base Domain Name

If you have already deployed your server, it is not possible to change your base domain name. To do so, you will need to wipe all data and start anew.

Installation of Core Components

Certificates Section


The third section of the ESS installer GUI is the Certificates section; here you will configure the certificates to use for each previously specified domain name.

Certificate details configured via the UI in this section will be saved to your deployment.yml under each of the components' k8s: ingress: configuration with the cert contents (if manually uploaded) being saved to a secrets.yml in Base64.

This section covers all certificates to be used by the main components deployed by the installer; additional certificates may be required when enabling specific integrations - you will specify integration-specific certificates on each respective integration's page.

Config Example

You will need to configure certificates for the following components:

For each component, you will be presented with 4 options on how to configure the certificate.

Certmanager Let's Encrypt

Config Example
spec:
  components:
    componentName: # `elementWeb`, `integrator`, `synapse`, `synapseAdmin`, `wellKnownDelegation`
      k8s:
        ingress:
          tls:
            certmanager:
              issuer: letsencrypt
            mode: certmanager
      secretName: component # Not used with 'Certmanager Let's Encrypt'

Select this to use Let's Encrypt to generate the certificates used, do not edit the Issuer field as no other options are available at this time.

Certificate File

Config Example
spec:
  components:
    componentName: # `elementWeb`, `integrator`, `synapse`, `synapseAdmin`, `wellKnownDelegation`
      k8s:
        ingress:
          tls:
            mode: certfile
            certificate:
              certFileSecretKey: componentCertificate
              privateKeySecretKey: componentPrivateKey
      secretName: component
apiVersion: v1
kind: Secret
metadata:
  name: component
  namespace: element-onprem
data:
  componentCertificate: >-
    exampleBase64EncodedString
  componentPrivateKey: >-
    exampleBase64EncodedString
---

Select this option to manually upload the certificates that should be used to serve the specified domain. Make sure your certificate files are in the PEM encoded format; it is strongly advised to include the full certificate chain within the file to reduce the likelihood of certificate-based issues post deployment.
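
If you want to sanity-check a certificate file before uploading it, something like the following can help (filenames are hypothetical):

# Confirm the file is PEM encoded and inspect its subject, issuer and expiry
openssl x509 -in component.crt -noout -subject -issuer -enddate

# Verify the certificate against the CA chain you intend to include
openssl verify -CAfile ca-chain.pem component.crt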

Existing TLS Certificates in the Cluster

Config Example
spec:
  components:
    componentName: # `elementWeb`, `integrator`, `synapse`, `synapseAdmin`, `wellKnownDelegation`
      k8s:
        ingress:
          tls:
            mode: existing
            secretName: example
      secretName: component # Not used with 'Existing TLS Certificates in the Cluster'

This option is most applicable to Kubernetes deployments, however it can be used with Standalone. Select this option when secrets containing the certificates are already present and managed within the cluster; provide the name of the secret that contains the TLS certificates for ESS to use.

Externally Managed

Config Example
spec:
  components:
    componentName: # `elementWeb`, `integrator`, `synapse`, `synapseAdmin`, `wellKnownDelegation`
      k8s:
        ingress:
          tls:
            mode: external
      secretName: component # Not used with 'Externally Managed'

Select this option if certificates are handled in front of the cluster; TLS will not be configured on the ingress for each component.

Well-Known Delegation

If you already host a site on your base domain, i.e. example.com, then you should either ensure your web server defers to the Well-Known Delegation component to serve the .well-known files or you should set Well-Known Delegation to Externally Managed and manually serve those files.

This is because Matrix clients and servers need to be able to request https://example.com/.well-known/matrix/client and https://example.com/.well-known/matrix/server respectively to work properly.

The web server hosting the base domain should either forward requests for /.well-known/matrix/client and /.well-known/matrix/server to the Well-Known Delegation component for it to serve, or a copy of the .well-known files will need to be added directly on the example.com web server.

If you don't already host a site on your base domain, then the Well-Known Delegation component hosts the .well-known files and serves the base domain, i.e. example.com, itself.
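
As an illustrative sketch only (not the definitive configuration), a forwarding rule on the base domain's nginx web server might look like the below, assuming a hypothetical hostname at which your ESS ingress is reachable:

location /.well-known/matrix/ {
    # Forward .well-known requests to the ESS ingress (hostname hypothetical)
    proxy_pass https://ess-ingress.example.com/.well-known/matrix/;
    # Preserve the base domain so the ingress routes to Well-Known Delegation
    proxy_set_header Host example.com;
}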

Getting the contents of the .well-known files
  1. Run kubectl get cm/first-element-deployment-well-known -n element-onprem -o yaml on your ESS host; it will output something similar to the below:

    Config Example
    apiVersion: v1
    data:
      client: |-
        {
            "m.homeserver": {
                "base_url": "https://synapse.example.com"
            }
        }
      server: |-
        {
            "m.server": "synapse.example.com:443"
        }
    kind: ConfigMap
    metadata:
      creationTimestamp: "2024-06-13T09:32:52Z"
      labels:
        app.kubernetes.io/component: matrix-delegation
        app.kubernetes.io/instance: first-element-deployment-well-known
        app.kubernetes.io/managed-by: element-operator
        app.kubernetes.io/name: well-known
        app.kubernetes.io/part-of: matrix-stack
        app.kubernetes.io/version: 1.24-alpine-slim
        k8s.element.io/crdhash: 9091d9610bf403eada3eb086ed2a64ab70cc90a8
      name: first-element-deployment-well-known
      namespace: element-onprem
      ownerReferences:
      - apiVersion: matrix.element.io/v1alpha1
        kind: WellKnownDelegation
        name: first-element-deployment
        uid: 24659493-cda0-40f0-b4db-bae7e15d8f3f
      resourceVersion: "3629"
      uid: 7b0082a9-6773-4a28-a2a9-588a4a7f7602
    
  2. Copy the contents of the two supplied files (client and server) from the output into their own files:

    • Filename: client
      {
          "m.homeserver": {
              "base_url": "https://synapse.example.com"
          }
      }
      
    • Filename: server
      {
          "m.server": "synapse.example.com:443"
      }
      
  3. Configure your web server such that each file is served correctly, e.g. for a base domain of example.com:

    • https://example.com/.well-known/matrix/client
    • https://example.com/.well-known/matrix/server
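
Once in place, you can confirm each file is being served correctly, e.g.:

curl https://example.com/.well-known/matrix/client
curl https://example.com/.well-known/matrix/server
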
Installation of Core Components

Database Section


This section of the ESS installer GUI will only be present if you are using the Kubernetes deployment option or you have opted to use your own PostgreSQL for a Standalone deployment.

If you have not yet set up your PostgreSQL, you should ensure you have done so before proceeding, see the relevant PostgreSQL section from the Requirements and Recommendations page:

All settings configured via the UI in this section will be saved to your deployment.yml, with the contents of secrets being saved to secrets.yml. You will find specific configuration examples in each section.

Config Example

By default, if you do not change any settings on this page, defaults will be added to your configuration file/s (see example below).

Config Example
spec:
  components:
    synapse:
      config:
        postgresql:
          database: synapse
          host: db.example.com
          passwordSecretKey: postgresPassword
          user: test-username

PostgreSQL

Database

Config Example
spec:
  components:
    synapse:
      config:
        postgresql:
          database: synapse

Enter the name of the PostgreSQL Database you configured per the previously mentioned Requirements and Recommendations to use for Synapse.

Host

Config Example
spec:
  components:
    synapse:
      config:
        postgresql:
          host: db.example.com

Enter the fully qualified domain name of the PostgreSQL Database you configured per the previously mentioned Requirements and Recommendations to use for Synapse.

Port

Config Example
spec:
  components:
    synapse:
      config:
        postgresql:
          # port not present when left as default 5432
          port: 5432

Defaults to 5432; either keep this if correct, or provide the required port of the PostgreSQL database you configured per the previously mentioned Requirements and Recommendations to use for Synapse.

SSL Mode

Config Example
spec:
  components:
    synapse:
      config:
        postgresql:
          # sslMode not present when left as default `require` 
          sslMode: require
          # sslMode: disable
          # sslMode: allow
          # sslMode: prefer
          # sslMode: verify-ca
          # sslMode: verify-full

Defaults to Require - it is not recommended to disable SSL, so for most setups this setting should be left as the default.

You should adjust this to accommodate your environment as required; the options available are:

User

Config Example
spec:
  components:
    synapse:
      config:
        postgresql:
          user: test-username

Enter the username of a user who can access the PostgreSQL Database you configured per the previously mentioned Requirements and Recommendations to use for Synapse.

PostgreSQL Password

Config Example

Enter the password for the specified user who can access the PostgreSQL Database you configured per the previously mentioned Requirements and Recommendations to use for Synapse.
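
Before deploying, you may want to confirm the configured credentials can reach the database; a quick sketch using psql with the example values from this page (psql will prompt for the password):

psql "host=db.example.com port=5432 dbname=synapse user=test-username sslmode=require" -c 'SELECT version();'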

Advanced

Connection Pool

Max / Min Connections

Config Example
spec:
  components:
    synapse:
      config:
        postgresql:
          # connectionPool not present when left as default
          connectionPool:
            maxConnections: 10
            minConnections: 5

In most deployments you should not need to configure these settings, however if required, you can adjust both the minimum and maximum connections in the Synapse connection pool.

Installation of Core Components

Media Section


The Media section allows you to customise where media uploaded to your homeserver is stored, and the maximum upload size. By default media is stored in a Persistent Volume Claim (PVC); however, you can also configure options for using S3.

All settings configured via the UI in this section will be saved to your deployment.yml, with the contents of secrets being saved to secrets.yml. You will find specific configuration examples in each section.

Config Example

By default, if you do not change any settings on this page, defaults will be added to your configuration file/s (see example below).

Config Example

Config

Media

Config Example
spec:
  components:
    synapse:
      config:
        media:
          volume: # Present if you select either Persistent Volume Claim option
            size: 50Gi

Selecting either Persistent Volume Claim configuration option will default to using a 50Gi volume for media.

S3

Config Example
spec:
  components:
    synapse:
      config:
        media:
          s3:
            bucket: example_bucket_name
            prefix: example_prefix
            storageClass: STANDARD # Not present if left as default

Provide your bucket name and a prefix within the bucket to use. You can also adjust the storage class; however, it is recommended to leave it as STANDARD unless you have a specific requirement to change it.

Authentication

Config Example

Provide any credentials (Access Key ID and Secret Access Key) required to authenticate access to the specified S3 bucket.
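
If you want to confirm those credentials can reach the bucket before deploying, a quick sketch using the AWS CLI (bucket and prefix taken from the example above; key values hypothetical):

AWS_ACCESS_KEY_ID=example_key AWS_SECRET_ACCESS_KEY=example_secret \
  aws s3 ls s3://example_bucket_name/example_prefix/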

Region

Config Example
spec:
  components:
    synapse:
      config:
        media:
          s3:
            region: eu-central-1 # Not present if disabled

Toggle on this section to be able to specify the S3 bucket region you wish to use.

Endpoint URL

Config Example
spec:
  components:
    synapse:
      config:
        media:
          s3:
            endpointUrl: https://example-endpoint.url # Not present if disabled

Toggle on this section to be able to specify a non-AWS S3 endpoint URL.

Local Cleanup

Config Example
spec:
  components:
    synapse:
      config:
        media:
          s3:
            # Not present if disabled
            # localCleanup: {} # If defaults left as-is
            localCleanup:
              frequency: 2h # Only present if changed from default
              threshold: 2d # Only present if changed from default

Toggle on this section to control the frequency of local storage cleanup and the threshold since media was last accessed before it should be offloaded to S3.

Max Upload Size

Config Example
spec:
  components:
    synapse:
      config:
        media:
          maxUploadSize: 100M

By default the Max Upload Size is 100M; here you can adjust this value to allow for larger or smaller uploads on your homeserver. The desired size should be specified as a value ending in M (megabytes) or K (kilobytes).

Installation of Core Components

Authentication Section


This is a new section introduced in LTS 24.10 which replaces the previous Delegated Authentication options found within the Synapse section. Your previous configuration will be upgraded on first-run of the newer LTS.

In the Authentication section you will find options to configure settings specific to authentication. Regardless of whether you are using the Matrix Authentication Service, or have enabled Legacy Auth, the settings on this page remain the same.

However, please note MAS does not support delegated authentication with SAML or GroupSync - if you wish to enable either of these, you will need to return to the Host section and enable Legacy Auth.

All settings configured via the UI in this section will be saved to your deployment.yml, with the contents of secrets being saved to secrets.yml. You will find specific configuration examples in each section.

Config Example

By default, if you do not change any settings on this page, defaults will be added to your configuration file/s (see example below).

Config Example

User Profiles

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          userProfiles:
            allowAvatarChange: true # Not present if left as default
            allowDisplayNameChange: true # Not present if left as default
            allowEmailChange: true # Not present if left as default

The User Profiles section provides some self-explanatory config options to adjust what changes users are allowed to make to their User Profile, such as changing their Display Name. You may wish to restrict this if you'd prefer to delegate the setting of these values to the associated Identity Provider.

OIDC

You can add and configure one or multiple OIDC providers - to do so you will need to click the Add OIDC / Add more OIDC button found after toggling on the OIDC section:

Once an OIDC provider is added, you can remove any providers by clicking the rubbish bin icon found to the left of the provider.

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - 

IdP Name

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - idpName: example_name # Required

IdP ID

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - idpId: 01JDS2WKNYTQS21GFAKM9AKD9R # Required

IdP Brand

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - idpBrand: example_brand

Issuer

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - issuer: https://issuer.example.com/ # Required

Client Auth Method

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - clientAuthMethod: client_secret_basic # If no `clientAuthMethod` defined, will default to `client_secret_basic`
              # clientAuthMethod: client_secret_post
              # clientAuthMethod: none

Client ID

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - clientId: example_client_id

Client Secret

Config Example

Allow Existing Users

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:

Scopes

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - scopes:
                - openid
                - profile
                - email

User Mapping Provider

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - userMappingProvider:

Subject Template

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - userMappingProvider:
                subjectTemplate: '{{ user.subject }}'

Localpart Template

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - userMappingProvider:
                localpartTemplate: '{{ user.preferred_username }}'

Display Name Template

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - userMappingProvider:
                displayNameTemplate: '{{ user.name }}'

Email Template

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - userMappingProvider:
                emailTemplate: '{{ user.email }}'

Endpoints Discovery

Auto Discovery

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - clientId: synapsekieranml
              clientSecretSecretKey: oidcClientSecret
              endpointsDiscovery:
                skipVerification: false
              idpId: 01JDS2WKNYTQS21GFAKM9AKD9R
              idpName: Keycloak
              issuer: https://keycloak.ems-support.element.dev/realms/matrix
              scopes:
                - openid
                - profile
                - email
              userMappingProvider:
                displayNameTemplate: '{{ user.name }}'
                emailTemplate: '{{ user.email }}'

Skip Verification

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - clientId: synapsekieranml
              clientSecretSecretKey: oidcClientSecret
              endpointsDiscovery:
                skipVerification: false
              idpId: 01JDS2WKNYTQS21GFAKM9AKD9R
              idpName: Keycloak
              issuer: https://keycloak.ems-support.element.dev/realms/matrix
              scopes:
                - openid
                - profile
                - email
              userMappingProvider:
                displayNameTemplate: '{{ user.name }}'
                emailTemplate: '{{ user.email }}'

Backchannel Logout Enabled

The Matrix Authentication Service does not support configuring Backchannel Logout. You can only configure Backchannel logout if you have enabled Legacy Auth from the Host Section.

Config Example
spec:
  components:
    synapse:
      config:
        delegatedAuth:
          oidc:
            - clientId: synapsekieranml
              clientSecretSecretKey: oidcClientSecret
              endpointsDiscovery:
                skipVerification: false
              idpId: 01JDS2WKNYTQS21GFAKM9AKD9R
              idpName: Keycloak
              issuer: https://keycloak.ems-support.element.dev/realms/matrix
              scopes:
                - openid
                - profile
                - email
              userMappingProvider:
                displayNameTemplate: '{{ user.name }}'
                emailTemplate: '{{ user.email }}'

SAML

The Matrix Authentication Service does not support SAML and it is recommended to switch to OIDC. You can only enable SAML authentication if you have enabled Legacy Auth from the Host Section.

LDAP

Local Accounts

Installation of Core Components

Cluster Section


In the Cluster section you will find options to configure settings specific to the cluster which the Element Deployment will run on top of. Initially only one option is shown, with some additional options available under 'Advanced'. By default, it is unlikely you should need to configure anything on this page.

All settings configured via the UI in this section will be saved to your deployment.yml, with the contents of secrets being saved to secrets.yml. You will find specific configuration examples in each section.

Config Example
metadata:
  annotations:
    ui.element.io/layer: |
      global:
        config:
          adminAllowIps:
            _value: defaulted
        k8s:
          ingresses:
            tls:
              certmanager:
                _value: defaulted
spec:
  components:
    synapseAdmin:
      config:
        hostOrigin: >-
          https://admin.example.com,https://admin.example.com:8443
  global:
    config:
      adminAllowIps:
        - 0.0.0.0/0
        - '::/0'
    k8s:
      ingresses:
        tls:
          certmanager:
            issuer: letsencrypt
          mode: certmanager

Config

Certificate Authority

Config Example

If you are using self-signed certificates, you will need to provide the certificate of the Certificate Authority in PEM encoded format. Just like with any certificate file uploaded to the Certificates section (and those yet to be uploaded for specific integrations), it is strongly advised to include the full certificate chain to reduce the likelihood of certificate-based issues post deployment.

Advanced

Config

Images Digests Config Map

Config Example

Used when you want to Customise container images used by ESS, see that guide for a detailed breakdown of using this option.

DNS Delegation

Config Example

Enabling support for DNS Federation Delegation is highly discouraged: a significant number of features across ESS components are configured via .well-known files deployed by WellKnownDelegation. Enabling this will prevent those features from working, so you may have a degraded experience.

This option should be used to allow Federation Delegation via a DNS SRV record instead of the standard .well-known method. You will need to enable this option if you wish to deploy a homeserver to a base domain where you cannot direct requests to /.well-known/matrix/client and /.well-known/matrix/server to the WellKnown pod (or host the files at those URLs manually).

You can read more in SRV DNS Record Delegation and the Matrix Server Spec's Resolving Server Names. Once enabled, you should ensure you have configured a DNS SRV record in the below format which points to your specified Synapse domain:

_matrix-fed._tcp.<hostname>
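
As an illustrative zone-file sketch, assuming a base domain of example.com with Synapse served at synapse.example.com on port 443:

_matrix-fed._tcp.example.com. 3600 IN SRV 10 0 443 synapse.example.com.
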
TLS Verification

Config Example

You can toggle TLS verification off via this option, however, it is strongly advised to keep this enabled unless you have a specific requirement.

Generic Shared Secret

Config Example

A random Generic Shared Secret will be generated and set when you run the installer for the first time, you shouldn't need to change this unless specifically advised.

Admin Allow IPs

Config Example

This option allows you to configure the IP addresses (specifically or range/s) allowed to access the deployed Synapse Admin, in most cases, you shouldn't need to configure this as access to any administration requires logging in with a Matrix ID designated as a Synapse Admin.

Installation of Core Components

Synapse Section


Synapse is the Matrix homeserver that powers ESS. In this section you will be customising settings relating to your homeserver, analogous to the settings you'd set in the homeserver.yml if configuring Synapse manually.

All settings configured via the UI in this section will be saved to your deployment.yml, with the contents of secrets being saved to secrets.yml. You will find specific configuration examples in each section.

Config Example

By default, if you do not change any settings on this page, defaults will be added to your configuration file/s (see example below).

Config Example

Profile

The profile section automatically configures Synapse Workers so you don't have to, optimising your deployment to align with the settings you define based on our recommendations.

The options you set here do not have to align with what you configure for your homeserver.

For example, you may wish for your server to be able to handle greater than 500 Monthly Active Users, so you select 2500 users. When you later define the Max MAU Users in the Config section below, you can choose any number you wish.

The same applies with Federation, you can optimise your deployment to suit Open Federation but opt to close it in the dedicated Federation section.

Monthly Active Users

Config Example
metadata:
  annotations:
    ui.element.io/profile: |
      components:
        synapse:
          _subvalues:
            mau: 500
            # mau: 2500
            # mau: 10000

Here you should select the option that covers how many Monthly Active Users you expect, i.e. if you think you'll have ~800 users, you should select 2500 to optimise your setup to handle those users.

Federation Type

Config Example
metadata:
  annotations:
    ui.element.io/profile: |
      components:
        synapse:
          _subvalues:
            fed: closed
            # fed: limited
            # fed: open

Config

Accept Invites

Config Example
spec:
  components:
    synapse:
      config:
        acceptInvites: manual
        # acceptInvites: auto
        # acceptInvites: auto_dm_only

This enables a Synapse module called Auto-Accept Invite which is used to automatically accept invites.

Manual retains the original behaviour, requiring users to accept invites to rooms, including Direct Messages.

Auto will automatically accept all invites to rooms, including Direct Messages.

Auto DM Only will only automatically accept invites to Direct Messages.

Max MAU Users

max_mau_value

limit_usage_by_mau

Config Example
spec:
  components:
    synapse:
      config:
        maxMauUsers: 250

Synapse can be configured to record the number of Monthly Active Users (also referred to as MAU) on a given homeserver, MAU only tracks local users. This option sets the hard limit of monthly active users above which the server will start blocking users. See Monthly Active Users from the Synapse documentation, including max_mau_value and limit_usage_by_mau to learn more.

Registration

enable_registration

Config Example
spec:
  components:
    synapse:
      config:
        registration: open
        # registration: custom
        # registration: closed

Open enables registration for new users; users will be able to create an account via Matrix clients that support it, i.e. Element Web. Specifically, setting this option is the equivalent of setting both enable_registration and enable_registration_without_verification to true.

Closed disables registration for new users; users will only be presented the option to log in to the homeserver. You will need to either manually set up users via the Admin Console / Admin API, or be using something like Delegated Authentication.

Custom allows you to completely customise your Registration configuration via the Additional Config section found under Advanced; you could then use it to enable verification by setting enable_registration_without_verification to false, or apply other similar settings, i.e. registrations_require_3pid.

Open or Closed registration will not affect the creation of new Matrix accounts via Delegated Authentication. New users via Delegated Authentication, i.e. LDAP, SAML or OIDC, who have yet to log in to the homeserver and so technically do not yet have a Matrix ID, will still have one created when they successfully authenticate, regardless of whether registration is Closed.

Admin Password

Config Example

Password for the @onprem-admin-donotdelete user, a Synapse Admin user automatically created to allow you to use the Admin Console. You should use this account to promote Matrix accounts you set up to Synapse Admins. When using the Admin Console via the Installer (:8443), you will be logged in automatically as this account, no password required.

If you are experiencing issues with accessing the Admin Console following a wipe and reinstall, ensure you do not have the previous install credentials cached. You can clear them via your browsers' settings, then refresh the page (you will be provided with a new link via the Installer CLI) to resolve.

Log

Unlike with most other sections, logging values set here are analogous to creating a <SERVERNAME>.log.config instead of the homeserver.yml. See the Logging Sample Config File for further reference.

Root Level

Config Example
spec:
  components:
    synapse:
      config:
        log:
          rootLevel: Info
          # rootLevel: Debug
          # rootLevel: Warning
          # rootLevel: Error
          # rootLevel: Critical

As defined under the Configuration file format section of the Python docs, the available options presented by the Installer are DEBUG, INFO, WARNING, ERROR and CRITICAL. These represent different severity levels for log messages and help control the verbosity of log output, filtering messages based on their importance.

When troubleshooting, increasing the log level and redeploying can help narrow down where you're experiencing issues. DEBUG includes everything, making it a good option when trying to identify a problem.

It is not advised to leave your Logging Level at anything other than the default, as more verbose logging may expose information that should otherwise not be accessible. When sharing logs, remember to redact any sensitive information you do not wish to share.

Sentry DSN

Config Example
spec:
  components:
    synapse:
      config:
        log:
          sentryDsn: https://publickey:secretkey@sentry.io/projectid

Here you can specify a Sentry Data Source Name (DSN) to connect Synapse logging to a specific project within your Sentry account. A typical Sentry DSN looks like:

https://<public_key>:<secret_key>@sentry.io/<project_id>

Level Overrides

Config Example
spec:
  components:
    synapse:
      config:
        log:
          levelOverrides:
            synapse.storage.SQL: Info
            # synapse.storage.SQL: Debug
            # synapse.storage.SQL: Error
            # synapse.storage.SQL: Warning
            # synapse.storage.SQL: Critical

Here you can configure custom logging levels for specific Synapse loggers, i.e. synapse.storage.SQL. Simply add the Synapse logger and click Add to Level Overrides. You will then be able to select the desired logging level for that logger:

See the Structured Logging Synapse doc for more detailed guidance.

Security

Default Room Encryption

encryption_enabled_by_default_for_room_type

Config Example
spec:
  components:
    synapse:
      config:
        security:
          defaultRoomEncryption: auto_all
          # defaultRoomEncryption: auto_invite
          # defaultRoomEncryption: forced_all
          # defaultRoomEncryption: forced_invite
          # defaultRoomEncryption: not_set

Controls whether locally-created rooms should be end-to-end encrypted by default.

This option will only affect rooms created after it is set and will not affect rooms created by other servers.

Password Policy

password_config

Config Example
spec:
  components:
    synapse:
      config:
        security:
          # Not present when disabled
          # passwordPolicy: # {} When enabled with default settings
          passwordPolicy: # Only configured like so when values changed from their defaults
            minimumLength: 20 # Default: 15
            requireDigit: false # Default: true
            requireLowercase: false # Default: true
            requireSymbol: false # Default: true
            requireUppercase: false # Default: true

Turning on Password Policy will allow you to define and enforce a password policy for users' accounts on your homeserver.

You may notice that, despite this not being enabled, users are required to set secure passwords when registering via the Element Web client. This is because the client itself enforces secure passwords; this setting is needed should you wish to ensure all accounts have enforced password requirements, as other Matrix clients may not enforce secure passwords themselves.

Telemetry

Config Example
spec:
  components:
    synapse:
      config:
        telemetry:
          enabled: true
          passwordSecretKey: telemetryPassword
          room: '#element-telemetry'

Element collects telemetry data to understand whether or not customers are in compliance with what they've purchased, so this should be left enabled unless automatic sending of telemetry is not possible (i.e. Airgapped setups). By default, ESS servers connected to the internet will automatically send telemetry to Element. Please allow this to happen by making sure you have not blocked ems.element.io on port 443 from your homeserver.

What Telemetry Data is Collected by Element?

The following is a sample telemetry packet generated by Element On-Premise:

Config Example
{
    "_id" : ObjectId("6363bdd7d51c84d1f10a8126"),
    "onPremiseSubscription" : ObjectId("62f14dd303c67b542efddc4f"),
    "payload" : {
        "data" : {
            "activeUsers" : {
                "count" : 1,
                "identifiers" : {
                    "native" : [
                        "5d3510fc361b95a5d67a464a188dc3686f5eaf14f0e72733591ef6b8da478a18"
                    ]
                },
                "period" : {
                    "end" : 1667481013777,
                    "start" : 1666970260518
                }
            }
        },
        "generationTime" : 1667481013777,
        "hostname" : "element.demo",
        "instanceId" : "bd3bbf92-ac8c-472e-abb5-74b659a04eec",
        "type" : "synapse",
        "version" : 1
    },
    "request" : {
        "clientIp" : "71.70.145.71",
        "userAgent" : "Synapse/1.65.0"
    },
    "schemaVersion" : 1,
    "creationTimestamp" : ISODate("2022-11-03T13:10:47.476Z")
}

Submitting Telemetry Data to Element

If you are unable to allow Element's telemetry upload to take place, either because you are airgapped or need to block ems.element.io, then you will need to manually submit telemetry data to Element.

In order to gather telemetry data, you will need to use the element-telemetry-export.py script, which comes with the installer.

To do this, run:

cd ~/.element-enterprise-server/installer/lib
/usr/bin/env python3 ./element-telemetry-export.py 

You will be prompted for an access token:

Matrix user access token not specified in the "MATRIX_USER_ACCESS_TOKEN" environment variable. Please provide the access token and hit enter: 

You will need to provide a valid access token for a user who has access to the telemetry room. This can be found by logging in to Element Web as this user, going to "All Settings", then clicking "Help & About" and finally expanding the section for "Access Token".

accesstoken.png

Provide the access token to the prompt and hit enter.

2023-04-18 15:36:41,580:INFO:Parsing configuration file (/home/karl1/.element-enterprise-server/config/telemetry-config.json)
2023-04-18 15:36:41,581:INFO:Performing Matrix sync with homeserver (https://hs.element.demo)
2023-04-18 15:36:41,643:INFO:Scanning page 1
2023-04-18 15:36:41,716:INFO:Scanning page 2
2023-04-18 15:36:41,782:INFO:Writing 19 telemetry events to ZIP file (/home/karl1/.element-enterprise-server/installer/lib/telemetry_2023-04-18.zip)
2023-04-18 15:36:41,783:INFO:Saving some internal state (for next time)

Once you have done this, you will see some messages similar to the above and a new zip file in this directory with a date stamp in the format telemetry_YYYY-MM-DD.zip. In my case, I have telemetry_2023-04-18.zip.
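
Alternatively, as indicated by the prompt earlier, the script reads the MATRIX_USER_ACCESS_TOKEN environment variable, so you can skip the interactive prompt on subsequent runs; a sketch (token value hypothetical):

cd ~/.element-enterprise-server/installer/lib
# Supply the access token via the environment instead of the prompt
MATRIX_USER_ACCESS_TOKEN='syt_example_token' /usr/bin/env python3 ./element-telemetry-export.py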

If you are having SSL connectivity issues with the exporter, you may wish to either disable TLS verification or provide a CA certificate to the exporter with these optional command line parameters:

  --disable-tls-verification
                        Do not check SSL certificate validity when querying the Matrix server
  --ca-cert-path CA_CERT_PATH
                        Specify the path to the CA file (or a directory) to use when verifying Matrix server's
                        SSL certificate. Consult README.md for more details

Then browse to https://ems.element.io/on-premise/subscriptions and click "Upload Telemetry" next to the subscription you are uploading the data for:

ems-subs.png

Click browse, find the telemetry file then click "Submit Telemetry":

browse-telemetry.png

Once successful, you will see this screen:

success.png

You can then close the upload window.

Matrix Network Stats

Config Example
spec:
  components:
    synapse:
      config:
        telemetry:
          matrixNetworkStats:
            endpoint: https://test.endpoint.url

Enable Matrix Network Stats if you'd like to report your homeserver usage statistics to a statistics collection server. Per the tooltip, you can enter https://matrix.org/report-usage-stats/push to contribute to the public Matrix network statistics collection or enter your own endpoint.

See Reporting Homeserver Usage Statistics for more information on the statistics available and Using a Custom Statistics Collection Server to see how-to setup your own statistics endpoint.

URL Preview

url_preview_enabled

Config Example
spec:
  components:
    synapse:
      config:
        urlPreview: {} # {} When disabled, otherwise enabled with config as detailed in sections below.

URL previews involve fetching information from a URL (e.g., a website link) and displaying a preview of the content, such as a title, description, and an image. This feature can be useful for enhancing the user experience by providing more context about shared URLs in chat messages.

Enabling or disabling URL previews can impact the amount of information displayed in the chat interface, and it can also have privacy implications as fetching URL previews involves making requests to external servers to retrieve metadata.

Default Blacklist

When enabling URL Preview, a default blacklist using url_preview_ip_range_blacklist is configured for all private networks (see ranges below) to avoid leaking information by requesting previews of links pointing to private parts of the infrastructure. While this blacklist cannot be changed, you can whitelist specific ranges using IP Range Allowed.

Config Example
url_preview_ip_range_blacklist:
- '192.168.0.0/16'
- '100.64.0.0/10'
- '192.0.0.0/24'
- '169.254.0.0/16'
- '192.88.99.0/24'
- '198.18.0.0/15'
- '192.0.2.0/24'
- '198.51.100.0/24'
- '203.0.113.0/24'
- '224.0.0.0/4'
- '::1/128'
- 'fe80::/10'
- 'fc00::/7'
- '2001:db8::/32'
- 'ff00::/8'
- 'fec0::/10'

Config

Accept Language

url_preview_accept_language

Config Example
spec:
  components:
    synapse:
      config:
        urlPreview:
          config:
            acceptLanguage:
              - en

By setting this configuration option, you can control the language preference that Matrix Synapse communicates to external servers when fetching URL previews. This can be useful if you want to influence the language of the content retrieved for URL previews based on the preferred language of your users.

To do so, specify the Localisation country sub-code (e.g., en) that should be used as the Accept-Language header value that the server should send when fetching URL previews from external websites. The Accept-Language header is an HTTP header used by web browsers and other clients to indicate the preferred language(s) for the response.

Each value is an IETF language tag: a 2-3 letter identifier for a language, optionally followed by sub-tags separated by '-', specifying a country or region variant. Multiple values can be provided by clicking Add more Accept Language, and a weight can be added to each by using quality value syntax (;q=). '*' translates to any language.
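
For instance, a weighted preference list might look like this (values illustrative):

spec:
  components:
    synapse:
      config:
        urlPreview:
          config:
            acceptLanguage:
              - en-GB
              - en;q=0.9
              - '*;q=0.8'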

IP Range Allowed

url_preview_ip_range_whitelist

Config Example
spec:
  components:
    synapse:
      config:
        urlPreview:
          config:
            ipRangeAllowed:
              - 10.0.0.0/24

This option allows you to provide a list of IP address CIDR ranges that URL Preview is allowed to access even if they are specified in the Default Blacklist.

User Directory

user_directory

Config Example
spec:
  components:
    synapse:
      config:
        userDirectory: # Not present when left as default, `true`
          # searchAllUsers: true
          searchAllUsers: false

This option defines whether to search all users visible to your homeserver at the time the search is performed. If set to true, Synapse will return all users on the homeserver who match the search. If false, search results will only contain users visible in public rooms and users sharing a room with the requester.

TURN

Config Example

Any provided TURN server URI should contain a schema (turn: or turns:), a hostname, optionally a port and optionally a transport parameter (?transport=udp or ?transport=tcp).
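
As an illustration of that format, a set of TURN URIs could look like the following in raw Synapse syntax (turn_uris is the underlying Synapse option; the hostname and ports are placeholders):

turn_uris:
  - "turn:turn.example.com:3478?transport=udp"
  - "turn:turn.example.com:3478?transport=tcp"
  - "turns:turn.example.com:5349?transport=tcp"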

Identity Server

default_identity_server

Config Example
spec:
  components:
    synapse:
      config:
        # Not present if disabled
        # identityServer: {} # If enabled but `autoBind` not selected
        identityServer:
          autoBind: true

HTTP Proxy

http_proxy, https_proxy, no_proxy

Config Example
spec:
  components:
    synapse:
      config:
        httpProxy:
          httpProxy: http_proxy.example.com
          httpsProxy: https_proxy.example.com

You can use Synapse with a forward or outbound proxy. An example of when this is necessary is in corporate environments behind a DMZ (demilitarized zone). Synapse supports routing outbound HTTP(S) requests via a proxy. Note: only HTTP(S) proxies are supported; SOCKS and other alternatives are not supported.

No Proxy

Config Example
spec:
  components:
    synapse:
      config:
        httpProxy:
          noProxy:
            - no_proxy.example.com # Hostname example
            - 192.168.0.123 # IP example
            - 192.168.1.1/24 # IP range example

Here you can specify a list of hostnames, IP addresses or IP ranges (CIDR format) which should not use the HTTP/HTTPS proxy.

Data Retention

retention

If this feature is enabled, Synapse will regularly look for and purge events which are older than the below specified lifetimes.
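
The three lifetimes described below all sit under a single dataRetention block, so a combined configuration might look like the following (the values shown are illustrative):

Config Example
spec:
  components:
    synapse:
      config:
        dataRetention:
          messageLifetime: 30
          mediaLifetime: 30
          deleteRoomsAfterInactivity: 4w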

Message Lifetime in Days

Config Example
spec:
  components:
    synapse:
      config:
        dataRetention:
          messageLifetime: 1

Used to specify the number of days after a message is created that it should be deleted.

Media Lifetime in Days

Config Example
spec:
  components:
    synapse:
      config:
        dataRetention:
          mediaLifetime: 1

Used to specify the number of days after media is uploaded that it should be deleted.

Delete Rooms After Inactivity

Config Example
spec:
  components:
    synapse:
      config:
        dataRetention:
          deleteRoomsAfterInactivity: 1w

Used to specify how long rooms which have not seen any activity should be kept on the server. Rooms inactive for longer than the specified time will be automatically deleted. Duration suffixes are supported, e.g. 1w for one week as shown above.

Advanced

Config

Macaroon

macaroon_secret_key

Config Example

A secret which is used to sign the:

Registration Shared Secret

registration_shared_secret

Config Example

Allows registration of standard or admin accounts by anyone who has the shared secret, even if enable_registration is not Open, see Registration.

Signing Key

signing-keys

Config Example

See the dedicated page on Synapse Federation configuration, Synapse Section: Federation for more details on how the Signing Key is used.

Additional

See the dedicated page on additional Synapse configuration, Synapse Section: Additional Config

External Appservices

Federation

See the dedicated page on Synapse Federation configuration, Synapse Section: Federation

Synapse configuration options not available within the UI

We strongly advise against including any config not configurable via the UI as it will most likely interfere with settings automatically computed by the updater. Additional configuration options are not supported so we encourage you to first raise your requirements to Support where we can best advise on them.

An Additional Config section, which allows including config not currently configurable via the UI from the Configuration Manual, is available under the 'Advanced' section of this page. See the dedicated page on additional Synapse configuration, Synapse Section: Additional Config

Installation of Core Components

Synapse Section: Federation

Federation is the process by which users on different servers can participate in the same room. For this to work, all servers participating in a room must be able to talk to each other.

When Federation is Open, you will not need to configure anything further, however to privately federate you will need to make use of the Federation section found under Advanced.

How do I turn Federation On / Off?

Whether Federation is enabled is determined automatically, based on how you configure it within this Federation section.

By default Federation is enabled; to close Federation, simply enable the Allow List without adding any allowed servers.

Federation Profile

At the top of the Synapse Section you can configure a Federation Type. This Profile section specifically configures the performance profile of your deployed homeserver.

As such, setting this to Open will automatically configure Synapse Workers for Federation Endpoints to better support an openly federating server.

This should not be confused with the Federation section detailed in this document.

Previous setups may have used the Synapse Additional Config. Federation settings configured via Additional Config that conflict with any set via the UI will not override the UI-set values. As such, we do not advise including them or any related settings within the Additional Config: they carry an increased risk of causing issues with your deployment and are not supported.

Client Minimum TLS Version

federation_client_minimum_tls_version

Allows you to choose the minimum TLS version that will be used for outbound federation requests. Defaults to "1.2". Configurable to "1.2" or "1.3".

Setting this value higher than "1.2" will prevent federation to most of the public Matrix network: only configure it to "1.3" if you have an entirely private federation setup and you can ensure TLS 1.3 support.
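
For reference, in raw Synapse syntax this option looks like the following (shown only to illustrate the underlying setting; in the installer it is set via the UI):

federation_client_minimum_tls_version: "1.2"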

Certificate Authorities Secret Keys

Configure this when you are federating with homeservers whose certificates are signed by different Certificate Authorities. Click the Add Certificate Authorities Secret Keys / Add More Certificate Authorities Secret Keys button to reveal the option to upload your CA certificate.

Uploaded certificates should be PEM encoded and include the full chain of intermediate CAs and the root CA. You can simply concatenate these files prior to uploading.
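
For example, a concatenated upload simply contains the PEM blocks one after another (certificate contents elided):

-----BEGIN CERTIFICATE-----
<intermediate CA certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<root CA certificate>
-----END CERTIFICATE-----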

Trusted Key Servers

trusted_key_servers

Used to specify the trusted servers to download signing keys from. When synapse needs to fetch a signing key, each server is tried in parallel. Normally, the connection to the key server is validated via TLS certificates. Verify keys provide additional security by making synapse check that the response is signed by that key.

Click Add Trusted Key Servers / Add More Trusted Key Servers to add a new key server, then provide the homeserver's federated server name, i.e. the base domain of the homeserver you wish to federate with. Under Verify Keys for the server, you will need to provide its Key ID and Public Key.

Getting a Homeserver's Key ID and Public Key from your browser

Simply access the Synapse endpoint GET /_matrix/key/v2/server. You must use the domain where your Synapse is exposed, which might be different from the domain you have in your Matrix IDs. For example https://matrix.yourcompany.com/_matrix/key/v2/server.

For the element.io homeserver, https://element.ems.host/_matrix/key/v2/server returns

{
  "old_verify_keys": {},
  "server_name": "element.io",
  "signatures": {
    "element.io": {
      "ed25519:DnK8xk": "oOgEpir32XvnuMXQs+GvB6nOuIWgYathJ8kbzDhh9TT/BVSEH116Kk9NYUVPeXHJO0HhzBeTjmAiuUTVFS8nCg"
    }
  },
  "valid_until_ts": 1715307962481,
  "verify_keys": {
    "ed25519:DnK8xk": {
      "key": "EgdGx+0oy/9IX5k7tCobr0JoiwMvmmQ8sDOVlZODh/o"
    }
  }
}

Under verify_keys, ed25519:DnK8xk is the Key ID and EgdGx+0oy/9IX5k7tCobr0JoiwMvmmQ8sDOVlZODh/o is the Public Key.
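
Taking those values, the corresponding trusted_key_servers entry would look like the following in raw Synapse syntax (shown for reference; in the installer you enter these values into the UI fields described above):

trusted_key_servers:
  - server_name: "element.io"
    verify_keys:
      "ed25519:DnK8xk": "EgdGx+0oy/9IX5k7tCobr0JoiwMvmmQ8sDOVlZODh/o"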

Getting an On-Premise Homeserver's Key ID and Public Key via the Installer

You can retrieve the Public Key of an On-Premise homeserver by re-running the installer on the host, then navigating to the Synapse section. Under Advanced, Config you will be presented with the homeserver's Public Key in a blue box.

Copy the entire string; taking the example above, it would be ed25519 jRheIX llomL0SL2eq6WfzaqtPX8QzYEP3c0a5E9G9NNamU4JQ. From this string, you can derive the Key ID and Public Key required when you wish to add this homeserver to another homeserver's Federation Trusted Key Servers.

  1. The Key ID is the first two sections joined with a :, so ed25519:jRheIX
  2. The Public Key is the remainder of the string, so llomL0SL2eq6WfzaqtPX8QzYEP3c0a5E9G9NNamU4JQ

Allow List

federation_domain_whitelist

Use the Allow List to restrict federation to the given whitelist of domains. If not specified, the default is to whitelist everything. Simply provide the homeserver's federated server name, i.e. the base domain of the homeservers you wish to federate with.
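
In raw Synapse syntax, the resulting allow list looks like the following (the domains are placeholders):

federation_domain_whitelist:
  - example.com
  - partner.example.org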

We recommend also firewalling your federation listener to limit inbound federation traffic as early as possible, rather than relying purely on this application-layer restriction.

This does not stop a server from joining rooms that servers not on the whitelist are in. As such, this option is really only useful to establish a "private federation", where a group of servers all whitelist each other and have the same whitelist.

Please also note that by default an ip_range_blacklist is configured to block all private IP address ranges. If your servers require communicating on any of these private ranges, you will need to configure ip_range_whitelist. See Allowing Private Federation via ip_range_whitelist for information on configuring this.

Installation of Core Components

Element Web Section


Element Web is the web-based client for the Matrix communication protocol. Element Web serves as a user interface for accessing Matrix homeservers, allowing users to send messages, join rooms, share files, and participate in group chats.

All settings configured via the UI in this section will be saved to your deployment.yml, with the contents of secrets being saved to secrets.yml. You will find specific configuration examples in each section.

Config Example
spec:
  components:
    elementWeb:

By default, if you do not change any settings on this page, default Element Web pod CPU and Memory requirements will be added to your configuration file/s (see example below).

Config Example
spec:
  components:
    elementWeb:
      k8s:
        workloads:
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 50m
              memory: 50Mi

Advanced

Use Own URL for Sharing Links

Config Example
spec:
  components:
    elementWeb:
      config:
        # Not present if disabled
        useOwnUrlForSharingLinks: true

Whether the sharing links generated by this Element Web instance should use the URL of this Element Web. If turned off, the sharing links use https://matrix.to unless a custom permalink prefix is set in the Additional Config section. If turned on, mobile clients will not detect links using the URL of this Element Web (or any other custom permalink prefix) unless they've been explicitly configured via Mobile Device Management (MDM).

Additional Configuration

There are no Element Web specific UI options available to configure; however, you can inject custom config within the Additional Configuration section found under Advanced. Config added here is analogous to what you would add to config.json when manually self-hosting Element Web (or when using Element Desktop); you can read more on this and see config examples via the Element Web Configuration Doc.

Config Example
spec:
  components:
    elementWeb:
      config:
        additionalConfig: |-
          "setting_defaults": {
                  "custom_themes": [
                      {
                          "name": "Electric Blue",
                          "is_dark": false,
                          "fonts": {
                              "faces": [
                                  {
                                      "font-family": "Inter",
                                      "src": [{"url": "/fonts/Inter.ttf", "format": "ttf"}]
                                  }
                              ],
                              "general": "Inter, sans",
                              "monospace": "'Courier New'"
                          },
                          "colors": {
                              "accent-color": "#3596fc",
                              "primary-color": "#368bd6",
                              "warning-color": "#ff4b55",
                              "sidebar-color": "#27303a",
                              "roomlist-background-color": "#f3f8fd",
                              "roomlist-text-color": "#2e2f32",
                              "roomlist-text-secondary-color": "#61708b",
                              "roomlist-highlights-color": "#ffffff",
                              "roomlist-separator-color": "#e3e8f0",
                              "timeline-background-color": "#ffffff",
                              "timeline-text-color": "#2e2f32",
                              "timeline-text-secondary-color": "#61708b",
                              "timeline-highlights-color": "#f3f8fd",
                              "username-colors": ["#ff0000", ...]
                              "avatar-background-colors": ["#cc0000", ...]
                          }
                      }
                  ]
              }
Common Configurations

If you would like to override the default permalink matrix.to for your homeserver, you can do so by adding the following entry to your Additional Configuration:

"permalinkPrefix": "https://<element fqdn>"
Theming

Refer to the Element Web Theming Documentation for more information, see an example below where a custom theme has been applied to change the look and feel of the deployed Element Client. For some public examples of customised login screens see Mozilla and Fedora's customised clients.

"setting_defaults": {
        "custom_themes": [
            {
                "name": "Electric Blue",
                "is_dark": false,
                "fonts": {
                    "faces": [
                        {
                            "font-family": "Inter",
                            "src": [{"url": "/fonts/Inter.ttf", "format": "ttf"}]
                        }
                    ],
                    "general": "Inter, sans",
                    "monospace": "'Courier New'"
                },
                "colors": {
                    "accent-color": "#3596fc",
                    "primary-color": "#368bd6",
                    "warning-color": "#ff4b55",
                    "sidebar-color": "#27303a",
                    "roomlist-background-color": "#f3f8fd",
                    "roomlist-text-color": "#2e2f32",
                    "roomlist-text-secondary-color": "#61708b",
                    "roomlist-highlights-color": "#ffffff",
                    "roomlist-separator-color": "#e3e8f0",
                    "timeline-background-color": "#ffffff",
                    "timeline-text-color": "#2e2f32",
                    "timeline-text-secondary-color": "#61708b",
                    "timeline-highlights-color": "#f3f8fd",
                    "username-colors": ["#ff0000", ...]
                    "avatar-background-colors": ["#cc0000", ...]
                }
            }
        ]
    }

You can also modify the homepage for the Element Web client. Doing so requires modifying your Well-Known Delegation's Additional Configuration; see Element Web Custom Home for more information, and specifically the Well-Known Delegation documentation page under the Integrations chapter.

Installation of Core Components

Homeserver Admin Section


Homeserver Admin is the web-based client for the Synapse Admin API. Homeserver Admin serves as a user interface for administering Synapse homeservers, allowing management of users, rooms, federation and more.

All settings configured via the UI in this section will be saved to your deployment.yml, with the contents of secrets being saved to secrets.yml. You will find specific configuration examples in each section.

Config Example
spec:
  components:
    synapseAdmin:

By default, if you do not change any settings on this page, default Homeserver Admin pod CPU and Memory requirements will be added to your configuration file/s (see example below).

Config Example
spec:
  components:
    synapseAdmin:
      k8s:
        workloads:
          resources:
            limits:
              memory: 500Mi
            requests:
              cpu: 50m
              memory: 50Mi

Advanced

Verify TLS

Config Example
spec:
  components:
    synapseAdmin:
      # Not present if 'Use Global Setting' selected
      config:
        # verifyTls: useGlobalSetting
        # verifyTls: force
        verifyTls: disable

Configures TLS verification, options include:

It is not recommended to change this setting.

Delegated Authentication

If you are using delegated authentication and have kept Allow Local Users Login as Auto, or have directly set it to Disabled, then the built-in default Synapse Admin user onprem-admin-donotdelete will not be able to login.

Once deployed, to promote a user from your identity provider to Synapse Admin, e.g. Bob:

  1. Ensure they have logged in once, so that their Matrix ID has been created, i.e. @bob:example.com
  2. Use the following to promote them to Synapse Admin:
    kubectl exec -n element-onprem -it pods/synapse-postgres-0 -- /usr/bin/psql -d synapse -U synapse_user -c "update users set admin = 1 where name = '@bob:example.com';"
    
Installation of Core Components

Integrator Section


In the Integrator section you will find options to configure settings specific to the Integrator, which is used to send messages to external services. You are unlikely to need to configure anything on this page, unless you wish to enable the use of Custom Widgets.

All settings configured via the UI in this section will be saved to your deployment.yml, with the contents of secrets being saved to secrets.yml. You will find specific configuration examples in each section.

Config Example
apiVersion: matrix.element.io/v1alpha2
kind: ElementDeployment
metadata:
  annotations:
    ui.element.io/layer: |
        integrator:
spec:
  components:
    integrator:

By default, if you do not change any settings on this page, defaults will be added to your configuration file/s (see example below).

Config Example
apiVersion: matrix.element.io/v1alpha2
kind: ElementDeployment
metadata:
  annotations:
    ui.element.io/layer: |
        integrator:
          k8s:
            workloads:
              _value: defaulted
spec:
  components:
    integrator:
      k8s:
        workloads:
          resources:
            appstore:
              limits:
                memory: 400Mi
              requests:
                cpu: 50m
                memory: 100Mi
            integrator:
              limits:
                memory: 350Mi
              requests:
                cpu: 100m
                memory: 100Mi
            modularWidgets:
              limits:
                memory: 200Mi
              requests:
                cpu: 50m
                memory: 50Mi
            scalarWeb:
              limits:
                memory: 200Mi
              requests:
                cpu: 50m
                memory: 50Mi

Config

Custom Widgets

Config Example
spec:
  components:
    integrator:
      config:
        # Not present if 'false' is selected
        # enableCustomWidgets: false
        enableCustomWidgets: true

Gives users the ability to add Custom Widgets to their rooms, which can display an embedded web page.

Verify TLS

Config Example
spec:
  components:
    integrator:
      # Not present if 'Use Global Setting' selected
      config:
        # verifyTls: useGlobalSetting
        # verifyTls: force
        verifyTls: disable

Configures TLS verification, options include:

It is not recommended to change this setting.

Log

Root Level

Config Example
spec:
  components:
    integrator:
      config:
        log:
          # Not present if left at default 'info'
          level: info
          # level: debug
          # level: warning
          # level: error

As defined under the Configuration file format section of the Python docs, the available options presented by the Installer are DEBUG, INFO, WARNING, ERROR and CRITICAL. These represent different severity levels for log messages and help control the verbosity of log output, allowing you to filter messages based on their importance.

When troubleshooting, increasing the log level and redeploying can help narrow down where you're experiencing issues. DEBUG is a good option as it includes everything, allowing you to identify the problem.

It is not advised to leave your Logging Level at anything other than the default, as more verbose logging may expose information that should otherwise not be accessible. When sharing logs, remember to redact any sensitive information you do not wish to share.

Structured

Config Example
spec:
  components:
    integrator:
      config:
        log:
          # Not present if left at default 'false'
          # structured: false
          structured: true

Disabled by default; turn on to output logs in logstash format. Otherwise, logs are output in a console-friendly format.

Postgres

If you are performing a Standalone deployment and letting the installer deploy Postgres for you, you will not need to configure any options here.

For all other deployments, you will need to configure your PostgreSQL database connection details.

Database

Config Example
spec:
  components:
    integrator:
      config:
        postgresql:
          database: integrator

Enter the name of the PostgreSQL Database you configured per the previously mentioned Requirements and Recommendations to use for Integrator.

Host

Config Example
spec:
  components:
    integrator:
      config:
        postgresql:
          host: db.example.com

Enter the fully qualified domain name of the PostgreSQL Database you configured per the previously mentioned Requirements and Recommendations to use for Integrator.

Port

Config Example
spec:
  components:
    integrator:
      config:
        postgresql:
          # port not present when left as default 5432
          port: 5432

Defaults to 5432, either keep if correct or provide the required port of the PostgreSQL Database you configured per the previously mentioned Requirements and Recommendations to use for Integrator.

SSL Mode

Config Example
spec:
  components:
    integrator:
      config:
        postgresql:
          # sslMode not present when left as default `require` 
          sslMode: require
          # sslMode: disable
          # sslMode: no-verify
          # sslMode: verify-full

Defaults to No Verify; it is not recommended to disable SSL, so for most setups this setting should be left at its default.

You should adjust to accommodate your environment as required, the options available are:

User

Config Example
spec:
  components:
    integrator:
      config:
        postgresql:
          user: test-username

Enter the username of a user who can access the PostgreSQL Database you configured per the previously mentioned Requirements and Recommendations to use for Integrator.

PostgreSQL Password

Config Example

Enter the password for the specified user who can access the PostgreSQL Database you configured per the previously mentioned Requirements and Recommendations to use for Integrator.

Jitsi Domain

Config Example
spec:
  components:
    integrator:
      config:
        jitsiDomain: https://jitsi.example.com

Enable this option to manually configure an external Jitsi domain. If this option is not set, the installer will default to the domain of the installer deployed Jitsi (if applicable).

Integrations

Setting Up Jitsi and TURN With the Installer

Configure the Installer to install Jitsi and TURN

Prerequisites

Firewall

You will have to open the following ports on your microk8s host (or k8s cluster) to enable coturn and jitsi:

For jitsi:

For coturn, allow the following ports:

You will also have to allow the following port range, depending on the settings you define in the installer (see below):

DNS

The jitsi and coturn domain names must resolve to the VM access IP. You must not use host_aliases for these hosts to resolve to the private IP locally on your setup.

Coturn

From the Installer's Integrations page, click "Install" under "Coturn".

coturn.png

For the coturn.yml presented by the installer, edit the file and ensure the following values are set:

Further, if you are using your own certificates instead of Let's Encrypt, then for the coturn_fqdn you will need to provide certificates to the installer outside of the GUI. Find your ~/.element-enterprise-server/config directory and create a directory called ~/.element-enterprise-server/config/legacy/certs under which to put a .crt/.key PEM encoded certificate pair for this fqdn. If your fqdn were coturn.airgap.local, your filenames would need to be coturn.airgap.local.crt and coturn.airgap.local.key. You will need to have these certificate files in place before running the installer.

Jitsi

From the Installer's Integrations page, click "Install" under "Jitsi".

jitsi.png

For the jitsi.yml presented by the installer, edit the file and ensure the following values are set:

Further, for the jitsi_fqdn, you will need to provide .crt/.key PEM encoded certificates. These can be entered in the installer UI. If your fqdn were jitsi.airgap.local, your filenames would need to be jitsi.airgap.local.crt and jitsi.airgap.local.key. You will need to edit the file name field in the UI before pressing the "Choose File" button when selecting the certificates.

If your network does not have any NAT, Jitsi cannot use the local coturn server to determine the IP it should advertise to users. In this case, you might have issues with your calls and video. To work around this, you can use the following configuration:

provide_node_address_as_public_ip: true

helm_override_values:
  jvb:
    extraEnvs:
    - name: JVB_ADVERTISE_IPS
      value: "public ip of jitsi"
    - name: JVB_ADVERTISE_PRIVATE_CANDIDATES
      value: "true"

Element

elewebadvanced.png

Please go to the "Element Web" page of the installer, click on "Advanced" and add the following to "Additional Configuration":

{
  "jitsi": {
    "preferred_domain": "<jitsi_fqdn>"
  }
}

In the above text, you will want to replace <jitsi_fqdn> with the actual fqdn.

Configure the installer to use an existing Jitsi instance

elewebadvanced.png

Please go to the "Element Web" page of the installer, click on "Advanced" and add the following to "Additional Configuration":

{
      "jitsi": {
            "preferred_domain": "your.jitsi.example.org"
      }
}

replacing your.jitsi.example.org with the hostname of your Jitsi server.

You will need to re-run the installer for this change to take effect.

Integrations

Setting up Group Sync with the Installer

What is Group Sync?

Group Sync allows you to use the ACLs from your identity infrastructure in order to set up permissions on Spaces and Rooms in the Element Ecosystem. Please note that the initial version we are providing only supports a single node, non-federated configuration.

Configuring Group Sync

From the Installer's Integrations page, click "Install" under "Group Sync".

basic-config.png

Configuring the source

LDAP Servers

screencapture-3-124-12-184-8443-integrations-groupsync-2023-04-28-14_29_20 copy.png

The distinguished name can be displayed by selecting View/Advanced Features in the Active Directory console and then, right-clicking on the object, selecting Properties/Attributes Editor.

The DN is OU=Demo corp,DC=olivier,DC=sales-demos,DC=element,DC=io.

MS Graph (Azure AD)

Space Mapping

The space mapping mechanism allows us to configure spaces that Group Sync will maintain, beyond the ones that you can create manually.

It is optional; the configuration can be skipped. However, if you enable Group Sync, you have to edit the Space mapping by clicking on the EDIT button and rename the (unnamed space) to something meaningful.

Screenshot 2023-05-03 at 14.30.55.png

Include all users in the directory in this space: all available users, regardless of group membership, join the space. This option is convenient when creating a common subspace shared between all users.

Screenshot 2023-05-09 at 16.57.23.png

When clicking on Add new space, you can leave the space as a top level space or you can drag and drop this space onto an existing space, making this space a subspace of the existing space.

You can then map an external ID (the LDAP distinguished name) against a power level. Every user belonging to this external ID is granted the power level set in the interface. This external ID can be any LDAP object, like an OrgUnit, a Group or a Security Group. The external ID is case-sensitive.

A power level 0 is a default user that can write messages, react to messages and delete their own messages.

A power level 50 is a moderator that can create rooms and delete messages from members.

A power level 100 is an administrator, but since Group Sync manages spaces and invitations to the rooms, it does not make sense to map a group against a power level 100.

Custom power levels other than 0 and 50 are not supported yet.

Users allowed in every GroupSync room

Screenshot 2023-05-09 at 16.05.13.png

A list of user ID patterns for users that will not get kicked from rooms even if they don't belong to them according to LDAP.

This is useful for things like the auditbot, if Auditbot has been enabled.

Patterns listed here will be wrapped in ^ and $ before matching.

Default Rooms

Screenshot 2023-05-09 at 16.30.39.png

A list of rooms added to every space.


Integrations

Setting up GitLab, GitHub, JIRA and Webhooks Integrations With the Installer

In Element Server Suite, our GitLab, GitHub, and JIRA extensions are provided by the hookshot package. This documentation explains how to configure hookshot.

Configuring Hookshot with the Installer

From the Installer's Integrations page, click "Install" under "Hookshot: Github, Gitlab, Jira, and Custom Webhooks."

hookshot1.png

On the first screen here, we can set the logging level and a hookshot specific verify tls setting. Most users can leave these alone.

To use hookshot, you will need to generate a hookshot password key, which can be done by running the following command on a Linux command line:

openssl genpkey -out passkey.pem -outform PEM -algorithm RSA -pkeyopt rsa_keygen_bits:4096

which will generate output similar to this:

..................................................................................................................................................................++++
......................................................................................++++

Once this has finished, you will have a file called passkey.pem that you can upload as the "Hookshot Password key".

If you wish to change the hookshot provisioning secret, you can, but you can also leave this alone as it is randomly generated by the installer.

hookshot2.png

Next, we get to a set of settings that allow us to make changes to the Hookshot bot's appearance.

There is also a button to show widget settings, which brings up these options:

hookshot3.png

In this form, we have the ability to control how widgets are incorporated into rooms (the defaults are usually fine) and to set a list of Disallowed IP ranges wherein widgets will not load if the homeserver IP falls in the range. If your homeserver's IP falls in any of these ranges, you will want to remove that range so that the widgets will load!

Next, we have the option to enable Gitlab, which shows us the following settings:

hookshot-gitlab.png

The webhook secret is randomly generated and does not need to be changed. You can also add Gitlab instances by specifying an instance name and pasting the URL.

Next, we have the option to enable Jira, which shows us the following settings:

hookshot-jira.png

In here, we can specify the OAuth Client ID and the OAuth client secret to connect to Jira. To obtain this information, please follow these steps:

The JIRA service currently only supports atlassian.com (JIRA SaaS) when handling user authentication. Support for on-premise deployments is expected to land soon.

Once you've set these, you'll notice that a webhook secret has been randomly generated for you. You can leave this alone or edit it if you desire.

Next, let's look at configuring Webhooks:

hookshot-webhooks.png

You can set whether or not webhooks are enabled and whether they allow JS Transformation functions. It is good to leave these enabled per the defaults. You can also specify the user id prefix for the creation of custom webhooks. If you set this to webhook_ then each new webhook will appear in a room with a username starting with webhook_.

Next, let's look at configuring Github:

hookshot-github1.png

This bridge requires a GitHub App. You will need to create one. Once you have created this, you'll be able to fill in the Auth ID and OAuth Client ID. You will also need to generate a "Github application key file" to upload this. Further, you will need to specify a "Github OAuth client secret" and a "Github webhook secret", both of which will appear on your newly created Github app page.

hookshot-github2.png

On this screen, we have the option to change how we call the bot and other minor settings. We also have the ability to select which hooks we provide notifications for, what labels we wish to exclude, and then which hooks we will ignore completely.

hookshot-github3.png

Now we have the ability to add a list of labels that we want to match. This has the impact of the integration only notifying you of issues with a specific set of labels.

We then have the ability to add a list of labels that all newly created issues through the bot should be labeled with.

Then we have the ability to enable showing diffs in the room when a PR is created.

hookshot-github4.png

Moving along, we can configure how workflow run results are reported by the bot, including matching specific workflows and including or excluding specific workflows.

Finishing Configuration

You further have the ability to click "Advanced" and set any Kubernetes specific settings for how this pod is run. Once you have set everything up on this page, you can click "Continue" to go back to the Integrations page.

When you have finished running the installer and the hookshot pod is up and running, there are some configurations to handle in the Element client itself, in the rooms where you wish the integration to be present.

As an admin, you will need to enable hookshot in the rooms using the "Add widgets, bridges, & bots" functionality to add the "Hookshot" widget to the room and finish the setup.

Integrations

Setting up Adminbot and Auditbot

Overview

Adminbot allows for an Element Administrator to become admin in any existing room or space on a managed homeserver. This enables you to delete rooms for which the room administrator has left your company and other useful administration actions.

Auditbot gives you the ability to export any communications in any room that the auditbot is a member of, even if encryption is in use. This is important in enabling you to handle compliance requirements that require chat histories be obtainable.

On using Admin Bot and Audit Bot

Currently, we deploy a special version of Element Web to allow you to log in as the adminbot and auditbot. Given this, please do not make changes to widgets in rooms while logged in as the adminbot or the auditbot. The special Element Web does not have any custom settings that you have applied to the main Element Web that your users use, and as such you can cause problems for yourself by working with widgets as the adminbot or auditbot. We are working to provide custom interfaces for these bots in the future.

Configuring Admin Bot

From the Installer's Integrations page, click "Install" under "Admin Bot"

You will then see the following:

adminbot1.png

adminbot2.png

Your first choice is to configure adminbot or enable this server as part of a federated adminbot cluster. For most cases, you'll want to select "Configure Adminbot".

Below this, we have a checkbox to either allow the adminbot to participate in DM rooms (rooms with 1-2 people) or not.

We also have a checkbox to join local rooms only. You probably want to leave this on. If you turn it off, the adminbot will try to join any federated rooms that your server is joined to.

Moving on, we also have the ability to change the logging level and set the username of the bot.

After this, we have the ability to set the "Backup Passphrase" which is used to gain access to the key backup store.

Two settings that need to be set in the "Advanced" section are the fqdn for the adminbot element web access point and its certificates. These settings can be found by clicking "Advanced" and scrolling to:

adminbot-fqdn.png

and then:

adminbot-certs.png

Configuring Audit Bot

From the Installer's Integrations page, click "Install" under "Audit Bot".

You will then see the following:

auditbot1.png

auditbot2.png

auditbot3.png

Your first choice is to configure auditbot or enable this server as part of a federated auditbot cluster. For most cases, you'll want to select "Configure Auditbot".

Below this, we have a checkbox to either allow the auditbot to participate in DM rooms (rooms with 1-2 people) or not.

We also have a checkbox to join local rooms only. You probably want to leave this on. If you turn it off, the auditbot will try to join any federated rooms that your server is joined to.

Moving on, we also have the ability to change the logging level and set the username of the bot.

After this, we have the ability to set the "Backup Passphrase" which is used to gain access to the key backup store.

You can also configure an S3 bucket to log to and you can configure how many logfiles should be kept and how large a log file should be allowed to grow to. By default, the auditbot will log to the storage that has been attached by the cluster (check the storage settings under the "Advanced" tab).

Two settings that need to be set in the "Advanced" section are the fqdn for the auditbot element web access point and its certificates. These settings can be found by clicking "Advanced" and scrolling to:

auditbot-fqdn.png

auditbot-certs.png

Adminbot Federation

On the central admin bot server

You will pick "Configure Admin Bot" and will fill in everything from the above Adminbot configuration instructions, but you will also add Remote Federated Homeservers in this interface:

adminbot3.png

adminbot4.png

You will need to fill out this form for each remote server that will join the federation. You will need to set the domain name and the matrix server for each to get started.

You will also need to grab the Admin user authentication token for each server and specify that here. You may get this with the following command run against a specific server: kubectl get synapseusers/adminuser-donotdelete -n element-onprem -o yaml. You are looking for the value of the field status.accessToken.

Then in the app service, you can leave Automatically compute the appservice tokens set. You will need to also get the generic shared secret from that server and specify it here as well. You can get this value from running: kubectl get -n element-onprem secrets first-element-deployment-synapse-secrets -o yaml | grep registration and looking at the value for the registrationSharedSecret.

On the remote admin bot server

Instead of selecting "Configure Adminbot", you will pick "Enable Central Adminbot Access" and will then be presented with this UI:

adminbot5.png

You will then specify the FQDN of the central adminbot server.

Auditbot Federation

On the central auditbot server

You will pick "Configure Audit Bot" and will fill in everything from the above Auditbot configuration instructions, but you will also add Remote Federated Homeservers in this interface:

auditbot4.png

auditbot5.png

You will need to fill out this form for each remote server that will join the federation. You will need to set the domain name and the matrix server for each to get started.

You will also need to grab the Admin user authentication token for each server and specify that here. You may get this with the following command run against a specific server: kubectl get synapseusers/adminuser-donotdelete -n element-onprem -o yaml. You are looking for the value of the field status.accessToken.

Then in the app service, you can leave Automatically compute the appservice tokens set. You will need to also get the generic shared secret from that server and specify it here as well. You can get this value from running: kubectl get -n element-onprem secrets first-element-deployment-synapse-secrets -o yaml | grep registration and looking at the value for the registrationSharedSecret.

On the remote audit bot server

Instead of selecting "Configure Auditbot", you will pick "Enable Central Auditbot Access" and will then be presented with this UI:

auditbot6.png

You will then specify the FQDN of the central auditbot server.

Integrations

Setting Up Hydrogen

Configuring Hydrogen

From the Installer's Integrations page, click "Install" under "Hydrogen".

For the hydrogen.yml presented by the installer, edit the file and ensure the following values are set:

You will need to re-run the installer after making these changes for them to take effect.

Integrations

Setting up On-Premise Metrics

Setting up VictoriaMetrics and Grafana

From the Installer's Integrations page, click "Install" under "Monitoring"

For the provided prom.yml, see the following descriptions of the parameters:

For the specified grafana_fqdn, you will need to provide a crt/key PEM encoded key pair in ~/.element-enterprise-server/config/legacy/certs prior to running the installer. If your hostname were metrics.airgap.local, the installer would expect to find metrics.airgap.local.crt and metrics.airgap.local.key in the ~/.element-enterprise-server/config/legacy/certs directory. If you are using Let's Encrypt, you do not need to add these files.

After running the installer, open the FQDN of Grafana. The initial login user is admin and the password is the value of admin_password. You'll be required to set a new password; please define a secure one and keep it in a safe place.

Logs

On single-node setups configured using our installer, if you chose to enable log aggregation via Loki, you can find your logs in Grafana by going to Explore, selecting loki as the data source, then using Label filters, for example by app.

Integrations

Setting Up the Telegram Bridge

Configuring Telegram bridge

On Telegram platform

Basic config

From the Installer's Integrations page, click "Install" under "Telegram Bridge".

For the provided telegram.yml file, please see the following options:

For the specified telegram_fqdn, you will need to provide a crt/key PEM encoded key pair in ~/.element-enterprise-server/config/legacy/certs prior to running the installer. If your hostname were telegram.airgap.local, the installer would expect to find telegram.airgap.local.crt and telegram.airgap.local.key in the ~/.element-enterprise-server/config/legacy/certs directory. If you are using Let's Encrypt, you do not need to add these files.

You will need to re-run the installer after making changes for these to take effect.

Usage

Integrations

Setting Up the Teams Bridge

Configuring Teams Bridge

Register with Microsoft Azure

You will first need to generate an "Application" to connect your Teams bridge with Microsoft.

Permissions

You will need to set some API permissions.

For each item in the list below, click Add permission > Microsoft Graph > and then set the Delegated permissions.

For each item in the list below, click Add permission > Microsoft Graph > and then set the Application permissions:

Once you are done, click Grant admin consent.

Setting up the bot user

The bridge requires a Teams user to be registered as a "bot" to send messages on behalf of Matrix users. You just need to allocate one user from the Teams interface to do this.

Getting the groupId

The groupId can be found by opening Teams, clicking ... on a team, and clicking "Get link to team". The groupId is included in the URL; it is 12345678-abcd-efgh-ijkl-lmnopqrstuvw in this example.

https://teams.microsoft.com/l/team/19%3XXX%40thread.tacv2/conversations?groupId=12345678-abcd-efgh-ijkl-lmnopqrstuvw&tenantId=87654321-dcba-hgfe-lkji-zyxwvutsrqpo

On the hosting machine

Generate teams registration keys

openssl genrsa -out teams.key 1024
openssl req -new -x509 -key teams.key -out teams.crt -days 365

These keys need to be placed in ~/.element-enterprise-server/config/legacy/certs/teams on the machine that you are running the installer on.

Configure Teams Bridge

From the Installer's Integrations page, click "Install" under "Microsoft Teams Bridge"

For the provided teams.yml, please see the following documentation of the parameters:

teams_client_id: # teams app client id
teams_client_secret: # teams app secret
teams_tenant_id: # teams app tenant id
teams_bot_username: # teams bot username
teams_bot_password: # teams bot password
teams_cert_file: teams.crt
teams_cert_private: teams.key
teams_fqdn: <teams bridge fqdn>
teams_bridged_groups:
- group_id: 218b0bfe-05d3-4a63-8323-846d189f1dc1 #change me
  properties:
    autoCreateRooms:
      public: true
      powerLevelContent:
        users:
          "@alice:example.com": 100 # This will add <alice> account as admin
          "@teams-bot:example.com": 100 # the Teams bot mxid <bot_sender_localpart>:<domain_name>
    autoCreateSpace: true
    limits:
      maxChannels: 25
      maxTeamsUsers: 25
# repeat "- group_id:" section above for each Team you want to bridge

     
bot_display_name: Teams Bridge Bot
bot_sender_localpart: teams-bot
enable_welcome_room: true
welcome_room_text: |
 Welcome, your Element host is configured to bridge to a Teams instance.

 This means that Microsoft Teams messages will appear on your Element
 account and you can send messages in Element rooms to have them appear
 on teams.

 To allow Element to access your Teams account, please say `login` and
 follow the steps to get connected. Once you are connected, you can open
 the 🧭 Explore Rooms dialog to find your Teams rooms.
# namespaces_prefix_user: OPTIONAL: default to _teams_
# namespaces_prefix_aliases: OPTIONAL: default to teams_

You will need to re-run the installer for changes to take effect.

Integrations

Setting Up the IRC Bridge

Matrix IRC Bridge

The Matrix IRC Bridge is an IRC bridge for Matrix that will pass all IRC messages through to Matrix, and all Matrix messages through to IRC. Please also refer to the bridge's specific documentation for additional guidance.

For usage of the IRC Bridge via its bot user, see the Using the Matrix IRC Bridge documentation.

Installation and Configuration

From the Installer's Integrations page find the IRC Bridge entry, and click Install. This will set up the IRC Bridge's config directory; by default this will be located at:

~/.element-enterprise-server/config/legacy/ircbridge

You will initially be taken to the bridge's configuration page; for any subsequent edits, the Install button will be replaced with Configure, indicating the bridge is installed.

There are two sections of the Matrix IRC Bridge configuration page: the Bridge.yml section, and a section to Upload a Private Key. We'll start with the latter as it's the simpler of the two, and it is referenced in the first.

Upload a Private Key

As the bridge needs to send plaintext passwords to the IRC server (it cannot send a password hash), those passwords are stored encrypted in the bridge database. When a user specifies a password to use, via the admin room command !storepass server.name passw0rd, the password is encrypted using an RSA PEM-formatted private key. When a connection is made to IRC on behalf of the Matrix user, this password will be sent as the server password (PASS command).

Therefore you will need a Private Key file, by default called passkey.pem:

The Bridge.yml Section

The Bridge.yml is the complete configuration of the Matrix IRC Bridge. It points to a private key file (Private Key Settings), and both configures the bridge's own settings and functionality (Bridge Settings) and the specific IRC services you want it to connect with (IRC Settings).

Private Key Settings

key_file: passkey.pem

By default this is the first line in the Bridge.yml config; it refers to the file either moved into the IRC Bridge's config directory, or generated in there using openssl. If moved into the directory, ensure the file was correctly renamed to passkey.pem.

Bridge Settings

The rest of the configuration sits under the bridged_irc_servers: section:

bridged_irc_servers:

You'll notice all entries within are initially indented, so all code blocks will include this indentation. Focusing on settings relating to the bridge itself (and not any specific IRC connection) covers everything except the address: and associated parameters: sections, by default found at the end of the Bridge.yml.

Postgres

If you are using postgres-create-in-cluster you can leave this section as-is; the default ircbridge-postgres / ircbridge / postgres_password values will ensure your setup works correctly.

- postgres_fqdn: ircbridge-postgres
  postgres_user: ircbridge
  postgres_db: ircbridge
  postgres_password: postgres_password

Otherwise you should edit as needed to connect to your existing Postgres setup:

You can uncomment the following to use as needed. Note that, if unspecified, some of these will default to the advised values; you do not need to uncomment them if you are happy with the defaults.

For example, your Postgres section might instead look like the below:

- postgres_fqdn: https://db.example.com
  postgres_user: example-user
  postgres_db: matrixircbridge
  postgres_password: example-password
  # postgres_data_path: "/mnt/data/<bridged>-postgres"
  postgres_port: 2345
  postgres_sslmode: 'verify-full'
IRC Bridge Admins

Within the admins: section you will need to list the Matrix User IDs of all users who should be Admins of the IRC Bridge. You should list one Matrix User ID per line, using the full Matrix User ID formatted like @USERNAME:HOMESERVER

  admins:
  - "@user-one:example.com"
  - "@user-two:example.com"
Provisioning

Provisioning allows you to set specified rules about existing rooms when bridging those rooms to IRC Channels.

So the example bridge.yml config below will block the bridging of a room if it has any User IDs within it from the badguys.com homeserver, except @doubleagent:badguys.com, and limit the number of bridged rooms to 50.

  enable_provisioning: true
  provisioning_rules:
    userIds:
      exempt:
        - "@doubleagent:badguys.com"
      conflict:
        - "@.*:badguys.com"
  provisioning_room_limit: 50
IRC Ident

If you are using the Ident protocol you can enable its usage with the following config:

  enable_ident: false
  ident_port_type: 'HostPort'
  ident_port_number: 10230
Miscellaneous

Finally there are a few additional options to configure:

The defaults are usually best left as-is unless a specific need requires changing them; however, for troubleshooting purposes, switching logging_level to debug can help identify issues with the bridge.

  logging_level: debug
  enable_presence: false
  drop_matrix_messages_after_seconds: 0
  bot_username: "ircbridgebot"
  rmau_limit: 100
  users_prefix: "irc_"
  alias_prefix: "irc_"
Advanced Additional Configuration

You can find more advanced configuration options by checking the config.yaml sample provided on the Matrix IRC Bridge repository.

You can ignore the servers: block, as config in that section should be added under the parameters: section associated with address:, which will be set up per the below section. If you copy any config, ensure the indentation is correct; as above, all entries within are initially indented, so they sit under the bridged_irc_servers: section.

IRC Settings

The final section of Bridge.yml; here you specify the IRC network(s) you want the bridge to connect with. This is done using address: and parameters:, formatted like so:

  address: irc.example.com
  parameters:

Aside from the address of the IRC Network, everything is configured within the parameters: section, and so is initially indented; all code blocks will include this indentation.

Basic IRC Network Configuration

At a minimum, you will need to specify the name: of your IRC Network, as well as some details for the bot's configuration on the IRC side of the connection. You can use the below to get up and running.

    name: "Example IRC"
    botConfig:
      enabled: true
      nick: "MatrixBot"
      username: "matrixbot"
      password: "some_password"
Advanced IRC Network Configuration (Load Balancing, SSL, etc.)

For more fine-grained control of the IRC connection, there are some additional configuration lines you may wish to make use of. As these are not required, if unspecified some of these will default to the advised values; you do not need to include any of these if you are happy with the defaults. You can use the below config options, in addition to those in the section above, to get more complex setups up and running.

If you end up needing any of these additional configuration options, your parameters: section may look like the below example:

    name: "Example IRC"
    additionalAddresses: [ "irc2.example.com" ]
    onlyAdditionalAddresses: false
    port: 6697
    ssl: true
    sslselfsign: false
    sasl: false
    allowExpiredCerts: false
    botConfig:
      enabled: true
      nick: "MatrixBot"
      username: "matrixbot"
      password: "some_password"
      joinChannelsIfNoUsers: true
Mapping IRC user modes to Matrix power levels

You can use the configuration below to map the conversion of IRC user modes to Matrix power levels. This enables bridging of IRC ops to Matrix power levels only; it does not enable the reverse. If a user has been given multiple modes, the one that maps to the highest power level will be used.

    modePowerMap:
      o: 50
      v: 1
Configuring DMs between users

By default, private messaging is enabled via the bridge, and Matrix Direct Message rooms can be federated. You can customise this behaviour using the privateMessages: config section.

    privateMessages:
      enabled: true
      federate: true
Mapping IRC Channels to Matrix Rooms

Whilst a user can use the !join command (if Dynamic Channels are enabled) to manually connect to IRC Channels, you can specify mappings of IRC Channels to Matrix Rooms up-front; one Channel can be mapped to multiple Matrix Rooms. The Matrix Room must already exist, and you will need to include its Room ID within the configuration - you can get this ID by using the 3-dot menu next to the room, and opening Settings.

See the below example configuration for mapping the #welcome IRC Channel:

    mappings:
      "#welcome":
        roomIds: ["!exampleroomidhere:example.com"]
Allowing !join with Dynamic Channels

If you would like users to be able to use the !join command to join any allowed IRC Channel, you will need to configure dynamicChannels:.

You may remember you set an alias prefix in the Miscellaneous section above. If you wish to fully customise the format of aliases of bridged rooms, you should remove that `alias_prefix:` line. However, the only benefit to this would be to add a suffix to the Matrix Room alias, so it is not recommended.

In addition you could also specify the below, though it is unlikely you should need to specify the exact Matrix Room Version to use.

    dynamicChannels:
      enabled: true
      createAlias: true
      published: true
      useHomeserverDirectory: true
      joinRule: invite
      federate: true
      aliasTemplate: "#irc_$CHANNEL"
      whitelist:
        - "@foo:example.com"
        - "@bar:example.com"
      exclude: ["#foo", "#bar"]
Exclude users from using the bridge

Using the excludedUsers: configuration you can specify Regex to identify users to be kicked from any IRC Bridged rooms.

    excludedUsers:
      - regex: "@.*:evilcorp.com"
        kickReason: "We don't like Evilcorp"
Syncing Matrix and IRC Membership lists

To manage and control how Matrix and IRC membership lists are synced you will need to include membershipLists: within your config.

Within membershipLists: are the following sections: global:, rooms:, channels: and ignoreIdleUsersOnStartup:. For global:, rooms: and channels: you can specify initial:, incremental: and requireMatrixJoined:, which all default to false. You can configure settings globally using global:, or specific to Matrix Rooms with rooms:, or to IRC Channels via channels:.

The last section is ignoreIdleUsersOnStartup: which determines if the bridge should ignore users which are not considered active on the bridge during startup.

    membershipLists:
      enabled: false
      floodDelayMs: 10000

      global:
        ircToMatrix:
          initial: false
          incremental: false
          requireMatrixJoined: false

        matrixToIrc:
          initial: false
          incremental: false
          
      rooms:
        - room: "!fuasirouddJoxtwfge:localhost"
          matrixToIrc:
            initial: false
            incremental: false

      channels:
        - channel: "#foo"
          ircToMatrix:
            initial: false
            incremental: false
            requireMatrixJoined: false

      ignoreIdleUsersOnStartup:
        enabled: true
        idleForHours: 720
        exclude: "foobar"
Configuring how IRC users appear in Matrix

As part of the bridge, IRC users and their messages will appear in Matrix as Matrix users; you will be able to click on their profiles and perform actions just as with any other user. You can configure how they are displayed using matrixClients:.

You may remember you set a user name prefix in the Miscellaneous section above. If you wish to fully customise the format of your IRC users' Matrix User IDs you should remove that `users_prefix:` line. However, the only benefit to this would be to add a suffix to the Matrix User ID, so it is not recommended.

    matrixClients:
      userTemplate: "@irc_$NICK"
      displayName: "$NICK"
      joinAttempts: -1
Configuring how Matrix users appear in IRC

As part of the bridge, Matrix users and their messages will appear in IRC as IRC users; you will be able to perform IRC actions on them like any other user. You can configure how this functions using ircClients:.

You can also optionally configure the following; these do not need to be included in your config if you are not changing their default values.

    ircClients:
      nickTemplate: "$DISPLAY[m]"
      allowNickChanges: true
      maxClients: 30
      # ipv6:
      #   only: false
      idleTimeout: 10800
      reconnectIntervalMs: 5000
      concurrentReconnectLimit: 50
      lineLimit: 3
      realnameFormat: "mxid"
      # pingTimeoutMs: 600000
      # pingRateMs: 60000
      kickOn:
        channelJoinFailure: true
        ircConnectionFailure: true
        userQuit: true

Deploying the IRC Bridge

Once you have made the required changes to your Bridge.yml configuration, make sure you find and click the Save button at the bottom of the IRC Bridge configuration page to ensure your changes are saved.

You will then need to re-deploy for any changes to take effect; as above, ensure all changes made are saved, then click Deploy.

Using the Bridge

For usage of the IRC Bridge via its bot user, see the Using the Matrix IRC Bridge documentation, or for end-user focused documentation see Using the Matrix IRC Bridge as an End User.

If you have set up mapping of rooms in your Bridge.yml, some rooms will already be connected to IRC; users need only join the bridged room and start messaging. IRC users should see Matrix users in the Channel and be able to communicate with them like any other IRC user.

Integrations

Setting Up the SIP Bridge

Configuring SIP bridge

Basic config

From the Installer's Integrations page, click "Install" under "SIP Bridge"

For the provided sipbridge.yml, please see the following documentation:

- `postgres_create_in_cluster`: `true` to create the postgres db in the k8s cluster. On a standalone deployment, it is necessary to define the `postgres_data_path`.
- `postgres_fqdn`: The fqdn of the postgres server. If using `postgres_create_in_cluster`, you can choose the name of the workload.
- `postgres_data_path`: The path on the host where postgres data is stored, e.g. `/mnt/data/sipbridge-postgres`.
- `postgres_port`: The port of the postgres server, typically `5432`.
- `postgres_user`: The user to connect to the db.
- `postgres_db`: The name of the db.
- `postgres_password`: A password to connect to the db.
- `port_type`: `HostPort` or `NodePort` depending on which kind of deployment you want to use. On a standalone deployment, we advise you to use `HostPort` mode.
- `port`: The port on which to expose the SIP protocol. In `NodePort` mode, it should be in the Kubernetes NodePort range (30000-32767).
- `enable_tcp`: `true` to enable TCP SIP.
- `pstn_gateway`: The hostname of the PSTN Gateway.
- `external_address`: The external address of the SIP Bridge.
- `proxy`: The address of the SIP Proxy.
- `user_agent`: A user agent for the sip bridge.
- `user_avatar`: An MXC url to the sip bridge avatar. Don't define it if you have not uploaded any avatar.
- `encryption_key`: A 32 character long secret used for encryption. Generate this with `pwgen 32 1`.
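
For illustration, a minimal sipbridge.yml for a standalone deployment might look like the below sketch; every hostname, path and secret shown is a placeholder you must replace with your own values:

    postgres_create_in_cluster: true
    postgres_fqdn: "sipbridge-postgres"
    postgres_data_path: "/mnt/data/sipbridge-postgres"
    postgres_port: 5432
    postgres_user: "sipbridge"
    postgres_db: "sipbridge"
    postgres_password: "some_password"
    port_type: "HostPort"
    port: 5060
    enable_tcp: true
    pstn_gateway: "pstn.example.com"
    external_address: "sip.example.com"
    proxy: "sip-proxy.example.com"
    user_agent: "Element SIP Bridge"
    encryption_key: "<output of pwgen 32 1>"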
Integrations

Setting Up the XMPP Bridge

Configuring the XMPP Bridge

The XMPP bridge relies on the XMPP "component" feature, which is the equivalent of Matrix application services. You need to configure an XMPP Component on an XMPP Server that the bridge will use to bridge Matrix and XMPP users.

On the hosting machine

From the Installer's Integrations page, click "Install" under "XMPP Bridge".

Examples

In all the examples below the following are set:

Prosody Example

If you are configuring prosody, you need the following component configuration (for the sample xmpp server, matrix.xmpp.example.com):

    Component "matrix.xmpp.example.com"
        ssl = {
          certificate = "/etc/prosody/certs/tls.crt";
          key = "/etc/prosody/certs/tls.key";
        }
        component_secret = "eeb8choosaim3oothaeGh0aequiop4ji"

And then with that configured, you would pass the following into xmpp.yml:

    xmpp_service: xmpp://xmpp.example.com:5347
    xmpp_domain: "matrix.xmpp.example.com" # external component subdomain
    xmpp_component_password: eeb8choosaim3oothaeGh0aequiop4ji # xmpp component password

Note: We've used pwgen 32 1 to generate the component_secret.

Joining an XMPP Room

Once you have the XMPP bridge up, you need to map an XMPP room to a Matrix ID. For example, if the room on XMPP is named #welcome@conference.xmpp.example.com, where conference.xmpp.example.com is the FQDN of the component hosting rooms for your XMPP instance, then on Matrix, you would join:

#_xmpp_welcome_conference.xmpp.example.com:example.com

So you can simply send the following command in your Element client to jump into the XMPP room via Matrix:

/join #_xmpp_welcome_conference.xmpp.example.com:example.com

Joining a Matrix room from XMPP

If the Element/Matrix room is public, you should be able to query the room list at the external component server address (e.g. matrix.xmpp.example.com).

The Matrix room at alias #roomname:example.com maps to #roomname#example.com@matrix.xmpp.example.com on the XMPP server xmpp.example.com, if your xmpp_domain is matrix.xmpp.example.com.

Note: If the Matrix room has users with the same name as your XMPP account, you will need to edit your XMPP nickname to be unique in the room.

Element ↔ XMPP mappings:

#roomname:element.local (native Matrix room) ↔ #roomname#element.local@element.xmpp.example.com (bridged into XMPP)
#_xmpp_roomname_conference.xmpp.example.com:element.local (bridged into Matrix/Element) ↔ #roomname@conference.xmpp.example.com (native XMPP room)

Using the bridge as an end user

For end user documentation you can visit the Using the Matrix XMPP Bridge as an End User documentation.

Integrations

Setting up Location Sharing

Overview

The ability to send a location share, whether static or live, is available without any additional configuration.

However, when receiving a location share, in order to display it on a map, the client must have access to a tile server. If it does not, the location will be displayed as text with coordinates.

By default, location sharing uses a MapTiler instance and API key that is sourced and paid for by Element. This is provided free, primarily for personal EMS users and those on Matrix.org.

If no alternate tileserver is configured either on the HomeServer or client then the mobile and desktop applications will fall back to Element's MapTiler instance. Self-hosted instances of Element Web will not fall back, and will show an error message.

Using Element's MapTiler instance

Customers should be advised that our MapTiler instance is not intended for commercial use: it does not come with any uptime or support SLA, and we are not under any contractual obligation to provide it or continue to provide it. For the most robust privacy, customers should either source their own cloud-based tileserver or self-host one on-premises.

However, if they wish to use our instance with Element Web for testing, demonstration or POC purposes, they can configure the map_style_url by adding extra configurations in the advanced section of the Element Web page in the installer:

{
   "map_style_url": "https://api.maptiler.com/maps/streets/style.json?key=fU3vlMsMn4Jb6dnEIFsx"
}

Using a different tileserver

If the customer sources an alternate tileserver, whether from MapTiler or elsewhere, you should enter the tileserver URL in the extra_client section of the Well-Known Delegation Integration accessed from the Integrations page in the Installer:

{
  ... other info ...
  "m.tile_server": {
    "map_style_url": "http://mytileserver.example.com/style.json"
  }
}

Self-hosting a tileserver

Customers can also host their own tileserver if they wish to dedicate the resources to doing so. Detailed information on how to do so is available here.

Changing permissions for live location sharing

By default, live location sharing is restricted to moderators of rooms. In direct messages, both participants are admins by default so this isn't a problem; however, it does impact public and private rooms. To change the default permissions for new rooms, the following Synapse additional configuration should be set:

default_power_level_content_override:
  private_chat:
    events:
      "m.beacon_info": 0
      "org.matrix.msc3672.beacon_info": 0
      "m.room.name": 50
      "m.room.power_levels": 100
      "m.room.history_visibility": 100
      "m.room.canonical_alias": 50
      "m.room.avatar": 50
      "m.room.tombstone": 100
      "m.room.server_acl": 100
      "m.room.encryption": 100
  # Not strictly necessary as this is used for direct messages, however if additional users are later invited into the room they won't be administrators
  trusted_private_chat:
    events:
      "m.beacon_info": 0
      "org.matrix.msc3672.beacon_info": 0
      "m.room.name": 50
      "m.room.power_levels": 100
      "m.room.history_visibility": 100
      "m.room.canonical_alias": 50
      "m.room.avatar": 50
      "m.room.tombstone": 100
      "m.room.server_acl": 100
      "m.room.encryption": 100
  public_chat:
    events:
      "m.beacon_info": 0
      "org.matrix.msc3672.beacon_info": 0
      "m.room.name": 50
      "m.room.power_levels": 100
      "m.room.history_visibility": 100
      "m.room.canonical_alias": 50
      "m.room.avatar": 50
      "m.room.tombstone": 100
      "m.room.server_acl": 100
      "m.room.encryption": 100
Integrations

Removing Legacy Integrations

Currently, if you remove a YAML integration's config, its components will not be removed from the cluster automatically; you will need to manually remove the custom resources from the Kubernetes cluster.
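
For example, you can list the ESS custom resource types and delete any leftover resources with kubectl; the resource, name and namespace below are placeholders:

kubectl api-resources --api-group=matrix.element.io
kubectl delete <resource>/<name> -n <namespace>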

Removing Monitoring stack

You first need to delete the VMSingle and the VMAgent from the namespace:

kubectl delete vmsingle/monitoring -n <monitoring ns>
kubectl delete vmagent/monitoring -n <monitoring ns> 

Once done, you can delete the namespace: kubectl delete ns/<monitoring ns>

Integrations

Setting up Sliding Sync

Introduction to Sliding Sync

Sliding Sync is a backend component required by the Element X client beta. It provides a mechanism for the fast synchronisation of Matrix rooms. It is not recommended for production use and is only provided to enable use of the Element X client. The current version does not support SSO (OIDC/SAML/CAS). If you wish to try out the Element X client, you need to be using password-based auth for Sliding Sync to work. SSO support (OIDC/SAML/CAS) will be added in a later version of the Sliding Sync tooling.

Installing Sliding Sync

From the integrations page, simply click the install button next to Sliding Sync:

slidingsync-integrations.png

This will take you to the following page:

slidingsync1.png

You should be able to ignore both the sync secret and the logging, but if you ever wanted to change them, you can do that here.

If you are using an external PostgreSQL database, then you will need to create a new database for sliding sync and configure that here:

slidingsync2.png
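
As a sketch (the user, database name and password below are placeholders), on an external PostgreSQL server you could create a dedicated user and database for Sliding Sync with psql:

CREATE USER slidingsync WITH PASSWORD 'some_password';
CREATE DATABASE slidingsync OWNER slidingsync;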

You will also need to set two values in the "Advanced" section -- the FQDN for sliding sync:

slidingsync-fqdn.png

and the certificates for serving that FQDN over SSL:

slidingsync-certs.png

Integrations

Setting up Element Call

Introduction

Element Call is Element's next generation of video calling, set to replace Jitsi in the future. Element Call is currently an experimental feature so please use it accordingly; it is not expected to replace Jitsi yet.

How to set up Element Call

Required domains

In addition to the core set of domains for any ESS deployment, an Element Call installation on ESS uses the following domains:

Ensure you have acquired DNS records for these domains before installing Element Call on your ESS instance.

Required ports

Ensure that any firewalls in front of your ESS instance allow external traffic on the following ports:

Basic installation

In the Admin Console, visit the Configure page, select Integrations on the left sidebar, and select Element Call (Experimental).

On the next page, the SFU > Networking section must be configured. Read the descriptions of the available networking modes to decide which is appropriate for your ESS instance.

Next, click the Advanced button at the bottom of the page to reveal the Kubernetes section, then click the Show button in that section.

In the section that appears, configure the Ingress and Ingresses > SFU sections with the Element Call Domain and Element Call SFU Domain (respectively) that you acquired earlier, as well as their TLS sections to associate those domain names with an SSL certificate for secure connections.

Other settings on the page may be left at their defaults, or set to your preference.

How to set up Element Call for airgapped environments

Your ESS instance must host Coturn in order for Element Call to function in airgapped environments. To do this, click Install next to Coturn from the integrations page.

On the Coturn integration page, set the External IP at which clients should be able to reach your ESS instance, the Coturn Domain, and at least STUN TURN.

Then, within the Element Call integration page, ensure SFU Networking has no STUN Servers defined. This will cause the deployed Coturn to be used by connecting users as the STUN server to discover their public IP address.

Element Call with guest access

By default, Element Call shares the same user access restrictions as the Synapse homeserver. This means that unless Synapse has been configured to allow guest users, calls on Element Call are accessible only to Matrix users registered on the Synapse homeserver. However, enabling guest users in Synapse to allow unregistered access to Element Call opens up the entire homeserver to guest account creation, which may be undesirable.

To solve the needs of allowing guest access to Element Call while blocking guest account creation on the homeserver, it is possible to grant guest access via federation with an additional dedicated homeserver, managed by an additional ESS instance. This involves a total of two ESS instances:

Guest access to Element Call is achieved via a closed federation between the two instances: the main instance federates with the guest instance and any other homeservers it wishes to federate with, and the guest homeserver federates only with the main instance. This allows unregistered users to join Element Call on the main instance by creating an account on the guest instance with open registration, while preventing these guest accounts from being used to reach any other homeservers.

How to set up Element Call with guest access

Integrations

Setting Up the Skype for Business Bridge

Configuring the Skype for Business Bridge

Domains and certificates

The first step in preparing a Skype for Business (S4B) Bridge is to assign it a hostname via which other S4B Server deployments can connect to it over SIP federation. This requires configuring DNS records and obtaining a TLS certificate for that hostname, which can be any name of your choosing.

The hostname assigned to a S4B Bridge is also known as its "SIP domain", as it serves as the domain name of the virtual SIP server managed by the bridge for federating with S4B Servers. The rest of this guide refers to a bridge's SIP domain as <bridge-sipdomain>.

Once you've chosen a hostname to assign to your bridge, other S4B Servers must be able to resolve that hostname to the bridge's public IP address via DNS. The most straightforward way to achieve this is to obtain public DNS records for <bridge-sipdomain>. If obtaining public records is not an option, an S4B Server administrator may configure it with internal records instead (which is outside the scope of this guide).

The DNS records to obtain are as follows:

You must also obtain a TLS certificate for <bridge-sipdomain>. It may be obtained either from a public CA like Let's Encrypt, or via any PKI scheme shared between the bridge and any S4B Servers it must connect with.

Basic config

From the Installer's Integrations page, click "Install" under "Skype for Business Bridge".

The most important configuration options are under Advanced > Exposed Services, which is where to set the SIP domain & TLS certificates of the bridge:

Configuring Skype for Business Server

In order for a S4B Server deployment to connect to your bridge, the deployment must first be configured with an Edge Server to support SIP federation & to explicitly allow federation with the SIP domain of the bridge.

This section describes how to modify an existing S4B Server deployment to federate with the bridge. It assumes that a functional S4B Server deployment has already been prepared; details on how to install a S4B Server deployment from scratch are out-of-scope of this guide.

Overview

To support SIP federation, a S4B Server deployment uses a pool of one or more Edge Servers to relay traffic from external SIP domains to the pool of internal servers that provide the core functionality of the deployment, known as Front End Servers. This design is necessary because Front End Servers are meant to be run within the private network of a deployment, without access to external networks.

Edge Servers are also used as a proxy for allowing native S4B users to log in from outside the deployment's private network. Users who connect in this manner are known as "remote users".

Once equipped with an Edge Server, a S4B Server deployment must then be configured with which external SIP domains it may federate with. By default, traffic from all external SIP domains is blocked.

The S4B Bridge acts as a SIP endpoint with its own SIP domain. Thus, for it to connect to a S4B Server deployment, the deployment must not only be equipped with an Edge Server, but it must set the bridge's SIP domain as an "allowed" domain.

Below is a simple diagram of the network topology of a S4B Server deployment federated with a S4B Bridge:

external S4B clients <───> Edge Pool <───> S4B Bridge <~~~> Matrix homeserver <═══> Matrix clients
                               A                                  A
                               │                                  ╏
                               V                                  V
internal S4B clients <─> Front End Pool                     Matrix homeserver <═══> Matrix clients

<───>: SIP
<~~~>: Matrix Application Service API
<═══>: Matrix Client-Server API
<╍╍╍>: Matrix Federation API

This guide covers only the use case of a single Front End Server and Edge Server. It is expected that similar instructions apply to multi-server pools, but this has not been tested.

Prerequisites

A S4B Server deployment must be prepared with at least the following components in order for it to be capable of adding an Edge Server:

Such a deployment will have set some hostnames, which are referred to elsewhere in this guide as follows:

Deploying the Edge Server

An Edge Server must be deployed on a standalone host within the private network of the S4B Server deployment. It cannot be co-located on the same host as the Front End Server (source).

The OS to install on the Edge Server's host must be either Windows Server 2019 or 2016. Other versions of Windows Server, even newer versions, will not work (source). It should also be the same version of Windows Server that is installed on the host running the Front End Server. The host must also be outside of the Active Directory domain of the deployment.

Assign the host with a name of your choosing, which will be referred to elsewhere in this guide as <edge>. The internal FQDN of the host is therefore <edge>.<s4b-intdomain>.

After installing the OS, ensure Internet connectivity and perform Windows Update. Then, use the Server Manager desktop app (which can be found in Windows Search) to install the prerequisites listed by the official S4B documentation. Do not install any components needed for a Front End Server, as they may interfere with Edge Server components. It is also recommended not to install IIS on the Edge Server, despite the official documentation, as it interferes with VoIP functionality.

Next, install the Skype for Business Administrative Tools. You may use the same installation media that was used for installing the Front End Server. Otherwise, it may be obtained from this download link.

Running the installation media will install two programs, known as the Core Components: the Deployment Wizard and the Management Shell. When using the Deployment Wizard on the Edge Server's host, do not run any tasks related to Active Directory, which should have already been run on the Front End Server, and must be run only once for the entire deployment. It is also unnecessary to install the rest of the Administrative Tools, such as the Topology Builder, on the Edge Server host.

Network topology

The network interfaces of hosts within the deployment must be configured such that inbound external SIP traffic is handled solely by one interface of the Edge Server, and that traffic between the Edge and Front End Servers remains within the private network of the deployment.

The Edge Server needs at least two network interfaces:

Also, the firewall of the Edge Server must at least leave port 5061 open, and have it accessible to either the public Internet, or to the public IP address of your S4B Bridge host.

The Front End Server needs at least one network interface, which must be an internal-facing interface with the same properties as the Edge Server's internal-facing interface. If Internet connectivity is desired (such as for facilitating Phone Access & Meeting URLs), add a separate external-facing interface for handling external traffic, instead of making the internal-facing interface publicly routable.

The IP addresses of these interfaces are referred to elsewhere in this guide as follows:

DNS records

Internal records

The deployment needs an internal DNS record for the Edge Server's internal-facing interface in order to identify it by name. To add this record, open the DNS Manager on the Domain Controller host, and add an A/AAAA record for <edge>.<s4b-intdomain>, the FQDN of the Edge Server host, with the target address set to <edge-intaddr>.

External records

In order for your S4B Bridge to reach your Edge Server, acquire these public DNS records for advertising the SIP domain of your S4B Server deployment:

Topology configuration

The topology of your S4B Server deployment may now be updated to include the Edge Server.

On the Front End Server, open the Topology Builder. Choose the option to download the current topology to a file, as this will ensure that you will edit an up-to-date version of the topology in the following steps.

Once the topology is loaded, navigate through the tree list on the left of the window to find the "Edge pools" entry (under "Skype for Business" > "site" > "Skype for Business Server 2019" > "Edge Pools"), right click it, select "New Edge Pool...", and apply the following settings in the wizard that appears:

Next, in the settings for your site (available by right-clicking the tree entry immediately below the top-level "Skype for Business Server" item and choosing "Edit Properties"), enable:

All required topology changes have now been set. To apply these changes onto the Front End Server:

The topology must next be published onto the Edge Server. To do so:

Certificates

S4B sends/receives all SIP traffic over TLS; thus, the Edge Server needs its own set of certificates, both internal & external to the S4B Server deployment.

To obtain all required certificates, open the Deployment Wizard on the Edge Server, click "Install or Update Skype for Business Server System", and execute the "Request, Install or Assign Certificates" task. This will display the Certificate Wizard, which shows a list of all required certificates and the service domain names each must contain. Only two certificates should be listed: "Edge internal" and "External Edge certificate (public Internet)".

The "Edge internal" certificate should be obtained by sending a certificate signing request to the Domain Controller in your deployment, which acts as an internal Certificate Signing Authority. To do so, click the "Edge internal" entry in the list, then click the Request button on the right edge of the window. This will display a dialog that guides you through the steps of sending the request. Once the request is sent, enter the Domain Controller, accept the request, and then go back to the Edge Server to assign the approved certificate.

In contrast, the "External Edge certificate" must be provided by a Certificate Authority that is trusted by the host running the S4B Bridge. This may be a public CA such as Let's Encrypt, or any custom PKI scheme of your choosing. If using the latter, ensure that the root CA's certificate is installed on both the Edge Server host and the S4B Bridge host.

The "External Edge certificate" must contain these names:

Once the certificate is obtained, use the Certificate Wizard on the Edge Server to assign it.

Restart to apply changes

Changes to server topology require restarting system services on both the Front End Server and Edge Server. To do so, open the Management Shell on each server, and run these commands:

  1. Run Stop-CsWindowsService on the Edge Server, and wait for it to complete.
  2. Run Stop-CsWindowsService on the Front End Server, and wait for it to complete.
  3. Run Start-CsWindowsService on the Front End Server, and wait for it to complete.
  4. Run Start-CsWindowsService on the Edge Server, and wait for it to complete.

Federation settings

With the topology in place, the S4B Server deployment may now be configured to allow federation with your S4B Bridge. Federation settings may be applied on the Front End Server either in the web admin panel at https://<frnt>.<s4b-intdomain>/macp, or via Powershell commands in the Management Shell. This section lists each setting that must be applied in the web admin panel, followed by its equivalent Powershell in the Management Shell.

Log into the admin panel using the credentials of your Windows account on the Front End Server, and expand the "Federation and External Access" section on the left sidebar. Then, navigate to the following sections and apply these settings:

To verify any of these settings in Powershell, replace New- or Set- in any of the issued commands with Get-. To unapply a setting, use Remove-.

These changes may take some time before they get applied. When in doubt, restart all services by running Stop-CsWindowsService then Start-CsWindowsService in the S4B Server Management Shell on both the Front End Server and the Edge Server.

Contact mapping

Matrix users in S4B

Once a S4B Server is connected to an instance of the bridge, a Matrix user may be added to a S4B user's contact list as a "Contact Not in My Organization". The S4B desktop client provides this action via the "Add a contact" button, which is on the right edge of the main window just below the contact search bar.

Proceeding will display a prompt to set the IM Address of the contact to be added. Technically, an IM Address is a SIP address without the leading sip: scheme.

The IM Address of a Matrix user managed by the bridge is derived from the user's MXID, and has the following mapping:

@username:matrixdomain → username+homeserver@bridge-sipdomain
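
As a worked example with placeholder names (and reading homeserver as the user's homeserver domain): the Matrix user @alice:example.com, bridged via a S4B Bridge whose SIP domain is bridge.example.net, would have the IM Address alice+example.com@bridge.example.net.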

S4B users in Matrix

S4B users are represented in Matrix by virtual "ghost" users managed by the bridge. The MXID of a virtual S4B user is derived from the "Bridge > User Prefix" setting (from the bridge's Integrations configuration page in the Installer) and the IM Address (i.e. the SIP Address) of the virtual user's corresponding S4B user, and has the following mapping:

username@s4b-sipdomain → @<user-prefix>sip=3ausername=40s4b-sipdomain:matrixdomain

Thus, with a <user-prefix> of _s4b_, the IM Address to MXID mapping is:

username@s4b-sipdomain → @_s4b_sip=3ausername=40s4b-sipdomain:matrixdomain
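
For example (placeholder names throughout): the S4B user alice@contoso.com would appear on the Matrix homeserver example.com as @_s4b_sip=3aalice=40contoso.com:example.com, where =3a and =40 are the escaped forms of : and @ from the underlying SIP address sip:alice@contoso.com.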

Advanced Configuration

Need help doing something more advanced? See guides for Helm Chart installs, Synapse Workers and more!

Advanced Configuration

Synapse Section: Additional Config

The Additional Config section, which allows you to include config from the Configuration Manual that is not currently configurable via the UI, is available under the 'Advanced' section of the Synapse page.

We strongly advise against including any config not configurable via the UI, as it will most likely interfere with settings automatically computed by the updater. Additional configuration options are not supported, so we encourage you to first raise your requirements with Support, who can best advise on them.

Configuration should follow the same format as supplied by the Configuration Manual. If you include options that have otherwise been configured via the UI, they will be overridden, with the exception of MAU, Federation and Data Retention (see Nonoverridable Config). As noted above, any additional config carries the risk of interfering with settings automatically computed by the updater.

What version of Synapse am I running?

Remember to set the configuration manual page to the version of Synapse deployed by the installer, otherwise you may see configuration options / guidance not applicable to the version of Synapse you have deployed.

You can determine the version of Synapse you have deployed by using kubectl describe pod first-element-deployment-synapse-main-0 -n element-onprem | grep version, changing the pod name as needed. This will output something like app.kubernetes.io/version=v1.93.0-lts.1-base, as such when you visit any link to the Configuration Manual, you should update the page to see the correct information for your version.

Known Issues

max_mau_value, limit_usage_by_mau, federation and retention

Configuring these via Additional Config, in conflict with values set via the UI, will not override the UI-set values. As such, we do not advise including them or any related settings within the Additional Config, as they carry an increased risk of causing issues with your deployment.

auto_join_rooms

Due to how the installer sets up Synapse, the auto_join_rooms option will only work when configured as required on the first deployment. Should you configure this on an existing deployment, or change the rooms on a subsequent deployment, it will not function and you'll see various errors within the Synapse pod logs. To resolve this, you will need to manually create the rooms and specify auto_join_mxid_localpart in your config. If you're using AdminBot / AuditBot, either would be a perfect candidate for the specified MXID, as you can be sure they will be in any room you specify. See the sketch below.
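
As an illustration, assuming a bot whose MXID localpart is adminbot and a pre-created room #welcome:example.com (both placeholder values), the relevant Additional Config would look like:

auto_join_rooms:
  - "#welcome:example.com"
auto_join_mxid_localpart: adminbot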

Therefore in order to get this setup, you'll need to follow these steps:

As usual, with auto_join_rooms, the caveat is that changing the rooms will not automatically join previously registered users to the updated rooms. To automate this you will likely need to make use of the Admin API, see Using Python with the Admin + Client-Server APIs, specifically Example #1: Join Users to Rooms would be a good starting point.

Exceptions

While use of Additional Config is not recommended, there are certain circumstances, built in to the UI, where you are expected to defer to configuration options specified within the Additional Config block. These exceptions are covered here; however, please be advised that using them still carries a risk of instability, so we'd recommend sticking with options fully supported by the UI itself.

Custom Registration

Within the Synapse section of the installer, as part of the registration configuration, you can select Custom. When doing so, configuration of Registration should be done via Additional Config, allowing you more control. Options that can be configured can be found in the linked Registration section of the Synapse Configuration Manual, and include:
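
For example, a token-gated registration setup via Additional Config might look like the below sketch, using the standard enable_registration and registration_requires_token Synapse options:

enable_registration: true
registration_requires_token: true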

Allowing Private Federation via ip_range_whitelist

By default, private IP ranges are blacklisted, per ip_range_blacklist. So when looking to privately federate two homeservers that communicate over one of these private ranges, federation will fail unless you specify said range using ip_range_whitelist, showing errors like the below:

synapse.http.federation.well_known_resolver - 259 - INFO - GET-369 - Fetching https://server2.example.com/.well-known/matrix/server
synapse.http.client - 199 - INFO - sentinel - Blocked 172.20.8.127 from DNS resolution to server2.example.com

To resolve this, you will need to add the following to the Additional config:

ip_range_whitelist:
   - '172.16.0.0/12'

Config Example

When setting additional config via the UI, the following would be added to your deployment.yml:

spec:
  components:
    synapse:
      config:
        additional: |-
          ip_range_whitelist:
             - '172.16.0.0/12'
Advanced Configuration

Synapse Section: Workers

The Workers section, which allows you to configure Synapse Workers, is available under the 'Advanced' section of the Synapse page.

What are Synapse Workers

Synapse is built on Python, an inherent limitation of which is that only one thread can execute at a time (due to the GIL). To allow for horizontal scaling, Synapse is built to split out functionality into multiple separate Python processes. While for small instances it is recommended to run Synapse in the default monolith mode, for larger instances where performance is a concern it can be helpful to split out functionality into these separate processes, called Workers.

(Diagrams: Synapse without Workers vs. Synapse with Workers.)

For a detailed high-level overview of workers, see the How we fixed Synapse's Scalability blogpost.

Benefits of Using Workers

  1. Scalability. By distributing tasks across multiple processes, Synapse can handle more concurrent operations and better utilize system resources.
  2. Fault Isolation. If a specific worker crashes, it only affects the functionality it handles, rather than bringing down the entire server.
  3. Performance Optimisation. By dedicating workers to specific high-demand tasks, you can improve the overall performance by removing bottlenecks.

Worker ↔ Synapse Communication

The separate Worker processes communicate with each other via a Synapse-specific protocol called 'replication' (analogous to MySQL- or Postgres-style database replication), which feeds streams of newly written data between processes so they can be kept in sync with the database state.

Synapse uses a Redis pub/sub channel to send the replication stream between all configured Synapse processes. Additionally, processes may make HTTP requests to each other, primarily for operations which need to wait for a reply ─ such as sending an event.

All the workers and the main process connect to Redis, which relays replication commands between processes; Synapse uses it as a shared cache and as a pub/sub mechanism.

How to configure

Click on Add Workers

You have to select a Worker Type. Here are the workers which may be useful to you:

If you are experiencing resource congestion, you can try to reduce the resources requested by each worker. Be aware that

You will need to re-run the installer after making these changes for them to take effect.

Worker Types

The ESS Installer has a number of Worker Types, see below for a breakdown of what they are and how they work.

Appservice
Background
Client Reader
Encryption
Event Creator
Event Persister
Federation Inbound
Federation Reader
Federation Sender
Initial Synchrotron
Media Repository
Presence Writer
Pusher
Receipts Account
Sso Login
Synchrotron
Typing Persister
User Dir
Frontend Proxy
Advanced Configuration

Kubernetes Override Sections

Found under Advanced in any section where you configure a component of the installer, under the Kubernetes heading. Here you can override Kubernetes configuration for each component.

Common

Annotations

In Kubernetes, annotations are key-value pairs associated with Kubernetes objects like pods, services, and nodes. Annotations are meant to be used for non-identifying metadata and are typically used to provide additional information about the objects. Unlike labels, which are used for identification and organization, annotations are more free-form and can contain arbitrary data.

Annotations are often used for various purposes, such as:
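
As a generic illustration (the annotation keys shown are arbitrary examples, not keys ESS requires), annotations sit under an object's metadata:

metadata:
  annotations:
    example.com/owner: "chat-team"
    example.com/change-ticket: "OPS-1234"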

Ingress

Annotations

See explanation of annotations above

Services


Workloads

Annotations

See explanation of annotations above

Resources

Depending on the component you are viewing, you may see Limits and Requests broken out for each sub-component applicable to that component. When configuring Element Web you will only see the Limits and Requests config, for Integrator however, you will see Limits and Requests for each sub-component; Appstore; Integrator; Modular Widgets; and Scalar Web.

Limits
Requests
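
For reference, a Kubernetes resources override generally takes the following shape; the values shown are purely illustrative and should be sized for your deployment:

resources:
  limits:
    cpu: "1"
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 256Mi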

Security Context

Docker Secrets

Host Aliases

Advanced Configuration

Customise Containers used by ESS

In specific use cases you might want to change the image used for a specific pod, for example to add additional content, change web client features, etc. In general the steps to do this involve:

We strongly advise against customising any pods. Customised containers are not supported and may break your setup, so we encourage you to first raise your requirements with Support, who can best advise on them.

Non-Airgapped Environments

Creating the new Images Digests Config Map

In order to override images used by ESS during the install, you will need to inject a new ConfigMap which specifies the image to use for each component. Its structure maps the components of ESS; all of them can be overridden:

Config Example
data:
  images_digests: |
    # Copyright 2023 New Vector Ltd
    adminbot:
      access_element_web:
      haproxy:
      pipe:
    auditbot:
      access_element_web:
      haproxy:
      pipe:
    element_call:
      element_call:
      sfu:
      jwt:
      redis:
    element_web:
      element_web:
    groupsync:
      groupsync:
    hookshot:
      hookshot:
    hydrogen:
      hydrogen:
    integrator:
      integrator:
      modular_widgets:
      appstore:
    irc_bridges:
      irc_bridges:
    jitsi:
      jicofo:
      jvb:
      prosody:
      web:
      sysctl:
      prometheus_exporter:
      haproxy:
      user_verification_service:
    matrix_authentication_service:
      init:
      matrix_authentication_service:
    secure_border_gateway:
      secure_border_gateway:
    sip_bridge:
      sip_bridge:
    skype_for_business_bridge:
      skype_for_business_bridge:
    sliding_sync:
      api:
      poller:
    sydent:
      sydent:
    sygnal:
      sygnal:
    synapse:
      haproxy:
      redis:
      synapse:
    synapse_admin:
      synapse_admin:
    telegram_bridge:
      telegram_bridge:
    well_known_delegation:
      well_known_delegation:
    xmpp_bridge:
      xmpp_bridge:

Each container in this tree needs at least the following properties to override the download source:

image_repository_path: elementdeployment/vectorim/element-web
image_repository_server: localregistry.local

You can also override the image tag and the image digest if you want to enforce using digests in your deployment:

image_digest: sha256:ee01604ac0ec8ed4b56d96589976bd84b6eaca52e7a506de0444b15a363a6967
image_tag: v0.2.2

For example, to override the element_web/element_web container source path, the required ConfigMap manifest (e.g. images_digest_configmap.yml) would be:

Config Example
apiVersion: v1
kind: ConfigMap
metadata:
  name: config_map_name
  namespace: namespace_of_your_deployment
data:
  images_digests: |
    element_web:
      element_web:
        image_repository_path: mycompany/custom-element-web
        image_repository_server: docker.io
        image_tag: v2.1.1-patched

Notes:

The new ConfigMap can then be injected into the cluster with:

kubectl apply -f images_digest_configmap.yml -n <namespace of your deployment>

Configuring the installer

You will also need to configure the ESS Installer to use the new Images Digests Config Map by adding the <config map name> into the Cluster advanced section.

Supplying registry credentials

If your registry requires authentication, you will need to create a new secret. So for example, if your registry is called myregistry and the URL of the registry is myregistry.tld, the command would be:

kubectl create secret docker-registry myregistry --docker-username=<registry user> --docker-password=<registry password> --docker-server=myregistry.tld -n <your namespace>

The new secret can then be added into the ESS Installer GUI advanced cluster Docker Secrets:

Airgapped Environments

To perform these actions, you will need the airgapped archive extracted onto a host with an internet connection:

  1. Open a terminal; you will be using the crane binary found within the extracted airgapped directory. First, make sure to authenticate with any of the registries you will be downloading from using:

    airgapped/utils/crane auth login REGISTRY.DOMAIN -u EMS_USERNAME -p EMS_TOKEN
    

    You will need to do this for both ghcr.io and gitlab-registry:

    airgapped/utils/crane auth login gitlab-registry.matrix.org -u EMS_USERNAME -p EMS_TOKEN
    
    airgapped/utils/crane auth login ghcr.io -u EMS_USERNAME -p EMS_TOKEN
    
  2. Use the following to download the required image:

    airgapped/utils/crane pull --format tarball <imagename> image.tar
    

    Note: <imagename> should be formatted like so: registry/organisation/repo:version. For example, to download the Element Call Version 0.5.12 image, the <imagename> would be ghcr.io/vector-im/element-call:v0.5.12

    airgapped/utils/crane pull --format tarball ghcr.io/vector-im/element-call:v0.5.12 image.tar
    
    • For registry.element.io you will need to use skopeo instead i.e.:
      skopeo copy docker://registry.element.io/group-sync:v0.13.7-dbg docker-archive://$(pwd)/gsync-dbg.tar
      
  3. Then generate the image digest (used in the next step). Continuing the Element Call Version 0.5.12 example, use the below command to return the image digest string:

    airgapped/utils/crane --platform amd64 digest --tarball image.tar
    

    Returns:

    sha256:f16c6ef5954135fb4e4e0af6b3cb174e641cd2cbee901b1262b2fdf05ddcedfc
    
  4. Copy image.tar into the airgapped/images folder, renaming it to the digest string generated in step 3, <digest>.tar excluding the sha256: prefix. For our Element Call Version 0.5.12 example, the filename would be:

    f16c6ef5954135fb4e4e0af6b3cb174e641cd2cbee901b1262b2fdf05ddcedfc.tar
    
  5. Edit the images_digests.yml file also found in the airgapped/images folder, like so:

      <component_name>:
        <component_image>:
          image_digest: sha256:<digest>
          image_repository_path: <organisation>/<repo>
          image_repository_server: <registry>
          image_tag: <new version>
    

    For our Element Call Version 0.5.12 example, you would update like so:

      element_call:
        element_call:
          image_digest: sha256:f16c6ef5954135fb4e4e0af6b3cb174e641cd2cbee901b1262b2fdf05ddcedfc
          image_repository_path: vector-im/element-call
          image_repository_server: ghcr.io
          image_tag: v0.5.12
    

Handling new releases of ESS

If you are overriding images, you will need to make sure that your images are compatible with new releases of ESS. You can use a staging environment to test the upgrades, for example.

Advanced Configuration

Secrets

Under 'Advanced' in each section, you may find a block listing all the associated secrets configured as part of this section. This directly correlates to your secrets.yml and will allow you to remove secrets no longer required. For example, on the Cluster Section you may have uploaded a Certificate Authority CA.pem, you can use this block to remove it should it no longer be required.

It is not, however, advised to modify the contents of secrets from this view; you should always do so via the associated UI that configures it in the first place, see the below example from the Cluster section.

CA Pem
Config Example

If you have uploaded a Certificate Authority certificate, you will find it listed in this section, if a certificate was uploaded in error, you can use the 'Delete' button next to the entry to remove it.

Generic Shared Secret
Config Example

As with the CA certificate option above, this entry will be present due to the Generic Shared Secret; it is auto-generated and will be replaced if you change it via the associated UI (and click 'Save' / 'Continue'). It is not advised to edit this property here.

Advanced Configuration

How to run a Webserver on Standalone Deployments

This guide does not come with support from Element. It is not part of the Element Server Suite (ESS) product. Use at your own risk. Remember, you are responsible for maintaining this software stack yourself.

Some config options require web content to be served. For example:

One way to provide this content is to run a web server in the same microk8s Kubernetes Cluster as the Element Enterprise Suite.

You should first consider using an existing webserver before installing and maintaining an additional webserver for these requirements.

The following guide describes the steps to set up the Bitnami Apache helm chart in the Standalone microk8s cluster deployed by Element Server Suite.

Requirements:

Results:

This guide is applicable to the Single Node deployment of Element Server Suite but can be used for guidance on how to host a webserver in other Kubernetes Clusters as well.

You can use any webserver that you like; in this example we will use the Bitnami Apache chart.

We need helm version 3. You can follow this Guide or ask microk8s to install helm3.

Installing Prerequisites

Enabling Helm3 with microk8s

$ microk8s enable helm3
Infer repository core for addon helm3
Enabling Helm 3
Fetching helm version v3.8.0.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 12.9M  100 12.9M    0     0  17.4M      0 --:--:-- --:--:-- --:--:-- 17.4M
Helm 3 is enabled

Let's check if it is working

$ microk8s.helm3 version
version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.5"}

Create an Alias for helm

echo alias helm=microk8s.helm3 >> ~/.bashrc
source ~/.bashrc

Enable the Bitnami Helm Chart repository

Add the bitnami repository

helm repo add bitnami https://charts.bitnami.com/bitnami

Update the repo information

helm repo update

Preparation and Configuration

Prepare the Web-Server Content

Create a directory to supply content:

sudo mkdir /var/www/apache-content

Create a homepage home.html, e.g.:

<h2 style="text-align:center"><br />
Welcome to the Element Chat Server.</h2>

<p style="text-align:center">You can find a <a href="https://static.element.io/pdfs/element-user-guide.pdf">Getting Started Guide here</a></p>

<p style="text-align:center">Powered by&nbsp;<a href="https://matrix.org/">Matrix</a>, provided by <a href="http://element.io">Element</a>.</p>

<p style="text-align:center"><a href="https://element.BASEDOMAIN/#/directory">Explore rooms</a></p>

<p style="text-align:center"><strong><span style="font-size:20px"><span style="color:#c0392b">Create a Key Backup &amp; Passphrase now!<br />
(see Getting Started Guide p. 5)</span></span></strong></p>

Put your content into the apache-content directory:

cp /tmp/background.jpg /var/www/apache-content/
cp /tmp/home.html /var/www/apache-content/

There are multiple ways to provide this content to the apache pod. The bitnami helm chart supports ConfigMaps, Persistent Volumes or a Git Repository.

ConfigMaps are a good choice for smaller amounts of data. There is a hard limit of 1MiB on ConfigMaps. So if all your data is not more than 1MiB, the config map is a good choice for you.

Persistent Volumes are a good choice for larger amounts of data. There are several choices for backing storage available. In the context of standalone deployments of ESS, a HostPath volume is the most practical. HostPath is not a good solution for multi-node k8s clusters, unless you pin a pod to a certain node. Pinning the pod to a single node would put the workload at risk, should that node go down.

Git Repository is a favourite as it versions the content, and you can track and revert to earlier states easily. The bitnami apache helm chart is built in a way that it updates at regular intervals to your latest changes.

We are selecting the Persistent Volume option to serve content in this case. Our instance of Microk8s comes with the Hostpath storage addon enabled.

Define the persistent volume:

cat <<EOF>pv-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: apache-content-pv
  labels:
    type: local
spec:
  storageClassName: microk8s-hostpath
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/www/apache-content"
EOF

Apply to the cluster

kubectl apply -f pv-volume.yaml

Next we need a Persistent Volume Claim:

cat <<EOF>pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: apache-content-pvc
spec:
  volumeName: apache-content-pv
  storageClassName: microk8s-hostpath
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 100Mi } }
EOF

Apply to the cluster to create the pvc

kubectl apply -f pv-claim.yaml

Configure the Helm Chart

We need to add configurations to adjust the apache deployment to our needs. The K8s service should be switched to ClusterIP. The Single Node deployment includes an Ingress configuration through nginx that we can use to route traffic to this webserver. The name of the ingressClass is "public". We will need to provide a hostname. This name needs to be resolvable through DNS. This could be done through the wildcard entry for *.$BASEDOMAIN that you might already have. You will need a certificate and certificate private key to secure this connection through TLS.

The full list of configuration options of this chart is explained in the bitnami repository here

Create a file called apache-values.yaml in the home directory of your element user.

Remember to replace BASEDOMAIN with the correct value for your deployment.

cat <<EOF>apache-values.yaml
service:
  type: ClusterIP
ingress:
  enabled: true
  ingressClassName: "public"
  hostname: pages.BASEDOMAIN
htdocsPVC: apache-content-pvc
EOF

Deployment

Deploy the Apache Helm Chart

Now we are ready to deploy the apache helm chart

helm install myhomepage -f apache-values.yaml oci://registry-1.docker.io/bitnamicharts/apache

Manage the deployment

List the deployed helm charts:

$ helm list 
NAME      	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART        	APP VERSION
myhomepage	default  	1       	2023-09-06 14:46:33.352124975 +0000 UTC	deployed	apache-10.1.0	2.4.57     

Get more details:

$ helm status myhomepage
NAME: myhomepage
LAST DEPLOYED: Wed Sep  6 14:46:33 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: apache
CHART VERSION: 10.1.0
APP VERSION: 2.4.57

** Please be patient while the chart is being deployed **

1. Get the Apache URL by running:

  You should be able to access your new Apache installation through:
      - http://pages.lutz-gui.sales-demos.element.io

If you need to update the deployment, modify the required apache-values.yaml and run:

helm upgrade myhomepage -f apache-values.yaml oci://registry-1.docker.io/bitnamicharts/apache

If you don't want the deployment any more, you can remove it.

helm uninstall myhomepage

Secure the deployment with certificates

If you are in a connected environment, you can rely on cert-manager to create certificates and secrets for you.

Cert-manager with letsencrypt

If you have cert-manager enabled, you will just need to add the right annotations to the ingress of your deployment. Modify your apache-values.yaml and add these lines to the ingress block:

  tls: true
  annotations: 
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: public

You will need to upgrade your deployment to reflect these changes:

helm upgrade myhomepage -f apache-values.yaml oci://registry-1.docker.io/bitnamicharts/apache
Custom Certificates

There are situations in which you want custom certificates instead. These can be used by modifying your apache-values.yaml: add the following lines to the ingress block, taking care to get the indentation right, and replace the ... with your data.

  tls: true
  extraTls:
  - hosts:
    - pages.lutz-gui.sales-demos.element.io
    secretName: "pages.lutz-gui.sales-demos.element.io-tls"
  secrets:
    - name: pages.lutz-gui.sales-demos.element.io-tls
      key: |-
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
      certificate: |-
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----

You will need to upgrade your deployment to reflect these changes:

helm upgrade myhomepage -f apache-values.yaml oci://registry-1.docker.io/bitnamicharts/apache

Tips and Tricks

You can make your life easier by using bash completion and an alias for kubectl. You will need to have the bash-completion package installed as a prerequisite.

For all users on the system:

kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null

Set an alias for kubectl for your user:

echo 'alias k=kubectl' >>~/.bashrc

Enable auto-completion for your alias

echo 'complete -o default -F __start_kubectl k' >>~/.bashrc

After reloading your shell, you can now enjoy auto-completion for your k (kubectl) commands.

Advanced Configuration

ESS CRDs support in ArgoCD

ArgoCD can support reporting the ESS CRDs status as resource health using Custom Health Checks.

You need to configure the following under the configmap argocd-cm of argocd:

data:
  resource.customizations: |
    matrix.element.io/*:
      health.lua: |
        hs = {}
        if obj.status ~= nil then
          if obj.status.conditions ~= nil then
            for i, condition in ipairs(obj.status.conditions) do
              if condition.type == "Failure" and condition.status == "True" then
                hs.status = "Degraded"
                hs.message = condition.message
                return hs
              end
              if condition.type == "Running" and condition.status == "True" and condition.reason ~= "Successful" then
                hs.status = "Progressing"
                hs.message = condition.message
                return hs
              end
              if condition.type == "Available" and condition.status == "True" then
                hs.status = "Healthy"
                hs.message = condition.message
                return hs
              end
              if condition.type == "Available" and condition.status == "False" then
                hs.status = "Degraded"
                hs.message = condition.message
                return hs
              end
              if condition.type == "Successful" and condition.status == "True" then
                hs.status = "Healthy"
                hs.message = condition.message
                return hs
              end
            end
          end
        end

        hs.status = "Progressing"
        hs.message = "Waiting for the CR to start to converge..."
        return hs
Advanced Configuration

Verifying ESS releases against Cosign

Cosign ESS Verification Key

ESS does not use the Cosign transaction log, in order to support airgapped deployments. Instead, we rely on a public key, which you can request if you need to run image verification in your cluster.

The ESS Cosign public key is:

-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE1Lc+7BqkqD+0XYft05CeXto/Ga1Y
DKNk3o48PIJ2JMrq3mzw13/m5rzlGjdgJCs6yctf4+UdACZx5WSiIWTFbQ==
-----END PUBLIC KEY-----

Verifying manually

To verify a container against the ESS key, run the following command:
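
The exact invocation depends on your registry and image; a minimal sketch, assuming the public key above has been saved as ess-cosign.pub and using a placeholder image reference:

cosign verify --key ess-cosign.pub registry.element.io/<image>:<tag>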

If you are running in an airgapped environment, you will need to append --insecure-ignore-tlog=true to the above command.

Verifying automatically

You will have to set up and configure your Sigstore admission policy to use the ESS public key.
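
As a sketch only: if you use the Sigstore policy-controller as your admission controller, a ClusterImagePolicy referencing the ESS public key might look like the following (the policy name and image glob are illustrative assumptions):

apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: ess-image-policy
spec:
  images:
    - glob: "registry.element.io/**"
  authorities:
    - key:
        data: |
          -----BEGIN PUBLIC KEY-----
          MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE1Lc+7BqkqD+0XYft05CeXto/Ga1Y
          DKNk3o48PIJ2JMrq3mzw13/m5rzlGjdgJCs6yctf4+UdACZx5WSiIWTFbQ==
          -----END PUBLIC KEY-----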

Advanced Configuration

Notifications, MDM & Push Gateway

The stock Android and iOS apps use an Element-owned Push Gateway to send notifications via the Apple and Google notification services.

The URL of our push gateway is https://matrix.org/_matrix/push/v1/notify

On startup, the apps register with the Google or Apple notification services (such as APNs) and request a push_notification_client_identifier. When notifications need sending, the homeserver uses the configured Push Gateway to send them through those services.

What is a Notification?

A notification will not contain sensitive content. This is what notifications actually look like:

▿ 5 elements
  ▿ 0 : 2 elements
    ▿ key : AnyHashable("unread_count")
      - value : "unread_count"
    - value : 1
  ▿ 1 : 2 elements
    ▿ key : AnyHashable("pusher_notification_client_identifier")
      - value : "pusher_notification_client_identifier"
    - value : ad0bd22bb90fabde45429b3b79cdbba12bd86f3dafb80ea22d2b1343995d8418
  ▿ 2 : 2 elements
    ▿ key : AnyHashable("aps")
      - value : "aps"
    ▿ value : 2 elements
      ▿ 0 : 2 elements
        - key : alert
        ▿ value : 2 elements
          ▿ 0 : 2 elements
            - key : loc-key
            - value : Notification
          ▿ 1 : 2 elements
            - key : loc-args
            - value : 0 elements
      ▿ 1 : 2 elements
        - key : mutable-content
        - value : 1
  ▿ 3 : 2 elements
    ▿ key : AnyHashable("room_id")
      - value : "room_id"
    - value : !vkibNVqwhZVOaNskRU:matrix.org
  ▿ 4 : 2 elements
    ▿ key : AnyHashable("event_id")
      - value : "event_id"
    - value : $0cTr40iZmOd3Aj0c65e_7F6NNVF_BwzEFpyXuMEp29g

We recommend that you use the stock Element apps from the Play Store or App Store together with the Push Gateway that we at Element host.

Mobile Device Management (MDM)

You can use Mobile Device Management to configure and roll out mobile applications. To be configurable this way, an app needs to implement certain interfaces in a standard way, known as AppConfig.

The Android Element app does not currently support AppConfig. You will need to rebuild the APK to include changes such as a different homeserver or a different pusherURL.

The iOS Element app gained AppConfig support in version 1.11.2. This allows the following parameters and keys to be changed without recompiling the app.

If you employ a Mobile Device Management solution such as VMware Workspace ONE, you will need to configure your iOS Element app with these keys as documented here in the section Publish and update Managed AppConfig for your app in Workspace ONE.

Depending on the brand of MDM you are using, you can create the required keys manually, or enable these settings with an XML file. The XML file might look like this:

<managedAppConfiguration>
     <version>1</version>
     <bundleId>im.vector.app</bundleId>
     <dict>
          <string keyName="im.vector.app.serverConfigDefaultHomeserverUrlString">
               <defaultValue>
                    <value>https://matrix.BASEDOMAIN</value>
               </defaultValue>
          </string>
          <string keyName="im.vector.app.clientPermalinkBaseUrl">
               <defaultValue>
                    <value>https://messenger.BASEDOMAIN</value>
               </defaultValue>
          </string>
     </dict>
</managedAppConfiguration>

Using your own Push Gateway (Sygnal)

Some organizations prefer not to use our Push Gateway. You can use your own push gateway (e.g. Sygnal) if you want.

You can install Sygnal as an integration with the Element Server Suite.

During the app upload process, a private key is created. We at Element retain and use that key on our push infrastructure. This is why you cannot use the stock Element apps, but will need to upload your own version of the Element app; this gives you access to your own private notification key, bound to the app you uploaded.

You will need to configure your Sygnal with the private key of your Element App.

You will need to set "im.vector.app.serverConfigSygnalAPIUrlString" for the iOS app, or the equivalent in the Android app source code.
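
A hedged sketch of the corresponding Sygnal configuration; the app IDs, key file path and credentials below are illustrative placeholders to replace with your own values:

apps:
  im.vector.app.ios:
    type: apns
    keyfile: /path/to/your_apns_key.p8
    key_id: <your APNs key id>
    team_id: <your Apple team id>
    topic: im.vector.app
  im.vector.app.android:
    type: gcm
    api_key: <your FCM server key>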

Advanced Configuration

Helm Chart Installation

Introduction

This document will walk you through how to get started with our Element Server Suite Helm Charts. These charts are provided to be used in environments which typically deploy applications by helm charts. If you are unfamiliar with helm charts, we'd highly recommend that you start with our Enterprise Installer.

General concepts

ESS deployments rely on the following components to deploy workloads on a Kubernetes cluster:

  1. Updater: reads an ElementDeployment CRD manifest and generates the associated individual Element CRD manifests, linked together
  2. Operator: reads the individual Element CRD manifests and generates the associated Kubernetes workloads
  3. ElementDeployment: this CRD is a simple structure following the pattern:
spec:
  global:
    k8s:
      # Global settings that will be applied by default to all workloads if not forced locally. This is where you will be able to configure a default ingress certificate, default number of replicas on the deployments, etc.
    config:
      # Global configuration that can be used by every element component
    secretName: # The global secret name. Required secret keys can be found in the description of this field using `kubectl explain`. Every config named `<foo>SecretKey` will point to a secret key containing the secret targeted by this secret name.
  components:
    <component name>:
      k8s: 
        # Local kubernetes configuration of this component. You can override here the global values to force a certain behaviour for each components.
      config:
        # This component configuration
      secretName: # The component secret name containing secret values. Required secret keys can be found in the description of this field using `kubectl explain`. Every config named `<foo>SecretKey` will point to a secret key containing the secret targeted by this secret name.
    <another component>:
      ...

Any change to the ElementDeployment manifest deployed in the namespace will trigger a reconciliation loop. This loop updates the Element manifests read by the Operator, which in turn triggers a reconciliation loop in the Operator process, updating the Kubernetes workloads accordingly.

If you manually change a workload, it will trigger a reconciliation loop and the Operator will override your change on the workload.

The deployment must be managed only through the ElementDeployment CRD.
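
For example, to change the configuration you would edit the CR itself rather than the generated workloads; assuming the deployment name and namespace used elsewhere in this documentation:

kubectl edit elementdeployment/first-element-deployment -n element-onprem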

Installing the Operator and the Updater helm charts

We advise you to deploy the helm charts using one of the following deployment models:

  1. Cluster-wide deployment : In this mode, the CRD conversion webhooks and the controller managers are deployed in their own namespace, separate from ESS deployments. They are able to manage ESS deployments in any namespace of the cluster. Installing and upgrading the helm chart requires cluster admin permissions.
  2. Namespace-scoped deployment : In this mode, only the CRD conversion webhooks require cluster admin permissions. The controller managers are deployed directly in the namespace of the Element deployment. Installing and upgrading ESS does not require cluster admin permissions as long as the CRDs do not change.

All-in-one deployment (Requires cert-manager)

When cert-manager is present in the cluster, it is possible to use the all-in-one ess-system helm chart to deploy the operator and the updater.

First, let's add the ess-system repository to helm, replacing ems_image_store_username and ems_image_store_token with the values provided to you by Element.

helm repo add ess-system https://registry.element.io/helm/ess-system --username <ems_image_store_username> --password '<ems_image_store_token>'

Cluster-wide deployment

When deploying ESS-System as a cluster-wide deployment, updating ESS requires ClusterAdmin permissions.

Create the following values file:


emsImageStore:
  username: <username>
  password: <password>

element-operator:
  clusterDeployment: true
  deployCrds: true  # Deploys the CRDs and the Conversion Webhooks
  deployCrdRoles: true  # Deploys roles to give permissions to users to manage specific ESS CRs
  deployManager: true  # Deploys the controller managers

element-updater:
  clusterDeployment: true
  deployCrds: true  # Deploys the CRDs and the Conversion Webhooks
  deployCrdRoles: true  # Deploys roles to give permissions to users to manage specific ESS CRs
  deployManager: true  # Deploys the controller managers
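
With this values file in place, the chart can be installed along the following lines; the release name, target namespace and ess-system chart name are assumptions to adapt to your environment:

helm install ess-system ess-system/ess-system --namespace ess-system --create-namespace -f values.yaml --version ~2.17.0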

Namespace-scoped deployment

When deploying ESS-System as a namespace-scoped deployment, you have to deploy ess-system in two parts:

  1. One for the CRDs and the conversion webhooks. This part will be managed with ClusterAdmin permissions. These update less often.
  2. One for the controller managers. This part will be managed with namespace-scoped permissions.

In this mode, the ElementDeployment CR is deployed in the same namespace as the controller-managers.

Create the following values file to deploy the CRDs and the conversion webhooks:


emsImageStore:
  username: <username>
  password: <password>

element-operator:
  clusterDeployment: true
  deployCrds: true  # Deploys the CRDs and the Conversion Webhooks
  deployCrdRoles: false  # Deploys roles to give permissions to users to manage specific ESS CRs
  deployManager: false  # Deploys the controller managers

element-updater:
  clusterDeployment: true
  deployCrds: true  # Deploys the CRDs and the Conversion Webhooks
  deployCrdRoles: false  # Deploys roles to give permissions to users to manage specific ESS CRs
  deployManager: false  # Deploys the controller managers

Create the following values file to deploy the controller managers in their namespace:


emsImageStore:
  username: <username>
  password: <password>

element-operator:
  clusterDeployment: false
  deployCrds: false  # Deploys the CRDs and the Conversion Webhooks
  deployCrdRoles: false  # Deploys roles to give permissions to users to manage specific ESS CRs
  deployManager: true  # Deploys the controller managers

element-updater:
  clusterDeployment: false
  deployCrds: false  # Deploys the CRDs and the Conversion Webhooks
  deployCrdRoles: false  # Deploys roles to give permissions to users to manage specific ESS CRs
  deployManager: true  # Deploys the controller managers
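
A sketch of the corresponding two installs, assuming the values files above were saved as values-crds.yaml and values-managers.yaml (file names, release names and the ess-system chart name are illustrative):

# Part 1: CRDs and conversion webhooks, run with cluster admin permissions
helm install ess-crds ess-system/ess-system -f values-crds.yaml --version ~2.17.0
# Part 2: controller managers, run with namespace-scoped permissions
helm install ess-managers ess-system/ess-system --namespace element-onprem -f values-managers.yaml --version ~2.17.0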

Without cert-manager present on the cluster

First, let's add the element-updater and element-operator repositories to helm, replacing ems_image_store_username and ems_image_store_token with the values provided to you by Element.

helm repo add element-updater https://registry.element.io/helm/element-updater --username <ems_image_store_username> --password '<ems_image_store_token>'
helm repo add element-operator https://registry.element.io/helm/element-operator --username <ems_image_store_username> --password '<ems_image_store_token>'

Now that we have the repositories configured, we can verify this by:

helm repo list

and should see the following in that output:

NAME                    URL                                               
element-operator        https://registry.element.io/helm/element-operator
element-updater         https://registry.element.io/helm/element-updater

N.B. This guide assumes that you are using the element-updater and element-operator namespaces. You can name them whatever you want, and if a namespace doesn't exist yet, you can create it with: kubectl create ns <name>.

Generating an image pull secret with EMS credentials

To generate an ems-credentials secret to be used by your helm chart deployment, you will need to generate an authentication token and place it in a secret.

kubectl create secret -n element-updater docker-registry ems-credentials --docker-server=registry.element.io --docker-username=<EMSusername> --docker-password=<EMStoken>
kubectl create secret -n element-operator docker-registry ems-credentials --docker-server=registry.element.io --docker-username=<EMSusername> --docker-password=<EMStoken>

Generating a TLS secret for the webhook

The conversion webhooks need their own self-signed CA and TLS certificates to be integrated into Kubernetes.

For example, using easy-rsa:

easyrsa init-pki
easyrsa --batch "--req-cn=ESS-CA`date +%s`" build-ca nopass
easyrsa --subject-alt-name="DNS:element-operator-conversion-webhook.element-operator"\
  --days=10000 \
  build-server-full element-operator-conversion-webhook nopass
easyrsa --subject-alt-name="DNS:element-updater-conversion-webhook.element-updater"\
  --days=10000 \
  build-server-full element-updater-conversion-webhook nopass

Create a secret for each of these two certificates:

kubectl create secret tls element-operator-conversion-webhook --cert=pki/issued/element-operator-conversion-webhook.crt --key=pki/private/element-operator-conversion-webhook.key  --namespace element-operator
kubectl create secret tls element-updater-conversion-webhook --cert=pki/issued/element-updater-conversion-webhook.crt --key=pki/private/element-updater-conversion-webhook.key  --namespace element-updater

Installing the helm chart for the element-updater and the element-operator

Create the following values files:

values.element-operator.yml:

clusterDeployment: true
deployCrds: true  # Deploys the CRDs and the Conversion Webhooks
deployCrdRoles: true  # Deploys roles to give permissions to users to manage specific ESS CRs
deployManager: true  # Deploys the controller managers
crds:
  conversionWebhook:
    caBundle: # Paste here the content of `base64 pki/ca.crt -w 0`
    tlsSecretName: element-operator-conversion-webhook
    imagePullSecret: ems-credentials
operator:
  imagePullSecret: ems-credentials

values.element-updater.yml:

clusterDeployment: true
deployCrds: true  # Deploys the CRDs and the Conversion Webhooks
deployCrdRoles: true  # Deploys roles to give permissions to users to manage specific ESS CRs
deployManager: true  # Deploys the controller managers
crds:
  conversionWebhook:
    caBundle: # Paste here the content of `base64 pki/ca.crt -w 0`
    tlsSecretName: element-updater-conversion-webhook
    imagePullSecret: ems-credentials
updater:
  imagePullSecret: ems-credentials

Run the helm install commands:

helm install element-operator element-operator/element-operator --namespace element-operator -f values.element-operator.yml --version ~2.17.0
helm install element-updater element-updater/element-updater --namespace element-updater -f values.element-updater.yml --version ~2.17.0

At this point, you should have the following four pods up and running:

[user@helm ~]$ kubectl get pods -n element-operator
NAMESPACE            NAME                                                   READY   STATUS    RESTARTS        AGE
element-operator     element-operator-controller-manager-c8fc5c47-nzt2t     2/2     Running   0               6m5s
element-operator     element-operator-conversion-webhook-7477d98c9b-xc89s   1/1     Running   0               6m5s
[user@helm ~]$ kubectl get pods -n element-updater
NAMESPACE            NAME                                                   READY   STATUS    RESTARTS        AGE
element-updater      element-updater-controller-manager-6f8476f6cb-74nx5    2/2     Running   0               106s
element-updater      element-updater-conversion-webhook-65ddcbb569-qzbfs    1/1     Running   0               81s

Generating the ElementDeployment CR to Deploy Element Server Suite

The ess-stack helm chart is available in the ess-system repository:

helm repo add ess-system https://registry.element.io/helm/ess-system --username <ems_image_store_username> --password '<ems_image_store_token>'

You can install it using the following command against your values file. See below for the values file configuration.

helm install ess-stack ess-system/ess-stack --namespace element-onprem -f values.yaml --version ~2.17.0

It will deploy an ElementDeployment CR and its associated secrets from the chart values file.

The values file will contain the following structure:

emsImageStore:
  username: <username>
  password: <password>

secrets:
  global:
    content:
      genericSharedSecret: # generic shared secret
  synapse:
    content:
        macaroon: # macaroon
        adminPassword: # synapse admin password
        postgresPassword: # postgres password
        telemetryPassword: # your ems image store password
        registrationSharedSecret: # registration shared secret
        # python3 -c "import signedjson.key; signing_key = signedjson.key.generate_signing_key(0); print(f\"{signing_key.alg} {signing_key.version} {signedjson.key.encode_signing_key_base64(signing_key)}\")"
        signingKey: # REPLACE WITH OUTPUT FROM PYTHON COMMAND ABOVE

# globalOptions contains the global properties of the ElementDeployment CRD
globalOptions:
  config:
    domainName: # your base domain
  k8s:
    ingresses:
      tls:
        mode: certmanager
        certmanager:
          issuer: letsencrypt
    workloads:
      replicas: 1

components:
  elementWeb: 
    k8s:
      ingress:
        fqdn:  # element web fqdn
  synapse:
    config:
      media:
        volume:
          size: 5Gi
      postgresql:
        database: # postgres database
        host:  # postgres host
        port: 5432
        user: # postgres user
      telemetry:
        username: <your ems image store username>
        instanceId: <your ems image store username>
    k8s:
      ingress:
        fqdn: # synapse fqdn
  wellKnownDelegation:
    config: {}
    k8s: {}

Checking deployment progress

To check on the progress of the deployment, you will first watch the logs of the updater:

kubectl logs -f -n element-updater element-updater-controller-manager-<rest of pod name>

You will have to tab complete to get the correct hash for the element-updater-controller-manager pod name.

Once the updater is no longer pushing out new logs, you can track progress with the operator or by watching pods come up in the element-onprem namespace.

Operator status:

kubectl logs -f -n element-operator element-operator-controller-manager-<rest of pod name>

Watching reconciliation move forward in the element-onprem namespace:

kubectl get elementdeployment -n element-onprem -o yaml -w | grep -A20 dependentCRs

Watching dependent CR errors:

kubectl get <dependentCR>/<name> -o yaml

Watching pods come up in the element-onprem namespace:

kubectl get pods -n element-onprem -w

Administration

Migrating? Automating your deployment? Configuring backups? Guides for administrators here!

Administration

Authentication Configuration Examples

Provided below are some configuration examples covering how you can set up various types of Delegated Authentication. For a more detailed look at what each configuration option does, please refer to the Authentication Section detailed document.

LDAP on Windows AD

OpenID on Microsoft Azure

Before configuring within the installer, you have to configure Microsoft Azure Active Directory.

Set up Microsoft Azure Active Directory

Screenshot 2023-05-03 at 16.30.06.png

For the bridge to be able to operate correctly, navigate to API permissions, add Microsoft Graph APIs, choose Delegated Permissions and add:

Remember to grant the admin consent for those.

To set up the installer, you'll need:

Configure the installer

OpenID on Microsoft AD FS

Install Microsoft AD FS

Before starting the installation, make sure:

You can find a checklist here.

Steps to follow:

Install AD CS

You need to install the AD CS Server Role.

Obtain and Configure an SSL Certificate for AD FS

Before installing AD FS, you are required to generate a certificate for your federation service. The SSL certificate is used for securing communications between federation servers and clients.

Install AD FS

You need to install the AD FS Role Service.

Configure the federation service

AD FS is installed but not configured.

Screenshot 2023-06-22 at 15.55.57.png

Screenshot 2023-06-22 at 15.57.41.png

Screenshot 2023-06-22 at 15.59.27.png

Screenshot 2023-06-22 at 16.04.13.png

Screenshot 2023-06-22 at 16.05.50.png

Add AD FS as an OpenID Connect identity provider

To enable sign-in for users with an AD FS account, create an Application Group in your AD FS.
To create an Application Group, follow these steps:

Screenshot 2023-06-22 at 16.39.52.png

Screenshot 2023-06-22 at 16.45.44.png

Screenshot 2023-06-22 at 16.56.40.png

Screenshot 2023-06-23 at 09.48.07.png

Screenshot 2023-06-23 at 09.51.06.png

Export Domain Trusted Root Certificate

Configure the installer

Add an OIDC provider in the 'Synapse' configuration after enabling Delegated Auth and set the following fields in the installer:

Screenshot 2023-05-04 at 10.45.23.png

Screenshot 2023-05-03 at 17.27.00.png

Other configurations are documented here.

SAML on Microsoft Azure

Before setting up the installer, you have to configure Microsoft Entra ID.

Set up Microsoft Entra ID

With an account with enough rights, go to: Enterprise Applications

  1. Click on New Application
  2. Click on Create your own application on the top left corner
  3. Choose a name for it, and select Integrate any other application you don't find in the gallery
  4. Click on "Create"
  5. Select Set up single sign on
  6. Select SAML
  7. Edit on Basic SAML Configuration
  8. In Identifier, add the following URL: https://synapse_fqdn/_synapse/client/saml2/metadata.xml
  9. Remove the default URL
  10. In Reply URL, add the following URL: https://synapse_fqdn/_synapse/client/saml2/authn_response
  11. Click on Save

  12. Make a note of the App Federation Metadata Url under SAML Certificates as this will be required in a later step.
  13. Edit on Attributes & Claims
  14. Remove all defaults for additional claims
  15. Click on Add new claim to add the following (suggested) claims (the UID will be used as the MXID):
    • Name: uid, Transformation: ExtractMailPrefix, Parameter 1: user.userprincipalname
    • Name: email, Source attribute: user.mail
    • Name: displayName, Source attribute: user.displayname
  16. Click on Save
  17. In the application overview screen, select Users and Groups and add groups and users which may have access to Element

Configure the installer

Add a SAML provider in the 'Synapse' configuration after enabling Delegated Auth and set the following (suggested) fields in the installer:

Troubleshooting

Redirection loop on SSO

Synapse needs to have the X-Forwarded-For and X-Forwarded-Proto headers set by the reverse proxy doing the TLS termination. If you are using a Kubernetes installation with your own reverse proxy terminating TLS, please make sure that the appropriate headers are set.

Administration

Automating ESS Deployment

The .element-enterprise-server Directory

Config examples included on this page may not be up to date and are solely provided for demonstration purposes. It is highly recommended to run the version of the installer you wish to install to generate and configure config files that work with that version.

Once these config files have been created by the installer, you should refer to the up-to-date config examples available in the installation documentation to understand how each config option can be modified.

When you first run the installer binary, it will create a directory in your home folder, ~/.element-enterprise-server. This is where you'll find everything the installer uses / generates as part of the installation including your configuration, the installer itself and logs.

As you run through the GUI, it will output config files within ~/.element-enterprise-server/config that will be used when you deploy. This is the best way to get started: before any automation effort, you should run through the installer and get a working config that suits your requirements.

This will generate the config files, which can then be modified as needed for your automation efforts. To understand how deployments can be automated, you should first understand what config is stored where.

The cluster.yml Config File

The Cluster YAML configuration file is populated with information used by all aspects of the installer. To start, you'll find apiVersion:, kind: and metadata:, which are used by the installer itself to identify the version of your configuration file. If you switch to a new version of the installer, it will upgrade this config in line with the latest version's requirements.

Config Example
apiVersion: ess.element.io/v1alpha1
kind: InstallerSettings
metadata:
  annotations:
    k8s.element.io/version: 2023-07.09-gui
  name: first-element-cluster

The configuration information is then stored in the spec: section; for instance, you'll see your Postgres in-cluster information, DNS resolvers, EMS token, etc. See the example below:

spec:
  connectivity:
    dockerhub: {}
  install:
    certManager:
      adminEmail: admin@example.com
    emsImageStore:
      password: examplesubscriptionpassword
      username: examplesubscriptionusername
    microk8s:
      dnsResolvers:
      - 8.8.8.8
      - 8.8.4.4
      postgresInCluster:
        hostPath: /data/postgres
        passwordsSeed: examplepasswordsseed

The deployment.yml Config File

The Deployment YAML configuration file is populated with the bulk of the configuration for your deployment. As above, you'll find apiVersion:, kind: and metadata:, which are used by the installer itself to identify the version of your configuration file. If you switch to a new version of the installer, it will upgrade this config in line with the latest version's requirements.

Config Example
apiVersion: matrix.element.io/v1alpha1
kind: ElementDeployment
metadata:
  name: first-element-deployment
  namespace: element-onprem

The configuration is again found within the spec: section of this file, which itself has two main sections:

components:

First, each component has a named section, such as elementWeb, integrator, synapseAdmin, or in this example synapse:

      synapse:

Within each component, there are two sections to organise the configuration: config: and k8s:.

global:

The global: section works just like components: above, split into the two sections config: and k8s:. It sets the default settings for all components; you can see an example below:

Config Example
  global:
    config:
      adminAllowIps:
      - 0.0.0.0/0
      - ::/0
      certificateAuthoritySecretKey: ca.pem
      domainName: example.com
      genericSharedSecretSecretKey: genericSharedSecret
      supportDnsFederationDelegation: false
      verifyTls: true
    k8s:
      common:
        annotations: {}
      ingresses:
        annotations: {}
        services:
          type: ClusterIP
        tls:
          certmanager:
            issuer: letsencrypt
          mode: certmanager
      monitoring:
        serviceMonitor:
          deploy: auto
      workloads:
        annotations: {}
        hostAliases: []
        replicas: 2
        securityContext:
          forceUidGid: auto
          setSecComp: auto
    secretName: global

The secrets.yml Config File

The Secrets YAML configuration file is populated, as expected, with the secrets used for your configuration. It consists of multiple entries, separated by lines of ---, each following the format below:

Config Example
apiVersion: v1
data:
  genericSharedSecret: Q1BoVmNIaEIzWUR6VVZjZXpkMXhuQnNubHhLVVlM
kind: Secret
metadata:
  name: global
  namespace: element-onprem

The main section of interest for automation purposes is the data: section. Here you will find a dictionary of secrets; in the above you can see a genericSharedSecret and its value opposite.

The legacy Directory

The legacy directory stores configuration for specific components not yet updated to the new format within the components: section of the deployment.yml. Work is steadily progressing on updating these legacy components to the new format; in the meantime, you will find a folder for each legacy component here.

As integrations are upgraded to the new format this example (IRC) may become outdated; however, the process remains identical for any integrations still using the legacy format. Make sure to check via the installer whether the integration you are looking for is configured in this way.

Within each component's folder, you will see a .yml file, which is where the configuration of that component is stored. For instance, if you set up the IRC Bridge, it will create ~/.element-enterprise-server/config/legacy/ircbridge with bridge.yml inside. You can use the Integrations and Add-Ons chapter of our documentation for guidance on how these files are configured. Using the IRC Bridge example, you would have a bridge.yml like so:

Config Example
key_file: passkey.pem
bridged_irc_servers:
- postgres_fqdn: ircbridge-postgres
  postgres_user: ircbridge
  postgres_db: ircbridge
  postgres_password: postgres_password
  admins:
  - "@user:example.com"
  logging_level: debug
  enable_presence: true
  drop_matrix_messages_after_seconds: 0
  bot_username: "ircbridgebot"
  provisioning_room_limit: 50
  rmau_limit: 100
  users_prefix: "irc_"
  alias_prefix: "irc_"
  address: irc.example.com
  parameters:
    name: "Example IRC"
    port: 6697
    ssl: true
    botConfig:
      enabled: true
      nick: "MatrixBot"
      username: "matrixbot"
      password: "some_password"
    dynamicChannels:
      enabled: true
    mappings:
      "#welcome":
        roomIds: ["!MLdeIFVsWCgrPkcYkL:example.com"]
    ircClients:
      allowNickChanges: true

There is also another important folder in legacy: the certs directory. Here you will need to add any CA.pem file and the certificates for the FQDNs of any legacy components. As part of any automation, you will need to ensure these files are correct for each setup and named correctly; the certificates in this directory should be named using the fully qualified domain name (.key and .crt).

Automating your deployment

Once you have a set of working configuration, you should make a backup of your ~/.element-enterprise-server/config directory. Through whatever form of automation you choose, automate the modification of your cluster.yml, deployment.yml, secrets.yml and any legacy *.ymls to adjust values as needed.

For instance, perhaps you need 6 identical homeservers, each with their own domain name; you would need to edit the fqdn of each component and the domainName in deployment.yml, as sketched below. You'd then have 6 config directories, each differing in domain, ready to be used by an installer binary.
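
As an illustrative sketch of such automation, assuming the mikefarah yq v4 tool and the config paths shown earlier on this page:

# Rewrite the base domain and the Element Web FQDN for one host's config copy
DOMAIN=host1.example.com
yq -i ".spec.global.config.domainName = \"$DOMAIN\"" ~/.element-enterprise-server/config/deployment.yml
yq -i ".spec.components.elementWeb.k8s.ingress.fqdn = \"element.$DOMAIN\"" ~/.element-enterprise-server/config/deployment.yml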

On each of the 6 hosts, create the ~/.element-enterprise-server directory and copy that host's specific config to ~/.element-enterprise-server/config. Copy the installer binary to the host, ensuring it's executable.

Running the installer unattended

Once the host system is set up, you can add unattended when running the binary to run the installer unattended. It will pick up the configuration and start the deployment installation without needing to use the GUI to get it started.

./element-enterprise-graphical-installer-YYYY-MM.VERSION-gui.bin unattended

Administration

Backup and Restore

Welcome, ESS Administrators. This guide is crafted for your role, focusing on the pragmatic aspects of securing crucial data within the Element Server Suite (ESS). ESS integrates with external PostgreSQL databases and persistent volumes and is deployable in standalone or Kubernetes mode. To ensure data integrity, we recommend including valuable, though not strictly consistent, data in backups. The guide also addresses data restoration and a straightforward disaster recovery plan.

Software Overview

ESS provides Synapse and Integrations which require an external PostgreSQL and persistent volumes. It offers standalone or Kubernetes deployment.

You'll find below a description of the contents of each component's data and db backups.

Synapse

Adminbot

Auditbot

Matrix Authentication Service

Sliding Sync

Sydent

Integrator

Bridges (XMPP, IRC, Whatsapp, SIP, Telegram)

Backup Policy & Backup Procedure

There are no particular prerequisites to complete before executing an ESS backup. Only the Synapse and MAS databases should be backed up in sync and stay consistent; all other individual components can be backed up on their own lifecycle.

Backup frequency and retention periods must be defined according to your own SLAs and SLIs.
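
As an illustrative sketch for keeping the Synapse and MAS database backups closely in sync, assuming both databases live on the same PostgreSQL host (host, user and database names are placeholders to adapt):

# Dump both databases back to back, using the same pg_dump options as the migration guide later in this documentation
pg_dump -Fc -O -h <dbhost> -U <dbuser> -d <synapse db> -f synapse-$(date +%F).dump
pg_dump -Fc -O -h <dbhost> -U <dbuser> -d <mas db> -f mas-$(date +%F).dump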

Data restoration

The following ESS components should be restored first in case of a complete restoration. Other components can be restored separately, in their own time:

Disaster Recovery Plan

In case of disaster recovery, the following components are critical for your system recovery:

The following systems will recover feature subsets, and might involve resets & data loss if not recovered:

Security Considerations

Some backups will contain sensitive data. Here is a description of the types of data and the risks associated with them. When available, make sure to enable encryption for your stored backups. You should use appropriate access controls and authentication for your backup processes.

Synapse

Synapse media and db backups should be considered sensitive.

Synapse media backups will contain all user media (avatars, photos, videos, files). If your organization is enforcing encrypted rooms, the media will be stored encrypted with each user's e2ee keys. If you are not enforcing encryption, you might have media stored in cleartext here, and appropriate measures should be taken to ensure that the backups are safely secured.

Synapse postgresql backups will contain all users' key backup storage, where their keys are stored safely encrypted with each user's passphrase. The Synapse DB will also store room states and events. If your organization is enforcing encrypted rooms, these will be stored encrypted with each user's e2ee keys.

The Synapse documentation contains further details on backup and restoration. Importantly, the e2e_one_time_keys_json table should not be restored from backup.

Adminbot

Adminbot PV backup should be considered sensitive.

Any user accessing it could read the content of your organization's rooms. Should such an event occur, revoking the bot tokens would prevent logging in as the AdminBot and stop any pulling of the room messages' content.

Auditbot

Auditbot PV backup should be considered sensitive.

Any user accessing it could read the content of your organization's rooms. Should such an event occur, revoking the bot tokens would prevent logging in as the AuditBot and stop any pulling of the room messages' content.

Logs stored by the AuditBot for audit capabilities are not encrypted, so any user able to access them will be able to read any logged room content.

Sliding Sync

Sliding-Sync DB Backups should be considered sensitive.

Sliding-Sync database backups will contain users' access tokens, which are encrypted with the Sliding Sync secret key. The tokens are only refreshed regularly if you are using Matrix Authentication Service. These tokens give access to users' message-sending capabilities, but cannot read encrypted messages without user keys.

Sydent

Sydent DB Backups should be considered sensitive.

Sydent DB backups contain associations between user matrix accounts and their external identifiers (emails, phone numbers, external social networks, etc).

Matrix Authentication Service

Matrix Authentication Service DB Backups should be considered sensitive.

Matrix Authentication Service database backups will contain user access tokens, so they give access to user accounts. It will also contain the OIDC providers and confidential OAuth 2.0 Clients configuration, with secrets stored encrypted using MAS encryption key.

IRC Bridge

IRC Bridge DB Backups should be considered sensitive.

IRC Bridge DB backups contain user IRC passwords. These passwords give access to users' IRC accounts, and should be reinitialized in case of an incident.

Standalone Deployment Guidelines

General storage recommendations for single-node instances

Adminbot storage:

Auditbot storage:

Synapse storage:

Postgres (in-cluster) storage:

Backup Guidance:

Administration

Configuring Element Desktop

Element Desktop is a Matrix client for desktop platforms with Element Web at its core.

You can download Element Desktop for Mac, Linux or Windows from the Element downloads page.

See https://web-docs.element.dev/ for the Element Web and Desktop documentation.

Aligning Element Desktop with your ESS deployed Element Web

By default, Element Desktop will be configured to point to the Matrix.org homeserver, however this is configurable by supplying a User Specified config.json.

As Element Desktop is essentially Element Web packaged as a desktop application, this config.json is identical to the config.json ESS will configure and deploy for you at https://<element_web_fqdn>/config.json, so it is recommended to set up Element Desktop using that file directly.

How you do this will depend on your specific environment, but you will need to ensure the config.json is placed in the correct location to be used by Element Desktop.
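
These are the standard per-user locations Element Desktop reads a user-specified config.json from (taken from the Element Web and Desktop documentation):

Windows: %APPDATA%\$NAME\config.json
macOS: ~/Library/Application Support/$NAME/config.json
Linux: $XDG_CONFIG_HOME/$NAME/config.json or ~/.config/$NAME/config.json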

In the paths above, $NAME is typically Element, unless you use --profile $PROFILE in which case it becomes Element-$PROFILE.

As Microsoft Windows File Explorer by default hides file extensions, please double check to ensure the config.json does indeed have the .json file extension, not .txt.

Customising your desktop configuration

You may wish to further customise Element Desktop; if the changes you wish to make should not also apply to your ESS deployed Element Web, you will need to add them in addition to your existing config.json.

You can find Desktop specific configuration options, or just customise using any options from the Element Web Config docs.

The Element Desktop MSI

Where to download

Customers who have a subscription to the Enterprise edition of the Element Server Suite (ESS) can download an MSI version of Element Desktop. This version of Element Desktop is by default installed into Program Files (instead of per user) and can be used to deploy into enterprise environments. To download, log in to your EMS Account and access it from the same download page where you'd find the enterprise installer, https://ems.element.io/on-premise/download.

Using the Element Desktop MSI

The Element Desktop MSI can be used to install Element Desktop to all desired machines in your environment; unlike the usual installer, you can customise its install directory (which defaults to Program Files).

You can customise the installation directory by setting the APPLICATIONFOLDER property when installing the MSI:

msiexec /i "Element 1.11.66.msi" APPLICATIONFOLDER="C:\Element"

MSI and config.json

Once users run Element for the first time, an Element folder will be created in their AppData profile specific to that user. By using Group Policy, logon scripts, SCCM or whatever other method you like, ensure the desired config.json is present within %APPDATA%\Element. (The config.json can be present prior to the directory's creation.)
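
For example, a logon script could copy the file from a network share; the share path below is purely illustrative:

copy /y "\\fileserver\element\config.json" "%APPDATA%\Element\config.json"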

Administration

Guidance on High Availability

ESS makes use of Kubernetes for deployment, so most guidance on high availability is tied directly to general Kubernetes guidance on high availability.

Kubernetes

High-Level Overview

It is strongly advised to make use of the Kubernetes documentation to ensure your environment is set up for high availability; see the links above. At a high level, Kubernetes achieves high availability through:

How does this tie into ESS

As ESS is deployed into a Kubernetes cluster, if you are looking for high availability you should ensure your environment is configured with that in mind. One important factor is to ensure you deploy using the Kubernetes deployment option; whilst Standalone mode will deploy to a Kubernetes cluster, by definition it exists solely on a single node, so options for high availability will be limited.

PostgreSQL

High-Level Overview

To ensure a smooth failover process for ESS, it is crucial to prepare a robust database topology. The following list outlines the necessary elements to take into consideration:

By carefully preparing the database topology as described, you can ensure that the failover process for ESS is efficient and reliable, minimizing downtime and maintaining data integrity.

How does this tie into ESS

As ESS relies on PostgreSQL for its database, if you are looking for high availability you should ensure your environment is configured with that in mind. Database replicas can be achieved the same way in both Kubernetes and Standalone deployments, as the database is not managed by ESS.

ESS failover plan

This document outlines a high-level, semi-automatic, failover plan for ESS. The plan ensures continuity of service by switching to a secondary data center (DC) in the event of a failure in the primary data center.

Prerequisites

ESS Architecture for failover capabilities based on 3 datacenters

DC1 (Primary)

DC2

DC3

Failover Process

When DC1 experiences downtime and needs to be failed over to DC2, follow these steps:

You should derive your own failover procedure from this high-level overview. By doing so, you can ensure that ESS continues to operate smoothly and with minimal downtime, maintaining service availability even when the primary data center goes down.

Administration

Migrating from Self-Hosted to ESS

This document is currently work-in-progress and might not be accurate. Please speak with your Element contact if you have any questions.

Preparation

This section outlines what you should do ahead of the migration in order to ensure the migration goes as quickly as possible and without issues.

Note that the database and media may be duplicated/stored twice on your ESS host during the import process depending on how you do things.

If you are migrating from EMS, see also https://ems-docs.element.io/books/element-cloud-documentation/page/migrate-from-ems-to-self-hosted for import documentation tailored to the EMS export.

Setup your new ESS server

Follow the ESS docs for first-time installation, configuring to match your existing homeserver before proceeding with the below.

The Domain Name on the Domains page during the ESS initial setup wizard must be the same as you have on your current setup. The other domains can be changed if you wish.

To make the import later easier, we recommend you select the following Synapse Profile. You can change this as required after the import.

After the ESS installation, you can check your ESS Synapse version on the Admin -> Server Info page:

Export your old Matrix server

SSH to your old Matrix server

You might want to run everything in a tmux or a screen session to avoid disruption in case of a lost SSH connection.

Upgrade your old Synapse to the same version ESS is running

Follow https://element-hq.github.io/synapse/latest/upgrade.html

Please be aware that ESS, especially our LTS releases, may not run the latest available Synapse release. Please speak with your Element contact for advice on how to resolve this issue. Note that Synapse does support downgrading, but occasionally a new Synapse version includes database schema changes which limit downgrading. See https://element-hq.github.io/synapse/latest/upgrade.html#rolling-back-to-older-versions for additional details and compatible versions.

Start Synapse, make sure it's happy.
Stop Synapse

Create a folder to store everything

mkdir -p /tmp/synapse_export
cd /tmp/synapse_export

The guide from here on assumes your current working directory is /tmp/synapse_export.

Set restrictive permissions on the folder

If you are working as root (otherwise set restrictive permissions as needed):

chmod 700 /tmp/synapse_export

Copy Synapse config

Get the following files:

Stop Synapse

Once Synapse is stopped, do not start it again after this.

Doing so can cause issues with federation and inconsistent data for your users.

While you wait for the database to export or files to transfer, you should edit or create the well-known files and DNS records to point to your new ESS host. This can take a while to update, so it should be done as soon as possible in order to ensure your server will function properly when the migration is complete.

Database export

Dump your database:

pg_dump -Fc -O -h <dbhost> -U <dbusername> -d <dbname> -W -f synapse.dump

Import to your ESS server

Database import

Stop Synapse

kubectl .... replicas=0

Note that this might differ depending on how you have your Postgres managed. Please consult the documentation for your deployment system.
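
One supported way to stop Synapse, described in the Starting and Stopping ESS Services section later in this guide, is to scale down the operator and then delete the Synapse resource:

kubectl scale deploy/element-operator-controller-manager -n operator-onprem --replicas 0
kubectl delete synapse/first-element-deployment -n element-onprem

Enter a bash shell on the Synapse postgres container: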

kubectl exec -it -n element-onprem synapse-postgres-0 --container postgres  -- /bin/bash

Then on postgres container shell run:

psql -U synapse_user synapse

The following command will erase the existing Synapse database without warning or confirmation. Please ensure that this is the correct database and there is no production data on it.

DO $$ DECLARE
r RECORD;
BEGIN
  FOR r IN (SELECT tablename FROM pg_tables WHERE schemaname = current_schema()) LOOP
    EXECUTE 'DROP TABLE ' || quote_ident(r.tablename) || ' CASCADE';
  END LOOP;
END $$;

DROP sequence cache_invalidation_stream_seq;
DROP sequence state_group_id_seq;
DROP sequence user_id_seq;
DROP sequence account_data_sequence;
DROP sequence application_services_txn_id_seq;
DROP sequence device_inbox_sequence;
DROP sequence event_auth_chain_id;
DROP sequence events_backfill_stream_seq;
DROP sequence events_stream_seq;
DROP sequence presence_stream_sequence;
DROP sequence receipts_sequence;
DROP sequence un_partial_stated_event_stream_sequence;
DROP sequence un_partial_stated_room_stream_sequence;

Use \q to quit, then back on the host run:

gzip -d synapse_export.sql.gz
sudo cp synapse_export.sql /data/postgres/synapse/
# or
kubectl --namespace element-onprem cp synapse_export.sql synapse-postgres-0:/tmp

Finally on the pod:

cd /var/lib/postgresql/data
# or
cd /tmp

pg_restore <connection> --no-owner --role=<new role> -d <new db name> synapse_export.sql

Administration

Starting and Stopping ESS Services

Stopping a component

To stop a component, such as Synapse, it is necessary to stop the operator:

kubectl scale deploy/element-operator-controller-manager -n operator-onprem --replicas 0

Once the operator is stopped, you can delete the Synapse resource to remove all Synapse workloads:

kubectl delete synapse/first-element-deployment -n element-onprem

To get a list of resources that you can remove, you can use the following command:

kubectl get elementdeployment/first-element-deployment -n element-onprem  --template='{{range $key, $value := .status.dependentCRs}}{{$key}}{{"\n"}}{{end}}'

Example:

ElementWeb/first-element-deployment
Hookshot/first-element-deployment
Integrator/first-element-deployment
MatrixAuthenticationService/first-element-deployment
Synapse/first-element-deployment
SynapseAdminUI/first-element-deployment
SynapseUser/first-element-deployment-adminuser-donotdelete
SynapseUser/first-element-deployment-telemetry-donotdelete
WellKnownDelegation/first-element-deployment

Starting a component

To start a component, such as Synapse, it is necessary to start the operator:

kubectl scale deploy/element-operator-controller-manager -n operator-onprem --replicas 1

Because the Synapse resource will automatically have been recreated by the updater, the operator will detect it on startup and recreate all Synapse workloads.

Administration

Using the Admin Console

Opening the Admin Console

First, let’s get started by logging into the admin console. To do this, make sure that the installer is still running, or bring it up by running the installer binary like this (please specify the correct version and don’t just copy this line!):

./element-enterprise-graphical-installer-2023-06.01-gui.bin

You will then see output similar to:

To start configuration open:
        https://admin.element.demo:8443/a/XWDPB7NQ

The Configure Tab

adminconsole-docs1.png

You’ll notice that the first page is the “Configure” tab on the top and the sections in the left hand menu mirror those in the installer:

Note that all settings under the “Configure” tab presently require you to re-deploy your installation by using the conveniently located “Deploy” button. Please make all changes across any of these pages that you wish to deploy prior to hitting the “Deploy” button.

The Admin Tab

If you click on the “Admin” tab, you will see the following screen:

adminconsole-docs2.png

See the section by section guide on Using the Admin Tab for a more detailed look at using it, otherwise see the below overview:

In the left hand menu, we have the following options:

Administration

Using the Admin Tab

Users Section

By default the users section will display all active user accounts present on your homeserver, listing their Matrix ID followed by their Display Name and whether the user is a Synapse Admin.

Navigating

Users will be displayed in a list, defaulting to a maximum of 10 users per page; you can show more users per page using the dropdown found at the bottom left of the list.

To navigate between pages, you can use the page navigation options found at the bottom right of the list.

Sorting and Filtering

The default view of users can be adjusted using the available sorting and filtering options.

To sort, select the sort button and select how users should be organised, options include by Matrix ID (A-Z or Z-A), by Display Name (A-Z or Z-A) and displaying Admins first.

To search for users specifically, you can use the filter search box found above the list of users. Simply enter your search term and the list will be filtered for matches.

By default, a number of account types are excluded from the list of users: deactivated accounts, guest accounts, support accounts and bot accounts. You can include these accounts by selecting the filter button then choosing the appropriate option.

To remove these includes, you can click the 'x' icon next to the filter added just above the list view.

Adding Users

You can add user accounts manually by clicking the Add button found at the top right of the admin interface. This will take you to a page where you can register a new Synapse user.

Note, if your homeserver has a Terms of Service, users added in this way will need to accept those terms after logging in. This differs from the usual flow of users who create their account themselves, accepting the terms during the sign up process.

Once any additional user/s have been added, simply click the 'Back to people list' button to return to the user list.

Adding a single user

Provide the required username of the new user; if the user should be made a Synapse admin, check the 'Make new user server admin' checkbox, then press the Add button. A new user will be added and their password will appear on screen.

Adding multiple users at once

You are also able to import bulk users at once; either click the username,email,phone,displayname,password button, or manually create a CSV file with those headings. Only the username is required, and if the password is left blank, a random one will be generated. The CSV should be limited to no more than 30MB; you can see an example below:

username,email,phone,displayname,password
grover.penner,,,Grover Penner,grover
titus.allison,,,Titus Allison,titus
martie.dean,,,Martie Dean,martie
rachyl.spears,,,Rachyl Spears,rachyl
imogen.bates,,,Imogen Bates,imogen

Either drag the CSV file into the window, or using the 'Choose file' button and press 'Import' to create the users. You will receive confirmation the users have been created.

Managing Users

You can manage an existing user by clicking on their account from the user list. You will then be presented with a view where you can manage the account.

Note, you can quickly copy the account's Matrix ID by clicking on it; you will see a tooltip confirming the ID has been copied.

You can make a user a Synapse admin by checking the 'Admin' checkbox found to the right of the Matrix ID. Clicking this checkbox will cause a confirmation prompt to appear to confirm the action.

Note, this does not currently grant any additional permissions in Element clients. It grants permission to use the Synapse Admin API.

You can edit the user's existing Display Name by clicking the 'edit' button found following their existing Display Name, and you can reset the user's password by clicking the 'Reset' button.

From this view you can also see when a user was last logged in and a list of their currently active devices (i.e. sessions).

Finally you are also able to manually deactivate the account by clicking the 'Deactivate account' button, this will cause a confirmation prompt to appear to confirm the action.

Note, this action will remove active access tokens, reset the password, and delete third-party IDs (to prevent the user requesting a password reset). It will also mark the user as GDPR-erased (stopping their data from being distributed further, and deleting it entirely if there are no other references to it).

Rooms Section

By default the rooms section will display all rooms present on your homeserver, listing their room name, or ID if not applicable, followed by the member count.

Navigating

Rooms will be displayed in a list, defaulting to a maximum of 10 rooms per page; you can show more rooms per page using the dropdown found at the bottom left of the list.

To navigate between pages, you can use the page navigation options found at the bottom right of the list.

Sorting and Filtering

The default view of rooms can be adjusted using the available sorting and filtering options.

To sort, select the sort button and select how rooms should be organised, options include by Name (A-Z or Z-A) and Room Members (highest first, least first).

To search for rooms specifically, you can use the filter search box found above the list of rooms. Simply enter your search term and the list will be filtered for matches.

Managing Rooms

You can manage an existing room by clicking on its name from the room list. You will then be presented with a view where you can manage the room.

From this view you can view information about the room, including the room name and topic, room ID, members and alias etc. To view the members of the room, you can click the 'View list' link next to the member count to be taken to a view of all accounts within the room.

You can control whether the room is visible in the public directory by toggling the 'Show room in directory' checkbox.

You are also able to delete the room by clicking the 'Delete room' button at the bottom of the page, doing so will cause a confirmation prompt to appear to confirm the action.

Note, this operation is irreversible.

Media Section

The Media section shows you a pie chart visualisation of the top users of media storage on your homeserver. You can click the individual Matrix IDs in the key to include or exclude those users from the visualisation, and hover over the pie chart segments to see a tooltip showing the amount of storage used by that user as well as the number of items.

Server Info Section

This section allows you to see version specific information about your homeserver, including Synapse version, ESS version, Python version and the default room version.

The view also highlights user access rights to change passwords, avatars and display names, as well as a JSON output of the full server capabilities.

Finally, it will identify the version of your hosted Element client instance.

Reported Events Section

Federation Section

The Federation section shows all homeservers your homeserver is federating with, i.e. the homeservers whose users share a room with users from your homeserver, followed by each homeserver's current status.

Navigating

Homeservers will be displayed in a list, defaulting to a maximum of 10 homeservers per page; you can show more homeservers per page using the drop-down found at the bottom left of the list.

To navigate between pages, you can use the page navigation options found at the bottom right of the list.

Managing Individual Homeserver Federation

You can manage an existing federation destination (homeserver) by clicking on its name in the list. You will then be presented with a view showing the latest status of the federation as well as a list of the federated rooms.

Clicking on any of the rooms in the list will allow you to manage that specific room via the Rooms section.

Admin Bot Section

If you make use of Admin Bot you will be able to use this section to log in as the configured Admin Bot user. Click the 'Click here to log in' button to log in and follow the instructions provided to read encrypted messages (if required).

Do not make changes to widgets in rooms while logged in as the Adminbot. The dedicated Element Web for Adminbot does not have the custom configuration of your main Element Web client, so you can cause problems when working with widgets.

Audit Section

If you make use of Audit Bot you will be able to use this section to perform audit tasks on your homeserver.

Support and Troubleshooting

Support

Getting in touch

Need some help? Simply log in to your EMS Control Panel with the EMS Account associated with your Element Server Suite Enterprise subscription.

Then click the Your Account button, found at the top right of the page, then Help & Support.

You'll be presented with a contact form:

Please provide as many details as you can. Once submitted, you should receive a confirmation email which you can reply to with any additional information.

Service Level Agreements (SLA)

This document summarises the SLAs for our price plans and establishes a baseline for our services. For information on our price plans visit: https://element.io/pricing

SLA response times

All price plans include unlimited support requests, and all requests are initiated by email or web form.

Severity           Business   Enterprise   Sovereign
Level 1 (Urgent)   1 day      4 hours      2 hours
Level 2 (High)     1 day      8 hours      4 hours
Level 3 (Medium)   2 days     1 day        1 day
Level 4 (Low)      3 days     2 days       2 days

Coverage: 9am - 6pm GMT / BST (UTC / UTC+1) excluding weekends and UK public holidays

Scope of support

Includes
Excludes
Important information
Support and Troubleshooting

Troubleshooting

Introduction to Troubleshooting

Troubleshooting the Element Installer comes down to knowing a little bit about Kubernetes and how to check the status of the various resources. This guide will walk you through some of the initial steps you'll want to take when things go wrong.

Known issues

Installer fails and asks you to start firewalld

The current installer will check whether you have firewalld installed on your system. If firewalld is installed, the installer expects to find it started as a systemd service; if it is not started, the installer will terminate with a failure asking you to start it. We have noticed that some Linux distributions, such as SLES 15 SP4, RHEL 8 and AlmaLinux 8, ship firewalld as a default package but leave it disabled and stopped.

If you hit this issue, you don't need to enable and start firewalld; the workaround is to uninstall firewalld if you are not planning on using it.

On SLES

zypper remove firewalld -y

On RHEL8

dnf remove firewalld -y 

Airgapped installation does not start

If you are using element-enterprise-graphical-installer-2023-03.02-gui.bin and element-enterprise-installer-airgapped-2023-03.02-gui.tar.gz, you might run into an error looking like this:

Looking in links: ./airgapped/pip

WARNING: Url './airgapped/pip' is ignored. It is either a non-existing path or lacks a specific scheme.

ERROR: Could not find a version that satisfies the requirement wheel (from versions: none)

ERROR: No matching distribution found for wheel

The workaround for it is to copy the pip folder from the airgapped directory to ~/.element-enterprise-server/installer/airgapped/pip
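For example, assuming the airgapped package was extracted into the current directory:

mkdir -p ~/.element-enterprise-server/installer/airgapped
cp -r ./airgapped/pip ~/.element-enterprise-server/installer/airgapped/pip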

Wiping all user data and starting fresh with an existing config

On a standalone deployment you can wipe and start fresh by running:

sudo snap remove microk8s --purge && sudo rm -rf /data && sudo reboot

then run ./<element-installer>.bin unattended (this requires passwordless sudo to run non-interactively).

Failure downloading https://..., An unknown error occurred: ''CustomHTTPSConnection'' object has no attribute ''cert_file''

Make sure you are using a supported operating system version. See https://ems-docs.element.io/books/element-on-premise-documentation-lts-2404/page/requirements-and-recommendations for more details.

install.sh problems

Sometimes there will be problems when running the ansible-playbook portion of the installer. When this happens, you can increase the verbosity of ansible logging by editing .ansible.rc in the installer directory and setting:

export ANSIBLE_DEBUG=true
export ANSIBLE_VERBOSITY=4

and re-running the installer. This will generate quite verbose output, but that typically will help pinpoint what the actual problem with the installer is.

Problems post-installation

Checking Pod Status and Getting Logs

[user@element2 ~]$ kubectl get pods -n element-onprem
NAME                                                         READY   STATUS    RESTARTS   AGE
first-element-deployment-element-web-6cc66f48c5-lvd7w        1/1     Running   0          4d20h
first-element-deployment-element-call-c9975d55b-dzjw2        1/1     Running   0          4d20h
integrator-postgres-0                                        3/3     Running   0          4d20h
synapse-postgres-0                                           3/3     Running   0          4d20h
first-element-deployment-integrator-59bcfc67c5-jkbm6         3/3     Running   0          4d20h
adminbot-admin-app-element-web-c9d456769-rpk9l               1/1     Running   0          4d20h
auditbot-admin-app-element-web-5859f54b4f-8lbng              1/1     Running   0          4d20h
first-element-deployment-synapse-redis-68f7bfbdc-wht9m       1/1     Running   0          4d20h
first-element-deployment-synapse-haproxy-7f66f5fdf5-8sfkf    1/1     Running   0          4d20h
adminbot-pipe-0                                              1/1     Running   0          4d20h
auditbot-pipe-0                                              1/1     Running   0          4d20h
first-element-deployment-synapse-admin-ui-564bb5bb9f-87zb4   1/1     Running   0          4d20h
first-element-deployment-groupsync-0                         1/1     Running   0          20h
first-element-deployment-well-known-64d4cfd45f-l9kkr         1/1     Running   0          20h
first-element-deployment-synapse-main-0                      1/1     Running   0          20h
first-element-deployment-synapse-appservice-0                1/1     Running   0          20h

The above kubectl get pods -n element-onprem is the first place to start. You'll notice in the above that all of the pods are in the Running status, which indicates that all should be well. If the state is anything other than "Running" or "Creating", then you'll want to grab logs for those pods. To grab the logs for a pod, run:

kubectl logs -n element-onprem <pod name>

replacing <pod name> with the actual pod name. If we wanted to get the logs from Synapse, the specific syntax would be:

kubectl logs -n element-onprem first-element-deployment-synapse-main-0

and this would generate logs similar to:

2022-05-03 17:46:33,333 - synapse.util.caches.lrucache - 154 - INFO - LruCache._expire_old_entries-2887 - Dropped 0 items from caches
2022-05-03 17:46:33,375 - synapse.storage.databases.main.metrics - 471 - INFO - generate_user_daily_visits-289 - Calling _generate_user_daily_visits
2022-05-03 17:46:58,424 - synapse.metrics._gc - 118 - INFO - sentinel - Collecting gc 1
2022-05-03 17:47:03,334 - synapse.util.caches.lrucache - 154 - INFO - LruCache._expire_old_entries-2888 - Dropped 0 items from caches
2022-05-03 17:47:33,333 - synapse.util.caches.lrucache - 154 - INFO - LruCache._expire_old_entries-2889 - Dropped 0 items from caches
2022-05-03 17:48:03,333 - synapse.util.caches.lrucache - 154 - INFO - LruCache._expire_old_entries-2890 - Dropped 0 items from caches
You can also check the status of pods across all namespaces:

[user@element2 ~]$ kubectl get pods -A
NAMESPACE            NAME                                        READY   STATUS    RESTARTS   AGE
kube-system          calico-node-2lznr                           1/1     Running   0          8d
kube-system          calico-kube-controllers-c548999db-s5cjm     1/1     Running   0          8d
kube-system          coredns-5dbccd956f-glc8f                    1/1     Running   0          8d
kube-system          dashboard-metrics-scraper-6b6f796c8d-8x6p4  1/1     Running   0          8d
ingress              nginx-ingress-microk8s-controller-w8lcn     1/1     Running   0          8d
cert-manager         cert-manager-cainjector-6586bddc69-9xwkj    1/1     Running   0          8d
kube-system          hostpath-provisioner-78cb89d65b-djfq5       1/1     Running   0          8d
kube-system          kubernetes-dashboard-765646474b-5lhxp       1/1     Running   0          8d
cert-manager         cert-manager-5bb9dd7d5d-cg9h8               1/1     Running   0          8d
container-registry   registry-f69889b8c-zkhm5                    1/1     Running   0          8d
cert-manager         cert-manager-webhook-6fc8f4666b-9tmjb       1/1     Running   0          8d
kube-system          metrics-server-5f8f64cb86-f876p             1/1     Running   0          8d
jitsi                sysctl-jvb-vs9mn                            1/1     Running   0          8d
jitsi                shard-0-jicofo-7c5cd9fff5-qrzmk             1/1     Running   0          8d
jitsi                shard-0-web-fdd565cd6-v49ps                 1/1     Running   0          8d
jitsi                shard-0-web-fdd565cd6-wmzpb                 1/1     Running   0          8d
jitsi                shard-0-prosody-6d466f5bcb-5qsbb            1/1     Running   0          8d
jitsi                shard-0-jvb-0                               1/2     Running   0          8d
operator-onprem      element-operator-controller-manager-...     2/2     Running   0          4d
updater-onprem       element-updater-controller-manager-...      2/2     Running   0          4d
element-onprem       first-element-deployment-element-web-...    1/1     Running   0          4d
element-onprem       first-element-deployment-element-call-...   1/1     Running   0          4d
element-onprem       integrator-postgres-0                       3/3     Running   0          4d
element-onprem       synapse-postgres-0                          3/3     Running   0          4d
element-onprem       first-element-deployment-integrator-...     3/3     Running   0          4d
element-onprem       adminbot-admin-app-element-web-...          1/1     Running   0          4d
element-onprem       auditbot-admin-app-element-web-...          1/1     Running   0          4d
element-onprem       first-element-deployment-synapse-redis-...  1/1     Running   0          4d
element-onprem       first-element-deployment-synapse-haproxy-.. 1/1     Running   0          4d
element-onprem       adminbot-pipe-0                             1/1     Running   0          4d
element-onprem       auditbot-pipe-0                             1/1     Running   0          4d
element-onprem       first-element-deployment-synapse-admin-ui-. 1/1     Running   0          4d
element-onprem       first-element-deployment-groupsync-0        1/1     Running   0          20h
element-onprem       first-element-deployment-well-known-...     1/1     Running   0          20h
element-onprem       first-element-deployment-synapse-main-0     1/1     Running   0          20h
element-onprem       first-element-deployment-synapse-appservice-0 1/1   Running   0          20h
To grab logs for pods in other namespaces, the syntax is:

kubectl logs -n <namespace> <pod name>

For example, to get the logs of the microk8s ingress controller:

kubectl logs -n ingress nginx-ingress-microk8s-controller-w8lcn

and you would see logs similar to:

I0502 14:15:08.467258       6 leaderelection.go:248] attempting to acquire leader lease ingress/ingress-controller-leader...
I0502 14:15:08.467587       6 controller.go:155] "Configuration changes detected, backend reload required"
I0502 14:15:08.481539       6 leaderelection.go:258] successfully acquired lease ingress/ingress-controller-leader
I0502 14:15:08.481656       6 status.go:84] "New leader elected" identity="nginx-ingress-microk8s-controller-n6wmk"
I0502 14:15:08.515623       6 controller.go:172] "Backend successfully reloaded"
I0502 14:15:08.515681       6 controller.go:183] "Initial sync, sleeping for 1 second"
I0502 14:15:08.515705       6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress", Name:"nginx-ingress-microk8s-controller-n6wmk", UID:"548d9478-094e-4a19-ba61-284b60152b85", APIVersion:"v1", ResourceVersion:"524688", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration

Again, for all pods not in the Running or Creating state, please use the above method to get log data to send to Element.

Default administrator

The installer creates a default administrator, onprem-admin-donotdelete. The Synapse admin user password is defined under the Synapse section in the installer.

Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress

Delete the updater namespace and deploy again:

kubectl delete namespaces updater-onprem

microk8s takes a long time to become ready after system boot

See https://ems-docs.element.io/link/109#bkmrk-kernel-modules

Node-based pods failing name resolution

05:03:45:601 ERROR [Pipeline] Unable to verify identity configuration for bot-auditbot: Unknown errcode Unknown error
05:03:45:601 ERROR [Pipeline] Unable to verify identity. Stopping
matrix-pipe encountered an error and has stopped Error: getaddrinfo EAI_AGAIN synapse.prod.ourdomain
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:84:26) {
  errno: -3001,
  code: 'EAI_AGAIN',
  syscall: 'getaddrinfo',
  hostname: 'synapse.prod.ourdomain'
}

To see which hosts are set, try:

kubectl exec -it -n element-onprem <pod name> -- getent hosts

So to do this on the adminbot-pipe-0 pod, it would look like:

kubectl exec -it -n element-onprem adminbot-pipe-0 -- getent hosts

and return output similar to:

127.0.0.1       localhost
127.0.0.1       localhost ip6-localhost ip6-loopback
10.1.241.27     adminbot-pipe-0
192.168.122.5   ems.onprem element.ems.onprem hs.ems.onprem adminbot.ems.onprem auditbot.ems.onprem integrator.ems.onprem hookshot.ems.onprem admin.ems.onprem eleweb.ems.onprem

Node-based pods failing SSL

2023-02-06 15:42:04 ERROR: IrcBridge Failed to fetch roomlist from joined rooms: Error: unable to verify the first certificate. Retrying
MatrixHttpClient (REQ-13) Error: unable to verify the first certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1515:34)
at TLSSocket.emit (events.js:400:28)
at TLSSocket.emit (domain.js:475:12)
at TLSSocket._finishInit (_tls_wrap.js:937:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:709:12) {
code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'

Drop into a shell on the pod

kubectl exec -it -n element-onprem adminbot-pipe-0 -- /bin/sh

Check its ability to send a request to the Synapse server by starting a Node REPL and issuing an HTTPS request:

node

const https = require('https');
https.get('https://synapse.server/', (res) => console.log(res.statusCode)).on('error', console.error);
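If the request fails with a certificate error, it can also help to inspect the certificate chain the server presents. A minimal check with openssl, run from a host that can reach the server (the hostname is a placeholder):

openssl s_client -connect synapse.server:443 -showcerts </dev/null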

Reconciliation failing / Enable enhanced updater logging

If your reconciliation is failing, a good place to start is the updater logs:

kubectl --namespace updater-onprem logs \
    "$(kubectl --namespace updater-onprem get pods --no-headers \
        --output=custom-columns="NAME:.metadata.name" | grep controller)" \
    --since 10m

If that doesn't have the answers you seek, for example:

TASK [Build all components manifests] ******************************** 
fatal: [localhost]: FAILED! => {"censored": "the output has been hidden due to
the fact that 'no_log: true' was specified for this result"}

You can enable debug logging by editing the updater deployment:

kubectl --namespace updater-onprem edit \
    deploy/element-updater-controller-manager

and adding the following environment variable to the container spec:

        - name: DEBUG_MANIFESTS
          value: "1"

Wait a bit for the updater to re-run, then fetch the updater logs again. Look for fatal, or, to get the stdout from Ansible, look for Ansible Task StdOut. See also Unhealthy deployment below.

A specific example: I had an "unknown playbook failure". After enabling debug logging for the updater, I found this error telling me that my Telegram bridge was misconfigured:

--------------------------- Ansible Task StdOut -------------------------------
 TASK [Build all components manifests] ******************************** 
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an
undefined variable. The error was: 'dict object' has no attribute
'telegramApiId'. 'dict object' has no attribute 'telegramApiId'. 'dict object'
has no attribute 'telegramApiId'. 'dict object' has no attribute
'telegramApiId'. 'dict object' has no attribute 'telegramApiId'. 'dict object'
has no attribute 'telegramApiId'. 'dict object' has no attribute
'telegramApiId'. 'dict object' has no attribute 'telegramApiId'\n\nThe error
appears to be in '/element.io/roles/elementdeployment/tasks/prepare.yml': line
21, column 3, but may\nbe elsewhere in the file depending on the exact syntax
problem.\n\nThe offending line appears to be:\n\n\n- name: \"Build all
components manifests\"\n  ^ here\n"}

Unhealthy deployment

kubectl get elementdeployment --all-namespaces --output yaml

In the status you will see which component is having an issue. You can then run:

kubectl --namespace element-onprem get <kind>/<name> --output yaml

and you will see the issue in the status.
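For example, if the status reported a problem with the Synapse component, the follow-up query might look like this (the kind and name here are illustrative; use the values shown in your own ElementDeployment status):

kubectl --namespace element-onprem get synapse/first-element-deployment --output yaml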

Other Commands of Interest

Some other commands that may yield some interesting data while troubleshooting are:

Check the list of active Kubernetes events:

kubectl get events -A

You will see a list of events or the message No resources found.

Check the services in the element-onprem namespace:

kubectl get services -n element-onprem

This should return output similar to:

NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                    AGE
postgres                         ClusterIP   10.152.183.47    <none>        5432/TCP                   6d23h
app-element-web                  ClusterIP   10.152.183.60    <none>        80/TCP                     6d23h
server-well-known                ClusterIP   10.152.183.185   <none>        80/TCP                     6d23h
instance-synapse-main-headless   ClusterIP   None             <none>        80/TCP                     6d23h
instance-synapse-main-0          ClusterIP   10.152.183.105   <none>        80/TCP,9093/TCP,9001/TCP   6d23h
instance-synapse-haproxy         ClusterIP   10.152.183.78    <none>        80/TCP                     6d23h

Connect to the Synapse Database

kubectl --namespace element-onprem exec --container postgres --stdin --tty synapse-postgres-0 -- bash
psql "dbname=$POSTGRES_DB user=$POSTGRES_USER password=$POSTGRES_PASSWORD"

The variables POSTGRES_DB, POSTGRES_USER, and POSTGRES_PASSWORD are already set on the postgres pod, so you do not need to know or find the values. Just paste the psql command as it is above and press enter.
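Once connected, you can run ordinary SQL against the Synapse database. For example, to check its overall size:

SELECT pg_size_pretty(pg_database_size(current_database()));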

Excessive Synapse Database Space Usage

Connect to the Synapse database.

SQL queries are provided for reference only. Ensure you fully understand what they do before running them, and use them at your own risk.

List tables ordered by size
SELECT
    schemaname AS table_schema,
    relname AS table_name,
    pg_size_pretty(pg_relation_size(relid)) AS data_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_relation_size(relid) DESC;
Example output
 table_schema |              table_name               | data_size 
--------------+---------------------------------------+-----------
 public       | event_json                            | 2090 MB
 public       | event_auth                            | 961 MB
 public       | events                                | 399 MB
 public       | current_state_delta_stream            | 341 MB
 public       | state_groups_state                    | 294 MB
 public       | room_memberships                      | 270 MB
 public       | cache_invalidation_stream_by_instance | 265 MB
 public       | stream_ordering_to_exterm             | 252 MB
 public       | state_events                          | 249 MB
 public       | event_edges                           | 208 MB
(10 rows)
Count unique values in a table ordered by count

This example counts events per room from the event_json table (where all your messages etc. are stored). This may take a while to run and may use a lot of system resources.

SELECT
    room_id,
    COUNT(*) AS count
FROM event_json
GROUP BY room_id
ORDER BY count DESC
LIMIT 10;
Example output
             room_id             |  count  
---------------------------------+---------
 !GahmaiShiezefienae:example.com | 1382242
 !gutheetheixuFohmae:example.com |    1933
 !OhnuokaiCoocieghoh:example.com |     357
 !efaeMegazeeriteibo:example.com |     175
 !ohcahTueyaesiopohc:example.com |      93
 !ithaeTaiRaewieThoo:example.com |      43
 !PhohkuShuShahhieWa:example.com |      39
 !eghaiPhetahHohweku:example.com |      37
 !faiLeiZeefirierahn:example.com |      29
 !Eehahhaepahzooshah:example.com |      27
(10 rows)

In this instance something unusual might be going on in !GahmaiShiezefienae:example.com that warrants further investigation.

Export logs from all Synapse pods to a file

This will export logs from the last 5 minutes.

for pod in $(kubectl --namespace element-onprem get pods --no-headers \
    --output=custom-columns="NAME:.metadata.name" | grep '\-synapse')
do
    echo "$pod" >> synapse.log
    kubectl --namespace element-onprem logs "$pod" --since 5m >> synapse.log
done

Grep all configmaps

for configmap in $(kubectl --namespace element-onprem get configmaps --no-headers --output=custom-columns="NAME:.metadata.name"); do
    kubectl --namespace element-onprem describe configmaps "$configmap" \
    | grep --extended-regex '(host|password)'
done

List Synapse pods, sorted by pod age/creation time

kubectl --namespace element-onprem get pods --sort-by 'metadata.creationTimestamp' | grep --extended-regex '(NAME|-synapse)'

Matrix Authentication Service admin

If your server uses Matrix Authentication Service (MAS), you might occasionally need to interact with it directly. This can be done either using the MAS Admin API or using mas-cli.

Here is a one-liner for connecting to mas-cli:

kubectl --namespace element-onprem exec --stdin --tty \
    "$(kubectl --namespace element-onprem get pods \
        --output=custom-columns='NAME:.metadata.name' \
        | grep first-element-deployment-matrix-authentication-service)" \
    -- mas-cli help

Alternatively, to make this easier, you can create an alias:

alias mas-cli='kubectl --namespace element-onprem exec --stdin --tty \
    "$(kubectl --namespace element-onprem get pods \
        --output=custom-columns="NAME:.metadata.name" \
        | grep first-element-deployment-matrix-authentication-service)" \
    -- mas-cli '
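With the alias in place, you can run any mas-cli subcommand directly; for example, to run MAS's built-in diagnostics (available subcommands depend on your MAS version):

mas-cli doctor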

Redeploy the microk8s setup

It is possible to redeploy microk8s by running the following command as root:

snap remove microk8s

This command removes all microk8s pods and related microk8s storage volumes. Once this command has been run, you need to reboot your server, otherwise you may have networking issues. Add the --purge flag to also remove the data if disk usage is a concern.

After the reboot, you can re-run the installer and have it re-deploy microk8s and Element Enterprise On-Premise for you.

Show all persistent volumes and persistent volume claims for the element-onprem namespace

kubectl get pv -n element-onprem

This will give you output similar to:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                   STORAGECLASS        REASON   AGE
pvc-fc3459f0-eb62-4afa-94ce-7b8f8105c6d1   20Gi       RWX            Delete           Bound    container-registry/registry-claim                       microk8s-hostpath            8d
integrator-postgres                        5Gi        RWO            Recycle          Bound    element-onprem/integrator-postgres                      microk8s-hostpath            8d
synapse-postgres                           5Gi        RWO            Recycle          Bound    element-onprem/synapse-postgres                         microk8s-hostpath            8d
hostpath-synapse-media                     50Gi       RWO            Recycle          Bound    element-onprem/first-element-deployment-synapse-media   microk8s-hostpath            8d
adminbot-bot-data                          10M        RWO            Recycle          Bound    element-onprem/adminbot-bot-data                        microk8s-hostpath            8d
auditbot-bot-data                          10M        RWO            Recycle          Bound    element-onprem/auditbot-bot-data                        microk8s-hostpath            8d

Show deployments in the element-onprem namespace

kubectl get deploy -n element-onprem

This will return output similar to:

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
app-element-web            1/1     1            1           6d23h
server-well-known          1/1     1            1           6d23h
instance-synapse-haproxy   1/1     1            1           6d23h

Show hostname to IP mappings from within a pod

Run:

kubectl exec -n element-onprem <pod_name> -- getent hosts

and you will see output similar to:

127.0.0.1       localhost
127.0.0.1       localhost ip6-localhost ip6-loopback
10.1.241.30     instance-hookshot-0.instance-hookshot.element-onprem.svc.cluster.local instance-hookshot-0
192.168.122.5   ems.onprem element.ems.onprem hs.ems.onprem adminbot.ems.onprem auditbot.ems.onprem integrator.ems.onprem hookshot.ems.onprem admin.ems.onprem eleweb.ems.onprem

This will help you troubleshoot host resolution.

Show the Element Web configuration

kubectl describe cm -n element-onprem app-element-web

and this will return output similar to:

config.json:
----
{
    "default_server_config": {
        "m.homeserver": {
            "base_url": "https://synapse2.local",
            "server_name": "local"
        }
    },
    "dummy_end": "placeholder",
    "integrations_jitsi_widget_url": "https://dimension.element2.local/widgets/jitsi",
    "integrations_rest_url": "https://dimension.element2.local/api/v1/scalar",
    "integrations_ui_url": "https://dimension.element2.local/element",
    "integrations_widgets_urls": [
        "https://dimension.element2.local/widgets"
    ]
}

Show the nginx configuration for Element Web (if using nginx as your ingress controller in production or using the PoC installer):

kubectl describe cm -n element-onprem app-element-web-nginx

and this will return output similar to:

  server {
      listen       8080;

      add_header X-Frame-Options SAMEORIGIN;
      add_header X-Content-Type-Options nosniff;
      add_header X-XSS-Protection "1; mode=block";
      add_header Content-Security-Policy "frame-ancestors 'self'";
      add_header X-Robots-Tag "noindex, nofollow, noarchive, noimageindex";

      location / {
          root   /usr/share/nginx/html;
          index  index.html index.htm;

          charset utf-8;
      }
  }

Show the status of all namespaces

kubectl get namespaces

which will return output similar to:

NAME                 STATUS   AGE
kube-system          Active   20d
kube-public          Active   20d
kube-node-lease      Active   20d
default              Active   20d
ingress              Active   6d23h
container-registry   Active   6d23h
operator-onprem      Active   6d23h
element-onprem       Active   6d23h

Show the status of the stateful sets in the element-onprem namespace

kubectl get sts -n element-onprem

This should return output similar to:

NAME                    READY   AGE
postgres                1/1     6d23h
instance-synapse-main   1/1     6d23h

Show the Synapse configuration

Click to see commands for installers prior to version 2023-05.05

For installers prior to 2022-05.06, use:

kubectl describe cm -n element-onprem first-element-deployment-synapse-shared

For the 2022-05.06 installer and later, use:

kubectl -n element-onprem get secret synapse-secrets -o yaml 2>&1 | grep shared.yaml | awk -F 'shared.yaml: ' '{print $2}' - | base64 -d

For the 2023-05.05 installer and later, use:

kubectl --namespace element-onprem get \
    secrets first-element-deployment-synapse-main --output yaml | \
    grep instance_template.yaml | awk '{print $2}' | base64 --decode

Verify DNS names and IPs in certificates

In the certs directory under the configuration directory, run:

for i in *.crt; do echo "$i"; openssl x509 -in "$i" -noout -text | grep DNS; done

This will give you output similar to:

local.crt
              DNS:local, IP Address:192.168.122.118, IP Address:127.0.0.1
synapse2.local.crt
              DNS:synapse2.local, IP Address:192.168.122.118, IP Address:127.0.0.1

and this will allow you to verify that you have the right host names and IP addresses in your certificates.

View the MAU Settings in Synapse

kubectl get -n element-onprem secrets/synapse-secrets -o yaml | grep -i shared.yaml -m 1 | awk -F ': ' '{print $2}' - | base64 -d

which will return output similar to:

# Local custom settings
mau_stats_only: true

limit_usage_by_mau: False
max_mau_value: 1000
mau_trial_days: 2

mau_appservice_trial_days:
  chatterbox: 0

enable_registration_token_3pid_bypass: true

Integration issues

GitHub not sending events

You can trace webhook calls from your GitHub application under Settings / Developer settings / GitHub Apps.

Select your GitHub App

Click on Advanced and you should see queries issued by your app under Recent Deliveries.


Updater and Operator in ImagePullBackOff state

Check EMS Image Store Username and Token

Check to see if you can pull the Docker image:

kubectl get pods -l app.kubernetes.io/instance=element-operator-controller-manager -n operator-onprem -o yaml | grep 'image:'

Grab the entry, e.g. image: gitlab-registry.matrix.org/ems-image-store/standard/kubernetes-operator@sha256:305c7ae51e3b3bfbeff8abf2454b47f86d676fa573ec13b45f8fa567dc02fcd1, and try pulling it manually with your EMS Image Store credentials. The command should look like:

microk8s.ctr image pull gitlab-registry.matrix.org/ems-image-store/standard/kubernetes-operator@sha256:305c7ae51e3b3bfbeff8abf2454b47f86d676fa573ec13b45f8fa567dc02fcd1 -u <EMS Image Store username>:<EMS Image Store token>

ESS LTS 24.10 Change Logs and Upgrade Notes

Upgrade Notes for the 24.10 LTS

If you plan on upgrading to this LTS we always recommend upgrading to the latest patch version of your current LTS and then updating to the latest version of this LTS.

If you plan on updating, we recommend installing the latest patch version.

Whether upgrading or updating, you should be aware of all significant upgrade notes from each prior patch version. Any highlighted patch notes for this specific LTS have been collated for convenience below; you can find the full changelogs of each release thereafter.

24.10.01-gui

The required Python versions are now 3.10, 3.11, 3.12.

As a result, Ubuntu 24.04 is now supported but Ubuntu 20.04 support is dropped. Please consult the Ubuntu documentation for upgrading between Ubuntu LTS versions.

The installer will attempt to install the required packages in some scenarios.

Airgapped customers should ensure that Python 3.12 packages are available in their package mirrors.

Alternatively, Python 3.10, 3.11, or 3.12 can be preinstalled on the server in all situations.
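For example, on Ubuntu 24.04 a suitable Python can usually be installed from the default repositories (package names may differ on other distributions):

sudo apt-get install -y python3.12 python3.12-venv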

24.10.02-gui

Security Issues

Enterprise

Upgrade Element Web to v1.11.85, fixes CVE-2024-50336, CVE-2024-51749 and CVE-2024-51750.

Bug Fixes

Enterprise

When setting securityContext for pods, also set runAsGroup.

Deprecations

Starter

Starter Edition is deprecated, and will not be released anymore.

24.10.01-gui

Release Summary

The required Python versions are now 3.10, 3.11, 3.12. As a result, Ubuntu 24.04 is now supported but Ubuntu 20.04 support is dropped. Please consult the Ubuntu documentation for upgrading between Ubuntu LTS versions. The installer will attempt to install the required packages in some scenarios. Airgapped customers should ensure that Python 3.12 packages are available in their package mirrors. Alternatively, Python 3.10, 3.11, or 3.12 can be preinstalled on the server in all situations.

New Features

Enterprise

XMPP Bridge and IRC Bridge both support Authenticated Media. Their signing key is generated automatically by the installer UI.

Enterprise / Starter

Authenticated Media is now enforced by default. All components but Matrix Content Scanner are compatible with it. If you need to disable it, please add enable_authenticated_media: false to Synapse -> Additional YAML.
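For example, to disable it, the following line would go under Synapse -> Additional YAML:

enable_authenticated_media: false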

Enterprise / Starter

Add the possibility to allow/deny rooms and log events for Auditbot.

Enterprise / Starter

Support overriding just the server and path in the image digest ConfigMap.

Enterprise / Starter

Support Element Call in Element X.

Enterprise / Starter

Matrix Authentication Service and Synapse only use internal paths to communicate, removing the need for hostAliases setup between the two.

Enterprise

All ESS Images are now hosted behind registry.element.io.

Enterprise

Synapse workers supporting multiple replicas can now be configured for automatic horizontal scaling.

Enterprise / Starter

Expose images_digests.yml file in the Download screen for Airgapped customers who want to sync their registry directly with registry.element.io.

Upgrade Notes

Enterprise / Starter

Upgrade to cert-manager 1.15.3.

Enterprise / Starter

Operator - Upgrade Python to 3.12, Ansible to 2.17.

Enterprise / Starter

Upgrade Synapse to v1.116.0.

Enterprise / Starter

Upgrade Element Web to v1.11.82.

Enterprise

Update XMPP Bridge to 2.0.1.

Enterprise

Update Adminbot and Auditbot to 6.3.1.

Enterprise

Update IRC Bridge to 3.0.2.

Enterprise

Update Hydrogen to 0.5.0.

Enterprise / Starter

Update Admin Console to v16.105.4.

Enterprise / Starter

Upgrade microk8s to 1.31.

As per 24.10 releases, the standalone installer only supports upgrading microk8s installed from 23.10 releases.

As per 23.10.35/24.04.05/24.05.01, the standalone installer now upgrades microk8s automatically. The upgrade procedure no longer involves an uninstall/reinstall of microk8s; it upgrades microk8s in place to the expected version, and as such, the --upgrade-cluster flag has been removed.

Any customization to CNI Configuration in /var/snap/microk8s/current/args/cni-network/cni.yaml will have to be reconfigured.

During the upgrade, microk8s & workloads will restart several times. Managed addons that require upgrading will be temporarily disabled to be upgraded.

All of this will induce a short downtime of a couple of minutes.

Enterprise / Starter

The installer now makes sure the upgrade comes from a supported version.

Security Issues

Enterprise / Starter

Upgrade to Ansible 9 for security fixes and Python compatibility.

Bug Fixes

Enterprise

Allow only one VoIP platform (Jitsi or Element Call) to be enabled.

Enterprise

Fix migration of authentication settings from <24.07.01 with Matrix Authentication Service installed.

Enterprise / Starter

Fix an issue where, after an update, the installer UI would ask to save changes on the Host screen when the user had not actually changed anything.

Enterprise

Fix monitoring integration tab not rendering.

Enterprise

Fix Auditbot logs viewer when Matrix Authentication Service is setup.

Deprecations

Starter

Matrix Content Scanner is not available anymore in Starter Edition.

Non-LTS Monthly Release Changes

This section summarises all the changes between the previous LTS and this one during the monthly non-LTS releases. Duplicate entries where individual components received upgrades have been removed so only the latest version is mentioned.

You can then compare the below changelog against the above LTS releases for an accurate overall changelog if upgrading from a previous LTS.

Some changes added to non-LTS monthly releases are backported into older LTS releases if required. As such, some of the below features may already be present in a previous LTS. You can check the associated LTS books' respective changelog page to compare.

Release Summary

The required Python versions are now 3.9, 3.10, 3.11. These are available on all supported OS distributions. The installer will attempt to install the required packages in some scenarios. Airgapped customers should ensure that Python 3.9 packages are available in their package mirrors. Alternatively, Python 3.9, 3.10, or 3.11 can be preinstalled on the server in all situations.

Enterprise

This release adds the possibility to enable Matrix Authentication Service during initial setup. Enabling Matrix Authentication Service is experimental; a couple of features do not work yet with it (Auditbot, Adminbot, Element Call, GroupSync, Admin UI). Enabling MAS allows you to use Element X with OIDC or LDAP login.

Enterprise

This release now makes ESS Element X ready by default. Any new installation will deploy Matrix Authentication Service. Existing setups will not benefit from this change; migration paths are planned for the future.

New Features

General

Support knocking with generic_worker federation.

Enterprise / Starter

Major Change: The standalone installer now upgrades microk8s gracefully and automatically. The microk8s upgrade procedure no longer involves an uninstall/reinstall of microk8s. It now automatically upgrades microk8s to the expected version, and the --upgrade-cluster flag has been removed.

Any customization to CNI Configuration in /var/snap/microk8s/current/args/cni-network/cni.yaml will need to be reconfigured. During the upgrade, microk8s will restart, and addons will be disabled to force an upgrade. This process may induce a small downtime of a couple of minutes.

Enterprise

Status watchers are now golang containers, reducing resources used by the operator and updater.

Enterprise

Allow configuration of Synapse database connection pool sizes.

Enterprise

Add a ServiceMonitor to scrape metrics of microk8s ingress.

Enterprise

Expose Operator & Updater metrics.

Enterprise

Add support for Outbound webhooks in Hookshot.

Enterprise

Synapse OIDC now supports attribute requirements.

Enterprise

Add a new experimental feature to enable Matrix Authentication Service during ESS bootstrap.

Enterprise

Simplification of the OIDC provider configuration. After upgrading, please make sure that your OIDC settings were properly migrated to the new view.

Enterprise

It is now possible to enable the new Matrix Authentication Service when bootstrapping a new ESS setup. It is an experimental feature, incompatible with Groupsync, Element Call, Auditbot, and Adminbot at this time. It is required to try out Element X with OIDC login.

Enterprise

It is now possible to use LDAP with Matrix Authentication Service.

Enterprise / Starter

Properly enforce pattern checks on UI inputs under cards that can be enabled/disabled.

Enterprise

Display deployment availability in the UI, in addition to the reconciliation status.

Enterprise

Element Call is now MAS-Compatible.

Enterprise

Add the possibility to configure a matrix stats endpoint.

Enterprise

Setup the onprem-admin user as a MAS admin.

Enterprise

Allow configuration of empty (no) disallowed IP ranges in Hookshot.

Enterprise

Validate Synapse Telemetry is consistently set.

Enterprise / Starter

Improve Synapse worker configuration.

Enterprise / Starter

Allow blocking of non-scanned media.

Enterprise

Adminbot/Auditbot + MAS compatibility.

Enterprise / Starter

The UI now properly marks secrets as required when necessary.

Enterprise / Starter

The reconciliation process now ensures that all secrets are present and shows missing secrets if necessary.

Enterprise

Add Hookshot permissions configuration.

Enterprise

Add the possibility to manage Federation dynamically from the Admin Console when Secure Border Gateway is enabled.

Enterprise / Starter

Speed up initial Synapse deployment.

Enterprise

Add the possibility to configure user deprovisioning and room cleanup in GroupSync.

Enterprise

Synapse auto invite: use the Synapse native feature, and run it on a background worker if one exists.

Enterprise

Allow overriding a container image without configuring a new digest.

Enterprise / Starter

Support MSC4186 / Simplified Sliding Sync natively in Synapse.

Enterprise / Starter

Support authenticated media APIs (MSC3916) in Synapse.

Enterprise / Starter

Scrape Synapse HAProxy metrics.

Enterprise

Scrape Adminbot and Auditbot HAProxy metrics.

Enterprise

Set default volume sizes for Matrix Content Scanner volumes.

Enterprise

Set default volume sizes for Adminbot, Auditbot & Sydent volumes.

Enterprise / Starter

The administration interface can now manage users on deployments using Matrix OIDC.

Enterprise

Administrators can now configure the SBG allowlist within the Admin UI.

Enterprise / Starter

The user management page now allows admins to toggle the locked status of users.

Enterprise / Starter

The user management page now displays the primary email address of users.

Enterprise / Starter

The user management page will now default to showing locked and deactivated users when searching by name.

Enterprise

Enabling MAS is not experimental anymore, and is now the default setup mode.

Enterprise

Allow overriding a container image without configuring a new digest.

Enterprise / Starter

Allow configuration of the operator and updater with debug logs.

Enterprise / Starter

Check for supported Python versions when starting a deployment run. Recreate the virtual environment if it is using the wrong Python version.

Enterprise / Starter

The installer now makes sure that the microk8s version on the host is supported before starting the upgrade process.

Enterprise / Starter

Speed improvements in the operator/updater reconciliation process.

Upgrade Notes

Enterprise

Upgrade Telegram bridge to 0.15.1-mod-1.

Enterprise

Upgrade WhatsApp bridge to 0.10.7-mod-1.

Enterprise

Upgrade Sygnal to 0.14.3 to support the latest Firebase API.

Enterprise

Update Synapse Admin to v16.92.0.

Enterprise

Update Adminbot to Pipe 6.1.1.

Enterprise / Starter

Matrix Content Scanner upgrade to 1.0.8.

Enterprise / Starter

On RHEL and derived platforms, Python 3.11 is now required to be installed.

Enterprise

Upgrade SecureBorderGateway to v1.2.0.

Enterprise

Upgrade Auditbot to 6.1.2 to improve overall request handling efficiency, especially at high-loads.

Enterprise / Starter

Upgrade to Synapse 1.114.0.

Enterprise

Upgrade to Element Call 0.6.3 with improved call layout.

Enterprise

Upgrade to Matrix Authentication Service 0.11.0 and support password auth.

Enterprise

Synapse registration and password policy settings are now moved to Authentication configuration, under Local Password Database mode.

Enterprise

Upgrade Hydrogen to v0.4.1-fix.

Enterprise / Starter

Upgrade to cert-manager 1.12.13.

Enterprise / Starter

Upgrade ElementWeb to v1.11.81.

Enterprise / Starter

Services have been renamed; the -headless suffixes are all removed. If you are using Network Policies, those will need to be updated to the new names.

Enterprise

Global upgrade of the monitoring stack. Victoria Metrics is now on version 1.101.

Enterprise

Now that Synapse natively supports the Sliding Sync protocol, the Sliding Sync proxy has been discontinued. Its PostgreSQL cluster instance is being cleaned up.

Security Issues

Enterprise

A previous update might have unexpectedly enabled outbound webhooks in Hookshot. If you don't need this feature, make sure it is disabled in the Hookshot integration, under Generic Webhooks settings.

Enterprise

Better image signatures, enterprise is now published to sigstore.

Enterprise / Starter

Upgrade to Ansible 8 for security fixes.

Bug Fixes

Enterprise / Starter

Fix Remove button not working for some integrations.

Enterprise / Starter

Fix cert-manager upgrade failing to remove old resources.

Enterprise / Starter

Fix operator and updater having permissions issues under Openshift.

Enterprise / Starter

Fix Jitsi JVB failing to get ready when STUN servers list is empty and Coturn is not deployed.

Starter

Fix upgrade failing.

Enterprise

Fix missing storage class on some Monitoring PVCs.

Enterprise

Fix media screen on standalone setup.

Enterprise / Starter

Remove --upgrade-cluster parameter as microk8s is now upgraded gracefully.

Enterprise

Fix inconsistent behavior when switching between S3/Persistent volume option under the media tab.

Enterprise / Starter

Fix watchers to avoid triggering unneeded reconciliation loops.

Enterprise

GroupSync: Fix issue when LDAP identities contain commas in their names.

Enterprise

Configuring monitoring stack persistent volumes properly in microk8s requires recreating their statefulsets.

Starter / Enterprise

Fix haproxy failing on IPv4-only nodes.

Enterprise / Starter

The installer no longer flakes between bootstrap and installer view when the Kubernetes cluster is intermittently unreachable.

Enterprise

Fix an Ansible error when installing the telemetry script on the local host when user GID != UID.

Enterprise / Starter

Allow well-known delegation to omit configuration of the ingress entirely without triggering unknown variable errors.

Enterprise / Starter

Allow configuration of Matrix Content Scanner without a storage class name.

Enterprise / Starter

Mark Postgres configuration as required for all components that use a Postgres database.

Enterprise

Mark the source for GroupSync as required.

Enterprise

Remove workloads and dependent CRs from statuses when they're no longer deployed.

Enterprise

Fix provisioning of users that are not rate-limited.

Enterprise

Better identification for the Telegram and WhatsApp bridges in their respective apps.

Enterprise / Starter

Fix an issue where the cert-manager issuer would try to be created but the cert-manager webhook would not be ready.

Starter / Enterprise

Fix haproxy failing on IPv4-only nodes.

Enterprise

Fix monitoring of kube etcd and kube scheduler on microk8s.

Enterprise

Don't include cert-manager in the airgapped tarball. ESS doesn't install or manage cert-manager in airgapped deploys.

Enterprise

Avoid leaking Postgres connections when there are issues provisioning Synapse users.

Enterprise

SIPBridge - Disable Virtual rooms.

Enterprise

Attempt to detect OpenShift and configure operator & updater installation values appropriately.

Enterprise / Starter

Fix an issue preventing setup when a proxy is configured on the host.

Enterprise

Fix a critical issue which would prevent users from accessing Adminbot and Auditbot UI.

Enterprise

Fix an issue where the Auditbot UI would fail to open because tokens were unable to refresh.

Enterprise

Revert change of 24.04.07 which prevented Adminbot and Auditbot from doing an initial sync.

Enterprise

Create new devices for Adminbot and Auditbot to work with the new Rust SDK cryptographic libraries.

Enterprise

Reduce secrets leaks from operator & updater logs. If you need, for debugging purposes, to enable secrets logging, you must edit the operator & updater deployments and set the environment variable DEBUG_MANIFESTS=1.

Enterprise / Starter

Refactor Synapse config files to own the priority of each setting managed by ESS.

Enterprise

Sygnal upgrade to 0.15.0 for further Firebase API fixes.

Enterprise

Adminbot and Auditbot are currently incompatible with MAS.

Enterprise

Synapse - Override botocore CA bundle to allow pushing against non-AWS S3 providers.

Enterprise

Add support for Element Call configuration in Element Well Known file.

Enterprise

Matrix Authentication Service - Fix UI configuration of certificates for ingresses.

Enterprise

Minor speed up to initial setup of Synapse.

Starter

Fix MAU Limit, which was configured at 250 instead of 200.

Enterprise

Prevent users from manually editing the Auditbot/Adminbot passphrase.

Enterprise

Fix display of the status of the reconciliation.

Enterprise

Fix Coturn page causing a memory leak.

Enterprise / Starter

Ensure the nf_conntrack module is loaded in the kernel when deploying in standalone mode.

Enterprise / Starter

Fix microk8s services subnet parsing.

Enterprise / Starter

Fix some CVEs in the operator/updater/conversion webhook.

Enterprise / Starter

Fix Matrix Content Scanner not working as expected.

Enterprise

Configure max upload size in Secure Border Gateway request body size limit.

Enterprise

Prevent users from editing Auditbot and Adminbot passphrases in the UI.

Enterprise

Enforce pattern checks against inputs under options.

Enterprise / Starter

Increase Matrix Content Scanner ClamAV startup reliability.

Enterprise / Starter

Reduce false positives from Matrix Content Scanner.

Enterprise / Starter

On RHEL and derived platforms, the installer no longer relies on platform-python, except for Firewalld and SELinux tasks during microk8s setup.

Enterprise / Starter

Fix the proxy variables configuration check preventing the installer from proceeding.

Enterprise / Starter

Fix an issue preventing setup when a proxy is configured on the host. On proxy configuration errors, the installer will now continue the setup process after displaying the verification error message.

Enterprise / Starter

Enable MSC3967 on Synapse to avoid some device verification issues.

Enterprise

Setup the onprem-admin user as a MAS admin.

Enterprise / Starter

Fix pulling operator & updater images from behind a proxy.

Enterprise / Starter

Expired sessions are now automatically logged out of the admin interface.

Enterprise / Starter

OIDC sessions are now refreshed correctly when the token expires.

Enterprise

An error is now displayed when the standalone admin UI cannot load the audit/admin interface configuration.

Enterprise

Ensure operator and updater metrics are correctly scraped.

Enterprise

Ensure Telemetry room permissions are consistent.

Enterprise

Ensure component settings for storageClassName override the global setting.

Enterprise / Starter

Removing an item from a list field will now only delete one item.

Enterprise

Setup the onprem-admin user as a MAS admin.

Enterprise / Starter

Fix Synapse being stuck with registration closed even if explicitly allowed.

Enterprise / Starter

Improve reliability of changing the Postgres password in cluster if the password seed changes.

Enterprise / Starter

Fix potential permissions issues during microk8s upgrades.

Enterprise

Construct storage for Matrix Content Scanner if deploying on ESS managed microk8s.

Enterprise

Correctly import airgapped registry settings when upgrading from before 24.04.

Enterprise / Starter

Remove unneeded reconciliations due to bad orphan detection.

Enterprise / Starter

Fix updater metrics scraping.

Enterprise / Starter

Improve reliability of setting up CoreDNS.

Enterprise / Starter

Validate that the node IP is excluded from an HTTP Proxy if one is configured.

Enterprise

Fix empty dashboards (NGINX, Kubernetes Workloads, etc.) in Grafana.

Enterprise

Fix missing VMAlert component which is required to gather record metrics.

Enterprise / Starter

Fix microk8s stop command not stopping running containers.

Enterprise / Starter

Improve reliability of some microk8s interactions.

Deprecations

Enterprise

The Element Call participant limits feature is deprecated. The option has been removed from the UI.

Enterprise

Jitsi and Element Call cannot be deployed together.