EMS Knowledge Base

The knowledge base for all Element Matrix Services provided products.

I can't upload files after updating to 0.6.1

Issue

Environment

Resolution

To resolve this issue, recursively change the ownership of the directory configured in parameters.yml as media_host_data_path. For this example, in parameters.yml, we have:

media_host_data_path: "/mnt/data"

and a quick ls on this path shows the 991 ownership:

$ ls -l /mnt/
total 4
drwxr-xr-x 3 991 991 4096 Apr 27 13:20 data

To fix this, run:

sudo chown 10991:991 -R /mnt/data

afterwards, ls should show the 10991 ownership:

$ ls -l /mnt/
total 4
drwxr-xr-x 3 10991 991 4096 Apr 27 13:20 data

and now you should be able to upload files again.

Root Cause

In this case, the installation started with 0.5.3. In 0.6.0, we changed the UID that synapse runs as in order to avoid conflicting with any potential system UIDs. Previously, the UID was 991; we moved to 10991. As a result, the ownership of the existing synapse_media directory no longer matches and uploads fail.

You may see an error similar to this one in your synapse logs, which can be obtained by running kubectl logs -n element-onprem instance-synapse-main-0:

2022-04-27 13:28:02,521 - synapse.http.server - 100 - ERROR - POST-59388 - Failed handle request via 'UploadResource': <XForwardedForRequest at 0x7f9aa49f9e20 method='POST' uri='/_matrix/media/r0/upload' clientproto='HTTP/1.1' site='8008'>
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/synapse/http/server.py", line 269, in _async_render_wrapper
    callback_return = await self._async_render(request)
  File "/usr/local/lib/python3.9/site-packages/synapse/http/server.py", line 297, in _async_render
    callback_return = await raw_callback_return
  File "/usr/local/lib/python3.9/site-packages/synapse/rest/media/v1/upload_resource.py", line 96, in _async_render_POST
    content_uri = await self.media_repo.create_content(
  File "/usr/local/lib/python3.9/site-packages/synapse/rest/media/v1/media_repository.py", line 178, in create_content
    fname = await self.media_storage.store_file(content, file_info)
  File "/usr/local/lib/python3.9/site-packages/synapse/rest/media/v1/media_storage.py", line 92, in store_file
    with self.store_into_file(file_info) as (f, fname, finish_cb):
  File "/usr/local/lib/python3.9/contextlib.py", line 119, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.9/site-packages/synapse/rest/media/v1/media_storage.py", line 135, in store_into_file
    os.makedirs(dirname, exist_ok=True)
  File "/usr/local/lib/python3.9/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/usr/local/lib/python3.9/os.py", line 225, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/media/media_store/local_content/PQ'
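
If you want to confirm the UID mismatch before running the chown, a quick check like the following can help. This is a hedged sketch: the pod name (instance-synapse-main-0) and media path (/mnt/data) come from the examples above and may differ in your deployment:

# UID synapse runs as inside the pod (expected to be 10991 after 0.6.0)
kubectl exec -n element-onprem instance-synapse-main-0 -- id

# Numeric ownership of the media path on the host (991 here indicates the stale pre-0.6.0 owner)
ls -ln /mnt/data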

synapse-haproxy container in CrashLoopBackOff state

Issue

We are seeing

[karl1@element ~]$ kubectl get pods -n element-onprem
NAME                                        READY   STATUS             RESTARTS   AGE
server-well-known-8c6bd8447-fts78           1/1     Running            2          39h
app-element-web-c5bd87777-745gh             1/1     Running            2          39h
postgres-0                                  1/1     Running            2          39h
instance-synapse-haproxy-5b4b55fc9c-jv7pp   0/1     CrashLoopBackOff   40         39h
instance-synapse-main-0                     1/1     Running            6          39h

and the synapse-haproxy container never leaves the CrashLoopBackOff state.

Environment

Resolution

Add the following lines to /etc/security/limits.conf:

*              soft    nofile  100000
*              hard    nofile  100000

and reboot the box. After a reboot, the microk8s environment will come back up and the synapse-haproxy container should run without error.
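
After the reboot, you can sanity-check that the new limit is in effect before looking at the pod again. A minimal check from a fresh login shell (values assume the limits.conf entries above):

# Should report 100000 rather than the old 65536 ceiling
ulimit -n

kubectl get pods -n element-onprem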

Root Cause

Check the logs of synapse-haproxy with this command:

kubectl logs -n element-onprem instance-synapse-haproxy-5b4b55fc9c-jv7pp

You will want to replace the instance name with your specific instance. See if you have this message:

'[haproxy.main()] Cannot raise FD limit to 80034, limit 65536.'

If so, you have run out of open file descriptors and as such the container cannot start.

Can't connect to local registry 127.0.0.1:32000

Issue

    "msg": "non-zero return code",
    "rc": 1,
    "start": "2022-05-26 10:37:08.441849",
    "stderr": "Error: Get \"https://localhost:32000/v2/\": dial tcp [::1]:32000: connect: connection refused; Get \"http://localhost:32000/v2/\": dial tcp [::1]:32000: connect: connection refused",
    "stderr_lines": [
        "Error: Get \"https://localhost:32000/v2/\": dial tcp [::1]:32000: connect: connection refused; Get \"http://localhost:32000/v2/\": dial tcp [::1]:32000: connect: connection refused"
    ],

Environment

Resolution

First, remove any bits of the old image that may be left in containerd. To do this, we need the name of the image; looking at this error:

"cdkbot/registry-amd64:2.6": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/cdkbot/registry-amd64:2.6": failed to extract layer 

the image is named docker.io/cdkbot/registry-amd64:2.6. So we will now run:

microk8s.ctr images rm docker.io/cdkbot/registry-amd64:2.6

Next, unmount the offending volume identified in the kubectl describe pod output:

sudo umount /var/snap/microk8s/common/var/lib/containerd/tmpmounts/containerd-mount490181863

If this succeeds, then you can issue:

microk8s.ctr images pull docker.io/cdkbot/registry-amd64:2.6

and if this succeeds, you can then run:

kubectl delete pod -n container-registry registry

and watch the registry come back up.

If you cannot get the mounted volume to unmount, you may need to reboot to completely clear the issue.
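
Before falling back to a reboot, it can be worth checking what is keeping the mount busy. This is a hedged sketch; the mount path is the one from the error above and will differ on your system:

# List the processes holding the tmpmount open
sudo fuser -vm /var/snap/microk8s/common/var/lib/containerd/tmpmounts/containerd-mount490181863

# A lazy unmount can sometimes release it, though a reboot remains the reliable option
sudo umount -l /var/snap/microk8s/common/var/lib/containerd/tmpmounts/containerd-mount490181863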

Root Cause

The root cause is that the registry container will not start:

$ kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS             RESTARTS   AGE
kube-system          hostpath-provisioner-566686b959-jl2b4        1/1     Running            1          69m
...
container-registry   registry-9b57d9df8-kmks4                     0/1     ImagePullBackOff   0          44m

To figure out why this won't start, we need to run kubectl describe pod -n container-registry registry:

$ kubectl describe pod -n container-registry registry
Name:         registry-9b57d9df8-k7v2r
Namespace:    container-registry
Priority:     0
Node:         mynode/192.168.122.1
Start Time:   Thu, 26 May 2022 11:33:04 -0700
Labels:       app=registry
              pod-template-hash=9b57dea58
...
  Normal   BackOff           5m41s (x4 over 7m36s)  kubelet            Back-off pulling image "cdkbot/registry-amd64:2.6"
  Warning  Failed            5m41s (x4 over 7m36s)  kubelet            Error: ImagePullBackOff
  Warning  Failed            2m58s                  kubelet            Failed to pull image "cdkbot/registry-amd64:2.6": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/cdkbot/registry-amd64:2.6": failed to extract layer sha256:8aa4fcad5eeb286fe9696898d988dc85503c6392d1a2bd9023911fb0d6d27081: failed to unmount /var/snap/microk8s/common/var/lib/containerd/tmpmounts/containerd-mount490181863: failed to unmount target /var/snap/microk8s/common/var/lib/containerd/tmpmounts/containerd-mount490181863: device or resource busy: unknown

Looking at the above, we can see that /var/snap/microk8s/common/var/lib/containerd/tmpmounts/containerd-mount490181863 is busy and failing to unmount, thus causing our problem.

We've also noticed in this case that bits of an old image download can be left in containerd and we've updated the resolution to handle this as well.

Installer fails with AnsibleUnsafeText object has no attribute 'addons'

Issue

TASK [microk8s : convert from list to dict] ***************************************************************************************************************************************************************************************
task path: /home/user/element-enterprise-installer-2022-05.06/ansible/roles/microk8s/tasks/addons.yml:12
fatal: [localhost]: FAILED! => {
    "msg": "'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'addons'"
}

Environment

Resolution

Run:

microk8s.start

and then restart the installer.
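
If you want to confirm that microk8s is actually up before re-running the installer, a quick check is:

# Blocks until microk8s reports ready, then prints the addon status
microk8s.status --wait-ready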

Root Cause

Situations exist where the installer can get into a state where microk8s has not started, but the installer thinks microk8s is running.

How to Setup Local Host Resolution Without DNS

Overview

In an Element Enterprise On-Premise environment, hostnames must resolve to the appropriate IP addresses. If you have a proper DNS server with records for these hostnames in place, then you will be good to go.

In the event that a DNS server is not available for proper hostname resolution, you may use /etc/hosts and host_aliases. This article will walk you through that.

If you choose to use this method, do note that federation outside of your local environment will not be possible. Further, using the mobile applications will not be possible as they must be able to access your environment, which typically requires DNS.

Further, this assumes that you are using the single node installer based on microk8s.

Steps

For single node installations with microk8s, if we were setting up synapse and element on the local domain with the IP of 192.168.122.39, we would set the following entries in /etc/hosts:

192.168.122.39 element.local 
192.168.122.39 synapse.local
192.168.122.39 local

and the following in host_aliases in the parameters.yml file found in your configuration directory:

host_aliases:
  - ip: "192.168.122.39"
    hostnames:
      - "element.local"
      - "synapse.local"
      - "local"

How to Upgrade microk8s for Single Node Installations

microk8s in Single Node Installations

For Element On-Premise and Element Enterprise On-Premise, we offer a multi-node installer and a single-node installer. In our single-node installations, we install Canonical's microk8s, a lightweight distribution of kubernetes. We then use this installation to deploy our software via our kubernetes operator. All of this is managed by our installer.

That said, we do not handle the upgrading of existing microk8s installations with the installer. This document details how to upgrade microk8s when needed. If you have any questions, please do not hesitate to contact Element Support.

Upgrading microk8s

The first step in upgrading microk8s to the latest version deployed by the installer is to remove the existing microk8s installation. Given that all of microk8s is managed by a snap, we can do this without worrying about our Element Enterprise On-Premise installation. The important data for your installation is all stored outside of the snap space and will not be impacted by removing microk8s. Start by running:

sudo snap list

and just determine that microk8s is installed:

[user@element2 element-enterprise-installer-2022-05.06]$ sudo snap list
Name      Version    Rev    Tracking     Publisher   Notes
core      16-2.55.5  13250  -            canonical✓  core
core18    20220428   2409   -            canonical✓  base
microk8s  v1.21.13    3410   1.21/stable  canonical✓  classic

Once you've made sure that microk8s is installed, remove it by running:

sudo snap remove microk8s

Now at this point, you should be able to verify that microk8s is no longer installed by running:

sudo snap list

and getting output similar to:

[user@element2 element-enterprise-installer-2022-05.06]$ sudo snap list
Name      Version    Rev    Tracking     Publisher   Notes
core      16-2.55.5  13250  -            canonical✓  core
core18    20220428   2409   -            canonical✓  base

Now that you no longer have microk8s installed, you are ready to run the latest installer. Once you run the latest installer, it will install the latest version of microk8s.

When the installer finishes, you should see an upgraded version of microk8s installed if you run sudo snap list similar to:

Name      Version   Rev    Tracking       Publisher   Notes
core18    20220706  2538   latest/stable  canonical✓  base
microk8s  v1.24.3   3597   1.24/stable    canonical✓  classic
snapd     2.56.2    16292  latest/stable  canonical✓  snapd

At this point, you will need to reboot the server to restore proper networking into the microk8s cluster. After the reboot, wait for your pods to start; your Element Enterprise On-Premise installation is now running a later version of microk8s.
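
A simple way to watch the pods come back after the reboot (this assumes kubectl access as used elsewhere in this guide):

watch -n 5 kubectl get pods -A

Once everything reports Running, the upgrade is complete.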

After upgrading to 1.0.0, postgres-0 is in CrashLoopBackOff state

Issue

Environment

Resolution

To fix this issue, first read the root cause and issue sections and double check that this is your issue. The resolution is to delete the sts, pvc, and pv for postgres, remove the empty data directory, and then re-run the installer. These steps WILL destroy any existing PostgreSQL data, which in the ephemeral case (that this issue describes) is none.

To find where the data directory is, run:

kubectl describe pv postgres | grep -i path

This will show output similar to:

StorageClass:      microk8s-hostpath
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/data/synapse-postgres
    HostPathType:  

From here, we can see that /mnt/data/synapse-postgres is where postgres is trying to initialize the database. Let's take a look at that directory:

[user@element2 element-enterprise-installer-1.0.0]$ sudo ls -l /mnt/data/synapse-postgres/
total 0
drwx------. 2 systemd-coredump input 6 Apr 26 15:13 data
[user@element2 element-enterprise-installer-1.0.0]$ sudo ls -l /mnt/data/synapse-postgres/data
total 0

As you can see, we have the data directory and it is empty. Make a note of this directory for later.

Now we need to remove the pvc and the pv. If you really do have just an empty data directory, there is no need to make a backup. If you have more than an empty data directory in your postgres pv path, you will want to STOP AND MAKE A BACKUP OF THAT PATH'S CONTENTS.

Now, to delete the PVC, you will need two terminals. In one terminal, you will run:

kubectl delete pvc -n element-onprem postgres

You will notice that this command just sits there waiting once run. In another terminal, run this command:

kubectl delete pod -n element-onprem postgres-0

As soon as the pod is deleted, you should notice that the kubectl delete pvc command also completes. At this point, we now need to delete the pv:

kubectl delete pv -n element-onprem postgres

Now it is time to remove the sts for postgres:

kubectl delete sts -n element-onprem postgres

Remove the data directory:

sudo rm -r /mnt/data/synapse-postgres/data

Now re-run the installer. Once the installer is re-run, you should have a working postgresql. You should notice a running pod in kubectl get pods -n element-onprem:

postgres-0                                  1/1     Running   0              2m11s

and your /mnt/data/synapse-postgres directory should have entries similar to:

drwx------. 6 systemd-coredump input    54 May  6 10:14 base
drwx------. 2 systemd-coredump input  4096 May  6 10:15 global
drwx------. 2 systemd-coredump input     6 May  6 10:14 pg_commit_ts
drwx------. 2 systemd-coredump input     6 May  6 10:14 pg_dynshmem
-rw-------. 1 systemd-coredump input  4782 May  6 10:14 pg_hba.conf
-rw-------. 1 systemd-coredump input  1636 May  6 10:14 pg_ident.conf
drwx------. 4 systemd-coredump input    68 May  6 10:14 pg_logical
drwx------. 4 systemd-coredump input    36 May  6 10:14 pg_multixact
drwx------. 2 systemd-coredump input     6 May  6 10:14 pg_notify
drwx------. 2 systemd-coredump input     6 May  6 10:14 pg_replslot
drwx------. 2 systemd-coredump input     6 May  6 10:14 pg_serial
drwx------. 2 systemd-coredump input     6 May  6 10:14 pg_snapshots
drwx------. 2 systemd-coredump input     6 May  6 10:14 pg_stat
drwx------. 2 systemd-coredump input    63 May  6 10:15 pg_stat_tmp
drwx------. 2 systemd-coredump input    18 May  6 10:14 pg_subtrans
drwx------. 2 systemd-coredump input     6 May  6 10:14 pg_tblspc
drwx------. 2 systemd-coredump input     6 May  6 10:14 pg_twophase
-rw-------. 1 systemd-coredump input     3 May  6 10:14 PG_VERSION
drwx------. 3 systemd-coredump input    60 May  6 10:14 pg_wal
drwx------. 2 systemd-coredump input    18 May  6 10:14 pg_xact
-rw-------. 1 systemd-coredump input    88 May  6 10:14 postgresql.auto.conf
-rw-------. 1 systemd-coredump input 28156 May  6 10:14 postgresql.conf
-rw-------. 1 systemd-coredump input    36 May  6 10:14 postmaster.opts
-rw-------. 1 systemd-coredump input    94 May  6 10:14 postmaster.pid

Finally, restart the synapse pod by doing:

kubectl delete pod -n element-onprem instance-synapse-main-0

Wait for that pod to restart and be completely running again. Verify with kubectl get pods -n element-onprem that you have a line similar to:

instance-synapse-main-0                     1/1     Running   0              2m36s

Root Cause

In 0.6.1, we had a bug which caused the included PostgreSQL database to not be written to disk, and thus it did not survive restarts. The bug has been fixed in 1.0.0; however, prior versions of the installer did get as far as writing a data directory into the PostgreSQL storage set up by microk8s. As such, postgres finds this directory on start-up and fails to initialize a new database, with the specific log mentioned in the Issue section.

If you do not have this specific error, please do not run the steps in the Resolution section of this knowledge base solution.

Integrator fails with Unable to initialise application. Failed to validate config: data/jitsi_domain must match format "hostname"...

Issue

Environment

Resolution

Root Cause

There is a bug in this installer which specifies https:// in front of the default domain and this causes the error. This will be fixed in a future installer.

Installer fails looking for join_local_rooms_only

Issue

Environment

Resolution

If you have previously installed adminbot, please add the following line to your CONFIG_DIR/adminbot/adminbot.yml:

join_local_rooms_only: true

If you have previously installed auditbot, please add the following line to your CONFIG_DIR/auditbot/auditbot.yml:

join_local_rooms_only: true

and re-run the installer. Afterwards, you should not hit this error message.

Root Cause

We added a required variable join_local_rooms_only to the adminbot and auditbot configuration that must be set for the installer to complete successfully.

Hookshot fails to display configuration widget

Issue

Environment

Resolution

You will need to redefine the disallowed IP list in the hookshot config so that it does not include your IP address's range. Assuming that your IP address is in the 192.168.122.0/24 range, you could add the following to CONFIG_DIR/hookshot/hookshot.yml:

disallowed_ip_ranges:
     - 127.0.0.0/8
     - 10.0.0.0/8
     - 172.16.0.0/12
     - 100.64.0.0/10
     - 169.254.0.0/16
     - 192.88.99.0/24
     - 198.18.0.0/15
     - 192.0.2.0/24
     - 198.51.100.0/24
     - 203.0.113.0/24
     - 224.0.0.0/4
     - ::1/128
     - fe80::/10
     - fc00::/7
     - 2001:db8::/32
     - ff00::/8
     - fec0::/10

After this, you would re-run the installer. This list omits the default ranges that would block the 192.168.122.0/24 subnet. You will need to adjust it for your particular use case. To build the above list, we took the default list mentioned in the root cause and edited it down.

Root Cause

Looking at the logs for hookshot while attempting this configuration (kubectl logs -n element-onprem instance-hookshot-0) shows:

INFO 18:04:13:625 [Appservice] 10.1.108.141 - - [11/Oct/2022:18:04:13 +0000] "PUT /transactions/133?access_token=%3Credacted%3E HTTP/1.1" 200 2 "-" "Synapse/1.65.0"

Oct-11 18:04:21.668 WARN ProvisioningApi Failed to fetch the server URL for element.demo ApiError: API error M_AS_BAD_OPENID: Server is disallowed
    at ProvisioningApi.checkIpBlacklist (/bin/matrix-hookshot/node_modules/matrix-appservice-bridge/lib/provisioning/api.js:235:19)
    at async ProvisioningApi.postExchangeOpenId (/bin/matrix-hookshot/node_modules/matrix-appservice-bridge/lib/provisioning/api.js:259:17) {
  error: 'Server is disallowed',
  errcode: 'M_AS_BAD_OPENID',
  statusCode: 500,
  additionalContent: {}
}
Oct-11 18:04:21.668 ERROR ProvisioningApi ApiError: API error M_AS_BAD_OPENID: Could not identify server url
    at ProvisioningApi.postExchangeOpenId (/bin/matrix-hookshot/node_modules/matrix-appservice-bridge/lib/provisioning/api.js:264:19) {
  error: 'Could not identify server url',
  errcode: 'M_AS_BAD_OPENID',
  statusCode: 500,
  additionalContent: {}
}

The "Server is disallowed" message tells us that the IP address of synapse is disallowed by hookshot. Hookshot has a default list of disallowed IPs as documented here: https://github.com/matrix-org/matrix-hookshot/blob/main/docs/advanced/widgets.md to prevent Server Side Request Forgery. If your IP address is in that list, then you will need to redefine the disallow list to not include your IP range.

Installer hangs on "microk8s : Wait for microk8s to be ready" task

Issue

Environment

Resolution

In parameters.yml, make sure the following is set for local_registry:

local_registry: localhost:32000

Once you have set this to localhost instead of 127.0.0.1, you can re-run the installer and the issue should be resolved.

Root Cause

The root cause of this issue is that specifying 127.0.0.1 for the local_registry causes microk8s to not be able to access the local registry and thus cannot finish setting itself up. As a result, the installer constantly waits for microk8s to be up, which never happens.

After an install, I only have the postgres-0 pod!

Issue

Environment

Resolution

Root Cause

The reason this happens is that, under certain scenarios, microk8s fails to load the br_netfilter kernel module. This causes the calico networking to fall back to user-space routing, which does not work in this environment and prevents the calico-kube-controllers pod from starting, which in turn keeps the rest of the stack from coming up. More on this specific issue can be seen here: https://github.com/canonical/microk8s/issues/3085. The microk8s team does expect to release a fix and we will work to incorporate it in the future.
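
A hedged way to check whether you are hitting this is to verify that the module is loaded and, as a stopgap, load it manually before restarting microk8s:

# No output means the module is not loaded
lsmod | grep br_netfilter

# Load it manually, then restart microk8s so calico can pick it up
sudo modprobe br_netfilter
microk8s.stop
microk8s.start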

Using Self-Signed Certificates with mkcert

Overview

We do not recommend using self-signed certificates with Element Enterprise On-Premise, however, we recognize that there are times when self-signed certificates can be the fastest way forward for demo or PoC purposes. It is in this spirit that these directions are provided.

Steps

The following instructions will enable you to use a tool called mkcert to generate self-signed certificates. Element does not ship this tool and so these directions are provided as one example of how to get self-signed certificates.

Ubuntu:

sudo apt-get install wget libnss3-tools

EL:

sudo yum install wget nss-tools -y

Both EL and Ubuntu:

wget -O mkcert "https://dl.filippo.io/mkcert/latest?for=linux/amd64"
sudo mv mkcert /usr/bin/
sudo chmod +x /usr/bin/mkcert

Once you have mkcert executable, you can run:

mkcert -install
The local CA is now installed in the system trust store! ⚡️

Now, you can verify the CA Root by doing:

mkcert -CAROOT
/home/element-demo/.local/share/mkcert

Your output may not be exactly the same, but it should be similar. Once we’ve done this, we need to generate self-signed certificates for our hostnames.

You can either do this by generating a wildcard certificate that works for all subdomains or you can do this per domain.

The following is an example for how to build a wildcard cert for element.local. You will only need to run this once and then you can use the generated certificate for all hostnames that require a certificate:

mkcert *.element.local element.local 192.168.122.39 127.0.0.1

Created a new certificate valid for the following names 📜 - "*.element.local"
 - "element.local"
 - "192.168.122.39"
 - "127.0.0.1"

Reminder: X.509 wildcards only go one level deep, so this won't match a.b.element.local ℹ

The certificate is at "./_wildcard.element.local+3.pem" and the key at "./_wildcard.element.local+3-key.pem" ✅
It will expire on 5 July 2025 🗓

The following is an example of how to do it for element.local. You will need to do this for all of the aforementioned hostnames, including the fqdn.tld.

The run for the element fqdn looks like this:

mkcert element.local element 192.168.122.39 127.0.0.1

Created a new certificate valid for the following names
- "element.local"
- "element"
- "192.168.122.39"
- "127.0.0.1"

The certificate is at "./element.local+3.pem" and the key at
"./element.local+3-key.pem" ✅

It will expire on 1 May 2024

Once you have self-signed certificates, you need to rename them for each host with the form of fqdn.crt and fqdn.key.

Using our above example, these are the commands we would need to run from the installer directory just for the element.local certificate: (We ran mkcert in that directory as well.)

cp element.local+3.pem  element.local.crt
cp element.local+3-key.pem  element.local.key

In the case of the wildcard certificate, we could run:

cp ./_wildcard.element.local+3.pem wildcard.element.local.crt
cp ./_wildcard.element.local+3-key.pem wildcard.element.local.key

and then use this file where needed in the graphical installer for a crt/key pair.

Installer fails on firewalld, but firewalld is not installed

Issue

I'm seeing this with the installer, but I don't have firewalld installed:

2022-08-15 15:52:20,258 p=33 u=element n=ansible | TASK [microk8s : Check that firewalld is started if installed] ******************************************************************************************************************************************************************************
2022-08-15 15:52:20,299 p=33 u=element n=ansible | fatal: [localhost]: FAILED! => {
    "changed": false,
    "msg": "Firewalld is installed. Please start it for the installer to successfully configure it.\n"
}

Environment

Resolution

Upgrade to Element Enterprise Installer 2022-08.02, which has the fixes for this.

Root Cause

On Ubuntu, systemd will still report data for firewalld if it has been installed and then uninstalled. Our installer did not account for this scenario; once we found it, we modified our checks. Those fixes went into 2022-08.02.

Pip cache errors

Issue

Environment

Resolution

Clear pip's HTTP cache by running:

cd ~/.cache/pip
rm -rf http
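
Alternatively, on pip 20.1 or newer, the cache can be cleared with pip's built-in command:

pip cache purge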

Root Cause

What Telemetry Data is Collected by Element?

Issue

Environment

Telemetry Data Sample

The following is a sample telemetry packet generated by Element On-Premise:

{
    "_id" : ObjectId("6363bdd7d51c84d1f10a8126"),
    "onPremiseSubscription" : ObjectId("62f14dd303c67b542efddc4f"),
    "payload" : {
        "data" : {
            "activeUsers" : {
                "count" : 1,
                "identifiers" : {
                    "native" : [
                        "5d3510fc361b95a5d67a464a188dc3686f5eaf14f0e72733591ef6b8da478a18"
                    ]
                },
                "period" : {
                    "end" : 1667481013777,
                    "start" : 1666970260518
                }
            }
        },
        "generationTime" : 1667481013777,
        "hostname" : "element.demo",
        "instanceId" : "bd3bbf92-ac8c-472e-abb5-74b659a04eec",
        "type" : "synapse",
        "version" : 1
    },
    "request" : {
        "clientIp" : "71.70.145.71",
        "userAgent" : "Synapse/1.65.0"
    },
    "schemaVersion" : 1,
    "creationTimestamp" : ISODate("2022-11-03T13:10:47.476Z")
}

Getting a 502 Bad Gateway Error When Accessing Element Web

Issue

Environment

Resolution

sudo firewall-cmd --add-service={http,https} --permanent
sudo firewall-cmd --add-masquerade --permanent
sudo firewall-cmd --reload
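
To confirm the firewalld changes took effect, you can inspect the active zone (output will vary by system):

# Look for http and https under 'services:' and for 'masquerade: yes'
sudo firewall-cmd --list-all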

Root Cause

By default, firewalld does not allow masquerading (Network Address Translation, NAT) through the firewall. This blocks the NAT required to reach the pods in microk8s, which results in the 502 Bad Gateway error when accessing Element Web.

Configuring a microk8s Single Node Instance to Use a Network Proxy

Overview

If you are using the microk8s Single Node Installer and your site requires proxy access to get to the internet, making a few quick changes to your operating system configuration will enable our installer to access the resources it needs over the internet. This document discusses these changes.

Steps

If your site requires a proxy to access the internet, please make sure that the following environment variables are set on the host:

Ubuntu Specific Directions

If your company's proxy is http://corporate.proxy:3128, you would edit /etc/environment and add the following lines:

HTTPS_PROXY=http://corporate.proxy:3128
HTTP_PROXY=http://corporate.proxy:3128
https_proxy=http://corporate.proxy:3128
http_proxy=http://corporate.proxy:3128
NO_PROXY=10.1.0.0/16,10.152.183.0/24,127.0.0.1
no_proxy=10.1.0.0/16,10.152.183.0/24,127.0.0.1

The IP Ranges specified to NO_PROXY and no_proxy are specific to the microk8s cluster and prevent microk8s traffic from going over the proxy.

EL Specific Directions

Using the same example of having a company proxy at http://corporate.proxy:3128, you would edit /etc/profile.d/http_proxy.sh and add the following lines:

export HTTP_PROXY=http://corporate.proxy:3128
export HTTPS_PROXY=http://corporate.proxy:3128
export http_proxy=http://corporate.proxy:3128
export https_proxy=http://corporate.proxy:3128
export NO_PROXY=10.1.0.0/16,10.152.183.0/24,127.0.0.1
export no_proxy=10.1.0.0/16,10.152.183.0/24,127.0.0.1

The IP Ranges specified to NO_PROXY and no_proxy are specific to the microk8s cluster and prevent microk8s traffic from going over the proxy.

In Conclusion

You will need to log out and back in for the environment variables to be re-read after setting them. If you already have microk8s running, you will need to issue:

microk8s.stop
microk8s.start

to have it reload the new environment variables.

If you need to use an authenticated proxy, then the URL schema for both EL and Ubuntu is as follows:

protocol://user:password@host:port

So if your proxy is corporate.proxy, listens on port 3128 without SSL, and requires a username of bob and a password of inmye1em3nt, then your URL would be formatted:

http://bob:inmye1em3nt@corporate.proxy:3128

For further help with proxies, we suggest that you contact your proxy administrator or operating system vendor.

Installer 2022-08.01 fails to pull element web into the cluster

Issue

Environment

Resolution

It is necessary to uncomment the following variables in secrets.yml:

dockerhub_username: 
dockerhub_token: 

If you have a dockerhub_username and dockerhub_token, please define them in secrets.yml. If not, then please leave them blank but uncommented.

Root Cause

Version 2022-08.01 uses an element web image hosted in ems-image-store. A defect appeared during the migration of the image, and the installer looks for the variables dockerhub_username and dockerhub_token to know whether it has to configure docker secrets in the cluster.

url.js:354 error starting dimension

Issue

Starting matrix-dimension
url.js:354
      this.auth = decodeURIComponent(rest.slice(0, atSign));
                  ^

URIError: URI malformed
    at decodeURIComponent (<anonymous>)
    at Url.parse (url.js:354:19)
    at Object.urlParse [as parse] (url.js:157:13)
    at new Sequelize (/home/node/matrix-dimension/node_modules/sequelize/dist/lib/sequelize.js:1:1292)
    at new Sequelize (/home/node/matrix-dimension/node_modules/sequelize-typescript/dist/sequelize/sequelize/sequelize.js:16:9)
    at new _DimensionStore (/home/node/matrix-dimension/build/app/db/DimensionStore.js:42:30)
    at Object.<anonymous> (/home/node/matrix-dimension/build/app/db/DimensionStore.js:106:26)
    at Module._compile (internal/modules/cjs/loader.js:1072:14)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1101:10)
    at Module.load (internal/modules/cjs/loader.js:937:32)

Environment

Resolution

Ensure that you do not have any % characters in your PostgreSQL password. Once you have removed any % characters from your PostgreSQL password, please update your configuration files and re-run the installer.

Root Cause

Dimension does not properly encode the % character in its PostgreSQL connection URL, and this triggers the above error.

Installer fails on enabling addons

Issue

The installer is stating that it's failed and I'm seeing messages like:

skipping: [localhost] => (item=host-access) 
changed: [localhost] => (item=ingress)
FAILED - RETRYING: [localhost]: enable addons (3 retries left).
FAILED - RETRYING: [localhost]: enable addons (2 retries left).
FAILED - RETRYING: [localhost]: enable addons (1 retries left).
failed: [localhost] (item=metrics-server) => {"ansible_loop_var": "item", "attempts": 3, "changed": true, "cmd": ["/snap/bin/microk8s.enable", "metrics-server"], "delta": "0:00:09.568390", "end": "2022-04-13 12:08:41.833858", "item": {"enabled": true, "name": "metrics-server"}, "msg": "non-zero return code", "rc": -15, "start": "2022-04-13 12:08:32.265468", "stderr": "Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService", "stderr_lines": ["Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService"], "stdout": "Enabling Metrics-Server\nclusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged\nclusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\nrolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\napiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged\nserviceaccount/metrics-server unchanged\ndeployment.apps/metrics-server unchanged\nservice/metrics-server unchanged\nclusterrole.rbac.authorization.k8s.io/system:metrics-server unchanged\nclusterrolebinding.rbac.authorization.k8s.io/system:metrics-server unchanged\nclusterrolebinding.rbac.authorization.k8s.io/microk8s-admin unchanged", "stdout_lines": ["Enabling Metrics-Server", "clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged", "clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged", "rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged", "apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged", "serviceaccount/metrics-server unchanged", "deployment.apps/metrics-server unchanged", "service/metrics-server unchanged", "clusterrole.rbac.authorization.k8s.io/system:metrics-server unchanged", "clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server unchanged", "clusterrolebinding.rbac.authorization.k8s.io/microk8s-admin unchanged"]}
skipping: [localhost] => (item=rbac) 
changed: [localhost] => (item=registry)

Environment

Resolution

Re-run the installer until these errors clear and all of the microk8s addons are enabled.

Root Cause

There is a microk8s timing issue that we have not quite figured out.

I'd like to turn off federation

Issue

Environment

Resolution

Add an empty federation_domain_whitelist. To do this, add the following in a .yml file in the Synapse config folder:

federation_domain_whitelist: []

See https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html?highlight=federation#federation for further details.

N.B. we recommend also firewalling your federation listener to limit inbound federation traffic as early as possible, rather than relying purely on this application-layer restriction. If federation_domain_whitelist is not specified, the default is to allow federation with every domain.

Root Cause

Federation is on by default.

How do I migrate to SSO while keeping my original accounts?

Issue

Environment

Resolution

Transferring SSO external_ids to original users

To transfer 'external_ids' from SSO accounts to your original accounts, you will need to use the Admin API.

Admin API Documentation

Getting an Access Token

Before being able to use the Admin API, you will need an admin account and its 'Access Token'. You can make a user a Synapse Admin either by following the steps in the link above (required for On-Premise), or by following these steps on the EMS Control Panel:

  1. Access the 'Server Admin' tab
  2. Under the 'Users' tab, select the user that should be made Synapse Admin
  3. Click the checkbox next to 'Synapse Admin' and click 'Yes' to confirm

Once a user is a Synapse Admin, you can retrieve their 'Access Token' by logging in via the Element Matrix client:

  1. Click on the user's profile icon in the top-left and select 'All Settings'
  2. Open the 'Help & About' settings page, then scroll down to the 'Advanced' section
  3. Click 'Access Token' to reveal the token, copy this to interact with the Admin API

Using an Access Token

Access tokens will need to be passed into all API requests as an Authorization Bearer token; see the examples below:

Bash:

curl --request GET 'https://example.com/_synapse/admin/v2/users?from=0&limit=10&guests=false' \
	--header 'Authorization: Bearer syt_adminToken'

Windows:

$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("Authorization", "Bearer syt_adminToken")

Python:

import requests
headers = {
    'Authorization': 'Bearer syt_adminToken',
}

Customising commands for your specific environment

The commands to follow are generic and will need to be modified to suit your specific environment:

Getting SSO users' external_ids

Admin API List Accounts

Admin API Query User Account

  1. For each user, you can then use their name in another GET request to /_synapse/admin/v2/users/<user_id>, replacing <user_id> with the user's name from the list accounts response.

    Bash:

    curl --request GET 'https://example.com/_synapse/admin/v2/users/@user:example.com' \
    	--header 'Authorization: Bearer syt_adminToken'
    

    Windows:

    $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
    $headers.Add("Authorization", "Bearer syt_adminToken")
    
    $response = Invoke-RestMethod 'https://example.com/_synapse/admin/v2/users/@user:example.com' -Method 'GET' -Headers $headers
    $response | ConvertTo-Json
    
  2. You will find the external_ids for each user within the JSON output. You can programmatically run through all users and generate a list of only those with external_ids, removing unneeded information. (See example below)

    Python:

    import requests
    
    # REPLACE THESE VALUES WITH ACCESS TOKEN AND HOME SERVER URL
    headers = {
        'Authorization': 'Bearer syt_adminToken',
    }
    url = 'https://example.com'
    
    # GET LIST OF ALL USERS ON HOME SERVER
    # OUTPUT: 'all_users' contains a list of all users
    next_token = '0'
    last_token = ''
    all_users = []
    get_users = requests.get(url + '/_synapse/admin/v2/users?from=' + next_token + '&limit=10&guests=false', headers=headers).json()
    for user in get_users['users']:
        all_users.append(user['name'])
    while ('next_token' in get_users) and (next_token != last_token):
        next_token = get_users['next_token']
        get_users = requests.get(url + '/_synapse/admin/v2/users?from=' + next_token + '&limit=10&guests=false', headers=headers).json()
        for user in get_users['users']:
            all_users.append(user['name'])
    
    # FOR EACH USER, GET ALL INFO, EXCLUDE THOSE WITHOUT 'external_ids'
    # OUTPUT: 'all_external_ids' contains a list of all users with external ids
    all_external_ids = []
    for user in all_users:
        get_user = requests.get(url + '/_synapse/admin/v2/users/' + user, headers=headers).json()
        if get_user.get('external_ids'):
            all_external_ids.append(
                {
                    'sso_username': user,
                    'original_username': '',
                    'external_ids': get_user['external_ids']
                }
            )
    
    

Transferring SSO external_ids information

Admin API Create or Modify Account

With all external_ids collected, you will need to identify each SSO account and the original account that you'd like to transfer the SSO information over to.

If using the Python example above, you will need to store the original username within all_external_ids[X]['original_username'], replacing X with the index of the SSO user. If you create a dictionary with the SSO usernames as keys and the desired original usernames as values, you could use the following to update all_external_ids:

Python:

# ADD REQUIRED ORIGINAL USERNAME
for user in all_external_ids:
    dict_storing_sso2orig = {'@example_sso_user:example.com': '@example_orig_user:example.com'}
    user['original_username'] = dict_storing_sso2orig[str(user['sso_username'])]

Once you have related all SSO usernames to original usernames, you can then, using the link above, use a PUT request to /_synapse/admin/v2/users/<user_id> to change the external_ids data for each account.

Bash:
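
The following is a hedged sketch of the two PUT requests described above, mirroring the earlier curl examples; @example_sso_user, @example_orig_user, syt_adminToken, and the oidc auth_provider/external_id values are illustrative placeholders:

curl --request PUT 'https://example.com/_synapse/admin/v2/users/@example_sso_user:example.com' \
	--header 'Authorization: Bearer syt_adminToken' \
	--header 'Content-Type: application/json' \
	--data '{"external_ids": []}'

curl --request PUT 'https://example.com/_synapse/admin/v2/users/@example_orig_user:example.com' \
	--header 'Authorization: Bearer syt_adminToken' \
	--header 'Content-Type: application/json' \
	--data '{"external_ids": [{"auth_provider": "oidc", "external_id": "example_external_id"}]}'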

Windows:

Continuing with the Python example, you can now use this to remove the external_ids from each SSO account, and add that information to the associated Original account.

Python:

# REMOVE 'external_ids' from 'sso_username' ACCOUNTS FROM 'all_external_ids' THEN
# UPDATE ALL 'original_username' ACCOUNTS FROM 'all_external_ids' WITH 'external_ids' FROM 'sso_username'
for user in all_external_ids:
    data = '{"external_ids":' + str(user['external_ids']).replace("'", '"').replace(" ", "") + '}'
    remove_sso = requests.put(url + '/_synapse/admin/v2/users/' + user['sso_username'], headers=headers, data='{"external_ids":[]}')
    if remove_sso.status_code == 200:
        add_sso = requests.put(url + '/_synapse/admin/v2/users/' + user['original_username'], headers=headers, data=data)

Python Example

The full python script is available below:

import requests

# REPLACE THESE VALUES WITH ACCESS TOKEN AND HOME SERVER URL
headers = {
    'Authorization': 'Bearer syt_adminToken',
}
url = 'https://example.com'

# GET LIST OF ALL USERS ON HOME SERVER
# OUTPUT: 'all_users' contains a list of all users
next_token = '0'
last_token = ''
all_users = []
get_users = requests.get(url + '/_synapse/admin/v2/users?from=' + next_token + '&limit=10&guests=false',
                         headers=headers).json()
for user in get_users['users']:
    all_users.append(user['name'])
while ('next_token' in get_users) and (next_token != last_token):
    next_token = get_users['next_token']
    get_users = requests.get(url + '/_synapse/admin/v2/users?from=' + next_token + '&limit=10&guests=false',
                             headers=headers).json()
    for user in get_users['users']:
        all_users.append(user['name'])

# FOR EACH USER, GET ALL INFO, EXCLUDE THOSE WITHOUT 'external_ids'
# OUTPUT: 'all_external_ids' contains a list of all users with external ids
all_external_ids = []
for user in all_users:
    get_user = requests.get(url + '/_synapse/admin/v2/users/' + user, headers=headers).json()
    if get_user.get('external_ids'):
        all_external_ids.append(
            {
                'sso_username': user,
                'original_username': '',
                'external_ids': get_user['external_ids']
            }
        )

# ADD REQUIRED ORIGINAL USERNAME
# REPLACE CONTENTS OF 'dict_storing_sso2orig' TO SET UP RELATED SSO -> ORIGINAL ACCOUNTS
# CHANGE 'readme' VARIABLE BELOW TO 'True' TO CONTINUE
readme = False
dict_storing_sso2orig = {'@example_sso_user:example.com': '@example_orig_user:example.com'}
for user in all_external_ids:
    if readme is True:
        user['original_username'] = dict_storing_sso2orig[str(user['sso_username'])]

# REMOVE 'external_ids' from 'sso_username' ACCOUNTS FROM 'all_external_ids' THEN
# UPDATE ALL 'original_username' ACCOUNTS FROM 'all_external_ids' WITH 'external_ids' FROM 'sso_username'
# CHANGE 'dict_storing_sso2orig_check' VARIABLE BELOW TO 'True' TO CONTINUE
dict_storing_sso2orig_check = False
for user in all_external_ids:
    if (dict_storing_sso2orig_check is True) and (dict_storing_sso2orig != {'@example_sso_user:example.com': '@example_orig_user:example.com'}):
        data = '{"external_ids":' + str(user['external_ids']).replace("'", '"').replace(" ", "") + '}'
        remove_sso = requests.put(url + '/_synapse/admin/v2/users/' + user['sso_username'], headers=headers, data='{"external_ids":[]}')
        if remove_sso.status_code == 200:
            add_sso = requests.put(url + '/_synapse/admin/v2/users/' + user['original_username'], headers=headers, data=data)

Root Cause

If SSO is not set up prior to using Matrix, new duplicate SSO accounts are created following its configuration. Users would prefer to keep their existing accounts and associated setup (rooms, etc.), so migrating external_ids from these new SSO accounts to the originals is required.

How do I give a user admin rights when I am using delegated authentication and cannot log into the admin console?

Issue

Environment

Resolution

If you wish to give @bob:server.name admin access, then as a user with kubernetes access to your environment, run:

kubectl exec -n element-onprem -it pods/synapse-postgres-0 -- /usr/bin/psql -d synapse -U synapse_user -c "update users set admin = 1 where name = '@bob:server.name';"

You will want to replace @bob:server.name with the actual user that you wish to give admin to.
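
To confirm the change took effect, a similar query can be run (again replacing @bob:server.name with your user):

kubectl exec -n element-onprem -it pods/synapse-postgres-0 -- /usr/bin/psql -d synapse -U synapse_user -c "select name, admin from users where name = '@bob:server.name';"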

Root Cause

The issue is that the delegated authentication does not have an onprem-admin-donotdelete user and so there is no way to log in with the provided admin account.

How do I run the installer without using the GUI?

Issue

Environment

Resolution

You will need to prepare the cluster.yml, deployment.yml, secrets.yml, and legacy configuration directory to cater to your deployment. The best way to do this is to use the GUI installer and just not click install. Then you can move those files around and edit them as needed.

Once you have them configured correctly, you can run the installer in an unattended fashion without the GUI by doing:

$ ./element-enterprise-graphical-installer-YYYY-MM.VERSION-gui.bin unattended

replacing YYYY-MM.VERSION with the specific string for your downloaded installer.

Root Cause

When we switched to a GUI installer, the primary interface stopped being command line based.

Submitting Telemetry Data to Element

Issue

Environment

Resolution

By default, ESS servers connected to the internet will automatically send telemetry to Element. Please allow this to happen by making sure you have not blocked ems.element.io on port 443 from your homeserver. If you are air-gapped or need to block ems.element.io, then please follow the resolution below to manually submit telemetry.

In order to gather telemetry data, you will need to use the element-telemetry-export.py script, which comes with the installer.

To do this, run:

cd ~/.element-enterprise-server/installer/lib
/usr/bin/env python3 ./element-telemetry-export.py 

You will be prompted for an access token:

Matrix user access token not specified in the "MATRIX_USER_ACCESS_TOKEN" environment variable. Please provide the access token and hit enter: 

You will need to provide a valid access token for a user who has access to the telemetry room. This can be found by logging in to Element Web as this user, going to "All Settings", then clicking "Help & About" and finally expanding the section for "Access Token".

accesstoken.png

Provide the access token to the prompt and hit enter.

Once you have done this, you will have some messages that look similar to:

2023-04-18 15:36:41,580:INFO:Parsing configuration file (/home/karl1/.element-enterprise-server/config/telemetry-config.json)
2023-04-18 15:36:41,581:INFO:Performing Matrix sync with homeserver (https://hs.element.demo)
2023-04-18 15:36:41,643:INFO:Scanning page 1
2023-04-18 15:36:41,716:INFO:Scanning page 2
2023-04-18 15:36:41,782:INFO:Writing 19 telemetry events to ZIP file (/home/karl1/.element-enterprise-server/installer/lib/telemetry_2023-04-18.zip)
2023-04-18 15:36:41,783:INFO:Saving some internal state (for next time)

and you will have a new zip file in this directory with a date stamp in the format telemetry_YYYY-MM-DD.zip. In my case, I have telemetry_2023-04-18.zip.

If you are having SSL connectivity issues with the exporter, you may wish to either disable TLS verification or provide a CA certificate to the exporter with these optional command line parameters:

  --disable-tls-verification
                        Do not check SSL certificate validity when querying the Matrix server
  --ca-cert-path CA_CERT_PATH
                        Specify the path to the CA file (or a directory) to use when verifying Matrix server's
                        SSL certificate. Consult README.md for more details
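
For example, to point the exporter at an internal CA bundle (the path here is illustrative):

cd ~/.element-enterprise-server/installer/lib
/usr/bin/env python3 ./element-telemetry-export.py --ca-cert-path /path/to/internal-ca.pem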

I can now browse to https://ems.element.io/on-premise/subscriptions and click "Upload Telemetry" next to the subscription that I wish to upload the data for:

ems-subs.png

I can then browse for my telemetry file and click "Submit Telemetry":

browse-telemetry.png

Once successful, you will see this screen:

success.png

You can then close the upload window.

Root Cause

What data should I collect when I have an issue?

Issue

Environment

Resolution

The following data is helpful to collect for Element:

For issues perceived to be with the installer, please run:

kubectl logs -n operator-onprem $(kubectl get pods -n operator-onprem  | grep -i running | awk '{print $1}') > ~/operator-$(hostname).log
kubectl logs -n updater-onprem $(kubectl get pods -n updater-onprem  | grep -i running | awk '{print $1}') > ~/updater-$(hostname).log
tar cvjf element-installer-config-logs-$(hostname).tar.bz2 ~/.element-enterprise-server/config ~/.element-enterprise-server/logs ~/operator-$(hostname).log ~/updater-$(hostname).log

and send element-installer-config-logs-your.hostname.com.tar.bz2 to Element. Replace your.hostname.com with the hostname of your server to find the file.

For issues perceived to be with microk8s in single node environments, please run:

microk8s.inspect

This command will build a tarball of microk8s related data and tell you at the end where that tarball is located:

Building the report tarball
  Report tarball is at /var/snap/microk8s/4950/inspection-report-20230419_074048.tar.gz

Please send that specific tarball to Element for analysis.

I need a new token!

Issue

Environment

Resolution

Browse to https://ems.element.io/on-premise/subscriptions and you will see a page with all of your subscriptions that looks similar to:

token-rotation1.png

For your subscription, click on the "View Tokens" button and you will see this view:

On this page, click "Rotate" and you will be presented with a new token. Take care to keep it in a safe place.

Root Cause

A token is required for an Element Server Suite setup to talk to EMS and pull down the required software.

module 'jsonschema._utils' has no attribute 'load_schema'

Issue

Environment

Resolution

Please pause any updates to your installer before updating to 2023-07.02.

If you need to run an update, you can work around the error by editing ~/.element-enterprise-server/installer/requirements.txt and changing openapi_schema_validator==0.4.0 to openapi_schema_validator==0.5.0.
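
If you prefer a one-liner, the same edit can be made with sed (a sketch of the change described above):

sed -i 's/openapi_schema_validator==0.4.0/openapi_schema_validator==0.5.0/' ~/.element-enterprise-server/installer/requirements.txt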

Please update your version of the installer to 2023-07.02 or newer. Installations using these newer versions will prevent this issue from re-occurring.

Root Cause

The On-Premise installer installs required dependencies when run. The update of jsonschema to 4.18.0 has broken compatibility with the version of openapi_schema_validator the affected installer versions use.

Customers using the airgapped installer of affected versions are not affected, as the dependencies included in the build pre-date the 4.18.0 update.

Kubernetes internal certificates have expired

Issue

Environment

Resolution

Run the following commands to refresh your internal certificates:

  1. sudo microk8s.refresh-certs -e server.crt
  2. sudo microk8s.refresh-certs -e front-proxy-client.crt
  3. sudo snap restart microk8s
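
If you want to confirm the new expiry dates, the certificates live under the microk8s snap data directory (the path below is the usual default and may differ):

sudo openssl x509 -enddate -noout -in /var/snap/microk8s/current/certs/server.crt
sudo openssl x509 -enddate -noout -in /var/snap/microk8s/current/certs/front-proxy-client.crt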

Root Cause

This is most likely caused by an open issue with microk8s regarding auto-renewal of certificates; see Issue 2489.