Introduction
In recent years, the Matrix.org Foundation has made significant strides in revolutionizing real-time communication with the introduction of the MatrixRTC tech stack. This new stack, which replaces the legacy WebRTC 1.0 and Jitsi-based system, is designed to offer enhanced performance, scalability, and security. However, the transition to MatrixRTC comes with its own set of challenges, particularly for Synology NAS users who wish to deploy this technology using Container Manager (Docker).
The MatrixRTC tech stack relies on two critical components: a Selective Forwarding Unit (SFU) powered by LiveKit and a JWT service for session authorization. While Element (the developer of Synapse) initially provided a centralized SFU as a fallback to ease the transition, recent updates to their clients (Element X and Element Web) have deprecated these fallback solutions. This means that homeserver owners must now host their own SFU and JWT services to stay compatible with the new tech stack and to keep voice calls and other RTC-dependent functions working, including any that may be added in the future.
Despite the availability of the official tutorial on GitHub for self-hosting Element Call and of Ansible playbooks for implementing MatrixRTC, there is – as of this writing – a notable lack of guides tailored to Synology NAS environments. Synology’s unique technical constraints, such as limited reverse proxy configuration options, necessitate a specialized approach to deployment. This blog post aims to fill that gap by providing a comprehensive tutorial on deploying MatrixRTC to a Synology NAS using Docker.
By following this guide, Synology NAS users will be able to seamlessly transition to the new MatrixRTC stack, ensuring their homeservers remain up-to-date and fully functional when it comes to voice calls with Element Call and other RTC features that may become available in the future. Whether you’re a seasoned developer or a newcomer to Matrix, this tutorial will equip you with the knowledge and tools needed to navigate the complexities of this Synology NAS deployment.
Setup Prerequisites
To successfully implement the MatrixRTC tech stack on your Synology NAS, there are several prerequisites that need to be met. First and foremost, you should have a Synapse Matrix homeserver running. While it is optimal to deploy this homeserver on your Synology NAS, it is not strictly necessary. Synapse is the most widely used Matrix homeserver and supports all the features required for MatrixRTC.
Your Synology NAS must be capable of running Container Manager (Docker), as we will be using Docker containers to deploy the various components of the MatrixRTC stack. This capability is crucial for the deployment process.
Additionally, you must have access to your router or firewall to expose ports to the internet. This involves configuring port forwarding, typically for TCP ports such as 80 and 443, to ensure that your Synology NAS can communicate with external services.
Creating new sub-domains with CNAME capabilities is another essential requirement. This is necessary for setting up the various services required by MatrixRTC. Ensure that your DNS provider supports CNAME records and that you have the ability to manage these records. Note that I will not cover DynDNS routing in this tutorial, nor will I go into full depth on what CNAME routing is and how it works.
Finally, you need to be able to make adjustments to Synology’s reverse proxy configuration. This is a given, since we will be using the standard Synology on-board software. The reverse proxy will help manage traffic and ensure that requests are properly routed to the correct services.
By ensuring these prerequisites are met, you’ll be well-prepared to proceed with the deployment of MatrixRTC on your Synology NAS.
Implementation
Synapse Homeserver Preparation
To enable the Element Call features on your Synapse homeserver, several configuration adjustments to your homeserver.yaml are necessary. These adjustments ensure that your homeserver supports the required Matrix Spec Changes (MSCs) and can properly integrate with the MatrixRTC backend.
First, enable MSC3266: Room Summary API.
This API is crucial for Element Call to join rooms over federation using knocking. It allows the homeserver to request a room summary of rooms you have not joined, which includes the room’s join rules. This information is used to decide whether the user gets prompted with the option to knock, receives a „cannot join“ error, or sees the join view. In your homeserver.yaml configuration file, add:
experimental_features:
  msc3266_enabled: true
Next, enable MSC4222: Adding state_after to sync v2.
This MSC allows clients to opt-in to a change of the sync v2 API that enables them to correctly track the state of the room. This is required by Element Call to reliably track room state. In your homeserver.yaml configuration file, add:
experimental_features:
  msc4222_enabled: true
Configure MSC4140: Delayed Events.
Delayed events are necessary for proper call participation signaling. Without this, calls in Matrix rooms may get stuck. In your homeserver.yaml configuration file, add the following top-level setting (unlike the two MSC flags above, it does not live under experimental_features):

max_event_delay_duration: 24h
Note: A really good overview of this potential pitfall and a technical solution can be found in the highly recommended Spaetzblog article you can find here.
Lastly, adjust the rate limiting settings to match the requirements for key sharing and heartbeat frequency. In your homeserver.yaml configuration file, add:
rc_message:
  per_second: 0.5
  burst_count: 30
rc_delayed_event_mgmt:
  per_second: 1
  burst_count: 20
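Since a YAML key such as experimental_features may only appear once in homeserver.yaml, the two MSC flags above must end up in a single block rather than being pasted in twice. A minimal merged sketch of the call-related additions (leaving out the rate limits just shown) could look like this:

experimental_features:
  msc3266_enabled: true # MSC3266: Room Summary API
  msc4222_enabled: true # MSC4222: state_after in sync v2

# MSC4140: maximum allowed delay for delayed events (top-level option)
max_event_delay_duration: 24h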
Delegation specialties
Element Call requires a LiveKit SFU alongside a Matrix LiveKit JWT auth service to implement MSC4195: MatrixRTC using LiveKit backend. This backend must be announced to the client via your homeserver’s .well-known/matrix/client file. Ensure your .well-known/matrix/client configuration includes the necessary foci configs for the MatrixRTC backend.
Client delegation is a crucial aspect of this setup, as it allows you to serve Matrix services from a subdomain while maintaining the primary domain for user IDs and room aliases. This approach ensures that your Matrix IDs remain consistent and recognizable, even if the services are hosted on different subdomains. For example, you can serve Synapse on matrix.example.org but still have Matrix IDs like @john:example.org.
To achieve this, you need to configure your .well-known/matrix/client file to delegate client traffic properly. Here is an example entry to illustrate what needs to be adjusted:
{ "m.homeserver": {"base_url": "https://m.HOMESERVERNAME.TLD"}, "org.matrix.msc4143.rtc_foci": [{"type": "livekit","livekit_service_url": "https://jwt.HOMESERVERNAME.TLD"}] }
The line "org.matrix.msc4143.rtc_foci": [{"type": "livekit","livekit_service_url": "https://jwt.HOMESERVERNAME.TLD"}]
was added and specifies the configuration for the MatrixRTC backend. It indicates that the type of RTC backend being used is livekit
, and it provides the URL for the LiveKit JWT service (https://jwt.HOMESERVERNAME.TLD
). This URL is where the client will obtain JWT tokens necessary for session authorization, enabling secure and authenticated communication through the LiveKit SFU.
In this example, we are introducing the first of two new subdomains: the subdomain for the JWT service (jwt.HOMESERVERNAME.TLD). This subdomain is essential for the proper functioning of the MatrixRTC backend. We will add the remaining subdomain further down in the implementation guide to ensure all necessary services are correctly configured.
Note: Make sure this file is served with the correct MIME type (application/json) and appropriate CORS headers, otherwise the JWT token cannot be issued correctly.

# Example configuration (Apache)
<Location "/.well-known/matrix/client">
    Header set Access-Control-Allow-Origin "*"
    Header set Content-Type "application/json"
</Location>
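To verify that the file is actually reachable and served with the right headers, a quick check from any machine with cURL will do; HOMESERVERNAME.TLD stands in for your real domain here:

# Inspect the response headers and body of the client well-known file
curl -i https://HOMESERVERNAME.TLD/.well-known/matrix/client
# Expect Content-Type: application/json, Access-Control-Allow-Origin: *
# and the JSON document shown above in the body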
Port Forwarding
To ensure the new MatrixRTC setup runs smoothly, it’s essential to configure port forwarding on your router or firewall. This allows external traffic to reach your Synology NAS, enabling secure communication for the various services involved.
First, we need to expose port 443. This port is used for HTTPS traffic, which ensures that data transmitted between the client and the Matrix LiveKit JWT auth service is encrypted and secure. HTTPS is crucial for protecting sensitive information and maintaining privacy. Since we expect that a homeserver is already running on the server, port 443 should already be open.
Next, we need to expose TCP port 7881. LiveKit uses this port for WebRTC connections over TCP, which act as a fallback when a client cannot establish a UDP connection (for example behind restrictive firewalls). It corresponds to the rtc.tcp_port setting in the LiveKit configuration further below, so real-time media can still flow when UDP is blocked.
Additionally, we need to expose a range of UDP ports from 50000 to 50200. These ports are used for WebRTC traffic, which includes audio and video data. WebRTC relies on UDP (User Datagram Protocol) for fast and efficient transmission of media streams. Additionally, TURN (Traversal Using Relays around NAT) servers may use these ports to help clients connect through NAT (Network Address Translation) and firewalls. Ensuring these ports are open allows for seamless media streaming and communication.
The LiveKit SFU also includes an embedded TURN server, which simplifies the setup by eliminating the need for a separate TURN server like Coturn. For the TURN server component, we need to expose port 5349 for TLS traffic and port 3478 for UDP traffic. These ports are essential for TURN server operations, which facilitate media relay when direct peer-to-peer connections are not possible.
By configuring port forwarding for ports 443, 7881, 5349, 3478, and 50000-50200, you enable secure and efficient communication for the MatrixRTC setup. This ensures that all necessary traffic can reach your Synology NAS, allowing the various components to function correctly.
An illustration of the traffic we need to route through can be found here.
Reverse Proxy Configuration and SSL Certificates
To set up the MatrixRTC tech stack on your Synology NAS, we need to configure two new subdomains and set up the reverse proxy using Synology’s built-in reverse proxy solution. Additionally, we’ll obtain SSL certificates via Let’s Encrypt to ensure secure communication.
First, we need to create two new subdomains: sfu.HOMESERVERNAME.TLD for the LiveKit SFU and jwt.HOMESERVERNAME.TLD for the JWT token issuer. These subdomains should be routed to your Synology NAS using CNAME records. This setup ensures that traffic intended for these services is correctly directed to your NAS.
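How exactly these records are created depends on your DNS provider. As a rough sketch, assuming your NAS is already reachable under HOMESERVERNAME.TLD (or a DynDNS hostname of your choice), the zone entries could look something like this:

sfu.HOMESERVERNAME.TLD.  IN  CNAME  HOMESERVERNAME.TLD.
jwt.HOMESERVERNAME.TLD.  IN  CNAME  HOMESERVERNAME.TLD.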
Once the subdomains are created, we need to configure the reverse proxy on your Synology NAS. Unlike a typical Nginx setup where you can use a single subdomain to address both the SFU and JWT services, Synology’s reverse proxy requires a 1-to-1 relationship. Therefore, we need to create two separate reverse proxy entries.
Here is an example of how the reverse proxy configuration would look in a fully fledged Nginx web server:
server {
    ...
    location ^~ /livekit/jwt/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # JWT Service running at port 8080
        proxy_pass http://localhost:8080/;
    }

    location ^~ /livekit/sfu/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_send_timeout 120;
        proxy_read_timeout 120;
        proxy_buffering off;

        proxy_set_header Accept-Encoding gzip;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # LiveKit SFU websocket connection running at port 7880
        proxy_pass http://localhost:7880/;
    }
}
However, since Synology’s capabilities are limited, we need to create two separate reverse proxy entries:
- JWT Service Reverse Proxy:
  - Source: https://jwt.HOMESERVERNAME.TLD
  - Destination: http://localhost:8080
- SFU Service Reverse Proxy:
  - Source: https://sfu.HOMESERVERNAME.TLD
  - Destination: http://localhost:7880
To set up the reverse proxy on Synology NAS:
- Access the Control Panel in your Synology DSM interface.
- Navigate to the „Application Portal“ section.
- Click on „Reverse Proxy“ and then „Create“ to create a new entry.
- Enter the details for the JWT service and SFU service as described above.
An in-depth explanation and visualisation of Synology’s reverse proxy can be found here.
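One detail worth pointing out: the SFU entry carries a WebSocket connection. In the Nginx example above this is handled by the Upgrade and Connection headers; on Synology you can typically achieve the same by editing the SFU reverse proxy entry, opening the „Custom Header“ tab and applying the WebSocket preset (naming may differ slightly between DSM versions), which adds headers roughly equivalent to:

# Assumed equivalent of DSM's WebSocket custom-header preset
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;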
Next, we need to obtain SSL certificates for these subdomains using Let’s Encrypt. This ensures that all communication is encrypted and secure. Follow these steps to obtain the certificates:
- Go to DSM Control Panel > Security > Certificate.
- Click „Add“ and select „Add a new certificate.“
- Choose „Get a certificate from Let’s Encrypt“ and click „Next.“
- Enter the domain name (e.g., jwt.HOMESERVERNAME.TLD), your email address, and any additional domain names if needed.
- Click „Apply“ to obtain and install the certificate.
Repeat the process for the sfu.HOMESERVERNAME.TLD subdomain.
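If you want to double-check that the new certificates are actually being served, a quick TLS handshake test from any machine with OpenSSL installed shows which certificate a subdomain presents:

# Show the certificate presented for the SFU subdomain
openssl s_client -connect sfu.HOMESERVERNAME.TLD:443 -servername sfu.HOMESERVERNAME.TLD </dev/null 2>/dev/null | openssl x509 -noout -subject -dates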
By setting up the subdomains, configuring the reverse proxy, and obtaining SSL certificates, you ensure that your MatrixRTC setup on Synology NAS is secure and properly routed. This configuration allows the LiveKit SFU and JWT services to function correctly, providing a robust and scalable real-time communication environment.
Container Manager Setup
To set up the container manager for deploying the MatrixRTC tech stack on your Synology NAS, follow these steps:
First, define your Docker destination by creating a location path. Open your Synology File Station and navigate to the desired volume. Create the following directory structure:
/volume1/docker/matrix/element-call
Within this destination, add a folder named config, and within this folder, create another folder named livekit. The path should eventually look like this:

/volume1/docker/matrix/element-call/config/livekit
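If you prefer the shell over File Station, the same directory structure can also be created in a single command over SSH (assuming your Docker share lives on volume1 as above):

mkdir -p /volume1/docker/matrix/element-call/config/livekit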
Next, create a new file named livekit.yaml within the livekit folder. This file will serve as the configuration file for your LiveKit SFU. Open a text editor and add the following entries to the file:
port: 7880
bind_addresses:
  - ""
rtc:
  tcp_port: 7881
  port_range_start: 50000
  port_range_end: 50200
  use_external_ip: true
turn:
  enabled: true
  domain: sfu.HOMESERVERNAME.TLD
  cert_file: ""
  key_file: ""
  tls_port: 5349
  udp_port: 3478
  external_tls: true
keys:
  LIVEKIT_KEY: LIVEKIT_SECRET # Put in two random values each
logging:
  level: info
Save the file and place it into the livekit folder you created. The final path for the configuration file should be:

/volume1/docker/matrix/element-call/config/livekit/livekit.yaml
This configuration file specifies the ports and settings necessary for the LiveKit SFU to operate correctly. The port and rtc sections define the ports used for WebSocket and WebRTC traffic, while the turn section configures the TURN server settings. The keys section requires you to generate two random values for LIVEKIT_KEY and LIVEKIT_SECRET, which are used for authentication.
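The two values do not need to follow any particular format; sufficiently long random strings are fine. One way to generate them, assuming OpenSSL is available on your machine (the LiveKit server image also ships a generate-keys helper you could use instead):

# Run twice: once for LIVEKIT_KEY, once for LIVEKIT_SECRET
openssl rand -hex 32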
By following these steps, you will have set up the necessary directory structure and configuration file for deploying the LiveKit SFU on your Synology NAS.
To create a new project within Container Manager for deploying the MatrixRTC tech stack, start Container Manager, navigate to Projects and create a new project. Here you can define the path we just created for our Docker environment:

/volume1/docker/matrix/element-call

Now we need to create a new compose.yaml. This file provides instructions on how to build and configure your Docker containers. You can use the following compose.yaml for your setup. Please adjust the parameters according to your specific setup:
services:
  livekit:
    image: livekit/livekit-server:latest
    container_name: Livekit
    command: --config /etc/livekit.yaml
    restart: unless-stopped
    volumes:
      - /volume1/docker/matrix/element-call/config/livekit/livekit.yaml:/etc/livekit.yaml
    ports:
      - "7880:7880" # LiveKit API (via Reverse Proxy)
      - "50000-50200:50000-50200/udp" # WebRTC UDP Ports
  jwt-service:
    image: ghcr.io/element-hq/lk-jwt-service:latest-ci
    container_name: JWT
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      - LIVEKIT_URL=wss://sfu.HOMESERVERNAME.TLD
      - LIVEKIT_SECRET=XXXXXXX # Put in the livekit secret defined within the livekit.yaml
      - LIVEKIT_KEY=XXXXXX # Put in the livekit key defined within the livekit.yaml
      - LIVEKIT_LOCAL_HOMESERVERS=HOMESERVERNAME.TLD # Set this to your homeserver name
Save the compose.yaml file and proceed with the deployment.
By following these steps, you will have set up the necessary Docker containers for the LiveKit SFU and JWT service on your Synology NAS. This configuration ensures that the services are correctly routed and can communicate securely with the MatrixRTC backend.
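Before testing, it is worth confirming that both containers actually came up cleanly. You can check this in the Container Manager UI, or over SSH using the container names from the compose.yaml above:

# Inspect the startup logs of both containers
sudo docker logs Livekit
sudo docker logs JWT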
Testing the Setup
Once your Docker containers start without any errors, you can proceed to test if everything is working correctly. To do so, you can use the LiveKit Connection Tester. Enter your SFU subdomain in the following format: wss://sfu.HOMESERVERNAME.TLD.
There are different ways to create a room token to test the setup. The room token is essentially a JWT token that will be issued when a new call is initiated, serving as an authentication mechanism. Here are some methods to obtain a room token:
Using LiveKit CLI:
Install the LiveKit CLI from GitHub and use the following command to create a token:
lk create-token --api-key my_key --api-secret my_secret --create --join --list --name testuser -i testuser -r testroom
Using cURL:
You can also create JWT tokens via your JWT service by making a POST request with cURL:

curl -X POST -d '{"room":"!ROOM:HOMESERVERNAME.TLD","openid_token":{"access_token":"XXXXXX","expires_in":3600, "matrix_server_name":"HOMESERVERNAME.TLD","token_type":"Bearer"},"device_id":"XXXXXX"}' https://jwt.HOMESERVERNAME.TLD/sfu/get
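The openid_token fields in this request come from your homeserver: you can obtain them with the standard OpenID endpoint of the client-server API. A sketch, assuming the base URL from the .well-known above, with ACCESS_TOKEN being a valid access token and @USER:HOMESERVERNAME.TLD your Matrix ID:

# Request an OpenID token to fill into the JWT request above
curl -X POST -H "Authorization: Bearer ACCESS_TOKEN" -d '{}' "https://m.HOMESERVERNAME.TLD/_matrix/client/v3/user/@USER:HOMESERVERNAME.TLD/openid/request_token"
# Returns access_token, expires_in, matrix_server_name and token_type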
Using Element Web Client:
- Start a call on the Element web client with another client.
- Open the developer console of your web browser and filter the network tab for /sfu/get (or your specific URL).
- You’ll find a POST call that sends something and returns a JWT token in JSON form. This JWT token can be used for the connection test.
By following these steps, you can verify that your MatrixRTC setup is functioning correctly and that clients can connect to your LiveKit SFU.
Final Thoughts
As you can see, the deployment is straightforward but needs to be thought out quite well to avoid certain mistakes I made during my exploration. That being said, I want to emphasize that my own deployment and this guide wouldn’t have been possible without the brilliant people in the Matrix room #webrtc:matrix.org who greatly supported me with my own setup. I can only highly recommend reaching out and asking for help if something does not work as planned or described above. If you want to find out more about MatrixRTC, I highly recommend checking out this presentation held by Timo Kandra (element.io) on YouTube. You can also reach out to me at https://ztfr.net/#/@dome:ztfr.de