Synapse 0.19.3 released

Hi all,

We’ve released Synapse 0.19.3-rc2 as 0.19.3 with no changes. This is a slightly unusual release: 0.19.3-rc2 dates from March 13th, and a lot has landed on the develop branch since then, but that work will be released as 0.20.0 once it’s ready. Instead, 0.19.3 contains a set of interim performance and bug fixes; the only new feature is a set of admin APIs kindly contributed by @morteza-araby.

The changelog follows – please upgrade from https://github.com/matrix-org/synapse or your OS packages as normal :)

Changes in synapse v0.19.3 (2017-03-20)

No changes since v0.19.3-rc2

Changes in synapse v0.19.3-rc2 (2017-03-13)

Bug fixes:

  • Fix bug in handling of incoming device list updates over federation.

Changes in synapse v0.19.3-rc1 (2017-03-08)

Bug fixes:

  • Fix synapse_port_db failure. Thanks to @Pneumaticat! (PR #1904)
  • Fix caching to not cache error responses (PR #1913)
  • Fix APIs to make kick & ban reasons work (PR #1917)
  • Fix bugs in the /keys/changes api (PR #1921)
  • Fix bug where users couldn’t forget rooms they were banned from (PR #1922)
  • Fix issue with long language values in pushers API (PR #1925)
  • Fix a race in transaction queue (PR #1930)
  • Fix dynamic thumbnailing to preserve aspect ratio. Thanks to @jkolo! (PR #1945)
  • Fix device list update to not constantly resync (PR #1964)
  • Fix potential for huge memory usage when getting devices that have changed (PR #1969)

Dendrite receives its first messages!!!

Hi all,

We hit a major milestone today on Dendrite, our next-generation golang homeserver: Dendrite received its first messages!!

Before you get too excited, please understand that Dendrite is still a pre-alpha work in progress – whilst we successfully created some rooms on an instance and sent a bunch of messages into them via the Client-Server API, most other functionality (e.g. receiving messages via /sync, logging in, registering, federation etc) does not exist yet. It cannot yet be used as a homeserver. However, this is still a huge step in the right direction, as it demonstrates that the core DAG functionality of Matrix is intact, and that the beginnings of a usable Client-Server API are hooked up.

The architecture of Dendrite is genuinely interesting – please check out the wiring diagram if you haven’t already. The idea is that the server is broken down into a series of components which process streams of data stored in Kafka-style append-only logs. Each component scales horizontally (you can have as many as required to handle load), which is an enormous win over Synapse’s monolithic design. The components are also decoupled from one another by the logs, letting them run on entirely different machines as required. Please note that whilst the initial implementation uses Kafka for convenience, the actual append-only log mechanism is abstracted away – in future we expect to see configurations of Dendrite which operate entirely from within a single Go executable (using Go channels as the log mechanism), as well as alternatives to Kafka for distributed deployments.
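To make this concrete, here is a minimal Go sketch of what such a log abstraction might look like. The Log interface and channelLog type are purely illustrative names of ours, not Dendrite’s actual API; they just show how the same component code could run against Kafka or an in-process channel.

```go
package main

import "fmt"

// Entry is one item in an append-only log (e.g. a serialised room event).
type Entry []byte

// Log abstracts the append-only log so that components don't care whether
// the backend is Kafka, a database table, or an in-process channel.
type Log interface {
	// Append adds an entry to the end of the log.
	Append(e Entry) error
	// Consume returns a channel yielding entries in order from the given offset.
	Consume(offset int) (<-chan Entry, error)
}

// channelLog is a single-process implementation backed by a Go channel,
// for configurations where everything runs inside one executable.
type channelLog struct {
	ch chan Entry
}

var _ Log = (*channelLog)(nil) // compile-time check that the interface is satisfied

func newChannelLog(buffer int) *channelLog {
	return &channelLog{ch: make(chan Entry, buffer)}
}

func (l *channelLog) Append(e Entry) error {
	l.ch <- e
	return nil
}

func (l *channelLog) Consume(offset int) (<-chan Entry, error) {
	// A real log would replay from the requested offset; a plain channel
	// can only hand out entries it hasn't delivered yet.
	return l.ch, nil
}

func main() {
	log := newChannelLog(16)
	go func() { log.Append(Entry(`{"type":"m.room.message"}`)) }()
	events, _ := log.Consume(0)
	fmt.Printf("consumed: %s\n", <-events)
}
```

A Kafka-backed implementation of the same interface would cover the distributed case, which is exactly the point of keeping the log mechanism abstract.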

Two components have taken shape so far. The first is the central roomserver service, which (as the name suggests) is responsible for maintaining the state and integrity of one or more rooms: authorising events into the room DAG, storing them in postgres, tracking the auth chain of events where needed, and so on. Much of the core Matrix DAG logic of the roomserver is provided by gomatrixserverlib. The roomserver receives events sent by users via the ‘client room send’ component (and, once it exists, the ‘federation backfill’ component). The ‘client room send’ component (and in future also ‘client sync’) is provided by the clientapi service – which is what, as of today, is successfully creating rooms and events and relaying them to the roomserver!
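To give a feel for the shape of the roomserver, here is a hypothetical Go sketch of its consume-authorise-persist loop. It is illustrative only: the real component relies on gomatrixserverlib for the auth rules and on postgres for storage, neither of which appears here.

```go
package main

import "fmt"

// inputEvent is a placeholder for an event arriving on the roomserver's
// input log from the 'client room send' component.
type inputEvent struct {
	EventID string
	RoomID  string
	JSON    []byte
}

// roomserver sketches the consume-check-persist loop; authorise and persist
// stand in for the gomatrixserverlib auth checks and the postgres storage layer.
type roomserver struct {
	authorise func(inputEvent) error
	persist   func(inputEvent) error
}

func (rs *roomserver) run(input <-chan inputEvent) {
	for ev := range input {
		if err := rs.authorise(ev); err != nil {
			fmt.Printf("rejecting %s: %v\n", ev.EventID, err)
			continue // rejected events never enter the room DAG
		}
		if err := rs.persist(ev); err != nil {
			// A real roomserver must not lose events: it would retry or halt
			// so that its log offset only advances once the write succeeds.
			fmt.Printf("failed to persist %s: %v\n", ev.EventID, err)
		}
	}
}

func main() {
	input := make(chan inputEvent, 1)
	rs := &roomserver{
		authorise: func(ev inputEvent) error { return nil },
		persist:   func(ev inputEvent) error { fmt.Println("stored", ev.EventID); return nil },
	}
	input <- inputEvent{EventID: "$1:example.org", RoomID: "!room:example.org"}
	close(input)
	rs.run(input)
}
```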

The actual events we’ve been testing with are the history of the Matrix Core room: around 10k events. Right now the roomserver (and the postgres DB that backs it), rather than the clientapi, is the main bottleneck in the pipeline, so it’s been interesting to see how rapidly the roomserver can consume its log of events. As of today’s benchmark, on a generic dev workstation and an entirely unoptimised roomserver (i.e. no caching whatsoever) running on a single core, we’re seeing it ingest the room history at over 350 events per second. The vast majority of this work goes into encoding/decoding JSON or waiting for postgres: with a simple event cache to avoid repeatedly hitting the DB for every auth and state event, we expect this to increase significantly. And as we increase the number of cores, Kafka partitions and roomserver instances, it should scale fairly arbitrarily(!)
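For illustration, the kind of simple event cache mentioned above might look something like the following Go sketch; the types and names are hypothetical rather than the roomserver’s real code. The idea is simply to trade a little memory for skipping the postgres round-trip and JSON decode on every auth/state lookup.

```go
package main

import (
	"fmt"
	"sync"
)

// Event is a placeholder for a parsed room event.
type Event struct {
	EventID string
	JSON    []byte
}

// eventCache memoises events by ID so that auth and state checks don't have
// to re-fetch and re-decode the same JSON from the database over and over.
// (A production cache would also bound its size, e.g. with an LRU policy.)
type eventCache struct {
	mu     sync.RWMutex
	events map[string]*Event
	fetch  func(eventID string) (*Event, error) // falls through to postgres
}

func newEventCache(fetch func(string) (*Event, error)) *eventCache {
	return &eventCache{events: make(map[string]*Event), fetch: fetch}
}

func (c *eventCache) Get(eventID string) (*Event, error) {
	c.mu.RLock()
	ev, ok := c.events[eventID]
	c.mu.RUnlock()
	if ok {
		return ev, nil // cache hit: no DB round-trip, no JSON decode
	}
	ev, err := c.fetch(eventID)
	if err != nil {
		return nil, err
	}
	c.mu.Lock()
	c.events[eventID] = ev
	c.mu.Unlock()
	return ev, nil
}

func main() {
	cache := newEventCache(func(id string) (*Event, error) {
		fmt.Println("fetching", id, "from the database")
		return &Event{EventID: id}, nil
	})
	cache.Get("$abc:example.org") // hits the DB
	cache.Get("$abc:example.org") // served from the cache
}
```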

For context, the main synapse process for Matrix.org currently maxes out at persisting between 15 and 20 events per second (although it is also spending a bunch of time relaying events to the various worker processes, among other miscellanies). As such, an initial benchmark of 350 events/s for Dendrite really does look incredibly promising.

You may be wondering where this leaves Synapse. Well, a major driver for implementing Dendrite has been to support the growth of the main matrix.org server, which currently persists around 10 events/s (whilst emitting around 1500 events/s). We have exhausted most of the low-hanging fruit for optimising Synapse, and have got to the point where the architectural fixes required are of similar shape and size to the work going into Dendrite. So, whilst Synapse is going to be around for a while yet, we’re putting the majority of our long-term effort into Dendrite, with a distinct degree of urgency as we race against the ever-increasing traffic levels on the Matrix.org server!

Finally, you may be wondering what happened to Dendron, our original experiment in Golang servers. Well: Dendron was an attempt at a strangler-pattern rewrite of Synapse, acting as a shim in front of Synapse which could gradually swap out endpoints with their golang implementations. In practice, the operational complexity it introduced, the relatively tight coupling to Synapse’s existing architecture, storage & schema, and the amount of room for improvement we still had (at the time) within Synapse itself meant that it was far from a clear win – and it effectively served as an excuse to learn Go. As such, we finally formally killed it off last week – Matrix.org is now running behind a normal haproxy, and Dendron is dead. Meanwhile, Dendrite (aka Dendron done Right ;) is very much alive, progressing fast, and free from the shackles of Synapse.

We’ll try to keep the blog updated with progress on Dendrite as it continues to grow!

How do I bridge thee? Let me count the ways…

Bridges come in many flavours, and we need consistent terminology within the Matrix community to ensure everyone (users, developers, core team) is on the same page. This post is primarily intended for bridge developers to refer to when building bridges.

The most recent version of this document is here (source) but we’re also posting it as a blog post for visibility.

Types of rooms

Portal rooms

Bridges can register themselves as controlling chunks of the room alias namespace, letting Matrix users join remote rooms transparently if they /join #freenode_#wherever:matrix.org or similar. The resulting Matrix room is typically automatically bridged to the single target remote room. Access control for Matrix users is typically managed by the remote network’s side of the room. This is called a portal room, and is useful for jumping into remote rooms without any configuration needed whatsoever – using Matrix as a ‘bouncer’ for the remote network.
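As a rough illustration of the mechanics, a bridge claiming a portal namespace answers the homeserver’s ‘does this alias exist?’ queries and provisions the Matrix room on demand. The Go sketch below is hypothetical: the handler, port and simplified path are ours, and the exact request/response format is defined by the application service API.

```go
package main

import (
	"fmt"
	"net/http"
	"regexp"
	"strings"
)

// Portal-style bridges claim a slice of the alias namespace in their
// appservice registration (here, anything matching #freenode_*), and the
// homeserver asks the bridge about any unknown alias in that namespace
// when a user tries to join it.
var portalAlias = regexp.MustCompile(`^#freenode_(#.+):matrix\.org$`)

// handleRoomAliasQuery sketches the alias query the homeserver sends to the
// bridge; the path and response body are simplified for illustration.
func handleRoomAliasQuery(w http.ResponseWriter, r *http.Request) {
	alias := strings.TrimPrefix(r.URL.Path, "/rooms/")
	m := portalAlias.FindStringSubmatch(alias)
	if m == nil {
		http.NotFound(w, r) // not ours: the homeserver treats the alias as unknown
		return
	}
	remoteChannel := m[1]
	// A real bridge would now join remoteChannel on the remote network,
	// create the Matrix portal room via the client-server API, and map
	// the alias onto it before answering.
	fmt.Printf("provisioning portal room for %s\n", remoteChannel)
	w.Write([]byte("{}"))
}

func main() {
	http.HandleFunc("/rooms/", handleRoomAliasQuery)
	http.ListenAndServe(":9000", nil)
}
```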

Plumbed rooms

Alternatively, an existing Matrix room can be plumbed into one or more specific remote rooms by configuring a bridge (which can be run by anyone). For instance, #matrix:matrix.org is plumbed into #matrix on Freenode, matrixdotorg/#matrix on Slack, etc. Access control for Matrix users is necessarily managed by the Matrix side of the room. This is useful for using Matrix to link together different communities.

Migrating a room between portal & plumbed operation is currently a bit of a mess, as there’s not yet a way for users to remove portal rooms once they’re created, so you can end up with a mix of portal & plumbed users bridged into a room, which looks weird from both the Matrix and non-Matrix viewpoints. https://github.com/matrix-org/matrix-appservice-irc/issues/387 tracks this.

Types of bridges (simplest first):

Bridgebot-based bridges

The simplest way to exchange messages with a remote network is to have the bridge log into the network using one or more predefined users called bridge bots – typically called MatrixBridge or MatrixBridge[123] etc. These relay traffic on behalf of the users on the other side, but it’s a terrible experience as all the metadata about the messages and senders is lost. This is how the telematrix matrix<->telegram bridge currently works.
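A hedged sketch of the idea (the remoteSend helper and types below are made up for illustration, not telematrix’s actual code): the only identity the remote side ever sees is the shared bot, so the real sender has to be crammed into the message text.

```go
package main

import "fmt"

// MatrixMessage is a minimal view of an m.room.message event.
type MatrixMessage struct {
	Sender string // e.g. "@alice:matrix.org"
	Body   string
}

// remoteSend stands in for whatever "send a message as this account" call
// the remote network's API provides.
func remoteSend(account, text string) {
	fmt.Printf("[%s] %s\n", account, text)
}

// relayViaBridgeBot forwards a Matrix message through a single shared bot
// account. The remote network only ever sees "MatrixBridge" speaking, so
// per-user identity, avatars, presence etc are all lost.
func relayViaBridgeBot(msg MatrixMessage) {
	remoteSend("MatrixBridge", fmt.Sprintf("<%s> %s", msg.Sender, msg.Body))
}

func main() {
	relayViaBridgeBot(MatrixMessage{Sender: "@alice:matrix.org", Body: "hello from Matrix!"})
}
```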

Bot-API (aka Virtual user) based bridges

Some remote systems support the idea of injecting messages from ‘fake’ or ‘virtual’ users, which can be used to represent the Matrix-side users as unique entities in the remote network. For instance, Slack’s inbound webhooks let remote bots be created on demand, letting Matrix users be shown cosmetically correctly in the timeline as virtual users. However, the resulting virtual users aren’t real users on the remote system, so they don’t have presence or profiles and can’t be tab-completed or direct-messaged etc. They also have no way to receive typing notifs or other richer info which may not be available via bot APIs. This is how the current matrix-appservice-slack bridge works.
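As an illustration of the webhook approach, a bridge might post a Matrix user’s message into Slack with a per-message username override along the following lines. The webhook URL is a placeholder and the code is a sketch, not the matrix-appservice-slack implementation.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// webhookMessage is the payload for a Slack-style incoming webhook. The
// username/icon fields only change how the message is *displayed*; they
// don't create a real Slack user, which is exactly the limitation above.
type webhookMessage struct {
	Text     string `json:"text"`
	Username string `json:"username,omitempty"`
	IconURL  string `json:"icon_url,omitempty"`
}

// postAsVirtualUser sends a Matrix user's message into the remote room so
// that it is displayed under their name.
func postAsVirtualUser(webhookURL, matrixUser, body string) error {
	payload, err := json.Marshal(webhookMessage{
		Text:     body,
		Username: matrixUser, // cosmetic only: no presence, not tab-completable
	})
	if err != nil {
		return err
	}
	resp, err := http.Post(webhookURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("webhook returned %s", resp.Status)
	}
	return nil
}

func main() {
	_ = postAsVirtualUser("https://hooks.slack.com/services/T000/B000/XXXX",
		"@alice:matrix.org", "hello from Matrix!")
}
```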

Simple puppeted bridge

This is a richer form of bridging, where the bridge logs into the remote service as if it were a real 3rd party client for that service. As a result, the Matrix user has to already have a valid account on the remote system. In exchange, the Matrix user ‘puppets’ their remote user, such that other users on the remote system aren’t even aware they are speaking to a user via Matrix. The full semantics of the remote system are available to the bridge to expose into Matrix. However, the bridge has to handle the authentication process needed to log the user into the remote network.

This is essentially how the current matrix-appservice-irc bridge works (if you configure it to log into the remote IRC network as your ‘real’ IRC nickname). matrix-appservice-gitter is being extended to support both puppeted and bridgebot-based operation. It’s how the experimental matrix-appservice-tg bridge works.

Going forwards we’re aiming for all bridges to be at least simple puppeted, if not double-puppeted.

Double-puppeted bridge

A simple ‘puppeted bridge’ allows the Matrix user to control their account on the remote network. However, ideally this puppeting should work in both directions, so that if the user logs into (say) their native Telegram client and starts conversations, sends messages etc, these are reflected back into Matrix as if the user had done them there. This requires the bridge to be able to puppet the Matrix side of the bridge on behalf of the user.

This is the holy grail of bridging; matrix-puppet-bridge is a community project that tries to facilitate development of double-puppeted bridges, having done so for several networks. The main obstacle is working out an elegant way of having the bridge authenticate with Matrix as the Matrix user (which requires some kind of scoped access_token delegation).
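The Matrix-side half of double puppeting boils down to sending events as the user via the client-server API, using an access token the user has handed over to the bridge (pending something better like scoped delegation). A rough Go sketch, where the function and transaction-ID scheme are ours purely for illustration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"time"
)

// reflectIntoMatrix mirrors a message the user sent on the remote network
// back into the Matrix room *as that user*, using their own access token.
func reflectIntoMatrix(homeserver, accessToken, roomID, body string) error {
	content, err := json.Marshal(map[string]string{
		"msgtype": "m.text",
		"body":    body,
	})
	if err != nil {
		return err
	}
	txnID := fmt.Sprintf("bridge-%d", time.Now().UnixNano()) // idempotency key for retries
	endpoint := fmt.Sprintf("%s/_matrix/client/r0/rooms/%s/send/m.room.message/%s?access_token=%s",
		homeserver, url.PathEscape(roomID), txnID, url.QueryEscape(accessToken))
	req, err := http.NewRequest(http.MethodPut, endpoint, bytes.NewReader(content))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("send as user failed: %s", resp.Status)
	}
	return nil
}

func main() {
	_ = reflectIntoMatrix("https://matrix.org", "THE_USERS_ACCESS_TOKEN",
		"!someroom:matrix.org", "sent from my native client, mirrored into Matrix")
}
```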

Server-to-server bridging

Some remote protocols (IRC, XMPP, SIP, SMTP, NNTP, GnuSocial etc) support federation – either open or closed. The most elegant way of bridging to these protocols would be to have the bridge participate in the federation as a server, directly bridging the entire namespace into Matrix.

We’re not aware of anyone who’s done this yet.

Sidecar bridge

Finally: the types of bridging described above assume that you are synchronising the conversation history of the remote system into Matrix, so it may be decentralised and exposed to multiple users within the wider Matrix network.

This can cause problems where the remote system may have arbitrarily complicated permissions (ACLs) controlling access to the history, which then need to be correctly synchronised with Matrix’s ACL model without introducing security issues such as races. We already see some problems with this on the IRC bridge, where history visibility for +i and +k channels has to be carefully synchronised with the Matrix rooms.
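By way of illustration only (this is not the IRC bridge’s actual logic), the kind of mapping that has to be kept in sync looks roughly like this:

```go
package main

import "fmt"

// matrixRoomPolicy captures the two pieces of Matrix room state the bridge
// has to keep in step with a channel's modes: the join rule and the
// history visibility.
type matrixRoomPolicy struct {
	JoinRule          string // "public" or "invite"
	HistoryVisibility string // e.g. "world_readable", "shared", "joined"
}

// policyForChannelModes is an illustrative mapping: invite-only (+i) and
// keyed (+k) channels must not leak membership or history to Matrix users
// who would not have been able to join the channel on IRC.
func policyForChannelModes(inviteOnly, hasKey bool) matrixRoomPolicy {
	if inviteOnly || hasKey {
		return matrixRoomPolicy{
			JoinRule:          "invite", // joins are mediated, mirroring IRC access control
			HistoryVisibility: "joined", // only current members see new messages
		}
	}
	return matrixRoomPolicy{
		JoinRule:          "public",
		HistoryVisibility: "shared",
	}
}

func main() {
	fmt.Printf("+i channel maps to %+v\n", policyForChannelModes(true, false))
}
```

The hard part, of course, is doing this without races when modes change while users are joining or history is being requested.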

You can also hit problems with other network-specific features not yet having equivalent representation in the Matrix protocol (e.g. ephemeral messages, or op-only messages – although arguably that’s a type of ACL).

One solution could be to support an entirely different architecture of bridging, where the Matrix client-server API is mapped directly to the remote service, meaning that ACL decisions are delegated to the remote service, and conversations are not exposed into the wider Matrix. This is effectively using the bridge purely as a 3rd party client for the network (similar to Bitlbee). The bridge is only available to a single user, and conversations cannot be shared with other Matrix users as they aren’t actually Matrix rooms. (Another solution could be to use Active Policy Servers at last as a way of centralising and delegating ACLs for a room)

This is essentially an entirely different product to the rest of Matrix, and whilst it could be a solution for some particularly painful ACL problems, we’re focusing on non-sidecar bridges for now.

Load problems on the Matrix.org homeserver

Hi folks,

Since FOSDEM we’ve seen even more interest in Matrix than normal, and we’ve been having some problems getting the Matrix.org homeserver to keep up with demand.  This has resulted in performance being slightly slower than normal at peak times, but the main impact has been the additional traffic exacerbating outages on the homeserver – either by revealing new failure modes, or making it harder to recover rapidly after something goes wrong.

Specifically: on Friday afternoon we had a service disruption caused by someone sending an unusual event into Matrix HQ.  It turns out that clients based on matrix-android-sdk and matrix-ios-sdk (e.g. Riot/Android and iOS) handled this naively by simply resyncing the room state… which has been fine in the past, but not when several hundred clients are actively syncing the room. The result was a thundering-herd effect which overloaded the server for ~10 mins or so whilst they all resynced the room (which, nowadays, involves calculating and sending several MB of JSON state to each client).  The traffic load was then high enough that it took a further 10-20 minutes for the server to fully catch up and recover after the herd had dissipated.  We then had a repeat performance of the same failure mode on Monday morning.

Similarly, we had disruption last night after a user who hadn’t used the service for ages logged on for the first time and rapidly caught up on a few rooms which literally had *millions* of unread messages in them.  Generally this would be okay, but the combination of a loaded DB and the sheer number of notifications being deleted ended up with 4 long-running DB deletes running in parallel.  This seems to have caused postgres to lock the event_push_actions table more aggressively than we’d expect, blocking other queries which were trying to access it… causing most requests to block until the deletes were over.  At current traffic volumes this meant that the main synapse process tried to serve thousands of simultaneous requests as they stacked up, ran out of filehandles within about 10 minutes, and wedged the whole synapse solid before the DB could unblock.  Irritatingly, it turns out our end-to-end monitoring has a bug where it can itself crash on receiving a 500 from synapse, so despite having PagerDuty all set up and running (and having been receiving pages for traffic delays over the last few weeks), we didn’t get paged when we got actual failed traffic rather than merely slow traffic, which delayed resolving the issue.  Finally, whilst rolling out a fix this afternoon, we again hit issues with the traffic load causing more problems than we were expecting, making a routine redeploy distinctly more disruptive.

So, what are we doing about this?

  1. Fix the root causes:
    • The ‘android/iOS thundering herd’ bug is being addressed on both the android/iOS side (fixing the naive behaviour) and the server side.  A temporary mitigation is now in place which moves the server-side code to worker processes, so that in the worst case it can’t take out the main synapse process and can scale better.
    • The ‘event_push_actions table is inefficient’ bug had already been fixed – so this was a matter of rushing through the hotfix to matrix.org before we saw a recurrence.
  2. Move to faster hardware.  Our current DB master is a “fast when we bought it 5 years ago” machine whose IO is simply starting to saturate (6x 300GB 10krpm disks in RAID5, fwiw), which is maxing out at around 500 IOPS and 20 MB/s of random access, and acting as a *very* hard limit on current synapse performance.  We’re currently in the process of evaluating SSD-backed IO for the DB (in fact, we’re already running a DB slave), and assuming this tests out okay we’re hoping to migrate next week, which should give us a 10x-20x speed-up on disk IO and buy considerable headroom.  Watch this space for details.
  3. Make synapse faster.  We’re continuing to plug away at optimisations (e.g. stuff like this), but these are reaching the point of diminishing returns, especially relative to the win from faster hardware.
  4. Fix the end-to-end monitoring.  This already happened.
  5. Load-test before deploying.  This is hard, as you really need to test against precisely the same traffic profile as live traffic, and that’s hard to simulate.  We’re thinking about ways of fixing this, but the best solution is probably going to be clustering and being able to do incremental redeploys to gradually test new changes.  On which note:
  6. Fix synapse’s architectural deficiencies to support clustering, allowing for rolling zero-downtime redeploys, and better horizontal scalability to handle traffic spikes like this.  We’re choosing not to fix this in synapse, but we are currently in full swing implementing dendrite as a next-generation homeserver in Golang, architected from the outset for clustering and horizontal scalability.  N.B. most of the exciting stuff is happening on feature branches and gomatrixserverlib atm. Also, we’re deliberately taking the time to try to get it right this time, unlike bits of synapse which were something of a rush job.  It’ll be a few weeks before dendrite is functional enough to even send a message (let alone finish the implementation), but hopefully faster hardware will give the synapse deployment on matrix.org enough headroom for us to get dendrite ready to take over when the time comes!

The good news of course is that you can run your own synapse today to avoid getting caught up in this operational fun & games, and unless you’re planning to put tens of thousands of daily active users on the server you should be okay!

Meanwhile, please accept our apologies for the instability, and be assured that we’re doing everything we can to get out of this turbulence as rapidly as possible.

Matthew