Synapse 0.19.3 released

Hi all,

We’ve released Synapse 0.19.3-rc2 as 0.19.3 with no changes. This is a slightly unusual release, as 0.19.3-rc2 dates from March 13th and a lot of stuff has landed on the develop branch since then – however, we’ll be releasing that as 0.20.0 once it’s ready. Instead, 0.19.3 is an interim release containing a set of performance improvements and bug fixes; the only new feature is a set of admin APIs kindly contributed by @morteza-araby.

The changelog follows – please upgrade from https://github.com/matrix-org/synapse or your OS packages as normal :)

Changes in synapse v0.19.3 (2017-03-20)

No changes since v0.19.3-rc2

Changes in synapse v0.19.3-rc2 (2017-03-13)

Bug fixes:

  • Fix bug in handling of incoming device list updates over federation.

Changes in synapse v0.19.3-rc1 (2017-03-08)

Features:

  • A set of admin APIs, kindly contributed by @morteza-araby.

Changes:

Bug fixes:

  • Fix synapse_port_db failure. Thanks to @Pneumaticat! (PR #1904)
  • Fix caching to not cache error responses (PR #1913)
  • Fix APIs to make kick & ban reasons work (PR #1917)
  • Fix bugs in the /keys/changes api (PR #1921)
  • Fix bug where users couldn’t forget rooms they were banned from (PR #1922)
  • Fix issue with long language values in pushers API (PR #1925)
  • Fix a race in transaction queue (PR #1930)
  • Fix dynamic thumbnailing to preserve aspect ratio. Thanks to @jkolo! (PR #1945)
  • Fix device list update to not constantly resync (PR #1964)
  • Fix potential for huge memory usage when getting devices that have changed (PR #1969)

An Adventure in IRC-Land

Hi everyone. I’m Kegan, one of the core developers at matrix.org. This is the first in a series on the matrix.org IRC bridge. The aim of this series is to try to give a behind the scenes look at how the IRC bridge works, what kinds of problems we encountered, and how we plan to scale in the future. This post looks at how the IRC bridge actually works.

Firstly, what is “bridging”? The simple answer is that it is a program which maps between different messaging protocols so that users on different protocols can communicate with each other. Some protocols may have features which are not supported in the other (typing notifications in Matrix, DCC – direct file transfers – in IRC). This means that bridging will always be “inferior” to just using the respective protocol. That being said, where there is common ground a bridge can work well; all messaging protocols support sending and receiving text messages for example. As we’ll see however, the devil is in the detail…

A lot of existing IRC bridges for different protocols share one thing in common: they use a single global bot to bridge traffic. This bot listens to all messages from IRC, and sends them to the other network. The bot also listens for messages from users on the other network, and sends messages on their behalf to IRC. This is a lot easier than having to maintain dedicated TCP connections for each user. However, it isn’t a great experience for IRC users as they:

  • Don’t know who is reading messages on a channel as there is just 1 bot in the membership list.
  • Cannot PM users on the other network.
  • Cannot kick/ban users on the other network without affecting everyone else.
  • Cannot bing/mention users on the other network easily (tab completion).

We made the decision very early on that we would keep dedicated TCP connections for each Matrix user. This means every Matrix user has their own tiny IRC client. This has its own problems:

  • It involves multiple connections to the IRCd so you need special permission to set up an i:line.
  • You need to be able to support identification of individual users (via ident or unique IPv6 addresses).
  • With all these connections to the same IRC channels, you need to have some way to identify which incoming messages have already been handled and which have not.
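
To make this architecture a bit more concrete, here is a minimal sketch (in TypeScript, not the bridge’s actual code) of a per-user connection pool: one IRC client per Matrix user, keyed by Matrix user ID. The IrcClient interface, the nick-derivation rule and the injected factory are all illustrative assumptions.

```typescript
// Illustrative sketch only: one dedicated IRC connection per Matrix user,
// keyed by Matrix user ID, instead of one shared bridge bot.
interface IrcClient {
  connect(): Promise<void>;
  say(channel: string, text: string): void;
  disconnect(): void;
}

class ConnectionPool {
  // Matrix user ID -> that user's own IRC connection
  private clients = new Map<string, IrcClient>();

  // The factory is injected so the sketch stays library-agnostic; a real
  // bridge would also handle ident/IPv6 identification, nick collisions and
  // reconnection here.
  constructor(private createClient: (nick: string) => IrcClient) {}

  async getClient(matrixUserId: string): Promise<IrcClient> {
    let client = this.clients.get(matrixUserId);
    if (!client) {
      // Derive a nick from the Matrix localpart, e.g. "@alice:matrix.org" -> "alice[m]"
      const nick = matrixUserId.split(":")[0].slice(1) + "[m]";
      client = this.createClient(nick);
      await client.connect();
      this.clients.set(matrixUserId, client);
    }
    return client;
  }

  // Relay a Matrix user's message to IRC over *their own* connection.
  async relayToIrc(matrixUserId: string, channel: string, text: string) {
    const client = await this.getClient(matrixUserId);
    client.say(channel, text);
  }
}
```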

Mapping Rooms

So now that we have a way to send and receive messages, how do we map the rooms/channels between protocols? This isn’t as easy as you may think. We can have a single static one-to-one mapping:

  • All messages to #channel go to !abcdef:matrix.org.
  • All messages from !abcdef:matrix.org go to #channel.
  • All PMs between @alice:matrix.org and Bob go to !wxyz:matrix.org and the respective PM on IRC.
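
As a rough illustration of such a static mapping, the lookups boil down to two tables, one for each direction. The shape below is an assumption made for the example, not the bridge’s real configuration format; the channel and room ID are the ones from the list above.

```typescript
// Illustrative static one-to-one mapping between IRC channels and Matrix rooms.
const channelToRoom: Record<string, string> = {
  "#channel": "!abcdef:matrix.org",
};

const roomToChannel: Record<string, string> = {
  "!abcdef:matrix.org": "#channel",
};

function ircChannelForRoom(roomId: string): string | undefined {
  return roomToChannel[roomId];
}

function matrixRoomForChannel(channel: string): string | undefined {
  // IRC channel names are case-insensitive, so normalise before the lookup.
  return channelToRoom[channel.toLowerCase()];
}
```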

In order to make PMs secure, we need to limit who can access the room. This is done by making the Matrix PM room “invite-only”. This can cause problems though if the Matrix user ever leaves that room: they won’t ever be able to re-join! The IRC bridge gets around this by allowing Matrix users to replace their dedicated PM room with a new room, and by checking to make sure that the Matrix user is inside the room before sending messages.
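
A hedged sketch of that membership check might look like the following. The MatrixClient and PmStore interfaces are hypothetical stand-ins rather than the bridge’s real APIs; the point is simply that the PM room is recreated (invite-only) if the Matrix user has left it, before the IRC message is delivered.

```typescript
// Hypothetical interfaces for illustration only.
interface MatrixClient {
  isUserJoined(roomId: string, userId: string): Promise<boolean>;
  createInviteOnlyRoom(inviteeUserId: string): Promise<string>; // returns new room ID
  sendText(roomId: string, text: string): Promise<void>;
}

interface PmStore {
  getPmRoom(matrixUserId: string, ircNick: string): Promise<string | null>;
  setPmRoom(matrixUserId: string, ircNick: string, roomId: string): Promise<void>;
}

// Deliver an IRC private message into the Matrix user's dedicated PM room,
// creating a fresh invite-only room if they have left the old one.
async function deliverIrcPm(
  client: MatrixClient,
  store: PmStore,
  matrixUserId: string,
  ircNick: string,
  text: string,
): Promise<void> {
  let roomId = await store.getPmRoom(matrixUserId, ircNick);
  if (!roomId || !(await client.isUserJoined(roomId, matrixUserId))) {
    roomId = await client.createInviteOnlyRoom(matrixUserId);
    await store.setPmRoom(matrixUserId, ircNick, roomId);
  }
  await client.sendText(roomId, text);
}
```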

Then you have the problem of “ownership” of rooms. Who should be able to kick users in a bridged room? There are two main scenarios to consider:

  • The IRC channel has existed for a while and there are existing IRC channel operators.
  • The IRC channel does not exist, but there are existing Matrix moderators.

In the first case, we want to defer ownership to the channel operators. This is what happens by default for all bridged IRC channels on matrix.org. The Matrix users have no power in the room, and are at the mercy of the IRC channel operators. The channel operators are represented by virtual Matrix users in the room. However, they do not have any power level: they are at the same level as real Matrix users. Why? The bridge does this because, unlike IRC, it’s not possible in Matrix to bring a user to the same level as yourself (e.g. +o) and then downgrade them back to a regular user (e.g. -o). Instead, the bridge bot itself acts as a custodian for the room, and performs privileged IRC operations (topic changing, kickbans, etc.) on the IRC channel operator’s behalf.

In the second case, we want to defer ownership to the Matrix moderators. This is what happens when you “provision a room” in Matrix. The bridge will PM a currently online channel operator and ask for their permission to bridge to Matrix. If they accept, the bridge is made and the power levels in the pre-existing Matrix room are left untouched, giving moderators in Matrix control over the room. However, this power doesn’t extend completely to IRC. If a Matrix moderator grants moderator powers to another Matrix user, this will not be mapped to IRC. Why? It’s not possible for the bridge to give chanops to any random user on any random IRC channel, so it cannot always honour the request. Instead, this relies on the humans on either side of the bridge to communicate and map power accordingly. This is done on purpose, as there is no 100% perfect mapping between IRC powers and Matrix powers: it’s always going to involve compromises which only a human can make.

Finally, there is the problem of one-to-many mappings. It is possible to have two Matrix rooms bridged to the same IRC channel. The problem occurs when a Matrix user in one room speaks. The bridge can easily map that to IRC, but unless it also maps it back to Matrix, the message will never make it to the 2nd Matrix room. The bridge cannot control/puppet the Matrix user who spoke, so instead it creates a virtual Matrix user to represent that real Matrix user and then sends the message into the 2nd Matrix room. Needless to say, this can be quite confusing and we strongly discourage one-to-many mappings for this reason.
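
For illustration, the fan-out described above might look roughly like this. The interfaces, the example room IDs and the virtual-user naming scheme are invented for the sketch; the key point is that the message goes to IRC once and is echoed into the other Matrix rooms via a virtual user, because the bridge cannot puppet the real sender.

```typescript
// Hypothetical interfaces for the sketch.
interface Matrix {
  // Send a message into a room as the given (possibly virtual) user.
  sendAsUser(roomId: string, userId: string, text: string): Promise<void>;
}
interface Irc {
  say(channel: string, text: string): void;
}

// All Matrix rooms bridged to a given IRC channel (example IDs only).
const bridgedRooms: Record<string, string[]> = {
  "#channel": ["!roomOne:matrix.org", "!roomTwo:matrix.org"],
};

async function onMatrixMessage(
  matrix: Matrix,
  irc: Irc,
  channel: string,
  sourceRoomId: string,
  senderUserId: string,
  text: string,
): Promise<void> {
  // 1. Relay to IRC as normal.
  irc.say(channel, text);

  // 2. Echo into the *other* Matrix rooms bridged to the same channel, using a
  //    virtual user (naming scheme invented here) to represent the real sender.
  const virtualSender = "@irc_" + senderUserId.replace(/[@:]/g, "_") + ":example.org";
  for (const roomId of bridgedRooms[channel] ?? []) {
    if (roomId !== sourceRoomId) {
      await matrix.sendAsUser(roomId, virtualSender, text);
    }
  }
}
```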

Mapping Messages

Mapping Matrix messages to IRC is rather easy for the most part. Messages are passed from the Homeserver to the bridge via the AS API, and the bridge sends a textual representation of the message to IRC using the IRC connection for that Matrix user. The exact form of the text for images, videos and long text can be quite subjective, and there is inevitably some data loss along the way. For example, you can send big text headings, tables and lists in Matrix, but there is no equivalent on IRC. Thankfully, most Matrix users are sending the corresponding markdown and so the formatting can be reasonably preserved by just sending the plaintext (markdown) body.
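
A much-simplified version of that Matrix-to-IRC conversion is sketched below. The event shape is trimmed to the relevant fields, and the formatting choices (CTCP ACTION for emotes, a plain link for media) are illustrative rather than a description of the bridge’s exact output; the homeserver URL in the media helper is an assumption.

```typescript
// Minimal subset of an m.room.message event, for illustration only.
interface MatrixMessageEvent {
  sender: string;
  content: {
    msgtype: "m.text" | "m.emote" | "m.image" | "m.video" | string;
    body: string;   // plain-text (often markdown) body
    url?: string;   // mxc:// URL for media messages
  };
}

// Placeholder conversion of an mxc:// content URI into an HTTP download link;
// the homeserver base URL here is an assumption for the sketch.
function mxcToHttp(mxcUrl: string): string {
  return mxcUrl.replace("mxc://", "https://matrix.example.org/_matrix/media/r0/download/");
}

function matrixEventToIrcText(event: MatrixMessageEvent): string {
  const { msgtype, body, url } = event.content;
  switch (msgtype) {
    case "m.emote":
      // Emotes map naturally onto a CTCP ACTION ("/me ...") on IRC.
      return `\u0001ACTION ${body}\u0001`;
    case "m.image":
    case "m.video":
      // There is no native media on IRC: fall back to the filename plus a link.
      return url ? `${body} < ${mxcToHttp(url)} >` : body;
    default:
      // The plaintext (markdown) body usually reads fine on IRC as-is.
      return body;
  }
}
```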

Mapping IRC messages to Matrix is more difficult: not because it’s hard to represent the message in Matrix, but because of the architecture of the bridge. The bridge maintains separate connections for each Matrix user. This means the bridge might have, for example, 5 users (and hence connections) on the same channel. When an IRC user sends a message, the bridge gets 5 copies of the message. How does the bridge know:

  • If the message has already been sent?
  • If the message is an intentional duplicate?

The IRC protocol does not have message IDs, so the bridge cannot de-duplicate messages as they arrive. Instead, it “nominates” a single user’s connection to be responsible for delivering messages from that channel. This introduces another problem though. Long-lived TCP connections are fickle things, and can fail without any kind of visible warning until you try to send bytes down them. If a user’s connection drops, another user needs to take over responsibility for delivering messages. This is what the “IRC Event Broker” class does. It allows users to “steal” messages if the bridge has any indication that the connection in charge has dropped. This technique has worked well for us, and gives us the ability to have more robust connections to the channel than with one TCP connection alone.
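
In spirit, the nomination logic behaves something like the sketch below. This is a simplification of what the bridge’s event broker does, and the class and method names are made up: one connection per channel is treated as the delivery owner, and ownership is released (and can be claimed by another connection) when that connection is detected as dead.

```typescript
// Simplified sketch: nominate one connection per channel to deliver incoming
// IRC messages to Matrix, with takeover when that connection drops.
class DeliveryBroker {
  // channel -> Matrix user ID whose IRC connection currently delivers messages
  private owners = new Map<string, string>();

  // Called for every copy of an incoming IRC message, once per connected user.
  shouldDeliver(channel: string, receivingUserId: string): boolean {
    const owner = this.owners.get(channel);
    if (!owner) {
      // No owner yet: the first connection to see traffic claims the channel.
      this.owners.set(channel, receivingUserId);
      return true;
    }
    // Only the nominated owner's copy is bridged; the other copies are dropped.
    return owner === receivingUserId;
  }

  // Called when a user's TCP connection is detected as dead: another user's
  // connection may then "steal" responsibility for the channels it owned.
  onConnectionLost(deadUserId: string): void {
    for (const [channel, owner] of this.owners) {
      if (owner === deadUserId) {
        this.owners.delete(channel);
      }
    }
  }
}
```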

Admin Rooms

Admin rooms are private Matrix rooms between a real Matrix user and the bridge bot. They allow the Matrix user to control their connection to IRC, including:

  • Changing their IRC nick.
  • Issuing /whois commands.
  • Bypassing the bridge and sending raw IRC commands directly down the TCP connection (e.g. MODE commands).
  • Saving a NickServ password for use when the bridge reconnects them.
  • Disconnecting from the network entirely.
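
A toy version of that admin-room command handling could look like the following. The command names echo the !whois example mentioned in the next paragraph; the connection and store interfaces, and the exact set of commands, are assumptions made for illustration rather than the bridge’s real implementation.

```typescript
// Hypothetical handle onto a single user's IRC connection.
interface IrcConnection {
  changeNick(nick: string): void;
  whois(nick: string): void;
  sendRaw(line: string): void; // e.g. "MODE #channel +o someone"
  disconnect(): void;
}

interface UserStore {
  savePassword(userId: string, password: string): Promise<void>;
}

// Dispatch a "!command arg..." text message sent in an admin room and return
// the reply text the bridge bot would send back.
async function handleAdminCommand(
  conn: IrcConnection,
  store: UserStore,
  senderUserId: string,
  body: string,
): Promise<string> {
  const [cmd, ...args] = body.trim().split(/\s+/);
  switch (cmd) {
    case "!nick":
      conn.changeNick(args[0]);
      return `Requested nick change to ${args[0]}`;
    case "!whois":
      conn.whois(args[0]);
      return `Sent WHOIS for ${args[0]} (a reply will follow if the server sends one)`;
    case "!cmd":
      conn.sendRaw(args.join(" "));
      return "Sent raw IRC command";
    case "!storepass":
      await store.savePassword(senderUserId, args.join(" "));
      return "Stored NickServ password for future reconnects";
    case "!quit":
      conn.disconnect();
      return "Disconnected from the network";
    default:
      return `Unknown command: ${cmd}`;
  }
}
```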

To perform these actions, Matrix users send a text message which starts with a command name, e.g. !whois $ARG. As with any command, you expect to get a reply once you’ve issued it. However, IRC makes this extremely difficult to do. There is no request/response pairing like there is with HTTP requests. Instead, the IRC server may:

  • Ignore the request entirely.
  • Send an error you’re aware of (in the RFC/most servers).
  • Send some information which can be assumed to indicate success.
  • Send an error you’re unaware of.
  • Send some information which sometimes indicates success.
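
One common way to cope with this ambiguity is to race the expected reply against a timeout and treat silence as an unknown outcome. The sketch below shows that pattern in isolation; it is not the bridge’s actual implementation, and the reply promise is assumed to be resolved or rejected elsewhere when a recognised IRC numeric arrives.

```typescript
// Race an expected IRC reply against a timeout. Because IRC has no
// request/response pairing, "no reply" is indistinguishable from "ignored".
type Outcome<T> =
  | { kind: "ok"; value: T }
  | { kind: "error"; message: string }
  | { kind: "unknown" };

function withTimeout<T>(reply: Promise<T>, ms: number): Promise<Outcome<T>> {
  const timeout = new Promise<Outcome<T>>((resolve) =>
    setTimeout(() => resolve({ kind: "unknown" }), ms),
  );
  const wrapped = reply.then(
    (value): Outcome<T> => ({ kind: "ok", value }),
    (err): Outcome<T> => ({ kind: "error", message: String(err) }),
  );
  return Promise.race([wrapped, timeout]);
}

// Usage sketch: `reply` would be resolved when a WHOIS-style success numeric
// arrives for the right nick, or rejected on a recognised error numeric.
async function whoisWithFallback(reply: Promise<string>): Promise<string> {
  const outcome = await withTimeout(reply, 10_000);
  switch (outcome.kind) {
    case "ok":
      return outcome.value;
    case "error":
      return `WHOIS failed: ${outcome.message}`;
    case "unknown":
      return "No reply from the IRC server (the request may have been ignored)";
  }
}
```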

This makes it very difficult to know whether a request succeeded or failed, and I’ll go into more detail in the next post, which focuses on problems we’ve encountered when developing the IRC bridge. The admin room is also used to give the Matrix user general information about their IRC connection, such as when their connection has been lost, or any errors (e.g. “requires chanops to do this action”). The bridge makes no effort to parse these errors, because it doesn’t always know what caused the error to happen.

Wrapup

Developing a comprehensive IRC bridge is a very difficult task. This post has outlined a few of the ways in which we’ve designed our bridge, and some of the general problems in this field. The bridge is constantly improving as we discover new edge cases with the plethora of IRCd implementations out there. The next post will look at some of these edge cases, revisit some previous outages, and examine why they occurred.

New bridged IRC network: GIMPNet

Hey everyone! As of last week, we are now bridging irc.gimp.org (GIMPNet) for all your GTK+/GNOME needs! It’s running a bleeding-edge version of the IRC bridge which supports basic chanops syncing from IRC to Matrix. This means that if an IRC user gives chanops to a Matrix user’s IRC connection, the bridge will give that Matrix user moderator privileges in the room, allowing them to set the room topic/avatar/alias/etc! We hope this will make customising Matrix-bridged rooms a lot easier.
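
Conceptually, the ops syncing boils down to something like the sketch below: when the bridge sees +o granted to a nick it manages on behalf of a Matrix user, it raises that user’s power level in the bridged room. The client interface is hypothetical, and the value of 50 is simply Matrix’s conventional “moderator” power level.

```typescript
// Hypothetical Matrix client interface for the sketch.
interface MatrixAdminClient {
  setPowerLevel(roomId: string, userId: string, level: number): Promise<void>;
}

// Called when the bridge sees "MODE #channel +o <nick>" for a nick it manages
// on behalf of a Matrix user: give that user moderator rights in the room.
async function onIrcOpGranted(
  client: MatrixAdminClient,
  roomId: string,
  matrixUserId: string,
): Promise<void> {
  // 50 is the conventional "moderator" power level in Matrix rooms, enough to
  // change the room topic, avatar, aliases, etc.
  await client.setPowerLevel(roomId, matrixUserId, 50);
}
```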

For a more complete list of current and future bridged IRC networks, see the official wishlist.

Load problems on the Matrix.org homeserver

Hi folks,

Since FOSDEM we’ve seen even more interest in Matrix than normal, and we’ve been having some problems getting the Matrix.org homeserver to keep up with demand.  This has resulted in performance being slightly slower than normal at peak times, but the main impact has been the additional traffic exacerbating outages on the homeserver – either by revealing new failure modes, or making it harder to recover rapidly after something goes wrong.

Specifically: on Friday afternoon we had a service disruption caused by someone sending an unusual event into Matrix HQ.  It turns out that both matrix-android-sdk and matrix-ios-sdk based clients (e.g. Riot/Android and iOS) handled this naively by simply resyncing the room state… which has been fine in the past, but not when you have several hundred clients actively syncing the room, and resulted in a thundering herd effect which overloaded the server for ~10 minutes whilst they all resynced the room (which, in turn, nowadays, involves calculating and syncing several MB of JSON state to each client).  The traffic load was then high enough that it took the server a further 10-20 minutes to fully catch up and recover after the herd had dissipated.  We then had a repeat performance of the same failure mode on Monday morning.

Similarly, we had disruption last night after a user who hadn’t used the service for ages logged on for the first time and rapidly caught up on a few rooms which literally had *millions* of unread messages in them.  Generally this would be okay, but the combination of a loaded DB and the sheer number of notifications being deleted ended up with 4 long-running DB deletes in parallel.  This seems to have caused postgres to lock the event_push_actions table more aggressively than we’d expect, blocking other queries which were trying to access it… causing most requests to block until the deletes were over.  At the current traffic volumes this meant that the main synapse process tried to serve thousands of simultaneous requests as they stacked up, ran out of filehandles within about 10 minutes, and wedged the whole synapse solid before the DB could unblock.  Irritatingly, it turns out our end-to-end monitoring has a bug where it in turn can crash on receiving a 500 from synapse, so despite having PagerDuty all set up and running (and having been receiving pages for traffic delays over the last few weeks)… we didn’t get paged when we got actual failed traffic rather than slow traffic, which delayed resolving the issue.  Finally, whilst rolling out a fix this afternoon, we again hit issues with the traffic load causing more problems than we were expecting, making a routine redeploy distinctly more disruptive.

So, what are we doing about this?

  1. Fix the root causes:
    • The ‘android/iOS thundering herd’ bug is being worked on from both the android/iOS side (fixing the naive behaviour) and the server side.  A temporary mitigation is now in place which moves the server-side code to worker processes, so that in the worst case it can’t take out the main synapse process and can scale better.
    • The ‘event_push_actions table is inefficient’ bug had already been fixed – so this was a matter of rushing through the hotfix to matrix.org before we saw a recurrence.
  2. Move to faster hardware.  Our current DB master is a “fast when we bought it 5 years ago” machine whose IO is simply starting to saturate (6x 300GB 10krpm disks in RAID5, fwiw), which is maxing out at around 500IOPS and 20MB/s of random access, and acting as a *very* hard limit to the current synapse performance.  We’re currently in the process of evaluating SSD-backed IO for the DB (in fact, we’re already running a DB slave), and assuming this tests out okay we’re hoping to migrate next week, which should give us a 10x-20x speed up on disk IO and buy considerable headroom.  Watch this space for details.
  3. Make synapse faster.  We’re continuing to plug away at optimisations (e.g. stuff like this), but these are reaching the point of diminishing returns, especially relative to the win from faster hardware.
  4. Fix the end-to-end monitoring.  This already happened.
  5. Load-test before deploying.  This is hard, as you really need to test against precisely the same traffic profile as live traffic, and that’s hard to simulate.  We’re thinking about ways of fixing this, but the best solution is probably going to be clustering and being able to do incremental redeploys to gradually test new changes.  On which note:
  6. Fix synapse’s architectural deficiencies to support clustering, allowing for rolling zero-downtime redeploys, and better horizontal scalability to handle traffic spikes like this.  We’re choosing not to fix this in synapse, but we are currently in full swing implementing dendrite as a next-generation homeserver in Golang, architected from the outset for clustering and horizontal scalability.  N.B. most of the exciting stuff is happening on feature branches and gomatrixserverlib atm. Also, we’re deliberately taking the time to try to get it right this time, unlike bits of synapse which were something of a rush job.  It’ll be a few weeks before dendrite is functional enough to even send a message (let alone finish the implementation), but hopefully faster hardware will give the synapse deployment on matrix.org enough headroom for us to get dendrite ready to take over when the time comes!

The good news of course is that you can run your own synapse today to avoid getting caught up in this operational fun & games, and unless you’re planning to put tens of thousands of daily active users on the server you should be okay!

Meanwhile, please accept our apologies for the instability and be assured that we’re doing everything we can to get out of this turbulence as rapidly as possible.

Matthew


Synapse 0.19.1 released

Hi folks,

We’re a little late with this, but Synapse 0.19.1 was released last week. The only change is a bugfix to a regression in room state replication that snuck in during the performance improvements that landed in 0.19.0. Please upgrade if you haven’t already. We’ve also fixed the Debian repository to make installing Synapse easier on Jessie by including backported packages for stuff like Twisted where we’re forced to use the latest releases.

You can grab it from https://github.com/matrix-org/synapse/ as always.

Changes in synapse v0.19.1 (2017-02-09)

  • Fix bug where state was incorrectly reset in a room when synapse received an event over federation that did not pass auth checks (PR #1892)