Matthew Hodgson

157 posts tagged with "Matthew Hodgson"

Opening up cyberspace with Matrix and WebVR!

04.04.2017 00:00 — Tech Matthew Hodgson

TL;DR: here's the demo!

Hi everyone,

Today is a special day, the sort of day where you take a big step towards an ultimate dream. Starting Matrix and seeing it gain momentum is already huge for us - a once-in-a-lifetime opportunity. But one of the crazier things which drove us to create Matrix is the dream of creating cyberspace: the legendary promised land of the internet.

Whether it's the Matrix of Neuromancer, the Metaverse of Snow Crash or the Other Plane of True Names, an immersive 3D environment where people can meet from around the world to communicate, create and share is the ultimate expression of the Internet's potential as a way to connect: the idea of an open, neutral, decentralised virtual reality within the 'net.

This is essentially the software developer equivalent of lying on your back at night, looking up at the stars, and wondering if you'll ever fly among them... and Matrix is not alone in dreaming of this!  There have been many walled-garden virtual worlds over the years - Second Life, Habbo Hotel, all of the MMORPGs, Project Sansar etc.  And there have been decentralised worlds which lack the graphics but share the vision - whether it's FidoNet, Usenet, IRC servers, XMPP, the blogosphere or Matrix as it's used today.  And there are a few ambitious projects like Croquet/OpenCobalt, Open Simulator, JanusVR or High Fidelity which aim for a decentralised cyberspace, albeit without defining an open standard.

But despite all this activity, where is the open cyberspace? Where is the universal fabric which could weave these worlds together?  Where is the VR equivalent of The Web itself?  Where is the neutral communication layer that could connect the galaxies of different apps and users into a coherent reality?  How do you bridge between today's traditional web apps and their VR equivalents?

Aside from cultural ones, we believe there are three missing ingredients which have been technically holding back the development of an open cyberspace so far:

  1. The hardware
  2. Client software support (i.e. apps)
  3. A universal real-time data layer to store the space
Nowadays the hardware problem is effectively solved: the HTC Vive, Oculus Rift and even Google Cardboard have brought VR displays to the general public.  Meanwhile, accelerometers and head-tracking turn normal screens into displays for immersive content without even needing goggles, giving everyone a window into a virtual world.

Client software is a more interesting story: while many custom and proprietary VR apps already exist, almost none of them can connect to servers other than the ones run by their own vendors, let alone to other services and apps. An open, neutral cyberspace is just like the web: it needs the equivalent of web browsers, i.e. ubiquitous client apps which can connect to any server and service, written by any vendor and hosted by any provider, communicating together via an open common language. And while web browsers of course exist, until very recently there was no way to link them to VR hardware.

This has changed with the creation of WebVR by Mozilla - defining an API to let browsers render VR content, gracefully degrading across hardware and platforms such that you get the best possible experience all the way from a top-end gaming PC + Vive, down to tapping on a link on a simple smartphone.  WebVR is a genuine revolution: suddenly every webapp on the planet can create a virtual world! And frameworks like A-Frame, aframe-react and React VR make it incredibly easy and fun to do.
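
To give a sense of how little code a basic WebVR scene needs, here's a minimal sketch (not taken from our demo; the scene contents are arbitrary) using A-Frame via aframe-react:

```tsx
// Minimal WebVR scene via A-Frame + aframe-react: a box floating in front of
// the camera, plus a sky. WebVR handles the headset/phone/desktop differences.
import 'aframe';
import React from 'react';
import { Entity, Scene } from 'aframe-react';

export const HelloWebVR = () => (
  <Scene>
    <Entity
      geometry={{ primitive: 'box' }}
      material={{ color: '#4CC3D9' }}
      position={{ x: 0, y: 1.5, z: -3 }}
    />
    <Entity primitive="a-sky" color="#ECECEC" />
  </Scene>
);
```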

So looking back at our list, the final missing piece is nothing less than a backbone: some kind of data layer to link these apps together.  Right now, all the WebVR apps out there are little islands - each its own isolated walled garden, with no standard way to provide shared experiences.  There is no standard way for users to communicate between these worlds (or between the VR and non-VR web) - be that by messaging, VoIP, video or even VR interactions.  There is no standard way to define an avatar, its location and movement within a world, or how it might travel between worlds.  And finally, there is no standard way to describe the world's state in general: each webapp is free to manage its scene and its content any way it likes; there is nowhere to expose the realtime scene-graph of the world such that other avatars, bots, apps, services etc. can interact with it. It's just like the lack of a standard way to exchange messages or reuse a user profile between messaging apps today: if cyberspace is taking shape as we speak, it is definitely not yet taking the path of openness.

Predictably enough, it's this last point of the 'missing data layer for cyberspace' which we've been thinking about with Matrix: an open, neutral, decentralised meta-network powering or connecting these worlds.  To start with, we've made Matrix available as a generic communications layer for VR, taking WebVR (via A-Frame) and combining it with matrix-js-sdk as an open, secure and decentralised way to place voice and video calls and exchange instant messages, both within and between WebVR apps and the rest of the existing Matrix ecosystem (e.g. apps like Riot).
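
For a flavour of what that looks like, here's a minimal sketch (not the demo's actual code; the homeserver URL, access token and user ID are placeholders) of a WebVR app using matrix-js-sdk to exchange messages with the rest of the Matrix ecosystem:

```ts
import * as sdk from 'matrix-js-sdk';

// Placeholders: point this at your own homeserver and credentials.
const client = sdk.createClient({
  baseUrl: 'https://matrix.org',
  accessToken: 'YOUR_ACCESS_TOKEN',
  userId: '@you:matrix.org',
});

// Surface incoming room messages inside the VR scene (e.g. as floating text).
client.on('Room.timeline', (event, room) => {
  if (event.getType() === 'm.room.message') {
    console.log(`[${room.name}] ${event.getSender()}: ${event.getContent().body}`);
  }
});

// Send a message from within the virtual world back into the room.
export function sendFromWorld(roomId: string, text: string) {
  return client.sendTextMessage(roomId, text);
}

client.startClient();
```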

In fact, the best way to understand it is to try it live: we've put together a quick demo at https://matrix.org/vrdemo to show it off, so please give it a go!

In the demo you get:

  1. a virtual lobby, providing a 1:1 WebRTC video call via Matrix through to a 'guide' user of your choice anywhere else in Matrix (VR or not).  From the lobby you can jump into two other apps:
  2. a video conference, calling between all the participants of a given Matrix room in VR (no interop yet with other Matrix apps)
  3. a 'virtual tourism' example, featuring a 1:1 WebRTC video call with a guide, superimposed over 360 degree video footage that takes the user skiing.
Video calling requires a WebRTC-capable browser (Chrome or Firefox). Unfortunately no iOS browsers support it yet. If you have dedicated VR hardware (Vive or Rift), you'll have to configure your browser appropriately to use the demo - see https://webvr.rocks for the latest details.

Needless to say, the demo's open sourced under the Apache License like all things Matrix - you can check out the code from https://github.com/matrix-org/matrix-vr-demo.  Huge kudos to Richard Lewis, Rob Swain and Ben Lewis for building this out - and to Aidan Taub and Tom Elliott for providing the 360 degree video footage!

The demo is quite high-bandwidth and hardware intensive, so here's a video of it in action, just in case:

Now, it's important to understand that here we're using Matrix as a standard communications API for VR, but we're not using Matrix to store any VR world data (yet).  The demo uses plain A-Frame via aframe-react to render its world: we are not providing an API which exposes the world itself onto the network for folks to interact with and extend.  This is because Matrix is currently optimised for storing and synchronising two types of data structure: decentralised timelines of conversation data, and arbitrary decentralised key-value data (e.g. room names, membership, topics).

However, the job of storing arbitrary world data requires storing and flexibly querying it as an object graph of some kind - e.g. as a scene graph hierarchy.  Doing this efficiently whilst supporting Matrix's decentralised eventual consistency model is tantamount to evolving Matrix into a generic decentralised object-graph database (whilst upholding the constraints of that virtual world).  This is tractable, but it's a bunch more work than just supporting the eventually-consistent timeline & key-value store we have today.  It's something we're thinking about though. :)
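
To make that concrete, here's a purely speculative sketch of what exposing a single scene-graph node as room state might look like using matrix-js-sdk's existing state API - the org.example.world.node event type and its content schema are invented for illustration and are not part of Matrix today:

```ts
import * as sdk from 'matrix-js-sdk';

// Speculative: one scene-graph node stored as a state event, keyed by a node ID
// in the state_key, so other clients/bots could read and update it.
export async function publishNode(client: sdk.MatrixClient, worldRoomId: string) {
  await client.sendStateEvent(worldRoomId, 'org.example.world.node', {
    parent: 'root',                    // hypothetical scene-graph parent
    position: [0, 1.6, -2],            // metres, relative to the parent
    model: 'mxc://example.org/asset',  // placeholder asset reference
  }, 'node-42');
}
```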

Also, Matrix is currently not super low-latency - on a typical busy Synapse deployment event transmission between clients has a latency of 50-200ms (ignoring network).  This is fine for instant messaging and setting up VoIP calls etc, but useless for publishing the realtime state of a virtual world: having to wait 200ms to be told that something happened in an interactive virtual world would be a terrible experience.  However, there are various fixes we can do here: Matrix itself is getting faster; Dendrite is expected to be one or two orders of magnitude faster than Synapse.  We could also use Matrix simply as a signalling layer in order to negotiate a lower latency realtime protocol to describe world data updates (much as we use Matrix as a signalling layer for setting up RTP streams for VoIP calls or MIDI sessions).
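
As an illustration of the 'signalling only' approach, the sketch below uses an ordinary Matrix event (with a made-up event type and content) to advertise an out-of-band low-latency channel - say a WebRTC data channel - over which the realtime world updates would actually flow:

```ts
import * as sdk from 'matrix-js-sdk';

// Speculative: Matrix only carries the negotiation; realtime world updates then
// travel over whatever low-latency transport the participants agree on.
export async function offerWorldChannel(client: sdk.MatrixClient, roomId: string, sdpOffer: string) {
  await client.sendEvent(roomId, 'org.example.world.channel_offer', {
    transport: 'webrtc-datachannel',   // made-up fields, for illustration only
    sdp: sdpOffer,
  });
}
```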

Finally, you need somewhere to store the world assets themselves (textures, sounds, Collada or glTF assets, etc).  This is no different to handling attachments in Matrix today - assets could be served over plain HTTP, stored via the Matrix decentralised content store (mxc:// URLs), or distributed via something like IPFS.
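
As a sketch of the attachment-style approach (assuming a matrix-js-sdk version whose uploadContent() resolves to an object containing content_uri), uploading a world asset and resolving it to something the renderer can fetch might look like:

```ts
import * as sdk from 'matrix-js-sdk';

// Push a texture/model into the homeserver's media repository, then turn the
// resulting mxc:// URI into an HTTP URL for the WebVR renderer to load.
export async function uploadWorldAsset(client: sdk.MatrixClient, file: File): Promise<string | null> {
  const { content_uri } = await client.uploadContent(file);
  return client.mxcUrlToHttp(content_uri);
}
```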

This said, it's only a matter of time before someone starts storing world data itself in Matrix.  We have more work to do before it's a tight fit, but this has always been one of the long-term goals of the project and we're going to do everything we can to support it well.

So: this is the future we're thinking of.  Obviously work on today's Matrix servers, clients, spec & bridges is our focus and priority right now, and we have lots of work left there - but the longer-term plan is critical too.  Communication in VR is pretty much a blank canvas right now, and Matrix can be the connecting fabric for it - which is unbelievably exciting.  Right now our demo is just a PoC - we'd encourage all devs reading this to have a think about how to extend it, and how we can all build the new frontier of cyberspace together!

Finally, if you're interested in chatting more about VR on Matrix, come hang out over at #vr:matrix.org!

  • Matthew, Amandine & the Matrix team

Amandine grins at the future in the newest skunkworks zone of the London https://t.co/y2YCHNIbgU HQ... :> :D 😈 pic.twitter.com/K5xBz7U9o2

— Matrix (@matrixdotorg) March 18, 2017

Synapse 0.19.3 released

21.03.2017 00:00 — General Matthew Hodgson

Hi all,

We've released Synapse 0.19.3-rc2 as 0.19.3 with no changes. This is a slightly unusual release, as 0.19.3-rc2 dates from March 13th and a lot of stuff has landed on the develop branch since then - however, that will be released as 0.20.0 once it's ready. Instead, 0.19.3 contains a set of interim performance improvements and bug fixes; the only new feature is a set of admin APIs kindly contributed by @morteza-araby.

The changelog follows - please upgrade from https://github.com/matrix-org/synapse or your OS packages as normal :)

Changes in synapse v0.19.3 (2017-03-20)

No changes since v0.19.3-rc2

Changes in synapse v0.19.3-rc2 (2017-03-13)

Bug fixes:

  • Fix bug in handling of incoming device list updates over federation.

Changes in synapse v0.19.3-rc1 (2017-03-08)

Features:

  • A set of admin APIs, kindly contributed by @morteza-araby

Bug fixes:
  • Fix synapse_port_db failure. Thanks to @Pneumaticat! (PR #1904)
  • Fix caching to not cache error responses (PR #1913)
  • Fix APIs to make kick & ban reasons work (PR #1917)
  • Fix bugs in the /keys/changes api (PR #1921)
  • Fix bug where users couldn't forget rooms they were banned from (PR #1922)
  • Fix issue with long language values in pushers API (PR #1925)
  • Fix a race in transaction queue (PR #1930)
  • Fix dynamic thumbnailing to preserve aspect ratio. Thanks to @jkolo! (PR #1945)
  • Fix device list update to not constantly resync (PR #1964)
  • Fix potential for huge memory usage when getting devices that have changed (PR #1969)

Dendrite receives its first messages!!!

15.03.2017 00:00 — Tech Matthew Hodgson

Hi all,

We hit a major milestone today on Dendrite, our next-generation golang homeserver: Dendrite received its first messages!!

Before you get too excited, please understand that Dendrite is still a pre-alpha work in progress - whilst we successfully created some rooms on an instance and sent a bunch of messages into them via the Client-Server API, most other functionality (e.g. receiving messages via /sync, logging in, registering, federation etc) is yet to exist. It cannot yet be used as a homeserver. However, this is still a huge step in the right direction, as it demonstrates the core DAG functionality of Matrix is intact, and the beginnings of a usable Client-Server API are hooked up.

The architecture of Dendrite is genuinely interesting - please check out the wiring diagram if you haven't already. The idea is that the server is broken down into a series of components which process streams of data stored in Kafka-style append-only logs. Each component scales horizontally (you can have as many as required to handle load), which is an enormous win over Synapse's monolithic design. Each component is also decoupled from each other by the logs, letting them run on entirely different machines as required. Please note that whilst the initial implementation is using Kafka for convenience, the actual append-only log mechanism is abstracted away - in future we expect to see configurations of Dendrite which operate entirely from within a single go executable (using go channels as the log mechanism), as well as alternatives to Kafka for distributed solutions.
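
As a purely conceptual sketch of that decoupling (written in TypeScript for consistency with the other examples on this blog - Dendrite itself is Go, and these interfaces are invented rather than taken from its code):

```ts
// Conceptual only: components communicate via an abstract append-only log, which
// could be backed by Kafka, or by an in-process queue in a single-binary setup.
interface AppendOnlyLog<T> {
  append(entry: T): Promise<void>;
  subscribe(fromOffset: number, handler: (entry: T, offset: number) => Promise<void>): void;
}

interface RoomEvent {
  roomId: string;
  eventId: string;
  type: string;
  content: unknown;
}

// A roomserver-like consumer: reads events from its log, authorises them against
// the room DAG, and persists them. Several instances can consume in parallel.
export function startRoomServer(
  log: AppendOnlyLog<RoomEvent>,
  persist: (event: RoomEvent) => Promise<void>,
): void {
  log.subscribe(0, async (event) => {
    // ...auth checks against the room's current state would happen here...
    await persist(event);
  });
}
```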

The components which have taken form so far are the central roomserver service and the clientapi service. The roomserver is responsible (as the name suggests) for maintaining the state and integrity of one or more rooms: authorising events into the room DAG, storing them in postgres, and tracking the auth chain of events where needed. Much of the core Matrix DAG logic of the roomserver is provided by gomatrixserverlib. The roomserver receives events sent by users via the 'client room send' component (and the 'federation backfill' component, once that exists). The 'client room send' component (and in future also 'client sync') is provided by the clientapi service - which is what, as of today, is successfully creating rooms and events and relaying them to the roomserver!

The actual events we've been testing with are the history of the Matrix Core room: around 10k events. Right now the roomserver (and the postgres DB that backs it) are the main bottleneck in the pipeline rather than clientapi, so it's been interesting to see how rapidly the roomserver can consume its log of events. As of today's benchmark, on a generic dev workstation and an entirely unoptimised roomserver (i.e. no caching whatsoever) running on a single core, we're seeing it ingest the room history at over 350 events per second. The vast majority of this work is going into encoding/decoding JSON or waiting for postgres: with a simple event cache to avoid repeatedly hitting the DB for every auth and state event, we expect this to increase significantly. And then as we increase the number of cores, kafka partitions and roomserver instances it should scale fairly arbitrarily(!)

For context, the main synapse process for Matrix.org currently maxes out at persisting between 15 and 20 events per second (although it is also spending a bunch of time relaying events to the various worker processes, and other miscellanies). As such, an initial benchmark for Dendrite of 350 msgs/s really does look incredibly promising.

You may be wondering where this leaves Synapse. Well, a major driver for implementing Dendrite has been to support the growth of the main matrix.org server, which currently persists around 10 events/s (whilst emitting around 1500 events/s). We have exhausted most of the low-hanging fruit for optimising Synapse, and have got to the point where the architectural fixes required are of similar shape and size to the work going into Dendrite. So, whilst Synapse is going to be around for a while yet, we're putting the majority of our long-term plans into Dendrite, with a distinct degree of urgency as we race against the ever-increasing traffic levels on the Matrix.org server!

Finally, you may be wondering what happened to Dendron, our original experiment in Golang servers. Well: Dendron was an attempt at a strangler-pattern rewrite of Synapse, acting as a shim in front of Synapse which could gradually swap out endpoints with their golang implementations. In practice, the operational complexity it introduced, as well as the amount of room for improvement (at the time) we had in Synapse, and the relatively tight coupling to Synapse's existing architecture, storage & schema, meant that it was far from a clear win - and it effectively served as an excuse to learn Go. As such, we finally formally killed it off last week - Matrix.org is now running behind a normal haproxy, and Dendron is dead. Meanwhile, Dendrite (aka Dendron done Right ;) is very much alive, progressing fast, free from the shackles of Synapse.

We'll try to keep the blog updated with progress on Dendrite as it continues to grow!

How do I bridge thee? Let me count the ways...

11.03.2017 00:00 — Tech Matthew Hodgson

Bridges come in many flavours, and we need consistent terminology within the Matrix community to ensure everyone (users, developers, core team) is on the same page. This post is primarily intended for bridge developers to refer to when building bridges.

The most recent version of this document is here (source) but we're also posting it as a blog post for visibility.

Types of rooms

Portal rooms

Bridges can register themselves as controlling chunks of the room alias namespace, letting Matrix users join remote rooms transparently if they /join #freenode_#wherever:matrix.org or similar. The resulting Matrix room is typically automatically bridged to the single target remote room. Access control for Matrix users is typically managed by the remote network's side of the room. This is called a portal room, and is useful for jumping into remote rooms without any configuration needed whatsoever - using Matrix as a 'bouncer' for the remote network.
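
By way of illustration, here's a sketch (not taken from any particular bridge; the URL, id and regexes are placeholders) of how a bridge built on matrix-appservice-bridge might claim such an alias namespace in its appservice registration:

```ts
import { AppServiceRegistration } from 'matrix-appservice-bridge';

// Claim exclusive control of #freenode_* aliases and @freenode_* user IDs, so
// that /joining #freenode_#wherever:matrix.org lazily creates a portal room
// which the bridge then wires up to the remote channel.
const reg = new AppServiceRegistration('https://irc-bridge.example.org');
reg.setId('example-irc-bridge');
reg.setSenderLocalpart('ircbridge');
reg.setAppServiceToken(AppServiceRegistration.generateToken());
reg.setHomeserverToken(AppServiceRegistration.generateToken());
reg.addRegexPattern('aliases', '#freenode_.*', true);
reg.addRegexPattern('users', '@freenode_.*', true);
```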

Plumbed rooms

Alternatively, an existing Matrix room can be plumbed into one or more specific remote rooms by configuring a bridge (which can be run by anyone). For instance, #matrix:matrix.org is plumbed into #matrix on Freenode, matrixdotorg/#matrix on Slack, etc. Access control for Matrix users is necessarily managed by the Matrix side of the room. This is useful for using Matrix to link together different communities.

Migrating between portal & plumbed rooms is currently a bit of a mess, as there's not yet a way for users to remove portal rooms once they're created, so you can end up with a mix of portal & plumbed users bridged into a room, which looks weird from both the Matrix and non-Matrix viewpoints. https://github.com/matrix-org/matrix-appservice-irc/issues/387 tracks this.

Types of bridges (simplest first):

Bridgebot-based bridges

The simplest way to exchange messages with a remote network is to have the bridge log into the network using one or more predefined users called bridge bots - typically called MatrixBridge or MatrixBridge[123] etc. These relay traffic on behalf of the users on the other side, but it's a terrible experience as all the metadata about the messages and senders is lost. This is how the telematrix matrix<->telegram bridge currently works.
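
A quick sketch of why metadata gets lost (placeholder credentials; matrix-js-sdk used for brevity): every remote message is re-sent by the single bridge-bot user, so the original sender survives only as a textual prefix:

```ts
import * as sdk from 'matrix-js-sdk';

// One Matrix user relays everything from the remote network.
const bridgeBot = sdk.createClient({
  baseUrl: 'https://matrix.org',
  accessToken: 'BRIDGE_BOT_TOKEN',
  userId: '@MatrixBridge:example.org',
});

// Called whenever a message arrives from the remote network: sender identity,
// avatars, presence etc. are all flattened into plain text.
export function onRemoteMessage(roomId: string, remoteNick: string, text: string) {
  return bridgeBot.sendTextMessage(roomId, `<${remoteNick}> ${text}`);
}
```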

Bot-API (aka Virtual user) based bridges

Some remote systems support the idea of injecting messages from 'fake' or 'virtual' users, which can be used to represent the Matrix-side users as unique entities in the remote network. For instance, Slack's inbound webhooks let remote bots be created on demand, letting Matrix users be shown cosmetically correctly in the timeline as virtual users. However, the resulting virtual users aren't real users on the remote system, so they don't have presence/profile and can't be tab-completed or direct-messaged etc. They also have no way to receive typing notifs or other richer info which may not be available via bot APIs. This is how the current matrix-appservice-slack bridge works.

Simple puppeted bridge

This is a richer form of bridging, where the bridge logs into the remote service as if it were a real 3rd party client for that service. As a result, the Matrix user has to already have a valid account on the remote system. In exchange, the Matrix user 'puppets' their remote user, such that other users on the remote system aren't even aware they are speaking to a user via Matrix. The full semantics of the remote system are available to the bridge to expose into Matrix. However, the bridge has to handle the authentication process to log the user into the remote network.

This is essentially how the current matrix-appservice-irc bridge works (if you configure it to log into the remote IRC network as your 'real' IRC nickname). matrix-appservice-gitter is being extended to support both puppeted and bridgebot-based operation. It's how the experimental matrix-appservice-tg bridge works.

Going forwards we're aiming for all bridges to be at least simple puppeted, if not double-puppeted.

Double-puppeted bridge

A simple 'puppeted bridge' allows the Matrix user to control their account on the remote network. However, ideally this puppeting should work in both directions: if the user logs into (say) their native Telegram client and starts conversations, sends messages etc, these should be reflected back into Matrix as if the user had done them there. This requires the bridge to be able to puppet the Matrix side of the bridge on behalf of the user.

This is the holy grail of bridging; matrix-puppet-bridge is a community project that tries to facilitate development of double-puppeted bridges, having done so for several networks. The main obstacle is working out an elegant way of having the bridge authenticate with Matrix as the Matrix user (which requires some kind of scoped access_token delegation).

Server-to-server bridging

Some remote protocols (IRC, XMPP, SIP, SMTP, NNTP, GnuSocial etc) support federation - either open or closed. The most elegant way of bridging to these protocols would be to have the bridge participate in the federation as a server, directly bridging the entire namespace into Matrix.

We're not aware of anyone who's done this yet.

Sidecar bridge

Finally: the types of bridging described above assume that you are synchronising the conversation history of the remote system into Matrix, so it may be decentralised and exposed to multiple users within the wider Matrix network.

This can cause problems where the remote system may have arbitrarily complicated permissions (ACLs) controlling access to the history, which will then need to be correctly synchronised with Matrix's ACL model, without introducing security issues such as races. We already see some problems with this on the IRC bridge, where history visibility for +i and +k channels has to be carefully synchronised with the Matrix rooms.

You can also hit problems with other network-specific features not yet having equivalent representation in the Matrix protocol (e.g. ephemeral messages, or op-only messages - although arguably that's a type of ACL).

One solution could be to support an entirely different architecture of bridging, where the Matrix client-server API is mapped directly to the remote service, meaning that ACL decisions are delegated to the remote service and conversations are not exposed to the wider Matrix network. This is effectively using the bridge purely as a 3rd party client for the network (similar to Bitlbee). The bridge is only available to a single user, and conversations cannot be shared with other Matrix users as they aren't actually Matrix rooms. (Another solution could be to finally use Active Policy Servers as a way of centralising and delegating ACLs for a room.)

This is essentially an entirely different product to the rest of Matrix, and whilst it could be a solution for some particularly painful ACL problems, we're focusing on non-sidecar bridges for now.

Load problems on the Matrix.org homeserver

17.02.2017 00:00 — General Matthew Hodgson

Hi folks,

Since FOSDEM we've seen even more interest in Matrix than normal, and we've been having some problems getting the Matrix.org homeserver to keep up with demand.  This has resulted in performance being slightly slower than normal at peak times, but the main impact has been the additional traffic exacerbating outages on the homeserver - either by revealing new failure modes, or making it harder to recover rapidly after something goes wrong.

Specifically: on Friday afternoon we had a service disruption caused by someone sending an unusual event into Matrix HQ.  It turns out that both matrix-android-sdk and matrix-ios-sdk based clients (e.g. Riot/Android and iOS) handled this naively by simply resyncing the room state... which has been fine in the past, but not when several hundred clients are actively syncing the room: the result was a thundering-herd effect which overloaded the server for ~10 mins or so whilst they all resynced the room (which, nowadays, involves calculating and syncing several MB of JSON state to each client).  The traffic load was then high enough that it took a further 10-20 minutes for the server to fully catch up and recover after the herd had dissipated.  We then had a repeat performance of the same failure mode on Monday morning.

Similarly, we had disruption last night after a user who hadn't used the service for ages logged on for the first time and rapidly caught up on a few rooms which literally had millions of unread messages in them.  Generally this would be okay, but the combination of a loaded DB and the sheer number of notifications being deleted ended up with 4 long-running DB deletes running in parallel.  This seems to have caused postgres to lock the event_push_actions table more aggressively than we'd expect, blocking other queries which were trying to access it... causing most requests to block until the deletes were over.  At current traffic volumes this meant the main synapse process had thousands of simultaneous requests stack up, ran out of filehandles within about 10 minutes, and wedged the whole synapse solid before the DB could unblock.  Irritatingly, it turns out our end-to-end monitoring has a bug where it can itself crash on receiving a 500 from synapse, so despite having PagerDuty all set up and running (and having been receiving pages for traffic delays over the last few weeks)... we didn't get paged when we got actual failed traffic rather than slow traffic, which delayed resolving the issue.  Finally, whilst rolling out a fix this afternoon, we again hit issues with the traffic load causing more problems than we were expecting, making a routine redeploy distinctly more disruptive.

So, what are we doing about this?

  1. Fix the root causes:
    • The 'android/iOS thundering herd' bug is being worked on from both the android/iOS side (fixing the naive behaviour) and the server side.  A temporary mitigation is now in place which moves the server-side code to worker processes so that, worst case, it can't take out the main synapse process and can scale better.
    • The 'event_push_actions table is inefficient' bug had already been fixed - so this was a matter of rushing through the hotfix to matrix.org before we saw a recurrence.
  2. Move to faster hardware.  Our current DB master is a "fast when we bought it 5 years ago" machine whose IO is simply starting to saturate (6x 300GB 10krpm disks in RAID5, fwiw), which is maxing out at around 500IOPS and 20MB/s of random access, and acting as a *very* hard limit to the current synapse performance.  We're currently in the process of evaluating SSD-backed IO for the DB (in fact, we're already running a DB slave), and assuming this tests out okay we're hoping to migrate next week, which should give us a 10x-20x speed up on disk IO and buy considerable headroom.  Watch this space for details.
  3. Make synapse faster.  We're continuing to plug away at optimisations (e.g. stuff like this), but these are reaching the point of diminishing returns, especially relative to the win from faster hardware.
  4. Fix the end-to-end monitoring.  This already happened.
  5. Load-test before deploying.  This is hard, as you really need to test against precisely the same traffic profile as live traffic, and that's hard to simulate.  We're thinking about ways of fixing this, but the best solution is probably going to be clustering and being able to do incremental redeploys to gradually test new changes.  On which note:
  6. Fix synapse's architectural deficiencies to support clustering, allowing for rolling zero-downtime redeploys, and better horizontal scalability to handle traffic spikes like this.  We're choosing not to fix this in synapse, but we are currently in full swing implementing dendrite as a next-generation homeserver in Golang, architected from the outset for clustering and horizontal scalability.  N.B. most of the exciting stuff is happening on feature branches and gomatrixserverlib atm. Also, we're deliberately taking the time to try to get it right this time, unlike bits of synapse which were something of a rush job.  It'll be a few weeks before dendrite is functional enough to even send a message (let alone finish the implementation), but hopefully faster hardware will give the synapse deployment on matrix.org enough headroom for us to get dendrite ready to take over when the time comes!
The good news of course is that you can run your own synapse today to avoid getting caught up in this operational fun & games, and unless you're planning to put tens of thousands of daily active users on the server you should be okay!

Meanwhile, please accept our apologies for the instability and be assured that we're doing everything we can to get out of this turbulence as rapidly as possible.

Matthew

Synapse 0.19.1 released

14.02.2017 00:00 — General Matthew Hodgson

Hi folks,

We're a little late with this, but Synapse 0.19.1 was released last week. The only change is a bugfix to a regression in room state replication that snuck in during the performance improvements that landed in 0.19.0. Please upgrade if you haven't already. We've also fixed the Debian repository to make installing Synapse easier on Jessie by including backported packages for stuff like Twisted where we're forced to use the latest releases.

You can grab it from https://github.com/matrix-org/synapse/ as always.

Changes in synapse v0.19.1 (2017-02-09)

  • Fix bug where state was incorrectly reset in a room when synapse received an event over federation that did not pass auth checks (PR #1892)

FOSDEM 2017 report

06.02.2017 00:00 — General Matthew Hodgson

Hi all,

FOSDEM this year was even more crazy and incredible than ever - with attendance up from 6,000 to 9,000 folks, it's almost impossible to describe the atmosphere. Matt Jordan from Asterisk describes it as DisneyWorld for OSS Geeks, but it's even more than that: it's basically a corporeal representation of the whole FOSS movement.  There is no entrance fee; there is no intrusive sponsorship; there is no corporate presence: it's just a venue for huge numbers of FOSS projects and their users and communities to come together in one place (the Université Libre de Bruxelles) and talk and learn.  Imagine if someone built a virtual world with storefronts for every open source project imaginable, where you could chat to the core team, geek out with other users, or gather in auditoriums to hear updates on the latest projects & ideas.  Well, this is FOSDEM... except even better, it's in real life.  With copious amounts of Belgian beer.

Anyway: this year we had our normal stand on the 2nd floor of the K building, sharing the Realtime Lounge chill-out space with the XMPP Standards Foundation, and a larger representation than ever before: Matthew, Erik and Luke from the London team as well as Manu & Yannick from Rennes - which is just as well, given all 5 of us ended up speaking literally non-stop from 10am to 6pm on both Saturday & Sunday (and then into the night as proceedings deteriorated/evolved into an impromptu Matrix meetup with Coffee, uhoreg, tadzik, realitygaps and others!).  The level of interest at the Matrix booth was frankly phenomenal: a major change from the last two FOSDEMs, in that this year pretty much everyone had already heard of Matrix and was keen to enthuse about features and bugs in Synapse or Riot, geek out about writing new bridges/bots/clients, or work out how to incorporate Matrix into their own projects or companies.

#RTCLounge with @ara4n from @matrixdotorg busy demoing cool stuff pic.twitter.com/Vc0uLEceQP

— miconda (@miconda) February 4, 2017

Synapse 0.19 and Riot 0.9.7 were also released on Saturday to try to ensure that anyone joining Matrix for the first time at FOSDEM was on the latest & greatest code - especially given the performance and E2E fixes present in both.  Amazingly the last-minute release didn't backfire: if you haven't upgraded to Synapse 0.19 we recommend doing so asap.  And if you're a Riot user, make sure you're on the latest version :)

We were very lucky to have two talks accepted this year: the main one in the Security Track on the Jansen main stage telling the tale of how we added end-to-end encryption to Matrix via Olm & Megolm - and the other in the Decentralised Internet room (AW1.125), focusing on the unsolved future problems of decentralised accounts, identity, reputation in Matrix.  Both talks were well attended, with huge queues for the Decentralised Internet room: we can only apologise to everyone who queued for 20+ minutes only to still not be able to get in.  Hopefully next year FOSDEM will allocate a larger room for decentralisation!  On the plus side, this year FOSDEM did an amazing job of videoing the sessions - livestreaming every talk, and automatically publishing the recordings (via a fantastic 'publish your own talk' web interface) - so many of the people who couldn't get into the room (as well as the rest of the world) were able to watch it live anyway by the stream.

This is how popular decentralised communication with @matrixdotorg is at #fosdem2017. pic.twitter.com/6T5PK6RRJE

— Jan Weisensee (@ilumium) February 5, 2017

Security track at #FOSDEM: @matrixdotorg project & @ara4n pic.twitter.com/QwroHSNh8Z

— miconda (@miconda) February 5, 2017

https://t.co/x0x7xuzlH2 presentation at the Decentralized Internet Devroom @fosdem pic.twitter.com/J2Wxo9SZ8H

— Tristan Nitot (@nitot) February 5, 2017

You can watch the videos of the talks from the FOSDEM website here and here.  Both talks necessarily include similar exposition for folks unfamiliar with Matrix, so apologies for the duplication - also, the "future of decentralised communication" talk ended up a bit rushed; 20 minutes is not a lot of time to both explain Matrix and give an overview of the challenges we face in fixing spam, identity, moderation etc.  But if you like hearing overenthusiastic people talking too fast about how amazing Matrix is, you may wish to check out the videos :)  You can also get the slides as PDFs here (E2E Encryption) and here (Future of Decentralisation).

Huge thanks to everyone who came to the talks or came and spoke to us at the stand or around the campus.  We had an amazing time, and are already looking forward to next year!

Matthew & the team

Synapse 0.19 is here, just in time for FOSDEM!

04.02.2017 00:00 — Tech Matthew Hodgson

Hi all,

We're happy to announce the release of Synapse 0.19.0 (same as 0.19.0-rc4) today, just in time for anyone discovering Matrix for the first time at FOSDEM 2017!  In fact, here's Erik doing the release right now (with moral support from Luke):


This is a pretty big release, with a bunch of new features and lots and lots of debugging and optimisation work, following on from some of the dramas that we had with 0.18 over the Christmas break.  The biggest things are:

  • IPv6 Support (unless you have an IPv6 only resolver), thanks to contributions from Glyph from Twisted and Kyrias!
  • A new API for tracking the E2E devices present in a room (required for fixing many of the remaining E2E bugs...)
  • Rewrite the 'state resolution' algorithm to be orders of magnitude more performant
  • Lots of tuning to the caching logic.
If you're already running a server, please upgrade!  And if you're not, go grab yourself a brand new Synapse from GitHub. Debian packages will follow shortly (as soon as Erik can figure out the necessary backporting required for Twisted 16.6.0).

And here's the full changelog...

Changes in synapse v0.19.0 (2017-02-04)

No changes since RC 4.

Changes in synapse v0.19.0-rc4 (2017-02-02)

  • Bump cache sizes for common membership queries (PR #1879)

Changes in synapse v0.19.0-rc3 (2017-02-02)

  • Fix email push in pusher worker (PR #1875)
  • Make presence.get_new_events a bit faster (PR #1876)
  • Make /keys/changes a bit more performant (PR #1877)

Changes in synapse v0.19.0-rc2 (2017-02-02)

  • Include newly joined users in /keys/changes API (PR #1872)

Changes in synapse v0.19.0-rc1 (2017-02-02)

Features:

  • Add support for specifying multiple bind addresses (PR #1709, #1712, #1795, #1835). Thanks to @kyrias!
  • Add /account/3pid/delete endpoint (PR #1714)
  • Add config option to configure the Riot URL used in notification emails (PR #1811). Thanks to @aperezdc!
  • Add username and password config options for turn server (PR #1832). Thanks to @xsteadfastx!
  • Implement device lists updates over federation (PR #1857, #1861, #1864)
  • Implement /keys/changes (PR #1869, #1872)
Changes:
  • Improve IPv6 support (PR #1696). Thanks to @kyrias and @glyph!
  • Log which files we saved attachments to in the media_repository (PR #1791)
  • Linearize updates to membership via PUT /state/ to better handle multiple joins (PR #1787)
  • Limit number of entries to prefill from cache on startup (PR #1792)
  • Remove full_twisted_stacktraces option (PR #1802)
  • Measure size of some caches by sum of the size of cached values (PR #1815)
  • Measure metrics of string_cache (PR #1821)
  • Reduce logging verbosity (PR #1822, #1823, #1824)
  • Don't clobber a displayname or avatar_url if provided by an m.room.member event (PR #1852)
  • Better handle 401/404 response for federation /send/ (PR #1866, #1871)
Fixes:
  • Fix ability to change password to a non-ascii one (PR #1711)
  • Fix push getting stuck due to looking at the wrong view of state (PR #1820)
  • Fix email address comparison to be case insensitive (PR #1827)
  • Fix occasional inconsistencies of room membership (PR #1836, #1840)
Performance:
  • Don't block messages sending on bumping presence (PR #1789)
  • Change device_inbox stream index to include user (PR #1793)
  • Optimise state resolution (PR #1818)
  • Use DB cache of joined users for presence (PR #1862)
  • Add an index to make membership queries faster (PR #1867)

Matrix.org homeserver outage (25th Jan 2017)

25.01.2017 00:00 — Tech Matthew Hodgson

Hi folks,

As many will have noticed, there was a major outage on the Matrix homeserver for matrix.org last night (UK time). This impacted anyone with an account on the matrix.org server, as well as anyone using matrix.org-hosted bots & bridges. As Matrix rooms are replicated across all participating servers, rooms with participants on other servers were unaffected (for users on those servers). Here's a quick explanation of what went wrong (times are UTC):

  • 2017-01-24 16:00 - We notice that we're badly running out of diskspace on the matrix.org backup postgres replica. (Turns out the backup box, whilst identical hardware to the master, had been built out as RAID-10 rather than RAID-5 and so has less disk space).
  • 2017-01-24 17:00 - We decide to drop a large DB index: event_push_actions(room_id, event_id, user_id, profile_tag), which was taking up a disproportionate amount of disk space, on the basis that it didn't appear to be being used according to the postgres stats. All seems good.
  • 2017-01-24 ~23:00 - The core matrix.org team go to bed.
  • 2017-01-24 23:33 - Someone redacts an event in a very active room (probably #matrix:matrix.org), which necessitates redacting the associated push notification from the event_push_actions table. This takes out a lock within persist_event, which is then blocked on deleting the push notification. It turns out that this deletion requires the DB index we had dropped earlier, causing the query to run for hours whilst holding the transaction lock. The symptom is that anything reading events from the DB is blocked on the transaction, causing messages not to be relayed to other clients or servers despite appearing to send correctly. Meanwhile, the fact that events are being received by the server fine (including over federation) makes the monitoring graphs look largely healthy.
  • 2017-01-24 23:35 - End-to-end monitoring detects problems, and sends alerts into PagerDuty and various Matrix rooms. Unfortunately we'd failed to upgrade the PagerDuty trial into a paid account a few months earlier, so the alerts are lost.
  • 2017-01-25 08:00 - Matrix team starts to wake up and spot problems, but confusion over the right escalation process (especially with Matthew on holiday) means folks assume that other members of the team must already be investigating.
  • 2017-01-25 09:00 - Server gets restarted, service starts to resume, although box suffers from load problems as traffic tries to catch up.
  • 2017-01-25 09:45 - Normal service on the homeserver itself is largely resumed (other than bridges; see below)
  • 2017-01-25 10:41 - Root cause established and the redaction path is patched on matrix.org to stop a recurrence.
  • 2017-01-25 11:15 - Bridges are seen to be lagging and taking much longer to recover than expected. Decision made to let them continue to catch up normally rather than risk further disruption (e.g. IRC join/part spam) by restarting them.
  • 2017-01-25 13:00 - All hosted bridges returned to normal.

Obviously this is rather embarrassing, and a huge pain for everyone using the matrix.org homeserver - many apologies indeed for the outage. On the plus side, all the other Matrix homeservers out there carried on without noticing any problems (which actually complicated spotting that things had broken, given many of the core team primarily use their personal homeservers).

In some ways the root cause here is that the core team has been focusing all its energy recently on improving the overall Matrix codebase rather than operational issues on matrix.org itself, and as a result our ops practices have fallen behind (especially as the health of the Matrix ecosystem as a whole is arguably more important than the health of a single homeserver deployment). However, we clearly need to improve things here given the number of people (>750K at the last count) dependent on the Matrix.org homeserver and its bridges & bots.

Lessons learnt on our side are:

  • Make sure that, even when monitoring graphs & thresholds are set up on all the right things, monitoring alerts are actually routed somewhere useful - i.e. phone calls to the team's phones. PagerDuty is now set up and running properly to this end.
  • Make sure that people know to wake up the right people anyway if the monitoring alerting system fails.
  • To be even more paranoid about hotfixes to production at 5pm, especially if they can wait 'til the next day (as this one could have).
  • To investigate ways to rapidly recover bridges without causing unnecessary disruption.

Apologies again to everyone who was bitten by this - we're doing everything we can to ensure it doesn't happen again.

Matthew & the team.