Last month at FOSDEM 2019 we gave a talk about a new experimental ultra-low-bandwidth transport for Matrix which swaps our baseline HTTPS+JSON transport for a custom one built on CoAP+CBOR+Noise+Flate+UDP. (CoAP is the RPC protocol; CBOR is the encoding; Noise powers the transport layer encryption; Flate compresses everything using predefined compression maps.)
The challenge here was to see if we could demonstrate Matrix working usably over networks running at around 100 bits per second of throughput (where it’d take 2 minutes to send a typical 1500 byte ethernet packet!!) and very high latencies. You can see the original FOSDEM talk below, or check out the slides here.
Now, it’s taken us a little while to find time to tidy up the stuff we demo’d in the talk to be (relatively) suitable for public consumption, but we’re happy to finally release the four projects which powered the demo:
https://github.com/matrix-org/meshsim – meshsim is the network simulator which provides an interactive web interface to draw a network topology and let you spin up dockerized homeservers on a simulated network with whatever latency, jitter, packet loss etc. you prefer.
https://github.com/matrix-org/coap-proxy – coap-proxy is the golang proxy which converts HTTPS+JSON into CoAP+CBOR+Noise+Flate and vice versa, letting you squish Matrix CS API and SS API traffic in & out of CoAP.
The meshsim README has all the details you need to get up and running.
It’s important to understand that this is very much a proof of concept: it shouldn’t be used in production yet, and it almost certainly has some glaring bugs. In fact, it currently assumes you are running on a trusted private network rather than the public Matrix network in order to get away with some of the bandwidth optimisations performed – see coap-proxy’s Limitations section for details. In particular, please note that the encryption is homemade and has not yet been audited, fully reviewed or tested. Also, while we’ve released the code for the low-bandwidth transport, we haven’t released the “fan-out routing” implementation for Synapse, as it needs a rethink to be applicable to the public Matrix network. You’ll also want to run Riot/Web in low-bandwidth mode if you really wind down the bandwidth (suppressing avatars, read receipts, typing notifications and presence to avoid wasting precious bandwidth).
We also don’t have an MSC for the CoAP-based transport yet, mainly due to lack of time whilst wanting to ensure the limitations are addressed first before we propose it as a formal alternative Matrix transport. (We also first need to define negotiation mechanisms for entirely alternative CS & SS transports!). However, the quick overview is:
JSON is converted directly into CBOR (with a few substitutions made to shrink common patterns down)
HTTP is converted directly into CoAP (mapping the verbose API endpoints down to single-byte endpoints)
TLS is swapped out for Noise Pipes (XX + IK Noise handshakes). This gives us 1RTT setup (XX) for the first connection to a host and 0RTT (IK) for all subsequent connections, and provides trust-on-first-use semantics when connecting to a server. You can see the Noise state machine we maintain in go-coap’s noise.go.
The CoAP headers are hoisted up above the Noise payload, letting us use them for framing the Noise pipes without duplicated framing headers at the CoAP & Noise layers. We also frame the Noise handshake packets as CoAP with custom message types (250, 251 and 252). We might be better off using OSCORE for this, however, rather than hand-wrapping a custom encrypted transport…
The CoAP payload is compressed via Flate using preshared compression tables derived from compressing large chunks of representative Matrix traffic. This could be significantly improved in future with streaming compression and dynamic tables (albeit seeded from a common set of tables).
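The preset-dictionary idea is easy to sketch with Python’s stdlib zlib, which supports raw DEFLATE with a preshared dictionary (`zdict`). The dictionary below is purely illustrative – the real coap-proxy tables are derived from large corpora of captured Matrix traffic:

```python
import zlib

# Illustrative preset dictionary: byte patterns that recur in Matrix traffic.
# (Not the actual tables shipped with coap-proxy.)
DICT = (b'{"type":"m.room.message","content":{"msgtype":"m.text","body":"'
        b'"event_id":"$","sender":"@","room_id":"!","origin_server_ts":')

msg = b'{"type":"m.room.message","content":{"msgtype":"m.text","body":"hi"}}'

def compress(data, zdict=None):
    # wbits=-15 -> raw DEFLATE (no zlib header), as a transport would use
    if zdict is None:
        c = zlib.compressobj(9, zlib.DEFLATED, -15)
    else:
        c = zlib.compressobj(9, zlib.DEFLATED, -15, zdict=zdict)
    return c.compress(data) + c.flush()

plain = compress(msg)          # no dictionary: little to gain on a short message
primed = compress(msg, DICT)   # dictionary: the common prefix becomes a back-reference
print(len(msg), len(plain), len(primed))
```

Both ends must hold byte-identical dictionaries for decompression to work, which is why the tables are preshared rather than negotiated on the wire.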
The end result is that you end up taking about 90 bytes (including ethernet headers!) to send a typical Matrix message (and about 70 bytes to receive the acknowledgement). This breaks down as:
14 bytes of Ethernet headers
20 bytes of IP headers
8 bytes of UDP headers
16 bytes of Noise AEAD
6 bytes of CoAP headers
~26 bytes of compressed and encrypted CBOR
The Noise handshake on connection setup would take an additional 128 bytes (4x 32 byte Curve25519 DH values), either spread over 1RTT for initial setup or 0RTT for subsequent setups.
At 100bps, 90 bytes takes 90*8/100 = 7.2s to send… which is just about usable in an extreme life and death situation where you can only get 100bps of connectivity (e.g. someone at the bottom of a ravine trying to trickle data over one bar of GPRS to the emergency services). In practice, on a custom network, you could ditch the Ethernet and UDP/IP headers if on a point-to-point link for CS API, and ditch the encryption if the network physical layer was trusted – at which point we’re talking ~32 bytes per request (2.5s to send at 100bps). Then, there’s still a whole wave of additional work that could be investigated, including…
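The figures above are easy to sanity-check with a few lines of arithmetic:

```python
# Reproduce the per-message airtime figures quoted above.
HEADERS = {
    "ethernet": 14,
    "ip": 20,
    "udp": 8,
    "noise_aead": 16,
    "coap": 6,
    "payload": 26,  # compressed and encrypted CBOR
}

def airtime(num_bytes, bps):
    # bytes -> bits, divided by link bitrate, gives seconds on the wire
    return num_bytes * 8 / bps

total = sum(HEADERS.values())
print(total, airtime(total, 100))   # 90 bytes -> 7.2 s at 100 bps

# On a trusted point-to-point link, drop Ethernet/IP/UDP framing and Noise:
trimmed = HEADERS["coap"] + HEADERS["payload"]
print(trimmed, airtime(trimmed, 100))   # 32 bytes -> 2.56 s at 100 bps
```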
Smarter streaming compression (so that if a user says ‘Hello?’ three times in a row, the 2nd and 3rd messages are just references to the first pattern)
Hoisting Matrix transaction IDs up to the CoAP layer (reusing the CoAP msgId+token rather than passing around new Matrix transaction IDs, at the expense of requiring one Matrix txn per request)
Switching to CoAP OBSERVE for receiving data from the server (currently we long-poll /sync to receive data)
Switching access_tokens for PSKs or similar
…all of which could shrink the payload down even further. That said, even in its current state, it’s a massive improvement – roughly 65x better than the equivalent HTTPS+JSON traffic.
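The streaming-compression idea (where a repeated ‘Hello?’ becomes a back-reference to the first occurrence) can be sketched with a single long-lived zlib stream – an illustration of the concept, not the actual planned design:

```python
import zlib

# One long-lived compressor per connection: later messages can reference
# byte patterns already seen earlier in the stream.
c = zlib.compressobj(9, zlib.DEFLATED, -15)

def send(msg):
    # Z_SYNC_FLUSH emits a complete, decodable chunk without ending the stream.
    return c.compress(msg) + c.flush(zlib.Z_SYNC_FLUSH)

event = b'{"type":"m.room.message","content":'
event += b'{"msgtype":"m.text","body":"Hello?"}}'
sizes = [len(send(event)) for _ in range(3)]
print(sizes)  # the 2nd and 3rd chunks are just back-references into the stream
```

The receiving side would keep a matching long-lived decompressor, so both ends’ windows stay in sync for the lifetime of the connection.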
In practice, further work on low-bandwidth Matrix is dependent on finding a sponsor who’s willing to fund the team to focus on this, as otherwise it’s hard to justify spending time here in addition to all the less exotic business-as-usual Matrix work that we need to keep the core of Matrix evolving (finishing 1.0, finishing E2E encryption, speeding up Synapse, finishing Dendrite, rewriting Riot/Android etc). However, the benefits here should be pretty obvious: massively reduced bandwidth usage and improved battery life; resilience to catastrophic network conditions; faster sync times; and even a protocol suitable for push notifications (Matrix as e2e encrypted, decentralised, push!). If you’re interested in supporting this work, please contact support at matrix.org.
Heads up that Modular.im (the paid hosting Matrix service provided by New Vector, the company who employs much of the Matrix core team) launched a pilot today for paid Matrix integrations in the form of paid sticker packs. Yes kids, it’s true – for only $0.50 you can slap Matrix and Riot hex stickers all over your chatrooms. It’s a toy example to test the payments infrastructure and demonstrate the concept – the proceeds go towards funding development work on Matrix.org :) You can read more about it over on Modular’s blog.
We wanted to elaborate on this a bit from the Matrix.org perspective, specifically:
We are categorically not baking payments or financial incentives as a first class citizen into Matrix, and we’re not going to start moving stuff behind paywalls or similar.
This demo is a proof-of-concept to illustrate how folks could do this sort of thing in general in Matrix – it’s not a serious product in and of itself.
What it shows is that an Integration Manager like Modular can be used as a way to charge for services in Matrix – whether that’s digital content within an integration, or bots/bridges/etc.
While Modular today gathers payments via credit-card (Stripe), it could certainly support other mechanisms (e.g. cryptocurrencies) in future.
The idea in future is for Modular to provide this as a mechanism that anyone can use to charge for content on Matrix – e.g. if you have your own sticker pack and want to sell it to people, you’ll be able to upload it and charge people for it.
Meanwhile, there’s a lot of interesting stuff on the horizon with integration managers in general – see MSC1236 and an upcoming MSC from TravisR (based around https://github.com/matrix-org/matrix-doc/issues/1286) proposing new integration capabilities. We’re also hoping to implement inline widgets soon (e.g. chatbot buttons for voting and other semantic behaviour) which should make widgets even more interesting!
So, feel free to go stick some hex stickers on your rooms if you like and help test this out. In future there should be more useful things available :)
You can see the full announcement and explanation over at https://dot.kde.org/2019/02/20/kde-adding-matrix-its-im-framework. It is fantastic to see one of the largest Free Software communities out there proactively adopting Matrix as an open protocol, open network and FOSS project, rather than drifting into a proprietary centralised chat system. It’s also really fun to see Riot 1.0 finally holding its own as a chat app against the proprietary alternatives!
This doesn’t change the KDE rooms which exist in Matrix today or indeed the KDE Freenode IRC channels – so many of the KDE community were already using Matrix, all the rooms already exist and are already bridged to the right places. All it means is that there’s now a shiny new homeserver (powered by Modular.im) on which KDE folk are welcome to grab an account if they want, rather than sharing the rather overloaded public matrix.org homeserver. The rooms have been set up on the server to match their equivalent IRC channels – for instance, #kde:kde.org is the same as #kde on Freenode; #kde-devel:kde.org is the same as #kde-devel etc. The rooms continue to retain their other aliases (#kde:matrix.org, #freenode_#kde:matrix.org etc) as before.
You may have heard that we recently published the first stable release of the Server to Server Spec (r0.1). The spec makes some changes which are not compatible with the protocol of the past – particularly, self-signed certificates are no longer valid for homeservers. Synapse 1.0.0 will be compliant with r0.1 and the goal of Synapse 0.99.0 is to act as a stepping stone to Synapse 1.0. Synapse 0.99.0 supports the r0.1 release of the server to server specification, but is compatible with both the legacy Matrix federation behaviour (pre-r0.1) as well as post-r0.1 behaviour, in order to allow for a smooth upgrade across the federation.
It is critical that all admins upgrade to 0.99.0 and configure a valid TLS certificate. Admins will have 1 month to do so, after which 1.0.0 will be released and those servers without a valid certificate will no longer be able to federate with >= 1.0.0 servers.
First of all, please don’t panic :) We have taken steps to make this process as simple as possible – specifically implementing ACME support to allow servers to automatically generate free Let’s Encrypt certificates if you choose to. What’s more, it is not necessary to add the certificate right away, you have at least a month to get set up.
For more details on exactly what you need to do (and also why this change is essential), we have provided an extensive FAQ as well as the upgrade notes for Synapse.
This was a huge effort! Congratulations to all involved, especially those of you in the community who contributed to spec MSCs and tested our release candidates. Thank you for bearing with us as we move the whole public Matrix Federation onto r0.1 compliant servers.
Synapse v0.99.x is a precursor to the upcoming Synapse v1.0 release. It contains foundational changes to room architecture and the federation security model necessary to support the upcoming r0 release of the Server to Server API.
Synapse’s cipher string has been updated to require ECDH key exchange. Configuring and generating dh_params is no longer required, and they will be ignored. (#4229)
We just got back from braving the snow in Brussels at FOSDEM 2019 – Europe’s biggest Open Source conference. I think it’s fair to say we had an amazing time, with more people than ever before wanting to hang out and talk Matrix and discuss their favourite features (and bugs)!
The big news is that we released r0.1 of Matrix’s Server-Server API late on Friday night – our first ever formal stable release of Matrix’s Federation API, having addressed the core of the issues which have kept Federation in beta thus far. We’ll go into more detail on this in a dedicated blog post, but this marks the first ever time that all of Matrix’s APIs have had an official stable release. All that remains before we declare Matrix out of beta is to release updates of the CS API (0.5) and possibly the IS API (0.2) and then we can formally declare the overall combination as Matrix 1.0 :D
We spoke about SS API r0.1 at length in our main stage FOSDEM talk on Saturday – as well as showing off the Riot Redesign, the E2E Encryption Endgame and giving an update on the French Government deployment of Matrix and the focus it’s given us on finally shipping Matrix 1.0! For those who weren’t there or missed the livestream, here’s the talk! Slides are available here.
Then, on Sunday we had the opportunity to have a quick 20 minute talk in the Real Time Comms dev room, where we gave a tour of some of the work we’ve been doing recently to scale Matrix down to working on incredibly low bandwidth networks (100bps or less). It’s literally the opposite of the Matrix 1.0 / France talk in that it’s a quick deep dive into a very specific problem area in Matrix – so, if you’ve been looking forward to Matrix finally having a better transport than HTTPS+JSON, here goes! Slides are available here.
Huge thanks to everyone who came to the talks, and everyone who came to the stand or grabbed us for a chat! FOSDEM is an amazing way to be reminded in person that folks care about Matrix, and we’ve come away feeling more determined than ever to make Matrix as great as possible and provide a protocol+network which will replace the increasingly threatened proprietary communication silos. :)
Since joining the core team as Developer Advocate last year it’s been quite a ride. One of the best things about the job is getting the chance to talk to so many people about their projects and what they would like to see happen in the Matrix ecosystem. With so much going on, I just want to say thanks to everyone who has been so welcoming to me and share some of my personal highlights, as I recall them, from 2018!
Fractal was featured in the very first TWIM, announcing v1.26. Since then, the team have hosted two IRL hackfest events (Strasbourg and Seville – where to next, Stockholm? Salisbury?), engaged two GSOC students and continued to push out releases. At this point, Fractal is a full-featured Matrix client for GNOME.
Matrique became Spectral, and is generally awesome. Apparently the name “Matrique” was chosen because it sounds French, but those who speak the language well revealed that this name was not ideal! The project was re-named “Spectral”, and is going strong. I really appreciate the multi-user facility! It’s a great looking client, and runs great on macOS too (protip: get more attention from /me by providing a macOS build…)
On which subject, Seaglass is a native macOS client. First announced in June, this client supports E2EE rooms (via matrix-ios-sdk), and is also available on homebrew.
Ubuntu Touch has the most Matrix clients per user of any platform. UT epitomises the resilience and collaborative spirit of Open Source. It’s a true community maintenance effort, and is as friendly a community as you might meet. uMatriks came first, but it’s FluffyChat that prompted me to install it on my battered old OnePlus One. FluffyChat is now extremely full-featured, with E2EE support being actively discussed.
In the command line, gomuks appeared and quickly became a competent client, but in terms of sheer enthusiasm and momentum, I must give commendation to matrix-client.el, a newly revived mode for Emacs which turns your editor/OS into a great Matrix client. I enjoyed using it enough that it began to change my mind about using emacs. Laptops have more than 8MB of memory these days anyway.
A culture of bots
There is a tendency in the community to build a bot for everything and anything. This has reached the point where there are multiple flairs available depending on what bots you like to make (silly vs serious.)
TravisR was perhaps the first person I saw to get the obsession.
In June tulir started maubot, a plugin-based bot system built in Python, which now also has a management UI.
All bridges lead to Matrix
Or from Matrix, depending on which way you want to send the message.
Around May, I started to notice another obsession brewing in the community. Bridging is a core part of the Matrix mission, but it was around this time I started seeing it in the wild.
In summer 2018, Half-Shot began working in the Matrix core team, and was hugely productive in maintaining and developing the bridge infrastructure for matrix.org. IRC bridging is far more stable and reliable now than it was a year ago. And yet there are still more bridges – too many to list, so I’m picking the ones I’ve used and enjoyed.
SMSMatrix, a phone-hosted bridge is simple and works great for SMS bridging.
Libraries, SDKs, Frameworks
I enjoyed using matrix-js-bot-sdk for building elizabot (more coverage needed for that!), and the SDK recently received support for application services.
In April, kitsune announced v0.2 of libqmatrixclient describing it as “the first one more or less functional and stable” – confidence! This library now powers both Quaternion and Spectral. QMatrixClient has continued to get updates, plus features including lazy loading and VoIP signalling.
There are a few libs I want to pay more attention to this year, starting with tulir‘s maubot now that it has been rewritten in Python. I’m also excited to see jmsdk, part of ma1uta‘s broader ecosystem of Matrix tooling – a Java-based SDK.
Until around June, Ruma was receiving regular updates. There was a pause as the team waited for Rust async/await to land, and also to get some stability in the Matrix Spec. Still waiting on Rust, but now that the Matrix Spec is stabilising, Ruma is showing signs of life too. I have also been watching other homeserver projects begin to restart, which makes for a great start to 2019.
DSN Traveller by Florian
Matrix was featured as part of a Master’s thesis by Florian Jacob.
DSN Traveller tries to get a rough overview of how the Matrix network is structured today. It records how many rooms it finds, how many users and servers take part in those rooms, and how they relate to each other – meaning how many users a server has and how many rooms it is part of.
Synapse dominates the homeserver space right now, so if you want to host your own homeserver today it’s the obvious choice. Too great a variety of installation guides was doing more harm than good, so Stefan took the initiative to create a definitive community-driven Synapse installation guide, including a room to discuss and improve the text. Find the guide linked from here, and chat about the guide in #synapseguide:matrix.org.
I want to use Matrix, and I want to host my own homeserver. As such, matrix-docker-ansible-deploy is a project I absolutely love. It uses Synapse docker images from the Matrix core team, and combines them with Ansible playbooks written and organised by Slavi. It lets you quickly deploy everything needed for a Synapse homeserver, and it’s simple enough that even I can use it.
Having a Matrix-native mode for shields.io (those counter/indicator images you often see at the top of repos) seems like something petty at first, but it’s actually a great indicator of the importance of Matrix from the outside. Plus, I love seeing the images at the top of different repos. Thanks Brendan for helping this along.
Two students worked on Matrix-related projects during GSOC 2018.
Thanks for a great 2018. There was so much to learn about, so much to write about, and so many great community members to meet and chat to! If I didn’t mention your project, I’m sorry to have been either forgetful or to not be able to include everything.
If you think I’ve missed something, or if there’s a project I should have included rather than another, or even if you just disagree with my choices, let’s discuss it in #twim:matrix.org. See you there, and let’s all parade ahead to a productive, open, interoperable 2019!