This year, the Matrix.org Foundation is excited to host the first ever in-person
Matrix.org Foundation and Community devroom at FOSDEM: half a day of talks,
demos and workshops around Matrix itself and projects built on top of it.
We encourage people working on the Matrix protocol or building on it in an open
source project to submit a proposal! Note that companies are welcome to talk
about the Matrix details of their open source projects, but marketing talks are
not welcome.
This call for participation is only about the physical devroom. A separate CfP
will be issued for the online devroom once there are more details about it.
Key dates are:
Conference dates: 4-5 February 2023, in person
Matrix devroom date: the morning of Sunday 5 February, in person (online
devroom to be announced)
Submission deadline: Monday 5th December
Announcement of selected talks: Thursday 15th December
You must be available in person to present your talk for the physical devroom.
Talk Details
Talks in the physical devroom can follow one of these two formats:
20 min talk + 10 min Q&A, for topics that can be covered briefly
50 min talk + 10 min Q&A for more complex subjects which need more focus
We strongly encourage you to prepare a demo when it makes sense, so people can
actually see what your work looks like in practice!
Of course, the proposal must respect the FOSDEM terms as well:
The conference language is English. All content must relate to Free and Open
Source Software. By participating in the event you agree to the publication of
your recordings, slides and other content provided under the same licence as
all FOSDEM content (CC-BY).
We expect to receive more requests than we have slots available. The devroom
organisers will be reviewing the proposals and accepting them based on the
potential positive impact the project has on Matrix (as defined by the
Mission section of https://matrix.org/foundation).
If a proposal is turned down, it doesn't mean we don't believe the project
has good potential. Maintainers are invited to join the #twim:matrix.org
Matrix room to give their project some visibility.
As of Synapse 1.69, we consider "faster remote room joins" to be ready for testing by server admins.
There are a number of caveats, which I'll come to, but first: this is an important step in a project which we've been working on for 9 months. Most people who use Matrix will be familiar with the pain of joining a large room over federation: typically you are just faced with a spinner, which is eventually replaced by a cryptic error. If you're lucky, the room eventually pops up in your room list of its own accord. The whole experience is one of the longest-standing open issues in Synapse.
At the end of each year it's been traditional to do a big review of everything that the Matrix core team got up to that year, and to announce our predictions for the next. You can see the last edition from 2021 here - and if you're feeling nostalgic you can head down memory lane with the 2020, 2019 and 2018 editions too.
This year is turning out to be slightly different, however. Our plans for 2022 are particularly ambitious: to force a step change in Matrix's performance and usability, so that we firmly transition from our historical "make it work" and "make it work right" phases into "make it fast". Specifically: for Matrix to succeed, it has to power apps which punch their weight in terms of performance and usability against the proprietary centralised alternatives of WhatsApp, Discord, Slack and friends.
We’ve seen an absolute tonne of work happening on this so far this year… and somehow the end results all seem to be taking concrete shape at roughly the same time, despite summer traditionally being the quietest point of the year. The progress is super exciting and we don’t want to wait until things are ready to enthuse about them, and so we thought it’d be fun to do a spontaneous Summer Special gala blog post so that everyone can follow along and see how things are going!
We have always focused on first making Matrix “work right” before we make it “work fast” - sometimes to a fault. After all: the longer you build on a given architecture the harder it becomes to swap it out down the line, and the core architecture of Matrix has remained essentially the same since we began in 2014 - frankly it’s amazing that the initial design has lasted for as long as it has.
Over the years we’ve done a lot of optimisation work on the core team implementations of that original architecture - whether that’s Synapse or matrix-{js,react,ios,android}-sdk and friends: for instance Synapse uses 5-10x less RAM than it used to (my personal federated server is only using 145MB of RAM atm! 🤯) and it continues to speed up in pretty much every new release (this PR looks to give a 1000x speedup on calculating push notification actions, for instance!). However, there are some places where Matrix’s architecture itself ends up being an embarrassingly slow bottleneck: most notably when rapidly syncing data to clients, and when joining rooms for the first time over federation. We’re addressing these as follows…
Historically, /sync always assumed that the client would typically want to know about all the conversations its user is in - much as an IRC client or XMPP client is aware of all your current conversations. This provided some nice properties - such as automatically enabling perfect offline support, simplifying client and server development, and making features like “jump to room” and “tab complete” work instantly given the data is all client-side. In the early days of Matrix, when nobody was yet a power user, this wasn’t really a problem - but as users join more conversations and join bigger rooms, it’s become one of Matrix’s biggest performance bottlenecks. In practice, logging into a large account (~4000 rooms) can take ~10 minutes and hundreds of megabytes of network traffic, which is clearly ridiculous. Worse: if you go offline for a day or so, the incremental sync to catch back up can take minutes to calculate (and can even end up being worse than an initial sync).
To fix this, we started work on Sliding Sync (MSC3575) in 2021: a complete reimagining of the /sync API used by Matrix clients to receive data from their homeserver. In Sliding Sync, we only send the client the data it needs to render its UI. Most importantly, we only tell it about the subset of rooms which is visible in the scroll window of its room list (or which it needs to display notifications about). As the user scrolls around the room list, the window slides up and down - hence the name "sliding sync". Sliding Sync was originally called Sync v3, given it's our 3rd iteration of the sync API - it was renamed to Sliding Sync because the current sync API confusingly ended up with a prefix of /v3.
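To make that shape a little more concrete, here's a rough sketch of what a Sliding Sync request body looks like. The field names below follow MSC3575 as it stood at the time of writing, but treat them as illustrative: the MSC is still iterating and remains the source of truth.

```typescript
// Illustrative only: the exact endpoint and field names are defined by MSC3575
// and have changed between iterations - check the MSC for the current schema.
const slidingSyncRequest = {
  lists: {
    // Only ask about the rooms currently visible in the room list viewport...
    visible_rooms: {
      ranges: [[0, 20]],                   // ...i.e. the first 21 entries
      sort: ["by_notification_level", "by_recency"],
      required_state: [                    // just enough state to render a row
        ["m.room.avatar", ""],
        ["m.room.name", ""],
      ],
      timeline_limit: 1,                   // latest event, for the room preview
    },
  },
  // Rooms the user has actually opened get a richer subscription.
  room_subscriptions: {
    "!abc123:example.org": {
      required_state: [["*", "*"]],
      timeline_limit: 50,
    },
  },
  // Extensions split out other classes of data (E2EE, to-device, account data...).
  extensions: {
    e2ee: { enabled: true },
    to_device: { enabled: true },
  },
};
```

The server then responds with just the rooms in those ranges (plus any explicit subscriptions and extension data), rather than everything the account has ever touched.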
Back in December our work on Sliding Sync was still pretty early: we had the initial MSC, an experimental proxy that converted the existing sync v2 API into Sliding Sync, and a simple proof-of-concept web client to exercise it. Since then, however, there has been spectacular progress:
MSC3575 has undergone some big iterations as we converge on the optimal API shape.
The sliding-sync proxy has matured to be something which we’re now running in stealth against matrix.org for those dogfooding the API
We added the concept of extensions to split out how to sync particular classes of data (to avoid the API becoming a monolithic monster) - specifically:
Account Data
End-to-end Encryption
To-device messages
Ephemeral events (to be done)
Presence (to be done)
We added support for spaces!
We implemented it in matrix-js-sdk (which merged a few weeks ago!)
But most importantly, we’ve also been busy implementing Sliding Sync in Element Web itself so we can start using it for real. Now, this is still a work in progress, but as of today it’s just getting to the point where one can experiment with it as a daily driver (although it’s definitely alpha and we’re still squishing bugs like crazy!) - and we can see just how well it really performs in real life.
For instance, here’s a video of my account (4055 rooms, redacted for privacy) logging in on an entirely fresh browser via Sliding Sync - the actual sync takes roughly 1 second (at 00:18 in the video). And if we’d started the sync operation while the user is setting up E2E encryption, it would have completed in the background before they even got to the main screen, giving instant login(!). Given my account typically takes ~10 minutes to initial sync (plus even more time for encryption to sync up), this is at least a real-life 600x improvement. Moreover, the sync response is only 20KB (a ~5000x improvement) - a huge win for low-bandwidth Matrix situations.
Then, having logged in, the client subsequently launches pretty much instantly, no matter how long you’ve been offline. Total launch time is roughly 4 seconds, most of which is loading the app’s assets - which in turn could well be improved by progressively loading the app. It could also be sped up even more if we cached state locally - currently the implementation simply reloads from the server every time the app launches rather than maintaining a local cache.
As you can see, this is palpably coming together, but there’s still a bunch of work to be done before we can encourage folks to try it, including:
Switching the RoomList to be fully backed by sliding sync (currently the v2 roomlist is jury-rigged up to the sliding sync API, causing some flakey bugs such as duplicate rooms)
Spec and hook up typing / receipts / presence extensions
Hook up favourites, low_priority, knocked and historical rooms
Adding back in loading room members
Apply quality-of-service to to-device messages so we prioritise ones relevant to the current sliding window
Sync encrypted rooms in the background to search for notifications (and for indexing).
More local caching to speed up operations which now require checking the server (e.g. Ctrl/Cmd-K room switching)
We also need to determine whether it’s viable to run the sliding-sync proxy against matrix.org for general production use, or whether we’ll need native support in Synapse before we can turn it on by default for everyone. But these are good problems to have!!
Meanwhile, over in the land of Rust, we’ve been making huge progress in maturing and stabilising matrix-rust-sdk and exercising it in Element X: the codename for the next generation of native Element mobile apps. Most excitingly, we literally just got the first (very alpha) cut of Sliding Sync working in matrix-rust-sdk and hooked up to Element X on iOS - you can see Ștefan’s demo from last week here:
matrix-rust-sdk itself is now getting a steady stream of releases - including long-awaited official node bindings, providing excellent and performant encryption support via the newly audited vodozemac Rust implementation of Olm. It’s also great to see loads of major contributions to matrix-rust-sdk from across the wider Matrix community - particularly from Ruma, Fractal, Famedly and others - thank you!! As a result the SDK is shaping up to be much more healthy and heterogeneous than the original matrix-{js,ios,android}-sdk projects.
On Element X itself: matrix-rust-sdk is being used first on iOS in Element X iOS - aiming first for launching a stable “barbecue” feature set (i.e. personal messaging) asap, followed by adding on “banquet” features (i.e. team collaboration) such as spaces and threads afterwards. We’ve shamelessly misappropriated the barbecue & banquet terminology from Tobias Bernard’s excellent blog post “Banquets and Barbecues” - although, ironically, unlike the post, our plan is still to have a single app which incrementally discloses the banquet functionality as the user’s barbecue starts to sprawl. We’ve just published the brand new development roadmap for Element X from the rust-sdk perspective on GitHub. Above all else, the goal of Element X is to be the fastest mobile messenger out there in terms of launch and sync time, thanks to Sliding Sync. Not just for Matrix - but the fastest messenger, full stop :D Watch this space to see how we do!
Finally: Element is getting a major redesign of the core UI on both iOS and Android - both for today’s Element and Element X. I’m not going to spoil the final result (which is looking amazing) given it’ll have a proper glossy launch in a few weeks, but you can get a rough idea based on the earlier design previewed by Amsha back in June:
In addition to the upcoming overall redesign, Element also landed a complete rework of the login and registration flows last week on iOS and Android - you can see all about it over on the Element blog.
In terms of performance, the other area that we’re reworking at the protocol level is room joins.
One of the most glaring shortcomings of Matrix happens when a new server admin excitedly decides to join the network, installs a homeserver, tries to join a large room like #matrix:matrix.org, and then looks on in horror as it takes 10+ minutes to join the room, promptly despairs of Matrix being slow and complains bitterly about it all over HN and Reddit :)
The reason for the current behaviour is that Matrix rooms are replicated between the servers which participate in them - and in the initial design of the protocol we made that replication atomic. In other words, a new server joining a room picks a server from which to acquire the room (typically the one in the room's alias), and gets sent a copy of all the state events (i.e. structural data) about the room, as well as the last 20 or so messages. For a big room like Matrix HQ, this can be massive - right now, there are 79,969 state events in the room - and 126,510 auth_chain events (i.e. the events used to justify the existence of the state events). The reason there are so many is typically that the act of a user joining or parting the room is described by a state event - and in the naive implementation, the server needs to know all the current state events in the room (including those for parted users) in order to keep in sync with the other servers in the room and faithfully authorise each new event it receives for that room.
However, each event is typically around 500 bytes in size, and so the act of joining a big room could require generating, transmitting, receiving, authenticating and storing up to 100MB of JSON 😱. This is why joining big rooms for the first time is so painfully slow.
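For the record, here's the back-of-the-envelope arithmetic behind that figure, using the event counts above and the (very rough) 500-byte average:

```typescript
// Back-of-the-envelope check of the "up to 100MB" figure quoted above.
const stateEvents = 79_969;
const authChainEvents = 126_510;
const avgEventSizeBytes = 500; // rough average, as noted above

const totalEvents = stateEvents + authChainEvents;              // 206,479 events
const totalMegabytes = (totalEvents * avgEventSizeBytes) / 1e6; // ≈ 103 MB
console.log(`${totalEvents} events ≈ ${Math.round(totalMegabytes)} MB of JSON`);
```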
Happily, there is an answer: much as Sliding Sync lets clients synchronise with the bare minimum of data required to render their UI, we've created MSC3706 (and its precursor MSC2775) to rework the protocol so that servers receive only the bare minimum of state needed to participate in a room. Practically speaking, we only really care about events relevant to the users who are currently participating in the room; the 40,000 other lurkers can be incrementally synced in the background so that our membership list is accurate - but they shouldn't block us from being able to join or read (or peek) the room. We already have membership lazy-loading in the client-server API to support incrementally loaded membership data, after all.
The problem with this change is that Synapse was written from the outset to assume that each room’s state should be atomically complete: in other words, room state shouldn’t incrementally load in the background. So the work for Faster Joins has been an exercise in auditing the entirety of Synapse for this assumption, and generally reviewing and hardening the whole state management system. This has been loads of work that has been going on since the end of last year - but the end is in sight: you can see the remaining issues here.
As of right now, faster joins work (although they aren't enabled by default) - with the main proviso that you can't speak in the room until the background sync has completed, and the new implementation has not yet been optimised. However, thanks to all the preparation work, that optimisation should be relatively straightforward, so the end is in sight on this one too.
In terms of performance: right now, joining Matrix HQ via the unoptimised implementation of faster joins completes on a fresh server in roughly 30 seconds - so a ~25x improvement over the ~12 minutes we’ve seen previously. However, the really exciting news is that this only requires synchronising 45 state events and 107 auth_chain events to the new server - a ~1400x improvement! So there should be significant scope for further optimising the calculation of these 152 events, given 30 seconds to come up with 152 events is definitely on the high side. In our ideal world, we’d be getting joins down to sub-second performance, no matter how big the room is - once again, watch this space to see how we do.
Finally, alongside faster remote joins, we’re also working on faster local joins. This work overlaps a bit with the optimisation needed to speed up the faster remote join logic - given we are seeing relatively simple operations unexpectedly taking tens of seconds in both instances. Some of this is needing to batch database activity more intelligently, but we also have some unknown pauses which we’re currently tracking down. Profiling is afoot, as well as copious Jaeger and OpenTracing instrumentation - the hunt is on!
All the work above describes some pretty bold changes to speed up Matrix and improve usability - but in order to land these changes with confidence, avoiding regressions both now and in future, we have really levelled up our testing this year.
Looking at matrix-react-sdk as used by Element Web/Desktop: all PRs made to matrix-js-sdk must now pass 80% unit test coverage for new code (measured using Sonarqube, enforced as a GitHub PR check). All matrix-react-sdk PRs must be accompanied by a mix of unit tests, end-to-end tests (via Cypress) and screenshot tests (via percy.io). All regressions (in both nightly and stable) are retro'd to ensure fixed things stay fixed (usually by writing new tests), and we have converted fully to TypeScript for full type safety.
Concretely, since May, we’ve increased js-sdk unit test coverage by ~10% globally, increased react-sdk coverage by ~17%, and added ever more Cypress integration tests to cover the broad strokes. Cypress now completely replaces our old Puppeteer-based end-to-end tests, and Sliding Sync work in matrix-react-sdk is being extensively tested by Cypress from the outset (the Sliding Sync PR literally comes with a Cypress test suite).
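For a flavour of what those Cypress tests look like, here's a minimal hypothetical example - the URL and selectors are illustrative placeholders rather than anything copied from the real matrix-react-sdk suite:

```typescript
// Hypothetical Cypress end-to-end test: the URL and selectors below are
// illustrative placeholders, not taken from the actual matrix-react-sdk suite.
describe("room list", () => {
  it("shows a newly joined room", () => {
    cy.visit("http://localhost:8080");                 // a local Element Web dev build
    cy.get('[data-testid="explore-rooms"]').click();   // hypothetical test id
    cy.get('[data-testid="room-search"]').type("#example-room:localhost");
    cy.contains("Join").click();
    // The room should now appear in the room list sidebar.
    cy.contains("example-room").should("be.visible");
  });
});
```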
In mobile land, the situation is more complex given our long-term strategy is to deprecate matrix-ios-sdk and matrix-android-sdk2 in favour of matrix-rust-sdk. matrix-rust-sdk has always had excellent coverage, and in particular, adopting the crypto module in the current matrix-{js,ios,android}-sdk will represent a night and day improvement for quality (not to mention perf!). We’ll also be adopting PR checks, and screenshot testing for the mobile SDKs.
On the backend, we continue to build out test cases for our new integration tester Complement (in Golang), alongside the original sytest integration test suite (in Perl). In particular, we can now test Synapse in worker mode. The intention with Complement is that it should be homeserver agnostic so that any homeserver implementation can benefit. Indeed the project was initiated by Kegan wearing his Dendrite hat.
Finally, we’ve had a huge breakthrough with true multi-client end-to-end testing in the form of Michael Kaye’s brand new Traffic Light project. For the first time, we can fully test things like cross signing and verification and VoIP calls end-to-end across completely different platforms and different clients. It’s early days yet, but this really will be a game changer, especially for crypto and VoIP.
Next up, we will turn our attention to a performance testing framework so that we can reliably track performance improvements and regressions in an automated fashion - heavily inspired by Safari’s Page Load Test approach. This will be essential as we build out new clients like Element X.
All the stuff above is focused on improving the core performance and usability of Matrix - but in parallel we have also been making enormous progress on entirely new features and capabilities. The following isn’t a comprehensive list, but we wanted to highlight a few of the areas where new development is progressing at a terrifying rate…
2022 is turning out to be the year that Matrix finally gets fully native voice/video conferencing. After speccing MSC3401 at the end of last year, Element Call Beta 1 launched as a reference implementation back in March, followed by enabling E2EE, spatial audio and walkie-talkie mode in Element Call Beta 2 in June.
However, the catch was that Element Call beta 1 and 2 only ever implemented “full mesh” conferencing - where every participant calls every other participant simultaneously, limiting the size of the conference to ~7 participants on typical hardware, and wasting lots of bandwidth (given you end up sending the same upstream multiple times over for all the other participants). Element Call has been working surprisingly well in spite of this, but the design of MSC3401 was always to have “foci” (the plural of ‘focus’ - i.e. conference servers) to optionally sit alongside homeservers in order to aggregate the participating calls, a bit like this:
With foci, clients only need to send their upstream to their local focus, rather than replicating it across all the other participants - and the focus can then fan it out to other foci or clients as required. In fact, if no other clients are even watching your upstream, then your client can skip sending an upstream to its focus entirely!
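As a toy illustration of the routing decision a focus makes (this is emphatically not the real Pion-based SFU, which also has to worry about congestion control, simulcast, cascading and so on), the core idea is simply "forward each published stream only to the clients subscribed to it":

```typescript
// Toy sketch of selective forwarding - not the real matrix-org SFU, which is
// built on Pion in Go and handles congestion control, simulcast, cascading etc.
type ClientId = string;
type StreamId = string;

class ToyFocus {
  // Which clients are currently watching which published stream.
  private subscriptions = new Map<StreamId, Set<ClientId>>();

  subscribe(client: ClientId, stream: StreamId) {
    if (!this.subscriptions.has(stream)) this.subscriptions.set(stream, new Set());
    this.subscriptions.get(stream)!.add(client);
  }

  // Returns the clients a media packet for `stream` should be fanned out to.
  // If nobody is watching, the publisher needn't even send us the stream.
  route(stream: StreamId, sender: ClientId): ClientId[] {
    const watchers = this.subscriptions.get(stream) ?? new Set<ClientId>();
    return [...watchers].filter((client) => client !== sender);
  }
}
```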
Most importantly, foci are decentralised, just like Matrix: there is no single conference server as a single point of control or failure responsible for powering the group call - users connect to whichever focus is closest to them, and so you automatically get a standards-based heterogeneous network-split-resilient geographically distributed cascading conferencing system with end-to-end-encryption, powered by a potentially infinite number of different implementations. To the best of our knowledge, this is the first time someone’s proposed an approach like this for decentralised group calling (watch out, Zoom, we’re coming for you!)
Now, the VoIP team have been busy polishing Element Call (e.g. chasing down end-to-end encryption edge cases and reliability), and also figuring out how to embed it into Element and other Matrix clients as a quick way to get excellent group VoIP (more on that later). As a result, work on building out foci for scalable conferencing had to be pushed down the line.
But in the last few months this completely changed, thanks to an amazing open source contribution from Sean DuBois, project lead over at Pion - the excellent Golang WebRTC implementation. Inspired by our initial talk about MSC3401 at CommCon, Sean independently decided to see how hard it'd be to build a Selective Forwarding Unit (SFU) focus that implemented MSC3401 semantics using Pion - and published it at https://github.com/sean-der/sfu-to-sfu (subsequently donated to github.com/matrix-org). In many ways this was a flag day for Matrix: it's the first time that a core-team MSC has been implemented first by someone outside the core team (let alone from outside the Matrix community!). It's the VoIP equivalent of Synapse starting off life as a community contribution rather than being written by the core team.
Either way: Sean’s SFU work has opened the floodgates to making native Matrix conferencing actually scale, with Šimon Brandner and I jumping in to implement SFU support in matrix-js-sdk… and as of a few weeks ago we did the first ever SFU-powered Matrix call - which worked impressively well for 12 participants!
Now, this isn’t released yet, and there is still work to be done, including:
We actually need to select the subset of streams we care about from the focus
We need to support thumbnail streams as well as high-res streams
We need rate control to ensure clients on bad connections don’t get swamped
We need to hook up cascading between foci (although the SFU already supports it!)
We need E2EE via insertable streams
We need faster signalling for switching between streams
You can see the full todo list for basic and future features over on GitHub. However, we’re making good progress thanks to Šimon’s work and Sean’s help - but with any luck beta 3 of Element Call might showcase SFU support!
Meanwhile it’s worth noting that Element Call is not the only MSC3401 implementation out there - the Hydrogen team has added native support to Hydrogen SDK too (skipping over the old 1:1 calling), so expect to see Element <-> Hydrogen calling in the near future. The Hydrogen implementation is also what powers Third Room (see below…)
Elsewhere on VoIP, we’ve also been hard at work figuring out how to embed Element Call into Matrix clients in general, starting with Element Web, iOS & Android. Given MSC3401 is effectively a superset of native 1:1 Matrix VoIP calling, we’d ideally like to replace the current 1:1-only VoIP implementation in Element with an embedded instance of Element Call (not least so we don’t have to maintain it in triplicate over Web/iOS/Android, and because WebRTC-in-a-webview really isn’t very different to native WebRTC). To do this efficiently however, the embedded Element Call needs to share the same underlying Matrix client as the parent Element client (otherwise you end up wasting resources and devices and E2EE overhead between the two). Effectively Element Call ends up needing to parasite off the parent’s client. We call this approach “matryoshka embedding”, given it resembles nested Russian dolls. 🪆
In practice, we do this by extending the Widget API to let Matrix clients within the widget share the parent’s Matrix client for operations such as sending and receiving to-device messages and accessing TURN servers (c.f. MSC3819 and MSC3846). This in turn has been implemented in the matrix-widget-api helper library for widget implementers - and then a few days ago Robin demonstrated the world’s first ever matryoshka embedded Element Call call, looking like this:
Note that the MSC3401 events are happening in the actual room where the widget has been added, sent by the right users from Element Web rather than from Element Call, and so are subject to all the normal Matrix access control and encryption semantics. This is a huge step forwards from embedding Jitsi widgets, where the subsequent call membership and signalling happens in an entirely separate system (XMPP via Prosody, ironically) - instead: this is proper native Matrix calling at last.
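For the curious, here's a simplified sketch of the plumbing involved: the Widget API is a postMessage-based protocol between the widget iframe and its hosting client, and MSC3819 extends it so the widget can ask the host to send to-device messages on its behalf. The action and field names below are illustrative rather than copied from the spec - see MSC3819 and the matrix-widget-api library for the real schema:

```typescript
// Simplified sketch of how an embedded widget (e.g. Element Call) might ask its
// hosting client to send a to-device message on its behalf. The Widget API is a
// postMessage protocol, but the action/field names here are illustrative only -
// see MSC3819 and matrix-widget-api for the actual message formats.
function sendToDeviceViaHost(
  eventType: string,
  messages: Record<string, Record<string, object>>, // userId -> deviceId -> content
) {
  window.parent.postMessage(
    {
      api: "fromWidget",            // widget -> hosting client direction
      widgetId: "element-call",     // hypothetical widget id
      requestId: crypto.randomUUID(),
      action: "send_to_device",     // illustrative action name
      data: { type: eventType, messages },
    },
    "*", // a real widget would restrict this to the host client's origin
  );
}

// e.g. the embedded call widget sending VoIP signalling without its own session:
sendToDeviceViaHost("m.call.invite", {
  "@alice:example.org": { DEVICEID: { call_id: "1234" /* ... */ } },
});
```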
Moreover, the same trick could be used to efficiently embed other exotic Matrix clients such as Third Room or TheBoard - giving the user the choice either to use the app standalone or within the context of their existing Matrix client. Another approach could be to use OIDC scopes to transparently log the embedded client in using the parent’s identity; this has the advantage of no code changes being needed on the embedded client - but has the disadvantage that you needlessly end up running two Matrix clients for the same account side by side, and adding another device to your account, which isn’t ideal for a performance sensitive app like Element Call or Third Room.
Matryoshka embedding isn’t live yet, but between scalable calls via SFU and native Element Call in Element Web/iOS/Android, the future is looking incredibly exciting for native Matrix VoIP. We hope to finish embedding Element Call in Element Web/iOS/Android in Sept/Oct - and if we get lucky perhaps the SFU will be ready too and then Element Call can exit beta!
Finally, we also added Video Rooms to Element Web - adding the user interface for an “always on” video room that you can hop into whenever you want. You can read about it over on the Element blog - the initial implementation uses Jitsi, but once Element Call and Matryoshka embedding is ready, we’ll switch over to using Element Call instead (and add Voice Rooms too!)
Just as MSC3401 and Element Call natively add decentralised voice/video conferences to boring old textual Matrix chatrooms, MSC3815 and Third Room go the whole enchilada and add a full decentralised 3D spatial collaboration environment to your Matrix room - letting you turn your Matrix rooms into a full blown interconnected virtual world.
I can’t overstate how exciting this is: one of the key origins of Matrix was back in Oct 2013 when Amandine and myself found ourselves in Berlin after TechCrunch Disrupt, debating why Second Life hadn’t been more successful - and wondering what you’d have to do to build an immersive 3D social environment which would be as positive and successful as a wildly popular chat network. Our conclusion was that the first key ingredient you’d need would be a kick-ass open decentralised communication protocol to build it on - providing truly open communication primitives that anyone could build on, much like the open web… and that was what got us thinking about how to build Matrix.
Fast forward 9 years, and Third Room is making spectacular progress in building out this dream, thanks to the incredibly hard work of Robert, Nate and Ajay. The goal of Third Room is to be an open platform layered directly on Matrix for spatial collaboration of any kind: effectively a blank canvas to let folks create freeform collaborative 3D (and in future 2D, VR or AR) experiences, either by using existing assets or building their own user-generated content and functionality. Just like the open web itself, this unlocks a literally infinite range of possibilities, but some of the obvious uses include: spatial telepresence, social VR, 3D visualisation of GIS or weather data, 3D simulated environments, search and rescue and disaster response operations (imagine streaming LIDAR from a drone surveying hurricane devastation into Third Room, where you can then overlay and collaborate on GIS data in realtime), and of course 3D gaming in all its various forms.
Now, we’re hoping to give Third Room a proper launch in a few weeks, so I’m not going to spoil too much right now - but the final pieces which are currently coming together include:
Finalising the initial version of Manifold, the multi-threaded game engine which powers Third Room (built on Three.js, bitECS and Rapier), using SharedArrayBuffers as triple-buffers to synchronise between the various threads (there's a rough sketch of the triple-buffering idea just after this list). See this update for a bit more detail on how the engine works.
Finalising the Matrix client interface itself, powered by Hydrogen SDK in order to be as lightweight as possible
Adding in full spatial audio and game networking via MSC3401 and Hydrogen SDK (currently full mesh, but will go SFU as soon as SFUs land!)
Adding in animated avatars (currently using Mixamo animations)
Adding in name tags and object labels
Adding in 3D Tile support in order to incrementally load 3D map tiles à la Google Earth
Building an asset pipeline from Unity and Blender through to the glTF assets which Third Room uses.
Initial framework for an in-world direct-manipulation editor
Lightmap support for beautiful high-performance static lighting and shadows
Full post-processing pipeline (bloom, depth-of-field, anti-aliasing etc)
Integrating with OIDC for login, registration, and account management (see OIDC below)
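As promised above, here's a minimal sketch of the triple-buffering technique - purely an illustration of the general pattern over SharedArrayBuffers and Atomics, not Manifold's actual code:

```typescript
// Minimal sketch of lock-free triple buffering over SharedArrayBuffers - an
// illustration of the general technique, not Manifold's actual implementation.
// In a real setup the writer and reader run in different Web Workers, each
// keeping its own private slot index; only `control` and `data` are shared
// (created once and handed to both workers via postMessage).

const SLOTS = 3;
const FLOATS_PER_SLOT = 1024; // e.g. one frame's worth of transform data

// control[0] packs: bits 0-1 = index of the "middle" slot, bit 2 = dirty flag.
const control = new Int32Array(new SharedArrayBuffer(4));
const data = new Float64Array(new SharedArrayBuffer(SLOTS * FLOATS_PER_SLOT * 8));

let writeSlot = 0; // private to the game-logic (writer) thread
let readSlot = 1;  // private to the render (reader) thread
Atomics.store(control, 0, 2); // slot 2 starts out as the (clean) middle

function slot(index: number): Float64Array {
  return data.subarray(index * FLOATS_PER_SLOT, (index + 1) * FLOATS_PER_SLOT);
}

// Writer: fill our private slot, then atomically swap it with the middle slot,
// marking the middle as dirty so the reader knows there's a fresh frame.
function publish(produce: (out: Float64Array) => void) {
  produce(slot(writeSlot));
  const previous = Atomics.exchange(control, 0, writeSlot | 0b100);
  writeSlot = previous & 0b11; // take whatever used to be the middle
}

// Reader: if the middle is dirty, atomically swap our private slot for it and
// render from the freshly published frame; otherwise keep the previous frame.
function consume(): Float64Array | null {
  if ((Atomics.load(control, 0) & 0b100) === 0) return null; // nothing new yet
  const previous = Atomics.exchange(control, 0, readSlot);   // hand back a clean slot
  readSlot = previous & 0b11;
  return slot(readSlot);
}
```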
As a quick teaser - here’s an example of a Unity asset exported into Third Room, showing off lightmaps (check out the light and shadows cast by the strip lighting inside, or the shadow on the ground outside). Ignore the blurry HDR environment map of Venice in the background, which is just there to give the metals something to reflect. Check out the stats on the right-hand side: on Robert’s M1 Macbook Pro we’re getting a solid 60fps at 2000x1244px, with 13.12ms of unused gametime available for every 16.67ms frame, despite already showing a relatively complicated asset!
Meanwhile, here are some shots of Robert and Nate chasing each other around the UK City demo environment (also exported from Unity), showing off blended Mixamo animations and throwing around crates thanks to the Rapier physics engine.
And don't forget, it's just a Matrix client - with no infrastructure required other than a normal Matrix server:
As you can see, we are rapidly approaching the point where we’ll need support from technical artists to help create beautiful scenes and avatars and assets in order to make it all come to life - especially once the Blender and Unity pipelines, and/or the Third Room editor are finished. If you’re interested in getting involved come chat at #thirdroom-dev:matrix.org!
Back in the real world, a recent new project that we haven’t spoken about much yet is adding consistent WYSIWYG (What You See Is What You Get) editing to the message composer in matrix-{react,ios,android}-sdk as used by Element Web/iOS/Android - as well as publishing the resulting WYSIWYG editor for the greater glory of the wider ecosystem.
This is a bit of a contentious area, because we've tried several times over the years to add a rich text editor to matrix-react-sdk - firstly with the Draft.js implementation by Aviral (which we abandoned after Facebook de-staffed Draft), and then later with a Slate implementation by me (which we abandoned thanks to the maintenance burden of keeping up with Slate's API changes). Finally, burnt by the experience with third party solutions, Bruno wrote his own editor called CIDER, which was a great success and is what Element Web uses today to author messages, including 'pills' for structured rooms/users etc… but this deliberately didn't provide full WYSIWYG functionality. Meanwhile, Slack added WYSIWYG, forced it on, and screwed it up - and apps like WhatsApp and Discord seem to get by fine without WYSIWYG.
However, given that users are now used to WYSIWYG in Teams and Slack, we’ve now decided to have another go at it, inspired by CIDER’s success - and with the novel twist that the heavy lifting of modelling and versioning the document and handling Unicode + CJK voodoo will be provided by a cross-platform native library written in Rust, ensuring that matrix-{react,ios,android}-sdk (and in future matrix-rust-sdk-based apps like Element X) all have precisely the same consistent semantics, and we don’t spend our lives fixing per-platform WYSIWYG bugs unless it really is a platform-specific issue with the user interface provided on that particular platform.
The project is fairly young but developing fast, and lives over at https://github.com/matrix-org/matrix-wysiwyg (better name suggestions welcome ;) - we’re aiming to get it into clients by the end of October. The editor itself is not Matrix specific at all, so it’ll be interesting to see if other projects pick it up at all - and meanwhile, if we’ve done a good job, it’ll be interesting to see if this can be used to power Matrix-backed collaborative-editing solutions in future…
Update: we should have mentioned that the WYSIWYG editor project is being built out by staff at Element, who very kindly have been sponsored to work on it by one of Element's Big Public Sector Customers in order to get to parity with Teams. Thank you!!
On the other hand, a project we recently yelled about a lot is Matrix's transition to OpenID Connect for standards-based authentication and account management. We announced this at the end of the year and the project has built up huge momentum subsequently, culminating in the release of https://areweoidcyet.com last week to track the progress and remaining work.
Our plan is to use native OIDC in production for the first time to provide all the login, registration and account management for Third Room when it launches in a few weeks (using a branded Keycloak instance as the identity provider, for convenience). After all, the last thing we wanted to do was to waste time building fiddly Matrix-specific login/registration UI in Third Room when we’re about to move to OIDC! This will be an excellent case study to see how it works, and how it feels, and inform the rest of the great OIDC experiment and proposed migration.
Meanwhile, the Next Generation team has continued to focus on their mission to make Dendrite as efficient and usable as possible. Within recent months, Dendrite has matured dramatically, with a considerable list of bugs fixed, performance significantly improved and new features added - push notifications, history visibility and presence to name a few notable additions.
Neil Alexander, Kegan and Till have continued to streamline the Dendrite architecture and to refactor areas of the codebase which have long needed attention, as well as moving from Kafka to NATS JetStream, an all-new caching model and some other fairly major architectural changes. We’ve also seen an increase of code contributions from the community and outside organisations, which is exciting, and the gomatrixserverlib library which underpins much of Dendrite is also seeing more active development and attention thanks to its use in the Complement integration testing suite.
With the most recent 0.9.3 release, we are proud to announce that Dendrite now passes 90% of Client-Server API tests and 95% of Server-Server API tests, and has support for all specced room versions in use today. We have a growing community of users who are (quite successfully) trialling Dendrite homeservers day-to-day, as well as our own public dendrite.matrix.org homeserver, which is open for registration to anyone who wants to experiment with Dendrite without running their own deployment.
Dendrite plays an important role in our future strategy as it is also the homeserver implementation used for embedded homeservers, P2P development and experimentation. In addition to being able to scale up, we have also successfully scaled down, with the Element P2P demos proving that an embedded Dendrite homeserver can run comfortably on an iOS or Android device.
Research on the Pinecone overlay network for P2P Matrix has also continued, with Devon and Neil experimenting with a number of protocol iterations and spending considerable time bringing the Pinecone Simulator up to scratch to help us test our designs more rapidly. Our work in this area is helping us to form a better direction and strategy for P2P Matrix as a whole, which is moving more towards a hybridised model alongside the current Matrix federation — a little different to our original vision, but one which will hopefully result in a much smoother transition path for existing users whilst solving some potential scaling problems. The arewep2pyet.com site is a living page containing a high-level overview of our goals and all the progress being made.
Comparing all of the above with the predictions for 2022 section of the end-of-year blog post, we’re making very strong progress in a tonne of areas - and the list above isn’t comprehensive. For instance, we haven’t called out all the work that the Trust & Safety team are doing to roll out advanced moderation features by default to all communities - or the work that Eric has been doing to close the remaining gap between Gitter and Matrix by creating new static archives throughout Matrix. Hydrogen has also been beavering away to provide a tiny but perfectly formed web client suitable for embedding, including the new embeddable Hydrogen SDK. We haven’t spoken about the work that the Cryptography team have been doing to adopt vodozemac and matrix-rust-sdk-crypto throughout matrix-{js,ios,android}-sdk, or improve encryption stability and security throughout. We’ve also not spoken about the new initiative to fix long-term chronic bugs (outside of the work above) in general - or all the work being done around Digital Markets Act interoperability…
Other things left on the menu for this year include getting Threads out of beta: we’ve had a bit of an adventure here figuring out how to get the right semantics for notification badges and unread state in rooms with threads (especially if you use a mix of clients which support and don’t support threads), and once that’s done we’ll be returning to Spaces (performance, group permissions etc).
Looking through this post (and congratulations if you're still reading it at this point ;P), it really feels that Matrix is on the verge of shifting into a new phase. Much as Mac OS X started off as a promising but distinctly unoptimised operating system, and then subsequently got unrecognisably faster year by year (even on the same hardware!) as Apple diligently worked away optimising the kernel… similarly: we are now landing the architectural changes needed to completely transform how Matrix performs.
Between protocol changes like Sliding Sync, Faster Joins, native OIDC and native VoIP conferencing all landing at roughly the same time - and alongside new implementations like matrix-rust-sdk and vodozemac, let alone Third Room - it feels unquestionably like we have an unrecognisable step change on the horizon. Our aim is to land as much of this as possible by the end of the year, and if we pull it off, I’m tempted to suggest we call the end result Matrix 2.0.
TL;DR: we’ve just launched areweoidcyet.com to track the project to adopt OpenID Connect (OIDC) as the authentication method used by Matrix. It has a load of useful resources (FAQs, status etc.) so do please check it out!
Hey folks,
As you may know, there is a proposal and project afoot to change the way authentication is done in Matrix…
Currently Matrix uses a custom authentication protocol baked into the Matrix spec. This has a number of drawbacks. To overcome them, the project proposes migrating to the industry standard authentication protocol OpenID Connect (OIDC) instead.
In terms of why this is a good idea: MSC3861 has all the details - please check it out!
The bottom line is that Matrix should focus on being a decentralised communication protocol - not an authentication protocol… and by adopting a dedicated authentication protocol we can benefit from all sorts of goodies such as easy 2FA and MFA, passwordless auth via WebAuthn, login via QR code, alternative CAPTCHAs and much more.
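To give a flavour of what delegating auth means in practice: in a bog-standard OIDC authorization code flow, the client simply redirects the user to the identity provider and later swaps the returned code for tokens - so all of those fancy login methods live in the provider rather than having to be reimplemented in every Matrix client. A generic sketch follows (the issuer URL and client id are placeholders, and this is vanilla OIDC rather than the exact Matrix profile proposed in MSC3861):

```typescript
// Generic OIDC authorization-code flow sketch - placeholder issuer and client id,
// and plain OIDC rather than the exact Matrix profile proposed in MSC3861.
const issuer = "https://auth.example.org";        // placeholder OIDC Provider
const clientId = "my-matrix-client";              // placeholder client id
const redirectUri = "https://client.example.org/callback";

// Step 1: send the user to the provider's authorization endpoint. Everything
// about *how* they authenticate (passwords, WebAuthn, MFA, QR...) happens there.
const authorizeUrl = new URL(`${issuer}/authorize`);
authorizeUrl.searchParams.set("response_type", "code");
authorizeUrl.searchParams.set("client_id", clientId);
authorizeUrl.searchParams.set("redirect_uri", redirectUri);
authorizeUrl.searchParams.set("scope", "openid");
authorizeUrl.searchParams.set("state", crypto.randomUUID()); // CSRF protection

// Step 2: after the provider redirects back with ?code=..., exchange it for tokens.
async function exchangeCode(code: string) {
  const response = await fetch(`${issuer}/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      client_id: clientId,
      redirect_uri: redirectUri,
    }),
  });
  return response.json(); // { access_token, id_token, ... }
}
```

A real deployment would also use PKCE and discover the endpoints via the provider's /.well-known/openid-configuration rather than hard-coding the paths.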
In support of this the proposal extends to the Matrix.org Foundation joining the OpenID Foundation as a non-profit member to support the work that the OpenID Foundation is doing to build a robust and audited ecosystem for open authentication.
Whilst this project proposes a significant change to the Matrix ecosystem that would take some time to migrate to, we believe that it will better support the continued growth and adoption of Matrix in the years to come.
Today we are launching the areweoidcyet.com website which is packed with information and resources on the project:
What? Why? When?
MSC proposals
Status of homeservers, clients, auth servers (OIDC Providers/OPs)
A client implementation guide
Links to the Matrix OIDC Playground environment where you can try out the latest progress
We just wanted to take a moment to welcome Rocket.Chat to Matrix, given the recent announcement that they are switching to using Matrix for standards-based interoperable federation! This is incredible news: Rocket.Chat is one of the leading open source collaboration platforms with over 12 million users, and they will all shortly have the option to natively interoperate with the wider Matrix network: the feature has already landed (in alpha) in Rocket.Chat 4.7.0!
We’d like to thank the whole Rocket.Chat team for putting their faith in Matrix and joining the network: the whole idea of Matrix is that by banding together, different independent organisations can build an open decentralised network which is far stronger and more vibrant than any closed communication platform. The more organisations that join Matrix, the more useful and valuable the network becomes for everyone, and the more momentum there is to further refine and improve the protocol. Our intention is that Matrix will grow into a massive open ecosystem and industry, akin to the open Web itself… and that every organisation participating, be that Rocket.Chat, Element, Gitter, Beeper, Famedly or anyone else will benefit from being part of it. We are stronger together!
Rocket.Chat’s implementation follows the “How do you make an existing chat system talk Matrix?” approach we published based on our experiences of linking Gitter into Matrix. Looking at the initial pull request, the implementation lets Rocket.Chat act as a Matrix Application Service, effectively acting as a bridge to talk to an appropriate Matrix homeserver. From chatting with the team, it sounds like next steps will involve adding in encryption via our upcoming matrix-sdk-crypto node bindings - and then looking at ways to transparently embed a homeserver like Dendrite, sharing data as much as possible between RC and Matrix, so Rocket.Chat deployments can transparently sprout Matrix interoperability without having to run a separate homeserver. Super exciting!
You can see a quick preview of a Rocket.Chat user chatting away with an Element user on matrix.org via Matrix here:
So, exciting times ahead - needless to say we’ll be doing everything we can to support Rocket.Chat and ensure their Matrix integration is a success. And at this rate, they might be distinctly ahead of the curve if they start shipping Dendrite! Meanwhile, we have to wonder who will be next? Nextcloud? Mattermost? Place your bets… ;)
Aaron from Rocket.Chat just published an excellent guide & video tour for how to actually set up your Rocket.Chat instance with Dendrite to get talking Matrix!
This audit was a bit of a whirlwind, as while we were clearly overdue an audit of Matrix’s E2EE implementations, we decided quite late in the day to focus on bringing vodozemac to auditable production quality rather than simply doing a refresh of the original libolm audit. However, we got there in time, thanks to a monumental sprint from Damir and Denis over Christmas. The reason we went this route is that vodozemac is an enormous step change forwards in quality over libolm, and vodozemac is now the reference Matrix E2EE implementation going forward. Just as libolm went live with NCC’s security review back in 2016, similarly we’re kicking off the first stable release of vodozemac today with Least Authority’s audit. In fact, vodozemac just shipped as the default E2EE library in matrix-rust-sdk 0.5, released at the end of last week!
The main advantages of vodozemac over libolm include:
Native Rust - type, memory and thread safety is preserved both internally, and within the context of larger Rust programs (such as those building on matrix-rust-sdk). This is particularly important given the memory bugs which libolm sprouted, despite our best efforts to the contrary.
Performance - vodozemac benchmarks roughly 5-6x faster than libolm on typical hardware
Better primitives - vodozemac is built on the best practice cryptographic primitives from the Rust community, rather than the generic Brad Conte primitives used by libolm.
Also, we’ve finally fixed one of the biggest problems with libolm - which was that the hardest bit of implementing E2EE in Matrix wasn’t necessarily the encryption protocol implementation itself, but how you glue it into your Matrix client’s SDK. It turns out ~80% of the code and complexity needed to securely implement encryption ends up being in the SDK rather than the Olm implementation - and each client SDK ended up implementing its own independent state machine and glue for E2EE, leading to a needlessly large attack & bug surface.
To address this problem, vodozemac is designed to plug into matrix-sdk-crypto - an SDK-independent Rust library which abstracts away the complexities and risks of implementing E2EE, designed to plug into existing SDKs in any language. For instance, Element Android already supports delegating its encryption to matrix-sdk-crypto; Element iOS got this working too last week, and we’re hard at work adding it to Element Web too. (This set of projects is codenamed Element R). Meanwhile, Element X (the project to switch Element iOS and Element Android to use matrix-rust-sdk entirely) obviously benefits from it too, as matrix-rust-sdk now leans on matrix-sdk-crypto for its encryption.
Therefore we highly recommend that developers using libolm migrate over to vodozemac - ideally by switching to matrix-sdk-crypto as a way of managing the interface between Matrix and the E2EE implementation. Vodozemac also provides a similar API to libolm, with bindings for JS and Python (and C++ in progress), if you want to link directly against it - e.g. if you're using libolm for something other than Matrix, such as XMPP, ActivityPub or Jitsi. We'll continue to support and maintain libolm for now though, at least until the majority of folks have switched to vodozemac.
In terms of the audit itself - we recommend reading it yourself, but the main takeaway is that Least Authority identified 10 valid concerns, of which we addressed 8 during the audit process. The remaining two are valid but lower priority, and we’ll fix them as part of our maintenance backlog. All the issues identified are excellent valid points, and we’re very glad that Least Authority have added huge value here by highlighting some subtle gotchas which we’d missed. (If you write Rust, you’ll particularly want to check out their zeroisation comments).
So: exciting times! Vodozemac should be landing in a Matrix client near you in the near future - we’ll yell about it loudly once Element switches over. In the meantime, if you have any questions, please head over to #e2ee:matrix.org.
Thanks again to gematik for helping fund the audit, and to Least Authority for doing an excellent job - and being patient and accommodating beyond the call of duty when we suddenly switched the scope from libolm to vodozemac at the last minute ;)
Next up: we’re going to get the Rust matrix-sdk-crypto independently audited (once this burndown is complete) so that everyone using the matrix-sdk-crypto state machine for Matrix E2EE can have some independent reassurance too - a huge step forward from the wild west of E2EE SDK implementations today!
We've been flooded with questions about the DMA
since it was announced last week, and have spotted some of the
gatekeepers jumping to the wrong conclusions about what it might entail.
Just in case you don't want to wade through
yesterday's sprawling blog post,
we've put together a quick FAQ to cover the most important points based on
our understanding.
🔗What do the gatekeepers have to do?
The gatekeepers will have to open and document their existing APIs, so that
alternative clients and/or bridges can connect to their service. The DMA
requires that the APIs must expose the same level of privacy for remote
users as for local users. So, if their service is end-to-end-encrypted
(E2EE), the APIs must expose the same E2EE semantics (e.g. so that an
alternative client connecting would have to implement E2EE too). For
E2EE-capable APIs to work, the gatekeeper will likely have to model remote
users as if they were local in their system. In the short term (one year
horizon) this applies only to 1:1 chats and file transfers. In the long term
(three year horizon) this applies to group chats and VoIP calls/conferences
too.
🔗Who counts as a gatekeeper?
The DMA defines any tech company worth over €75B or with over €7.5B of
turnover as a gatekeeper, which must open its communication APIs. This means
only the tech giants are in scope (e.g. as of today that includes Meta,
Apple, Google, Microsoft, Salesforce/Slack - not Signal, Telegram, Discord,
Twitter).
🔗Does this mean the gatekeepers are being forced to implement an open standard such as Matrix or XMPP?
No. They can keep their existing implementations and APIs. For
interoperability with other service providers, they will need to use a
bridge (which could bridge via a common language such as Matrix or XMPP).
If the service lacks end-to-end-encryption (Slack, Teams, Google Chat,
non-secret chats on Facebook Messenger, Instagram, Google Messages etc) then
the bridge does not reduce security or privacy, beyond the fact that bridged
conversations by definition will be visible to the bridge and to the service
you are interoperating with.
If the service has E2EE (WhatsApp, iMessage, secret chats on Messenger) then
the bridge will necessarily have to decrypt (and reencrypt, where possible)
the data when passing it to the other service. This means the conversation is
no longer E2EE, and thus less secure (the bridge could be compromised and
inspect or reroute their messages) - and so gatekeepers must warn the user
that their conversation is leaving their platform and is no longer E2EE with
something like this:
The upside is that the user has the freedom to use an infinite number of
services (bots, virtual assistants, CRMs, translation services, etc) as well
as speak to any other user in the world, regardless of what platform they use.
It also puts much-needed pressure on the gatekeepers to innovate and
differentiate rather than rely on their network effects to attract new
users - creating a much more vibrant, open, competitive marketplace for
users.
🔗If the DMA requires that remote users have the same security as local users, how can bridges work?
The DMA requires that the APIs expose the same level of security as for
local users - i.e. E2EE must be exposed. If the users in a conversation choose
to use a bridge and thus reencrypt the messages, then it is their choice to
trade off encryption in favour of interoperability for a given conversation.
🔗Does this undermine the gatekeepers’ current encryption?
Absolutely not. Users talking to other users within the same E2EE-capable
gatekeeper will still be E2EE (assuming the gatekeeper doesn’t pull that rug
from under its users) - and in fact it gives the gatekeepers an excellent way
to advertise the selling point that E2EE is only guaranteed when you speak to
other users on the same platform.
🔗But why do we need bridges? If everyone spoke a common protocol, you wouldn’t ever have to decrypt messages to convert them between protocols.
Practically speaking, we don’t expect the gatekeepers to throw away their
existing stacks (or implement multihead messengers that also speak open
protocols). It’s true that if they natively spoke Matrix or XMPP then the
reencryption problem would go away, but it’s more realistic to focus on
opening the existing APIs than interpret the legislation as a mandate to
speak Matrix. Perhaps in future players will adopt Matrix of their own
volition.
There is already a vibrant community of developers who build unofficial
bridges to the gatekeepers - e.g. Element, Beeper and hundreds of open source
developers in the Matrix and XMPP communities. Historically these bridges
have been hampered by having to use unofficial and private APIs, making them
a second class citizen - but with open documented APIs guaranteed by the DMA
we eagerly anticipate an explosion of high quality transparent bridges which
will be invisible to the end user.
🔗Can you run E2EE bridges clientside to make them safer?
Maybe. For instance, current iMessage bridges work by running iMessage on a
local iPhone or Mac and then reencrypting the messages there for
interoperability. Given the messages are already exposed on the client
anyway, this means that E2EE is not broken - and avoids decrypting them on a
server. There is lots of development in this space currently, and with open
APIs guaranteed by the DMA, the pace should speed up significantly.
🔗How can you tell what service you should use to talk to a given remote user?
For 1:1 chats this is easy: you can simply ask the user which service they
want to use to talk to a given user, if that user is not already on that
service.
For group chats it is harder, and this is why the deadline for group chats is
years away. The problem is that you need a way to verify the identity of
arbitrary numbers of remote users on different platforms - effectively
looking up their identity in a secure manner which stops services from
maliciously spoofing identities.
One possible way to solve this would be for users to explicitly link their
identity on one service with that on the gatekeeper’s service - eg “Alice on
AliceChat is talking in the same room as Bob on BobChat; Bob will be asked to
prove to AliceChat that he is the real Bob” - and so if AliceChat has already
validated Bob’s identity, then this can be used to spot him popping up on
other services. It also gives Bob a way to block himself from ever being
unwittingly bridged to AliceChat.
There are many other approaches too - and the onus is on the industry to
figure out the best solution for decentralised identity in the next 3-4 years
in order to realise the most exciting benefits of the DMA.
With last week’s revelation that the EU Digital Markets Act
will require tech gatekeepers (companies valued at over $75B or with over
$7.5B of turnover) to open their communication APIs for the purposes of
interoperability, there’s been a lot of speculation on what this could mean
in practice. To try to ground the conversation a bit, we’ve had a go at
outlining some concrete proposals for how it could work.
However, before we jump in, we should review how the DMA has come to pass.
Today’s gatekeepers all began with a great product, which got more and more
popular until it grew to such a size that one of the biggest reasons to use
the service is not necessarily the product any more, but the benefits of
being able to talk to a large network of users. This rapidly becomes
anti-competitive, however: the user becomes locked into the network and can’t
switch even if they want to. Even when people have a really good reason
to move provider (e.g. WhatsApp’s terms of use changing to
share user data with Facebook, Apple doing a 180 on end-to-end encrypting iCloud backups,
or Telegram not actually being end-to-end encrypted),
in practice hardly anything changes -
because the users are socially obligated to keep using the service in order
to talk to the rest of the users on it.
As a result, it’s literally harmful to the users. Even if a new service
launches with a shiny new feature, there is enormous inertia that must be
overcome for users to switch, thanks to the pull of their existing network -
and even if they do, they’ll just end up with their conversations haphazardly
fragmented over multiple apps. This isn’t accepted for email; it isn’t
accepted for the phone network; it isn’t accepted on the Internet itself -
and it shouldn’t be accepted for messaging apps either.
Similarly: the closed networks of today’s gatekeepers put a completely
arbitrary limit on how users can extend and enrich their own conversations.
On email, if you want to use a fancy new client like Superhuman - you can. If
you want to hook up a digital assistant or translation service to help you
wrangle your email - you can. If you want to hook up your emails to a CRM to
help you run your business - you can. But with today’s gatekeepers, you have
literally no control: you’re completely at the mercy of the service
provider - and for something like WhatsApp or iMessage the options are
limited at best.
Finally - all the users’ conversation metadata for that service (who talks to
who, and when) ends up centralised in the gatekeepers’ databases, which then
become an incredibly valuable and sensitive treasure trove, at risk of abuse.
And if the service provider identifies users by phone number, the user is
forced to disclose their phone number (a deeply sensitive personal
identifier) to participate, whether they want to or not. Meanwhile the user
is massively incentivised not to move away: effectively they are held hostage
by the pull of the service’s network of users.
So, the DMA exists as a strategy to improve the situation for users and
service providers alike by building a healthier dynamic ecosystem for
communication apps; encouraging products to win by producing the best quality
product rather than the biggest network. To quote Cédric O (Secretary of
State for the Digital Sector of France), the strategy of the legislation came
from Washington advice to address the anticompetitive behaviour of the
gatekeepers “not by breaking them up… but by breaking them open.” By
requiring the gatekeepers to open their APIs, the door has at last been
opened to give users the option to pick whatever service they prefer to use,
to choose who they trust with their data and control their conversations as
they wish - without losing the ability to talk to their wider audience.
However, something as groundbreaking as this is never going to be completely
straightforward. Of course while some basic use cases (i.e. non-E2EE chat)
are easy to implement, they initially may not have a UX as smooth as a closed
network which has ingested all your address book; and other use cases (e.g. E2EE
support) may require some compromises at first. It’s up to the industry to
figure out how to make the most of that challenge, and how to do it in a way
which minimises the impact on privacy - especially for end-to-end encrypted
services.
We’ve already written about this
from a Matrix perspective, but to recap - the main challenge is the trade-off
between interoperability and privacy for gatekeepers who provide end-to-end
encryption, which at a rough estimate means: WhatsApp, iMessage, secret chats
in Facebook Messenger, and Google Messages. The problem is that even with
open APIs which correctly expose the end-to-end encrypted semantics (as DMA
requires), the point where you interoperate with a different system
inevitably means that you’ll have to re-encrypt the messages for that system,
unless they speak precisely the same protocol - and by definition you end up
trusting the different system to keep the messages safe. Therefore this
increases the attack surface on the conversations, putting the end-to-end
encryption at risk.
Alex Stamos (ex-CISO at Facebook) said that “WhatsApp rolling out mandatory
end-to-end encryption was the largest improvement in communications privacy
in human history” – and we agree.
Guaranteed end-to-end encrypted conversations on WhatsApp is amazing, and
should be protected at all costs. If users are talking to other users on
WhatsApp (or any set of users communicating within the same E2EE messenger),
E2EE should and must be maintained - and there is nothing in the DMA which says otherwise.
But what if the user consciously wants to prioritise interoperability over
encryption? What if the user wants to hook their WhatsApp messages into a
CRM, or run them through a machine translation service, or try to start a
migration to an alternative service because they don’t trust Meta? Should
privacy really come so spectacularly at the expense of user freedom?
We also have the problem of figuring out how to reference users on other
platforms. Say that I want to talk to a user with a given phone number, but
they’re not on my platform - how do I locate them? What if my platform only
knows about phone numbers, but you’re trying to talk to a user on a platform
which uses a different format for identifiers?
Finally, we have the problem of mitigating abuse: opening up APIs makes it
easier for bad actors to try to spam or phish or otherwise abuse users within
the gatekeepers. There are going to have to be changes in anti-abuse
services/software, and some signals that the gatekeeper platforms currently
use are going to go away or be less useful, but that doesn't mean the whole
thing is intractable. It will require changes and innovative thinking, but
we’ve been making steady progress (e.g. the work done by Element’s trust and
safety team). Meanwhile, the
gatekeepers already have massive anti-abuse systems in place to handle the
billions of users within their walled gardens, and unofficial APIs are
already widespread: adding official APIs does not change the landscape
significantly (assuming interoperability is implemented in such a way that
the existing anti-abuse mechanisms still apply).
In the past, gatekeepers dismissed the effort of interop as not being
worthwhile - after all, the default course of action is to build a walled
garden, and having built one, the temptation is to try to trap as many users
as possible. It was also not always clear that there were services worth
interoperating with (thanks to the chilling effects of the gatekeepers
themselves, in terms of stifling innovation for communication startups).
Nowadays, however, this situation has fundamentally changed: there is a vibrant
ecosystem of open communication startups out there, and a huge appetite to
build an open, interoperable communication ecosystem, much like the open web
itself.
Before going further in considering solutions, we need to review the actual
requirements of the DMA. Our best understanding at this point is that the
DMA will mandate that:
Gatekeepers will have to provide open and documented APIs to their services, on request, in order to facilitate interoperability (i.e. so that other services can communicate with their users).
These APIs must preserve the same level of end-to-end encryption (if any) to remote users as is available to local users.
This applies to 1:1 messaging and file transfer in the short term, and group messaging, file-transfer, 1:1 VoIP and group VoIP in the longer term.
The DMA legislation deliberately doesn’t focus on implementation, instead
letting the industry figure out how this could actually work in practice.
There are many different possible approaches, and so from our point of view
as Matrix we’ve tried to sketch out some options to make the discussion more
concrete. Please note these are preliminary thoughts, and are far from
perfect - but hopefully useful as a starting point for discussion.
Imagine that you have a user Alice on an existing gatekeeper, which we’ll call
AliceChat, who runs an E2EE messaging service which identifies users using
phone numbers. Say that Alice wants to start a 1-to-1 conversation with Bob,
who doesn’t use AliceChat, but she knows he is a keen user of BobChat.
Today, she’d have no choice but to send him an SMS and nag him to join
AliceChat (sucks to be him if he doesn’t want to use that service, or is
unable to for whatever reason - e.g. his platform isn’t supported, or his
government has blocked access, etc), or join BobChat herself.
However, imagine if instead the gatekeeper app prompted her to reach Bob via a
different platform. It’d
be no different to your operating system prompting you to pick which app to
use to open a given file extension (rather than the OS vendor hardcoding it
to one of their own apps - another win for user rights led by the EU!).
Now, the simplest approach in the short term would be for each gatekeeper to
pre-provision a set of options of possible alternative networks. (The DMA
says that, on request, other service providers can ask to have access to the
gatekeeper’s APIs for the purposes of interoperability, so the gatekeeper
knows who the alternative networks may be). “Bob is not on AliceChat - do
you want to try to reach him instead on BobChat, CharlieChat, DaveChat
(etc)”.
Much like users can configure their preferred applications for file extensions
in an operating system today, users would also be able to add their own
preferred service providers - simply specifying their domain name.
Now, AliceChat itself needs to figure out how to query the remote service
provider to see if Bob actually exists there. Given the DMA requires that
gatekeepers provide open APIs with the same level of security to remote users
as their local ones using today’s private APIs - and very deliberately
doesn’t mandate specific protocols for interoperability - they will need to
locate a bridge which can connect to the other system.
In this thought experiment, the bridge used would be up to the destination
provider. For instance, bobchat.com could announce that AliceChat users
should connect to it via alicechat-bridge.bobchat.com using the AliceChat
protocol (or matrix-bridge.bobchat.com via Matrix, or xmpp-bridge.bobchat.com
via XMPP), advertised via a simple HTTP API or even a .well-known URL. Users might also
be able to override the bridge used to connect to them (e.g. to point instead
at a client-side bridge), and could sign the advertisement to prove that it
hadn’t been tampered with.
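To make that concrete, here’s a minimal sketch of what such a discovery step could look like. Everything here is an assumption for the sake of illustration - the .well-known path, the JSON shape and the field names are invented, since the DMA (quite rightly) doesn’t prescribe any of this:

```python
# Hypothetical sketch of bridge discovery for the thought experiment above.
# Nothing here is a real, standardised API: the .well-known path, the JSON
# shape and the field names are all assumptions made up for illustration.
import requests

def discover_bridge(target_domain: str, our_protocol: str) -> str | None:
    """Ask bobchat.com (etc.) which bridge endpoint speaks `our_protocol`."""
    url = f"https://{target_domain}/.well-known/interop"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    advert = resp.json()
    # Example (hypothetical) advertisement document:
    # {
    #   "bridges": {
    #     "alicechat": "https://alicechat-bridge.bobchat.com",
    #     "matrix":    "https://matrix-bridge.bobchat.com",
    #     "xmpp":      "https://xmpp-bridge.bobchat.com"
    #   },
    #   "signature": "..."  # so the advert can be checked for tampering
    # }
    return advert.get("bridges", {}).get(our_protocol)

# e.g. discover_bridge("bobchat.com", "alicechat")
```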
AliceChat would then connect to the discovered bridge using AliceChat’s
vendor-specific, newly opened API, and would then treat Bob to all intents and
purposes as if he were a real AliceChat user and client. In other words, Bob
would effectively be a “ghost user” on AliceChat, subject to all of
AliceChat’s existing anti-abuse mechanisms.
Meanwhile, the other side of the bridge converts through to whatever the
target system is - be that XMPP, Matrix, a different proprietary API, etc.
For Matrix, it’d be chatting away to a homeserver via the Application Service
API (using
End-to-Bridge Encryption via
MSC3202).
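As a rough illustration of what the Matrix side of that could look like, here’s a minimal Application Service sketch that just receives the transactions a homeserver pushes to a bridge. (MSC3202 layers the device-list and one-time-key information needed for end-to-bridge encryption onto this same endpoint; that part is omitted here for brevity, and the token and port are placeholders.)

```python
# A minimal sketch of the Matrix side of such a bridge: an Application Service
# receiving events pushed by the homeserver over the standard transaction API.
from aiohttp import web

HS_TOKEN = "secret-from-registration-yaml"  # shared secret from the AS registration file

async def handle_transaction(request: web.Request) -> web.Response:
    # The homeserver authenticates itself to the appservice with the hs_token.
    if request.headers.get("Authorization") != f"Bearer {HS_TOKEN}":
        return web.json_response({"errcode": "M_FORBIDDEN"}, status=403)

    txn = await request.json()
    for event in txn.get("events", []):
        # Here the bridge would map the Matrix event onto AliceChat's
        # (hypothetical) newly opened API, sending it as the ghost user.
        print(event["type"], event.get("room_id"))

    # Acknowledge the transaction so the homeserver doesn't retry it.
    return web.json_response({})

app = web.Application()
app.add_routes([web.put("/_matrix/app/v1/transactions/{txn_id}", handle_transaction)])

if __name__ == "__main__":
    web.run_app(app, port=9000)
```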
It’s also worth noting that the target might not even be a bridge - it could
be a system which already natively speaks AliceChat’s end-to-end encrypted
API, thus preserving end-to-end encryption without any need to re-encrypt.
It’s also worth noting that while historically bridges have had a bad
reputation as second-class citizens (often a second-class afterthought),
Matrix has shown that by treating them as first-class citizens and really
focusing on mapping the highest common denominator between services rather
than the lowest, it’s possible for them to work transparently
in practice. Beeper is a great example of Matrix
bridging being used for real in the wild (rather amusingly they just
shipped emoji reactions for WhatsApp on iOS via their
WhatsApp<->Matrix bridge before WhatsApp themselves did…)
Architecturally, it could look like this:
Or, more likely (given a dedicated bridge between two proprietary services
would be a bit of a special case, and you’d have to solve the dilemma of who
hosts the bridge), both services could run a bridge to a common open standard
protocol like Matrix or XMPP instead (thus immediately enabling
interoperability with everyone else connected to that network):
Please note that while these examples show server-side bridges, in practice it
would be infinitely preferable to use client-side bridges when connecting to
E2EE services - meaning that decrypted message data would only ever be
exposed on the client (which obviously has access to the decrypted data
already). Client-side bridges are currently complicated by OS limits on
background tasks and push notification semantics (on mobile, at least), but
one could envisage a scenario where you install a little stub AliceChat
client on your phone which auths you with AliceChat and then sits in the
background receiving messages and bridging them through to Matrix or XMPP,
like this:
Another possible architecture could be for the E2EE gatekeeper to expose their
open APIs on the clients, rather than the server. DMA allows this, to the
best of our knowledge - and would allow other apps on the device to access
the message data locally (with appropriate authorisation, of course) - effectively
doing a form of realtime
data liberation
from the closed service to an open system, looking something like this:
Finally, it's worth noting that when peer-to-peer decentralised protocols
like P2P Matrix
enter production, clientside bridges could bridge directly into a local
communication server running on the handset - thus avoiding metadata being
exposed on Matrix or XMPP servers providing a common language between the
service providers.
Now, the above describes the simplest and most naive directory lookup system
imaginable - the problem of deciding which provider to use to connect to each
user is shouldered by the users. This isn’t that unreasonable - after all,
users may have strong feelings about what providers to use to talk to a given
user. Alice might be quite happy to talk to Bob via BobChat, but might be
very deliberately avoiding talking to him on DaveChat, for whatever ominous
reasons.
However, it’s likely in future we will also see other directory services
appear in order to map phone numbers (or other identities) to providers -
whether these piggyback on top of existing identity providers
(gatekeepers, DNS, telcos, SSO providers, governments) or are decentralised
through some other mechanism. For instance, Bob could send AliceChat a
blinded proof that he authorises them to automatically route traffic to him
over at BobChat, with BobChat maintaining a matching proof that Bob is who he
claims to be (having gone through BobChat’s auth process) - and the proofs
could be linked using a temporary key such that Bob doesn’t even need to
maintain a long-term one. (Thanks to James Monaghan for suggesting this one!)
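To make the idea slightly less abstract, here’s one possible shape such proofs could take, sketched with an ephemeral ed25519 key via PyNaCl. To be clear, this is not a real or proposed protocol - the proof contents and flow are invented purely for illustration:

```python
# One possible (purely illustrative) shape for the routing proofs described
# above, using an ephemeral signing key so Bob doesn't need a long-term one.
import json
from nacl.signing import SigningKey

# Bob generates a temporary key just for this authorisation.
temp_key = SigningKey.generate()

# Proof Bob hands to AliceChat: "route traffic for my number via bobchat.com".
routing_proof = temp_key.sign(json.dumps({
    "identifier": "+441234567890",   # made-up example identifier
    "route_via": "bobchat.com",
}).encode())

# Matching proof BobChat keeps: this temporary key belongs to an account that
# has passed BobChat's own authentication.
account_proof = temp_key.sign(json.dumps({
    "bobchat_account": "@bob",
}).encode())

# AliceChat can later check both proofs were made by the same temporary key,
# without Bob ever publishing a long-term identity key.
verify_key = temp_key.verify_key
verify_key.verify(routing_proof)   # raises BadSignatureError if tampered with
verify_key.verify(account_proof)
```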
Another alternative to having the user decide where to find each other could
be to use a decentralised Keybase-style identity system to track verified
mappings of identities (phone numbers, email addresses etc) through to
service providers - perhaps something like IDX might fit
the bill? While decentralised identity lookups have historically been a
hard problem, there is a lot of promising work happening in this space
and the outlook is good.
Meanwhile, Alice still needs to talk to Bob. As already discussed, unless
everyone speaks the same end-to-end encrypted protocol (be it Matrix,
WhatsApp or anything else), we inevitably have a trade-off here between
interoperability and privacy if Bob is not on the same system as Alice
(assuming AliceChat is end-to-end encrypted) - and we will need to clearly
warn Alice that the conversation is no longer end-to-end encrypted:
To be clear: right now, today, if Bob were on AliceChat, he could be
copy-pasting all your messages into (say) Google Translate in a frantic
effort to work around the fact that his closed E2EE chat platform has no way
to do machine translation. However, in a DMA world, Bob could legitimately
loop a translation bot into the conversation… and Alice would be warned that
the conversation was no longer secure (given the data is now being bridged
over to Google).
This is a clear improvement in user experience and transparency. Likewise, if
I’m talking to a bridged user today on one of these platforms, I have no way
of telling that they have chosen to prioritise interop over E2EE - which is
frankly terrifying. If I’m talking to someone on WhatsApp today I blindly
assume that they are E2EE as they are on the same platform - and if they’re
using an unofficial app or bridge, I have no way to tell. Whereas in a DMA
world, you would expect the gatekeeper to transparently expose it.
If anything, this is good news for the gatekeeper in that it consciously
advertises a big selling point for them: that for full E2EE, users need to
talk to other users in the same walled garden (unless of course the platform
speaks the same protocol). No more need for bus shelter adverts to remind
everyone that WhatsApp is E2EE - instead they can remind the user every
time they talk to someone outside the walled garden!
Just to spell it out: the DMA does not require or encourage any reduction in
end-to-end encryption for WhatsApp or similar: full end-to-end encryption
will still be there for users on the same platform, including through to
users on custom clients (assuming the gatekeeper doesn’t flex and turn it off
for other reasons).
Obviously, this flow only considers the simple case of Alice inviting Bob. The
flow is of course symmetrical for Bob inviting Alice; AliceChat will need to
advertise bridges which can be used to connect to its users. As Bob pops up
from BobChat, the bridge would use AliceChat’s newly open APIs to provision a
user for him, authing him as per any other user (thus ensuring that AliceChat
doesn’t need to have trusted BobChat to have authenticated the user). The
bridge then sends/receives messages on Bob’s behalf within AliceChat.
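Purely as a thought experiment, that provisioning step could look something like the sketch below. AliceChat is fictional, so every endpoint and field name here is made up - the point is only that AliceChat authenticates Bob itself rather than taking BobChat’s word for it:

```python
# Hypothetical sketch of the reverse direction: BobChat's bridge provisioning
# a "ghost" Bob inside AliceChat via AliceChat's (entirely fictional) open API.
import requests

ALICECHAT_API = "https://api.alicechat.example/interop/v1"  # made-up base URL

def provision_ghost_user(phone_number: str) -> str:
    # Step 1: kick off AliceChat's own auth flow for the remote user (e.g. an
    # SMS one-time code sent to Bob), exactly as for any native signup.
    challenge = requests.post(f"{ALICECHAT_API}/register", json={
        "identifier": phone_number,
    }, timeout=10).json()

    # Step 2: Bob completes the challenge out of band; the bridge submits it.
    code = input(f"Code sent to {phone_number}: ")
    session = requests.post(f"{ALICECHAT_API}/register/verify", json={
        "challenge_id": challenge["challenge_id"],
        "code": code,
    }, timeout=10).json()

    # The returned token lets the bridge send/receive on Bob's behalf, subject
    # to all of AliceChat's existing rate limits and anti-abuse machinery.
    return session["access_token"]
```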
This is all very well for 1:1 chats - which are the initial scope of the DMA.
However, over the coming years, we expect group chats to also be in scope.
The good news is that the same general architecture works for group chats
too. We need a better source of identity though: AliceChat can’t possibly
independently authenticate all the new users which might be joining via group
conversations on other servers (especially if they join indirectly via
another server). This means adopting one of the decentralised identity
lookup approaches outlined earlier to determine whether Charlie on
CharlieChat is the real Charlie or an imposter.
Another problem which emerges with group chats which span multiple service
providers is that of indirect routing, especially if the links between the
providers use different protocols. What if AliceChat has a direct bridge to
BobChat (a bit like AIM and ICQ both spoke OSCAR), BobChat and CharlieChat
are connected by Matrix bridges, and AliceChat and CharlieChat are connected
via XMPP bridges? We need a way for the bridges to decide who forwards
traffic for each network, and who bridges the users for which network. If
they were all on Matrix or XMPP this would happen automatically, but with
mixed protocols we’d probably have to extend the lookup protocol to establish
a spanning tree for each conversation to prevent forwarding loops.
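As a toy illustration of what “establishing a spanning tree per conversation” could mean in practice, here’s a quick sketch that takes a map of which providers are directly bridged to which others and picks a loop-free forwarding tree. The provider names, and the idea of negotiating the tree this way at all, are purely illustrative:

```python
# Toy sketch of the routing problem for multi-protocol group chats: given which
# providers have a direct bridge to which others, agree a spanning tree per
# conversation so each message has exactly one path and no forwarding loops.
from collections import deque

def spanning_tree(links: dict[str, set[str]], root: str) -> dict[str, str]:
    """Return a parent map: who forwards traffic to each provider."""
    parents: dict[str, str] = {}
    seen = {root}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neighbour in sorted(links.get(node, ())):
            if neighbour not in seen:
                seen.add(neighbour)
                parents[neighbour] = node
                queue.append(neighbour)
    return parents

# The twisty example below: AliceChat<->BobChat directly, BobChat<->CharlieChat
# via Matrix bridges, AliceChat<->CharlieChat via XMPP bridges.
links = {
    "alicechat": {"bobchat", "charliechat"},
    "bobchat": {"alicechat", "charliechat"},
    "charliechat": {"alicechat", "bobchat"},
}
print(spanning_tree(links, root="alicechat"))
# {'bobchat': 'alicechat', 'charliechat': 'alicechat'} - the BobChat<->CharlieChat
# link goes unused for this conversation, so no message can loop.
```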
Here’s a deliberately twisty example to illustrate the above thought experiment:
There is also a risk of bridge proliferation here - in the worst case, every
service would have to source bridges to directly connect to every other
service who came along, creating a nightmarish n-by-m problem. But in
practice, we expect direct proprietary-to-proprietary bridges to be rare:
instead, we already have open standard communication protocols like Matrix
and XMPP which provide a common language between bridges - so in practice,
you could just end up in a world where each service has to find a
them-to-Matrix or them-to-XMPP bridge (which could be run by them, or
whatever trusted party they delegate to).
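The back-of-the-envelope arithmetic makes the point clearly enough: pairwise bridges between n services grow quadratically, while bridging each service once into a common open protocol grows linearly.

```python
# Pairwise proprietary-to-proprietary bridges vs one bridge per service into a
# common open protocol such as Matrix or XMPP.
def pairwise_bridges(n_services: int) -> int:
    return n_services * (n_services - 1) // 2

def hub_bridges(n_services: int) -> int:
    return n_services  # one them-to-Matrix (or them-to-XMPP) bridge each

for n in (5, 20, 100):
    print(n, pairwise_bridges(n), hub_bridges(n))
# 5 -> 10 vs 5; 20 -> 190 vs 20; 100 -> 4950 vs 100
```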
A mesh of bridges which connect together the open APIs of proprietary vendors
by converting them into open standards may seem unwieldy at first - but it’s
precisely the sort of ductwork which links both phone networks and the
Internet together in practice. As long as the bridging provides for highest
common denominator fidelity at the best impedance ratio, then it’s
conceptually no different to converting circuit switched phone calls to VoIP,
or wired to wireless Ethernet, or any of the other bridges which we take
entirely for granted in our lives thanks to their transparency.
Meanwhile, while this means a bit more user interface in the communication
apps in order to select networks and warn about trustedness, the benefits to
users are enormous as they put the user squarely back in control of their
conversations. And the UX will improve as the tech evolves.
The bottom line is, we should not be scared of interoperability, just because
we’ve grown used to a broken world where nothing can interconnect. There are
tractable ways to solve it in a way that empowers and informs the user - and
the DMA has now given the industry the opportunity to demonstrate that it can
work.
Yesterday the EU Parliament & Council agreed on the contents of the Digital
Markets Act - new legislation from the EU intended to limit anticompetitive
behaviour from tech “gatekeepers”, i.e. big tech companies (those with a
market value larger than €75B or with more than €7.5B a year of revenue).
This is absolutely landmark legislation, where the EU has decided not to break
the gatekeepers up in order to create a more competitive marketplace - but
instead to “break them open”. This is unbelievably good news for the open
Internet, as it is obligating the gatekeepers to provide open APIs for their
communication services. In other words: no longer will the tech giants be
able to arbitrarily lock their users inside their walled gardens - there will
be a legal requirement for them to expose APIs to other services.
While the formal outcomes of yesterday’s agreement haven’t been published yet
(beyond this press release),
our understanding is that the DMA will mandate:
Gatekeepers will have to provide open and documented APIs to their
services, on request, in order to facilitate interoperability (i.e. so
that other services can communicate with their users).
These APIs must preserve the same level of end-to-end encryption (if any)
to remote users as is available to local users.
This applies to 1:1 messaging and file transfer in the short term, and
group messaging, file-transfer, 1:1 VoIP and group VoIP in the longer
term.
This is the best possible outcome imaginable for the open internet. Never
again will a big tech company be able to hold their users hostage in a walled
garden, or arbitrarily close down or sabotage their APIs.
Since the DMA announcement on Thursday, there’s been quite a lot
of yelling from some very
experienced voices that mandating interoperability via open APIs is going to
irrevocably undermine end-to-end encrypted messengers like WhatsApp. This
seems mainly to be born of a concern that the DMA is somehow trying to
subvert end-to-end encryption, despite the fact that the DMA explicitly
mandates that the APIs must expose the same level of security, including
end-to-end encryption, that local users are using. (N.B. Signal doesn’t
qualify as a gatekeeper, so none of this is relevant to Signal).
So, for WhatsApp, it means that the API would expose both the message-passing
interface as well as the key management APIs required to interoperate with
WhatsApp using your own end-to-end-encrypted WhatsApp client - E2EE would be
preserved.
However, this does mean that if you were to actively interoperate between
providers (e.g. if Matrix turned up and asked WhatsApp, post DMA, to expose
an API we could use to write bridges against), then that bridge would need to
convert between WhatsApp’s E2EE’d payloads and Matrix’s E2EE’d payloads.
(Even though both WhatsApp and Matrix use the Double Ratchet, the actual
payloads within the encryption are completely different and would need to be
converted). Therefore such a bridge has to re-encrypt the traffic - which
means that the plaintext is exposed on the bridge, putting it at risk and
breaking the end-to-end encryption guarantee.
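To spell out where the exposure happens, here’s a deliberately stripped-down model of a bridge re-encrypting between the two sides. Fernet is used purely as a stand-in for the real Double Ratchet and Megolm sessions - the only point being made is that the plaintext necessarily exists wherever this code runs:

```python
# A stripped-down model of the re-encryption problem. Fernet stands in for the
# real Double Ratchet / Megolm sessions on each side; the point is simply that
# the bridge must hold keys for both sides, and the plaintext exists wherever
# this code runs.
from cryptography.fernet import Fernet

whatsapp_side = Fernet(Fernet.generate_key())  # stand-in for the gatekeeper's E2EE session
matrix_side = Fernet(Fernet.generate_key())    # stand-in for the Megolm session

def bridge_message(ciphertext: bytes) -> bytes:
    plaintext = whatsapp_side.decrypt(ciphertext)
    # ^ the message is in the clear at this point. On a server-side bridge that
    #   is a new party to trust; on a client-side bridge the plaintext was
    #   already visible on that device anyway.
    return matrix_side.encrypt(plaintext)

incoming = whatsapp_side.encrypt(b"hello from the walled garden")
print(bridge_message(incoming))
```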
There are solutions to this, however:
We could run the bridge somewhere relatively safe - e.g. the user’s client.
There’s a bunch of work going on already in Matrix to run clientside
bridges, so that your laptop or phone effectively maintains a connection
over to iMessage or WhatsApp or whatever as if it were logged in… but then
relays the messages into Matrix once re-encrypted. By decentralising the
bridges and spreading them around the internet, you avoid them becoming a
single honeypot that bad actors might look to attack: instead it becomes
more a question of endpoint compromise (which is already a risk today).
The gatekeeper could switch to a decentralised end-to-end encrypted protocol
like Matrix to preserve end-to-end encryption throughout. This is
obviously significant work on the gatekeeper’s side, but we shouldn’t rule
it out. For instance, making the transition for a non-encrypted service is
impressively little work, as we proved with Gitter.
(We’d ideally need to figure out decentralised/federated identity-lookup
first though, to avoid switching from one centralised identity database
to another).
Worst case, we could flag to the user that their conversation is insecure
(the chat equivalent of a scary TLS certificate warning). Honestly, this
is something communication apps (including Matrix-based ones!) should be
doing anyway: as a user you should be able to tell what 3rd parties
(bots, integrations etc) have been added to a given conversation. Adding
this sort of semantic actually opens up a much richer set of communication
interactions, by giving the user the flexibility over who to trust with
their data, even if it breaks the platonic ideal of pure E2E encryption.
On balance, we think that the benefits of mandating open APIs outweigh the
risks that someone is going to run a vulnerable large-scale bridge and
undermine everyone’s E2EE. It’s better to have the option to be able to get
at your data in the first place than be held hostage in a walled garden.
One other complaint which has come up a bunch is around speed of innovation:
the idea that WhatsApp or similar would be seriously slowed down by having
to (effectively) maintain stable documented federation APIs, and figure out
how to do backwards compatibility for new features. It’s true that this will
take a bit more effort (similar to how adding GDPR compliance takes some
effort), but the end result makes it more than worth it. Plus, if the rag-tag
Matrix ecosystem can do it, it doesn’t seem unreasonable to think that a
$600B company like Meta can figure it out too...
Another consideration is that it might make it too easy to build malicious 3rd
party clients - e.g. building your own "special" version of Signal which
connects to the official service, but deliberately or otherwise has security
flaws. The fact is that we're already in this position though: there are
illicit alternative clients flying around all over the place, and the onus is
on the app stores to protect their users from installing malware. This isn't
a reason to throw the baby of interoperability out with the bathwater of
bootleg clients.
The final complaint is about moderation and abuse: while open APIs are good
news for consumer choice, they can also be used by spammers, phishers and
other miscreants to cause problems for the users within the gatekeeper. Much
like a mediaeval citadel opening its gates, opening up your walled garden means that both good
and bad people can turn up. And much like real life, this is a solvable problem,
even if it’s unfortunate: the benefits of free trade massively outweigh the
downsides of having to police strangers more effectively. Frankly,
moderation and anti-abuse approaches on the Internet today are infamously
broken, with centralised moderation by gatekeepers producing increasingly
erratic results. By opening the walled gardens, we are forcing a much-needed
opportunity to review how to empower users and admins to filter unwanted
content on their own terms. There’s a recent write-up of the proposed
approach for Matrix at
https://element.io/blog/moderation-needs-a-radical-change/,
which outlines one strategy - but there are many others. Honestly, having to improve
moderation tooling is a worthwhile price to pay for the benefits of open
APIs.
So, there you have it. Hopefully you’ll agree that the benefits here outweigh
the risks: without open APIs we wouldn't even have the option to talk about
interoperability. We should be celebrating a new dawn for open access,
rather than fearing that the sky is falling and that this is a nefarious attempt to
undermine end-to-end encryption.
Last year was the first time FOSDEM was hosted on Matrix, and it was generally a huge success - and so the FOSDEM team trusted us again this year and we’re happy to say that it seems to have gone really well! This year’s FOSDEM was massive once again, featuring 654 speakers, 731 events, and 103 tracks.
This year, hosting the event went more smoothly than last year; the only significant issue was some of the Q&A Jitsis not being broadcast to the devrooms on Saturday before 10:15 UTC, for which we offer our apologies to the speakers impacted. This turned out to be a problem with the Matrix<->Jitsi access control sync system which hadn’t shown up during earlier testing, but we patched around it rapidly on the day.
The most notable difference between this year and the previous year was the use of an “attendees.fosdem.org” instance in addition to the original “fosdem.org” one, specifically for attendees. The graphs speak for themselves: Synapse could handle the load of the 23K users (13K joined users and 10K lurkers) spread across a total of 941 rooms. The real eye-opener however is that of the 13K joined users, only 4K came from the FOSDEM attendee server, and 1K from Libera Chat, meaning that ~70% of the Matrix participants were already on Matrix and came in from existing servers! 🤯 That means the vast majority of people attended over federation. Decentralisation at work, people! It works! We didn’t host the conference… you did!!
But not only did the backend handle the load smoothly: the general user experience felt tightly integrated. People were welcomed by a tailor-made home page in Element to help them navigate through all the tracks and stands:
One of the great things is that it doesn’t require heavy modifications to Element: anyone who installs their own instance of Element can use a simple HTML file to display relevant information to their audience.
New this year, we also generated a space hierarchy for the whole conference at #fosdem2022:fosdem.org to help navigate the maze of rooms, making it even easier for users on their own servers to jump in:
Another greatly appreciated feature was the famous “maximised widgets” I (Thib) keep telling you about in Matrix Live episodes. Attendees and speakers could give the conference the central attention it deserved while simultaneously keeping an eye on what was happening in the chat.
From the speaker's perspective, we tried to streamline the user journey as much as possible: a bot invited them to a backstage room, in which they joined a Jitsi widget while their talk was being played in the track or devroom. They could see the questions most upvoted by the audience in a dedicated widget. A few minutes before their pre-recorded talk was over, a countdown (new this year!) could be displayed to tell them and the host they were about to go live. At the end of the countdown, the backstage Jitsi was broadcast to the track so the speaker could answer the questions.
If you want to have an in-depth look at the backend’s architecture, it didn’t change much from last year. You can have a look at last year’s blog post for the details on the setup. Most of the heavy lifting was around the conference bot used to set rooms up, create the spaces, populate them with widgets, arrange layouts and trigger countdowns before going live…
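For the curious, assembling a space hierarchy like the #fosdem2022:fosdem.org one doesn’t need anything exotic - it’s just the standard createRoom and m.space.child Client-Server APIs. Here’s a rough sketch (the homeserver URL, access token and helper names are placeholders, not the actual conference bot):

```python
# Minimal sketch of building a space hierarchy with the Client-Server API.
# The homeserver URL and token are placeholders; this is not the conference bot.
import requests

HOMESERVER = "https://example.org"  # placeholder
HEADERS = {"Authorization": "Bearer <bot_access_token>"}

def create_space(name: str, alias_localpart: str) -> str:
    resp = requests.post(f"{HOMESERVER}/_matrix/client/v3/createRoom", headers=HEADERS, json={
        "name": name,
        "room_alias_name": alias_localpart,
        "creation_content": {"type": "m.space"},  # what makes the room a space
        "preset": "public_chat",
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["room_id"]

def add_child(space_id: str, child_room_id: str, via: str) -> None:
    # An m.space.child state event in the space points at each track room.
    requests.put(
        f"{HOMESERVER}/_matrix/client/v3/rooms/{space_id}/state/m.space.child/{child_room_id}",
        headers=HEADERS,
        json={"via": [via], "suggested": True},
        timeout=10,
    ).raise_for_status()

# e.g. space = create_space("FOSDEM 2022", "fosdem2022")
#      add_child(space, "!someTrackRoom:example.org", "example.org")
```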
Huge thanks to the FOSDEM team for trusting us, massive shout-out to Element Matrix Services and Element’s Ops and infrastructure team for their fantastic job in setting everything up and making sure everything was ready in time, a sincere thank you to all the fantastic speakers who shared awesome content, and finally to all the attendees. What a weekend!