Jim Mackenzie, VP Trust & Safety — The Matrix.org Foundation

2 posts tagged with "Jim Mackenzie, VP Trust & Safety — The Matrix.org Foundation"

Switching to Curated Room Directories

20.02.2025 09:30 — General Jim Mackenzie, VP Trust & Safety — The Matrix.org Foundation

As of yesterday, Matrix.org is using a curated room directory. We’re paring the visible rooms down to a collection of moderated spaces and rooms. This is an intervention against abuse on the network, and a continuation of work that we started in May 2024.

In early 2024 we noticed an uptick in users creating rooms to share harmful content. After a few iterations to identify these rooms and shut them down, we realised we needed to change tack. We landed on first reducing the discoverability and reach of these rooms - after all, no other encrypted messaging platform provides a room directory service, and unfortunately it can clearly serve as a mechanism to amplify abuse. So, in May 2024 we froze the room directory. Matrix.org users were no longer permitted to publish their rooms to the room directory. We also intervened manually to reduce the size of the directory as a whole, and to remove harmful rooms ahead of blocking them.

This intervention had three aims:

  • Lowering the risk of users discovering harmful rooms
  • Stopping the amplification of abuse via an under-moderated room directory
  • Reducing the risk to Matrix client developers during app store reviews

In truth, the way room discovery works needs some care and attention. Room directories pre-date Spaces, and some of their assumptions don't hold up to real-world use. From the freeze, and the months since, we've learned a few things. First, the criteria for appearing in a server's room directory in the first place are far too broad. Second, abuse doesn't happen in a vacuum: some rooms that were fine at the time of the freeze are not now, for a few different reasons, including room moderators losing interest. We looked for criteria that would give us the confidence to lift the freeze, and we hit all the edge cases that make safety work so challenging.

Those lessons led to a realization. One of the values of the Foundation is pragmatism, rather than perfection. We weren't living up to that value, so we decided to change. The plan became simpler: move to a curated list of rooms, with a rough first pass of criteria for inclusion. In parallel, we asked the Governing Board to come up with a process for adding rooms in the future, and to refine the criteria. We've completed the first part of the plan today.

What comes next

There's plenty of scope for refinement here, and we've identified a few places where we can get started:

  • The Governing Board will publish criteria for inclusion in the Matrix.org room directory. They'll also tell you how you can suggest rooms and spaces for the directory.
  • We're going to recommend safer defaults. Servers should not let users publish rooms unless there are appropriate filtering and moderation tools in place, and people to wield them. For instance, Element have made this change to Synapse in PR18175.
  • We're exploring discovery as a topic, including removing the room directory API (a short sketch of how clients query that API today follows this list). One promising idea is to use Spaces: servers could publish a default space, with rooms curated by the server admin. Our recent post includes some other projects we have in this area: https://matrix.org/blog/2025/02/building-a-safer-matrix/
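
For context on what would be removed, here is a minimal sketch of how a client lists a server's public room directory via the Matrix client-server API. This is not Foundation tooling: the Python/`requests` approach, the homeserver name, the limit, and the printed fields are illustrative choices, and some servers may require an access token for this endpoint.

```python
# Minimal sketch: query a homeserver's public room directory via the
# Matrix client-server API (GET /_matrix/client/v3/publicRooms).
# The homeserver and limit below are illustrative, not a recommendation.
import requests


def list_public_rooms(homeserver: str, limit: int = 20) -> list[dict]:
    """Return up to `limit` entries from the server's public room directory."""
    resp = requests.get(
        f"https://{homeserver}/_matrix/client/v3/publicRooms",
        params={"limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    # Each entry in "chunk" includes fields such as room_id, name, topic
    # and num_joined_members, per the client-server specification.
    return resp.json().get("chunk", [])


if __name__ == "__main__":
    for room in list_public_rooms("matrix.org"):
        name = room.get("name") or room.get("canonical_alias") or room["room_id"]
        print(f"{name}: {room.get('num_joined_members', 0)} joined")
```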

FAQs

What criteria did you use for this first pass?
We used a rough rubric: Is the room already in the room directory, and does the Foundation already protect the room with the Matrix.org Mjolnir? From there, we extended to well-moderated rooms and spaces that fit one of the following:

  • Matrix client and server implementations (e.g. FluffyChat, Dendrite)
  • Matrix community projects (e.g. t2bot.io)
  • Matrix homeserver spaces with a solid safety record (e.g. tchncs.de, envs.net)

Why isn't the Office of the Foundation in the directory?
It didn't exist before May 2024, so the Office has never been in the directory. We're going to add it in the next few days, with a couple of other examples that fit our rough rubric.

How do I add my room/space to the list?
At the moment, you can't. The Governing Board will publish the criteria and the flow for getting on the list.

What do I do if I find a harmful room in the current directory?
You shouldn't find one, but if a room does have harmful content, check out How you can help.

Building a Safer Matrix

14.02.2025 14:30 — General Jim Mackenzie, VP Trust & Safety — The Matrix.org Foundation

N.B. this post is also available in German below.

Introduction

Right now, the world needs secure communication more than ever. Waves of security breaches such as the “Salt Typhoon” compromise of the telephone network’s wiretap system have led the FBI to advise US citizens to switch to end-to-end-encrypted communication. Geopolitical shifts painfully highlight the importance of privacy-preserving communication for vulnerable minorities who fear being profiled or targeted. Meanwhile, the International Rules-Based Order is at risk like never before.

We built Matrix to provide secure communication for everyone - to be the missing communication layer of the Open Web. This is not hyperbole: Matrix is literally layered on top of the Web - letting organisations run their own servers while communicating in a wider network. As a result, Matrix is “decentralised”: the people who built Matrix do not control those servers; they are controlled by the admins who run them - and just as the Web will outlive Tim Berners-Lee, Matrix will outlive us.

Matrix itself is a protocol (like email), defined as an open standard maintained by The Matrix.org Foundation C.I.C - a UK non-profit incorporated in 2018 to act as the steward of the protocol; to coordinate the protocol’s evolution and to work on keeping the public Matrix network safe. The Foundation is funded by donations from its members (both individuals and organisations), and also organises the Matrix.org homeserver instance used by many as their initial home on the network.

Much like the Web, Matrix is a powerful technology available to the general public, which can be used both for good and evil.

The vast majority of Matrix’s use is constructive: enabling collaboration for open source software communities such as Mozilla, KDE, GNOME, Fedora, Ubuntu, Debian, and thousands of smaller projects; providing a secure space for vulnerable user groups; secure collaboration throughout academia (particularly in DACH); protecting healthcare communication in Germany; protecting national communication in France, Germany, Sweden and Switzerland; and providing secure communication for NATO, the US DoD and Ukraine. You can see the scope and caliber of the Matrix ecosystem from the talks at The Matrix Conference in September.

However, precisely the same capabilities which benefit privacy-sensitive organisations mean that a small proportion of members of the public will try to abuse the system.

We have been painfully aware of the risk of abuse since the outset of the project, and rather than abdicating responsibility in the way that many encrypted messengers do, we’ve worked steadily at addressing it. In the early days, even before we saw significant abuse, this meant speculating on approaches to combat it (e.g. our FOSDEM 2017 talk and subsequent 2020 blog post proposing decentralised reputation; now recognisable in Bluesky’s successful Ozone anti-abuse system and composable moderation). However, these posts were future-facing at the time - and these days we have different, concrete anti-abuse efforts in place.

In this post, we’d like to explain where things are at, and how they will continue to improve in future.

What we do today

The largest share of the Foundation's funding goes to our full-time Safety team, and we expanded that commitment at the end of 2024. On a daily basis, the team triage, investigate, identify and remove harmful content from the Matrix.org server, and remove users who share that material. They also build tooling to prevent, detect and remove harmful content, and to protect the people who work on user reports and investigations.

The humans who make up the Foundation Trust & Safety team are dedicated professionals who put their own mental health and happiness in jeopardy every day, reviewing harmful content added by people abusing the service we provide. Their work exposes them to harms including child sexual exploitation and abuse (CSEA), terrorist content, non-consensual intimate imagery (NCII), harassment, hate, deepfakes, fraud, misinformation, illegal pornography, drugs, firearms, spam, suicide, human trafficking and more. It’s a laundry list of the worst that humanity has to offer. The grim reality is that all online services have to deal with these problems, and to balance the work to detect and remove that content with the rights of their users. We’re committed to that work, and to supporting the Trust & Safety team to the best of our ability — we are very grateful for their sacrifice.

Continue reading…