Rethinking Mail Servers

The current Mail Server design evolved from an extremely simple example. Status users can choose from a few Mail Servers in the interface, but the solution is still centralized and based on trusted entities.

This quarter, we started working on a more decentralized solution in which people can run their own mail server nodes and add them to a registry. This would increase the pool of available mail servers, but it would not resolve the intrinsic issues with the current architecture.

Here are the issues I am talking about:

  1. Each Mail Server stores all seen Whisper envelopes locally. This does not scale, wastes disk storage unnecessarily, and relies on the node operator to properly secure the storage and handle disk failures.
  2. Each Mail Server must be trusted. There is a direct p2p connection between a Whisper node and a Mail Server, which leaks metadata such as topics and timestamps while a user is online.
  3. There are no quality measurements. A user can't know whether a given Mail Server is honest in its responses.
  4. Building an incentivization layer for running mail server nodes is hard, as we would need to design it from scratch.

My proposal is to take advantage of existing storage platforms, namely Swarm and IPFS, to provide a better offline message delivery experience.

Let’s dive into more details.

First, let me describe the idea from 10,000 feet. There is a set of validator nodes (trusted) and a set of proposer nodes (untrusted). Every N seconds (aligned time windows, for example 15:30:00-15:30:30), a proposer prepares a batch of the envelopes observed during the window and sends it to the validators. The validators also observe Whisper envelopes and verify which batches are correct. A correct batch is then selected, uploaded to Swarm or IPFS, and the index is updated. The index is later used to retrieve envelopes matching request criteria.

This approach solves (1) because historic messages are now stored in a distributed storage platform; (3) because historic messages are always delivered exactly as they were uploaded; and (4) because the incentivization layer is built into the storage platform.

(2) is only partially solved because the validators must still be trusted. However, there might be multiple competing validator sets, each maintaining its own index, and they could also implement various consensus algorithms.

There are a lot of details that need to be figured out and researched, but I believe this model is a viable alternative to the current architecture. It scales better and integrates nicely with the most promising existing storage technologies.

Some inspiration:

I also looked for a description of how PSS handles historic messages, but all I found was that it is not supported at the moment, though it is on the roadmap.


Yes! Great stuff - I was writing a similar post just before I left for vacation - wasn’t sure if it would hijack this discussion so put it in a separate thread:
