Visibility Stake for Public Chat Room Governance

Public Chat Room Governance

Recently I’ve faced some offensive messages in public chats, and we need some way of limiting public chat abusers: trolls, offensive language, flooders, spammers, scammers, etc.
I also think that each room could have its own rules; let’s say #spankchain would have looser rules about the type of language accepted in their channel.
Some channels might also require Identity Claims before accepting a user’s messages.
While I see it is not possible to prevent users from broadcasting messages on the network, it is totally possible to ignore (not display) messages from senders who are not on a participant list or whose messages are not in a specific format.

Visibility Stake

  • users should have to lock SNT to have a voice in a public channel. This price might rise if a room has too many messages.
  • when a channel has too many messages, the users with a higher visibility stake would be more visible.
  • the more visible you want to be, the more funds you put at stake while using the public chat or the report tool.
  • the visibility fee should be, at a minimum, lucrative enough for judges to press the correct button.
  • judges get 80% of the visibility fee no matter the result (so they have no economic incentive to choose one result over another); the remaining 20% goes to the challenge winner’s visibility stake.
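The fee split in the last bullet can be sketched as follows. This is a minimal sketch: the 80/20 ratio comes from the bullet above, while the function name and the wei-denominated integer arithmetic are my own assumptions.

```python
def split_visibility_fee(fee_wei: int) -> tuple[int, int]:
    """Split a slashed visibility fee per the proposal:
    80% goes to judges regardless of the result, and the
    remaining 20% to the challenge winner's visibility stake."""
    judges_share = fee_wei * 80 // 100
    # giving the winner the remainder avoids losing dust to rounding
    winner_share = fee_wei - judges_share
    return judges_share, winner_share
```

Taking the remainder for the winner (instead of computing 20% separately) guarantees the two shares always sum to the original fee, even for amounts not divisible by 100.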

Channel Rules

  • a hash of the channel rules can be decided by Topic Democracy.
  • each channel can have its own rules; otherwise it falls back to the default public chat rules.


  • users can report others who abuse the chat by staking their own visibility stake against the reported user’s visibility stake.
  • a user can only be reported up to the amount of their visibility stake, and multiple users can report the same user as long as the reported user’s total visibility stake has not already been fully challenged.
  • users who abuse the chat functionality can have their visibility stake slashed.
  • users who abuse the report tool can have their visibility stake slashed.
  • the loser of a challenge can be temporarily barred from buying visibility stake again.
  • once a user withdraws or loses their visibility stake, their messages should fade away and not return when a new visibility stake is placed.
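The reporting mechanics above can be modeled as follows. This is a toy sketch, not a contract design: the class and method names are hypothetical, and amounts are plain integers standing in for SNT.

```python
class VisibilityStake:
    """Toy model of the report rules above: a user can be reported
    only up to their remaining stake, and several reporters may each
    claim a portion until the whole stake is under challenge."""

    def __init__(self, amount: int):
        self.amount = amount    # total locked visibility stake
        self.challenged = 0     # portion already under challenge

    def report(self, reporter_stake: int) -> int:
        # a reporter can challenge at most the unchallenged remainder,
        # and never more than they stake themselves
        available = self.amount - self.challenged
        taken = min(reporter_stake, available)
        self.challenged += taken
        return taken
```

For example, with a stake of 100, a first reporter staking 60 challenges 60, a second reporter staking 60 can only challenge the remaining 40, and further reports challenge nothing.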


  • a topic delegation would be able to vote directly on challenges.
  • voting influence is based on SNT balance + delegated SNT influence.
  • judges only need to see the channel rules + the message contents + the visibility fee in a challenge, and they can only accept, reject, or ignore.
  • bad judges simply don’t win the slashed visibility fee.
  • good judges get the slashed visibility fee.
  • slashed fees accumulate in a contract and are withdrawn when the user wants.
  • the amount of funds is divided based on the SNT influence behind your vote / total SNT influence.
  • delegators might also get 50% of the funds even without ever voting.
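The proportional payout rule above can be sketched like this. It is a simplification: the function name is hypothetical, influence values are plain integers, and the possible 50% delegator share from the last bullet is omitted.

```python
def judge_payouts(slashed_fee: int, winning_votes: dict[str, int]) -> dict[str, int]:
    """Divide a slashed visibility fee among the judges on the winning
    side, proportionally to the SNT influence behind each vote
    (own balance + delegated influence), per the bullets above."""
    total_influence = sum(winning_votes.values())
    return {
        judge: slashed_fee * influence // total_influence
        for judge, influence in winning_votes.items()
    }
```

So a judge whose vote carried 700 of 1000 total winning influence would receive 70% of the slashed fee.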

Loser Appeal

  • the loser of a challenge might appeal for a new result by staking the same amount of funds they are about to lose.
  • the new vote happens in the Topic Democracy parent delegate proxy.
  • a user can appeal up the delegate chain to the root.

This doesn’t apply to “private groups”; a specific architecture for these small DAOs is more interesting, because they seem to be simpler in some aspects and also require some additional features, such as a subscription fee.

This is just brainstorming, not a final idea, but I think it is a feasible way to solve this problem in a decentralized fashion.
Topic Democracy is another idea under development, together with the Voting Dapp, that might change and impact this idea, but it’s all manageable. :slight_smile:

I would like to hear ideas from the community on the best way to do this, and also other ideas about the “visibility fee”.


@ricardo3 this sounds very similar to Kleros for dispute resolution, have you had a chance to look at it? It would save us some effort to build on an already existing solution.

The main change to the product seems to be that it will require any user to stake SNT before they can join a chat, otherwise they will not be able to publish messages (or, more accurately, the messages won’t be visible).

Is this something there’s already consensus on at the product level, or should it be discussed? (The same question applies to the private group chat posts.)

In case we go for a similar approach, how long do you reckon the process will take?

It would be important, as sometimes you might want quick reaction times (to block a spammer, for example), so we might want to implement additional measures.


Thanks for the comments.

Good question to clarify. This is Discuss; otherwise I would be posting a PR to the ideas repository on GitHub.

Now, why this idea is good…
I know Kleros’s awesome work, but this is slightly different because we have the visibility fee. I don’t think we can use Kleros to moderate public chat without the visibility fee, and having this visibility fee makes porting to Kleros more complex than what we can do within our own democracy. Kleros could be used as an additional last appeal method after the root delegates’ appeal, and this would be good because it could save us from internal failures in our message curation system.

The way I designed it, it should fit under our current work on Topic Democracy and the Voting Dapp, as it uses subtopics to have specialized delegates and lower quorums.

I am totally open to other moderation options, and as an emergency measure we could at least allow users to ignore users of their choosing (easily broken by a silly script, btw).

The visibility fee is important because I can create as many accounts as I want, and use a different one to send my spam each time, from a botnet.

I agree that the idea is sound and well thought out.

What is not clear to me is that it seems based on the assumption that we want the user to stake SNT, which is something we might want to do, no doubt, but has this been discussed and agreed on before? Or are you making a case for it here?

I think clarifying this would help us move forward with the discussion and speed up the process

Sorry if this has been discussed before but I wasn’t aware of any conversation

I developed the idea while writing this thread, so this specific idea was never discussed before; only that we want some kind of democratic/decentralized moderation in public groups. We can explore ideas other than this, or build on top of what I proposed here. Feel free to contribute.

My 2p would be that we’d want both:

have some chats that are moderated, which will require the user to stake SNT, where we can implement something along the lines of the idea you proposed and move in that direction.

But still allow users to join/create chats without moderation, which will not require staking anything, as I would otherwise be concerned that the entry barrier for new users would be too high.

What do you think?

It would also be good to hear someone else’s opinion on this.

I like the general idea, but this aspect still concerns me a lot: depending on the motives, the spammer might not actually care about having the messages displayed, if their goal is to make Status mobile unusable by consuming everyone’s data plan through flooding public channels with spam messages. Even if users never get to see the spam messages and are oblivious to the problem, the spam load might cause their phone to get hot and their battery to drain. This makes me think that something like this should also be implemented at the protocol level, so that nodes don’t even forward Whisper envelopes that are considered spam (I haven’t thought about the intricacies of this yet, but I’m sure it will come up in next week’s meeting in Basel).

There have also been suggestions to detect spam overload situations in the app and suggest to the user to mute the channel temporarily, as a dumb workaround until we implement a way to deal with this at the protocol level.


The case you describe, a spammer overloading the network with data, is not directly a problem of Status Chat, but of Whisper.
Whisper already has security features against the type of attack you mention, and if this is still a problem for Status, we can protect clients from data usage by implementing the “visibility stake” check in “message nodes” and communicating only through message nodes when spam is happening on Whisper.

If you mean PoW, the problem is that we’re restricted on mobile devices, so we can’t require a high PoW. It does seem like the nodes will need to be aware of the “visibility stake” in order to have a robust solution.

Not all nodes, just the “message/offline inbox nodes”: in case of massive spam on the Whisper network, the device would only use these nodes to communicate (not Whisper directly).
Regardless of that, we can propose enhancements to Whisper for our case (low PoW), such as an exhaustion system: we calculate how many messages one person could send per second and make nodes disallow too many messages coming from a single source, so a botnet becomes necessary to attack.
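The exhaustion system described above amounts to per-sender rate limiting; a common way to implement it is a token bucket per source key. The following is only a sketch under assumptions of my own (class name, rate, and burst size are illustrative, and the key would be the sender’s public key in practice):

```python
class SourceExhaustion:
    """Per-sender rate limiter (token bucket): each key earns message
    'tokens' over time up to a burst cap; a node drops messages from a
    key whose bucket is empty, forcing an attacker to use many keys."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec   # tokens replenished per second
        self.burst = burst         # maximum stored tokens per key
        self.buckets = {}          # key -> (tokens, last_timestamp)

    def allow(self, key: str, now: float) -> bool:
        tokens, last = self.buckets.get(key, (float(self.burst), now))
        # replenish tokens for the time elapsed since the last message
        tokens = min(float(self.burst), tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[key] = (tokens - 1.0, now)
            return True
        self.buckets[key] = (tokens, now)
        return False
```

With a rate of 1 message/second and a burst of 2, a key can send two messages immediately, then only one per second afterwards.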

By the way, these attacks are not permanent; the attacker needs to be constantly flooding the network, which costs lots of resources. This is still bad, because it opens a way to temporarily censor the application chat by flooding Whisper.

I think this is a different topic; the type of flooding I mean is not bot-automated to cause DoS, but users simply flooding messages manually.
The flood attack you mean is more of a DoS attack, which I still think would need to be a DDoS to be effective against Whisper (otherwise someone would have already stopped Ethereum mining/transaction communication), and that is a problem for any architecture.

I think we should open a new topic about the DDoS resistance of Whisper to propose solutions for the low-PoW case, and here focus on the governance of the room.

Makes sense! I’ll move this specific discussion to another thread.

We could also have an “ignore visibility stake” option, so users who want to see messages that have no visibility stake are able to.

Yeah, that would make sense!

Now that spam is starting to be an issue, any thoughts on reviving this? @cammellos @pedro ?

Agree. Would also love to see experiments around this.

I think we need some product input; there are some issues to solve. My main concern is the entry barrier: if we require a user to stake SNT, it means there is a considerable entry barrier to using Status (I install the app, I know nothing about SNT, I can’t post in public chats).
We need a bit more product input before pulling the trigger, in my opinion.


I don’t have the technical knowledge to know how to do any of this, but a couple of thoughts come to mind. If you add a mute button, would it be possible to kill or permanently mute an account that gets muted too many times in a public chat? Or maybe that account just cannot post to that public chat anymore, and all of the account’s posts to that public feed are removed: a kind of public censorship of the public chat. This would discourage spam, because it would not live in the feed, and spammers would know they are going to get removed. Also, each public forum would develop its own “censorship” based on its active users. So if you have a #WeLoveCats and someone comes into that public chat and talks about how much they hate cats, then the group can decide if they want to hear it or push them out of the group. Some public chats will become echo chambers, but others will remain open forums, and the open forums should take off with people of open minds.

I also posted this to #Status on the app. Thanks to the desktop app I can now type longer posts. That may be a bad thing for others. :wink:

I agree; we should keep supporting “free” messages, but it would not be guaranteed that everyone sees those messages. Viewers should be able to choose the minimum stake required to display messages.
Nodes themselves would limit the number of messages a specific key can create + require a higher PoW for free messages.
I don’t think that most users will be willing to participate in public chats; rather, they would prefer group chats with their relatives (where one or more administrators need to add the user) and private messaging.
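The viewer-side part of this (each client choosing its own minimum stake threshold) can be sketched in a few lines. The message shape here is hypothetical; a real client would attach the sender’s on-chain stake to each decrypted message:

```python
def visible_messages(messages: list, min_stake: int) -> list:
    """Client-side filter sketch: show only messages whose sender's
    visibility stake meets the viewer's chosen minimum. Free messages
    (stake 0) are shown only if the viewer sets min_stake to 0."""
    return [m for m in messages if m["stake"] >= min_stake]
```

This keeps free messages broadcastable while letting each viewer decide whether to display them, matching the idea that visibility, not broadcasting, is what the stake buys.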

The problem we are trying to solve, spam and scams, has been a concern for all social apps throughout history, and it can literally kill those apps if not addressed (as they become unusable due to flooding of spam), so I would prefer having a small (optional) entry barrier to having the public chats unusable.


To reduce the entry barrier, we could make it so that one user, let’s say Alice, could entrust her own visibility stake to another user (Bob). If Bob misbehaves, then Alice loses the visibility stake.

Technically, Alice would sign an Ethereum message that allows Bob to use her visibility stake; it could be just a part of it, and it should have some timestamp validity or use nonces.

The problem with a timestamp is that slashing would only be available up to that timestamp; therefore, only authorizations with one day (or more) of validity remaining should be considered valid, otherwise the community (humans) would not be able to report the abuse (once reported, the timestamp can expire, so the decision can be made after the timestamp expires).

Nonces could work, but they are weaker, because all this happens off-chain, and Status nodes could organically have different versions of the latest nonce, making Bob able to deliver more messages than Alice allowed. However, the contract could slash if it is proven that Bob presented two different messages signed with the same nonce.
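The slashable condition in the nonce scheme (two different signed authorizations reusing the same nonce) can be stated as a simple check. This is only a sketch: the dict shape stands in for real signed Ethereum messages, whose signatures would be verified via ECDSA on-chain before this comparison matters.

```python
def equivocated(auth_a: dict, auth_b: dict) -> bool:
    """Detect the slashable condition described above: two *different*
    authorizations (e.g. different allowed amounts or recipients) that
    reuse the same nonce. Both are assumed to carry valid signatures
    from the same staker."""
    return (auth_a["nonce"] == auth_b["nonce"]
            and auth_a["payload"] != auth_b["payload"])
```

Anyone holding both conflicting authorizations could submit them to the contract as proof, which is what makes the off-chain nonce scheme enforceable at all.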