Principles Seminar - Session 4: Security


Opening talk

Principles Seminar v0

Session 4 - Security

Oskar, 2018-10-10

III. Security

We don’t compromise on security when building features. We use state-of-the-art technologies, and research new security methods and technologies to make strong security guarantees.

Information security

The practice of preventing unauthorized access, use, disclosure, disruption, modification, inspection, recording, or destruction of information. (Wikipedia)


  • Private keys and transactions
  • End to end encrypted chat
  • Darkness (hiding who is talking to whom)
  • Cluster uptime

Compromised security?

  • Trivial: Can’t restore your account (private keys)
  • Subtle: Forward secrecy - either you have it or you don’t
  • We are responsible for secure defaults

State-of-the-art technologies

  • ‘Obvious’ choices for experts in the field
  • Chat: Double Ratchet and PFS
  • Hardware wallet
  • Reproducible builds

Basic security hygiene

  • Ex: no password re-use
  • Ex: 2FA (without phone recovery)
  • Security and privacy week after Prague

Tool: Threat modeling 101

  • Pretend to be attacker and follow logic
  • Example: House with jewelry (high reward) and open back door (vulnerable) and thief (relevant attack).

Research new security methods

  • Magic and crazy
  • Aim to be in the top 1-10% of tech orgs in the attention we give this
  • Might seem unusual or crazy to some of you

Example research

  • Zero knowledge proofs for private transactions
  • Darkness, quantum secure, multiparty computation, formal methods…

Tool: Security guarantees

  • This might seem hard (it is)
  • But you can ask questions and learn!
  • Explicit about guarantees
  • Simple user stories

Example: E2EE chat

  • As a user, I don’t want anyone but the person I’m talking to to see my conversations.

  • Forward secrecy: If my private key gets compromised another person can’t read my historical conversations.
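The forward-secrecy guarantee above can be illustrated with a minimal one-way key chain - a simplification of the symmetric ratchet inside the Double Ratchet; all names and labels here are illustrative, not the real protocol:

```python
import hashlib
import hmac

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Advance the chain one step: derive a message key and the next chain key."""
    message_key = hmac.new(chain_key, b"msg", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return message_key, next_chain_key

ck = hashlib.sha256(b"shared secret").digest()  # initial chain key (illustrative)
keys = []
for _ in range(3):
    mk, ck = ratchet(ck)
    keys.append(mk)

# An attacker who learns only the *current* chain key `ck` cannot recover
# earlier message keys: each step applies a one-way function (HMAC-SHA256),
# so the chain cannot be walked backwards to historical conversations.
```

The point is the one-way property, not the specific primitives: compromise today does not expose yesterday.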

Example: Private transactions and darkness

  • As a user, I don’t want someone to know who I am talking to except the person I’m talking to.

  • As a user, I don’t want anyone but recipient to know that I transferred money to them.

(security, inclusivity)

  • How do we ensure a secure user experience while being user friendly?

  • How do we ensure we provide utility for people and aren’t paralyzed by extreme threat models?
    E.g. a lack of private transactions does not mean we should only focus on chat.

  • How can we work iteratively on security and communicate clearly what guarantees we make and can’t make right now?

Pairing and wall of shame

Up to you.

  • Idea Generator 1: List pairings and think about positive and negative interactions.

  • Idea Generator 2: Think like adversary - how can Status be attacked?


Raw notes


Wall of shame

  1. We don’t security audit on-boarded core contributors

  2. Not enough time allocated to validating risk mitigations

  3. We don’t inform people about what information they share, when

  4. Anonymous names don’t guarantee anonymity once they are matched to your identity

  5. We don’t have a risk profile across the company

  6. We don’t have any incident response procedures

  • We aren’t explicit about our guarantees around darkness (push notifications, web2 services, mailservers, lack of formal analysis of Whisper, etc.)

  • We don’t sign releases

  • We don’t have reproducible builds and multiparty signature

We don’t document threat models or make explicit which security guarantees we do and don’t have. A start: list the protocols and components we use.

  • Lack of a security incident response process and a clear route to report and respond
  • Also DevOps runbooks for incident response (related to the above, but not the same)

Start with a few opinions:

  1. If there is no castle, what do we defend? As we decentralise, the process will push security further down the stack to the nodes. The end user ends up carrying all the security risk. We need to build tools to help people handle this easily and properly, and help people use the Status app appropriately. The pairing of security and decentralisation requires a lot of educational effort, but it’s the right thing to do if you want sovereignty over your information.

The easiest attack vector is scammers, e.g. fake ICOs where people willingly send funds to scammers. We should also try to prevent this, not just hackers. As Status, our main focus is the end user; having the node secure is basic, but who runs this node? We assume they have technical knowledge - we want to help users become a node safely.

Authorisation procedures - how can we move to our own auth using public keys? Wall of shame: still using third-party/legacy authorisation.

Auth for what? How we do auth, e.g. for notes - our self-deployment of HackMD uses Google and GitHub authentication. Had that there as we assumed it was for Status-official comms, to guarantee that people using it are official and attributable. Wanted attribution and an audit trail if the information is important.

This is about justifying the choices that we make. Asking - why aren’t we doing more to use our own public key from our own system to be doing authentication?

Dilemma, we need a multisig to do things in a safe way. I am the single owner of a subdomain, I could change the records of it at any time. It’s not deployed yet so not a risk, but as soon as it gets deployed, there is a problem of coordination. We need to coordinate to sign, we don’t have a tool built into Status that enables us to do that. I would expect that Status has a multisig built into its features.
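The coordination problem above is the core of a multisig: no single keyholder can act alone. A minimal k-of-n approval check, sketched here for illustration only - a real multisig verifies on-chain signatures, not names:

```python
# Hypothetical 2-of-3 multisig approval check. Signer names and the
# threshold are made up for illustration; a real deployment would verify
# cryptographic signatures against registered keys on-chain.
REQUIRED = 2
SIGNERS = {"alice", "bob", "carol"}

def approved(signatures: set) -> bool:
    """True only if enough *known* signers have signed."""
    return len(signatures & SIGNERS) >= REQUIRED

assert not approved({"alice"})                 # one signer is not enough
assert approved({"alice", "carol"})            # threshold reached
assert not approved({"mallory", "eve"})        # unknown signers don't count
```

The key property: changing a record (e.g. a subdomain) requires coordination among keyholders rather than trust in one owner.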

We don’t have to think about smart contracts too much here if we’re thinking about logging into services we use commonly as an organisation. Why can’t we use the Status public and private key that we already have in place to log into various things. This is not technically infeasible.
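Logging in with an existing keypair usually means a challenge-response flow. The sketch below shows only the message flow; HMAC stands in for an asymmetric signature purely to keep it self-contained - a real Status login would sign the challenge with the account's private key and the service would verify against the registered public key:

```python
import hashlib
import hmac
import secrets

# Server side: issue a fresh random challenge for a claimed identity.
def issue_challenge() -> bytes:
    return secrets.token_bytes(32)

# Client side: prove possession of the key by "signing" the challenge.
# (HMAC is a symmetric stand-in for sign/verify, used here only to show
# the flow; it is NOT how public-key auth would actually work.)
def sign(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(key, challenge), signature)

account_key = secrets.token_bytes(32)
challenge = issue_challenge()
ok = verify(account_key, challenge, sign(account_key, challenge))
bad = verify(account_key, challenge, sign(b"wrong key", challenge))
```

Because the challenge is random and single-use, a captured response can't be replayed - that is the property that makes this safer than passwords.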

Agree. Education may be a factor. Across Status we don’t have enough people who are writing DApps. Maybe it’s a bottleneck, not enough people comfortable creating mini-tools in the same way we would be in the web2.0 world.

How do we incentivise that, bounties? How do we make the integrated tools we need?

Need to open a swarm with an exact specification of what we want, like the procedure we have with ENS usernames. Then we can build these types of DApps that are useful for us as an org, and for users. For example, Status usernames are just smart contracts; users could fork that to use names from another address?

Why can’t we provide that as a service that resides on Swarm or IPFS where it can happen in one click? Served on a decentralised network, with that instance being open to log in with a Status ID. We need to get to that one-click point, not forking a contract.

Does that clash with Inclusivity?

We are using crude elements of Ethereum, in future we will have more possibilities for authentication. These options need to be secure, and not mislead users. Something to discuss, but don’t see that we need it.

Change of topic, any security concerns or ideas about this?

Would like to see us make threat models and security guarantees more explicit.

Starts with outlining and detailing all of the centralised services we currently use, and why we use them. Then we can outline all of the protocols that we use and the associated security guarantees. Compile it into some kind of technicon. Show our plans to further any guarantees we’re trying to put in place.

Just need to make it happen and implement a timeline.

Need people to champion it for accountability. It can be as simple as starting with a list.

I want us to broadcast our official Status documentation, like Signal’s technicon; want us to be transparent about all the things we use in development and organisation, and clearly identify the trade-offs. Need to agree internally on where we’re at. That speaks to multiple principles - security guarantees, transparency, inclusivity.

Action item - need group of people to commit to compiling this content

DAC0 could sponsor.

Action item - Will kick off with a plan.

Need to set a minimum bar of security across the org, and audit incoming people against that bar. Does this clash with privacy - how do we feel about viewing CCs (Core Contributors) as a potential security threat? It involves digging into how they operate on a personal level; is that something we should mandate for CCs? Focus more on protocols and procedures, not personal data review.

Need policies on People Ops site to set expectation that these guidelines are important.

Systems should be designed to be tolerant to these faults, and should withstand one individual making a mistake. Not a fan of individual security audits. We can’t assume that people will always be good actors. Should focus on making the key itself resilient.

Another thing we lack is incident response procedures (IRP). Not sure everyone in the org knows what to do in case of an incident.

What about enabling people to, e.g., see what’s wrong with the mailserver - is this along the same lines?

This is more about the personal level when someone is not sure where to go for help. Want everyone in the organisation to feel safe - personally, and for the org. If someone feels something is going on, they should have a clear route to report concerns. Even contributors in the wider sense should have a place to report concerns to us as Status.

Like scam emails, important to report. Someone could take all keyholders of a multisig and try to scam them as a group, and may just succeed. The more multisig users, the more difficult that becomes, it’s unlikely - but important to report it. Or maybe a pen drive with your keys is compromised.

Guarantees around darkness, push notifications, etc. Do you see that becoming part of the docs site, should we be doing more public facing comms about that?

Few things moving in that direction, docs work underway. Breaking Whisper, trying to do formal scaling tests on Whisper and darkness.

We still lack this intellectual rigour. Whisper is not the only thing we’re using; we are still using web 2.0 services. These all impact darkness, and we don’t make this explicit. That leads us to not prioritise certain pieces of work that would mitigate it. There’s a parallel piece of work that Dustin’s working on, looking at metadata leakage in general. In general, we need to be more honest about “we don’t know” as it relates to darkness.

We need to be more honest about the realities of the software we’re building on and relying on.

What do people think about resource allocation and how people spend their time around this. Are we doing enough? Can we be more rigorous?

I want as much research as possible into this area. But don’t want to do it at the cost of wasting resources. Not sure what it would take.

Classic DAO, this dovetails with the DAC well.

Any security measure we put in place needs to work against social engineering. People need to know how to use the security mechanism effectively. Should test if people follow a desired course of action afterwards to make a security measure effective. We don’t have enough time/resource to do that sufficiently right now.

Can we make mailservers able to run on smartphones with mobile internet, and relay messages for payment in a private network without internet?

The problem is high availability; Adam and the p2p team are looking into distributing storage so they don’t have this requirement.

It would be interesting, we’ve been exploring this before, helping users understand the choices they’re making and the implications of that. What’s public v private? Some work we could do there. When choices are linked to people’s personal finances, likely to get traction.

Reproducible builds, why aren’t we doing more?

Lots happening in the deterministic builds channel. Getting the app into people’s hands by means outside of the app store. It’s hard to verify that you’ve built something correctly if you don’t have a reference build to compare against. It’s difficult with the way we build Status currently.

Is this how Linux works, always referring back to a reference build?

Should be able to build something by yourself, then look at the final hash published by the company that built it, to validate that things worked as they were intended to. Should be able to check that it matches the reference build. If people are using a forked version of Status to make their own modifications (something we want to enable), they would have to signal their own specific build fingerprint. Status should provide a build fingerprint. If people get something that’s different, they should know that something is wrong.
The app store functions as a signal of correct builds.
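Verifying a reproducible build boils down to hashing your own artifact and comparing against the published reference hash. A minimal sketch (file names and contents here are made up for the demo):

```python
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 of a build artifact, streamed in chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: two builds that produced identical bytes yield identical fingerprints.
with open("build_a.bin", "wb") as f:
    f.write(b"deterministic build output")
with open("build_b.bin", "wb") as f:
    f.write(b"deterministic build output")

match = fingerprint("build_a.bin") == fingerprint("build_b.bin")

# In practice you would compare your local fingerprint against the hash
# Status publishes (and ideally a multiparty signature over that hash);
# any mismatch means the artifact is not the build it claims to be.
```

This only helps if builds are deterministic: one flipped timestamp or path embedded in the binary breaks the comparison, which is why the build process itself has to be reproducible first.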

There’s a lot we could do here. What do you think about focusing the problem on the basics first, e.g. a pyramid of pain/shame?

Trying to figure this out. A lot of resources aren’t made for orgs like this. Finding the right analogies has been difficult, particularly because as you go towards decentralisation, you push responsibility for security towards the edges (i.e. the user). Status’s tooling helps with the lower end of the pyramid, but education is the thing that mitigates the top end of the pyramid, the behaviour of the attackers.

Action item - we need to be more proactive in education efforts. Efficient documentation, public facing communication, find out how we can use the real estate inside our own application to amplify these efforts.

We should make nice graphics that show people the implications of security.

Need to integrate the implications into the user interface itself. Agree with education of contributors, but wrt education of end users - it’s not sufficiently effective to make sure our security measures will work. I learn things, but may still repeat behaviours. Education does not always guarantee the right behaviours in the moment.

Would short messages and images help explain implications?

It’s useful and a good first step. Wouldn’t want us to assume that because we told people so, they now understand and we absolve ourselves of further responsibility. There’s more we can do in the interface itself.

MyEtherWallet and MyCrypto as role models - they made a really scary message with a lot of text, and made the user go through the process of understanding. Maybe we can adapt that, or make it easier to understand with e.g. video/graphics. We need to make our builds deterministic and make things safe on our website, but the main attack vector is scammers. How do we prevent people from sending money to scam ICOs?

Social engineering aspect of security here. Is this something we want to take on? Requires extensive resources.

Respect MyCrypto for their heavy allocation of resources to educate users. We need to help users feel safe using Status. We have an obligation to educate them, and mitigate obvious scams.

Very real problem. It used to be the case that we had Slack completely open. Had a lot of scammers, and people lost tens of thousands of dollars clicking on scam links. This led to the closing of Slack, and other measures. As we move to Status, this could become a problem again. This was a real problem, and people lost real money.

We’re facilitating money flows, and if problems like that happen within our app, that’s an issue. May have to resort to annoying text that educates users, e.g. like MyCrypto.

Action item - design some of these approaches and get them into user testing?

Extremely time intensive. The question is, is this something we want to do? If so, let’s prioritise it over e.g. localisation and market targeting.

We should consider that many users are coming from a TRUSTFUL, AUTHORITATIVE model to a new TRUSTLESS, UNPERMISSIONED model.

Can we also think about special modes? Some features, e.g. push notifications, are not encrypted. Can we make modes where, if selected, features that lack full security are switched off? With a paranoid mode enabled, you could choose not to enable push notifications etc. There could be a midway mode with all features enabled, and a warning about what’s not secure.

We were talking previously about revolution mode. Ned’s suggestion of using colour, shape, and information to delineate the features that people are enabling.

Not sure what we can do to show someone that push notification is enabled, this is a design question. Like Hester mentioned, there is a difference between being informed and acting on it. When I used MyCrypto, I skipped through. I know that’s my problem, but people may do that. Having security modes built into the app helps us provide them with our knowledge of the app, and they can trust in the mode they select based on how we present it to them.

Users may not want to think about mailservers. We should group secure/not secure features, and help users make the choice.

There will always be scammers, and we can’t moderate this as it involves censoring discussion. In the places where we can act, we should help signpost things like paranoid mode. Let’s not waste users’ time trying to understand security.

A general inventory and review of dependencies would be better.

Wall of shame: lack of clear understanding and rationale of dependencies, leading to a big and haphazard surface area

See “Sensitive data is shown in TF logs” (status-im/status-mobile issue #5567 on GitHub) and