Full Status Archive Nodes - A path to independence and true decentralisation

We know we’ve got a problem

Status is a decentralised project with centralised solutions. Proper decentralisation is a hard problem to solve. This post doesn’t solve any of it.



Status nodes need to be able to send transaction data to, and receive state data from, the Ethereum network. Status nodes also need to be super light, because the devices Status nodes run on are expected to be very resource limited. This means that a Status node has to rely on a trusted node to relay its transactions and answer its state queries.

Currently Infura is the solution. Infura sells us access to trustable node endpoints, and in return for convenience we give up our independence.

Light nodes don’t fix it … Yet

I know what you’re thinking: why don’t we just implement light node functionality? That’s a thing, right? It is a thing, but it doesn’t solve our problem. Currently all light node systems basically boil down to a server/client relationship: one node has a fully synced state and a connection to Ethereum network peers (a full node), and the light node trusts the full node to send data.

The only way to make light nodes a truly scalable option is to develop incentivisation mechanisms that encourage and reward good actors and deter and punish bad actors. This is hard; we can discuss it in another post about “state data for sale”. Put simply, Infura is state data for sale, along with transaction propagation for sale, where only Infura gets paid. There’s always a price for data, and someone always has to pay it.

The thing I actually wanted to talk about

Basically, we need functionality beyond what Infura offers, we want to be independent of Infura, and a truly decentralised solution is not yet here.

What other solution could possibly solve our problem?

:tada: Status managed Ethereum nodes :tada:

This is probably old news to most of you, but the needs are still there. I made a picture to demonstrate the functionality for a pending transaction context, but the basic idea applies here.

After speaking with @jakubgs, he very helpfully provided initial costing for this kind of setup: Infra Costs for Pending Transactions Experiment - CodiMD. We’ve estimated that an in-house solution would cost $800 in monthly infra costs.

FYI the server costs are not a blocker here; following from a conversation, @jarradhope has highlighted:

The $800 monthly infra costs aren’t really an issue nor is the approach of creating an infura alternative, the real issue in terms of cost is who is going to build it, maintain it and how long will it take. These were questions we raised last time during a strategy sync and I didn’t get an answer on it. It’s difficult to justify a new hire to build and maintain this. Last time we discussed this both Jakub and Corey were maxed out in terms of workload. How are things now?

So the main blockers are, with my weak answers:

  • Who will build the solution?
    • I can write the software for this; most of the work is connecting raw nodes up to Redis and caching queries.
  • Who will maintain the solution?
    • Probably Jakub… But I’m sure he’ll have an opinion on that
  • How long will building the solution take?
    • I don’t know how long the full solution would take. I believe that it would take about 2 weeks to implement a basic first stage, for querying pending transactions.
  • Will the team need a new hire?
    • I don’t know, more input is needed.
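As a concrete illustration of the “connecting raw nodes up to Redis and caching queries” point above, here’s a minimal sketch of a caching JSON-RPC proxy. A plain dict stands in for Redis, and the cacheable method list and TTLs are my assumptions, not a design decision:

```python
import json
import time

class CachingRpcProxy:
    """Sketch: cache idempotent JSON-RPC responses in front of a raw node.
    A dict stands in for Redis; production code would use redis-py instead."""

    # Methods whose results only change per block: short TTL (assumed values).
    CACHEABLE = {"eth_getBalance": 12, "eth_call": 12, "eth_getTransactionCount": 12}

    def __init__(self, backend):
        self._backend = backend   # callable(method, params) -> result
        self._cache = {}          # cache key -> (expires_at, result)

    def request(self, method, params):
        ttl = self.CACHEABLE.get(method)
        if ttl is None:
            # Non-cacheable calls (e.g. eth_sendRawTransaction) pass straight through.
            return self._backend(method, params)
        key = method + ":" + json.dumps(params, sort_keys=True)
        hit = self._cache.get(key)
        if hit and hit[0] > time.time():
            return hit[1]
        result = self._backend(method, params)
        self._cache[key] = (time.time() + ttl, result)
        return result
```

Two identical balance queries inside the TTL window would then cost one upstream request instead of two, which is the whole point of putting the cache in front of the raw nodes.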

Purely economically, I expect that if (cost of implementation + cost of maintenance) <= Infura rental, and implementation can be achieved in a reasonable time frame, then in-house will be “a go”; if not, that probably means no in-house.
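That decision rule can be written down as a tiny helper. The idea of amortising the one-off implementation cost over a planning horizon is my own assumption; the post leaves the time frame open:

```python
def in_house_is_a_go(impl_cost: float, monthly_maintenance: float,
                     monthly_infura: float, horizon_months: int) -> bool:
    """Decision rule from the post: in-house is 'a go' when its total
    monthly cost does not exceed the Infura rental.

    The one-off implementation cost is amortised over horizon_months,
    which is an assumed parameter, not something the post specifies."""
    monthly_in_house = impl_cost / horizon_months + monthly_maintenance
    return monthly_in_house <= monthly_infura
```

For example, with zero build cost and the $800/mo infra estimate against a $1000/mo Infura bill, in-house wins; add a meaningful engineering cost and it quickly flips, which is exactly why the “who builds and maintains it” questions dominate.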

Help needed

I need help answering the above blocker questions. I am also looking for opinions on my proposed roll-out for the in-house solution:

  • Implement a light experimental solution for handling only ephemeral pending transaction data.
    • This reduces the cost of maintaining state and syncing nodes, and keeps usage almost entirely in memory
  • Implement a production solution for handling only ephemeral pending transaction data.
  • Revisit the impacts and cost after pending tx have been:
    • implemented experimentally
      • Probably after 2 weeks
    • implemented in production
      • Probably after 1 month, in version 1.5
      • again after 2 months.
  • Get more concrete estimations on engineer costs for a full Infura replacement
    • Possibly this needs to be done before the experimental stage
  • Identify the migration strategy (from Infura usage to in house)
  • Implement nodes with full sync and state
  • Implement most common RPC queries
  • Implement automated processes for
    • New node syncing
    • Node version upgrades
    • Node recovery
  • Implement remaining RPC queries
  • Implement truly decentralised light node functionality
  • Retire in house node infra

Thanks for the write up.

Before making a decision (and going into technical details), I think it would be good to clearly state why, maybe as a separate point to give it more visibility.

As far as I can see, the main reasons detailed in this post are:

  1. Pending transactions
  2. Independence from Infura

I would maybe further articulate why dependence on Infura is worse than running our own nodes. I can see some points, which essentially boil down to reliance on third parties:

  • This is core to Status, and core functionality as a general rule should be in-house
  • Once you have this set up, it is easier to extend

Another thing that I would add is that we are actually rate-limited by Infura, so we might reach the cap as users are growing, and this to me is a very compelling reason.

Overall I am always a bit wary of bringing stuff in-house, so I don’t really know which way to go, but if we add a few more reasons and expand on the above, I feel that we can make a stronger argument for it.


Just one reason could be extending the API: for users who run a Status node we could use it in status-react, and instead of hundreds of requests when fetching transaction history or token balances we could have only one request.


Can you expand on the additional functionality?

The fact that it’s core to status, at least to me, makes a compelling enough argument, barring pricing concerns.


Another thing that I would add is that we are actually rate-limited by Infura, so we might reach the cap as users are growing, and this to me is a very compelling reason.

This is the core issue imo. I’d love to see more detailed info and capacity planning here, as this would be a bottleneck for user growth. Specifically:

  1. How many requests do we do for a given amount of usage (e.g. DAU)?

  2. What’s our current plan?
    a) how much does it cost?
    b) when do we get rate limited?
    c) how flexible is it?

  3. If we x10, x100 and x1000 DAU, how do things change?
    a) in terms of cost?
    b) in terms of QoS?
    c) what are our options in terms of contract change / flexibility?

  4. Another idea that was posed at a previous Status meetup is generating Infura accounts for each user. Has the feasibility of this approach been tested?

If we have answers to these questions, we know how much we should prioritize projects like the one in OP.

@andrey made this comment on discord

also regarding infura, we use Growth $1000/mo plan, with 5,000,000 Requests/Day, for example for last 24 h we have 1,322,510 Requests

Looking at the Infura plans https://infura.io/pricing we use the highest level of advertised plans.
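Putting the quoted numbers together gives a rough sense of the headroom, under the assumption that requests scale roughly linearly with DAU:

```python
# Rough headroom check against the Infura Growth plan quoted above.
PLAN_CAP_PER_DAY = 5_000_000    # Growth plan, $1000/mo, requests/day
OBSERVED_PER_DAY = 1_322_510    # last-24h figure from @andrey's comment

# How many times current daily traffic fits under the plan cap.
headroom = PLAN_CAP_PER_DAY / OBSERVED_PER_DAY   # roughly 3.8x

# If requests grow linearly with DAU (an assumption, not a measurement),
# then around 3.8x today's usage exhausts the largest advertised plan.
```

So an x10 DAU scenario from question 3 above is already past the advertised plan ceiling, which would mean custom contract terms or our own infra.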

Not a problem. @andrey actually touched on this in his comment but I’ll expand on this.

Currently, to get an account balance we need to make an RPC call for the ETH balance; this is protocol-level data attributed directly to the account hash.

However, users also want to know which ERC-20 compatible tokens and NFT tokens their account controls. Doing this requires a contract call for each contract that manages a token the user could be interested in. We cannot know in advance which contracts hold tokens for the user, so we need to query a bunch of them. If the user wants to recheck balances for a single account, it takes multiple additional RPC calls, because contract data is not directly associated with the address hash at the protocol level.
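To make the fan-out concrete: each token balance check is a separate eth_call whose data is built from the standard ERC-20 balanceOf(address) selector, so there is no protocol-level way to ask for “all balances” at once. A minimal sketch of the call data:

```python
def balance_of_calldata(holder_address: str) -> str:
    """Build the eth_call data for the ERC-20 balanceOf(address) query.

    0x70a08231 is the standard 4-byte selector for balanceOf(address)
    (first 4 bytes of keccak256 of the signature); the argument is the
    holder address left-padded to 32 bytes. One such call is needed per
    token contract, which is why balance checks fan out into many RPC
    requests unless an indexer collapses them into one API call."""
    selector = "70a08231"
    arg = holder_address.lower().removeprefix("0x").rjust(64, "0")
    return "0x" + selector + arg
```

An indexer on our own nodes would precompute these per-contract answers so the client makes one request instead of one per candidate contract.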

Users expect that if someone “sends” them tokens, the tokens will appear in their wallet. Users do not understand, or want, to manually find the contract address for their tokens and then add the contract details to their client installation. We need to do this for them, but we can’t know which contracts in advance.

So a very important scaling solution is to index all of the block state and associate any contract that follows the ERC-20 / NFT compatible signature and holds a balance for an address hash with that address hash. This will allow us to make a single API call to get all the balances and NFT data that a user’s address hash controls.

Makes sense. This is part of the motivation behind this issue in specs: Implement indexing layer · Issue #133 · status-im/specs · GitHub

Ah, we are on the same page. This is what I meant in my paragraph about light nodes.

The very hard part is creating incentive mechanisms and guaranteeing data integrity. Currently our only solution is to trust “nodes” we have a high degree of confidence in.

As @andrey has stated we can index account data to allow us to make a single API call in place of hundreds of API calls.

Also, if we control our own nodes we can closely monitor contract Events. This means we could create a kind of bot service that watches for token transfers and notifies users, via mobile notifications, about incoming transactions. This could be a paid SNT utility feature.
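For illustration, spotting incoming ERC-20 transfers in raw logs is mostly topic inspection. The sketch below hardcodes the well-known Transfer event topic so it needs no dependencies; the log shape mirrors what eth_getLogs returns:

```python
# keccak256("Transfer(address,address,uint256)") — the standard ERC-20
# Transfer event topic, hardcoded here to stay dependency-free.
TRANSFER_TOPIC = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"

def incoming_transfer(log: dict, watched_address: str):
    """Return (token_contract, sender) if this log is an ERC-20 Transfer
    into watched_address, else None.

    For a standard Transfer event, topics[1] and topics[2] are the indexed
    from/to addresses, left-padded to 32 bytes, so the address is the last
    40 hex characters of each topic."""
    topics = log.get("topics", [])
    if len(topics) != 3 or topics[0] != TRANSFER_TOPIC:
        return None
    to_addr = "0x" + topics[2][-40:]
    if to_addr.lower() != watched_address.lower():
        return None
    sender = "0x" + topics[1][-40:]
    return (log["address"], sender)
```

A notification bot would run this match over new logs per block and push a mobile notification on a hit; the monetisation layer @petty has planned would sit on top of exactly this kind of filter.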

Building on the contract Events monitor @petty has developed plans for a fully fledged monetisable product.

Yes, this is an important point. There is another concern about third parties: you need to trust them.

  • Trust them to
    • provide accurate data
    • relay transactions
    • accurately estimate and bill you for your usage
    • not go bust
    • not collect fingerprinting data
    • provide features you need
    • not sell your request data
    • not be bought by an evil organisation
    • not suspend your service for a change in terms that you now technically break

This is an interesting solution, I don’t believe it is feasible though.

An Infura free plan gives the following:

  • Ethereum Mainnet and Testnets
  • 100,000 Requests/Day
  • 3 Projects
  • Community Support Forum

So this certainly meets the expected usage of a user.

The problem is that to get access to this you need to register an account with an email and then manually create your endpoints, so this would only be technically possible with automation.

The main problem is that this approach would almost certainly be perceived by Infura as a circumvention of their business model. Infura wants to make money, and if we do this Infura will not make money. This would jeopardise our current and future access to their service.


One thing to keep in mind is that if Status starts making full nodes publicly available, infura-style so they can be accessed from say… mobile phones over the web for free, it’s quite possible we’ll see a migration of other applications from infura to the status nodes, for natural reasons.

@cammellos I have some vague recollection that there was an effort to reduce the number of requests going out from the app - 1.3M requests for whatever user count we have right now seems to be on the high side - what happened with that effort? did it perhaps also result in some metrics on the kinds of requests that the app makes? This would be excellent data to use for prioritising research in nimbus / stimbus.


This is a concern. In my mind the best solution would be to implement a key-signature handshake before a connection is accepted. This doesn’t make it impossible to connect to the Status node, as the exact steps can be sniffed and reverse engineered, or discovered in our source code; the approach just makes connecting a non-trivial thing.
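As a sketch of the handshake idea, here is a shared-secret challenge-response using stdlib HMAC. The real design would presumably sign with the user’s keypair rather than an app-wide secret; the shared secret here is purely an illustrative assumption (and, as noted, anything shipped in the client can ultimately be extracted):

```python
import hashlib
import hmac
import secrets

def new_challenge() -> bytes:
    # Server issues a fresh random nonce per connection attempt,
    # so a captured response cannot be replayed later.
    return secrets.token_bytes(32)

def sign_challenge(shared_secret: bytes, challenge: bytes) -> bytes:
    # Client proves knowledge of the secret without sending it over the wire.
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def accept_connection(shared_secret: bytes, challenge: bytes,
                      response: bytes) -> bool:
    # Constant-time comparison to avoid leaking information via timing.
    expected = sign_challenge(shared_secret, challenge)
    return hmac.compare_digest(expected, response)
```

This raises the bar from “copy the endpoint URL” to “reverse engineer the handshake”, which is all the proposal aims for.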

One thing I would say is that there isn’t currently anything preventing someone from taking our Infura endpoint and using that for whatever purpose.

I have to say, I agree with you. I wonder if someone is already using our endpoint for their own purposes.


Actually now I’m really suspicious that this is happening. We can add this concern to the pile of reasons why managing our own nodes is better.


It is very high given current usage.
We did some work which, as I understand it, greatly reduced the number of queries we make; @roman worked on it and has more details.
Out of curiosity, I can see we use the same project ID for tests/testnets in infura.
Do we know if those count towards the same rate limit, or is there a separate count for ropsten/goerli/mainnet?

If there’s a single count, we might be wasting rate limits during e2e tests, which would explain the high count.

Another issue is that the token is public, and anyone can just use it.


Follow up after implementation of different build endpoints

Stats after the merge of the different build endpoints:

| Date       | Total Requests | Status.im (Production) | Status.im (dev build) |
|------------|----------------|------------------------|-----------------------|
| 2020-06-24 | 2,276,249      | 1,886,068              | 390,181               |
| 2020-06-25 | 2,190,632      | 1,773,156              | 417,476               |
| 2020-06-26 | 2,155,242      | 1,674,808              | 480,434               |
| 2020-06-27 | 1,519,710      | 1,250,990              | 268,720               |
| 2020-06-28 | 1,218,854      | 1,051,159              | 167,695               |
| 2020-06-29 | 1,802,196      | 1,328,233              | 473,963               |
| 2020-06-30 | 1,746,937      | 1,443,264              | 303,673               |

I’ll follow up with this in a week after the currently open status-react PRs merge / rebase in status-react#10853