Status node growing storage size

Hi,
I have been running a Status node since mid-December, and its storage size is growing constantly; it is currently at 33 GB.
I thought it purged old data automatically. Should I increase the SSD size, or is there another way to cap its storage use?
And by the way, how should I update the node without interrupting it?

Thanks in advance.

The status-go node does not cap the size of its database by default. There is a setting called MailServerDataRetention that defines the number of days of data to keep, but it is not set by default.
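
For example, a config fragment that caps retention at 30 days might look roughly like this (a sketch; the exact surrounding fields depend on your setup):

```json
{
  "WakuConfig": {
    "Enabled": true,
    "EnableMailServer": true,
    "MailServerDataRetention": 30
  }
}
```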

On our servers we set WakuConfig.MailServerDataRetention to 30 days, but neither the code nor the standard node setups we recommend include a default for WakuConfig.MailServerDataRetention. I guess we should, so I’ve created two PRs to include that in generated configs and on the website:

https://github.com/status-im/status.im/pull/689

If you are using an SQLite database, you might have to run the VACUUM command to get it to release the disk space, unless you’re running it with auto_vacuum=FULL, which we don’t enable by default.
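
For example, with the sqlite3 CLI against the node’s database file (standard SQLite commands, nothing status-specific):

```sql
-- Check the auto-vacuum mode: 0 = NONE (the default), 1 = FULL, 2 = INCREMENTAL
PRAGMA auto_vacuum;

-- Rebuild the database file, returning freed pages to the filesystem
VACUUM;
```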

A warning for those using PostgreSQL: there is a pruning process that removes old envelopes, but that does not mean your database will release the disk space it has already acquired.
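
You can check how much space the table is holding on to, assuming it is named envelopes as below:

```sql
-- Total on-disk size of the envelopes table, including indexes and TOAST
SELECT pg_size_pretty(pg_total_relation_size('envelopes'));
```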

Based on my research in infra-eth-cluster#31 (private repo):

  • You can run `VACUUM (VERBOSE, FULL) envelopes;`, which might release some space (see the sketch below)
  • You can try to adjust PostgreSQL’s autovacuum settings so it vacuums by itself
  • Lowering autovacuum_vacuum_scale_factor from its default of 0.2 helps

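A minimal sketch of the first and third points, assuming the table is named envelopes as above:

```sql
-- Rewrite the envelopes table and return freed pages to the OS.
-- VACUUM FULL takes an ACCESS EXCLUSIVE lock, so the table is
-- unavailable while it runs.
VACUUM (VERBOSE, FULL) envelopes;

-- Per-table override: trigger autovacuum at 5% dead rows instead of
-- the 20% global default.
ALTER TABLE envelopes SET (autovacuum_vacuum_scale_factor = 0.05);
```
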
This is how our settings look:


```
waku=# SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';
                name                 |  setting  
-------------------------------------+-----------
 autovacuum                          | on
 autovacuum_analyze_scale_factor     | 0.1
 autovacuum_analyze_threshold        | 50
 autovacuum_freeze_max_age           | 200000000
 autovacuum_max_workers              | 3
 autovacuum_multixact_freeze_max_age | 400000000
 autovacuum_naptime                  | 60
 autovacuum_vacuum_cost_delay        | 20
 autovacuum_vacuum_cost_limit        | -1
 autovacuum_vacuum_scale_factor      | 0.05
 autovacuum_vacuum_threshold         | 50
 autovacuum_work_mem                 | -1
(12 rows)
```
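
To set the lower scale factor shown above globally (0.05 instead of the 0.2 default), one option, assuming superuser access, is:

```sql
-- Autovacuum settings are reloadable, so no restart is required.
ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.05;
SELECT pg_reload_conf();
```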

Oh, thanks, that was quick. :grinning: