Rclone Database Shard 0 vs Established Node Shard 0 sizes


This week we’ve had many nodes fill up on disk space.

I’ve been running a lot of node-loading tests, and over the last few weeks I’ve noticed that a fresh rclone of shard 0 now takes up 167GB of storage.

On an old node I have been running for 2+ months, shard 0 is only at 130GB. On a server I set up a month ago, it’s at 155GB.
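To compare these footprints yourself, you can sum the file sizes under the shard DB directory. Here is a minimal sketch; the directory name `harmony_db_0` is an assumption based on the node's default data layout, and the demo runs against a stand-in temp directory so the snippet is self-contained:

```python
import os
import tempfile

def dir_size_bytes(path):
    """Sum the sizes of all regular files under `path` (recursive)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

# Stand-in for a real DB dir such as ./harmony_db_0 (path is an assumption).
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "000001.sst"), "wb") as f:
    f.write(b"\0" * 1024)

print(f"{dir_size_bytes(demo) / 1024:.0f} KB")  # → 1 KB
```

On a real node you would point `dir_size_bytes` at your shard 0 data directory instead of the temp stand-in.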

Here are a few questions pertaining to storage that some validators have had:

1. Is there anything that can be done to reduce the storage needed for a fresh rclone?
2. Should we all plan on much more storage space being the norm for a validator going forward?
3. Will anything be done to reduce the size of the database a validator needs to hold?

Thank you,

Patrick @ Easy Node


@leo @rongjian :point_up_2: Can you say something about this issue?


I would also like to know, as would several other validators I have spoken with.

1. The rclone db may not be the cleanest. @leo, maybe we can refresh the rclone db with a smaller version, like the one at 130GB?

2/3. In the short term, yes, the db will grow as long as txn volume stays high. Eventually, though, we will implement a pruned version of the db to cap the storage needed for validators.
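The idea behind a pruned db can be sketched as height-based retention: keep only the most recent N blocks' data and drop everything older, which caps storage. This is a minimal illustration only, not Harmony's actual implementation; the class and parameter names are hypothetical:

```python
from collections import OrderedDict

class PrunedStore:
    """Toy block store that retains only the newest `keep_blocks` entries."""

    def __init__(self, keep_blocks=128):
        self.keep_blocks = keep_blocks
        self.blocks = OrderedDict()  # height -> block data, oldest first

    def put(self, height, data):
        self.blocks[height] = data
        # Evict the oldest blocks once the retention window is exceeded.
        while len(self.blocks) > self.keep_blocks:
            self.blocks.popitem(last=False)

store = PrunedStore(keep_blocks=3)
for h in range(10):
    store.put(h, f"block-{h}")

print(sorted(store.blocks))  # → [7, 8, 9]
```

A real pruned node is far more involved (state trie references, snapshots, archival peers), but the storage cap comes from the same retention principle.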


@slugo_slugom_crypto we’ll circle back with an estimated completion date by the end of the week. Hang in there.


I’ve seen something similar with my nodes. Here are a few things we’ll be working on, Patrick @slugo_slugom_crypto:

  • Deep-dive into the differences between the DBs and investigate the root cause
  • Create a fresh copy from block 0 (this will take a week)
  • (bonus) Have the rclone db snapshot occur more frequently (e.g. 3x per day)

This will take us 2-3 weeks to resolve. The current copy should still work; we suspect the pruning isn’t running as consistently as it should.


I’ve synced 170GB so far, on day two of syncing. Will I still be able to validate even if I don’t have the full db?

It depends … on a non-shard-0 node you could validate, but if your node is on shard 0 it won’t validate until you are fully synced.

@slugo_slugom_crypto @harmonious … thanks to @sophoah for restoring a fresh copy, we now have a cleaner shard 0 db. Closing the loop here based on confirmations on Discord.