Currently the combined databases (shard 0 plus whichever shard the validator is operating on) are around 250 GB, and the shard 0 DB grows by roughly 1.5-2 GB a day. To operate safely, without the disk filling up and the node ceasing to sign, a validator must stay at least about 50 GB ahead of the DB size. I can't speak to every cloud provider out there, so I will use Digital Ocean as the cost example. The only basic shared-CPU plan with adequate space (for now) is the most expensive option: $96 a month for 320 GB. Soon, however, that will not be enough, and validators will have to attach a mounted volume, which raises costs further (every 10 GB of space added is $1 a month). The other option is the dedicated server plans, which run about $110 a month for a 300 GB mounted drive (same $1 per 10 GB). There are servers of various specs with the option to add volumes, but the premise is the same: the shard 0 DB keeps growing rapidly every day.
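To make the pressure concrete, here is a rough back-of-envelope sketch using the figures above (250 GB used, 1.5-2 GB/day growth, 50 GB safety buffer, $1/month per 10 GB of added volume); the function names are just for illustration:

```python
# Days until free space drops below the safety buffer, given the
# figures quoted above (all assumptions, not official numbers).
def days_until_buffer_hit(disk_gb, used_gb, buffer_gb, growth_gb_per_day):
    slack_gb = disk_gb - used_gb - buffer_gb
    return slack_gb / growth_gb_per_day

# 320 GB droplet, 250 GB already used, 50 GB buffer:
print(days_until_buffer_hit(320, 250, 50, 2.0))  # 10.0 days at 2 GB/day
print(days_until_buffer_hit(320, 250, 50, 1.5))  # ~13.3 days at 1.5 GB/day

# Mounted-volume cost at roughly $1/month per 10 GB:
def volume_cost_per_month(extra_gb, usd_per_10gb=1.0):
    return (extra_gb / 10) * usd_per_10gb

print(volume_cost_per_month(100))  # 10.0 dollars/month for another 100 GB
```

So even the largest shared-CPU plan only buys a week or two of slack at current growth rates before a volume has to be attached.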
These were just Digital Ocean prices, but costs rise steadily as the DB grows. If validators run a second server for redundancy's sake, to ensure a good signing rate, that doubles the monthly cost. New validators run at 0% commission (if they want any chance of competing against other validators), so all server costs come out of pocket. Many of them are now in their second month unelected, and even after getting elected they usually have to keep fees at 0% for as long as they can to remain competitive.
Ultimately, we are asking whether it is feasible to reduce the size of the shard 0 database, so that validators don't walk away because of rising costs they cannot sustain.
I hope I have answered your question! Let me know if you need any more details and I will do my best to provide them.
@rongjian already replied above and hopefully this can be prioritized.
Let me share here, for the benefit of all validators, some good options that still work (you may not get 100% uptime, but 99.9% is definitely possible):
Contabo VPS instances use shared resources, but the VPS L or XL tiers give you a large disk and hopefully enough shared CPU/memory. The XL is the one I believe can get you to 99.9%: https://contabo.com/en/vps/vps-xl-ssd/?addons=1435&image=ubuntu.267&qty=1&contract=1&storage-type=nvme-storage-extension-vps-xl at €32 (before tax). Their dedicated-resource offering (VDS) is a bit too expensive for now, but a Contabo VPS M or L could be a good backup node; the primary should be fine on the XL with its NVMe disk.
These are good solutions; we should probably start collecting them for everyone's benefit, considering that we don't yet have a validator map of the available data centers where new validators could look for open slots.
Hi, as of Nov 12th 2021 the shard 0 db_0 size is about 337 GB and still growing at a very fast rate. I didn't anticipate this and run my validator on a 512 GB SSD. Is pruning still a work in progress, and is there an estimate of when it will be ready? I'd just like to know so I can prepare a migration to a 1 TB drive before the remaining space (about 90 GB) is used up. Thank you for the answer.
It all depends on how many transactions there are per day; currently the DB can grow anywhere from 3-5 GB a day from what I've seen. So if the upcoming minimum gas fee increase doesn't slow the number of transactions, I'd estimate about 3 months?
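For anyone doing the same estimate on their own disk, a minimal sketch using the figures from the posts above (~90 GB free, 3-5 GB/day growth; the slowed-growth rate in the last line is purely a hypothetical assumption):

```python
# Days of runway given current free space and daily DB growth.
def runway_days(free_gb, growth_gb_per_day):
    return free_gb / growth_gb_per_day

print(runway_days(90, 3))  # 30.0 days at the low end of observed growth
print(runway_days(90, 5))  # 18.0 days at the high end
print(runway_days(90, 1))  # 90.0 days IF the gas fee change slowed growth to ~1 GB/day
```

At today's rates the runway is measured in weeks, not months, so planning the migration early seems prudent.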
Please excuse my exasperation… I know we’re all in this together but:
db_0 on my nodes is at 723 GB and now I'm out of space. Yet the official Harmony guide still recommends 486 GB for shard 0 (as of 29/11/2021) without discussing any growth figures.
This spec discrepancy, coupled with my disk space issue, will probably trigger a search for a more cost-effective VPS provider now, one that can be more flexible where frequent disk upgrades are concerned.
In the meantime:
is there any recourse for pruning db_0 on my nodes?
If not, then if I delete db_0 and resync to get the 'smaller' copy, would that get me up and running again?
… and would I need to do this again further down the line?
I must say that thankfully we're nowhere near being elected at the moment.
But I'm sure other folk are, and are probably in a worse place right now.