Decentralization - making each subsequent key proportionally less effective

Many proposals have been brought forward for improving the decentralization of the network. Here is another one. The calculations shown are simplistic; an actual implementation will require fundamentally changing how the effective bid is calculated.

Approaches:

  • Approach 1 - Penalize subsequent key effectiveness by a fixed amount - The first key works much like today (aside from how its effective bid is calculated). Key #2's bid would be higher by xxxx ONE (a fixed amount calculated as a percentage of the first key's bid), key #3's bid would be higher by another xxxx ONE, and so on.
  • Approach 2 - Penalize subsequent key effectiveness by a fixed % - The first key works much like today (aside from how its effective bid is calculated). Key #2's bid would be higher by x%, key #3's bid would be higher by another x%, and so on.
  • Approach 3 - To keep outcomes simpler and more in line with the current implementation, calculate a single effective bid that is the average of all the per-key bid values from Approach 1 or Approach 2. A rough sketch of all three approaches follows this list.
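A minimal sketch of how the three approaches could be expressed, assuming the per-key penalty in Approach 2 compounds key over key. The function names, amounts, and percentages here are made up for discussion, not the actual implementation or the spreadsheet's formulas:

```python
# Illustrative sketch only: the function names, amounts, and percentages below are
# assumptions for discussion, not the actual protocol change or spreadsheet formulas.

def bids_fixed_amount(base_bid, num_keys, step_one):
    """Approach 1: every subsequent key's bid is raised by a fixed amount of ONE
    (e.g. a fixed percentage of the first key's bid)."""
    return [base_bid + step_one * k for k in range(num_keys)]

def bids_fixed_percent(base_bid, num_keys, step_pct):
    """Approach 2: every subsequent key's bid is raised by a further percentage
    (assumed here to compound key over key)."""
    return [base_bid * (1 + step_pct) ** k for k in range(num_keys)]

def averaged_bid(bids):
    """Approach 3: a single effective bid taken as the average of the per-key bids."""
    return sum(bids) / len(bids)

# Example with made-up numbers: 10,000 ONE base bid, 5 keys.
print(bids_fixed_amount(10_000, 5, step_one=100))    # [10000, 10100, 10200, 10300, 10400]
print(bids_fixed_percent(10_000, 5, step_pct=0.01))  # 10000.0 ... ~10406.04 (compounding 1%)
print(averaged_bid(bids_fixed_percent(10_000, 5, 0.01)))  # ~10202.01
```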

This may be a complicated mathematical change, but it will discourage validators from running more than a certain number of keys. Delegators will also be discouraged from staking with large validators, as their returns will be lower.

Let me know what your thoughts are.

Read the summary of each model to know the eventual impact on # of keys and effective stake.


Interesting take @SmartStake. I'm not sure I follow it completely.

So in each case the highest bid (bid #1) stays the same but as you go down the ranking each consecutive lower bid gets an increase in its effective stake?

My fear with this is that larger stakers can calibrate their bids to be at the lower end of the spectrum more easily than smaller stakers. So the larger stakers may get more benefit from this.

The spreadsheet provides a simplistic calculation. The first bid will also be auto-calculated, just like today.

What happens today:

  • Everybody tries to have as low a bid as they can, depending upon their comfort level or level of automation.

What will happen with the change:

  • In all models, a validator at the 300 million level today would lose 2 to 7 keys, depending upon the level of bid appreciation per key that you choose to implement. Even if a validator tries to run too many keys, the extra keys will fall out of the effective range (see the sketch after this list). The way I see it, any way you play it, the net result is that the validator will have at least one fewer key (if the bid appreciation factor is 0.005%) or six fewer keys (if the bid appreciation factor is 0.02%).
  • If you would like, I can connect with you over zoom and explain my thinking.
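
To make the dropout mechanic concrete, here is a rough sketch of how compounding per-key bid appreciation would push later keys above an effective-range ceiling. The ceiling, base bid, and resulting key counts are placeholders invented for illustration; they are not the spreadsheet's figures or actual protocol parameters:

```python
# Hypothetical illustration of the dropout effect: the ceiling, base bid, and key
# counts below are placeholders, not the spreadsheet's figures or protocol values.

def keys_within_ceiling(base_bid, appreciation, ceiling, max_keys):
    """Count how many keys keep their bid at or below an assumed effective-range
    ceiling when each subsequent key's bid compounds by `appreciation`."""
    count, bid = 0, base_bid
    while count < max_keys and bid <= ceiling:
        count += 1
        bid *= 1 + appreciation
    return count

# The two bid appreciation factors mentioned above (0.005% and 0.02% per key),
# against an arbitrary ceiling 0.2% above the base bid.
for factor in (0.00005, 0.0002):
    print(factor, keys_within_ceiling(5_000_000, factor, 5_010_000, max_keys=100))
# -> 40 keys fit at 0.005% per key, only 10 at 0.02% (with these placeholder numbers)
```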

I like the idea of pushing validators to use as few keys as possible, but I feel it is too complicated a calculation. Having multiple BLS keys is already complicated for validators, and now this would be even more complicated.