When converting between the bits field (aka target) and difficulty, it is important not to introduce a round-off error that is consistently in the same direction. For example, if the previous difficulty was 100,000, nothing in the code should make it consistently +50 too high or consistently -50 too low (a 0.05% error). That would cause the EMA at N=70 to have about a 3.5% error in solvetime. At 0.5% error per block, there is about a 35% error in the solvetimes (difficulty is roughly 30% too high or low). The error that develops from 0.5% per block with N=70 seems to compound to about 1.005^70 = 41%. Larger N means a larger error from round-off. If the value is +1,000 too high half the time and -1,000 too low the other half, that's OK; just don't be consistently wrong in the same direction. Error in the value used for e = 2.7183 does not hurt it.
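As a rough illustration (my own sketch, not code from this issue), here is how a one-sided 0.5% per-block error, such as a biased bits-to-difficulty round-off, compounds over an EMA window of N=70, while an error that alternates sign roughly cancels:

```python
# Sketch: a consistent per-block relative error compounds multiplicatively over the
# EMA window, while an alternating error averages out. Numbers are illustrative only.

N = 70        # EMA window length (blocks)
bias = 0.005  # 0.5% per-block relative error

# Consistently wrong in one direction: errors compound over ~N blocks.
consistent = (1 + bias) ** N
print(f"consistent +0.5%/block over N={N}: {100 * (consistent - 1):.1f}% error")   # ~41.8%

# Alternating +/-0.5%: each pair multiplies to (1+bias)*(1-bias) = 1 - bias^2 ~ 1.
alternating = ((1 + bias) * (1 - bias)) ** (N // 2)
print(f"alternating +/-0.5%/block over N={N}: {100 * (alternating - 1):.3f}% error")  # ~-0.09%
```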
In a simple moving average (SMA), a multiplicative error like this only causes a proportional error in solvetime, not a compounded one. In the EMA, the error compounds only over N blocks, not over the total number of blocks.
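One way to see this (my own rough model, assuming the EMA's memory of a past error decays like e^(-k/N) after k blocks): a persistent per-block bias b accumulates to roughly b*N in log-space and then saturates, giving about (1+b)^N overall, while the SMA's error stays at b:

```python
import math

N = 70
b = 0.005  # per-block relative bias

# EMA: an error injected k blocks ago is still weighted by ~exp(-k/N), so a persistent
# bias accumulates to ~ b * sum_k exp(-k/N) ~ b*N in log-space, i.e. ~(1+b)^N overall,
# and then saturates rather than growing with the total number of blocks.
accumulated_log_error = sum(b * math.exp(-k / N) for k in range(10 * N))
print(f"EMA steady-state error ~ {100 * (math.exp(accumulated_log_error) - 1):.0f}%")  # ~42%

# SMA: every stored difficulty is off by the same factor (1+b), so the average, and
# hence the implied solvetime, is off by just b (~0.5%), a proportional error only.
print(f"SMA error stays ~ {100 * b:.1f}%")
```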
There is a T/solvetime ratio in two places, and it must be the same in both places. I don't know how the code could end up giving two different values, but I want to keep an eye out for anything fragile there.
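For reference, a hedged sketch (the exact algebraic form of the EMA update is my assumption, not quoted from this issue) of an update in which the T/solvetime ratio appears in two terms; computing it once and reusing the same value keeps the two occurrences from diverging:

```python
import math

def ema_next_difficulty(prev_difficulty, solvetime, T=600, N=70):
    """Sketch of an EMA difficulty update (assumed form) where the T/solvetime
    ratio appears in two terms; it is computed once so both places see the
    identical value."""
    solvetime = max(solvetime, 1)            # guard against zero or negative solvetimes
    ratio = T / solvetime                    # the T/solvetime ratio, computed once
    decay = math.exp(-solvetime / (T * N))   # weight carried over from the previous block
    # Both occurrences of the ratio below come from the single `ratio` variable.
    return prev_difficulty * (ratio + decay * (1 - ratio))
```

With solvetime = T this returns prev_difficulty unchanged; faster blocks raise difficulty by roughly (1 + 1/N) per block and slower blocks lower it, which is the expected EMA behavior.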