Predicting the effect of the new difficulty algorithm

Soon BTG will switch to a new DA. It just so happens that BTC Candy copied BTG’s difficulty algorithm (which contains an error that was my fault). They have also copied and implemented BTG’s new DA, which is my LWMA that is being used on a lot of Monero / forknote / Cryptonote clones. So we might be able to get an idea of the improvement we can expect by looking at their before-and-after data. One caveat: BTC Candy has a 120-second solvetime, which gives them an advantage in difficulty. But their difficulty performance is a lot worse because they are smaller and get much bigger hash attacks.

Simple moving average N=30 with unfortunate accidental limits left over from Digishield’s code:
(the chart title says “Digishield”, but it’s not similar to Digishield because I made a major change to it)

New LWMA algorithm: (the first plot starts with the previous algorithm above)

For comparison, here’s BTG’s current algorithm performance:

All the numbers above are pretty bad. LWMA typically sees < 5% on the “blocks stolen” metric and < 1% on the “delays” metric.
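For reference, here is a minimal sketch of the LWMA calculation being discussed, in difficulty form (the function and variable names are mine, and it leaves out the integer-target arithmetic, solvetime clamping, and the small `adjust` constant that real implementations use):

```cpp
#include <cstdint>
#include <vector>

// Minimal LWMA difficulty sketch (difficulty form, not target form).
// T = target solvetime in seconds, N = averaging window in blocks.
// timestamps holds the last N+1 timestamps, difficulties the last N difficulties.
double lwma_next_difficulty(const std::vector<int64_t>& timestamps,
                            const std::vector<double>& difficulties,
                            int64_t T, int N)
{
    double weighted_solvetimes = 0;  // sum of i * solvetime_i; the most recent block gets weight N
    double sum_difficulty = 0;

    for (int i = 1; i <= N; i++) {
        int64_t st = timestamps[i] - timestamps[i - 1];
        // Real implementations clamp st (e.g. to a small positive minimum and a few*T maximum)
        // to limit timestamp manipulation; omitted here for brevity.
        weighted_solvetimes += static_cast<double>(i) * st;
        sum_difficulty += difficulties[i - 1];
    }

    // LWMA(solvetimes) = weighted_solvetimes / (N*(N+1)/2), and
    // next_D = avg(D) * T / LWMA(solvetimes), which simplifies to:
    return sum_difficulty * T * (N + 1) / (2.0 * weighted_solvetimes);
}
```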

I have been able to greatly improve the LWMA by making it “dynamic”, which means that if the last 5 blocks were solved in < 600 seconds, it switches to an LWMA with 1/2 to 1/3 of the averaging window so it responds faster. Only the pseudocode is done and it may take us a few days to get the code done. Since it’s not been tested in a live coin, it’s premature to try to get BTG to use it.
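A rough sketch of the trigger (assuming “solved in < 600 seconds” means the sum of the last 5 solvetimes; the names and the exact divisor are illustrative, not the final pseudocode):

```cpp
#include <cstdint>
#include <vector>

// Sketch of the "dynamic" window selection described above.
// If the last 5 blocks together took less than T seconds (T = 600 for BTG),
// switch from the normal window N to one 1/2 to 1/3 as long so the LWMA reacts faster.
int choose_window(const std::vector<int64_t>& timestamps, int64_t T, int N)
{
    const size_t k = 5;
    if (timestamps.size() <= k) return N;
    int64_t last_k = timestamps.back() - timestamps[timestamps.size() - 1 - k];
    return (last_k < T) ? N / 3 : N;   // e.g. 60 -> 20, matching the numbers used later
}
```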


A 5-block average is great for being more responsive to immediate effects, but is a 5-block average enough to see a “real” time, given that the internal block timestamps are not necessarily accurate, and can potentially be faked?

While I do not believe anyone is currently actively faking them, we need to bear in mind that the current source of these “hash attacks” is largely mining pools which direct hashpower at the “most profitable” coin at the moment. Those pools would have an incentive to use timestamps which are less likely to trigger the dynamic response.


In order to minimize problems from fake timestamps, the node-enforced future time limit (FTL) needs to be pretty tight. I’m requiring it to be < 3xT and enforcing timestamps to be sequential, using the highest timestamp seen so far as the timestamp for the current block. This means an attacker can often trigger the shift to a smaller averaging window (which is 20 instead of 60).

A miner or pool may use a fake timestamp to initiate or prevent the trigger. If you go through all the scenarios, you can see it will be very difficult to have any gain.
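A sketch of the timestamp handling just described (the function names are illustrative, not actual BTG code):

```cpp
#include <algorithm>
#include <cstdint>

// Node-side future time limit: reject blocks stamped too far ahead of local time.
// Per the post, the FTL is kept tight at < 3*T (e.g. T = 600 s -> FTL ~ 1800 s).
bool timestamp_within_ftl(int64_t block_time, int64_t node_time, int64_t T) {
    return block_time < node_time + 3 * T;
}

// "Sequential" timestamps for the difficulty calculation: if a block's timestamp is
// below the highest timestamp seen so far, use that highest value instead, so the
// solvetimes fed to the averaging window are never negative.
int64_t effective_timestamp(int64_t block_time, int64_t highest_seen) {
    return std::max(block_time, highest_seen);
}
```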

I seem to have finished this algorithm. Hopefully some of the devs will be testing it out today.

Hey Zawy, I heard that our implementation on testnet is not performing well. I tried to implement the original algorithm. I’m curious whether our code has some issue, or whether the original algorithm itself is not working well.

BTW, did you get the dumped testnet block timestamps?


I was unable to fully evaluate the testnet data because it seems to have a rule to drop to D=1 if a solvetime was long. But the formula seemed to be correct. The only error I saw the last time I checked was that adjust should be 0.997 instead of 0.9909. It’s just a minor correction (to give a 0.6% more accurate solvetime). The difference is that the target method is slightly different from the difficulty method, which used the 0.9909.
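For context, `adjust` is just a constant multiplier applied at the end of the target calculation; illustratively (this shows the placement, it is not a quote of the Candy code):

```cpp
#include <cstdint>

// Illustrative placement of the adjust constant at the end of the target-based LWMA:
// the target form needs 0.997 where the difficulty form used 0.9909, a ~0.6%
// correction to the average solvetime.
uint64_t apply_adjust(uint64_t raw_next_target) {
    constexpr double adjust = 0.997;
    return static_cast<uint64_t>(raw_next_target * adjust);
}
```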

Thanks for the investigation. So the original LWMA still doesn’t perform very well on Bitcoin Candy. I guess that’s not mainly caused by the wrong adjustment parameter. So the way to solve it is to introduce the dynamic LWMA. However, I’m curious why it performs much worse than in the original LWMA analysis.


The problem seems to be that they have almost no dedicated miners. BTG would not perform as well if its mining situation were similar, because your T = 600. But as I described above, you should do better than you are now. The only ideas I know of to do better than LWMA are:

  1. Andrew Stone’s idea to prevent long delays
  2. A new POW that neither Nicehash nor ASICs threaten
  3. Bobtail (best 5+ hashes per block are rewarded, not just 1)
  4. D-LWMA (not tested on live coins yet)
  5. Merge mining (next post)
  6. EMA or D-EMA

Everyone’s looking into a POW change and I may start working on it (changing the algorithm in every block). Bobtail is a huge new change.

Item 1 means lowering D for the current block as the solvetime gets longer. Changing difficulty during a block solve is not a small change. It would require a fair amount of consideration, in part because future validations can’t determine D based on past blocks alone. So the timestamp that will determine D will be in the template of the block that is being hashed. Instead of linearly decreasing D as Andrew described, maybe decrease it once every 3xFTL so that everyone has the same difficulty.
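A rough sketch of the step-wise version of item 1 (the halving per step is an arbitrary illustrative choice, not a worked-out consensus rule):

```cpp
#include <cstdint>

// Sketch of lowering difficulty for the *current* block as its solvetime grows,
// in discrete steps so every miner computes the same value from the same data.
// elapsed = time since the previous block (taken from the block template's timestamp),
// ftl     = future time limit; the step size is 3*FTL as suggested above.
double current_block_difficulty(double base_D, int64_t elapsed, int64_t ftl)
{
    int64_t steps = elapsed / (3 * ftl);   // 0 until 3*FTL has passed, then 1, 2, ...
    double d = base_D;
    for (int64_t i = 0; i < steps && d > 1.0; i++)
        d /= 2.0;                          // halve per step (illustrative decay rate)
    return d;
}
```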

Item 4 may still need Andrew’s idea. D-LWMA enables a malicious big miner to send D high in a shorter amount of time (if they have a high hash rate). But I am going ahead as fast as possible to get small coins to use it. I knew how LWMA would perform on live coins; it’s harder to know if D-LWMA is going to have problems due to its interaction with miners.

There is also merge mining, but I’m not sure it is a good thing to do. Also, LWMA drops so quickly that it may be a problem. Iridium gives the best example of where LWMA was not performing well. I may need to go to EMA (or D-EMA). It rises and falls in a more symmetrical manner. It has more delays, but it gives more time for miners to come back slowly instead of all jumping on at once after LWMA drops. I am looking into getting a coin to switch between 3 different algorithms every 500 to 1000 blocks so the algorithms can be compared.
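For reference, an EMA-style DA can be as simple as exponentially averaging a per-block hashrate estimate. The following is only an illustration of that general idea (names and the alpha choice are mine), not the specific EMA I would ship:

```cpp
#include <cstdint>

// Generic EMA-style difficulty sketch: keep an exponentially weighted average of
// the per-block hashrate estimate (prev_D / solvetime) and aim it at T.
// alpha = 2/(N+1) gives an effective window of roughly N blocks.
double ema_next_difficulty(double prev_D, double prev_hashrate_ema,
                           int64_t solvetime, int64_t T, int N,
                           double* out_hashrate_ema)
{
    if (solvetime < 1) solvetime = 1;             // guard against bad timestamps
    double alpha = 2.0 / (N + 1);
    double inst_hashrate = prev_D / solvetime;    // crude per-block hashrate estimate
    double ema = alpha * inst_hashrate + (1.0 - alpha) * prev_hashrate_ema;
    *out_hashrate_ema = ema;                      // carry state to the next block
    return ema * T;                               // difficulty that targets T seconds
}
```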

I do not have an idea to improve LWMA for your next fork. I can have EMA ready in a few days if you want it, but it’s a little premature for me to recommend people use it over LWMA. LWMA has usually done great on all the coins so far. It had a great long history with Iridium, until Iridium ended up as one of the few coins that had not changed POW.

@zawy - do you have much of a DAA testing framework to model hash attacks, or do you mostly work via data from real-world coins that try the algorithms?

This is an example of on-off mining testing that compares LWMA and my new D-LWMA. D-LWMA needs one more modification, but after that I am abandoning it for something I think is simpler and better. Read the chart titles to see how the attack was simulated.
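For anyone who wants to reproduce this kind of test, a minimal on-off simulation loop can look like the following (my own sketch, not the framework that produced these charts; the attack size, trigger levels, and the stand-in SMA are placeholders to swap out):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Minimal on-off mining simulation: a base hashrate mines constantly, and a big
// "attacker" hashrate switches on when D drops below a trigger and off again when
// D rises above an exit level. Solvetimes are exponential with mean D / hashrate.
int main()
{
    const int64_t T = 600;             // target solvetime (BTG-like)
    const double base_hr   = 100.0;    // dedicated hashrate (arbitrary units)
    const double attack_hr = 300.0;    // extra hashrate during an attack
    const double start_at  = 0.9;      // attack starts when D < 0.9 * "fair" D
    const double stop_at   = 1.2;      // attack stops when D > 1.2 * "fair" D
    const double fair_D    = base_hr * T;

    std::mt19937_64 rng(42);
    std::vector<int64_t> timestamps{0};
    std::vector<double> difficulties;
    double D = fair_D;
    bool attacking = false;

    for (int i = 0; i < 10000; i++) {
        attacking = attacking ? (D < stop_at * fair_D) : (D < start_at * fair_D);
        double hr = base_hr + (attacking ? attack_hr : 0.0);

        std::exponential_distribution<double> solve(hr / D);      // mean = D / hr
        int64_t st = std::max<int64_t>(1, std::llround(solve(rng)));
        timestamps.push_back(timestamps.back() + st);
        difficulties.push_back(D);

        // Stand-in DA: simple moving average over the last 30 blocks.
        // Swap in LWMA, D-LWMA, EMA, etc. here to compare algorithms.
        int n = std::min<int>(30, (int)difficulties.size());
        double sum_D = 0;
        for (int k = 0; k < n; k++) sum_D += difficulties[difficulties.size() - 1 - k];
        int64_t span = timestamps.back() - timestamps[timestamps.size() - 1 - n];
        D = sum_D * T / std::max<int64_t>(1, span);
    }

    std::printf("final D / fair D = %.3f\n", D / fair_D);
}
```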

Here’s the comparison to Zcash:


OK, since you say it was simulated, I presume you have a testing framework.

On the charts, is the Y axis Difficulty, and if so, is it normalized to a target Difficulty of “1” for the avgD?

Or is the Y axis Block Time, normalized so that the target block time is 1.0?

Or something else?

(I’m assuming the X axis is blocks, not time.)

Also, what is ST?


ST is solvetime. X is block number. Y is D, normalized to avg D.

“Blocks stolen” is actually not a bad description. The D was < 1/2 of the appropriate D, and that means that when the attacker leaves, dedicated miners have to absorb the difference in order for the avg ST to stay on target, or else the avg ST rises above target, which amounts to the same thing (fewer blocks are released per unit of time). So the number of blocks stolen is truly at least 1/2 of the number shown, but less than the number shown.
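A sketch of how the two metrics can be computed from a run, using the definitions in this thread (“blocks stolen” = blocks where D was under half the average D; “delays” = solvetimes well above target, with an illustrative threshold):

```cpp
#include <cstdint>
#include <vector>

// Rough metrics from a simulated or historical run.
// "blocks stolen": blocks mined while D was under half the average D.
// "delays": solvetimes longer than delay_mult * T (the threshold is illustrative).
struct Metrics { double stolen_fraction; double delay_fraction; };

Metrics compute_metrics(const std::vector<double>& D,
                        const std::vector<int64_t>& solvetimes,
                        int64_t T, double delay_mult = 4.0)
{
    double avg_D = 0;
    for (double d : D) avg_D += d;
    avg_D /= D.size();

    int stolen = 0, delayed = 0;
    for (size_t i = 0; i < D.size(); i++) {
        if (D[i] < 0.5 * avg_D) stolen++;
        if (solvetimes[i] > delay_mult * T) delayed++;
    }
    return { double(stolen) / D.size(), double(delayed) / D.size() };
}
```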


I really agree with it.