[00:46:30] i've updated the script so now it shows in which blocks there were reorganizations
[01:10:25] Nice. What are the steps in your mind to re-enable cfund?
[01:11:34] from my point of view, after the testnet stress test, and what i'm seeing with the devnet stresser, the patches from this pull request look quite solid
[01:12:26] i would vote for re-enabling it when this pull request gets merged, but i would like this to be not only my decision
[01:13:00] @prole @mxaddict when do you think you could have some time to review the pr and my script?
[01:13:17] Tbh I'm thinking the same. The stresser and the testnet look solid so far.
[01:13:53] the stresser covers pretty much everything: lots of random data, combinations of statuses, reorganizations, verifychain
[01:14:04] i don't know if i'm missing anything
[09:50:21] my stressers finished without error
[15:14:42] mine got a state hash mismatch @aguycalled
[15:23:45] can you send me the logs?
[15:23:57] @salmonskinroll
[15:24:41] yup
[15:40:49] @aguycalled sent it, there are 3 fails in 3 different files
[16:06:25] i've updated the script to capture a bit more log @salmonskinroll, let me know if you see it fail
[16:13:27] did u see a 100% fail?
[16:30:20] Yes
[16:30:30] 19.04?
[16:30:42] 18.04
[16:30:49] self built?
[16:30:53] Yup
[16:31:05] Were you able to find anything in the logs?
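For context, the state check that is failing above can be sketched as a small shell helper. This is a hypothetical reconstruction, not the actual stresser script: `hash_state` and `assert_state` are assumed names, and the assumption that the fingerprint is derived from each node's `listproposals` output is inferred from the dumps the script writes on failure.

```shell
# Hypothetical sketch of the stresser's state check (not the real gist).
# hash_state: deterministic fingerprint of one node's dumped cfund state.
hash_state() {
  printf '%s' "$1" | sha256sum | awk '{print $1}'
}

# assert_state: compare the state dumps of two nodes; nonzero exit on mismatch.
assert_state() {
  local h1 h2
  h1=$(hash_state "$1")
  h2=$(hash_state "$2")
  if [ "$h1" != "$h2" ]; then
    echo "STATE HASH MISMATCH! $h1 vs $h2"
    return 1
  fi
  echo "state hashes match: $h1"
}
```

In a real run the two arguments would come from each node's RPC output (e.g. `navcoin-cli listproposals`) rather than literal strings.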
[16:31:27] I'll update the script now and start a run soon
[16:40:25] it would help to have a bit more log, let me know how this one goes
[16:52:17] i'm running one on 18.04 too
[16:52:28] 18.04, 16.04 and osx are my tests
[16:52:53] okay, i'm running it on 3 nodes
[16:54:12] oh wait, i'm running it on 18.10 for 2 nodes and 19.04 for 1, but they all failed
[16:54:23] the ones i ran last night
[16:54:36] i had osx and 16.04 overnight, both succeeded
[16:54:44] okay, we'll see how these ones go
[17:56:03] just making a note, the three fails yesterday happened around heights 1700, 1000, and 600
[17:56:26] yes i've seen that, my 3 tests went past height 700
[17:56:48] same, all my 3 nodes are at 7xx now
[17:57:15] tests are random, so it'll be a matter of running them in a loop until something fails
[17:57:50] hmmm, i will fire up another 3 nodes later
[18:11:19] for some reason one node just wouldn't run the script and returns a syntax error
[18:11:24] all other 5 are fine
[18:18:24] maybe a longer sleep?
[18:19:58] doesn't work this time, i'll try something else. maybe it's time for a clean reinstall
[19:35:34] just got a winner @aguycalled
[19:35:42] STATE HASH MISMATCH! cddcfe120863de8e49bc08925972fcba2562112e7bfeb6feef8a4c50f0adc4d9 vs cdb505ee9d201d40afaef8a4d39884ab05b830247a236efce83bac02496c8bbf I wrote listproposals output to /tmp/tmp.ZsrytbFNRt/devnet/listproposals.out and /tmp/tmp.dKMFRaxDzB/devnet/listproposals.out
[19:38:45] sending it to you
[21:18:05] having a look now
[21:26:24] another node failed
[21:26:37] mine succeeded
[21:26:38] want the files?
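When the mismatch fires, the two `listproposals` dumps it writes can be diffed directly to locate the diverging field. A minimal, hypothetical triage helper (the `sort` makes the comparison order-insensitive; a real script might normalize the JSON first):

```shell
# compare_dumps FILE_A FILE_B: order-insensitive line diff of two RPC dumps.
# Hypothetical helper; the file paths come from the mismatch message itself.
compare_dumps() {
  diff <(sort "$1") <(sort "$2")
}
```

For the failure above that would be `compare_dumps /tmp/tmp.ZsrytbFNRt/devnet/listproposals.out /tmp/tmp.dKMFRaxDzB/devnet/listproposals.out`.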
[21:26:41] sure
[21:26:56] the 3 nodes i fired up earlier all succeeded
[21:27:10] the 2 nodes i fired up about 2 hours ago both failed
[21:27:42] well, not succeeded: the 3 nodes are in the staking phase right before verifychain
[21:31:57] sent
[21:37:56] from the first pair of logs, the listproposals output says that in one node the proposals were in cycle 1 (last block) and in the other in cycle 2 (first block), so it's like there was 1 block difference between them. but the logs show the same best block and that the proposals transitioned to the next cycle correctly, so maybe it's a problem of the script and not a real state hash mismatch
[21:38:15] will look at the second set of logs now
[21:38:29] that's good news, if it's the script
[21:42:17] maybe turning off staking in the wait_until_sync function and turning it back on in stress could help with it, if it just happens that one node staked a block?
[21:42:35] now i see in node 1 proposals have exactly one vote less than in node 2
[21:42:40] wait nvm
[21:42:44] https://cdn.discordapp.com/attachments/416000318149754881/652248512666271770/unknown.png
[21:42:46] oh okay
[21:44:51] but the logs show again that the proposal got 7 votes in both nodes
[21:44:55] just at the end of the log
[21:44:57] https://cdn.discordapp.com/attachments/416000318149754881/652249072685678654/unknown.png
[21:45:03] left is node 1, right is node 2
[21:45:22] so it must be a matter of how the script checks the state hash, it sometimes sends the request too fast to node 1
[21:47:03] more sleeps? my machines seem to like that a lot
[21:47:06] https://cdn.discordapp.com/attachments/416000318149754881/652249611620319252/unknown.png
[21:47:20] i'll add a 2 second sleep at the beginning of assert_state
[21:47:41] okay, i'll do that and start the runs again
[21:48:05] 👍
[21:48:28] what kind of servers are they?
[21:48:46] i mean, why could they need the extra sleep? are they low-specced?
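The fix agreed on above (a 2-second sleep at the start of `assert_state`, so the check stops racing the slower node) could look like this. Again a sketch with assumed helper internals, not the actual gist:

```shell
# Sketch of the timing fix: give both nodes a moment to settle before comparing.
assert_state() {
  sleep 2   # the mismatches above turned out to be the check racing the nodes
  local h1 h2
  h1=$(printf '%s' "$1" | sha256sum | awk '{print $1}')
  h2=$(printf '%s' "$2" | sha256sum | awk '{print $1}')
  [ "$h1" = "$h2" ] || { echo "STATE HASH MISMATCH! $h1 vs $h2"; return 1; }
}
```

A fixed sleep is the simplest option; an alternative design would be to retry the comparison a few times before declaring a mismatch.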
[21:49:16] they are xeon E5-1620 v2
[21:49:37] not sure why they are so slow
[21:51:44] i've set up a loop running the script infinitely, i'll get a telegram message if it fails
[21:52:15] the one node that wasn't running the script started to run after i added a sleep after the for loop that sends funds to node 2
[21:52:37] so, sleeping is important it seems
[21:56:09] https://cdn.discordapp.com/attachments/416000318149754881/652251890821627916/unknown.png
[21:56:11] like this?
[21:58:01] https://cdn.discordapp.com/attachments/416000318149754881/652252360491401227/unknown.png
[21:58:29] added to the gist
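The infinite loop with a Telegram alert mentioned above can be sketched as follows. `run_until_failure`, `./stress.sh`, `BOT_TOKEN` and `CHAT_ID` are all assumptions for illustration; the `sendMessage` endpoint is the standard Telegram Bot API call.

```shell
# Hypothetical runner: repeat a command until it fails, then notify once.
run_until_failure() {
  local cmd="$1" notifier="$2"
  while "$cmd"; do :; done      # loop while the stress run keeps succeeding
  "$notifier" "stress run failed on $(hostname) at $(date -u)"
}

# Example notifier using the Telegram Bot API (BOT_TOKEN/CHAT_ID assumed set).
telegram_notify() {
  curl -s "https://api.telegram.org/bot${BOT_TOKEN}/sendMessage" \
    -d chat_id="${CHAT_ID}" -d text="$1" >/dev/null
}

# Usage: run_until_failure ./stress.sh telegram_notify
```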