[00:10:54] @prole i've found an issue in PR #634 which i haven't fixed yet
[00:11:09] (i'm mentioning it because i just saw your PR comments about the tests)
[00:12:42] maybe it's better that i remove the "ready for review" label from that PR
[00:17:05] okay
[00:17:15] i'll stop looking at it
[00:17:59] it's quite a lot of changes, so i'll wait to do the code review until you're happy with it
[00:18:32] in other news, my 16.04 node i set up to mirror Binance seems to always die when syncing
[00:18:38] i think it's just running out of memory tbh
[00:18:47] might need to bump up the resources
[00:18:52] are the specs the same as theirs?
[00:18:58] nah, doubt it
[00:19:01] it's just a little server
[00:19:09] they didn't disclose the RAM etc. on their VPS
[00:19:18] but i would assume they are not running some budget one
[00:20:39] i'll try 4GB RAM
[00:20:42] and see what happens
[00:23:17] and my other little servers running the testnet with debug=dao seem to crash. again, i think it's just running out of resources. the debug.log file is ending up being some GB in size
[00:23:29] is there an option to limit it?
[00:23:40] limit the size of the debug.log?
[00:23:45] i only saw shrinklog or whatever
[00:23:53] there's an option to trim it if it's too big at launch
[00:24:06] but i think there's nothing to keep it small, though i could be wrong
[00:35:25] yeah all good, that's what i thought
[01:12:32] okay, i've resized the Tokyo node to 4GB RAM, see what happens this time
[01:12:56] did you have any problems with your 16.04 node?
[01:13:36] nope, it's reporting on the TG channel from what i see
[03:11:06] @prole I have 2 nodes that are running stable on 1GB RAM (2GB swap)
[03:11:28] how much RAM was on that node that would crash?
[03:59:26] @salmonskinroll @prole @aguycalIed can you guys confirm that #638 fixes the freezing issue with reindex?
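Since (per the discussion above) the daemon only trims the log at launch and nothing keeps it small while running, an external trim between restarts is one workaround. A minimal sketch, not part of navcoin-core; the helper name and size limit are made-up, and it is safest to run while the daemon is stopped since it rewrites the file in place:

```python
import os

def trim_log(path, max_bytes=100 * 1024 * 1024):
    """Keep only the last max_bytes of a log file.

    NOTE: hypothetical helper, not a navcoin-core feature. Run it
    while the daemon is stopped, since it rewrites the file in place.
    """
    size = os.path.getsize(path)
    if size <= max_bytes:
        return False  # already small enough, nothing to do
    with open(path, "rb") as f:
        f.seek(size - max_bytes)
        tail = f.read()
    # Drop the (likely partial) first line so the kept log starts cleanly.
    newline = tail.find(b"\n")
    if newline != -1:
        tail = tail[newline + 1:]
    with open(path, "wb") as f:
        f.write(tail)
    return True
```

Running something like `trim_log("~/.navcoin4/testnet/debug.log")` from cron on the debug=dao nodes would keep the file from growing to multiple GB between restarts.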
[03:59:40] i'll do it now
[04:04:12] yes, works well for me
[04:06:09] nice, this should help with the next release, as reindex will be enforced due to the verifychain changes, correct?
[04:06:48] yep
[04:11:19] @aguycalIed do you have any suggestions on where the reindex functionality is bottlenecked?
[04:11:38] I think speeding up the reindex would also help, but I'm not sure where to start looking
[04:13:58] i would try running it with -debug=bench
[04:14:02] and see what it says
[04:14:41] reindexing is basically validation (syncing without downloading)
[04:15:16] i know CFundStep can be optimised
[04:15:35] ok, thanks for the pointer
[04:15:58] also, since the blocks need to be sequential, that means only tx verification can be multithreaded, right?
[04:16:10] this loop for(unsigned int i = 0; i < pindexblock->vProposalVotes.size(); i++) and this loop for(it = vCacheProposalsToUpdate.begin(); it != vCacheProposalsToUpdate.end(); it++) can be condensed into one
[04:16:21] same with the payment request loops
[04:16:26] which I assume is already using the "script verification threads" counter
[04:16:36] yup, only tx signature verification inside of the same block
[04:17:17] yeah, I was afraid that was the case; that means if the blocks have 1-2 tx per block, the script verification threads won't really help
[04:17:47] because mine is now using 12 threads (due to the patch we merged)
[04:17:58] but it's still not as fast as I thought it would be
[04:17:59] i think CFundStep takes 2-3ms last time i checked
[04:18:34] most of the things we can optimise are probably there
[04:19:26] and maybe some loops in acceptblock, acceptblockheader, checkblock, checkcontextualblock, checkblockheader, checkcontextualblockheader, connectblock
[04:23:29] alright, I'll poke around and see what I can find
[04:30:42] @aguycalIed
2019-11-23 03:27:04 - Connect block: 5.00ms [39.80s]
2019-11-23 03:27:04 - Load block from disk: 0.00ms [0.71s]
2019-11-23 03:27:04 - Sanity checks: 0.00ms [6.13s]
2019-11-23 03:27:04 - Fork checks: 0.00ms [0.28s]
2019-11-23 03:27:04 - Connect 2 transactions: 0.00ms (0.000ms/tx, 0.000ms/txin) [0.26s]
2019-11-23 03:27:04 - Verify 1 txins: 0.00ms (0.000ms/txin) [0.98s]
2019-11-23 03:27:04 - Index writing: 4.00ms [27.46s]
2019-11-23 03:27:04 - Callbacks: 0.00ms [0.36s]
2019-11-23 03:27:04 - Connect total: 4.00ms [38.21s]
2019-11-23 03:27:04 - CFund count votes from headers: 1.00ms
2019-11-23 03:27:04 - CFund update votes: 0.00ms
2019-11-23 03:27:04 - CFund update payment request status: 0.00ms
2019-11-23 03:27:04 - CFund update proposal status: 0.00ms
2019-11-23 03:27:04 - CFund total CFundStep() function: 1.00ms
2019-11-23 03:27:04 - Flush: 0.00ms [0.02s]
2019-11-23 03:27:04 - Writing chainstate: 0.00ms [0.20s]
2019-11-23 03:27:04 - Connect postprocess: 0.00ms [0.67s]
[04:30:57] looks like the CFund step is not that slow actually
[04:31:07] the index writing step seems to be the slowest
[04:31:13] what height?
[04:31:54] ohh yeah, the height is 1M, so no cfund data yet
[04:32:00] yup
[04:32:03] ack
[04:32:22] 5ms per block means 200 blocks per second
[04:33:06] 5 hours for 3,600,000 blocks
[04:33:25] mmm
[04:47:22] do you have indexes set, @mxaddict?
[06:30:05] yes, I only have address index set
[06:30:14] I'm retesting with all indexes set
[06:33:23] I'm getting this error
[06:33:24] https://cdn.discordapp.com/attachments/416000318149754881/647671013886263307/Capture.PNG
[06:33:48] even when running with: ./src/qt/navcoin-qt.exe -reindex=1 -reindex-chainstate=1 -allindex -debug=bench -testnet
[06:34:03] I'll investigate and create a PR if I find a fix
[06:40:28] is this a normal error, @aguycalIed?
[06:40:44] or should I be able to rebuild the index this way?
[09:17:53] thanks @prole & @Goku, i'm coming here from #roundtable. i noticed in another blockchain project that voting locks an address' balance until one wants to execute a balance transfer, in which case an "unvote" function needs to be called to unlock these funds.
a "hot wallet" looks to have many incoming & outgoing transfers. this would deter an owner of such an address from voting on CF & future DAO proposals. could this idea of "vote-locking" be
[09:17:54] beneficial to the navcoin DAO? :navcoin3:
[10:04:41] @superIPFS We think that locking will not be a big thing for exchanges if they want staking, but normal users will feel bothered.
[11:57:07] @mxaddict i was talking yesterday with @prole about randomizing the context (testing as many combinations as possible) as a way to increase our coverage of reorganizations
[11:57:10] what do you think?
[11:57:59] should we integrate that into the RPC functional tests, or is it better to keep the Travis test suite deterministic and build it as an external tool?
[17:10:09] we could always create multiple test suites
[17:10:19] the main deterministic tests
[17:10:42] a secondary deterministic test suite (for the longer-running tests)
[17:11:00] and the new suite that you guys are thinking of, which would be for more randomized tests
[17:11:14] and we add 2 new pipelines to Travis to run the 2 new test suites
[17:11:24] what do you think, @aguycalIed @prole?
[18:22:30] sounds good
[18:27:14] do you know how to properly set it up?
[18:28:21] I'll make a PR to create the suites; I'll just add dummy tests to the 2 suites in the PR, then we can move on from there
[21:04:54] my 16.04 node in Tokyo (4GB RAM) crashed again trying to sync, this time at 1.6M blocks. so more RAM = more sync, but still not managing to make it all the way
[21:05:12] i'm pulling latest master and trying again
[21:05:25] i'm pretty sure it starts at block 0 again when i try to start it up
[21:05:36] perhaps this is the issue Binance are experiencing?
[21:06:01] they had issues when reindexing afaik
[21:06:06] right
[21:06:15] did you self-build?
[21:06:32] yeah, self-built using depends and the --disable-hardening flag on configure
[21:06:40] does it just terminate the process or segfault?
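On the deterministic-vs-random test-suite question above, a common middle ground is to randomize from a seed that is logged and overridable, so a failing Travis run can be replayed exactly. A minimal sketch of the pattern; the scenario fields, helper name, and TEST_SEED variable are illustrative, not the project's actual functional-test API:

```python
import os
import random

def make_reorg_scenario(seed=None):
    """Generate a random-but-reproducible reorg test scenario.

    The seed comes from the TEST_SEED environment variable (so a CI
    failure can be replayed with TEST_SEED=<n>) or is drawn fresh.
    All field names are illustrative placeholders.
    """
    if seed is None:
        seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
    print("using TEST_SEED=%d" % seed)  # log it so the run can be replayed
    rng = random.Random(seed)
    return {
        "seed": seed,
        "fork_height": rng.randint(1, 100),
        "branch_length": rng.randint(1, 10),
        "tx_per_block": rng.randint(0, 5),
    }
```

The same seed always yields the same scenario, so the randomized suite stays as debuggable as the deterministic one while still exercising many more combinations over repeated CI runs.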
[21:06:49] try running it through gdb
[21:07:04] gdb --args ./navcoind -debug=dao -whatever
[21:07:07] then type run
[21:07:11] and wait till it crashes
[21:07:15] then type bt
[21:07:24] the cli just won't respond, and the logs just stop after a normal-looking tip update log
[21:07:31] i have to kill -9 the pid
[21:07:38] you need to ./configure --enable-debug also before gdb
[21:07:45] ok
[21:07:55] gdb is not needed if it does not crash
[21:08:07] try compiling with configure --enable-debug
[21:08:12] and see the log, maybe it's a deadlock
[21:08:16] okay
[21:08:23] did you check top?
[21:08:54] yep
[21:09:01] https://cdn.discordapp.com/attachments/416000318149754881/647891374116438030/Screen_Shot_2019-11-24_at_9.08.51_AM.png
[21:09:11] 81% of memory
[21:09:35] 0 CPU
[21:11:22] configuring with the depends directory, --enable-debug and --disable-hardening
[21:11:28] will rebuild and see how we go
[21:11:36] 👍
[21:40:36] make fails with those flags
[21:40:54] https://pastebin.com/Ad0mHxhR
[21:42:06] what params for configure exactly?
[21:43:47] ./configure --prefix=/home/navworker/navcoin-core/depends/x86_64-pc-linux-gnu --disable-hardening --enable-debug
[22:01:26] i think @sakdeniz saw that too. i told him to just use a newer version of Ubuntu
[22:01:29] any idea @mxaddict ?
[22:17:58] mmm okay, well this is the version Binance are using for whatever reason
[22:22:46] is this something new which has been added? I was previously able to compile before pulling the latest master
[22:23:17] i tried to configure with the same params as before to check it was not introduced by adding the --enable-debug flag
[22:23:20] i think mx can answer that question better than me; this is probably related to the bump of the glibc version, but i'm not sure
[22:23:21] and i have the same output
[22:23:26] ah
[22:23:27] right
[22:23:31] you mean something new as of the last few days?
[22:23:33] i don't think so
[22:23:35] yeah
[22:23:52] is the prefix path correct and the depends built?
[22:24:02] i compiled successfully earlier in the week
[22:24:08] should be
[22:24:16] the issue is boost-related
[22:24:25] okay
[22:24:50] i'll try to recompile depends and give it all another try
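A footnote on the CFundStep optimisation suggested at [04:16:10] above: the two C++ loops (one tallying pindexblock->vProposalVotes, one walking vCacheProposalsToUpdate to apply the results) can be fused by updating the cache entry during the tally pass itself. A language-agnostic sketch of that shape in Python; the names and vote encoding are illustrative, not navcoin-core's actual structures:

```python
def tally_and_update(proposal_votes, cache):
    """Fuse two passes into one: tally each vote and update the matching
    cache entry in the same iteration, rather than tallying first and
    then looping over the cache a second time.

    proposal_votes: list of (proposal_hash, vote) pairs, vote being
    +1 (yes) or -1 (no). cache: dict proposal_hash -> {"yes": n, "no": n}.
    Illustrative placeholders only.
    """
    for proposal_hash, vote in proposal_votes:
        entry = cache.setdefault(proposal_hash, {"yes": 0, "no": 0})
        if vote > 0:
            entry["yes"] += 1
        else:
            entry["no"] += 1
    return cache
```

The win is one traversal instead of two per block; whether it matters in practice depends on the bench numbers, which above suggested CFundStep was only ~1-3ms per block at that height.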