[10:00:56] https://www.reddit.com/r/NavCoin/comments/efq7gl/error_invalid_header_received_error/?utm_medium=android_app&utm_source=share
[19:13:09] please @developer prioritise review of this https://github.com/navcoin/navcoin-core/pull/656
[19:13:21] @salmonskinroll
[19:17:56] building
[19:41:04] can you talk a bit about how you solved the problem? is it the same way qtum solved it?
[19:45:46] nope, it's not the same. i'm adding some notes to the PR now
[20:11:06] just updated the PR text
[20:15:39] let me know if you have any questions after the last changes to the text @salmonskinroll
[20:16:09] okie dokie
[20:38:04] i remember that our first try was to have a max reorg depth and it was cancelled. can you talk a bit about how this is superior? and compared to qtum's solution, what's different in yours?
[20:53:24] i'd say the only way to really know if a header is valid is looking at the full block. from my perspective this filter is more sound, as it counts how many headers without a validated full block a peer has sent in order to determine whether he must be banned. what qtum does is just put small rocks on the road here and there to make it more difficult to spam lots of invalid headers.
[21:00:40] maxSize is set to 50. The keys of the map are block header heights, the values are the numbers of block headers received at those heights. This implementation makes very little sense, as it is easy to bypass the filter by sending a sequence of headers that ends at some large height, followed by an unlimited number of shorter sequences, which must end more than 50 headers before the last header of the first sequence. In our
[21:00:40] proof of concept exploit, the first sequence starts at block height -2100 relative to the current chain tip and we send 2000 headers, effectively ending at relative height -100. The following sequences start at the same height but only use 1900 headers, thus effectively ending at relative height -200.
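The quoted report describes Qtum's filter as a height-keyed `std::map` capped at 50 entries. A minimal sketch of that structure, with hypothetical class and method names (not taken from either codebase), showing the eviction the report criticizes:

```cpp
#include <cassert>
#include <cstddef>
#include <map>

// Illustrative sketch of the Qtum-style filter described above: a map from
// block header height to the number of headers received at that height,
// capped at maxSize entries. Names are hypothetical.
class HeightFilter {
    std::map<int, int> points;              // height -> headers seen at that height
    static const std::size_t maxSize = 50;

public:
    void AddHeader(int height) {
        ++points[height];
        if (points.size() > maxSize)
            // Intended to drop the "last" element, but std::map is ordered
            // by key, so begin() is the SMALLEST height: the low heights are
            // evicted first, which is exactly what the bypass exploits.
            points.erase(points.begin());
    }

    bool Contains(int height) const { return points.count(height) > 0; }
};
```

Feeding one long header sequence that ends at a high height, then repeated shorter sequences ending more than 50 headers below it, keeps evicting exactly the low-height entries the filter would need in order to notice the spam.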
This works because of how std::map works internally – it is an ordered collection, and the erase function removes not the last element
[21:00:41] (as the comment suggests), but rather the smallest element according to the ordering
[21:00:58] is that addressed in your implementation?
[21:02:07] yes, it's way simpler than what qtum does
[21:03:00] my implementation just looks at how many headers you sent me for which i don't yet have the full block needed to know whether they're valid. if that's greater than 4000 then i ban you
[21:03:40] when the node is syncing and there's more than 1 day of blocks to download, it is in initial block download mode (IBD). during this mode the filter is turned off
[21:04:35] if my wallet lags behind 1 week, for example, i will first receive 7*2880 headers and then the blocks. in this situation my peers would be banned if the filter were turned on
[21:05:15] but because the threshold is 4000 (which is greater than 2880), the filter is only turned on when there are at most 2880 legit headers left to receive
[21:06:32] hmm, say i'm syncing and i'm only half a day behind, then the filter will be on, correct?
[21:06:40] yes, the filter is on
[21:07:06] how does a node tell if the headers are legit or not? i thought it needs the whole block to verify them
[21:07:43] yes, it looks at the full block. if the full block has been flagged as fully validated, then the hash is removed from the list
[21:08:00] the node will look at the size of the list to decide if the peer needs to be banned
[21:08:15] only headers without a fully validated block are part of the list
[21:09:55] what's the downside of reducing the threshold then?
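The per-peer mechanism described above (track headers whose full blocks have not yet been validated, drop a hash once its block validates, ban past 4000, skip the check during IBD) might be sketched like this; the names and structure are hypothetical, not the actual code from the PR:

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <string>

// Hypothetical sketch of the per-peer filter described above (not PR #656's
// actual code): keep the hashes of headers for which no fully validated
// block has arrived yet, and ban the peer once that list exceeds the
// threshold. The filter is bypassed entirely while in IBD.
static const std::size_t BAN_THRESHOLD = 4000;  // > 2880, one day of headers

struct PeerHeaderFilter {
    std::set<std::string> unvalidated;  // header hashes without a validated block

    // A header arrived from this peer; we cannot yet tell whether it is valid.
    void OnHeader(const std::string& hash) { unvalidated.insert(hash); }

    // The full block behind `hash` passed validation: drop it from the list.
    void OnBlockValidated(const std::string& hash) { unvalidated.erase(hash); }

    // Ban if the peer sent more unbacked headers than the threshold,
    // unless the node is in initial block download (filter turned off).
    bool ShouldBan(bool inIBD) const {
        return !inIBD && unvalidated.size() > BAN_THRESHOLD;
    }
};
```

Because every validated block removes its hash from the list, honest peers hover near zero, while a spammer's unbacked headers accumulate until the ban triggers.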
it sounds like having 400 instead of 4000 would be fine too, assuming nodes are mostly in sync
[21:10:55] risk of banning legit peers
[21:12:17] it can probably be reduced, but i'm not sure about the consequences or how far we can bring it down, so i preferred to be conservative. taking into account that something super safe like 4000 still restricts the exhaustion to ~700kb
[21:13:01] per attack node
[21:13:05] per ip address
[21:13:12] gotcha
[21:13:34] ip addresses have a cost and are limited, one can't have as many as desired
[21:13:43] true true
[21:13:56] Yes, so the port is not taken into account?
[21:14:05] not by default
[21:14:11] Nice
[21:15:01] This should be a pretty good solution, as each IP will be expensive to get; the cheapest is like $5 at a VPS provider.
[21:15:26] yes, and you can't get thousands of IPs by paying $5,000
[21:16:18] anyway this patch is something temporary, ideally we should get a rework of our proof of stake
[21:17:36] that sounds cool. i'm testing syncing from scratch, will try the attack in your PR later
[21:18:55] half-synced scenarios are worth testing too, that's the most critical from my pov
[21:19:21] i'm guessing the only way to trick the system is to do something with the points map, but i can't think of any way
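A back-of-the-envelope check of the numbers quoted above. The per-header size is our assumption: NavCoin's proof-of-stake block headers carry more than Bitcoin's 80 bytes, and ~175 bytes is the figure that reproduces the quoted ~700kb; the block spacing of 30 seconds gives the 2880 headers-per-day figure used in the IBD discussion.

```cpp
#include <cassert>

// Arithmetic behind the threshold discussion above. HEADER_BYTES is an
// assumption chosen to match the ~700kb figure quoted in the chat.
static const int BLOCK_SPACING_SECS = 30;                           // NavCoin target spacing
static const int HEADERS_PER_DAY = 86400 / BLOCK_SPACING_SECS;      // = 2880
static const int BAN_THRESHOLD = 4000;                              // safely above HEADERS_PER_DAY
static const int HEADER_BYTES = 175;                                // assumed per-header size

// Worst-case bytes a single attacking IP can make a node hold before the ban.
int MaxBytesPerPeer() { return BAN_THRESHOLD * HEADER_BYTES; }      // 700,000 bytes
```

This is why lowering the threshold toward 2880 (or below, as with the suggested 400) trades a smaller exhaustion bound against the risk of banning honest peers that are legitimately a few hours behind.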