19:02:10 #startmeeting
19:02:10 Meeting started Thu May 7 19:02:10 2020 UTC. The chair is wumpus. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:10 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:02:14 hi
19:02:18 hi
19:02:19 hi
19:02:27 hi
19:02:30 #bitcoin-core-dev Meeting: wumpus sipa gmaxwell jonasschnelli morcos luke-jr sdaftuar jtimon cfields petertodd kanzure bluematt instagibbs phantomcircuit codeshark michagogo marcofalke paveljanik NicolasDorier jl2012 achow101 meshcollider jnewbery maaku fanquake promag provoostenator aj Chris_Stewart_5 dongcarl gwillen jamesob ken281221 ryanofsky gleb moneyball kvaciral ariard digi_james amiti fjahr
19:02:31 hi
19:02:31 hi
19:02:31 jeremyrubin lightlike emilengler jonatack hebasto jb55
19:02:33 hi
19:02:41 hi
19:02:43 hi
19:02:48 hi
19:02:56 hi
19:03:02 hi
19:03:16 one proposed topic (by jnewbery): removing valgrind from travis PR builds, but that was already done
19:03:36 I have something to bring up, unless we're still too busy with 0.20...
19:03:42 There was another by Andrew re. wallet storage
19:03:58 I have a little bit to add to the valgrind topic, just for context
19:04:20 gleb: if you have any topics to propose please do
19:04:36 we're not too busy with 0.20, 0.20 is going pretty slow at the moment
19:05:04 I want to ask about adding extra threads in light of my work in #18421
19:05:05 most focus is on 0.21/master
19:05:06 https://github.com/bitcoin/bitcoin/issues/18421 | Periodically update DNS caches for better privacy of non-reachable nodes by naumenkogs · Pull Request #18421 · bitcoin/bitcoin · GitHub
19:05:22 ok
19:05:50 we'll come to those, let's start with high prio as usual
19:05:51 hi
19:06:03 #topic High priority for review
19:06:18 meshcollider: I think that topic is for tomorrow's wallet meeting
19:06:33 https://github.com/bitcoin/bitcoin/projects/8 4 blockers, 1 bugfix, 5 chasing concept ACK
19:06:37 I'd like to add #18877 please
19:06:39 achow101: ok sure :)
19:06:40 https://github.com/bitcoin/bitcoin/issues/18877 | Serve cfcheckpt requests by jnewbery · Pull Request #18877 · bitcoin/bitcoin · GitHub
19:07:30 jnewbery: added
19:07:37 (to blockers, I suppose?)
19:07:42 yes
19:07:52 blocking the rest of the BIP 157 implementation
19:07:53 thanks!
19:08:03 #17037, which is on "chasing concept ACKs", was closed yesterday
19:08:05 https://github.com/bitcoin/bitcoin/issues/17037 | Testschains: Many regtests with different genesis and default datadir by jtimon · Pull Request #17037 · bitcoin/bitcoin · GitHub
19:08:50 lightlike: thanks, removed
19:09:42 anything else to change/add/remove?
19:10:21 nice to see the blockers moving forward lately
19:10:32 yes, two have been merged this week IIRC
19:11:18 looks like #17994 is kind of close to merge too
19:11:21 https://github.com/bitcoin/bitcoin/issues/17994 | validation: flush undo files after last block write by kallewoof · Pull Request #17994 · bitcoin/bitcoin · GitHub
19:11:43 and #16946
19:11:46 https://github.com/bitcoin/bitcoin/issues/16946 | wallet: include a checksum of encrypted private keys by achow101 · Pull Request #16946 · bitcoin/bitcoin · GitHub
19:12:29 #topic Adding another scheduler thread (gleb)
19:12:37 I implemented #18421, which helps non-reachable nodes to be less visible to the upstream infrastructure (DNS servers, ASNs).
19:12:37 The idea is that already-known reachable nodes query DNS periodically to update the caches, so that non-reachable nodes are served from caches.
19:12:39 https://github.com/bitcoin/bitcoin/issues/18421 | Periodically update DNS caches for better privacy of non-reachable nodes by naumenkogs · Pull Request #18421 · bitcoin/bitcoin · GitHub
19:12:46 It requires that reachable nodes execute this query periodically, and that DNS request might potentially take several minutes. AFAIK it is a part of the low-level stack and can't be easily solved at the application level. Because of this, we can't safely integrate this feature into existing threads: all of them sort of assume nothing would block them
19:12:46 for so long.
19:13:10 So I was wondering what would be a good solution here? Give up on the idea because it's not worth adding a new thread? Or maybe add a new thread, keeping in mind it will be useful in future for similar (non-restricted) tasks? Or maybe modify the scheduler to limit max exec time (not sure how to do that in practice…)
19:13:19 can't this be done asynchronously?
19:13:37 it seems the thread would spend most of its time waiting for the network anyhow
19:14:08 Yeah, it hangs on the network call.
19:14:10 IIRC libevent has some async DNS functionality
19:14:49 oops, sorry I'm late
19:15:05 That might help actually! Will investigate this then. Thank you wumpus. Wasn't sure which tools we have available.
19:15:28 in any case, on 32-bit systems we don't want to add another thread, on 64-bit systems it doesn't matter
19:16:02 [bitcoin] luke-jr opened pull request #18909: [0.20] Fix release tarball (0.20...fix_release_tarball-0.20) https://github.com/bitcoin/bitcoin/pull/18909
19:16:05 What's the reason behind not adding it on 32-bit?
19:16:07 the 2MB/4MB of virtual memory for the stack that is only mapped when it is used is only a problem on 32 bit
19:16:20 virtual memory space
19:16:24 Oh, i see.
19:16:33 it's really tight on 32 bit already
19:16:48 Alright, I opened an issue for broader thread discussion in #18488, if anyone is interested. Otherwise, done here, will explore libevent.
19:16:49 https://github.com/bitcoin/bitcoin/issues/18488 | Support for non-immediate periodic tasks with variable runtime · Issue #18488 · bitcoin/bitcoin · GitHub
19:17:05 in any case if you can avoid adding a thread that'd be good
19:19:48 do people have a bit of time to talk about bip157 or, more broadly, light clients?
19:19:49 #topic Removing valgrind from travis (jnewbery)
19:20:05 thanks wumpus
19:20:24 like you say, this was mostly resolved this morning, but I thought I'd give some more context in general
19:20:30 In December, we added a travis job to run all the functional tests in valgrind for every PR.
19:20:36 That meant that ci runs were taking around 3 hours (and much longer in some cases due to backlog).
19:20:40 Thankfully, we're no longer doing that as of this morning :)
19:20:41 ariard: probably there's some time left, though it's preferred if you propose topics at the beginning of the meeting or between meetings with #proposedmeetingtopic
19:20:46 We are, however, still running ASan/LSan and UBSan jobs, which take about an hour.
19:20:55 I think that's too long for a PR ci job. Preferably travis runs should return in a few minutes to allow fast iteration on PRs. Longer-running jobs can be done on a nightly travis build on master.
19:21:05 I did a bunch of work in 2017/18 to make ci jobs faster, so I was surprised to see how much slower they've become since then.
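
Referring back to the scheduler-thread topic above: a rough, illustrative sketch (not the code in #18421) of the kind of non-blocking lookup wumpus suggests, assuming libevent's evdns API, which Core already links against for the HTTP/RPC server. The seed name "dnsseed.example.com" and the standalone event loop are placeholders.

    #include <event2/dns.h>
    #include <event2/event.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <cstdio>

    // Runs from the event loop once the lookup completes (or fails), so the
    // thread that issued the request is never blocked by a slow resolver.
    static void dns_done(int result, char type, int count, int ttl,
                         void* addresses, void* /*arg*/)
    {
        if (result != DNS_ERR_NONE || type != DNS_IPv4_A) {
            std::printf("lookup failed: %s\n", evdns_err_to_string(result));
            return;
        }
        const auto* addrs = static_cast<const in_addr*>(addresses);
        for (int i = 0; i < count; ++i) {
            char buf[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &addrs[i], buf, sizeof(buf));
            std::printf("seed answer: %s (ttl %d)\n", buf, ttl);
        }
    }

    int main()
    {
        event_base* base = event_base_new();
        evdns_base* dns = evdns_base_new(base, 1);  // 1: use the system resolvers
        // Returns immediately; the answer is delivered to dns_done later.
        evdns_base_resolve_ipv4(dns, "dnsseed.example.com", 0, dns_done, nullptr);
        event_base_dispatch(base);  // placeholder loop; Core already runs event loops
        evdns_base_free(dns, 0);
        event_base_free(base);
        return 0;
    }
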
19:21:12 wumpus: ah yes, I proposed it yesterday but I should have used #proposedmeetingtopic, right
19:21:53 Nice to hear travis no longer takes hours because of valgrind, it was painful last time I rebased my things on a busy day. Thanks jnewbery
19:21:55 as I said in the PR, I think it'd still make sense to run the unit tests and one functional test (spinning up and down bitcoind) in travis to test the init/shutdown sequence
19:21:59 I have a suggestion, but I'm not sure how easy that is to implement
19:22:02 but running it on everything was always overkill
19:22:02 Really, just a plea to keep travis times down on PR jobs. It makes developers' lives much more pleasant!
19:22:17 I agree, long turnaround times for tests are bad for a project
19:22:27 we can have a "fast" CI on PRs and a longer one after a PR has been accepted for merge, so before the actual merge it will run in another CI
19:22:31 please use testing time and resources efficiently
19:22:43 elichai2: does Travis support that?
19:22:51 don't do silly or overkill things
19:23:07 i think that would introduce way more process overhead for maintainers
19:23:08 also... don't forget that bitcoinbuilds.org usually runs faster, but without the ASAN/TSAN and fuzzers
19:23:23 luke-jr: good question, sadly I doubt it supports it natively. I know rust-lang is doing it via a bot.
19:23:23 (for quick feedback on a PR)
19:23:26 jonasschnelli: hah
19:23:41 jonasschnelli: can we get that to report to GitHub?
19:23:50 luke-jr: it is
19:23:57 I think it probably does support it. You just set it up to run on every push to master
19:23:59 * luke-jr didn't notice O.o
19:24:08 https://github.com/jonasschnelli/bitcoin-core-ci
19:24:19 jnewbery: well, it'd be nice to get them all run BEFORE merge
19:24:20 jonasschnelli: yes, i always look at bitcoinbuilds for first feedback, then much later, travis on my own github branch and on bitcoin/bitcoin
19:24:22 but like sipa says, anything that causes things to not get caught pre-merge transfers work to the maintainers
19:24:24 sipa: well, we could delegate it to a bot, but it would require implementing, and a big change in how merges happen (more use of bots) which probably not everyone will like
19:24:31 jonasschnelli: i'm grateful for bitcoinbuilds
19:24:43 oh, sipa was talking about nightly builds
19:24:50 I only see AppVeyor and Travis on the PR I just made..
19:24:53 we're talking about different things
19:25:05 i'm totally in favor of doing more work on master merges than on PRs
19:25:26 things can always be reverted if there is an unexpected problem soon after merging
19:26:17 sipa: and then maintainers need to check the result on the nightly CI and revert if something broke it?
19:26:17 rather not, of course, but the full valgrind run wasn't that effective in catching things anyway
19:26:19 i don't think adding separate CI between PRs before and after they're "accepted" is a good idea as it just pushes more work to maintainers (arguably a more scarce resource than CI infrastructure...)
19:26:41 does valgrind do anything the *Sans don't?
19:26:57 luke-jr: it can test actual production binaries
19:26:58 luke-jr: I think gmaxwell showed me an example once but I don't remember
19:27:10 sipa: oh, true
19:27:23 the sans all require different builds that invasively change the output
19:27:28 well, Valgrind does it by runtime patching of stuff… not sure that's much different?
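
An aside on the valgrind-vs-sanitizer point being discussed here: a contrived, minimal example (not from Core) of the build-mode difference. Valgrind's memcheck can flag the uninitialized read below when run against an ordinary, uninstrumented binary (provided the read survives compilation), whereas catching it with a sanitizer means rebuilding with instrumentation (MemorySanitizer in this case; the ASan/UBSan jobs mentioned earlier do not report this class of bug).

    #include <cstdio>

    int main()
    {
        int flags[4];               // never initialized
        int set = 0;
        for (int i = 0; i < 4; ++i) {
            if (flags[i] & 1) {     // read of uninitialized memory (undefined behaviour)
                ++set;
            }
        }
        // With optimization the compiler may remove the read entirely, which is
        // exactly the "optimizer dependent" caveat raised below.
        std::printf("set bits seen: %d\n", set);
        return 0;
    }
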
19:27:46 and emulation IIRC
19:27:47 (they can also test things that valgrind can't, because they have knowledge of the source code)
19:27:53 yes
19:27:54 (I've seen Valgrind emulate an instruction *wrong* before)
19:28:05 luke-jr: sure
19:28:18 both approaches have their advantages and disadvantages, I think that's clear
19:28:37 but it is certainly possible that a bug in the source code exists that persists into production binaries (and can be caught by valgrind), but is compiled out in sanitizer builds
19:29:01 true
19:29:12 because it's very optimizer-dependent for example, and sanitizer builds prevent some optimizations (or at least interfere with them significantly)
19:29:14 so yes, it's good to test master under valgrind as well
19:29:30 once in a while at least
19:29:41 can the opposite also be true? (ie an overflow that is optimized out because the read was never used etc)
19:29:50 sure
19:29:56 that's what sanitizers are for
19:30:10 they primarily test for discoverable bugs in the source code
19:31:02 yes, good point
19:31:35 FWIW I think I have a mainnet node lying around running under valgrind constantly (although I haven't checked it since covid)
19:32:18 elichai2: social node distancing
19:32:28 hehe
19:32:34 aka Tor?
19:32:36 yeah lol, I don't want it to get infected with some UB :P
19:33:39 #topic bip157 and light clients (ariard)
19:33:41 the full valgrind run brought to light some issues for me recently that led to more robust code... #18681 was an example
19:33:43 https://github.com/bitcoin/bitcoin/issues/18681 | donotmerge: build: Enable thread-local with glibc compat by laanwj · Pull Request #18681 · bitcoin/bitcoin · GitHub
19:34:03 er, #18691
19:34:06 https://github.com/bitcoin/bitcoin/issues/18691 | test: add wait_for_cookie_credentials() to framework for rpcwait tests by jonatack · Pull Request #18691 · bitcoin/bitcoin · GitHub
19:34:20 Yes, so about light clients, I had really interesting discussions with people
19:34:41 and the constructive outcome of this was that it would be better to have a more defined policy
19:35:06 when we know a solution isn't perfect, but at the same time not restrain the project from making steps forward
19:35:24 what I was worried about is that by supporting bip157 in core, all the people building such nice LN wallets
19:35:29 jonatack: hehe, the cookie file race was detected just because valgrind makes things slow :)
19:35:34 consider the validation backend as a solved issue
19:35:40 BIP157 isn't just "not perfect", it's harmful/backward
19:35:41 yep :p
19:36:10 instead of being well aware of it, they are free-riding on the p2p network for now
19:38:09 I think BIP157 support in core is a conceptual no-brainer. The question is maybe more whether it should be open to non-whitelisted peers (random peers).
19:38:10 and having a better idea of what bip157 support was aimed at: people using their mobile wallets with their own full nodes
19:38:25 or servicing random clients in the wild, which may be a bit insecure
19:39:02 there is nothing insecure about it; it's just a bad idea for them to trust random peers
19:39:07 the same issue as with the bloom filters again
19:39:09 (but that's still better than BIP37...)
19:39:15 jonasschnelli: what is the use case for it?
19:39:30 (though at least this doesn't have as much DoS potential)
19:39:35 wumpus: i don't think so;
19:39:37 exactly
19:39:42 bloom filters are strictly better I think
19:39:49 BIP157 support is very cheap for the server
19:39:51 luke-jr: how so?
19:39:56 it's a kind of 'altruism' that might not be warranted
19:40:00 sipa: lower overhead
19:40:09 luke-jr: for whom?
19:40:09 on the security aspect, supporting bip157 in core encourages people to connect directly to random peers
19:40:10 luke-jr: wait, how?
19:40:16 sipa: for everyone
19:40:19 luke-jr: wut
19:40:27 and almost all bip157 clients don't have strong addr management countermeasures
19:40:40 ariard: but that's *their* problem
19:40:40 BIP157 is certainly harder on clients
19:40:41 *peer management protection
19:40:43 take the reasonable use case of a user using a light wallet with their own full node
19:40:52 we care about the server side
19:40:53 bloom does this fine, with very little overhead
19:40:58 wumpus: but do you want to make it easy for people to build insecure solutions?
19:40:59 you scan the blockchain once on the server side
19:41:08 luke-jr: but you need to do it once per client
19:41:13 with BIP157 you do it once
19:41:19 sipa: how many people have multiple clients?
19:41:35 and even a few clients is still relatively low total overhead there
19:41:38 scanning on the server side was always the problem
19:41:49 wumpus: but that's exactly the ideal in this case
19:41:55 you don't want to burden your phone/battery
19:41:56 if you allow random people on the internet to offload computation to you, you're infinitely generous
19:42:07 you don't. this isn't for random people, it's for trusted peers…
19:42:12 your own wallets
19:42:21 scanning on the server side isn't great, even worse with LN clients verifying channel openings
19:42:22 for whitelisted peers it's okay
19:42:25 sure
19:42:26 random people using it is harmful, and the very reason to avoid merging it
19:42:54 ariard: the server side typically has ~unlimited power
19:42:56 luke-jr: i agree it's a bad idea; i'm not sure it is harmful
19:42:59 the client has a battery to worry about
19:43:20 sipa: it encourages light wallets to use foreign nodes
19:43:22 and it would be far less of a bad idea if it was softforked in, so the filters are verifiable
19:43:25 maybe it should be part of the release node, to advise whitelisting
19:43:29 *notes
19:43:36 but that's not going to happen any time soon
19:43:38 sipa: that doesn't fix the problem of people not using their own node
19:43:46 if it's your own server, you don't need an spv protocol. Just upload your xpub
19:43:49 luke-jr: not everyone uses their own full node, period
19:43:52 so the rationale here from luke-jr is that in the end every person should have their own full node?
19:43:56 luke-jr: there are good and bad ways to deal with it
19:43:57 luke-jr: yes, but core's rescanning code isn't that performant: no parallelization, a lot of lock taking
19:44:01 jnewbery: yes, that direction seems a lot better IMO
19:44:12 I think ariard's concern is hypothetical but IMO boils down to limiting bandwidth... you can write a client today that downloads all blocks over and over again.
19:44:14 good, so your use case is solved
19:44:35 jnewbery: ⁇
19:44:49 jonasschnelli: are you thinking about intentional DoS?
19:44:56 my point is that there is no use case for neutrino
19:44:59 not everyone wants to use bitcoin in the same way as you, and that's ok
19:45:05 ariard: both... intentional or just because of the use cases
19:45:13 Bitcoin's security model depends on at least most people using their own full node
19:45:39 it's okay if there are exceptions, but there's no reason to cater to them, especially when the network's security is already at high risk
19:45:40 luke-jr: i strongly disagree; it depends on enough people independently verifying the blockchain
19:45:43 if there is the concern that there are too many BIP157 clients... one might want to limit the bandwidth
19:45:44 jonasschnelli: okay, my point was really about LN clients, for which bip157 was designed, not an application which needs to download blocks over and over
19:45:51 sipa: enough people = most
19:46:01 luke-jr: i strongly disagree
19:46:09 sipa: a minority verifying is useless if the majority imposes the invalid chain economically
19:46:17 ariard: Same for any SPV client... right?
19:46:35 jonasschnelli: yes, my concern isn't bip157-specific, I do think that's the best option available today
19:46:46 stratum > bloom > bip157
19:46:59 for private/trusted usage
19:47:08 which is the only usage we should support IMO
19:47:11 it's more about how you scale any light client protocol to avoid building centralized chain-access services when they hit a scaling ceiling
19:47:45 luke-jr: I assume you meant electrum?
19:47:48 ariard: there's no difference
19:47:50 luke-jr: bip157 has other advantages over bloom filters, such as being able to connect to two nodes and comparing the filters, permitting a "1 of 2 nodes is trusted" security model
19:47:55 gleb: Stratum is the protocol Electrum uses, yes
19:48:10 ariard: I would expect that wallet providers ship a recent pack of filters with the app
19:48:11 overall, bip157 is good for experimentation, while keeping awareness that there are still unsolved issues on security and scalability
19:48:13 sipa: but improving the security of light wallets is a net loss of security for the network
19:48:24 sipa: because now fewer people will use a full node of their own
19:48:30 luke-jr: 99.99% of users don't even have SPV-level verification
19:48:40 ariard: the beauty is also that filters can be retrieved from centralized sources and CDNs
19:48:46 sipa: if 99.99% don't have their own full node, Bitcoin has failed
19:49:02 fwiw, i'm running a bip157 node on mainnet with -peercfilters=1 -blockfilterindex=1 to test for the first time, and /blockfilter/basic is 4 GB
19:49:10 jonasschnelli: yes, but what's your trust model with such centralized sources and CDNs
19:49:40 ariard: IMO the goal for compact block filters is to get a block commitment at some point
19:49:42 you can dissociate getting the filters from such a CDN and getting filter headers/headers from the p2p network
19:50:08 ariard: also, one can crosscheck the CDN filters against some p2p-loaded bip157 filters
19:50:13 jonasschnelli: it would simplify SPV logic and improve their security, but even committed you still need to download them
19:50:20 luke-jr: Ok.
19:50:37 ariard: what is the worry with downloading them?
19:50:39 i don't think this discussion will lead anywhere
19:50:57 better continue the ML discussion I think
19:50:58 jonasschnelli: bandwidth cost if you download them directly from the p2p network
19:51:11 (happy to continue outside of this meeting)
19:51:37 jonasschnelli: but yes, I agree you can crosscheck the CDN filters against filter headers provided from the p2p network
19:51:50 is the contention that light clients should be doing IBD and validation?
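
A toy, self-contained sketch (illustrative only, not Core code) of the "1 of 2 nodes is trusted" idea sipa mentions above: a client only acts on a BIP157 filter whose header is reported identically by two independently chosen sources (two peers, or a CDN cross-checked against a peer), and otherwise falls back to fetching the full block. The hard-coded headers stand in for real getcfheaders responses.

    #include <array>
    #include <cstdint>
    #include <cstdio>

    using FilterHeader = std::array<uint8_t, 32>;

    // Accept the filter header for a given height only if both sources agree;
    // otherwise the client should download and scan the full block itself.
    bool AcceptFilterHeader(const FilterHeader& from_source_a,
                            const FilterHeader& from_source_b)
    {
        return from_source_a == from_source_b;
    }

    int main()
    {
        FilterHeader a{}; a[0] = 0xab;  // pretend getcfheaders answer from peer A
        FilterHeader b{}; b[0] = 0xab;  // matching answer from peer B (or a CDN)
        FilterHeader c{}; c[0] = 0xcd;  // a lying or stale source

        std::printf("A vs B: %s\n", AcceptFilterHeader(a, b) ? "use filter" : "fetch full block");
        std::printf("A vs C: %s\n", AcceptFilterHeader(a, c) ? "use filter" : "fetch full block");
        return 0;
    }
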
19:52:09 heh
19:52:40 kanzure: i think luke-jr is contending that light clients shouldn't exist, and all wallets should be either a full node, or connected to the user's own trusted full node
19:52:41 kanzure: no, my concern is: assuming you have the bip157 light client paradigm, how do you make it scale ecosystem-wise
19:52:47 at least for a majority of users
19:52:50 next question: how many times should someone have to do IBD? i think the correct answer should be only once ever....
19:53:12 [if they can keep integrity of their download and state]
19:53:14 ariard: i don't understand how your concern is any different from nodes serving blocks *at all*
19:53:32 kanzure: next question: how many random peers have I misused by testing mainnet IBD
19:53:44 these and other disturbing questions.
19:54:01 sipa: ideally
19:54:20 at least, that as long as the situation is not good, anything that makes light clients better is harmful to Bitcoin and shouldn't be merged
19:54:48 because that can only result in fewer people using a full node
19:55:00 sipa: it's another issue but yes, also an unsolved problem; my assumption was you may have a disproportionate number of light clients compared to full nodes
19:55:06 and maybe faster than expected
19:55:13 luke-jr: my belief is that bitcoin offers a choice for financial autonomy, and choice is a good thing - not everyone will choose to make maximal use of that, but everyone who wants to should be able to
19:55:16 Yes. The only difference from block serving (which seems to cause much more traffic) is that blocks served to bip157 clients are pure consumption, while blocks served to full nodes should - ideally - be served on to other peers.
19:55:39 sipa: you already have that choice with fiat: you can print monopoly money, and refuse to honour USD
19:55:41 jonasschnelli: yes, you may assume some reciprocity between full-node peers
19:55:44 ariard: imposing upload costs on peers is something that is caused by any activity on the p2p network. It doesn't make much sense to distinguish between application data types because there will always be some other data you can download. Peer upload resource cost can really only be managed on the net layer by deprioritizing nodes that are taking up resources.
19:55:49 at least I see incentives far more aligned
19:56:08 luke-jr: this is not productive
19:56:08 agree with jnewbery
19:56:23 4 minutes to go
19:56:41 sipa: it's the same thing; if most people just trust miners, then the people who don't trust miners will simply get cut off when miners do something they don't like; the losers are the full nodes
19:57:47 light clients are a hardfork to "no rules at all"
19:58:33 perhaps - but far less easily than having money on coinbase is a hardfork to "whatever monetary policy coinbase likes"
19:58:36 jnewbery: but ideally you do want to increase security by increasing connectivity, like I prefer to offer my bandwidth to other full nodes for censorship-resistance?
19:58:55 then don't enable serving cfilters :)
19:58:56 sipa: it's the same, but miner(s) instead of coinbase
19:59:36 ariard: there is no way to know if the blocks you serve are for other full nodes
19:59:40 jnewbery: sorry, I don't get you on deprioritizing nodes that are taking up resources, can you be more precise?
19:59:59 jonasschnelli: technically true, but what non-full nodes download full blocks these days?
20:00:03 ariard: here's another example for you. If a peer asks for the same block twice, should you serve it again? You're clearly not helping block propagation
20:00:22 luke-jr: wasabi did for a while (full block SPV)
20:00:23 if your answer is 'no', then you need to keep internal book-keeping of which blocks you've served to whom
20:00:26 (maybe still does)
20:00:27 ariard: if a node asks too much of your resources (memory, cpu, bandwidth, i/o), deprioritize serving their incoming requests
20:00:36 if your answer is 'yes', then how is it any different from serving a cfilter?
20:00:52 jnewbery: maybe that's a fault-tolerance case and it makes sense to serve it again
20:01:39 sipa: yes, but we don't do this AFAIK? and if everyone starts to deprioritize serving bip157 clients you do have an issue
20:01:48 ariard: no, but we absolutely should
20:02:09 sipa: +1
20:02:15 (not BIP157 specifically, just in general - if you ask too much of us and we get overloaded, deprioritize)
20:02:29 * luke-jr still hasn't heard a use case for merging BIP157 at all, aside from harming Bitcoin
20:02:46 question: if bip157 is opt-in, and a full node can soon export a descriptor wallet xpub, why would a full node turn on serving cfilters?
20:03:17 this should be the end of the meeting
20:03:26 i don't see what exporting an xpub has to do with that
20:03:31 maybe we should continue next week
20:03:34 * luke-jr either
20:03:43 jonasschnelli: yes, you may not know what kind of clients you're servicing, but with all this stuff we make assumptions about what kind of clients
20:03:50 are effectively deployed?
20:04:06 wumpus: yes we can end, but thanks for all your points, it's really interesting
20:04:44 #endmeeting