19:02:01 #startmeeting
19:02:01 Meeting started Thu Apr 27 19:02:01 2017 UTC. The chair is wumpus. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:01 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:02:31 I have two topic proposals: "hd-restore" and "limited NODE_NETWORK (NODE_NETWORK_LIMITED) signaling"
19:02:33 #bitcoin-core-dev Meeting: wumpus sipa gmaxwell jonasschnelli morcos luke-jr btcdrak sdaftuar jtimon cfields petertodd kanzure bluematt
19:02:33 instagibbs phantomcircuit codeshark michagogo marcofalke paveljanik NicolasDorier jl2012 instagibbs
19:02:39 hi.
19:02:44 here
19:02:47 hi
19:03:17 #topic hd-restore (jonasschnelli)
19:03:21 suggested topic: summary of BlueMatt's overall plan for libconsensus
19:03:41 is this 10240?
19:03:45 if BlueMatt wants of course
19:03:47 jtimon: k, can share. jonasschnelli you have the floor :)
19:03:50 Re. HD restore. I'm not sure if we should always try to restore funds or if we should check for the bestblock and compare it to the chain tip and only then restore
19:04:12 The main stuff is in #10240
19:04:13 https://github.com/bitcoin/bitcoin/issues/10240 | [WIP] Add basic HD wallet restore functionality by jonasschnelli · Pull Request #10240 · bitcoin/bitcoin · GitHub
19:04:24 ack libconsensus discussion
19:04:34 But I think we should only restore if the wallet's bestblock lags behind
19:04:35 oh sorry, didn't see topic set already :)
19:04:44 Because...
19:04:51 Encrypted wallets may need to unlock
19:05:06 And also for performance / log reasons
19:05:09 jonasschnelli: i assumed we'd always keep a buffer of X pubkeys around
19:05:15 because you may have wallet "forks"
19:05:21 not sure what you mean by "restore"?
19:05:28 (feel free to tell me to shut up and go read the pr)
19:05:55 BlueMatt: By restore I mean always check the keypool keys and auto-extend (if only 50 [TBD] keys are left, topup to 100 [TBD])
19:05:57 looks like it's re: finding relevant transactions
19:06:19 If we always restore... we would need to unlock encrypted wallets...
19:06:29 (more often)
19:06:30 jonasschnelli: my assumption was that we'd always mark seen keys as used (and we should do that independently)
19:06:43 jonasschnelli: we should also always extend the keypool when we can
19:06:54 jonasschnelli: ah, you mean like "when do we extend keypool to watch buffer"?
19:06:56 sipa: Yes. But what if we can't?
19:06:59 jonasschnelli: and if the keypool runs out in a non-interactive setting, shutdown
19:07:00 If it needs to generate keys you could prompt the user right when the main gui pops up
19:07:15 And what's a safe gap limit? I would assume >100 keys.
19:07:18 another option would be to stop updating best seen block
19:07:29 and then kick off a background rescan-from-that-height when wallet next unlocks
19:07:38 If someone has handed out 101 keys and only position 101 has been paid...
19:07:38 if gap goes under some threshold
19:07:45 yea, trigger on next unlock is better than achow101's popup
19:07:58 achow101: needs to be cli-compatible, though
19:08:03 achow101: GUI is solvable..
19:08:07 jonasschnelli: if we fix the bdb flushing stupidity, generating new keys becomes very cheap
19:08:11 I don't know how to solve it the non-GUI way
19:08:20 jonasschnelli: shutdown. make sure it doesn't happen
19:08:21 jonasschnelli: how would you hand out 101 keys if the 101st wasn't generated yet?
19:08:29 jonasschnelli: i mean keys are cheap, can do 250 or 500 or something crazy
19:08:35 sipa: But how to unlock during init in the first place?
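To make the keypool top-up idea above concrete, here is a minimal C++ sketch (names and thresholds are placeholders, not the actual CWallet code): when a rescan sees a payment to a pool key, the key is marked used and the pool is refilled back to its target size, unless the wallet is locked, in which case the caller has to pause syncing instead.

    // Minimal sketch (assumed names, not the actual CWallet API): top up the
    // keypool whenever it falls below a low-water mark, unless the wallet is
    // locked, in which case syncing would be paused instead.
    #include <cstdint>
    #include <iostream>
    #include <set>

    struct KeypoolSketch {
        std::set<int64_t> unused;   // indices of keys not yet handed out
        int64_t next_index = 0;     // next HD derivation index
        int64_t low_water = 50;     // [TBD] threshold from the discussion above
        int64_t target = 100;       // [TBD] top-up target
        bool locked = false;        // encrypted wallet without passphrase loaded

        // A rescan saw a payment to key `index`: mark it used and refill.
        // Returns false if we wanted to refill but could not (wallet locked).
        bool MarkKeyUsed(int64_t index) {
            unused.erase(index);
            if ((int64_t)unused.size() >= low_water) return true;
            if (locked) return false;   // caller should pause sync/pruning
            while ((int64_t)unused.size() < target) {
                // Real code would derive a child key here and write it to the
                // wallet DB (ideally batched, per the BDB flushing discussion).
                unused.insert(next_index++);
            }
            return true;
        }
    };

    int main() {
        KeypoolSketch pool;
        pool.MarkKeyUsed(0);   // triggers a fill up to `target`
        std::cout << "keypool size: " << pool.unused.size() << "\n";
    }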
19:08:40 jonasschnelli: you can't
19:08:47 jonasschnelli: but can't we just use the keypool number now as the "buffer"?
19:08:48 ah, i see what you mean
19:08:50 But right after we rescan and sync
19:09:07 and, like, the lower bound should be like keypool count / 2
19:09:21 sipa: you can't just shutdown mid-sync
19:09:30 BlueMatt: Yes. But with the current 100 default, we would enforce a shutdown on startup for encrypted wallets
19:09:33 BlueMatt: why not?
19:09:47 it's an error condition that we cannot recover from
19:09:48 * BlueMatt re-proposes that we stop updating wallet's best height if our keypool falls below keypool / 2
19:09:56 and then rescan when keypool next gets filled
19:09:57 hmm
19:10:05 IMO an explicit "restore-mode" with an "unlock during startup" (not sure how) would be preferable for encrypted wallets
19:10:12 BlueMatt: you should also stop pruning
19:10:20 sipa: yes, that would be my major reservation
19:10:29 jonasschnelli: not sure you realistically can in a daemon setting
19:10:40 is stdin a total nogo? *duck*
19:10:44 so i guess we need a special "stop syncing" mode that we go into when the keypool runs out
19:10:49 jonasschnelli: there is no stdin with -daemon
19:10:51 sipa: i guess you can stop pruning and if disk fills it will do the shutdown part for you :p
19:10:58 BlueMatt: ugh
19:11:02 yea, i know
19:11:03 sipa: Yes. But at least you could run in non-daemon headless
19:11:07 yes a blocking mode makes sense in that case
19:11:24 ok, so blocking in pruning mode, rescan-later in non-pruning mode?
19:11:34 and no, stdin is not an option, there should be no expectation with bitcoind that there's anyone at the terminal
19:11:35 If you run with an encrypted wallet and the bestblock lags behind, shutdown if we can't unlock over stdin
19:11:52 no stdin, just shutdown
19:11:56 wumpus: So we only have RPC to unlock?
19:11:57 everything should be scriptable
19:11:57 jonasschnelli: but only in prune mode
19:12:07 jonasschnelli: yes
19:12:14 But how do we unlock/extend before we sync?
19:12:25 just wait until the wallet is unlocked to start
19:12:25 rpc starts after chain sync
19:12:31 jonasschnelli: you go into a blocking mode, and you continue after walletunlock
19:12:37 right
19:12:52 jonasschnelli: and no, no stdin ever
19:13:03 but can we block the sync and wait for RPC walletunlock?
19:13:09 jonasschnelli: why not?
19:13:15 sure
19:13:18 (without changing too much)?
19:13:18 ProcessNewBlock { return false; }
19:13:31 okay... sounds good. Need to take a closer look.
19:13:33 add a function to validation.h to let the core know that validation cannot progress
19:13:42 maybe stop net too under the current net-pause stuff
19:13:47 right
19:13:53 Good point.
19:13:54 should it shutdown if the wallet is not unlocked within a certain time period? if it's not shut down, users might expect it to still be syncing.
19:14:00 Next question: what's a sane gap limit?
19:14:02 the only precondition for getting out of Init() is that the genesis block has been processed, everything else can be delayed
19:14:04 100 seems way too low to me
19:14:16 jonasschnelli: fix bdb flushing insanity, and raise it to 1000 or 10000
19:14:17 jonasschnelli: keypool / 2?
19:14:18 (risk of losing funds is involved)
19:14:24 and we can bump keypool to 500
19:14:25 how would you know that it is blocking and you need to walletunlock?
19:14:29 jonasschnelli: i think the answer will depend on performance. also, do you really want to encourage users to use gaps? the answer might be yes..
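The "block the sync until walletunlock" approach discussed above could look roughly like this (assumed function and flag names, not the real validation.h API): a process-wide flag that the wallet sets when its keypool is exhausted and clears after a successful unlock and top-up, with validation refusing to advance the tip while the flag is set.

    // Sketch only: hypothetical names for the pause-validation idea above.
    #include <atomic>
    #include <iostream>

    std::atomic<bool> g_sync_paused_for_wallet{false};   // hypothetical global

    // Would live next to validation (cf. "add a function to validation.h").
    void SetSyncPausedForWallet(bool paused) { g_sync_paused_for_wallet = paused; }

    bool ProcessNewBlockSketch() {
        if (g_sync_paused_for_wallet) {
            // Don't advance the tip past the wallet's best block; the wallet
            // would otherwise miss payments to keys it has not derived yet.
            return false;
        }
        // ... normal validation and tip activation would happen here ...
        return true;
    }

    int main() {
        SetSyncPausedForWallet(true);                   // keypool exhausted, wallet locked
        std::cout << ProcessNewBlockSketch() << "\n";   // 0: block not processed
        SetSyncPausedForWallet(false);                  // walletunlock + keypool top-up done
        std::cout << ProcessNewBlockSketch() << "\n";   // 1: sync resumes
    }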
19:15:06 achow101: yes that is why i suggested shutdown after a certain period of time. users might not realize that syncing is stopped otherwise.
19:15:15 here my next concern pops up... all users would always have to have 500+ keypools. In an explicit restore mode, only then would we need a large pool
19:15:30 jonasschnelli: who cares about 500 keys?
19:15:41 it's 16 kB of memory
19:15:52 well, some small constant multiple of that
19:15:52 i thought derivation time was the bottleneck?
19:15:56 Hmm... yes.
19:16:08 If it would just be a pubkey and H160 only.. but it's also the private key! hell
19:16:17 the memory usage of keys is not an issue, just generation time (and that's only due to bdb stupidity)
19:16:33 kanzure: we can do ~10000 derivation steps per second on a single thread on a modern CPU
19:16:42 is that with bdb madness? :)
19:16:46 and maybe 5 due to BDB flushing
19:16:47 calling fsync after every key is not a good idea, it should create the entire keypool refill in one transaction
19:16:53 IMO automatic pruning should probably have as a precondition that the wallet has updated to the block being pruned, if it doesn't already; then the wallet can just set its criteria for processing
19:17:20 and if auto-pruning is enabled, block validation (safely) when the size is hit, until it can prune further?
19:17:30 luke-jr: agree, but that's not a concern right now as the wallet updates synchronously... with BlueMatt's coming changes maybe that changes
19:17:42 yes, that changes, but it still shouldn't be too slow
19:17:48 With HD, there would also be no need for the disk-keypool for unencrypted wallets,.. it's just legacy. We could always fill up in-mem
19:18:02 if your wallet falls behind consensus, you have a very, very large wallet
19:18:13 (and should pause sync anyway)
19:18:39 right, the wallet should have the ability to pause syncing or prevent pruning
19:18:49 Conclusion: a) always scan keypool and topup, b) extend keypool and gap-limit to 500+, c) block when encrypted until RPC unlocked.
19:19:11 sgtm
19:19:17 yes
19:19:18 thanks. That was effective
19:19:32 #topic libconsensus (BlueMatt)
19:19:51 yes, so obviously this is all based on #771
19:19:53 https://github.com/bitcoin/bitcoin/issues/771 | CBlockStore by TheBlueMatt · Pull Request #771 · bitcoin/bitcoin · GitHub
19:19:59 :)
19:20:08 (19 Jan 2012)
19:20:22 archeology?
19:20:23 but pr #10279 creates a CChainState class which will hold things like mapBlockIndex, chainActive, etc, etc
19:20:24 https://github.com/bitcoin/bitcoin/issues/10279 | Add a CChainState class to validation.cpp to take another step towards clarifying internal interfaces by TheBlueMatt · Pull Request #10279 · bitcoin/bitcoin · GitHub
19:20:40 and have ProcessNewBlock, Activate..., Connect, etc, etc, etc
19:20:53 yay
19:21:04 long-term that class' public interface will be libbitcoinconsensus, but right now it's really just to clean up internal interfaces within validation.cpp
19:21:13 sounds good to me
19:21:29 that class would get a pcoinsTip and related stuff to write/read blocks from disk
19:21:39 and then only be able to call that and pure functions (eg script validation)
19:21:45 BlueMatt: so what's the next thing we will be able to expose with these changes?
19:21:51 ooh, +1
19:21:54 there is a bit of cleanup in the pr, but mostly it's just moving into a class
19:22:06 jtimon: expose-wise? probably nothing for like 2 more releases "until it's ready"
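As a rough illustration of the CChainState restructuring described above (this is not the code in #10279, and the member names here are stand-ins): the chain-related globals move behind one class whose public surface could eventually back a consensus library API.

    // Illustrative sketch only: stand-in types and names, not Bitcoin Core code.
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    struct BlockIndexSketch { int height = 0; };   // stand-in for CBlockIndex
    struct CoinsViewSketch {};                     // stand-in for the pcoinsTip view
    struct BlockSketch { std::string hash; };      // stand-in for CBlock

    class ChainStateSketch {
    public:
        // Entry points roughly matching the names mentioned above.
        bool ProcessNewBlock(const BlockSketch& block) {
            block_index_[block.hash] = BlockIndexSketch{};   // check + store the block
            return ActivateBestChain();                      // then try to advance the tip
        }
        bool ActivateBestChain() {
            // Would connect blocks toward the most-work chain using coins_tip_.
            return true;
        }

    private:
        // Formerly free-floating globals in validation.cpp:
        std::map<std::string, BlockIndexSketch> block_index_;   // ~mapBlockIndex
        std::vector<const BlockIndexSketch*> active_chain_;     // ~chainActive
        std::unique_ptr<CoinsViewSketch> coins_tip_;             // ~pcoinsTip
        // Long term, only this state plus pure functions (e.g. script
        // verification) should be reachable from the public methods.
    };

    int main() {
        ChainStateSketch chainstate;
        BlockSketch block{"0000...abcd"};
        return chainstate.ProcessNewBlock(block) ? 0 : 1;
    }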
19:22:15 * BlueMatt is not a fan of libbitcoinconsensus being a grab-bag of random verification functions
19:22:21 the class itself? mhmm
19:22:32 i mean "the class" but I assume via a C API
19:24:01 any other questions? or next topic?
19:24:08 yes, I know, and I'm very open to seeing what you want to expose, even if I haven't given up on the verifyWithoutChangingState x {block, header, tx, script} + getFlags() vision I had
19:24:38 but that's helpful, I can just imagine the class being exposed as a C API
19:25:15 not directly, it's just another step toward being able to
19:25:36 #topic limited NODE_NETWORK (NODE_NETWORK_LIMITED) signaling (jonasschnelli)
19:25:57 wumpus: +1
19:25:57 I wanted to ask if a first step to announce pruned NODE_NETWORK would make sense.
19:26:04 Could be NODE_NETWORK_LIMITED
19:26:11 jonasschnelli: what would it entail?
19:26:14 The only requirement is relay, and serving the last 144 blocks
19:26:21 jonasschnelli: ACK
19:26:22 we had this discussion recently, I think the conclusion was to use two service bits
19:26:28 (or one, at first)
19:26:31 what wumpus said.
19:26:32 (which is almost always possible with the current auto-prune limit)
19:26:43 i would suggest something that guarantees 1 day and 1 week
19:26:51 one bit combination would be 144, one would be ~1000
19:26:58 jonasschnelli: so segwit prune=550 wouldn't be allowed?
19:27:02 * BlueMatt resists the urge to bikeshed on the "1 week" number
19:27:02 Which should be 2 days and 2 weeks so the boundary condition doesn't leave you right out.
19:27:09 BlueMatt: i have data!
19:27:11 luke-jr: We would have to bump there
19:27:14 BlueMatt: sipa has data on request rates.
19:27:21 oh, true, that's right
19:27:22 luke-jr: it's allowed, but it can't signal anything
19:27:44 BlueMatt: i'll analyse the numbers again if there is interest
19:27:44 The only thing to bikeshed is how much higher we need the cutoff than his data; it should be at least a couple blocks higher because of reorgs/boundary conditions.
19:28:20 our existing minimum sizing for pruning is sized out for 288 blocks, so I think we should just do that, it will make ~144 pretty reliable.
19:28:29 [bitcoin] sipa opened pull request #10290: Add -stopatheight for benchmarking (master...shutdown_at_height) https://github.com/bitcoin/bitcoin/pull/10290
19:28:38 yep
19:28:47 sipa: ack without seeing code
19:28:49 Two service bits seems great. Did anyone start on a spec/BIP?
19:29:27 BlueMatt: i just have a log of which depths of blocks are being fetched from my node
19:29:42 how would NODE_NETWORK_LIMITED interact (if at all) with the remote peer's advertised height?
19:29:59 BlueMatt: since february
19:30:03 cfields: I don't think it should?
19:30:05 IMO would be nicer to have the new service bit require *some* historical storage, but I guess we're not running out..
19:30:08 IMO the purpose is to signal "I have only a limited amount of blocks"
19:30:12 cfields: not at all, we ignore that value
19:30:24 sipa: yes, i recall now
19:30:26 ok, good
19:30:27 That advertised height shouldn't be used for almost anything.
19:30:28 (as it's easily spoofable)
19:30:31 The best-height in version doesn't matter IMO
19:30:32 it isn't used at all
19:30:39 i believe it is not used at all
19:30:44 (by bitcoin core)
19:30:46 I'm not sure why more than the last 1-2 blocks should be needed to indicate relaying
19:30:53 wumpus: I think it's used by SPV
19:31:01 luke-jr: because of reorgs.
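To illustrate the two-service-bit signaling being proposed, a small sketch follows. The bit positions and depths here are hypothetical placeholders (the actual assignments were still undecided at this point); two bits give three usable combinations, e.g. "last ~144" and "last ~1000" blocks.

    // Hypothetical bit values for illustration only.
    #include <cstdint>
    #include <iostream>

    constexpr uint64_t NODE_NETWORK_SKETCH          = 1ULL << 0;   // full archival node (existing bit)
    constexpr uint64_t NODE_NETWORK_LIMITED_LOW_SK  = 1ULL << 10;  // serves roughly the last 144 blocks
    constexpr uint64_t NODE_NETWORK_LIMITED_HIGH_SK = 1ULL << 11;  // serves roughly the last ~1000 blocks

    // Could this peer plausibly serve a block `depth` blocks below the tip?
    bool PeerMayHaveBlock(uint64_t services, int depth) {
        if (services & NODE_NETWORK_SKETCH) return true;
        if ((services & NODE_NETWORK_LIMITED_HIGH_SK) && depth <= 1000) return true;
        if ((services & NODE_NETWORK_LIMITED_LOW_SK) && depth <= 144) return true;
        return false;
    }

    int main() {
        std::cout << PeerMayHaveBlock(NODE_NETWORK_LIMITED_LOW_SK, 100) << "\n";  // 1
        std::cout << PeerMayHaveBlock(NODE_NETWORK_LIMITED_LOW_SK, 500) << "\n";  // 0
    }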
19:31:20 if I can't serve you the parents of my tip, I can't help you reorg onto it, making my serving nearly useless.
19:31:24 jonasschnelli: I meant in bitcoin core; I don't know about other implementations
19:31:24 hmm
19:31:28 Is a min of 144 blocks to height?
19:31:44 No... nm
19:31:45 luke-jr: and requiring nodes to have a GB or two of space for this is a trivial cost these days
19:31:52 is the point of NODE_NETWORK_LIMITED just to tell nodes that they can request the most recent blocks from said node?
19:31:55 assuming we only fetch blocks when reorging to their chain
19:32:02 It's the unbounded growth that gets people to shut off nodes
19:32:04 and right now you can't request any blocks from pruned nodes?
19:32:11 In any case the bit must promise more than nodes count on.
19:32:17 achow101: pruned nodes don't even advertise they can relay blocks
19:32:34 achow101, NODE_NETWORK is a flag for that, and it's missing from pruned nodes currently
19:32:36 achow101: once you are in sync (>-144) you can pair with pruned peers and be fine
19:32:53 ok
19:32:56 Say nodes frequently need to catch up a day. You only keep 144 blocks. Peer needs to catch up a day, connects to you.. oops, you can't help them because a day turned out to be 150 blocks, they wasted their time connecting to you for nothing.
19:33:43 So for this to be useful the requester has to be conservative and not try to talk to you unless it thinks you are _very_ likely to have what it needs, which means that you need a fair amount more than the target.
19:34:11 So to serve a day of blocks, you'll need a day and a half or so. Round it up to 288.
19:34:25 petertodd: oh hi. long time no see.
19:34:33 gmaxwell: heh
19:34:39 and as mentioned, our pruning limit is already there.
19:34:49 I just think we should allow the current auto-pruning (prune=550) peers to signal relay and "limited amount of blocks around the tip".
19:35:08 so 137 blocks?
19:35:21 If we set the NODE_NETWORK_LIMITED threshold higher while allowing shorter pruning,.. this would waste potential peers
19:35:22 luke-jr: 1337 blocks
19:35:27 jonasschnelli: then that will never be used.
19:35:28 heh
19:35:43 If we don't know how many blocks to expect we'll never connect to them.
19:36:14 This impacts the connection logic, we'll need logic that changes the requested services based on an estimate of how far back we are.
19:36:17 when you're fully synced, why wouldn't you connect to a node that guarantees for example having the last 10 blocks?
19:36:19 gmaxwell: Well, if we are in sync, you could be friendly and make space for the ones who need to sync and re-connect to limited peers?
19:36:30 Yes. What sipa said
19:36:57 I would expect that the larger the chain grows, the more pruned peers we will see
19:37:09 (rough assumption)
19:37:11 not that we should support pruning that much, but for bandwidth reasons it may be reasonable that someone wants to relay new blocks, but not historical ones
19:38:11 sipa: I think we can make a good argument that requiring nodes to have something like 1GB of storage for historical blocks isn't a big deal, and makes the logic around all this stuff simpler
19:38:12 signaling the amount of block you have is also not extremely effective because of the addr-man, seed/crawl delay
19:38:18 *blocks
19:38:30 petertodd: again, not talking about storage, but about bandwidth
19:38:52 it's an open question - i'm not convinced it's needed or useful
19:39:02 Yes. Agree with sipa. Main pain point in historical blocks is upstream bandwidth
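The connection-logic point above (what we require from a new outbound peer depends on how far behind the tip we think we are) could look roughly like this; bit values, the 288-block guarantee, and the 144-block safety margin are assumptions carried over from the discussion, not existing code.

    // Sketch only: assumed names and numbers.
    #include <cstdint>
    #include <iostream>

    constexpr uint64_t NODE_NETWORK_SK         = 1ULL << 0;   // full node (existing bit)
    constexpr uint64_t NODE_NETWORK_LIMITED_SK = 1ULL << 10;  // hypothetical limited bit (~288 blocks kept)

    // blocks_behind: our estimate of how far we are from the network tip.
    bool AcceptableOutboundPeer(uint64_t services, int blocks_behind) {
        if (services & NODE_NETWORK_SK) return true;   // archival peer: always fine
        // Rely on a 288-block peer only well inside its window, leaving headroom
        // for reorgs and for the estimate being off.
        return (services & NODE_NETWORK_LIMITED_SK) && blocks_behind <= 144;
    }

    int main() {
        std::cout << AcceptableOutboundPeer(NODE_NETWORK_LIMITED_SK, 10) << "\n";    // 1: in sync, limited peer ok
        std::cout << AcceptableOutboundPeer(NODE_NETWORK_LIMITED_SK, 2000) << "\n";  // 0: need a full archive
    }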
19:39:07 sipa: sure, that would also work, but (1) nodes that only keep ten blocks are a hazard to the network, and (2) there is no real reason to keep that little, and (3) we don't have signaling room to send out every tiny variation.
19:39:10 sipa: how much more bandwidth do your stats say serving ~100 or whatever blocks is vs. 10?
19:39:21 sipa: I mean, you can just turn off NODE_NETWORK_LIMITED entirely
19:39:26 gmaxwell: They would keep more but only be willing to serve the last 10
19:39:28 sipa: if you want to limit your bandwidth, limit it.
19:39:32 gmaxwell: well we have 3 possibilities
19:39:45 NODE_NETWORK_LIMITED would be a limit
19:39:51 fair enough, we have other mechanisms for limiting bandwidth
19:39:51 QoS on historical blocks :)
19:40:20 petertodd: i need to look again... it may not be that much difference
19:40:30 sipa: and we have had reorgs longer than 10 in recent memory, what happens if all of your peers are like that?
19:40:57 gmaxwell: we have?!
19:41:17 BlueMatt: bip50 was 30 deep, iirc
19:41:17 oh, you mean the csv false-signaling reorgs?
19:41:22 yea, ok
19:41:44 ok, i retract my suggestion for 10 deep
19:41:46 Would the two-bit amount-of-blocks-available signaling be effective regarding the delay of address distribution?
19:41:52 always need 2 * MAX_HUMAN_FIX_TIME_FACTOR for everything :p
19:42:02 but we do have 3 possibilities with 2 bits... perhaps we can have a 3rd limit
19:42:08 People tend to prune to MB rather than blocks (which could be a design mistake)
19:42:17 jonasschnelli: Why do you think it has much to do with address distribution delay at all?
19:42:29 if you keep the last 288 you keep the last 288.. you're not going to flicker that on and off.
19:42:42 jonasschnelli: the design guarantees that you'll have 288 blocks.
19:42:48 (of the software)
19:42:53 gmaxwell: Maybe I'm looking too much at our seeders,... but the crawling until you serve IPs can be very delayed.
19:43:02 so?
19:43:09 jonasschnelli: I think I agree on pruning by height being more useful
19:43:13 You'll signal you keep X if you're guaranteed to keep X.
19:43:26 or relative height rather
19:43:33 s/height/depth/
19:43:36 Okay. But prune=550 is a MB target. Does it guarantee and amount of blocks?
19:43:42 sipa: right, thanks
19:43:43 *an
19:44:10 jonasschnelli: it guarantees we'll keep 288 blocks. The whole feature was designed to guarantee that for reorg reasons, but people thought offering an MB-centric UI would be more useful to users.
19:44:21 I think in the future we'll change it to a limited set of options.
19:44:45 Maybe all of them named after words for big in different languages, like starbucks. :P
19:44:53 Okay. Fair enough...
19:44:53 gmaxwell: the MB option confuses people though since it includes the undo data. people see 550 and they assume it means 550 blocks with 1 MB blocks
19:44:53 eh, 550 MB only guarantees 137 blocks with segwit
19:45:04 oh, forgot undo data
19:45:11 gmaxwell: "For me a venti depruned node, please"
19:45:20 lol @ coffee names
19:45:23 luke-jr: then that needs to get fixed.
19:45:24 lol
19:45:38 sipa: with a double shot of xthin.
19:45:43 pfff
19:46:10 luke-jr: easy fix.
19:46:18 controversial fix
19:46:24 gmaxwell: it'll break existing configs
19:46:41 Okay. I can start writing a draft spec for the two-bit (144/~1000) NODE_NETWORK_LIMITED.. will announce once I have something
19:46:42 sipa: I'm sorry, I don't speak starbucks
19:46:49 sipa: so?
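For reference, the "550 MB only guarantees 137 blocks with segwit" figure above appears to come from dividing the prune target by a worst-case serialized block size; a back-of-the-envelope check under that assumption (undo data ignored, which would push the real number lower still):

    #include <cstdio>

    int main() {
        const double prune_target_mb = 550.0;  // current auto-prune minimum
        const double max_block_mb    = 4.0;    // assumed worst-case serialized segwit block
        std::printf("blocks guaranteed by the MB target alone: %d\n",
                    static_cast<int>(prune_target_mb / max_block_mb));   // -> 137
    }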
19:47:04 jonasschnelli: seriously, like why did I bother commenting today?
19:47:18 BlueMatt: venti is Italian for 20. easy. that's obviously more than "grande" or "tall"
19:47:22 first peak is at 144, it _must_ keep more than that to be useful.
19:47:49 sipa: ehh, I'll stick with my *good* coffee, thanks
19:47:56 anyway, next topic?
19:48:08 #topic high priority for review
19:48:14 My 2 cents: the UI should stay in MB, but underneath, the variables stored by the software should be in block count... for the prune threshold.
19:48:43 anything to add to project https://github.com/bitcoin/bitcoin/projects/8?
19:48:55 * BlueMatt suggests adding #10199 for morcos
19:48:56 https://github.com/bitcoin/bitcoin/issues/10199 | Better fee estimates by morcos · Pull Request #10199 · bitcoin/bitcoin · GitHub
19:49:00 (who is out today)
19:49:17 i'd like to get review on #10195 (which i think is ready)
19:49:18 https://github.com/bitcoin/bitcoin/issues/10195 | Switch chainstate db and cache to per-txout model by sipa · Pull Request #10195 · bitcoin/bitcoin · GitHub
19:49:20 * BlueMatt humbly requests rebase of #7729 which is on that list
19:49:21 https://github.com/bitcoin/bitcoin/issues/7729 | rpc: introduce label API for wallet by laanwj · Pull Request #7729 · bitcoin/bitcoin · GitHub
19:49:26 sipa: you already have an entry....
19:49:30 oh :(
19:49:50 added 10199
19:50:23 It's not urgent, but #10285 is first in a long line towards libevent
19:50:24 ok, swap #10148 for #10195 then; 10148 needs more tests
19:50:24 https://github.com/bitcoin/bitcoin/issues/10285 | net: refactor the connection process. moving towards async connections. by theuni · Pull Request #10285 · bitcoin/bitcoin · GitHub
19:50:25 https://github.com/bitcoin/bitcoin/issues/10148 | [WIP] Use non-atomic flushing with block replay by sipa · Pull Request #10148 · bitcoin/bitcoin · GitHub
19:50:26 https://github.com/bitcoin/bitcoin/issues/10195 | Switch chainstate db and cache to per-txout model by sipa · Pull Request #10195 · bitcoin/bitcoin · GitHub
19:50:38 suggested topic? planned obsolescence
19:50:41 random thought: what about maintaining the MB option and adding an incompatible one (you can only set one) with depth? then the MB can be just an estimation that translates to depth on init, but you don't break old configs, only the expected guarantees about limits
19:51:00 luke-jr: NACK
19:51:16 BlueMatt: NACK topic or NACK it altogether? :/
19:51:24 luke-jr: planned obsolescence is a bad name for it
19:51:25 second
19:52:09 added 10285, swapped #10148 for #10195
19:52:10 * luke-jr waits for topic change before going into discussion
19:52:10 * jtimon checks https://github.com/bitcoin/bitcoin/pull/8855 is on the priority list
19:52:12 https://github.com/bitcoin/bitcoin/issues/10148 | [WIP] Use non-atomic flushing with block replay by sipa · Pull Request #10148 · bitcoin/bitcoin · GitHub
19:52:13 https://github.com/bitcoin/bitcoin/issues/10195 | Switch chainstate db and cache to per-txout model by sipa · Pull Request #10195 · bitcoin/bitcoin · GitHub
19:52:28 thanks
19:52:50 #topic bitcoind expiration
19:53:42 10282
19:53:44 #10282
19:53:45 https://github.com/bitcoin/bitcoin/issues/10282 | Expire bitcoind & bitcoin-qt 7-8 years after its last change by luke-jr · Pull Request #10282 · bitcoin/bitcoin · GitHub
19:53:50 BlueMatt: reasoning for NACK?
19:54:11 luke-jr: maybe explain reasoning for doing so first?
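A sketch of the "MB translates to a depth at init" idea raised under the previous topic; the helper name, the 4 MB worst case, and the 288-block floor are assumptions for illustration, not existing options or code.

    #include <algorithm>
    #include <cstdio>

    // Convert a -prune MB setting into the minimum block depth it can promise.
    int PruneMbToMinDepth(int prune_mb) {
        const int worst_case_block_mb = 4;    // assumed worst-case block size (undo data ignored)
        const int min_blocks_to_keep  = 288;  // reorg-safety floor from the discussion
        return std::max(min_blocks_to_keep, prune_mb / worst_case_block_mb);
    }

    int main() {
        std::printf("prune=550  -> keep at least %d blocks\n", PruneMbToMinDepth(550));   // 288
        std::printf("prune=5000 -> keep at least %d blocks\n", PruneMbToMinDepth(5000));  // 1250
    }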
19:54:12 re achow101's comment, I don't really think it matters what we call it
19:54:23 cfields: 10282 has a full explanation
19:54:51 any timeframe short enough to really be useful will probably be short enough to raise political risks...
19:54:51 1) it's basically guaranteed to be unsafe by then; 2) hardforks become softforks with enough lead time
19:55:00 I think if it's optional and disabled by default it kind of defeats the point, but I certainly don't want that for myself or the users I recommend bitcoin core to
19:55:19 luke-jr: i don't see how this has anything to do with consensus changes
19:55:32 also, is there any precedent for this kind of expiration in other software?
19:55:40 luke-jr: 110% sends the wrong message. if i expected any reasonable person to see that and think "I need to think for myself about what consensus of the network is" I'd be happy with it, but realistically the only people reading that will think "oh, I have to switch to the latest thing from Bitcoin Core, for whatever Bitcoin Core is according to my local google server"
19:55:43 jtimon: what is the use case for running node software over 7 years after its release, without maintenance?
19:55:53 petertodd: yes, but I'm not aware of any that can be overridden.
19:56:06 luke-jr: i think insecurity of the software is perhaps a good reason, but not consensus
19:56:07 gmaxwell: got any examples?
19:56:13 petertodd: see also the thing with debian and xscreensaver.
19:56:32 BlueMatt: that's a problem independent of this IMO
19:56:35 gmaxwell: ah, yeah, that crazy situation...
19:57:02 luke-jr: how is that independent of the thing which creates it? but, indeed, security may be a reasonable reason, not sure it's justified, though
19:57:26 am i really not allowed to not upgrade the bitcoind I've got running behind my bitcoind/xyz firewall?
19:57:33 BlueMatt: people will mostly all update before this triggers; probably using the insecure method you describe
19:57:44 I agree with petertodd's point that short enough to be useful is short enough to be problematic. :( But there are other not-really-useful features...
19:57:50 oops, bitcoind crashed in production
19:58:23 BlueMatt: note this has an explicit override allowed
19:58:26 gmaxwell: and there's a larger point too: chances are the surrounding software on your machine is also not getting updated anyway, so you've got other big problems
19:58:32 if you really don't want to upgrade, just add to your config file
19:58:35 luke-jr: yes, and you can do that /after/ your bitcoind has crashed
19:58:36 luke-jr: let's say my friend remembers what I told him about being up to date 6 years and 11 months after I helped him install bitcoin core
19:58:38 which is kinda shit
19:58:48 it would be nice to be able to say there are no nodes running older than X without the user deciding to keep them running.
19:58:58 BlueMatt: you could do it before as well, but IMO after 7 years it's okay to force the user to do something
19:59:04 BlueMatt: yes, but the crash was an RCE and all your funds are now gone. :)
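A minimal sketch of the expiration behaviour being debated (placeholder timestamp, duration, and override flag; this is not #10282's actual implementation): refuse to start once the build is roughly 7 years past its release date unless the user explicitly overrides it.

    #include <cstdio>
    #include <cstring>
    #include <ctime>

    int main(int argc, char** argv) {
        const std::time_t release_time   = 1493251200;             // placeholder release timestamp (late Apr 2017)
        const double seven_years_seconds = 7.0 * 365.25 * 24 * 3600;
        // Stand-in for a config-file/option check like the override mentioned above.
        const bool expiry_overridden =
            (argc > 1 && std::strcmp(argv[1], "--i-know-what-i-am-doing") == 0);

        if (!expiry_overridden &&
            std::difftime(std::time(nullptr), release_time) > seven_years_seconds) {
            std::fprintf(stderr, "This build is over 7 years old and likely unsafe; "
                                 "refusing to start without an explicit override.\n");
            return 1;
        }
        std::puts("starting...");
        return 0;
    }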
19:59:27 if you run nodes in production you'll have some system to monitor them
19:59:30 gmaxwell: not if it's the bitcoind that everything talks to on your network and it just sits behind sufficient layers of regularly-updated bitcoind firewalls
19:59:39 and summon an operator on crashes
19:59:53 wumpus: lol, i meannnnn, maybe
20:00:08 wumpus: hahaha, yes, with a server farm at the end of the rainbow
20:00:19 BlueMatt: and what if it doesn't crash, but someone exploits your failure to enforce a softfork?
20:00:21 or shouldn't I recommend bitcoin core for a wallet?
20:00:25 wumpus: you should talk to some banking IT guys about how hard it is to get approval to update things :)
20:00:33 jtimon: I don't understand your argument.
20:00:45 petertodd: I'm not saying anything about updating
20:00:54 jtimon, you can override the setting, I believe
20:01:08 wumpus: literally touching a config option is an update by those standards
20:01:09 instagibbs: oh, I missed that
20:01:18 only about crashes: if some software is important to your business and it crashes, you'll notice.
20:01:41 anyhow
20:01:43 #endmeeting