20:08:37 #startmeeting
20:08:37 Meeting started Mon Jun 22 20:08:37 2020 UTC. The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:08:37 Useful Commands: #action #agreed #help #info #idea #link #topic.
20:08:47 I don't think anchor outputs add security without package relay support, but it's a step forward for congestion management
20:09:05 #topic The everlasting anchor outputs debate
20:09:16 t-bast, roasbeef: yeah, I've been looking at making static_remotekey compulsory, it would be really nice to have an upgrade to get rid of the older code entirely.
20:09:20 #link https://github.com/lightningnetwork/lightning-rfc/pull/688
20:09:20 ariard: ofc it does.. you can actually adjust fees
20:09:58 rusty: we're not fully ready for that yet, on the mobile side we still need to do some work for bech32...
20:10:04 ariard: yeh, 0.10 is manual, we're looking to do an exponential increase as we get closer to the deadline
20:10:23 t-bast: stuff keeps coming up, it's on the way... ;)
20:11:01 rusty: same! eclair added it, I think it's about time
20:11:10 static remote by default that is
20:11:31 roasbeef: wrt blockspace competition and racing to be confirmed before timelock expiration? yes, but I think a really adversarial counterparty can pin the commitment
20:11:36 want to prioritize enabling these safety features by default and make them compulsory
20:11:41 to avoid even yours getting into the mempool
20:11:52 ariard: yes, pinning is still there, but even w/o pinning, today, you can't adjust
20:11:57 we always conflate these issues
20:12:19 as in: w/o any adversarial activity, today you can't adjust your fees, so you can end up being screwed
20:12:39 roasbeef: there's something that's been bothering me: how can you decide on the amount you attach to your anchor output without looking at the mempool? Because the carve-out rule is a one-shot attempt, if you don't bump enough (because the other anchor has a big list of child txs) you won't be able to RBF your anchor, right?
20:12:45 roasbeef: agreed, sorry I've been remiss on implementing anchor outputs. However, I have finally gotten the new protocol tests (ann coming today I really hope) to the point where I can implement it so testing is easier.
20:12:51 t-bast: you can use it as a lower bound, that's your floor
20:12:59 I see your point, it's a step forward even with regards to security, but I wouldn't call you secure even with compulsory anchor output support
20:13:01 yeah the step needs to be a certain size
20:13:09 ariard: security isn't binary!
20:13:26 let's address what we can, and continue to iterate on the other issues
20:13:49 roasbeef: right, we should evaluate each scenario on its own, but solving the hardest ones would likely spare us from having to overspec
20:14:15 we also don't need super strong global synchrony on this
20:14:19 like I don't think that a remote anchor output on the local commitment should be pursued, it's a mapping oracle
20:14:21 t-bast: you should be able to RBF your anchor.
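(As an aside on the bumping policy mentioned at 20:10:04 and the amount question at 20:12:39: one way to picture "exponential increase as we get closer to the deadline" is the rough sketch below. It is purely illustrative; the function name, parameters, and numbers are hypothetical and not taken from lnd, eclair, or any other implementation.)

    # Hypothetical sketch of an exponential fee-bump schedule toward a deadline.
    def bump_feerate(floor_feerate: float, ceiling_feerate: float,
                     broadcast_height: int, deadline_height: int,
                     current_height: int) -> float:
        """Scale from the floor right after broadcast to the ceiling at the deadline."""
        total = max(deadline_height - broadcast_height, 1)
        elapsed = min(max(current_height - broadcast_height, 0), total)
        progress = elapsed / total  # 0.0 just after broadcast, 1.0 at the deadline
        return floor_feerate * (ceiling_feerate / floor_feerate) ** progress

    # Example: 10 sat/vB floor, 500 sat/vB ceiling, deadline 40 blocks after broadcast.
    for blocks_elapsed in (0, 10, 20, 30, 40):
        print(blocks_elapsed, round(bump_feerate(10, 500, 100, 140, 100 + blocks_elapsed), 1))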
20:14:23 since it's a link level feature and can be negotiated between peers
20:14:50 ariard: implementations are free to deploy it if they wish, or wait even longer (it's been like 9+ months at this point)
20:15:00 dunno what you mean by mapping oracle
20:15:23 roasbeef: see t-bast's gist https://gist.github.com/t-bast/22320336e0816ca5578fdca4ad824d12
20:15:29 a way to map your full-node to your LN-node
20:15:36 short update on the anchor pr itself: i am still working on providing you with the test vectors
20:15:43 t-bast: for RBF'ing your anchor, see https://github.com/bitcoin/bitcoin/pull/16421
20:15:59 roasbeef: I think you're free to deploy whatever you want, but whether we should spec out security-damaging stuff, I'm not sure
20:16:00 ariard: seems distinct... and other ways that can be mitigated
20:16:04 * bitconner waves, irc client back in operation
20:16:07 harding: but at a potentially huge cost, right? If the attacker has a big list of child txs on his anchor (that maximizes the package), you already need quite a big amount to get your anchor spend to match the carve-out rule. If you didn't allocate enough fee there, when you bump it it will be very costly, won't it (because of the adversarial package)?
20:16:35 roasbeef: well so what's your mitigation? Just reacting to any in-mempool stuff is insecure by design
20:17:26 ariard: mitigation to what? i feel like we're trying to fix everything in a single swoop, and more stuff is discovered, but the central thing being enabled: bumping fees is extremely valuable, well overdue, and implementations that don't deploy some sort of solution in the near term put their users at risk
20:17:41 t-bast: no, I don't think so. You're only RBFing your ~200 vbyte anchor transaction, so you only need to fee bump those 200 vbytes.
20:17:43 joostjgr: have you verified the spending path for the anyone-can-spend? I'm not sure it's compliant with MINIMALIF, haven't tested yet though
20:18:11 harding: oh that's great, I'd misunderstood that then!
20:18:26 harding: thanks for clarifying this
20:18:27 roasbeef: I agree with you that we should move forward with a subset of anchor outputs, namely adding a _local_ anchor output on local commitment transactions
20:18:50 and even getting the bumping logic right should be done step by step
20:19:09 joostjgr: could you also add to the PR that local signatures should still use SIGHASH_ALL even when the remote uses SIGHASH_SINGLE? Worth mentioning IMHO
20:19:26 (well I agree we should move forward, but I think we disagree on the scope of what we should move forward with)
20:19:34 minimal if is just about what you supply in the witness ariard
20:19:51 t-bast: i believe i've added that in https://github.com/lightningnetwork/lightning-rfc/pull/688/commits/964b03f51cdcf668eb4d494a738679482e20c5ed
20:20:08 ariard: i will look that up. it has been a while now since we worked on it
20:20:18 ariard: fee bumping is purely client-side policy
20:20:47 joostjgr: thanks, don't know how I missed that :/
20:21:04 still need to start that weekend project to make the anyone-can-pay anchor sweeper and get rich
20:21:16 ariard: again as I mentioned above, we don't need global synchrony on this, it's a link level upgrade, that's the beauty of LN as well, we can roll out things more quickly as long as there's negotiation
20:21:17 joostjgr: xD
20:21:29 minimal if may hit you there because OP_CHECKSIG will push a 0 in case of sig failure?
20:21:30 joostjgr: bwahahah!
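(harding's point at 20:17:41, sketched: when you RBF your own ~200 vbyte anchor spend, the replacement only has to outbid that one small transaction plus its own relay bandwidth under the usual BIP 125 rules, not the counterparty's package hanging off the other anchor. The helper below is a rough illustration with made-up numbers, not code from any implementation.)

    # Minimum absolute fee for replacing your own anchor spend (BIP 125 rules 3 and 4).
    INCREMENTAL_RELAY_FEERATE = 1.0  # sat/vB, Bitcoin Core's default policy value

    def min_replacement_fee(prev_fee_sat: float, replacement_vsize: int) -> float:
        # Rule 3: pay at least the absolute fee of the transaction being replaced.
        # Rule 4: also pay for the replacement's own relay bandwidth.
        return prev_fee_sat + INCREMENTAL_RELAY_FEERATE * replacement_vsize

    # Example: re-bumping a ~200 vbyte anchor spend that previously paid 2000 sats
    # costs at least 2200 sats, independent of the size of the adversarial package.
    print(min_replacement_fee(2000, 200))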
20:22:47 roasbeef: okay, if you want to have a remote anchor output on the local commitment feel free to do so, I would just discourage deploying it
20:23:08 I may be wrong on this, but with all these pinning games, mapping your counterparty's node is a big chunk of it
20:23:10 ariard: rationale being? otherwise one party can block the confirmation entirely
20:23:25 we keep combining all these scenarios, which isn't productive imo
20:23:34 mapping oracle namely, also making the transaction size bigger
20:23:58 but my point is you can already block confirmation entirely
20:24:23 but in the case w/o any outside interaction at all, w/o this you can fail to get into the chain in time, which can mean loss of funds
20:25:45 Ah, so the scenario you're thinking of is concurrent broadcast of both commitment transactions and your counterparty's getting into the mempool
20:26:06 and your counterparty's fee-bumping policy being too lazy to get it confirmed?
20:26:16 in a timely fashion
20:26:36 yeh that's one of many
20:26:54 or you don't trust their bumping algo or w/e
20:27:16 you should have the ability to do it yourself, since their commitment needs to be confirmed in order for you to make sure you can resolve all your contracts properly
20:27:21 Is it a fair summary to say that roasbeef you want to move forward with the current proposal because, while not fixing all attack vectors (pinning is still possible), it greatly improves on the current tx format; ariard your main concern is that you'd like to move forward with something that better fixes those attack vectors?
20:27:39 yeah, it being lazy or buggy is the same for you; still, it makes the assumption that you see the remote commitment in your mempool
20:27:54 idk how saying "users need to be able to bump their fees" isn't a resounding agreement
20:27:58 t-bast: yes, yes, yes
20:27:59 p2p rules don't make the assumption right now that every peer will see every transaction announced
20:28:16 t-bast: we keep expanding the threat model continually vs pinning one down and operating within that
20:28:55 t-bast: yes exactly, I'm working on some package relay for core and I think that would avoid us a lot of overspeccing there
20:28:58 I agree with roasbeef FWIW.
20:29:24 We need to remember that attacks only get better, so we may get stuck never shipping anything if we're always aiming for fixing everything at once
20:29:32 t-bast: boom
20:29:43 because as soon as we have package relay a lot of complexities around OP_CSV, remote anchors and tx-propagation assumptions can disappear
20:29:55 modulo a "soon" on the core side can take a long time
20:30:18 ariard: I completely agree, but that doesn't prevent us from doing something that's not perfect now, and migrating to something better once we have better layer 1 support?
20:30:40 still, you will fix some attacks/congestion scenarios by making some other attacks easier, that's my concern
20:30:43 as long as what we'd like to do now isn't horribly costly and fixes issues we're seeing today?
20:31:22 ariard: package relay doesn't fix the bitcoin core mempool behavior, which is a larger concern anyway.
20:31:22 Then I think what would help move forward is a clearer view of how much simpler we make those attacks, to weigh whether what we gain is better than what we lose
20:31:39 ariard: how long is package relay gonna take? no one knows....
20:31:40 so I'm okay to move forward if we can dynamically negotiate the scope of anchors, for those uncomfortable with adding remote ones
20:32:24 for me, increasing the safety of 1000s of nodes deployed today >>> needing to make a v2 anchor proposal
20:32:44 After spending a few days on the anchor outputs current proposal, I've grown to like it :). The format change isn't too drastic, and it's a first step to get everyone to start implementing RBF/CPFP engines which will always be useful.
20:33:05 roasbeef: I'm actively working on it and hope to post some proposal before next meeting, but likely an 18-month timeline to get it merged and deployed
20:33:18 ariard: you mean a flag that would let us have a format with a single anchor (local) instead of 2?
20:33:31 a minimal package relay just fixing the current state of LN, not the full-fledged thing
20:33:52 ariard: 18 months... such time, wow
20:34:09 bitconner: yes, but I'm questioning this "safety"; like t-bast, are we confident that's a blanket increase?
20:34:17 roasbeef: we're already 9 months in for the anchor outputs proposal, and it was first discussed almost 2 years ago xD
20:34:28 roasbeef: 18 months doesn't shock me anymore
20:34:35 Thanks, I think we're going in circles. Can we move on?
20:34:43 rusty: sgtm
20:34:59 t-bast: think of the events that can happen in those 18 months that'll make you want to have deployed _something_ in that time frame
20:35:01 rusty: yes
20:35:10 #action ariard to summarize the security loss that two anchors create, so that we can evaluate it next time
20:35:22 roasbeef: yeah, I'm teasing really :)
20:35:24 also 18 months is just conjecture, and it'll be even longer for "all" nodes to update
20:35:36 roasbeef: think of the events that can happen in those 18 months due to anchor outputs introducing some easier way to attack
20:36:15 ariard: ofc we can't know that there won't be any new things discovered, that doesn't mean we should do _nothing_
20:36:16 roasbeef: in the meanwhile, without package relay, if your feerate isn't enough to get into the mempool, an anchor won't help you
20:36:25 i think it's also diff for you given that rust-lightning isn't fully "deployed"
20:36:32 we have users in the wild we want to protect _now_
20:36:40 Shall we do some small PRs, or does Rusty want to introduce protocol tests v2 to the world?
20:36:50 t-bast: sure let's move on
20:36:59 t-bast: PRs first?
20:37:01 will let y'all know if anything changes w.r.t. our plans re deployment
20:37:13 test vectors pr, related :)
20:37:14 okay, let's ask a last question and then move on: if we do see sophisticated attackers in the coming months, don't we think they will chase the easiest scenarios to execute?
20:38:21 ariard: it's a good point, but I think for the sake of this meeting it's worth preparing something for next time to showcase how much worse it would be with 2 anchors, don't you agree?
20:38:28 roasbeef: I think that's an orthogonal point, about being deployed, you can't tell your users they're going to be safe even with anchor outputs
20:38:28 ariard: there're easier things they can do than some elaborate commitment pinning, did you see that "flood & steal" paper (or w/e it's called)
20:38:47 ariard: you can't claim 100% safety with _anything_, security isn't binary bruv
20:38:49 Then we'll have something we can comment on and debate, I feel it's a bit too hand-wavey right now
20:39:05 Yes, ariard this is not the hole the water is coming thru right now...
20:39:08 roasbeef: yes, but this is already mentioned in the LN paper, and I think you can just dumbly spam the mempool, no need to open channels
20:39:26 yeh even easier... and guess what... no one would be able to update their fees to try and thwart it!
20:39:43 rusty: lol never heard that b4
20:39:57 rusty: me neither xD
20:40:00 as again, I'm fine moving forward with a negotiated version of anchor outputs, I would personally not deploy remote anchor ones
20:40:11 yeah, best I could do at 6:10am.
20:40:26 i mean remote anchor is super useful if you assume some kind of ability to monitor the global mempool, no ariard?
20:40:40 which, like, sure, you can't, but you can do something with that
20:40:43 "some kind of ability to monitor the global mempool"
20:40:46 ariard are you ok with preparing a small gist/issue to summarize the cons of the double anchors?
20:41:13 and I think it's pretty clear by now that we can't "fix" the problem without, like, eltoo or something.
20:41:34 BlueMatt: you just triggered cdecker, that was the forbidden word
20:41:35 t-bast: yes, I'm already gathering all the issues around the fees, will publish on the ml or elsewhere once I get package relay
20:41:40 t-bast: lmaooo
20:42:06 ariard: great, then let's continue discussing that off-meeting and resume this discussion next time?
20:42:10 * cdecker rears his head :-)
20:42:11 ok... small PRs? ;)
20:42:17 #topic Static remotekey test vectors
20:42:20 t-bast: sure :)
20:42:21 he's arisen!
20:42:23 #link https://github.com/lightningnetwork/lightning-rfc/pull/758
20:42:28 * BlueMatt notes that these discussions almost certainly merit a presentation and video/voice call, not just a text chat.
20:43:14 #action t-bast to try to gather people for a voice call before next time to discuss these anchor issues more efficiently
20:43:43 on our side, araspitzu validated the test vectors, they're on eclair master so the PR looks good
20:44:16 joostjgr put some interesting comments, BlueMatt did you have time to review them?
20:44:24 t-bast: is that with the previously-agreed-upon changes (ie dropping HTLC-tx changes?)
20:45:10 comment about test vectors in general: while working on generating the anchor vectors, i switched to using the 'raw' test vector data (that is currently hidden as comments in the markdown).
20:45:16 BlueMatt: what is? the meeting to schedule? or the PR?
20:45:23 t-bast: the pr.
20:45:48 BlueMatt: this is static_remotekey, not anchors
20:46:00 oh, oops, sorry, didn't realize what we were talking about.
20:46:11 No worries, this is much simpler ;)
20:46:12 Erk, I didn't see this PR. I tried to replicate the PRs recently, and ran into the "missing secret key" problem (one of the remote secrets). But I'm happy to redo those later, I agree with the idea of making these static_remotekey since nobody should be without it these days.
20:46:59 ah, regarding joostjgr's questions, I dunno, the way we check these vectors is lower-level than actual enforcement of htlc limits and such.
20:47:32 what i do now is set up a channel between two nodes and let them go through the message exchanges to get the channel into the 'test point' state
20:47:47 so no use for vectors that describe an impossible state
20:47:54 i deleted them
20:48:09 right, we don't bother doing that for the test vectors, since they're just to make sure we generate txn correctly, not really anything to do with protocol enforcement
20:48:18 same for us, we test that at the transaction level so we don't mind the 0 fee / reserve
20:48:25 I think just y'all removing vectors if you can't write tests for them is fine, but no need to remove the vectors
20:48:32 but why describe a commitment tx that is really undefined?
20:48:37 maybe also a bit impl. specific
20:48:50 I don't think it's undefined? it's just a way to check that you can generate txn correctly
20:49:17 joostjgr: it's a fair point. Ideally these would be generated with all-known secrets and reasonable fee levels.
20:49:20 can add a comment that notes that black-box testing is unlikely to get into such a state.
20:49:27 it doesn't feel unreasonable to me to have the test vectors reflect some real situation
20:49:54 but on the other hand, it creates dependencies between bolts that aren't strictly necessary for these tests
20:50:05 i don't think they add anything if those commitments can never happen
20:50:34 i think it begs the question, what logic are we really testing then?
20:51:12 that couldn't also be described by a valid commitment
20:51:14 i found these unit tests useful when the channel create_commitment_tx() function was written and almost nothing else
20:51:52 having it with zero fee I don't really care about, we can remove it, or not, happy to flip a coin, but I don't see how these tests test anything more than simply the commitment tx creation function(s)
20:52:14 like, even if you black-box to get your state machine there, it's not like you've meaningfully tested your state machine.
20:52:41 I think if we use a sane (253 per-kw) feerate, the first test vector is impossible, since HTLC 0 may be trimmed. It's only 1000 sat. But I'd need to double-check
20:53:25 same for 'commitment tx with fee greater than funder amount'
20:54:06 btw, for anchor test vectors we need to come up with new 'interesting' configurations and fee rate tipping points
20:54:38 joostjgr: yeah, I remember grinding out those fee values by brute force for the test vectors...
20:54:57 i thought so... was thinking about doing the same for anchors :)
20:56:23 joostjgr: we used to be able to get into that corner case (fee greater than funder can afford), but I think with new requirements on push_msat leaving reserve, it's not true.
20:57:24 (FWIW, these test vectors were *not* useful for the protocol test python implementation last week, since those assume we know everyone's secrets)
20:58:27 One thing that comes to mind is that changes elsewhere in the protocol (for example adding an extra reserve or something) shouldn't force us to re-generate Bolt 3 test vectors because they're now invalid commitments, that would be really wasteful
20:58:34 right, they're really only useful to sanity check that you've gotten the commitment tx generation right, not any kind of exhaustive check on... anything
21:00:10 Yes, but that's a *lot*. Including HTLC trimming, OCN generation, key tweaking...
21:00:34 right, it def took me a few rounds to get it all right when I first wrote a commitment tx generator.
21:00:49 ... output ordering, feerate calculation...
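(For rusty's trimming remark at 20:52:41, a small sketch of the BOLT 3 check that decides whether an HTLC output appears on the commitment transaction at all. The 663/703 weights are the pre-anchor HTLC-timeout/HTLC-success weights from BOLT 3; double-check the exact constants against the spec before relying on them.)

    # BOLT 3 trimming check (pre-anchor weights): an HTLC below this threshold
    # gets no output on the commitment transaction.
    HTLC_TIMEOUT_WEIGHT = 663   # offered HTLCs
    HTLC_SUCCESS_WEIGHT = 703   # received HTLCs

    def is_trimmed(amount_msat: int, dust_limit_sat: int,
                   feerate_per_kw: int, offered: bool) -> bool:
        weight = HTLC_TIMEOUT_WEIGHT if offered else HTLC_SUCCESS_WEIGHT
        htlc_tx_fee_sat = feerate_per_kw * weight // 1000
        return amount_msat // 1000 < dust_limit_sat + htlc_tx_fee_sat

    # Example: a 600 sat offered HTLC, 546 sat dust limit, 253 sat/kw floor feerate:
    # 600 < 546 + 167, so it would be trimmed.
    print(is_trimmed(600_000, 546, 253, offered=True))  # True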
21:01:18 anyway, I also don't think this is, like, the most critical thing to harp on. if people care strongly, I can drop it. I can also add a comment noting that it's only useful for some test suites, or I can literally flip a coin.
21:01:40 I don't care much either TBH
21:01:51 If someone has a strong opinion, please say it
21:02:09 No, if the new vectors are correct let's update them. And if joostjgr generates new ones for anchors, he can fix these issues :)
21:02:41 joostjgr: does that sound ok for now?
21:02:53 i wasn't planning to fix static remote key vectors in the context of anchors
21:03:16 i would just remove them from the pr now
21:04:12 unless the anchor vectors are going to replace this completely
21:04:20 question again about how to structure the spec...
21:04:35 i'm all about distinct extension documents these days
21:04:37 joostjgr: we need them in place while it's still an option, but eventually I expect anchors will become compulsory and these can be deleted.
21:04:40 much more scalable and easier to read/analyze
21:04:50 vs "if statements" in the spec lol
21:05:05 personally I strongly favor dropping old sections... they're in git if you need them
21:05:10 no fan of 'if' statements either.
21:05:11 roasbeef: meaning you'd have anchors not change the current bolt 3 but rather be a different section/document with some duplication?
21:05:23 roasbeef: sure, but I keep hoping those ugly if statements will focus us on dropping old stuff :)
21:05:34 hehe
21:05:53 t-bast: yep
21:05:56 +1 for standalone documents
21:06:06 roasbeef: that was my feeling as well while reading the PR
21:06:09 t-bast: then we'd start to "freeze" the spec, and do everything in extension documents
21:06:18 this way someone can read a single doc and impl a new feature
21:06:27 vs needing to navigate conditionals in the spec to make sure they implemented the correct thing
21:06:33 and later we can revisit to replace an old document with a new one, and people can look in git for the old version
21:06:52 I agree that these "ifs" are hard to read and a bit error-prone
21:06:56 yeh then those could be stitched together to make a contiguous spec w/ some auto-gen
21:07:12 +1 to using git as intended. But a spec which lies because there's this other thing you need to read which replaced it is a horrible thing, too.
21:07:26 sounds like a project. after the anchor pr merge please
21:07:30 also gets across the feature that w/ all the feature bits n stuff we have, ppl can choose what they want to implement other than like big new payment types
21:07:38 joostjgr: I feel your pain xD
21:07:39 joostjgr: ;)
21:07:58 rusty: main body could link to other stuff also
21:08:25 there're also some really big changes like taproot or scriptless scripts stuff that would pretty much be a re-write of certain bolts
21:08:33 this meeting needs to be kept under control ;)
21:08:42 Sounds like something we'd have to do in a 3-day spec meeting IRL if we really want to make progress on it, let's defer for now?
21:09:17 roasbeef: yeah, I actually like the idea of the top-level simply being "see for this feature".
21:09:29 t-bast: yeah, it's a Big Project.
21:09:37 Since we're encouraging new implementations to directly use static_remotekey, I'm in favor of merging #758
21:09:39 yeh, this is where i'd like things to head, but it's a big-ish change that we'd need to do over time
21:10:09 t-bast: ack, but I haven't tested the vectors myself.
21:10:25 #action c-lightning to validate the test vectors
21:10:37 Now you'll have to :D
21:10:52 LOL
21:11:06 i think joost validated them? only saw that one comment that one pubkey might be tweaked?
21:11:09 on the LL side, do you strongly oppose this change? Or is it ok?
21:11:21 no, didn't validate them
21:11:32 i can comment without validation
21:11:59 because of some overlap with the anchor test vector generation
21:12:20 sgtm
21:12:24 oh gotcha, okay yeah i'm in favor assuming we match up. as matt said we can always drop/extend as we please for state-machine level tests as well
21:12:43 #action finalize comments on the PR and merge once verified by enough implementations
21:12:58 Let's do a last small PR (we're already slightly over 1h)
21:13:12 But it's mine and I'm chair so...
21:13:21 #topic cltv_expiry_delta recommendations
21:13:24 #link https://github.com/lightningnetwork/lightning-rfc/pull/785
21:14:03 There are way too many channels using very low cltv_expiry_delta on mainnet (6!!!)
21:14:35 And it's probably because that section is way too optimistic, so I suggest stressing a bit more that caution is required
21:15:09 Yeah, 6 is charity. With anchor outputs, it's closer to possible, but TBH there's not much end-user difference between 6 and 60.
21:15:19 if we increase the delay of cltv_expiry_delta, we should also increase the commitment broadcast delta
21:16:06 t-bast: maybe recommend something higher than 12, given all we've learned since the 12 recommendation was written?
21:16:15 in general, being more conservative with all these timelocks makes all attacks harder
21:16:25 ariard: but right now I haven't seen very wrong usage of to_self_delay, so it doesn't feel as important to fix
21:16:44 ariard: is there a paragraph in particular you want me to make more cautious?
21:16:57 BlueMatt: I'm all in favor of recommending even higher values
21:16:57 t-bast: not to_self_delay, I was mentioning the going-onchain-to-claim-incoming delay
21:17:07 does this one have a name in the spec?
21:17:11 it should be dynamic really
21:17:11 BlueMatt: I'd really like to recommend a number, but we still don't know. I think increasing the recommended minimum is Best Practice RN though.
21:17:16 ariard: oh right, and this one isn't even properly named in the spec
21:17:23 roasbeef: yes, we should scale them on mempool congestion
21:17:25 your cltv delta, changed based on what's going on in the chain, and also your past attempts to get any sort of txn confirmed
21:17:33 and channel_update increases or decreases
21:17:54 yes it should be dynamic, but short term it's important to at least recommend a bigger lower bound than we currently do :)
21:17:58 ariard: the spec calls that the deadline.
21:18:10 I can update the PR to expand a bit on that though
21:18:20 roasbeef: I don't think past on-chain closes are a good indicator of the future.
21:18:36 i mean your attempts to get any transaction in the chain
21:18:42 could be unrelated to LN
21:18:44 t-bast: maybe at least for now recommend 24 or 36 blocks?
21:18:52 I don't think anything is a good indicator of the future
21:18:54 also the time and other factors as well since stuff is pretty cyclic
21:19:01 rusty: htlc_onchain_deadline seems a good name? We should have a strict one, I've seen different names in every LN paper
21:19:02 Actually, it calls it G in this section.
The grace period: "a grace-period `G` blocks after HTLC timeout before giving up on an unresponsive peer and dropping to chain"
21:19:11 roasbeef: right, but we know that, like, it's doubly hard to get things confirmed at exactly 9am every weekday, which software likely won't be able to learn without a lot of work :p
21:19:20 (or whatever time it is that bitmex tries to screw everyone daily)
21:19:27 BlueMatt: perfect, I was afraid people would be reluctant, but I'll increase the values in my PR!
21:19:32 probably safer to recommend higher and only go lower if you understand the risks
21:19:40 lol oh yeah the bitmex txn bomb
21:20:19 #action t-bast recommend even higher value (yay!)
21:20:38 at the end of the day, even if you do dynamic, you may have a second-order game where people actually try to game default "autopilot" configurations
21:20:41 #action t-bast explain why a dynamic value makes sense
21:20:43 (so more towards 36)
21:21:03 36 sgtm
21:21:15 #action t-bast properly name the "deadline" and recommend a higher value than 7
21:21:37 alright, thanks for the feedback guys, I'll update the PR somewhat heavily
21:21:46 t-bast: it has a name, "grace period" in the spec, but happy to rename.
21:22:04 iirc lnd uses 40 atm so we are in that realm
21:22:12 rusty: a decent PR name needs underscores and backticks
21:22:28 not PR, spec name
21:23:30 isn't the grace period still for the downstream node (instead of the upstream one)?
21:23:39 I probably need to re-read it carefully
21:24:00 t-bast: G is how long you wait once the peer should have failed the HTLC before going onchain.
21:24:18 R == worst-case reorg depth.
21:24:27 S = delay before txn is mined.
21:24:35 rusty: right, so I think that what ariard and I are talking about is yet another parameter
21:25:11 rusty: what we're talking about is how long you would wait for an upstream peer to acknowledge and remove a fulfilled HTLC before going on-chain (for the upstream channel, not the downstream one)
21:25:23 ... we use G for both.
21:25:30 oh gotcha
21:25:46 then 1 or 2 blocks is clearly not what I'd recommend!
21:25:47 "B now needs to fulfill the incoming A->B HTLC, but A is unresponsive: B waits `G` more
21:25:47 blocks before giving up waiting for A. A or B commits to the blockchain."
21:25:58 I set it to 24 by default on eclair
21:26:22 the deadline for received HTLCs this node has fulfilled?
21:26:25 *24* blocks.... wow, that's a long time for your peer to be offline!
21:26:38 it's not only about being offline
21:27:03 it's the time by which you're confident your HTLC-success will be confirmed
21:27:21 otherwise you'll enter a race with the upstream's HTLC-timeout tx
21:27:46 (note: it's not your HTLC-success in that case, it's your claim-preimage tx)
21:27:52 t-bast: that's a derived value, though.
21:28:03 and you can RBF this one as you want
21:28:19 yes that's true, but we currently don't have the logic to automatically RBF
21:28:36 we'll have it soon, but for now 24 makes me feel safe-ish
21:28:52 t-bast: I think the "`cltv_expiry_delta` Selection" section would benefit from a close re-reading.
21:29:12 rusty: great, I'll spend some time on it this week and update my PR
21:29:19 yes, but the bumping logic aka "when next block I'm going to bump this" can be the same between RBF/CPFP
21:29:25 But I agree the numbers are too low.
21:29:37 OK, 1 minute to hard stop for me.
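(A rough illustration of the deadline t-bast and ariard discuss at 21:25-21:27: once you know the preimage of an incoming HTLC and the upstream peer won't remove it, you must force-close early enough for your preimage claim to confirm before the HTLC's cltv_expiry. The 24-block margin mirrors the eclair default mentioned above; the names are made up for this sketch.)

    # Decide whether to force-close to claim a fulfilled incoming HTLC on-chain.
    def must_force_close(current_height: int, htlc_cltv_expiry: int,
                         fulfillment_safety_blocks: int = 24) -> bool:
        # Past this deadline there may not be enough blocks left for the
        # preimage claim to confirm before the timeout path becomes spendable.
        deadline = htlc_cltv_expiry - fulfillment_safety_blocks
        return current_height >= deadline

    # Example: HTLC expires at height 640000 and we hold the preimage.
    print(must_force_close(639975, 640000))  # False: still 25 blocks of margin
    print(must_force_close(639976, 640000))  # True: inside the 24-block window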
21:29:47 Hey, since it is getting late and the meeting is almost over, I would like to ask if you could have a look and give me feedback for https://github.com/lightningnetwork/lightning-rfc/pull/780 soonish? It is not ready and I made an early PR on purpose so that I can continue the work based upon the feedback. I have been here for the last 3 meetings and
21:29:47 it has not been discussed yet (neither here nor on github). I would like to continue working on / improving it soon. I won't be here for the next spec meeting though.. thank you
21:29:50 thank you all for the feedback! let's end now, it's been a great meeting ;)
21:30:37 * roasbeef g2g
21:30:39 renepick1: honestly I'd love to have time to review it, but I don't think I'll be able to in the short term...
21:30:58 #endmeeting