19:06:01 #startmeeting
19:06:01 Meeting started Mon Mar 30 19:06:01 2020 UTC. The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:06:01 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:06:10 First of all, the agenda for today
19:06:13 #link https://github.com/lightningnetwork/lightning-rfc/issues/757
19:06:29 #topic channel range clarification
19:06:32 #link https://github.com/lightningnetwork/lightning-rfc/pull/737
19:07:27 The only lasting issue is whether to allow a block to spill into multiple messages
19:07:44 rationale being that theoretically, without compression, a block full of channel opens couldn't fit in a single message
19:08:08 From previous discussions, CL and eclair thought it was too much of an edge case to worry about, LL wanted to investigate a bit more
19:09:00 wpaulino had a look at the PR but didn't ACK; the way I understood it he's ok with a block fully contained in a single message, but I'd like an ACK from him or someone else on the LL team?
19:09:50 In theory you could return a random sampling in that case and mark your response `full_information` = false. (c-lightning just fails to respond in this case tho)
19:10:38 feels like either a txindex in the response, or an explicit "ok i'm done" would resolve this nicely
19:10:44 true, but it means that when full_information = false we can never know when you're done sending replies
19:11:15 i don't think the 'complete' field, as is, is even used properly, nor is there any support for signalling a node has only part of some of the graph
19:11:17 I agree that an explicit indication that everything is sent would be much easier -> why not add a TLV?
19:11:22 t-bast: no, you're done still. You just didn't send it at all.
19:11:36 t-bast: yeh in the past, we erroneously used 'complete' as that
19:11:48 but it made everything a lot simpler on our end
19:11:50 rusty: ah ok, so senders aren't allowed to send one block in multiple replies - with that assumption it works
19:12:21 t-bast: ack. We could allow it except for the final block, but that seems like we're not really helping.
19:12:27 it seems to me that the issue of a block full of channel opens can safely be ignored; are you really not using compression?
19:13:04 did we figure out a rough number for how much compression helps when sending timestamps and checksums?
19:13:24 I thought you were checking that :D
19:13:31 no?
19:13:44 i don't recall that :P
19:13:51 I'd shared rough numbers a month ago, I'll try to dig them up
19:14:08 IIRC rusty or christian had shared some calculations too
19:15:28 I have historical gossip data, no info on compressibility though
19:16:02 ideally the numbers would be about worst-case compression too
19:16:40 we'd need more than 3.5k channel opens in a single block and to request both timestamps and checksums without compression to be in trouble
19:17:14 Even assuming the checksums and timestamps don't compress, the short_channel_ids are an ordered sequence (with the same block num). That's basically 3 of 8 bytes for free, so it's 5 bytes per reply, 64k == 13,000 channels. We can theoretically open 24k per block though.
19:17:25 Nice, so not fitting into a message is basically impossible for Bitcoin
19:17:53 * rusty repeats his promise to fix this as soon as it becomes a problem :)
19:18:10 I'm honestly in favor of not worrying about this now
19:18:24 ok, we'll leave it up to our future selves then
19:18:49 alright so we're good to merge 737?
19:18:59 isn't it 5 + 4*4 bytes per reply?
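[A rough sketch of the capacity math being hashed out here, for reference. It assumes the 65535-byte lightning message limit and the 24-byte per-entry figure the discussion converges on just below; illustrative only:]

    # A short_channel_id is block_height (3 bytes) | tx_index (3 bytes) | output (2 bytes),
    # so ids from the same block share their 3-byte height prefix and compress
    # down to roughly 5 effective bytes each.
    def scid(block_height: int, tx_index: int, output_index: int) -> int:
        return (block_height << 40) | (tx_index << 16) | output_index

    a, b = scid(630000, 1, 0), scid(630000, 1500, 0)
    print(hex(a)[:7] == hex(b)[:7])  # True: shared block-height prefix

    MSG_LIMIT = 65535
    print(MSG_LIMIT // 5)         # 13107 -> rusty's ~13k, prefix-compressed scids only
    print(MSG_LIMIT // (8 + 16))  # 2730  -> ~3k once 2 timestamps + 2 checksums are added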
19:20:09 which produces figures more like t-bast's 3.5k
19:20:48 i think rusty isn't including the optional checksum and timestamp?
19:21:52 bitconner: you're right, I forgot there are two csums and timestamps per entry. So 24 bytes uncompressed. So we're closer indeed to 3.5k
19:22:11 yeah my calculation was for the full thing, which yields around ~3k channels per message
19:22:26 timestamps are likely to compress quite well, so probably more than that
19:22:31 * rusty really needs to get back to set reconciliation which doesn't have this problem (I promise all new problems!)
19:22:41 hehe
19:22:44 the timestamps aren't ordered tho, so not necessarily
19:22:55 Yeah, timestamps have to be in a 2 week range though.
19:23:08 * cdecker is reminded of the 12 standards XKCD when it comes to gossip protocols xD
19:23:12 but there def will be similar prefixes
19:24:08 exactly, so we'll be at around 3 to 3.5k channels per message; far from something that should worry us IMHO
19:24:09 i agree the chances of it being an issue are pretty slim. in general we should longer term revisit the gossip with invs, better node ann querying, etc
19:25:10 I think if we start seeing a rate of 3k channel opens every 10 minutes, we'll have a lot more problems than the gossip :D
19:25:43 So shall we move on and merge this?
19:25:46 Well gossip might still be the issue, but the message size limitation isn't likely to be the biggest issue :-)
19:25:52 ACK
19:26:22 sgtm
19:26:34 sgtm as well
19:26:41 #action merge #737
19:26:56 #topic Stuck channels
19:26:59 #link https://github.com/lightningnetwork/lightning-rfc/pull/740
19:27:28 We're still going with the option of the additional reserve for lack of a better short-term solution
19:27:49 The main issue with this PR honestly is just that wording this is a nightmare
19:28:40 Yes, we've implemented this for now.
19:28:56 It spun off a lot of interesting discussions, but I think there's overall agreement on the mechanism: reserve some buffer for a future extra HTLC, with a feerate increase of x2 (which is just a recommendation)
19:30:43 yeah it seems to blur 2 problems: preventing the initiator from getting the channel stuck by offering an HTLC, and safeguarding against fee spikes in some time N to keep incoming capacity
19:31:18 the safeguard against fee spikes is really for that specific problem, to avoid getting stuck
19:31:30 because that's how you currently get into that situation
19:32:12 t-bast: my only complaint is that since the non-fee payer MUST NOT send an HTLC which causes the fee-payer to dig into reserves, the alternate solution (letting just one more HTLC through in this case) is now technically banned.
19:33:57 rusty: that's true. We previously didn't have any requirement on the non-fee payer, do you want to keep it at that?
19:34:14 Or turn the third paragraph into a SHOULD NOT?
19:34:38 seems like things are pretty active on the PR in the spec repo, anything critical that needs to be discussed here?
19:35:15 t-bast: SHOULD NOT. Everyone implemented a limit despite not being required to, which caused this problem.
19:35:24 on the implementation side, we'll need to make sure all our interpretations of the spec are actually compatible
19:35:38 I can see one impl having slightly diff behavior which causes a de-sync or force close
19:36:11 roasbeef: really? I don't think there's a risk there, you're just implementing behavior to restrict yourself (when funder) from sending too much
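[For concreteness, a loose sketch of that sender-side check from #740 — the x2 feerate and the extra HTLC slot are the recommendation mentioned above; the weights are the pre-anchor commitment weights from BOLT 3, everything else is illustrative:]

    # BOLT 3: commitment tx weight = 724 + 172 per untrimmed HTLC output
    COMMIT_WEIGHT_BASE = 724
    HTLC_OUTPUT_WEIGHT = 172

    def commit_fee_sat(feerate_per_kw: int, num_htlcs: int) -> int:
        weight = COMMIT_WEIGHT_BASE + num_htlcs * HTLC_OUTPUT_WEIGHT
        return feerate_per_kw * weight // 1000

    def funder_can_afford(available_sat: int, feerate_per_kw: int, num_htlcs: int) -> bool:
        # the funder keeps a buffer: the commitment fee at twice the current
        # feerate, assuming one additional (future) HTLC output
        buffer = commit_fee_sat(2 * feerate_per_kw, num_htlcs + 1)
        return available_sat >= buffer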
19:36:38 this PR doesn't impose any requirement on your peer, so I think it should be safe
19:36:41 i also don't think there is a risk of de-sync
19:36:54 roasbeef: we haven't added any receive-side restrictions, just what you should send. Still possible of course, but not a new problem.
19:37:00 rusty: alright, I'll make that a SHOULD NOT
19:37:12 * rusty reads https://lsat.tech/macaroons... nice job LL!
19:38:18 I agree that we're able to make progress on github (thanks to the reviewers!) so let's skip to the next topic?
19:38:47 gotcha, only sender stuff makes sense, still seems like whack-a-mole in the end, but if it helps to resolve cases in the wild today then it may be worthwhile (dunno what impl of it looks like yet)
19:38:51 t-bast: sgtm
19:39:25 #action keep iterating on wording, fundee requirement SHOULD instead of MUST
19:39:51 #topic TLV streams everywhere
19:39:54 #link https://github.com/lightningnetwork/lightning-rfc/pull/754
19:40:05 This one just got an ACK from roasbeef, so I feel it's going to be quick :)
19:40:27 approved it, we need to shuffle some bytes on disk to make it work in our impl, but it's sound
19:40:27 I just had one thought about that recently though for which I'd like your feedback
19:40:30 So AFAICT the obvious step is to make option_upfront_shutdown_script a compulsory option.
19:41:06 For the specific case of update_add_htlc -> allowing a TLV extension means that there won't be a single fixed size for the packets going through the network
19:41:15 That looks like a privacy issue
19:41:46 It will be easier for network layer attackers to correlate traffic if update_add_htlc messages differ in size depending on the features they use...
19:42:00 t-bast: a minor one though; in practice update_add_htlc is already distinctively sized from other messages, and there aren't enough at once to confuse.
19:42:17 yes but at least all update_add_htlcs look the same to a network observer...
19:42:37 Right, TLV extensions are also only defined on a peer level, not across multiple hops
19:43:09 So a distinctively sized update_add_htlc doesn't get propagated (unless we explicitly add such a feature later)
19:43:28 Probably best to keep that one in mind, but not worry about it for now
19:43:39 cdecker: good point, so this critique only applies when we use it, which is always true.
19:43:39 mhmm, and if you're modifying update_add_htlc, you need further changes in the layer above (routing feature bits, how to handle it, onion payload implications, etc)
19:43:49 for path blinding we are adding an ephemeral point in the TLV extension of update_add_htlc once you're inside the blinded path :)
19:44:20 t-bast: not up to date on this stuff, but why not in the onion instead?
19:44:22 Yep, in that case it'd apply
19:44:31 it could be the case tho that multiple hops need to add some tlv-extension in order for the payment to work
19:44:38 roasbeef: you need the ss to decrypt the onion itself.
19:44:50 if there's any variable length data it would be pretty easy to track
19:44:54 (tells you what the key tweak is)
19:45:03 roasbeef: because it's kind of a reverse onion, I don't think we can put it in the onion...
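[To illustrate the size-correlation concern: a toy sketch, not the BOLT 1 wire format — the fixed field sizes for update_add_htlc are from BOLT 2, the TLV encoding and the blinding record are simplified/hypothetical:]

    # update_add_htlc fixed fields: channel_id(32) + id(8) + amount_msat(8)
    # + payment_hash(32) + cltv_expiry(4) + onion_routing_packet(1366)
    FIXED_BODY = 32 + 8 + 8 + 32 + 4 + 1366  # = 1450 bytes

    def tlv_record(tlv_type: int, value: bytes) -> bytes:
        # toy encoding: 1-byte type, 1-byte length (real TLVs use bigsize)
        return bytes([tlv_type, len(value)]) + value

    blinding_point = b"\x02" + bytes(32)  # a 33-byte compressed pubkey
    plain   = FIXED_BODY
    blinded = FIXED_BODY + len(tlv_record(0, blinding_point))
    print(plain, blinded)  # 1450 vs 1485: trivially distinguishable on the wire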
19:45:05 bitconner: that's true, but not specific to this proposal
19:45:34 yes it's mostly an issue for later, when we actually start adding TLVs at the end of update_add_htlc
19:45:35 _any_ extension mechanism would result in distinctive message sizes, wouldn't it?
19:45:41 We can theoretically mask this stuff with pings, but I've shied away from that because it's easy to get the sizes wrong and not actually obscure anything.
19:46:10 cdecker, yes it does. particularly for add_* messages
19:46:25 sorry, update_* messages
19:46:34 agreed we can address it later tho
19:46:51 If they are getting propagated along with the HTLC itself, yes, otherwise I don't see a problem
19:47:15 sgtm, I wanted to mention it so we keep it in mind when building on top of these extensions for update_add_htlc, but we can probably move on for now
19:47:33 Let's keep it in mind for when we have proposals that'd extend the update_* messages along (parts of) the path
19:47:44 Looks like I got two approvals, so we'll get that one merged
19:47:51 #action merge #754
19:48:05 #topic Wumbo advisory
19:48:07 #link https://github.com/lightningnetwork/lightning-rfc/pull/746
19:48:14 Very quickly, just want your opinion on
19:48:26 * whether this advisory should be in the spec or not
19:48:34 * if yes, is the current wording ok
19:48:47 I think it's small enough to be useful for people
19:48:56 Without adding too much bloat
19:49:43 Yes, but if we add every small tidbit to the spec it'll grow indefinitely. A separate best practices document would be better suited for these non-essential things
19:49:44 i think it doesn't hurt, many aren't even aware of the link between the csv value and security
19:49:45 we may need some annex for anchor_output on how to implement bump coin pools and effective aggregating algos, why not a security annex?
19:50:07 idk if scaling csv and confs is non-essential, would you accept a 100 BTC channel with 1 conf and a csv value of 2 blocks?
19:50:24 we can keep it minimal, but ppl should be aware of the considerations
19:50:37 No, I would apply my usual timeouts which are good for that
19:50:40 i think lnd is the only implementation that scales both the conf and csv value according to the size of the channel as is (I could be wrong though)
19:51:08 I agree with roasbeef; I find these two lines useful
19:51:41 yeh two lines minimally, I don't think this _hurts_ anyone, just more context for security considerations
19:51:44 t-bast: commented. Advice is bad.
19:51:51 Ok, my objection is by no means a very strong opinion, but I worry about the readability and size of the spec (both are abysmal as it is...)
19:51:58 we only scale confirmations in eclair
19:52:09 conf is good, scaling csv is not logical.
19:52:15 how is scaling csv not logical?
19:52:19 (I commented on the PR just now)
19:52:33 https://github.com/lightningnetwork/lightning-rfc/pull/746#issuecomment-606209902
19:52:36 imo if you have more funds in the channel, you want a longer period to act just in case stuff breaks down
19:52:45 especially if you don't have a tower...
19:53:06 roasbeef: and it's more painful to be without funds for that time. In theory these two things cancel out.
19:53:22 err I don't think so, loss of funds >>>> time value loss
19:53:32 it's more painful to be without the funds permanently imo
19:53:45 if it doesn't matter, then why don't we just all set csv to 1 everywhere?
19:53:53 roasbeef, bitconner: sure. But does it scale differently with the amount?
19:53:54 1 block, better act quickly!
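[A sketch of the scaling heuristic under debate — the shape of what an implementation might do, with made-up constants (lnd's and eclair's actual formulas differ):]

    BTC = 100_000_000  # satoshis

    def scaled_params(capacity_sat: int) -> tuple[int, int]:
        # funding confirmations: grow linearly with capacity, capped at 6
        conf = min(6, max(1, capacity_sat * 6 // BTC))
        # to_self_delay: between ~1 day and ~2 weeks of blocks, linear in capacity
        csv = min(2016, max(144, capacity_sat * 2016 // (10 * BTC)))
        return conf, csv

    print(scaled_params(100 * BTC))  # (6, 2016): big channel, conservative
    print(scaled_params(BTC // 50))  # (1, 144): 0.02 BTC, quick and cheap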
19:54:09 maybe then update the wording to say that implementations "MAY" provide means of scaling either of these two values (if they see fit)?
19:54:34 i'd be okay with that, both have cases where you may consider adjusting
19:55:24 The conf thing is definitely worth noting. The other one is not rational (but that doesn't mean it's not worth considering).
19:55:45 err you haven't really provided a convincing argument that it isn't "rational"
19:56:07 but let's finish hashing it out on the spec, seems we're down to just wording
19:56:28 minimally, it should have something like: "you should consider increasing these values as they're security parameters"
19:56:43 scaling/increasing with chan size
19:57:00 roasbeef: we've had this debate before, though, seems weird to rehash.
19:57:14 you definitely want to scale up csv with the channel amount, an attacker may spam the mempool or delay your justice transaction propagation
19:57:29 ariard: yep, it's a *security* parameter
19:57:58 t-bast: what's up next? we can continue this in the spec
19:58:07 agreed, let's continue that on the PR
19:58:10 ariard: wah? But you can spend more on the penalty tx if it's larger.
19:58:19 You should select parameters that you feel comfortable with independently from the amount, either you can react or you can't, the potential loss only comes into play once the risk of loss is high enough
19:58:46 rusty: a high-value channel may motivate an infrastructure attacker to just eclipse your full-node to drop your justice tx
19:58:53 amount totally matters...
19:58:58 #action tweak the wording to give more leeway to implementers in choosing what they want to make configurable
19:59:02 a higher csv lets you do manual broadcast or any emergency broadcast stuff
19:59:21 ariard: yep, or give your tower more time to work, or w/e other backup measures you may have
19:59:46 Let's move on to anchor outputs?
20:00:17 so I sent an email out before the meeting
20:00:26 I need to drop off in a couple of minutes
20:00:28 #topic Anchor Outputs
20:00:33 discussing stuff on IRC is hard, so i think we'll have more progress with that ML thread
20:00:43 Ok so let's keep that for the mailing list then
20:00:46 ariard: that's plausible.
20:00:53 in the email i talk about our impl and plans, and then go thru some of the current concerns w/ two anchors
20:00:54 #topic Rendezvous? cdecker? :)
20:00:58 I added another comment on gh about 10 minutes ago pointing out that two anchors isn't just a waste of fees, it's actually insecure
20:01:19 We can talk about rendezvous, sure
20:01:33 replied to that. to_remote is encumbered by 1 csv. so i was wondering, how are there two spend paths for remote?
20:01:43 ok, continue on the pr
20:01:58 thanks, let's continue anchors on the PR and mailing list
20:02:05 Let's have an update on rendezvous from cdecker
20:02:31 joostjgr: currently reviewing the PR but have you considered an upgrade mechanism for already deployed channels?
20:02:33 * cdecker digs up the branch for the proposal
20:02:45 ariard: yep, I mention that in the email
20:02:58 should have a draft of the scheme we've come up with posted to the ML in the next week or two
20:03:14 roasbeef: cool, given there are no new parameters to negotiate, that shouldn't be that hard
20:03:19 it's pretty important imo, given that most chans still aren't using static to_remote keys and ppl hate closing channels lol
20:03:27 #link https://github.com/lightningnetwork/lightning-rfc/blob/rendez-vous/proposals/0001-rendez-vous.md
20:03:37 I agree, let's avoid a useless chain write
20:03:51 So we have a working construction for rendez-vous onions in that proposal
20:03:52 ariard: yeh it's basically some tlv extensions to make the commitment type explicit, which lets you create new commitments with a different type than the earlier ones
20:04:13 It works by creating the onion in such a way that we can cut out the middle part and regenerate that at the RV node
20:04:45 roasbeef: so a generic mechanism to cover commitment tx format changes in the future, like the taproot one, or when we drop the CSV delay after mempool improvements?
20:04:46 While it is a working proposal it has a number of downsides
20:05:16 First and foremost it is not reusable, since the onion would always look the same in the latter part
20:05:28 The path encoded in the onion is fixed
20:05:39 And the size of the compressed onion is still considerable
20:06:00 ariard: mhmm, also with this, all we need to do for taproot/schnorr is update the multi-sig script (just the single key), and then we can defer modifying the commitment until later as there's a pretty large design space for that
20:06:06 So for these reasons we have mostly stashed that proposal for the time being
20:06:23 cdecker: this is distinct from the "blinded hop hint" stuff right?
20:06:30 Interesting, I thought you were still making progress
20:07:00 roasbeef: Yep, this is the classical rendez-vous construction for sphinx with a couple of extra twists due to the increased per-hop payloads
20:07:02 roasbeef: yes the blinded paths are a different attempt at better privacy
20:07:27 rusty has mostly been concentrating on the blinded paths idea for the offers
20:07:50 and are you investigating something else?
20:07:51 I think we could end up using my rendez-vous proposal when combined with trampoline though
20:08:14 yes I believe with trampoline, your rendezvous proposal isn't very expensive (it's clearly acceptable)
20:08:50 We could have an onion that contains just a sequence of (non-consecutive) trampoline nodes, that'd then run the RV protocol
20:09:03 That way we can get recipient anonymity even in trampoline
20:09:56 what do you mean by "run the RV protocol"?
20:10:03 Don't get me wrong, the rendez-vous construction is a cool tool to have in our toolbelt, it's just not that well-suited for what we intended
20:11:02 I mean one of the trampolines would get the partial onion as payload (when he's the point where the two parts were stitched together) and have to recover the original partial onion
20:11:12 Otherwise it'd run the normal trampoline protocol
20:11:36 ok that sounds like what I had in mind
20:11:46 Great ^^
20:12:17 do you want me to prepare something for the next meeting to discuss trampoline then? and integrate that rendezvous option into it?
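[A very loose sketch of the trampoline/rendez-vous combination described above — every name here is hypothetical and the payload layout is invented; this only shows the shape of the idea:]

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TrampolinePayload:
        amt_to_forward_msat: int
        outgoing_cltv: int
        next_node_id: Optional[bytes] = None   # normal trampoline hop
        partial_onion: Optional[bytes] = None  # present only at the RV trampoline

    def process(payload: TrampolinePayload) -> str:
        if payload.partial_onion is not None:
            # we are the stitching point: regenerate the cut-out middle,
            # recover the recipient-built onion and forward it as usual
            return "expand partial onion and forward"
        # otherwise run the normal trampoline protocol: find a route to
        # the next trampoline ourselves
        return "route to " + payload.next_node_id.hex()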
20:12:34 So yeah, I think the TL;DR for rendez-vous is: "We did stuff, it's cool, it works, it doesn't fit our intended application" :-)
20:12:51 That'd be great, I'd love to get trampoline moving again
20:13:26 :100:
20:13:29 Something I'd like to ask the whole room: would you like me to move the trampoline proposal to a format closer to what cdecker did for rendezvous (https://github.com/lightningnetwork/lightning-rfc/blob/rendez-vous/proposals/0001-rendez-vous.md)?
20:13:47 I think that may be more helpful to review the high-level design before the nitpicks of the message fields
20:14:09 And it would probably be useful to keep, to help newcomers ramp up more easily
20:15:12 It'd certainly help me get up to speed before the meetings, and for people that just want a quick overview of a specific feature / change
20:16:18 I just can't get used to the hyphen in rendez-vous :)
20:16:38 Honestly I'm not sure which one is correct either xD
20:16:58 rendez-vous is the valid french word :)
20:17:04 t-bast: +1 for a high-level doc, I fear people bringing up the same points on privacy tradeoffs again and again in the PR
20:17:17 rownde-voo?
20:17:31 t-bast: and maybe we should do the same for blinded paths...
20:17:31 the spec is written in english tho ;)
20:17:34 rownde-voo sounds like a cool new thing
20:17:43 lol, that's how I pronounce it ;)
20:17:43 yee haw
20:17:44 Ok, the namer in chief has spoken, rowndee-voo it is ^^
20:17:46 kek
20:17:50 rusty: blinded paths are already in proposal format :)
20:17:52 This is the dreaded Eltoo disease, I think!
20:17:54 lolol
20:17:54 sounds like some voodoo crypto
20:18:12 Right, rusty just for you we can rename it el-too :-)
20:18:27 el-too sounds spanish now
20:18:35 naming is def my favorite part of this process :)
20:18:44 Anyway, gotta drop off, see you around everyone ^^
20:18:52 "uhh, lemmie get uhh #1, but rownde-voo, thank you I know y'all are closed n stuff now due to The Rona"
20:18:54 great, I'll move trampoline to a format closer to the "proposals" then before next meeting, thanks all!
20:19:12 i'll also work on the hornet summary before next meeting
20:19:20 bitconner: neat, I'd love to read that
20:19:31 #action bitconner to work on a hornet summary
20:19:32 was a little busy doing lockdown things lol
20:19:41 #action t-bast to move trampoline to a proposals format
20:19:56 ariard: BTW, I conceded your point about eclipse attacks on the issue, for posterity.
20:19:58 i'll shoot to send it out to the ML
20:20:26 bitconner: yeah so many things to do around the house it's crazy... did you know rice bags may contain up to 31 fewer grains for the same weight?
20:20:53 t-bast: lol compared to like a rice _box_?
20:21:11 t-bast: we should interop test at some point soon for blinded paths. My implementation is pretty hacky, but should be sufficient to test.
20:21:12 roasbeef: (it's a joke about being so bored one goes and counts rice grains :p)
20:21:14 boxes are bad for the planet, think about the penguins man
20:21:28 lolol
20:21:31 rusty: thanks, currently putting together a paper with gleb around eclipse+LN, to shed a bit more light on this
20:21:36 and motivate some groundwork in core
20:21:42 rusty: then I should get started on actually implementing something xD
20:23:01 t-bast: I still don't have a bolt11 format, so it's manually crafted paths for testing. And my infra really doesn't like not having a short_channel_id in every hop, so there's a dummy one at the moment (which gets overridden by enctlv). And also I don't handle the new error code correctly.
20:23:23 Sounds great ariard, make sure to send a copy to the ML once it's done :-)
20:23:31 ariard: nice!
20:23:45 rusty: heh sounds like I shouldn't be in too much of a rush then, perfect ;)
20:24:11 alright, time to go back to counting lentils, thanks everyone for the discussions and see you around!
20:24:17 t-bast: no, but I was happy to get the pieces basically working.
20:24:21 #endmeeting