19:05:29 #startmeeting
19:05:29 Meeting started Mon Mar 2 19:05:29 2020 UTC. The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:05:29 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:05:40 Let's start with a simple PR to warm up
19:05:50 #topic https://github.com/lightningnetwork/lightning-rfc/pull/736
19:05:54 #link https://github.com/lightningnetwork/lightning-rfc/pull/736
19:06:13 This is a very simple PR to add more test vectors to Bolt 11 and clarify pico amounts
19:06:41 It's mostly adding negative test vectors (which we lacked before)
19:07:14 I guess this also closes #699 (Add bolt11 test vector with amount in `p` units)
19:07:29 There is one pending comment on whether to be strict about pico amounts ending with 0 or let people round them
19:08:24 cdecker: I think #699 adds another test vector, we should probably merge the two PRs
19:08:49 have these been implemented by more than one impl?
19:08:50 t-bast: I think we add "SHOULD reject" to the reader side of the spec. It's straightforward.
19:09:15 bitconner: c-lightning and eclair, IIUC.
19:09:35 rusty: sounds good to me to add a "SHOULD reject"
19:09:52 bitconner: I think no implementation was failing this test, just some random JS library, which caused sword_smith to file the issue
19:10:06 yes those test vectors have been implemented in eclair and CL (found a few places in eclair where we weren't spec compliant)
19:10:37 it's a good exercise to add them to your implementation, you may have a few surprises :D
19:10:41 rusty: nice, i'm all for negative tests. i can make a pr to lnd adding these today
19:10:46 t-bast: I take it back, it's already there.
19:11:27 rusty: whoops true, then it's settled
19:11:48 I guess we can decide on the correctness independently of the implementations since this is just a place where we were underspecified
19:12:30 * cdecker ACKs on behalf of c-lightning :-)
19:13:15 ACK for me too, ready to merge
19:13:19 OK, I say we apply 736. Oh, and this superseded 699, FWIW.
19:13:33 no blocker for RL (still a bit behind on invoices stuff...)
19:13:37 rusty: are you sure 699 didn't add another test vector that we lack?
19:14:01 Ok, so closing #699 since this covers the cases, correct?
19:14:23 #699 adds a working one, not a counterexample
19:14:40 But 699 adds a test vector for the pico amount
19:14:47 which we didn't have before
19:15:08 But it also doesn't cover new corner cases, does it?
19:16:00 * rusty checks... yes, t-bast is right, there's no positive pico test. Ack 699 too.
19:16:17 Yeah I believe we should apply both (positive and negative tests)
19:16:40 ok
19:17:09 #action merge #699 and #736
19:17:38 if you disagree with the action items, please say so during the meeting so we can clear up miscommunication
19:18:06 #topic Clarify gossip messages
19:18:10 #link https://github.com/lightningnetwork/lightning-rfc/pull/737
19:18:35 There has been a lot of back and forth between implementations over channel range messages, the spec was giving a bit too much leeway there
19:18:48 This is an attempt at clarifying the situation and making the spec stricter
19:19:20 conner did you have time to look at this? I don't remember if it was you or wpaulino who worked on that recently for lnd?
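To make the pico-amount rule from the Bolt 11 discussion above concrete, here is a minimal reader-side check, assuming the usual multiplier semantics (1 pico-bitcoin = 0.1 millisatoshi, so a `p` amount that is not a multiple of 10 cannot be represented and should be rejected). `validate_bolt11_amount` is a hypothetical helper written for this summary, not code from the spec or any implementation:

```python
# Sketch of a strict reader-side amount check for BOLT 11 (hypothetical helper).
# The human-readable part encodes an integer amount followed by an optional
# multiplier: m (milli), u (micro), n (nano), p (pico) bitcoin.
MULTIPLIER_MSAT = {
    "m": 10**8,  # 1 milli-bitcoin = 10^8 msat
    "u": 10**5,  # 1 micro-bitcoin = 10^5 msat
    "n": 10**2,  # 1 nano-bitcoin  = 10^2 msat
    # pico-bitcoin is 0.1 msat and is handled separately below
}

def validate_bolt11_amount(amount: int, multiplier: str) -> int:
    """Return the amount in millisatoshi, or raise if the invoice should be rejected."""
    if multiplier == "p":
        # 1 pico-bitcoin = 0.1 msat, so the last decimal digit MUST be 0
        if amount % 10 != 0:
            raise ValueError("pico-bitcoin amount is not a multiple of 10: reject")
        return amount // 10
    if multiplier not in MULTIPLIER_MSAT:
        raise ValueError(f"unknown multiplier {multiplier!r}: reject")
    return amount * MULTIPLIER_MSAT[multiplier]
```

For example, `validate_bolt11_amount(2500, "u")` returns 250,000,000 msat, while a `p` amount ending in a non-zero digit raises under the strict reading discussed above.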
19:19:27 stuff on the PR itself should supersede irc (approvals), as it's a much terser medium of comms
19:19:31 it was wpaulino on our end
19:19:52 we also impl the non-overlapping req as is rn
19:20:57 we'll accept a few diff versions (to support older lnd nodes), but now send things in a complete (entire range) and non-overlapping manner
19:21:10 t-bast: your example in there is invalid; we assume all channels in a single block are in the same response
19:21:11 roasbeef: do you know if lnd would accept reading the version defined in this PR?
19:21:52 t-bast: we do the pr behaviour currently, so I think it has to be ok?
19:22:16 rusty: I think we need to make it explicit then, eclair splits between multiple responses. It's technically possible that a single response *cannot* hold all the channels that were created in a single block (I need to dig up the calculations for that)
19:23:16 IIRC a single bitcoin block may be able to fit more than 3000 channel opens, which overflows the 65kB lightning message limit
19:23:29 Do we care or do we consider this an extremely unlikely event and ignore it?
19:23:33 why would the responder set `first_block_num` to something less than what's in the request?
19:23:41 t-bast: 64k / 8 == 8k? And these compress really really well...
19:24:13 rusty: but you need to spec for the uncompressed case too?
19:24:15 bitconner: if they want to reuse pre-computed responses for example
19:24:16 bitconner: an implementation is allowed to keep canned responses, so you might keep a set of scids for blocks 1000-1999, etc.
19:24:28 rusty: and we'd like the same splitting behavior regardless of compressed/uncompressed, right?
19:24:32 t-bast: good point re max chans for uncompressed
19:25:01 t-bast: sure, but we can't fit 8k txs in a block anyway?
19:25:12 i don't think that's disallowed as is now though? (repeated first_blocknum over multiple messages)
19:25:43 rusty: but you can also ask for timestamps, so your calculation is too optimistic I believe
19:25:58 and checksums I mean
19:26:13 t-bast: ah right! Yes, 4k.
19:26:29 Point is, in the case where you ask for the most information, in the extreme case where a Bitcoin block is full of channel opens, it would not fit in a single lightning message
19:26:32 roasbeef: it's not explicit but then you'd send the next query because the current one has been fully processed
19:26:42 because -> before
19:27:03 sstone: which is part of why we used the old "complete" as termination, makes the ending _explicit_
19:27:10 This has always been a feature of the protocol, though; there's no way to tell.
19:27:18 "send w/e, just let us know when you're done, we'll handle reconciliation afterwards"
19:27:28 I must admit an explicit termination field clears up a lot of this complexity :)
19:28:13 Sure, but I wasn't worried about 8k chans per block, and I'm not worried about 4k now, TBH.
19:28:34 Heh true, I just wanted to raise the point but we can choose to ignore this unlikely event
19:29:01 Technically it's not the number of txs in a block, it's the number of outputs created in a block, same thing though, not worth the extra effort
19:29:17 It may be an issue if in the future we want to add more information than just checksum + timestamp (but I don't know if we'll ever do that)
19:29:52 We'll probably have replaced the gossip mechanism at that point.
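A back-of-the-envelope version of the sizing argument above, assuming 8-byte scids, two 4-byte timestamps and two 4-byte checksums per channel, roughly 65,000 usable bytes per lightning message, and (for the block-side bound) 172 weight units per P2WSH funding output; message headers, TLV overhead and transaction inputs are ignored, so these are rough upper bounds only:

```python
# Rough capacity of a single uncompressed reply_channel_range message
# (assumed sizes: 8-byte scid, 2 x 4-byte timestamps, 2 x 4-byte checksums;
# message header and TLV overhead ignored).
MSG_PAYLOAD = 65_000  # approximate usable bytes in one lightning message

for label, per_channel in [
    ("scids only", 8),
    ("scids + timestamps", 8 + 8),
    ("scids + timestamps + checksums", 8 + 8 + 8),
]:
    print(f"{label:31} ~{MSG_PAYLOAD // per_channel} channels per message")
# scids only                      ~8125 channels per message
# scids + timestamps              ~4062 channels per message
# scids + timestamps + checksums  ~2708 channels per message

# Block-side bound (very rough): a P2WSH funding output is 43 bytes of
# non-witness data, i.e. 172 weight units, so ignoring inputs and tx overhead
# a 4M-weight block can create at most roughly this many funding outputs:
print(4_000_000 // 172)  # ~23255, in line with the ~24k figure mentioned later
```

So an uncompressed reply carrying both timestamps and checksums tops out around 2,700 channels per block, which is why only a block packed with channel opens is the (unlikely) problem case.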
19:30:21 it's also possible as is to continue protocol messages over a series of transport messages
19:30:54 Are we ok with saying that we don't care about the edge case where a block is full of channel openings? And we restrict the gossip mechanism to not split a block's scids over two responses?
19:30:58 Our code will log a "broken" message if this ever happens, and get upset. But we should consider this for gossip NG or whatever follows.
19:32:13 in theory doesn't that mean someone could prevent your channel from being discovered if they pack the beginning of the block?
19:32:41 bitconner: unless you do random elimination for this case? But they could def. affect propagation.
19:33:09 i suppose you could randomize it if you have more than what would otherwise fit, but that kinda defeats the purpose of canned responses
19:33:15 (In *practice*, these scids compress really well, so the actual limit is higher).
19:34:32 Too many channels sounds like a good problem to have, and until then let's just take the easy way out
19:34:36 rusty: why do you need the "minus 1" on line 791 if we always include a whole block of scids in each response?
19:34:42 No need to cover every unlikely corner case imho
19:34:56 hey
19:35:10 cdecker: I agree, this seems unlikely enough for the lifetime of that gossip mechanism
19:35:40 We'll rework the gossip protocol before it ever becomes an issue
19:35:46 t-bast: oops, the minus one is completely wrong. Remove the "minus one" from that sentence :(
19:35:57 unlikely? it's deterministic-ish and there's just a set cost to crowd out the channels
19:36:30 roasbeef: if you use the compressed version it's impossible for that to happen unless we reduce the block size :)
19:37:01 roasbeef: we'd need to double-check the math, but it only happens when using uncompressed and asking for both timestamps and checksums, and you'd really need to fill the whole block
19:37:43 rusty: cool, in that case it makes sense to me. I'd like to re-review our code before we merge it in the spec, but I don't have other comments for now.
19:37:45 checksums compress well?
19:38:08 bitconner: mmmh not very likely, but I haven't tested it
19:38:21 i think these assumptions need some validation
19:38:48 roasbeef: what would you do that for? It's expensive (you'd need to pay all the fees in the block basically) for very little effect (you prevented your arch-nemesis from announcing a channel in the backlog sync, but they can still broadcast updates that'll push their announcement through...)
19:39:27 if we get rid of backlog sync all of these issues go away, no?
19:39:31 bitconner: checksums don't compress, and timestamps can be made harder to compress, but the scids for the same block obv. compress well.
19:40:05 cdecker: idk if the "why" is all that pertinent, it's clear that it's possible given a certain cost, but correct that you can still just attempt to broadcast your stuff
19:40:38 or maybe not
19:41:18 but on our end, we
19:41:20 we
19:41:36 we'll check our recv logic to see if it's compat w/ #737 as is (I suspect it is)
19:41:47 If someone opens 4k channels at once, I promise I'll fix this :) FWIW, we can theoretically open ~24k channels in one block. That ofc would not fit.
19:43:00 alright, so let's all check our implementations against the latest status of this PR and report on github?
19:43:08 sgtm
19:44:31 #action all implementations should verify their compliance with the latest version of #737 and report on Github
19:44:48 #topic Stuck channels are back - and they're still stuck
19:45:02 #link https://github.com/lightningnetwork/lightning-rfc/pull/740
19:45:06 #link https://github.com/lightningnetwork/lightning-rfc/pull/750
19:45:18 I opened two small PRs that offer two different alternatives on how to fix this.
19:45:34 #740 adds a new reserve on top of the reserve (meta)
19:46:00 the game of whack-a-mole continues :p
19:46:00 #750 allows the funder to dip once into the reserve to pay for the temporary fee increase of the commit tx for an incoming HTLC
19:46:14 :P
19:46:44 +1 for keeping extra reserves, dipping into the channel reserves should be avoided
19:47:17 I think #750 makes more sense to be included in the spec: that way HTLC senders will send such an "unblocking" HTLC. 740 can be implemented regardless of whether it's in the spec, as additional safety if implementers feel it's needed (for small channels maybe).
19:47:18 750 is not implementable :(
19:47:24 how isn't it the case that the extra reserve just delays things?
19:47:38 rusty: why?
19:47:48 concurrent send case prob
19:47:52 To be clear, the current requirement only applies to the fee payer. There *is* no requirement on the other side.
19:48:13 rusty: yes but right now the other side still avoids sending that HTLC
19:48:18 t-bast: the additional reserve should be in the spec or not at all, not optional, otherwise we can't know how much a channel can send or receive, because the remote might have a different option
19:48:21 *opinion
19:48:22 at least in eclair, lnd and c-lightning
19:48:50 if we think 750 isn't the way to go, then yes 740 should go in the spec IMO :)
19:49:17 we recently added some additional sanity checks in this area, to avoid things like sending an htlc that can reside in one commitment but not the other
19:49:21 t-bast: sorry, it is implementable (and, in fact, we implemented this), but it's a bit misleading because it can still happen due to concurrency.
19:49:39 It's true that it can happen because of concurrency
19:50:08 If you all think the additional reserve (#740) is a better solution, I'm totally fine with that. I just wanted to explore both before we decide.
19:50:17 We tried not to force the payer into that state because it's a bit anti-social, and risks creating a lower-fee tx than we're comfortable with.
19:50:48 t-bast: I prefer 750, too, because the reserve is already a PITA, and as it gets harder to calculate, it gets worse.
19:51:01 (We implemented both, and have applied 740, FWIW)
19:51:41 at least a margin of 2 times the fees eliminates the risk of someone trying to force a remote channel into that state
19:51:49 My main reason for not applying my 750 implementation instead was that I wasn't sure in practice how impls would handle being forced to dip into reserves.
19:51:54 though technically it can still lock
19:52:33 rusty: yeah I tried that too and current implementations didn't like it :)
19:52:53 t-bast: OK, then I think we go for 740 in that case.
19:52:58 i have to hop off, see you all next time :wave:
19:53:00 m-schmoock: true that the additional reserve "just works" and feels safer, but it felt like the reserve was there for this...
19:53:08 * t-bast waves at niftynei
19:53:26 Alright, is everything more in favor of 740 (additional reserve)?
19:53:35 Not everything, everyone :)
19:53:48 in absence of a better solution, yes
19:54:21 rusty: we need to adapt the factor from 1.5 to 2
19:54:53 m-schmoock: agreed, but that's easy to fix.
19:54:56 also, how do we know when/if a node supports this?
19:55:03 we can bikeshed the factor on the PR, I'm not locked on the decision of using 2. I think matt may want to suggest a value there too, ariard what's your take on this?
19:55:21 because if we can't know (yet) due to outdated software, we can't calculate receivable/spendable on a channel correctly (yet)
19:55:24 a factor of 2 just again seems to be deferring things w/o actually fundamentally fixing anything
19:56:09 roasbeef: why? it only means that the fee increase you can handle is bigger and not unbounded, that's true, but it does prevent the issue
19:56:43 roasbeef: it's true that it's not a fundamental fix, but we don't see an easy one that fixes that in the short term...
19:57:03 t-bast: how do we signal the remote peer that #740 is in effect or not?
19:57:10 roasbeef: this is something users run into, and it's a pain to get out of without closing the channel
19:57:54 m-schmoock: we could add a feature bit but I think it would really be wasting one...
19:58:00 or do we just migrate over time and run into rejections until everyone supports this
19:58:02 I'll think about signaling more
19:58:30 iiuc, the non-initiator can still send multiple htlcs and end up in the same situation as today?
19:58:36 m-schmoock: that was my idea at first, yes. But I can investigate some signaling.
19:59:06 bitconner: no, because at least one of the HTLCs will go through
19:59:07 t-bast: we should at least think about this
19:59:13 unless I'm missing something
19:59:25 and once 1 HTLC has gone through, you're unblocked for the others
19:59:45 roasbeef can you clarify why you think it doesn't fix this?
19:59:50 m-schmoock: will do.
20:00:42 hmmm, am I mistaken, or is it possible to spam a channel with a lot of trimmed (no fee) HTLCs until the remote is forced into lockup still?
20:01:11 :D just saying
20:01:29 m-schmoock: I don't think so, can you detail?
20:01:53 it only happens on the funder side
20:02:22 i'm not too deep into the LN protocol yet, but afaik small trimmed HTLCs have no onchain fee 'requirement' because they are considered dust
20:02:28 so if the funder keeps his extra reserve, when you send him trimmed HTLCs you still increase his balance slowly, which allows him to pay the fee
20:02:46 they are still added to the balance once fulfilled
20:02:59 so, in order to force a remote (even 3rd party) channel into the locked-up state, a mean person would have to drain a remote by repeating dust HTLCs until it's locked
20:03:13 no, it's the other way around
20:03:36 if you want the remote to be stuck, you have to be the fundee. And the remote has to send all his balance to you (which you don't control).
20:03:54 which I did by circular payments
20:03:56 If that remote keeps the extra reserve, he'll be safe, at some point he'll stop sending you HTLCs
20:04:26 and he'll have the extra reserve allowing him to receive HTLCs
20:04:28 you can always route through a 'victim' by using dust HTLCs
20:04:51 yes but that victim will not relay once he reaches his additional reserve
20:05:06 and that additional reserve makes sure his channel isn't stuck
20:05:19 but the PR says 2*onchain fees (which is 0 for trimmed, right?)
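A minimal sketch of the funder-side check being discussed for #740, assuming the pre-anchor BOLT 3 commitment weights (724 base + 172 per untrimmed HTLC) and the proposed safety factor of 2 on the current feerate; `funder_can_afford_htlc` and its exact arithmetic are illustrative, not the PR's final wording:

```python
# Sketch of the #740 "fee spike buffer" check on the funder side (hypothetical
# helper; weights are the pre-anchor BOLT 3 values: 724 base + 172 per HTLC).
COMMIT_WEIGHT = 724
HTLC_OUTPUT_WEIGHT = 172

def commit_fee(feerate_per_kw: int, num_untrimmed_htlcs: int) -> int:
    weight = COMMIT_WEIGHT + HTLC_OUTPUT_WEIGHT * num_untrimmed_htlcs
    return feerate_per_kw * weight // 1000

def funder_can_afford_htlc(funder_balance_msat: int, channel_reserve_sat: int,
                           feerate_per_kw: int, num_untrimmed_htlcs: int) -> bool:
    """Can the funder keep its reserve plus a fee buffer for one more untrimmed
    HTLC at twice the current feerate after this HTLC is added? (The 2x factor
    is the value proposed in #740; it is a tunable margin, not a hard protocol
    constant.)"""
    buffer_fee = commit_fee(2 * feerate_per_kw, num_untrimmed_htlcs + 1)
    required_msat = (channel_reserve_sat + buffer_fee) * 1000
    return funder_balance_msat >= required_msat
```

The idea is that the funder refuses to commit funds it may need for a future fee increase, so the fundee can always add at least one more HTLC without the commitment fee pushing the funder below its reserve.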
20:05:35 t-bast: sorry, if I understand the logic of the #740 fix correctly, you require that the funder keeps an additional reserve, so that when the fundee tries to send an HTLC to rebalance the channel, it works?
20:05:48 ariard: exactly
20:06:09 maybe I'm mistaken
20:06:30 okay so no risk of the funder hitting the bottom of the additional reserve by sending an HTLC from its side (or that would be a spec violation)
20:06:43 t-bast: aahh, sorry I misread the PR. maybe we should clarify
20:06:55 ariard: exactly, maybe there's some clarification needed for dust HTLCs
20:07:01 * pay the fee for a future additional untrimmed HTLC at `2*feerate_per_kw` while
20:07:11 (untrimmed)
20:07:15 #action t-bast close 750 in favor of 740
20:07:23 #action clarify the case of dust HTLCs
20:07:33 #action everyone to continue the discussion on github
20:07:45 btw you should switch to a better name, that's not a penalty_reserve but a fee_uncertainty_reserve?
20:08:03 good idea, let's find a good name for that reserve :)
20:08:05 I mean both reserves have different purposes, one is for security, the other for paying fees
20:08:22 let's move on to be able to touch on one long-term feature
20:08:29 let's continue the discussion on github
20:08:34 sure, let's move on
20:09:04 @everyone we have time for one long-term topic: should we do rendezvous, trampoline or the protocol testing framework?
20:09:12 the loudest win
20:10:04 IMO rendez-vous, decoy, option_scid: the different proposals trade off wrt the exact problem they are trying to cover?
20:10:05 down to talk about rendezvous, namely how any pure sphinx-based approach leaves a ton to be desired as far as UX, so not sure it's worth implementing
20:10:25 Now he tells me, after I implemented it like 3 times xD
20:10:29 (namely single path, no full error propagation, not compatible with mpp, etc)
20:10:32 heh
20:10:39 okay sounds like we'll do rendezvous :)
20:10:48 #topic Rendezvous with Sphinx
20:10:58 But I totally agree, it's rather inflexible, but it can provide some more tools to work with
20:11:23 sure, but would it see any actual usage? given the drastic degradation in UX
20:11:26 I think that even though it's not very flexible, the current proposal is really quite cheap to implement so it may be worth it.
20:11:48 rv node is offline, now what? the channel included in the rv route was force closed or something, now what?
20:11:52 I know we'd add it to eclair-mobile and phoenix, so there would be end-user usage
20:12:09 the invoice is old, and another payment came in, so now that rv channel can't recv the payment, now what?
20:12:13 t-bast: with compression onions, it makes sense. We can have multiple, in fact.
20:12:15 It can be a powerful tool to hide the route inside a company setup for example
20:12:56 rusty: compression onion?
20:13:00 It can be useful as a routing hint on steroids
20:13:06 I think it's a powerful tool to hide a wallet's node_id and channel, replacing decoys
20:13:54 because a wallet directly connected to a node knows its unannounced channel will not be closed for no reason (or it'd re-open one soon and non-strict forwarding will do the trick)
20:14:01 * cdecker likes it for its nice construction ;-)
20:14:09 it's like 10x larger than a routing hint though, and at least routing hints let you do things like update chan policies on that route if you have stale data
20:14:26 roasbeef: it's not in cdecker's latest proposal
20:14:29 feels like we're conflating rv w/ some sort of blinded hop hint
20:15:13 With the compressible onion stuff we might actually end up with smaller onions than routing hints
20:15:15 the thing is a blinded hop hint may in fact be more costly to implement because there would be more new mechanisms
20:15:47 while rendezvous has all the bricks already there in any sphinx implementation, so even a one-hop rendezvous would be quite an efficient blinding
20:15:49 roasbeef: yeah, we really want blinded hints. RV with compressed onions is one way of getting us there.
20:16:47 I've looked again at hornet and taranet, and it's really a big amount of work. While it's very desirable in the long term, I think a shorter-term solution would be quite useful too.
20:17:36 The question really isn't whether we want to add it to the spec (LL can obviously opt out if you don't see the value). The real question is whether the construction makes sense
20:17:41 t-bast: eh, we're basically half way there w/ hornet, we only need to implement the data phase, we've already implemented the setup phase
20:18:01 Otherwise we can talk about sensibility all day long and not make progress...
20:18:19 roasbeef: but I think we'd want the taranet version that has data payload integrity, otherwise we're getting worse than simple sphinx in terms of tagging attacks
20:18:21 t-bast: would mean we'd just include a data rv in the invoice, then use that to establish initial comms, get new htlc rv routes, send other signalling information, etc
20:18:45 can just use an arbitrary-input-size block cipher, and tagging attacks don't exist
20:19:12 cdecker: so the primary goal is extended blinded unadvertised channels?
20:19:27 If all implementations can commit right now to dedicating resources to work on Hornet/Taranet, I'd be fine with it. But can we really commit to that?
20:19:47 roasbeef: well, that's basically what you get with offers. You need rv to get the real invoice, which contains up-to-date information.
20:20:33 roasbeef: can you send me more details offline about that suggestion for the block cipher? I'd like to look at it closely, I don't think it's so simple
20:20:37 (I already (re-)implemented and spec'ed the e2e payload in Sphinx for messaging, was hoping to post this week)
20:20:47 roasbeef: I meant to expand the use-cases section a bit, but it can serve as 1) extended route hints, 2) recipient anonymity, 3) forcing a payment through a witness (game server adjudicating who wins), ...
20:21:22 yeh all that stuff to me just seems to be implementing hand-rolled partial solutions for these issues when we already have a fully spec'd protocol that addresses all these use cases and more (offers, sphinx messaging, etc, etc)
20:21:48 So how is recipient anonymity implementable without RV?
20:23:05 * cdecker will drop off in a couple of minutes
20:23:17 Maybe next time we start with the big picture stuff?
20:23:48 Good idea, let's say that next time we start with rendezvous? And everyone can catch up on the latest state of cdecker's proposal before then?
20:23:51 not saying it is w/o it, i'm a fan of RV, but of variants that can be used wholesale to nearly entirely replace payments as they exist now with similar or better UX
20:24:35 Cool, and I'd love to see how t-bast's proposal for trampoline + rendezvous could work out
20:25:07 Maybe that could simply be applying the compressible onion to the trampoline onion instead of the outer one?
20:25:22 cdecker: yeah in fact it's really simple, I'll share that :)
20:25:48 That'd free us from requiring the specific channels in the RV onion and instead just need the trampoline nodes to remain somehow reachable
20:25:56 roasbeef: could you (or conner, or someone else) send a quick summary of the features we're working on that hornet could offer for free?
20:26:05 Anyway, need to run off, see you next time ^^
20:26:21 roasbeef: it would be nice to clear up what we want to offload to the hornet work and what's required even if we do hornet in the future
20:26:30 Bye cdecker, see you next time!
20:27:16 I need to re-read the part where they discuss receiver anonymity
20:27:28 also, using hornet, we have a protocol for which at least a serious privacy analysis has been done; redoing this work again for all partial impls...
20:27:54 ariard: can you clarify? I didn't get that
20:27:57 t-bast: sure i can work on a summary
20:28:11 great thanks bitconner!
20:28:42 t-bast: cf your point on the decoy proposal and Bleichenbacher-style attacks
20:29:58 oh yeah, I think rendezvous feels safer for that - a one-hop rendezvous achieves the blinding I'm interested in for wallet users
20:30:30 avoids them leaking their unannounced channel and node_id (as long as we stop signing the invoice directly with the node_id)
20:31:22 Alright, sounds like we're reaching the end, please all have a look at the PRs we didn't have time to address during the meeting.
20:31:36 t-bast: aren't unannounced channels a leak by themselves for the previous hop?
20:31:51 #action bitconner to draft a summary of the goodies of hornet related to currently discussed new features
20:32:01 #action t-bast to share rendezvous + trampoline
20:32:35 ariard: yes but this is one you can't avoid: if you're using a wallet on a phone and can't run your own full node, there's no point in trying to hide from the node you're connected to
20:32:47 they'll always know you are the sender/recipient and never an intermediate node
20:33:15 if you want to avoid that, you'll have to run your own node, otherwise you need to be fine with that :)
20:33:24 #endmeeting