19:05:29 <t-bast> #startmeeting
19:05:29 <lightningbot> Meeting started Mon Mar  2 19:05:29 2020 UTC.  The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:05:29 <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
19:05:40 <t-bast> Let's start with a simple PR to warm up
19:05:50 <t-bast> #topic https://github.com/lightningnetwork/lightning-rfc/pull/736
19:05:54 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/736
19:06:13 <t-bast> This is a very simple PR to add more test vectors to Bolt 11 and clarify pico amounts
19:06:41 <t-bast> It's mostly adding negative test vectors (which we lacked before)
19:07:14 <cdecker> I guess this also closes #699 (Add bolt11 test vector with amount in `p` units)
19:07:29 <t-bast> There is one pending comment on whether to be strict about pico amounts ending with 0 or to let people round them
19:08:24 <t-bast> cdecker: I think #699 adds another test vector, we should probably merge the two PRs
19:08:49 <bitconner> have these been implemented by more than one impl?
19:08:50 <rusty> t-bast: I think we add "SHOULD reject" to the reader side of the spec.  It's straightforward.
19:09:15 <rusty> bitconner: c-lightning and eclair, IIUC.
19:09:35 <t-bast> rusty: sounds good to me to add a "SHOULD reject"
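For reference, a minimal sketch of the reader-side check being settled here, assuming BOLT 11 semantics (1 BTC = 10^11 msat, so a 'p' amount encodes tenths of a millisatoshi and must end in 0); the names below are illustrative, not from any implementation:

    MULTIPLIER_EXP = {'m': 8, 'u': 5, 'n': 2, 'p': -1}  # msat = amount * 10**exp

    def amount_msat(amount: int, multiplier: str) -> int:
        exp = MULTIPLIER_EXP[multiplier]
        if exp < 0:
            # The rule above: a 'p' amount whose last digit is not 0 would need
            # sub-millisatoshi precision, so the reader SHOULD reject it.
            if amount % 10 != 0:
                raise ValueError("invalid sub-millisatoshi amount")
            return amount // 10
        return amount * 10 ** exp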
19:09:52 <cdecker> bitconner: I think no implementation was failing this test, just some random JS library, which caused sword_smith to file the issue
19:10:06 <t-bast> yes those test vectors have been implemented in eclair and CL (found a few places in eclair where we weren't spec compliant)
19:10:37 <t-bast> it's a good exercise to add them to your implementation, you may have a few surprises :D
19:10:41 <bitconner> rusty: nice, i'm all for negative tests. i can make a pr to lnd adding these today
19:10:46 <rusty> t-bast:  I take it back, it's already there.
19:11:27 <t-bast> rusty: woops true, then it's settled
19:11:48 <cdecker> I guess we can decide on the correctness independently of the implementations since this is just a place we were underspecified
19:12:30 * cdecker ACKs on behalf of c-lightning :-)
19:13:15 <t-bast> ACK for me too, ready to merge
19:13:19 <rusty> OK, I say we apply 736.  Oh, and this superseded 699, FWIW.
19:13:33 <ariard> no blocker for RL (still a bit behind on invoices stuff...)
19:13:37 <t-bast> rusty: you sure 699 didn't add another test vector that we lack?
19:14:01 <cdecker> Ok, so closing #699 since this covers the cases, correct?
19:14:23 <cdecker> #699 adds a working one, not a counterexample
19:14:40 <t-bast> But 699 adds a test vector for the pico amount
19:14:47 <t-bast> which we didn't have before
19:15:08 <cdecker> But it also doesn't cover new corner cases, does it?
19:16:00 * rusty checks... yes, t-bast is right there's no positive pico test.  Ack 699 too.
19:16:17 <t-bast> Yeah I believe we should apply both (positive and negative tests)
19:16:40 <cdecker> ok
19:17:09 <t-bast> #action merge #699 and #736
19:17:38 <t-bast> if you disagree with the action items, please say it during the meeting so we can clear up miscommunication
19:18:06 <t-bast> #topic Clarify gossip messages
19:18:10 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/737
19:18:35 <t-bast> There has been a lot of back and forth between implementations over channel range messages, the spec was giving a bit too much leeway there
19:18:48 <t-bast> This is an attempt at clarifying the situation and making the spec stricter
19:19:20 <t-bast> conner did you have time to look at this? I don't remember if it was you or wpaulino who worked on that recently for lnd?
19:19:27 <roasbeef> stuff on the PR itself should supersede irc (approvals), as it's a much terser medium of comms
19:19:31 <roasbeef> it was wpaulino on our end
19:19:52 <roasbeef> we also impl the non-overlapping req as is rn
19:20:57 <roasbeef> we'll accept a few diff versions (to support older lnd nodes), but now send things in a complete (entire range) and non-overlapping manner
19:21:10 <rusty> t-bast: your example in there is invalid; we assume all channels in a single block are in the same response
19:21:11 <t-bast> roasbeef: do you know if lnd would accept reading the version defined in this PR?
19:21:52 <rusty> t-bast: we do the pr behaviour currently, so I think it has to be ok?
19:22:16 <t-bast> rusty: I think we need to make it explicit then, eclair splits between multiple responses. It's technically possible that a single response *cannot* hold all the channels that were created in a single block (I need to dig up the calculations for that)
19:23:16 <t-bast> IIRC a single bitcoin block may be able to fit more than 3000 channel opens, which overflows the 65kB lightning message limit
19:23:29 <t-bast> Do we care or do we consider this is an extremely unlikely event and ignore that?
19:23:33 <bitconner> why would the responder set `first_block_num` to something less than what's in the request?
19:23:41 <rusty> t-bast: 64k / 8 == 8k?  And these compress really really well...
19:24:13 <t-bast> rusty: but you need to spec also for the uncompressed case?
19:24:15 <sstone> bitconner: if they want to reuse pre-computed responses for example
19:24:16 <rusty> bitconner: an implementation is allowed to keep canned responses, so you might keep a set of scids for blocks 1000-1999, etc.
19:24:28 <t-bast> rusty: and we'd like the same splitting behavior regardless of compressed/uncompressed, right?
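A rough sketch of the splitting rule under discussion, assuming scids are sorted by block and use the standard short_channel_id packing (block height in the top 3 bytes); the helper and its limit parameter are illustrative, not taken from the PR:

    from itertools import groupby

    def split_replies(sorted_scids, max_scids_per_reply):
        """Group scids into reply_channel_range chunks without splitting one
        block's scids across two replies."""
        reply = []
        for _, block in groupby(sorted_scids, key=lambda scid: scid >> 40):
            block = list(block)
            if reply and len(reply) + len(block) > max_scids_per_reply:
                yield reply
                reply = []
            # a block's scids always stay together, even if that alone
            # exceeds the limit (the edge case debated below)
            reply.extend(block)
        if reply:
            yield reply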
19:24:32 <roasbeef> t-bast: good point re max chans for uncompressed
19:25:01 <rusty> t-bast: sure, but we can't fit 8k txs in a block anyway?
19:25:12 <roasbeef> i don't think that's disallowed as is now though? (repeated first_blocknum over multiple messages)
19:25:43 <t-bast> rusty: but you can also ask for timestamps, so your calculation is too optimistic I believe
19:25:58 <t-bast> and checksums I mean
19:26:13 <rusty> t-bast: ah right!  Yes, 4k.
19:26:29 <t-bast> Point is I think that in the case where you ask for the most information, in the extreme case where a Bitcoin block is full of channel opens it would not fit in a single lightning message
19:26:32 <sstone> roasbeef: it's not explicit but then you'd send the next query before the current one has been fully processed
19:27:03 <roasbeef> sstone: which is part of why we used the old "complete" as termination, makes the ending _explicit_
19:27:10 <rusty> This has always been a feature of the protocol, though; there's no way to tell.
19:27:18 <roasbeef> "send w/e, just let us know when you're done, we'll hadnel reconcilliation afterwards"
19:27:28 <t-bast> I must admit an explicit termination field clears up a lot of this complexity :)
19:28:13 <rusty> Sure, but I wasn't worried about 8k chans per block, and I'm not worried about 4k now, TBH.
19:28:34 <t-bast> Heh true, I just wanted to raise the point but we can choose to ignore this unlikely event
19:29:01 <cdecker> Technically it's not number of txs in a block, it's number of outputs created in a block, same thing though, not worth the extra effort
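Back-of-envelope numbers behind this exchange (a rough sketch: exact figures depend on which query flags are set, and message/TLV overhead is ignored):

    MAX_MSG = 65535            # maximum lightning message size in bytes
    SCID = 8                   # uncompressed short_channel_id
    TIMESTAMPS = 2 * 4         # channel_update timestamps, both directions
    CHECKSUMS = 2 * 4          # channel_update checksums, both directions

    print(MAX_MSG // SCID)                             # 8191 scids alone
    print(MAX_MSG // (SCID + TIMESTAMPS))              # 4095 with timestamps
    print(MAX_MSG // (SCID + TIMESTAMPS + CHECKSUMS))  # 2730 with both

    # Loose upper bound on channel opens per block, counting only one P2WSH
    # funding output (8 value + 1 script-length + 34 script bytes) per channel:
    print(4_000_000 // (4 * 43))                       # ~23k outputs per block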
19:29:17 <t-bast> It may be an issue if in the future we want to add more information than just checksum + timestamp (but I don't know if we'll ever do)
19:29:52 <t-bast> We'll probably have replaced the gossip mechanism at that point.
19:30:21 <roasbeef> possible as is also to continue protocol messages over a series of transport messages
19:30:54 <t-bast> Are we ok with saying that we don't care about the edge case where a block is full of channel openings? And we restrict the gossip mechanism to not split a block's scids over two responses?
19:30:58 <rusty> Our code will log a "broken" message if this ever happens, and get upset.  But we should consider this for gossip NG or whatever follows.
19:32:13 <bitconner> in theory doesn't that mean someone could prevent your channel from being discovered if they pack the beginning of the block?
19:32:41 <rusty> bitconner: unless you do random elimination for this case?  But they could def. affect propagation.
19:33:09 <bitconner> i suppose you could randomize it if you have more than what would otherwise fit, but that kinda defeats the purpose of canned responses
19:33:15 <rusty> (In *practice*, these scids compress really well, so the actual limit is higher).
19:34:32 <cdecker> Too many channels sounds like a good problem to have, and until then let's just take the easy way out
19:34:36 <t-bast> rusty: why do you need the "minus 1" line 791 if we always include a whole block of scid in each response?
19:34:42 <cdecker> No need to cover every unlikely corner case imho
19:34:56 <m-schmoock> hey
19:35:10 <t-bast> cdecker: I agree, this seems unlikely enough for the lifetime of that gossip mechanism
19:35:40 <cdecker> We'll rework the gossip protocol before it ever becomes an issue
19:35:46 <rusty> t-bast: oops, the minus one is completely wrong.  Remove the "minus one" from that sentence :(
19:35:57 <roasbeef> unlikely? it's deterministic-ish and there's just a set cost to crowd out the channels
19:36:30 <t-bast> roasbeef: if you use the compressed version it's impossible that it happens unless we reduce the block size :)
19:37:01 <t-bast> roasbeef: we'd need to double-check the math, but it only happens when using uncompressed and asking for both timestamps and checksums, and you'd really need to fill the whole block
19:37:43 <t-bast> rusty: cool in that case it makes sense to me. I'd like to re-review our code before we merge it in the spec, but I don't have other comments for now.
19:37:45 <bitconner> checksums compress well?
19:38:08 <t-bast> bitconner: mmmh not very likely, but I haven't tested it
19:38:21 <bitconner> i think these assumptions need some validation
19:38:48 <cdecker> roasbeef: what would you do that for? It's expensive (you'd need to pay all the fees in the block basically) for very little effect (you prevented your arch-nemesis from announcing a channel, in the backlog sync, but can still broadcast updates that'll push your announcement through...)
19:39:27 <bitconner> if we get rid of backlog sync all of these issues go away no?
19:39:31 <rusty> bitconner: checksums don't compress, and timestamps can be made harder to compress, but the scids for same block obv. compress well.
19:40:05 <roasbeef> cdecker: idk if the "why" is all that pertinent, it's clear that it's possible given a certain cost, but correct that you can still just attempt to broadcast your stuff
19:40:38 <bitconner> or maybe not
19:41:36 <roasbeef> but on our end, we'll check our recv logic to see if it's compat w/ #737 as is (I suspect it is)
19:41:47 <rusty> If someone opens 4k channels at once, I promise I'll fix this :)  FWIW, we can theoretically open ~24k channels in one block.  That ofc would not fit.
19:43:00 <t-bast> alright so let's all check our implementations against the latest status of this PR and report on github?
19:43:08 <bitconner> sgtm
19:44:31 <t-bast> #action all implementations should verify their compliance with the latest version of #737 and report on Github
19:44:48 <t-bast> #topic Stuck channels are back - and they're still stuck
19:45:02 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/740
19:45:06 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/750
19:45:18 <t-bast> I opened two small PRs that offer two different alternatives on how to fix this.
19:45:34 <t-bast> #740 adds a new reserve on top of the reserve (meta)
19:46:00 <roasbeef> the game of whack-a-mole continues :p
19:46:00 <t-bast> #750 allows the funder to dip once into the reserve to pay for the temporary fee increase of the commit tx for an incoming HTLC
19:46:14 <t-bast> :P
19:46:44 <cdecker> +1 for keeping extra reserves, dipping into the channel reserves should be avoided
19:47:17 <t-bast> I think #750 makes more sense to be included in the spec: that way HTLC senders will send such an "unblocking" HTLC. 740 can be implemented regardless of whether it's in the spec, as an additional safety if implementers feel it's needed (for small channels maybe).
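To make the failure mode concrete, a rough numeric sketch (commitment weights follow BOLT 3, 724 base plus 172 per untrimmed HTLC; every other number is made up for illustration):

    FEERATE_PER_KW = 2500
    BASE_WEIGHT, HTLC_WEIGHT = 724, 172

    def commit_fee(num_untrimmed_htlcs: int) -> int:
        return FEERATE_PER_KW * (BASE_WEIGHT + num_untrimmed_htlcs * HTLC_WEIGHT) // 1000

    funder_above_reserve = 2_000    # sat the funder has left above its reserve
    print(commit_fee(0))            # 1810 sat: the fee the funder pays today
    print(commit_fee(1))            # 2240 sat: the fee once the fundee adds one HTLC
    # The fundee cannot send any untrimmed HTLC, despite holding most of the
    # channel funds, because the funder would have to dip into its reserve:
    print(funder_above_reserve >= commit_fee(1))   # False -> the channel is stuck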
19:47:18 <rusty> 750 is not implementable :(
19:47:24 <roasbeef> how isn't it the case that the extra reserve just delays things?
19:47:38 <t-bast> rusty: why?
19:47:48 <roasbeef> concurrent send case prob
19:47:52 <rusty> To be clear, the current requirement only applies to the fee payer.  There *is* no requirement on the other side.
19:48:13 <t-bast> rusty: yes but right now the other side still avoids sending that HTLC
19:48:18 <m-schmoock> t-bast: the additional reserve should be in the spec or not at all, not optional, otherwise we can't know how much a channel can send or receive, because the remote might have a different opinion
19:48:22 <t-bast> at least in eclair, lnd and c-lightning
19:48:50 <t-bast> if we think 750 isn't the way to go, then yes 740 should go in the spec IMO :)
19:49:17 <roasbeef> we recently added some additional sanity checks in this area, to avoid things like sending an htlc that can reside in one commitment but not the other
19:49:21 <rusty> t-bast: sorry, it is implementable (and, in fact, we implemented this), but it's a bit misleading because it can still happen due to concurrency.
19:49:39 <t-bast> It's true that it can happen because of concurrency
19:50:08 <t-bast> If you all think the additional reserve (#740) is a better solution, I'm totally fine with that. I just wanted to explore both before we decide.
19:50:17 <rusty> We tried not to force the payer into that state because it's a bit anti-social, and risks creating a lower fee tx than we're comfortable with.
19:50:48 <rusty> t-bast: I prefer 750, too, because reserve is already a PITA, and as it gets harder to calculate, it gets worse.
19:51:01 <rusty> (We implemented both, and have applied 740, FWIW)
19:51:41 <m-schmoock> at least a margin of 2 times the fees eliminates the risk of someone trying to force a remote channel into that state
19:51:49 <rusty> My main reason for not applying my 750 implementation instead was that I wasn't sure in practice how impl would handle being forced to dip into reserves.
19:51:54 <m-schmoock> though technically it can still lock
19:52:33 <t-bast> rusty: yeah I tried that too and current implementations didn't like it :)
19:52:53 <rusty> t-bast: OK, then I think we go for 740 in that case.
19:52:58 <niftynei> i have to hop off, see you all next time :wave:
19:53:00 <t-bast> m-schmoock: true that the additional reserve "just works" and feels safer, but it felt like the reserve was there for this...
19:53:08 * t-bast waves at niftynei
19:53:26 <t-bast> Alright is everything more in favor of 740 (additional reserve)?
19:53:35 <t-bast> Not everything, everyone :)
19:53:48 <m-schmoock> in the absence of a better solution, yes
19:54:21 <m-schmoock> rusty: we need to adapt the factor from 1.5 to 2
19:54:53 <rusty> m-schmoock: agreed, but that's easy to fix.
19:54:56 <m-schmoock> also, how do we know when/if a node supports this?
19:55:03 <t-bast> we can bikeshed the factor on the PR, I'm not locked on the decision of using 2. I think matt may want to suggest a value there too, ariard what's your take on this?
19:55:21 <m-schmoock> because if we can't know (yet) due to outdated software, we can't calculate receivable/spendable on a channel correctly (yet)
19:55:24 <roasbeef> factor of 2 just again seems to be deferring things w/o actually fundamentally fixing anything
19:56:09 <t-bast> roasbeef: why? it only means that the fee increase you can handle is bigger and not unbounded, that's true, but it does prevent the issue
19:56:43 <t-bast> roasbeef: it's true that it's not a fundamental fix, but we don't see an easy one that fixes that in the short term...
19:57:03 <m-schmoock> t-bast: how do we signal the remote peer that #740 is in effect or not?
19:57:10 <t-bast> roasbeef: this is something users run into, and it's a pain to get out without closing the channel
19:57:54 <t-bast> m-schmoock: we could add a feature bit but I think it would really be wasting one...
19:58:00 <m-schmoock> or do we just migrate over time and run into rejections until everyone supports this
19:58:02 <t-bast> I'll think about signaling more
19:58:30 <bitconner> iiuc, the non-initiator can still send multiple htlcs and end up in the same situation as today?
19:58:36 <t-bast> m-schmoock: that was my idea at first, yes. But I can investigate some signaling.
19:59:06 <t-bast> bitconner: no because at least one of the HTLCs will go through
19:59:07 <m-schmoock> t-bast: we should at least think about this
19:59:13 <t-bast> unless I'm missing something
19:59:25 <t-bast> and once 1 HTLC has gone through, you're unblocked for the others
19:59:45 <t-bast> roasbeef can you clarify why you think it doesn't fix this?
19:59:50 <t-bast> m-schmoock: will do.
20:00:42 <m-schmoock> hmmm, am I mistaken, or is it still possible to spam a channel with a lot of trimmed (no fees) HTLCs until the remote is forced into lockup?
20:01:11 <m-schmoock> :D just saying
20:01:29 <t-bast> m-schmoock: I don't think so, can you detail?
20:01:53 <t-bast> it only happens on the funder side
20:02:22 <m-schmoock> im not too deep into the LN protocol yet, but afaik small trimmed HTLCs have no onchain fee 'requirement' because they are considered dust
20:02:28 <t-bast> so if funder keeps his extra reserve, when you send him trimmed HTLCs you still increase his balance slowly, which allows him to pay the fee
20:02:46 <t-bast> they are still added to the balance once fulfilled
20:02:59 <m-schmoock> so, in order to force a remote (even 3rd party) channel into the locked up state, a mean person would have to drain a remote by repeating dust HTLCs until it's locked
20:03:13 <t-bast> no, it's the other way around
20:03:36 <t-bast> if you want the remote to be stuck, you have to be the fundee. And the remote has to send all his balance to you (which you don't control).
20:03:54 <m-schmoock> which I did by circular payments
20:03:56 <t-bast> If that remote keeps the extra reserve, he'll be safe, at some point he'll stop sending you HTLCs
20:04:26 <t-bast> and he'll have the extra reserve allowing him to receive HTLCs
20:04:28 <m-schmoock> you can route through a 'victim' always by using dust HTLC
20:04:51 <t-bast> yes but that victim will not relay once he reaches his additional reserve
20:05:06 <t-bast> and that additional reserve makes sure his channel isn't stuck
20:05:19 <m-schmoock> but the PR says 2*onchain fees (which is 0 for trimmed, right?)
20:05:35 <ariard> t-bast: sorry, if I understand the logic of the #740 fix well, you require that the funder keeps an additional reserve, that way when the fundee tries to send an HTLC to rebalance the channel it works?
20:05:48 <t-bast> ariard: exactly
20:06:09 <m-schmoock> maybe I'm mistaken
20:06:30 <ariard> okay so no risk of funder hitting the bottom of additional reserve by sending a HTLC from its side (or that would be a spec violation)
20:06:43 <m-schmoock> t-bast: aahh, sorry I misread the PR. maybe we should clarify
20:06:55 <t-bast> ariard: exactly, maybe there's some clarification needed for dust HTLCs
20:07:01 <m-schmoock> * pay the fee for a future additional untrimmed HTLC at `2*feerate_per_kw` while
20:07:11 <m-schmoock> (untrimmed)
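A minimal sketch of the funder-side check #740 adds, as quoted just above (names are illustrative; trimmed/dust HTLCs add no commitment output and hence no fee, so they are not counted):

    BASE_WEIGHT, HTLC_WEIGHT = 724, 172   # BOLT 3 commitment weights

    def funder_keeps_fee_buffer(funder_above_reserve_sat: int,
                                num_untrimmed_htlcs: int,
                                feerate_per_kw: int) -> bool:
        # Funder must afford the commitment fee with one extra, future
        # untrimmed HTLC at twice the current feerate (the exact factor,
        # 1.5 vs 2, is still being debated on the PR).
        weight = BASE_WEIGHT + (num_untrimmed_htlcs + 1) * HTLC_WEIGHT
        buffered_fee = 2 * feerate_per_kw * weight // 1000
        return funder_above_reserve_sat >= buffered_fee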
20:07:15 <t-bast> #action t-bast close 750 in favor of 740
20:07:23 <t-bast> #action clarify the case of dust HTLCs
20:07:33 <t-bast> #action everyone to continue the discussion on github
20:07:45 <ariard> btw you should switch to a better name, that's not a penalty_reserve but a fee_uncertainty_reserve ?
20:08:03 <t-bast> good idea, let's find a good name for that reserve :)
20:08:05 <ariard> I mean both reserves have different purposes, one is for security, the other for paying fees
20:08:22 <t-bast> let's move on to be able to touch on one long-term feature
20:08:29 <t-bast> let's continue the discussion on github
20:08:34 <ariard> sure, let's move on
20:09:04 <t-bast> @everyone we have time for one long-term topic: should we do rendezvous, trampoline or protocol testing framework?
20:09:12 <t-bast> the loudest win
20:10:04 <ariard> IMO rendez-vous, decoy, option_scid: the different proposals trade off wrt the exact problem they are trying to cover?
20:10:05 <roasbeef> down to talk about rendezvous, namely how any pure sphinx based approach leaves a ton to be desired as far as UX, so not sure it's worth implementing
20:10:25 <cdecker> Now he tells me, after I implemented it like 3 times xD
20:10:29 <roasbeef> (namely single path, no full error propagation, not compatible with mpp, etc)
20:10:32 <roasbeef> heh
20:10:39 <t-bast> okay sounds like we'll do rendezvous :)
20:10:48 <t-bast> #topic Rendezvous with Sphinx
20:10:58 <cdecker> But I totally agree, it's rather inflexible, but it can provide some more tools to work with
20:11:23 <roasbeef> sure, but would it see any actual usage? given the drastic degradation in UX
20:11:26 <t-bast> I think that even though it's not very flexible, the current proposal is really quite cheap to implement so it may be worth it.
20:11:48 <roasbeef> rv node is offline, now what? the channel included in the rv route was force closed or something, now what?
20:11:52 <t-bast> I know we'd add it to eclair-mobile and phoenix, so there would be end-users usage
20:12:09 <roasbeef> the invoice is old, and another payment came in, so now that rv channel can't recv the payment, now what?
20:12:13 <rusty> t-bast: with compression onions, it makes sense.  We can have multiple, in fact.
20:12:15 <cdecker> It can be a powerful tool to hide the route inside a company setup for example
20:12:56 <bitconner> rusty: compression onion?
20:13:00 <cdecker> It can be useful as a routing hint on steroids
20:13:06 <t-bast> I think it's a powerful tool to hide a wallet's node_id and channel, replacing decoys
20:13:54 <t-bast> because a wallet directly connected to a node knows its unannounced channel will not be closed for no reason (or he'd re-open one soon and non-strict forwarding will do the trick)
20:14:01 * cdecker likes it for its nice construction ;-)
20:14:09 <roasbeef> it's like 10x larger than a routing hint though, and at least routing hints let you do things like update chan policies on that route if you have stale data
20:14:26 <t-bast> roasbeef: it's not in cdecker's latest proposal
20:14:29 <roasbeef> feels like we're conflating rv w/ some sort of blinded hop hint
20:15:13 <cdecker> With the compressible onion stuff we might actually end up with smaller onions than routing hints
20:15:15 <t-bast> the thing is blinded hop hint may be in fact more costly to implement because there would be more new mechanisms
20:15:47 <t-bast> while rendezvous has all the bricks already there in any sphinx implementation, so even a one-hop rendezvous would be quite an efficient blinding
20:15:49 <rusty> roasbeef: yeah, we really want blinded hints.  RV with compressed onions is one way of getting us there.
20:16:47 <t-bast> I've looked again at hornet and taranet, and it's really a big amount of work. While it's very desirable in the long term, I think a shorter-term solution would be quite useful too.
20:17:36 <cdecker> The question really isn't whether we want to add it to the spec (LL can obviously opt out if you don't see the value). The real question is whether the construction makes sense
20:17:41 <roasbeef> t-bast: eh, we're basically halfway there w/ hornet, we only need to implement the data phase, we've already implemented the setup phase
20:18:01 <cdecker> Otherwise we can talk about sensibility all day long and not make progress...
20:18:19 <t-bast> roasbeef: but I think we'd want the taranet version that has data payload integrity, otherwise we're getting worse than simple sphinx in terms of tagging attacks
20:18:21 <roasbeef> t-bast: would mean we'd just include a data rv in the invoice, then use that to establish initial comms, get new htlc rv routes, send other signalling information, etc
20:18:45 <roasbeef> can just use an arbitrary input size block cipher, and tagging attacks don't exist
20:19:12 <roasbeef> cdecker: so the primary goal is extended blinded unadvertised channels primarily?
20:19:27 <t-bast> If all implementations can commit on dedicating resources right now to work on Hornet/Taranet, I'd be fine with it. But can we really commit on that?
20:19:47 <rusty> roasbeef: well, that's basically what you get with offers.  You need rv to get real invoice, which contains up-to-date information.
20:20:33 <t-bast> roasbeef: can you send me more details offline about that suggestion for the block cipher? I'd like to look at it closely, I don't think it's so simple
20:20:37 <rusty> (I already (re-)implemented and speced the e2e payload in Sphinx for messaging, was hoping to post this week)
20:20:47 <cdecker> roasbeef: I meant the expand the use-cases section a bit, but it can serve as 1) extended route hints, 2) recipient anonymity, 3) forcing a payment through a witness (game server adjudicating who wins), ...
20:21:22 <roasbeef> yeh all that stuff to me just seems to be implementing hand rolled partial solutions for these issues when we already have a fully spec'd protocol that addresses all these use cases and more (offers, sphinx messaging, etc, etc)
20:21:48 <cdecker> So how is recipient anonymity implementable without RV?
20:23:05 * cdecker will drop off in a couple of minutes
20:23:17 <cdecker> Maybe next time we start with the big picture stuff?
20:23:48 <t-bast> Good idea, let's say that next time we start with rendezvous? And everyone can catch up on the latest state of cdecker's proposal before then?
20:23:51 <roasbeef> not saying it is w/o it, i'm a fan of RV, but variants that can be used wholesale to nearly entirely replace payments as they exist now with similar or better UX
20:24:35 <cdecker> Cool, and I'd love to see how t-bast's proposal for trampoline + rendezvous could work out
20:25:07 <cdecker> Maybe that could simply be to apply the compressible onion for the trampoline onion instead of the outer one?
20:25:22 <t-bast> cdecker: yeah in fact it's really simple, I'll share that :)
20:25:48 <cdecker> That'd free us from requiring the specific channels in the RV onion and instead just need the trampoline nodes to remain somehow reachable
20:25:56 <t-bast> roasbeef: could you (or conner, or someone else) send a quick summary of the features we're working on that hornet could offer for free?
20:26:05 <cdecker> Anyway, need to run off, see you next time ^^
20:26:21 <t-bast> roasbeef: it would be nice to clear up what we want to offload for hornet work and what's required even if we do hornet in the future
20:26:30 <t-bast> Bye cdecker, see you next time!
20:27:16 <t-bast> I need to re-read the part where they discuss receiver anonymity
20:27:28 <ariard> also using hornet, we have a protocol for which at least the privacy analysis has been done seriously; redoing this work again for all the partial impls...
20:27:54 <t-bast> ariard: can you clarify? I didn't get that
20:27:57 <bitconner> t-bast: sure i can work on a summary
20:28:11 <t-bast> great thanks bitconner!
20:28:42 <ariard> t-bast: cf your point on the decoy proposal and bleichenbacher-style attacks
20:29:58 <t-bast> oh yeah, I think rendezvous feels safer for that - a one-hop rendezvous achieves the blinding I'm interested in for wallet users
20:30:30 <t-bast> avoids them leaking their unannounced channel and node_id (as long as we stop signing the invoice directly with the node_id)
20:31:22 <t-bast> Alright sounds like we're reaching the end, please all have a look at the PRs we didn't have time to address during the meeting.
20:31:36 <ariard> t-bast: isn't unannounced channels a leak by themselves for the previous hop?
20:31:51 <t-bast> #action bitconner to draft a summary of the goodies of hornet related to currently discussed new features
20:32:01 <t-bast> #action t-bast to share rendezvous + trampoline
20:32:35 <t-bast> ariard: yes but this is one you can't avoid: if you're using a wallet on a phone and can't run your whole node, there's no point in trying to hide from the node you're connected to
20:32:47 <t-bast> they'll always know you are the sender/recipient and never an intermediate node
20:33:15 <t-bast> if you want to avoid that, you'll have to run your own node, otherwise you need to be fine with that :)
20:33:24 <t-bast> #endmeeting