19:06:01 <t-bast> #startmeeting
19:06:01 <lightningbot> Meeting started Mon Mar 30 19:06:01 2020 UTC.  The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:06:01 <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
19:06:10 <t-bast> First of all, the agenda for today
19:06:13 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/issues/757
19:06:29 <t-bast> #topic channel range clarification
19:06:32 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/737
19:07:27 <t-bast> The only remaining issue is whether to allow a block to spill across multiple messages
19:07:44 <t-bast> rationale being that theoretically, without compression, a block full of channel opens couldn't fit in a single message
19:08:08 <t-bast> From previous discussions, CL and eclair thought it was too much of an edge case to worry about, LL wanted to investigate a bit more
19:09:00 <t-bast> wpaulino had a look at the PR but didn't ACK; the way I understood it he's ok with requiring a block to be fully contained in a single message, but I'd like an ACK from him or someone else on the LL team?
19:09:50 <rusty> In theory you could return a random sampling in that case and mark your response `full_information` = false.  (c-lightning just fails to respond in this case tho)
19:10:38 <roasbeef> feels like either a txindex in the response, or an explicit "ok i'm done" would resolve this nicely
19:10:44 <t-bast> true, but it means that when full_information = false we can never know when you're done sending replies
19:11:15 <roasbeef> i don't think the 'complete' field as is, is even used properly, nor is there any support for signalling a node has only part of the graph
19:11:17 <t-bast> I agree that an explicit indication that everything is sent would be much easier -> why not add a TLV?
19:11:22 <rusty> t-bast: no, you're done still.  You just didn't send it at all.
19:11:36 <roasbeef> t-bast: yeh in the past, we erroneously used 'complete' as that
19:11:48 <roasbeef> but it made everything a lot simpler on our end
19:11:50 <t-bast> rusty: ah ok, so senders aren't allowed to send one block in multiple replies - with that assumption it works
19:12:21 <rusty> t-bast: ack.  We could allow it except for the final block, but that seems like we're not really helping.
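(A rough receiver-side sketch of the semantics rusty describes: under the "one block never spans two replies" rule, the receiver knows the sender is done once the replies cover the queried range. The coverage test below is an assumption of how #737's clarification plays out, not spec text.)

```python
def replies_complete(query_first, query_num, replies):
    """replies: ordered list of (first_blocknum, number_of_blocks) tuples."""
    covered_until = query_first
    for first, num in replies:
        if first > covered_until:
            return False  # a gap in coverage would be a protocol violation
        covered_until = max(covered_until, first + num)
    # done once the replies cover the whole queried range
    return covered_until >= query_first + query_num
```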
19:12:27 <t-bast> it seems to me that the issue of a block full of channel opens can safely be ignored; are you really not using compression?
19:13:04 <bitconner> did we figure out a rough number for how much compression helps when sending timestamps and checksums?
19:13:24 <t-bast> I thought you were checking that :D
19:13:31 <bitconner> no?
19:13:44 <bitconner> i don't recall that :P
19:13:51 <t-bast> I'd shared rough numbers a month ago, I'll try to dig them up
19:14:08 <t-bast> IIRC rusty or christian had shared some calculations too
19:15:28 <cdecker> I have historical gossip data, no info on compressability though
19:16:02 <bitconner> ideally the numbers would be about worst-case compression too
19:16:40 <t-bast> we'd need more than 3.5k channel opens in a single block and to request both timestamps and checksum without compression to be in trouble
19:17:14 <rusty> Even assuming the checksums and timestamps don't compress, the short_channel_ids are an ordered sequence (with same block num).  That's basically 3 of 8 bytes for free, so it's 5 bytes per entry, 64k == 13,000 channels.  We can theoretically open 24k per block though.
19:17:25 <cdecker> Nice, so not fitting into a message is basically impossible for Bitcoin
19:17:53 * rusty repeats his promise to fix this as soon as it becomes a problem :)
19:18:10 <t-bast> I'm honestly in favor of not worrying about this now
19:18:24 <roasbeef> ok, we'll leave it up to our future selves then
19:18:49 <t-bast> alright so we're good to merge 737?
19:18:59 <bitconner> isn't it 5 + 4*4 bytes per entry?
19:20:09 <bitconner> which produces figures more like t-bast's 3.5k
19:20:48 <roasbeef> i think rusty isn't including the optional checksum and timestamp?
19:21:52 <rusty> bitconner: you're right, I forgot there are two csums and timestamps per entry.  So 24 bytes uncompressed.  So we're closer indeed to 3.5k
19:22:11 <t-bast> yeah my calculation was for the full thing, which yields around ~3k channels per message
19:22:26 <t-bast> timestamps are likely to compress quite well, so probably more than that
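(For reference, a quick sketch of the arithmetic behind the ~13k and ~3k figures above, assuming a 65535-byte message limit, ignoring the small fixed header, with 8-byte short_channel_ids whose ~3-byte shared block prefix compresses away, plus two 4-byte timestamps and two 4-byte checksums per channel when both query flags are set.)

```python
MSG_LIMIT = 65535          # max LN message size, header overhead ignored

scid = 8 - 3               # ~5 effective bytes once the block prefix repeats
timestamps = 2 * 4         # one timestamp per direction
checksums = 2 * 4          # one checksum per direction

print(MSG_LIMIT // scid)                             # 13107 channels/message
print(MSG_LIMIT // (scid + timestamps + checksums))  # 3120 channels/message
```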
19:22:31 * rusty really needs to get back to set reconciliation which doesn't have this problem (I promise all new problems!)
19:22:41 <t-bast> hehe
19:22:44 <bitconner> the timestamps aren't ordered tho, so not necessarily
19:22:55 <rusty> Yeah, timestamps have to be in a 2 week range though.
19:23:08 * cdecker is reminded of the 12 standards XKCD when it comes to gossip protocols xD
19:23:12 <bitconner> but there def will be similar prefixes
19:24:08 <t-bast> exactly, so we'll be around at least 3 or 3.5k channels per message; far from something that should worry us IMHO
19:24:09 <bitconner> i agree the chances of it being an issue are pretty slim. in general we should revisit gossip longer term with invs, better node ann querying, etc
19:25:10 <t-bast> I think if we start seeing a rate of 3k channel opens every 10 minutes, we'll have a lot more problems than the gossip :D
19:25:43 <t-bast> So shall we move on and merge this?
19:25:46 <cdecker> Well gossip might still be the issue, but the message size limitation isn't likely to be the biggest issue :-)
19:25:52 <cdecker> ACK
19:26:22 <roasbeef> sgtm
19:26:34 <bitconner> sgtm as well
19:26:41 <t-bast> #action merge #737
19:26:56 <t-bast> #topic Stuck channels
19:26:59 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/740
19:27:28 <t-bast> We're still going with the option of the additional reserve for lack of better short-term solution
19:27:49 <t-bast> The main issue with this PR honestly is just that wording this is a nightmare
19:28:40 <rusty> Yes, we've implemented this for now.
19:28:56 <t-bast> It spun off a lot of interesting discussions, but I think there's overall agreement on the mechanism: reserve some buffer for a future extra HTLC, with a feerate increase of x2 (which is just a recommendation)
19:30:43 <ariard> yeah it seems to blur 2 problems: preventing the initiator from getting the channel stuck by offering an HTLC, and safeguarding against fee spikes over some time N to keep incoming capacity
19:31:18 <t-bast> the safeguard against fee spikes is really for that specific problem, to avoid getting stuck
19:31:30 <t-bast> because that's how you currently get into that situation
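(A minimal sketch of the funder-side check under discussion; helper names are hypothetical, the x2 feerate multiplier is #740's recommendation rather than a hard rule, and the weights are the rough BOLT 3 figures.)

```python
COMMIT_WEIGHT = 724        # approximate base commitment tx weight (BOLT 3)
HTLC_WEIGHT = 172          # approximate marginal weight per HTLC output

def commit_fee(feerate_per_kw, num_htlcs):
    # fee in satoshis for a commitment carrying num_htlcs HTLCs
    return feerate_per_kw * (COMMIT_WEIGHT + num_htlcs * HTLC_WEIGHT) // 1000

def funder_can_send_htlc(funder_balance, reserve, feerate_per_kw,
                         num_htlcs, htlc_amount):
    # After sending this HTLC, the funder must still afford the commit fee
    # at twice the current feerate, counting the num_htlcs existing HTLCs,
    # the one being added, and one extra future HTLC, without dipping into
    # the channel reserve.
    fee_buffer = commit_fee(2 * feerate_per_kw, num_htlcs + 2)
    return funder_balance - htlc_amount >= reserve + fee_buffer
```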
19:32:12 <rusty> t-bast: my only complaint is that since the non-fee-payer MUST NOT send an HTLC which causes the fee-payer to dig into reserves, the alternate solution (letting just one more HTLC through in this case) is now technically banned.
19:33:57 <t-bast> rusty: that's true. We previously didn't have any requirement on the non-fee payer, do you want to keep it at that?
19:34:14 <t-bast> Or turn the third paragraph into a SHOULD NOT?
19:34:38 <roasbeef> seems like things are pretty active on the PR in the spec repo, anything critical that needs to be discussed here?
19:35:15 <rusty> t-bast:  SHOULD NOT.  Everyone implemented a limit despite not being required to, which caused this problem.
19:35:24 <roasbeef> on the implementation side, we'll need to make sure all our interpretations of the spec are actually compatible
19:35:38 <roasbeef> I can see one impl having slightly diff behavior which causes a de-sync or force close
19:36:11 <t-bast> roasbeef: really? I don't think there's a risk there, you're just implementing behavior to restrict yourself (when funder) from sending too much
19:36:38 <t-bast> this PR doesn't impose any requirement on your peer, so I think it should be safe
19:36:41 <joostjgr> i also don't think there is a risk of de-sync
19:36:54 <rusty> roasbeef: we haven't added any receive-side restrictions, just what you should send.  Still possible of course, but not a new problem.
19:37:00 <t-bast> rusty: alright, I'll make that a SHOULD NOT
19:37:12 * rusty reads https://lsat.tech/macaroons... nice job LL!
19:38:18 <t-bast> I agree that we're able to make progress on github (thanks to the reviewers!) so let's skip to the next topic?
19:38:47 <roasbeef> gotcha, only sender stuff makes sense, still seems like whack-a-mole in the end, but if it helps to resolve cases in the wild today then it may be worthwhile (dunno what impl of it looks like yet)
19:38:51 <roasbeef> t-bast: sgtm
19:39:25 <t-bast> #action keep iterating on wording, fundee requirement SHOULD instead of MUST
19:39:51 <t-bast> #topic TLV streams everywhere
19:39:54 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/754
19:40:05 <t-bast> This one just got an ACK from roasbeef, so I feel it's going to be quick :)
19:40:27 <roasbeef> approved it, we need to shuffle some bytes on disk to make it work in our impl, but it's sound
19:40:27 <t-bast> I just had one thought about that recently though for which I'd like your feedback
19:40:30 <rusty> So AFAICT the obvious step is to make option_upfront_shutdown_script a compulsory option.
19:41:06 <t-bast> For the specific case of update_add_htlc -> allowing a TLV extension means that there won't be a single fixed size for the packets going through the network
19:41:15 <t-bast> That looks like a privacy issue
19:41:46 <t-bast> It will be easier for network layer attackers to correlate traffic if update_add_htlc messages differ in size depending on the features they use...
19:42:00 <rusty> t-bast: a minor one though; in practice update_add_htlc is already distinctively sized from other messages, and there aren't enough at once to confuse.
19:42:17 <t-bast> yes but at least all update_add_htlcs look the same to a network observer...
19:42:37 <cdecker> Right, TLV extensions are also only defined on a peer level, not across multiple hops
19:43:09 <cdecker> So a distinctively sized update_add_htlc doesn't get propagated (unless we explicitly add such a feature later)
19:43:28 <cdecker> Probably best to keep that one in mind, but not worry about it for now
19:43:39 <rusty> cdecker: good point, so this critique only applies when we use it, which is always true.
19:43:39 <roasbeef> mhmm, and if you're modifying update_add_htlc, you need further changes in the layer above (routing feature bits, how to handle it, onion payload implications, etc)
19:43:49 <t-bast> for path blinding we are adding an ephemeral point in the TLV extension of update_add_htlc once you're inside the blinded path :)
19:44:20 <roasbeef> t-bast: not up to date on this stuff, but why not in the onion instead?
19:44:22 <cdecker> Yep, in that case it'd apply
19:44:31 <bitconner> it could be the case tho that multiple hops need to add some tlv-extension in order for the payment to work
19:44:38 <rusty> roasbeef: you need the ss to decrypt the onion itself.
19:44:50 <bitconner> if there's any variable length data it would be pretty easy to track
19:44:54 <rusty> (tells you what the key tweak is)
19:45:03 <t-bast> roasbeef: because it's kind of a reverse onion, I don't think we can put it in the onion...
19:45:05 <cdecker> bitconner: that's true, but not specific to this proposal
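(A tiny illustration of the size concern, using BOLT 2 field sizes; the 33-byte blinding-point record is the hypothetical extension t-bast mentions, with an assumed 1-byte TLV type and 1-byte length.)

```python
BASE = 2 + 32 + 8 + 8 + 32 + 4 + 1366   # type + channel_id + id + amount_msat
                                        # + payment_hash + cltv_expiry + onion

blinding_tlv = 1 + 1 + 33               # hypothetical type + length + point

print(BASE)                 # 1452 bytes: identical for every vanilla HTLC
print(BASE + blinding_tlv)  # 1487 bytes: distinguishable on the wire
```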
19:45:34 <t-bast> yes it's mostly an issue for later, when we actually start adding TLVs at the end of update_add_htlc
19:45:35 <cdecker> _any_ extension mechanism would result in distinctive message sizes, wouldn't it?
19:45:41 <rusty> We can theoretically mask this stuff with pings, but I've shied away from that because it's easy to get the sizes wrong and not actually obscure anything.
19:46:10 <bitconner> cdecker, yes it does. particularly for update_* messages
19:46:34 <bitconner> agreed we can address later tho
19:46:51 <cdecker> If they are getting propagated along with the HTLC itself, yes, otherwise I don't see a problem
19:47:15 <t-bast> sgtm, I wanted to mention it so we keep it in mind when building on top of these extensions for update_add_htlc, but we can probably move on for now
19:47:33 <cdecker> Let's keep it in mind for when we have proposals that'd extend the update_* messages along (parts of) the path
19:47:44 <t-bast> Looks like I got two approvals, so we'll get that one merged
19:47:51 <t-bast> #action merge #754
19:48:05 <t-bast> #topic Wumbo advisory
19:48:07 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/746
19:48:14 <t-bast> Very quickly, just want your opinion on
19:48:26 <t-bast> * whether this advisory should be in the spec or not
19:48:34 <t-bast> * if yes, is the current wording ok
19:48:47 <t-bast> I think it's small enough to be useful for people
19:48:56 <t-bast> Without adding too much bloat
19:49:43 <cdecker> Yes, but if we add every small tidbit to the spec it'll grow indefinitely. A separate best practices document would be better suited for these non-essential things
19:49:44 <roasbeef> i think it doesn't hurt, many aren't even aware of the link between csv value and security
19:49:45 <ariard> we may need some annex for anchor_output on how to implement a pool of bump coins and effective aggregation algorithms, why not a security annex?
19:50:07 <roasbeef> idk if scaling csv and confs is non-essential, would you accept a 100 BTC channel with 1 conf and a csv value of 2 blocks?
19:50:24 <roasbeef> we can keep it minimal, but ppl should be aware of the considerations
19:50:37 <cdecker> No, I would apply my usual timeouts which are good for that
19:50:40 <roasbeef> i think lnd is the only implementation that scales both conf and csv value according to the size of the channel as is (I could be wrong though)
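(A hypothetical scaling rule of the kind being debated, illustrative only and not lnd's actual formula: interpolate the confirmation count and csv delay between a floor and a cap based on channel capacity, taking the pre-wumbo 2^24 sat limit as the reference point.)

```python
def scaled_param(capacity_sat, floor, cap, ref_capacity=2**24):
    # linear interpolation between floor and cap, clamped at the reference
    ratio = min(capacity_sat / ref_capacity, 1.0)
    return floor + round((cap - floor) * ratio)

capacity = 16_000_000                                    # ~0.16 BTC
min_conf = scaled_param(capacity, floor=3, cap=6)        # -> 6 confirmations
csv_delay = scaled_param(capacity, floor=144, cap=2016)  # -> 1929 blocks
```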
19:51:08 <t-bast> I agree with roasbeef; I find these two lines useful
19:51:41 <roasbeef> yeh two lines minimally, I don't think this _hurts_ anyone, just more context for security considerations
19:51:44 <rusty> t-bast: commented. Advice is bad.
19:51:51 <cdecker> Ok, my objection is by no means a very strong opinion, but I worry about the readability and size of the spec (both are abysmal as it is...)
19:51:58 <t-bast> we only scale confirmations in eclair
19:52:09 <rusty> conf is good, scaling csv is not logical.
19:52:15 <roasbeef> how is scaling csv not logical?
19:52:19 <rusty> (I commented on PR just now)
19:52:33 <rusty> https://github.com/lightningnetwork/lightning-rfc/pull/746#issuecomment-606209902
19:52:36 <roasbeef> imo if you have more funds in the channel, you want a longer period to act just in case stuff breaks down
19:52:45 <roasbeef> especially if you don't have a tower...
19:53:06 <rusty> roasbeef: and it's more painful to be without funds for that time.  In theory these two things cancel out.
19:53:22 <roasbeef> err I don't think so, loss of funds >>>> time value loss
19:53:32 <bitconner> it's more painful to be without the funds permanently imo
19:53:45 <roasbeef> if it doesn't matter, then why don't we just all set csv to 1 everywhere?
19:53:53 <rusty> roasbeef, bitconner: sure.  But does it scale differently with the amount?
19:53:54 <roasbeef> 1 block, better act quickly!
19:54:09 <t-bast> maybe then update the wording to say that implementations "MAY" provide means of scaling either of these two values (if they see fit)?
19:54:34 <bitconner> i'd be okay with that, both have cases where you may consider adjusting
19:55:24 <rusty> The conf thing is definitely worth noting.  The other one is not rational (but that doesn't mean it's not worth considering).
19:55:45 <roasbeef> err you haven't really provided a convincing argument that it isn't "rational"
19:56:07 <roasbeef> but let's finish hashing it out on the spec, seems we're down to just wording
19:56:28 <roasbeef> minimally, it should have something like: "you should consider increasing these values as they're security parameters"
19:56:43 <roasbeef> scaling/increasing with chan size
19:57:00 <rusty> roasbeef: we've had this debate before, though, seems weird to rehash.
19:57:14 <ariard> you definitely want to scale up csv with channel amount, an attacker may spam the mempool or delay your justice transaction propagation
19:57:29 <roasbeef> ariard: yep, it's a *security* parameter
19:57:58 <roasbeef> t-bast: what's up next? we can continue this in the spec
19:58:07 <t-bast> agreed, let's continue that on the PR
19:58:10 <rusty> ariard: wah?  But you can spend more on the penalty tx if it's larger.
19:58:19 <cdecker> You should select parameters that you feel comfortable with independently from the amount: either you can react or you can't; the potential loss only comes into play once the risk of loss is high enough
19:58:46 <ariard> rusty: a high-value channel may motivate an infrastructure attacker to just eclipse your full-node to drop your justice tx
19:58:53 <roasbeef> amount totally matters...
19:58:58 <t-bast> #action tweak the wording to give more leeway to implementers in choosing what they want to make configurable
19:59:02 <ariard> a higher csv lets you do manual broadcast or any emergency broadcast stuff
19:59:21 <roasbeef> ariard: yep, or give your tower more time to work, or w/e other back up measures you may have
19:59:46 <t-bast> Let's move on to anchor outputs?
20:00:17 <roasbeef> so I sent an email out before the meeting
20:00:23 <t-bast> #topic Anchor Outputs
20:00:26 <cdecker> I need to drop off in a couple of minutes
20:00:33 <roasbeef> discussing stuff on IRC is hard, so i think we'll have more progress with that ML thread
20:00:43 <t-bast> Ok so let's keep that for the mailing list then
20:00:46 <rusty> ariard: that's plausible.
20:00:53 <roasbeef> in the email i talk about our impl and plans, and then go thru some of the current concerns w/ two anchors
20:00:54 <t-bast> #topic Rendezvous? cdecker? :)
20:00:58 <BlueMatt> I added another comment on gh about 10 minutes ago pointing out that two anchors isn't just a waste of fees, it's actually insecure
20:01:19 <cdecker> We can talk about rendezvous, sure
20:01:33 <joostjgr> replied to that. to_remote is encumbered by 1 csv. so i was wondering, how are there two spend paths for remote?
20:01:43 <joostjgr> ok, continue on the pr
20:01:58 <t-bast> thanks, let's continue anchor on the PR and mailing list
20:02:05 <t-bast> Let's have an update on rendezvous from cdecker
20:02:31 <ariard> joostjgr: currently reviewing PR but have you considered an upgrading mechanism for already deployed channels?
20:02:33 * cdecker digs up the branch for the proposal
20:02:45 <roasbeef> ariard: yep, I mention that in the email
20:02:58 <roasbeef> should have a draft of the scheme we've come up with posted to the ML in the next week or two
20:03:14 <ariard> roasbeef: cool, given there are no new parameters to negotiate, that shouldn't be that hard
20:03:19 <roasbeef> it's pretty important imo, given that most chans still aren't using static to_remote keys and ppl hate closing channels lol
20:03:27 <cdecker> #link https://github.com/lightningnetwork/lightning-rfc/blob/rendez-vous/proposals/0001-rendez-vous.md
20:03:37 <ariard> I agree, let's avoid a useless chain write
20:03:51 <cdecker> So we have a working construction for rendez-vous onions in that proposal
20:03:52 <roasbeef> ariard: yeh it's basically some tlv extensions to make the commitment type explicit, which lets you create new commits with a diff type than the former
20:04:13 <cdecker> It works by creating the onion in such a way that we can cut out the middle part and regenerate that at the RV node
20:04:45 <ariard> roasbeef: so a generic mechanism to cover future commit tx formats, like the taproot one, or when we drop the CSV delay after mempool improvements?
20:04:46 <cdecker> While it is a working proposal it has a number of downsides
20:05:16 <cdecker> First and foremost it is not reusable, since the onion would always look the same in the latter part
20:05:28 <cdecker> The path encoded is fixed in the onion
20:05:39 <cdecker> And the size of the compressed onion is still considerable
20:06:00 <roasbeef> ariard: mhmm, also with this, all we need to do for taproot/schnorr is update the multi-sig script (just the single key), and then we can defer modifying the commitment until later as there's a pretty large design space for that
20:06:06 <cdecker> So for these reasons we have mostly stashed that proposal for the time being
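(A very high-level sketch of the construction described above; all helper names and the exact division of labour are assumptions based on the proposal text, not its actual API.)

```python
def recipient_prepares(recipient_hops, rv_node):
    # The recipient builds the half-onion from the RV node to itself and
    # arranges the filler so the middle section is predictable, then cuts
    # that section out to obtain a compressed partial onion.
    partial = build_onion([rv_node] + recipient_hops)  # hypothetical helper
    return cut_predictable_middle(partial)             # hypothetical helper

def sender_sends(sender_hops, rv_node, compressed_partial):
    # The sender learned compressed_partial out of band (e.g. in an invoice)
    # and wraps a normal onion to the RV node around it; the RV node
    # regenerates the filler, restoring a full-size onion to forward.
    return build_onion(sender_hops + [rv_node],
                       final_payload=compressed_partial)
```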
20:06:23 <roasbeef> cdecker: this is distinct from the "blinded hop hint" stuff right?
20:06:30 <t-bast> Interesting, I thought you were still making progress
20:07:00 <cdecker> roasbeef: Yep, this is the classical rendez-vous construction for sphinx with a couple of extra twists due to the increased per-hop payloads
20:07:02 <t-bast> roasbeef: yes the blinded paths are a different attempt at better privacy
20:07:27 <cdecker> rusty has mostly been concentrating on the blinded paths idea for the offers
20:07:50 <t-bast> and are you investigating something else?
20:07:51 <cdecker> I think we could end up using my rendez-vous proposal when combined with trampoline though
20:08:14 <t-bast> yes I believe with trampoline, your rendezvous proposal isn't very expensive (it's clearly acceptable)
20:08:50 <cdecker> We could have an onion that contains just a sequence of (non-consecutive) trampoline nodes, that'd then run the RV protocol
20:09:03 <cdecker> That way we can get recipient anonymity even in trampoline
20:09:56 <t-bast> what do you mean by "run the RV protocol"?
20:10:03 <cdecker> Don't get me wrong, the rendez-vous construction is a cool tool to have in our toolbelt, it's just not that well-suited for what we intended
20:11:02 <cdecker> I mean one of the trampolines would get the partial onion as payload (when it's the point where the two parts were stitched together) and have to recover the original partial onion
20:11:12 <cdecker> Otherwise it'd run the normal trampoline protocol
20:11:36 <t-bast> ok that sounds like what I had in mind
20:11:46 <cdecker> Great ^^
20:12:17 <t-bast> do you want me to prepare something for the next meeting to discuss trampoline then? and integrate that rendezvous option into it?
20:12:34 <cdecker> So yeah, I think the TL;DR for rendez-vous is: "We did stuff, it's cool, it works, it doesn't fit our intended application" :-)
20:12:51 <cdecker> That'd be great, I'd love to get trampoline moving again
20:13:26 <BlueMatt> :100:
20:13:29 <t-bast> Something I'd like to ask the whole room: would you like me to move the trampoline proposal to format closer to what cdecker did for rendezvous (https://github.com/lightningnetwork/lightning-rfc/blob/rendez-vous/proposals/0001-rendez-vous.md)?
20:13:47 <t-bast> I think that may be more helpful to review the high-level design before the nitpicks of the message fields
20:14:09 <t-bast> And would probably be useful to keep to help newcomers ramp up more easily
20:15:12 <cdecker> It'd certainly help me get up to speed before the meetings, and for people that just want a quick overview of a specific feature / change
20:16:18 <rusty> I just can't get used to the hyphen in rendez-vous :)
20:16:38 <cdecker> Honestly I'm not sure which one is correct either xD
20:16:58 <t-bast> rendez-vous is the valid french word :)
20:17:04 <ariard> t-bast: +1 for a high-level doc, I fear people bringing up the same points on the privacy tradeoffs again and again in the PR
20:17:17 <roasbeef> rownde-voo?
20:17:31 <rusty> t-bast: and maybe we should do the same for blinded paths...
20:17:31 <bitconner> the spec is written in english tho ;)
20:17:34 <t-bast> rownde-voo sounds like a cool new thing
20:17:43 <roasbeef> lol, that's how I pronounce it ;)
20:17:43 <bitconner> yee haw
20:17:44 <cdecker> Ok, the namer in chief has spoken, rowndee-voo it is ^^
20:17:46 <roasbeef> kek
20:17:50 <t-bast> rusty: blinded path is already in proposal format :)
20:17:52 <rusty> This is the dreaded Eltoo disease, I think!
20:17:54 <roasbeef> lolol
20:17:54 <ariard> sounds like some voodoo crypto
20:18:12 <cdecker> Right, rusty just for you we can rename it el-too :-)
20:18:27 <t-bast> el-too sounds spanish now
20:18:35 <bitconner> naming is def my favorite part of this process :)
20:18:44 <cdecker> Anyway, gotta drop off, see you around everyone ^^
20:18:52 <roasbeef> "uhh, lemmie get uhh #1, but rownde-voo, thank you I know y'all are closed n stuff now due to The Rona"
20:18:54 <t-bast> great I'll move trampoline to a format closer to the "proposals" then before next meeting, thanks all!
20:19:12 <bitconner> i'll also work on the hornet summary before next meeting
20:19:20 <t-bast> bitconner: neat, I'd love to read that
20:19:31 <t-bast> #action bitconner to work on a hornet summary
20:19:32 <bitconner> was a little busy doing lockdown things lol
20:19:41 <t-bast> #action t-bast to move trampoline to a proposals format
20:19:56 <rusty> ariard: BTW, I conceded your point about eclipse attacks on the issue, for posterity.
20:19:58 <bitconner> i'll shoot to send it out to the ML
20:20:26 <t-bast> bitconner: yeah so many things to do around the house it's crazy...did you know rice bags may contain up to 31 fewer grains for the same weight?
20:20:53 <roasbeef> t-bast: lol compared to like a rice _box_?
20:21:11 <rusty> t-bast: we should interop test at some point soon for blinded paths.  My implementation is pretty hacky, but should be sufficient to test.
20:21:12 <BlueMatt> roasbeef: (its a joke about being so bored one goes and counts rice grains :p)
20:21:14 <t-bast> boxes are bad for the planet, think about the penguins man
20:21:28 <roasbeef> lolol
20:21:31 <ariard> rusty: thanks, currently putting down a paper with gleb around eclipse+LN, to shed a bit more light on this
20:21:36 <ariard> and motivate some groundwork in core
20:21:42 <t-bast> rusty: heh then I should get started on actually implementing something xD
20:23:01 <rusty> t-bast: I still don't have a bolt11 format, so it's manually crafted paths for testing.  And my infra really doesn't like not having a short_channel_id in every hop, so there's a dummy one at the moment (which gets overridden by enctlv). And also I don't handle the new error code correctly.
20:23:23 <cdecker> Sounds great ariard, make sure to send a copy to the ML once it's done :-)
20:23:31 <rusty> ariard: nice!
20:23:45 <t-bast> rusty: heh sounds like I shouldn't be in too much of a rush then, perfect ;)
20:24:11 <t-bast> alright time to go back to counting lentils, thanks everyone for the discussions and see you around!
20:24:17 <rusty> t-bast: no, but I was happy to get the pieces basically working.
20:24:21 <t-bast> #endmeeting