19:00:21 <rusty> #startmeeting
19:00:21 <lightningbot> Meeting started Mon Feb  4 19:00:21 2019 UTC.  The chair is rusty. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:21 <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
19:00:28 <cdecker> PING bitconner BlueMatt cdecker johanth kanzure lightningbot lndbot roasbeef rusty sstone
19:00:48 <rusty> #info Agenda here: https://github.com/lightningnetwork/lightning-rfc/issues/566
19:00:50 * BlueMatt hasn't slept in 24 hours and has to leave soon
19:00:50 <sstone> Hi everyone!
19:01:17 <cdecker> Just added the multi-onion thingy in the agenda, feel free to push down the list if we don't have time :-)
19:01:18 <rusty> #info label to apply to issues/PRs for discussion: https://github.com/lightningnetwork/lightning-rfc/labels/2019-02-04
19:01:21 <cdecker> Heia sstone
19:02:07 <rusty> Hi all; would like to finally tag 1.0 if we can.  Should we apply the few pending clarification fixes first?
19:02:12 <kanzure> hi.
19:02:24 <niftynei> hello
19:02:25 <rusty> https://github.com/lightningnetwork/lightning-rfc/labels/spelling if you want to see the pending ones...
19:02:35 <lndbot> <johanth> hi :slightly_smiling_face:
19:03:00 <rusty> And of course: https://github.com/lightningnetwork/lightning-rfc/pull/550 which is approved but isn't technically a typo fix.
19:03:27 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/550
19:04:11 <roasbeef> added what's IMO a clarification that it's the current unrevoked point owned by that party
19:05:10 <rusty> Well, I guess the entire point is "this is the point they need for the commit tx you would broadcast now to unilateral close".
19:05:33 <roasbeef> yeh which is the one they haven't revoked yet
19:05:43 <rusty> Yes.
19:08:00 <rusty> OK, so now it's down to wording.  Perhaps we just append "(as it sent in its last revoke_and_ack message)" ?
19:08:14 <niftynei> it's not necessarily the unrevoked point right, it's whatever your commitment point is for the signed commit txn you have from your peer, right?
19:09:10 <niftynei> hmm actually i guess that is the same thing, because you don't revoke a commitment until you received the next one
19:09:33 <niftynei> in theory you could receive a new commitment and not get the chance to revoke it before your peer goes offline
19:10:08 <rusty> niftynei: well, it's whatever one you would broadcast.
19:10:31 <lndbot> <johanth> should be unrevoked from the sending node’s POV, no assumptions about the peer
19:10:32 <roasbeef> last unrevoked to me is "the lowest unrevoked", fwiw lnd nodes don't store that pending commitment and rely on the other party to broadcast it
19:10:55 <niftynei> my issue with tying it so concretely to the revoke action is that there's no ack that the other side ever gets your revocation... it's really based off the last valid signed commit you received, right?
19:11:17 <roasbeef> doesn't matter if they ack it, you've revoked so you shouldn't broadcast that commitment
19:11:35 <niftynei> like you're telling them your commitment point for the most valid commitment txn you have
19:11:51 <niftynei> 'most valid' aka 'non-revoked'
19:13:02 <niftynei> the important thing seems to be focusing on the fact that it's the commitment txn your node will publish if the re-establish messages show a state chain loss for the other party
19:13:04 <rusty> BTW, I thought revoke_and_ack contained the N+1th point, and we want the Nth point?
19:13:09 <roasbeef> at broadcast time, lnd also stores the last unrevoked to handle the unrevoked+pending commitment case
19:13:18 <roasbeef> so it'll be w/e we broadcasted
19:13:19 <niftynei> rusty i believe that's correct
19:13:39 <roasbeef> ah yeh that's right (re n+1)
19:14:03 <rusty> Yes, so it's *not* the one you sent in the last revoke_and_ack.  Hmm, not sure we can do better than the actual concrete requirement: the one corresponding to what you'll put onchain.
19:14:56 <rusty> ie. append "(the commitment_point the sender would use to create the current commitment transaction for a unilateral close)"
19:15:15 <niftynei> right which is exactly the spirit of the clarification i proposed
19:16:12 <rusty> I was trying to capture roasbeef's summary too.
19:17:21 <roasbeef> yeh, "would" or "did" works there
19:17:33 <roasbeef> since it's possible for them to broadcast that "top" commitment as well
19:17:33 <rusty> It is, logically, the one you use for the last signed commitment from the peer.  But in theory you could pause before sending revoke_and_ack, and have two valid commitment txs.  If your implementation were to broadcast the old one (we won't, it's atomic to us to send revoke_and_ack), *that* is what you must send.
19:18:07 <rusty> roasbeef: yes.
19:19:53 <niftynei> ah i see. so it's not necessarily the last received but the 'current closing candidate'?
19:20:12 <rusty> niftynei: exactly
19:20:32 <rusty> Whatever goes on-chain is what they need
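A minimal sketch of the rule the discussion converged on (send the point for whatever commitment transaction the node would broadcast in a unilateral close, i.e. its lowest un-revoked local commitment); the types and field names below are hypothetical, not lnd or c-lightning APIs:

    package sketch

    // Hypothetical local-commitment record; not a real implementation type.
    type LocalCommitment struct {
        Height             uint64
        PerCommitmentPoint [33]byte
        Revoked            bool
    }

    // currentPerCommitmentPoint picks the point to send in channel_reestablish:
    // the one for the commitment transaction this node would broadcast now in a
    // unilateral close, i.e. the lowest local commitment it has not yet revoked.
    func currentPerCommitmentPoint(commits []LocalCommitment) [33]byte {
        for _, c := range commits { // assumed sorted by ascending Height
            if !c.Revoked {
                return c.PerCommitmentPoint
            }
        }
        // Everything older is revoked; only the newest commitment is live.
        return commits[len(commits)-1].PerCommitmentPoint
    }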
19:21:12 <rusty> OK, I think we're going to timeout on this issue.  I've put a suggestion up, let's move on
19:21:22 <niftynei> :ok_hand:
19:21:30 <cdecker> Ok
19:21:50 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/558
19:22:01 <rusty> Trivial change to bring scripts into same form.
19:22:10 <rusty> Anyone object?
19:22:40 <cdecker> Sounds simple enough
19:22:43 <rusty> (We're also doing 558 562 563 if you want to read ahead :)
19:22:50 <lndbot> <johanth> lgtm
19:22:55 <rusty> #action apply https://github.com/lightningnetwork/lightning-rfc/pull/558
19:23:02 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/559
19:23:47 <cdecker> sgtm
19:23:52 <rusty> Another trivial clarification: the receiver sets max values, which the sender must not violate.
19:24:01 <rusty> #action apply https://github.com/lightningnetwork/lightning-rfc/pull/559
19:24:25 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/562
19:24:44 <cdecker> LGTM
19:24:45 <roasbeef> 558 should specify the precise data push
19:25:05 <rusty> roasbeef: it's kind of implied by the length, I guess.
19:25:36 <roasbeef> commented on the PR
19:25:42 <rusty> roasbeef: but yeah, this matches what we use elsewhere in those calculations, but more clarity would be nice.
19:25:55 <rusty> #action apply https://github.com/lightningnetwork/lightning-rfc/pull/562
19:26:20 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/563
19:26:37 <rusty> roasbeef: I think that would be a separate sweep, though.
19:26:56 <rusty> roasbeef: but if you want I can try to unaction it :)
19:27:15 <rusty> #action apply https://github.com/lightningnetwork/lightning-rfc/pull/563
19:27:20 <rusty> Now, that was faster :)
19:27:39 <rusty> #topic Finally tagging 1.0
19:27:51 <roasbeef> just saying "data" doesn't clarify any more than it is, since as you say it's arguably implicit, so if we're going to specify we should eliminate all ambiguity
19:28:08 <cdecker> btw rusty if you follow #action with one of the participants' names they'll be assigned the action in the meeting notes (I just assigned myself if no one else was jumping in)
19:29:54 <rusty> roasbeef: there's no convenient wording for those pushes, though.  Elsewhere it's annotated like (OP_DATA: 1 byte (pub_key_alice length))
19:30:09 <roasbeef> i mean like the exact op code
19:30:17 <sstone> roasbeef: and readers can check all the details in the test vectors
19:30:34 <cdecker> Well, OP_PUSH1<pubkey> would be my personal preference
19:30:36 <roasbeef> sure, they can, but if we're modifying it, should make it as explicit as possible
19:30:49 <cdecker> Gives the reader both the op-code as well as the content
19:30:56 <roasbeef> which may mean a larger sweep as rusty said
19:30:58 <rusty> roasbeef: to be clear, they're just doing the minimal to bring it into line with the others.
19:32:56 <rusty> roasbeef: and the point of this part of the spec is to merely establish the length/weight of the txs.
19:33:28 <cdecker> Right, we can always be more verbose and add details at a later point in time
19:33:53 <rusty> #action roasbeef to come up with better notation than OP_DATA for BOLT#3 weight calculations
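For readers following along: the "OP_DATA: 1 byte" entries in the BOLT #3 weight tables stand for the single push opcode in front of a fixed-size item (e.g. 0x21 before a 33-byte pubkey). A rough sketch of that byte accounting, with hypothetical names:

    package sketch

    // Byte accounting for a canonical 33-byte pubkey push in a witness script,
    // matching the "OP_DATA: 1 byte (pubkey length)" style used in BOLT #3.
    const (
        pubkeyLen  = 33 // compressed public key
        pushOpcode = 1  // OP_DATA_33 (0x21), the push opcode itself
    )

    // pubkeyPushSize is the script size contributed by "OP_DATA_33 <pubkey>".
    func pubkeyPushSize() int {
        return pushOpcode + pubkeyLen // 34 bytes
    }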
19:34:21 <cdecker> SGTM
19:34:23 <rusty> So, back to topic.  After those 4 spelling fixes, can we please tag 1.0?
19:34:33 <rusty> Like, git tag v1.0
19:34:47 <rusty> Then we can officially open the gates to The New Shiny.
19:35:12 <rusty> It's like I want to open my Christmas presents and you're all being all grinchy...
19:35:46 <cdecker> Happy to tag it, we've been mostly working on v1.1 features anyway
19:35:47 <rusty> (I realize the text is far from perfect, but time is finite).
19:35:53 <lndbot> <johanth> is the plan to modify the current docs for 1.1, meaning that you no longer can implement 1.0 without checking out the tag?
19:36:09 <cdecker> Yep
19:36:13 <lndbot> <johanth> or will it be clear from it what is towards 1.0, 1.1 etc
19:36:19 <lndbot> <johanth> ok
19:36:35 <rusty> lndbot / johanth: yes.  Though we're not planning any compat breaks, so it's a bit semantic.
19:37:03 <rusty> roasbeef: you happy for a v1.0 tag?
19:37:10 <rusty> sstone: and you?
19:37:59 <sstone> yes
19:38:08 <roasbeef> sgtm, will take a peek at pending open prs to see if there's anything that maybe should land
19:38:39 <rusty> #action rusty Tag v1.0 before next meeting, after approved fixes here.
19:38:50 <rusty> OK, looking at https://github.com/lightningnetwork/lightning-rfc/labels/2019-02-04
19:38:53 <sstone> just remembered that there's an error in the onion test vectors
19:39:34 <rusty> sstone: :( Any chance of a fix soon?
19:39:55 <sstone> yes it should be trivial
19:40:13 <rusty> #action sstone To fix onion test vectors, rusty to verify.
19:40:26 <rusty> Thanks!
19:40:28 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/557
19:40:52 <rusty> I remain to be convinced that we can't just suppress flapping nodes, BTW.
19:41:09 <rusty> (ie. remember that we have a dup with a later timestamp, and re-broadcast the old one).
19:41:31 <roasbeef> rusty: same, nodes can just spam to hell, and IMO it's futile to try to sync all the channel updates
19:41:43 <cdecker> #557 is looking much better, thanks sstone ^^
19:42:07 <roasbeef> instead clients can prioritize updates for channels they've used in the past, and eat the latency on an error due to a failed update. routing nodes don't really need them at all
19:42:08 <cdecker> roasbeef: absolutely, having a consistent view of the network topology is a red herring
19:42:25 <cdecker> But I understand sstone's wish to keep traffic low
19:42:26 <rusty> roasbeef: yeah, I think we'll need to suppress spammy ones *anyway*.
19:42:44 <rusty> cdecker: I think if nodes start suppressing dups he'll get his wish?
19:43:02 <cdecker> We might impose exponential backoff for chatty nodes
19:43:10 <rusty> (Though it implies you should not send more than 2 updates every 120 seconds to be safe)
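A minimal sketch of the duplicate-suppression idea rusty is describing (remember the last channel_update per channel and direction, and skip rebroadcasting one that only bumps the timestamp); field names are illustrative, not the exact channel_update layout:

    package sketch

    type updateKey struct {
        shortChannelID uint64
        direction      uint8
    }

    // updateBody is a channel_update minus its signature and timestamp.
    type updateBody struct {
        flags           uint16
        cltvExpiryDelta uint16
        htlcMinimumMsat uint64
        feeBaseMsat     uint32
        feeProportional uint32
    }

    var lastSeen = map[updateKey]updateBody{}

    // shouldRebroadcast reports whether a freshly received update changes
    // anything beyond its timestamp; if not, it can be suppressed or deferred.
    func shouldRebroadcast(k updateKey, b updateBody) bool {
        prev, seen := lastSeen[k]
        lastSeen[k] = b
        return !seen || prev != b
    }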
19:43:16 <sstone> to be clear: we're trying to sync A and B, not A and the rest of the world (i.e mobile nodes are connected to very few peers)
19:43:41 <rusty> sstone: but if B is suppressing redundant updates for you, I think you get your wish.
19:44:13 <rusty> (BTW, should I prioritize writing code so c-lightning doesn't ask for ALL the gossip on every connect?  I've been slack...)
19:44:36 <sstone> yes, possibly. but the new PR has a minimal impact on the spec (no new messages, just optional data)
19:45:03 <roasbeef> sstone: sync A and B?
19:45:08 <cdecker> Nah, c-lightning can deal with the extra data :-)
19:45:22 <rusty> sstone: but if we suppress redundant updates, I think your extension becomes pointless?
19:45:31 <rusty> cdecker: what's 20MB between friends?
19:45:35 <sstone> sync a mobile node that's offline most of the time and the peer we're connected to
19:46:23 <rusty> #action rusty c-lightning to do something smarter than retrieve all gossip from peer before 0.7 release.
19:46:32 <sstone> if it really becomes pointless then people will stop using it, but I doubt it will happen that soon
19:47:15 <rusty> sstone, roasbeef: Well, anyone object to me implementing dup suppression to see how it goes?  Then I'll look at implementing the extension if that doesn't work?
19:47:31 <roasbeef> sstone: don't see how that distinction changes anything, you'll still end up trying to get all the updates since you were down, or all chans that updated since then
19:47:50 <roasbeef> (unless it has preferential querying)
19:48:23 <rusty> roasbeef: there was a good on-list analysis of how much gossip is redundant; it's a lot :(
19:48:24 <sstone> we're trying to minimize traffic and startup time. we've implemented it and the gains are massive right now
19:48:24 <cdecker> Well, actually if you're offline and reconnect you should only get the channel_announcement and the latest channel_update, not all prior ones
19:48:50 <cdecker> The duplicate channel_update stuff can only happen while online or while trying to sync from multiple nodes simultaneously
19:49:10 <roasbeef> only massive as y'all try to always sync all updates rn?
19:49:29 <cdecker> (and duplicates for nodes that you know of, that have had an update, which I now realize is exactly what you're trying to suppress, my bad)
19:49:54 <sstone> you come back online, your peer has 1000 updates with newer timestamp, only 10 of them carry actual change
19:50:16 <roasbeef> ahh gotcha, assuming you want all the updates
19:50:26 <ott0disk> the trick of the PR is to not retrieve updates that did not change; this is done via the checksum.
19:50:41 <sstone> I'm almost not making the numbers up :)
19:50:53 <roasbeef> what if peers just reject updates that don't change at all from the last one
19:51:02 <cdecker> What kind of savings are we talking about here sstone? Just curious
19:51:10 <ott0disk> although you still want to have one every 2 weeks to detect the stale channels
19:51:17 <rusty> roasbeef: that's what I suggested, but it doesn't help if they actually are doing "disable, reenable".
19:51:37 <sstone> a good 80% less traffic on the current network
19:51:43 <roasbeef> same thing w/ the checksum tho right? they'd compute and refetch since that bit flipped
19:51:47 <rusty> ott0disk: yes, as I suggested in my post onlist, you need to allow for weekly updates.
19:52:03 <rusty> roasbeef: not if it happened since you went offline.  ie. two updates, you're back where you started.
19:52:12 <cdecker> roasbeef: we've also had a few variants on the ML where a flapping channel would get overridden by the second-to-last update, resulting in the wrong end state
19:52:53 <rusty> ie. they're not *strictly* redundant at the time, just in retrospect they are.
19:53:16 <sstone> also I believe that the checksum trick could be useful for INV-based gossip, and potentially for set-based reconciliation (still think it's a bad fit though, must be missing smthg)
19:53:55 <rusty> sstone: adler was a nice touch, I was having flashbacks to the zlib source code :)
19:54:45 <sstone> it was pm's idea (we used it a long time ago on another project)
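For context on the checksum trick being referenced: PR 557 lets a node advertise a cheap checksum of each channel_update computed without the signature and timestamp, so a resyncing peer can skip updates whose content did not actually change. A rough sketch of the idea (the field list and byte layout are illustrative, not the PR's exact encoding):

    package sketch

    import (
        "encoding/binary"
        "hash/adler32"
    )

    // updateChecksum hashes only the policy-carrying fields of a channel_update,
    // so two updates that differ only in timestamp (and signature) collide.
    func updateChecksum(flags, cltvExpiryDelta uint16, htlcMinimumMsat uint64,
        feeBaseMsat, feeProportional uint32) uint32 {

        buf := make([]byte, 20)
        binary.BigEndian.PutUint16(buf[0:2], flags)
        binary.BigEndian.PutUint16(buf[2:4], cltvExpiryDelta)
        binary.BigEndian.PutUint64(buf[4:12], htlcMinimumMsat)
        binary.BigEndian.PutUint32(buf[12:16], feeBaseMsat)
        binary.BigEndian.PutUint32(buf[16:20], feeProportional)
        return adler32.Checksum(buf)
    }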
19:55:15 <cdecker> Ok, so from my side this is a far better proposal than the last one. It leaves a lot of flexibility, is fully backward compatible and doesn't need signaling nor new messages
19:55:21 <rusty> OK, so redundant suppression helps if they're literally redundant, but the csum proposal helps if they're just transient but not literally redundant.  I'll have to record the exact gossip to determine which it is, unless sstone has that data already?
19:55:44 <rusty> cdecker: agreed.  If we need something like this, this is nice and minimal.
19:56:42 <sstone> yes we have "6-hourly" dumps of our routing table (need to check we're still doing that)
19:57:11 <cdecker> I have quarter hourly stats on the channel state if that helps
19:57:16 <rusty> sstone: that won't tell me what I need: whether consecutive updates are identical.  That's OK, I can figure it out.
19:57:45 <rusty> roasbeef: I guess the question is, do you have any major objections to the extension?  Seems pretty clean to me...
19:58:08 <roasbeef> haven't looked at specifics of the latest
19:59:02 <rusty> roasbeef: OK, cool, let's shelve.  But no objections from me.
19:59:07 <rusty> Next topic?
19:59:13 <roasbeef> diff looks much smaller than the last ;)
19:59:22 <rusty> Yeah :)
19:59:24 <cdecker> Hehe
19:59:32 <rusty> cdecker: you wanted to discussion onion fmt?
19:59:33 <roasbeef> prob also a meta q on whether it should be TLV or not (as a new optional field)
20:00:08 <cdecker> Yep, roasbeef has opened a PR here https://github.com/lightningnetwork/lightning-onion/pull/31 about the multi-hop onion
20:00:32 <rusty> #action Continue discuss https://github.com/lightningnetwork/lightning-rfc/pull/557 on PR
20:00:43 <cdecker> And I added a (rather lengthy) comment below after reviewing it. I'd like to invite everybody to take a look and see what works better for them
20:00:46 <rusty> #topic https://github.com/lightningnetwork/lightning-onion/pull/31 multi-hop onion
20:01:01 <cdecker> I have listed all the pros to my approach there, so I won't repeat them here
20:01:13 <rusty> cdecker: I like getting more room in the onion for roasbeef's movie streaming service.
20:02:18 <roasbeef> we can get those 64 or w/e bytes just by eliding the mac check between the pivot and final hop, i favor the multi unwrap as it keeps the processing+creation code nearly identical
20:02:21 <cdecker> But I was wondering, since I'm implementing this and I keep getting confused with hops (as in single payload size) and hop-payload (as in actual payload being transferred), if everybody is ok if I use the term "frame" to refer to a single 65-byte byte slice
20:02:25 <rusty> cdecker: have you implemented it yet?  I worry that hmac position will be awkward.
20:02:44 <roasbeef> otherwise we may end up with 3 diff ways: rendezvous, regular, multi onion
20:02:56 <cdecker> I'm implementing it now
20:03:22 <cdecker> roasbeef: no, we end up with 2 formats: legacy, and TLV (which includes rendez-vous and spontaneous payments and all the rest)
20:03:36 <roasbeef> processing
20:03:41 <cdecker> TLV being the multi-onion/single-onion case
20:03:43 <rusty> cdecker: indicated by realm byte 0 vs other, right?
20:03:57 <roasbeef> also i think TLV should be optional, and there's a big hit for signalling those values
20:04:04 <roasbeef> and many times makes what should fit into a single hop fit into 3
20:04:11 <cdecker> Yep, realm 0 (i.e., MSB unset) keeps exactly the same semantics and processing as the old one
20:04:12 <roasbeef> also with type+TLV we get more "solution" space
20:04:22 <cdecker> We only differentiate in how we interpret the payload
20:04:49 <roasbeef> so up to 65k depending on size of type
20:04:54 <cdecker> On the contrary I think TLV is eventually the only format we should support
20:04:59 <rusty> roasbeef: well, I consider framing and format independent topics.  Whether we end up implying new format from new framing, I don't mind.
20:05:12 <roasbeef> we have a limited amount of space, unlike the wire message where it's 65kb
20:05:18 <cdecker> Actually it's 32 + 32 + 18*65 bytes maximum payload
20:05:22 <roasbeef> yeh as is, it doesn't care about framing it just passes it up
20:05:34 <roasbeef> (of the payload)
20:06:12 <roasbeef> cdecker: so i mean that you have options if there's a distinct type, then optional tlv within that type, vs having to agree globally on what all the types are
20:06:22 <roasbeef> (all the tlv types)
20:06:36 <cdecker> So with my proposal we just read the realm, the 4 MSB tell us how many additional frames to read, and then we just pass the contiguous memory up to the parser which can then differentiate between legacy realm 0 and TLV based payload
20:07:06 <rusty> cdecker: BTW, do you go reading the realm byte to determine the hmac position before you've checked the hmac?
20:07:10 <roasbeef> i brought up the value of invoiceless payments in australia, ppl asked what ppl would actually use it for (many were skeptical), once it dropped many app devs were super excited w.r.t the possibilities
20:07:28 <rusty> roasbeef: yeah, blog post coming RSN on why it's a terrible idea :)
20:07:38 <roasbeef> why what is?
20:07:49 <rusty> roasbeef: not giving a receipt :)
20:08:03 <cdecker> Well, if we make TLV correspond to the second realm (realm 1) we still have 254 realms to be defined if we want to drop TLV at some point
20:08:19 <rusty> roasbeef: but to be fair, doing it "right" means we need bolt11 offers, which is a bunch of work.
20:08:26 <cdecker> rusty: the HMAC is just the last 32 bytes of the payload, which can be computed from the realm and then we just check that
20:08:28 <roasbeef> yeh, so packet level type that tells you how to interpret the bytes to the higher level
20:08:50 <roasbeef> rusty: depends entirely on the use case, in many cases you can obtain the same with a bit more interaction at a diff level
20:09:04 <roasbeef> there aren't even any standards for receipts yet afaik
20:09:15 <roasbeef> and this is prob the most requested feature i see
20:09:46 <cdecker> Did I just lose the multi-frame discussion to a side-note? xD
20:09:52 <rusty> roasbeef: I agree.  Anyway, let's stick to topic.
20:10:31 <roasbeef> if you have the optional tlv, then you need a length as well
20:10:40 <rusty> cdecker: I'd like to see the implementation.  I think that moving framing is cleaner than roasbeef's redundant HMACs.
20:11:00 <cdecker> Anyway, if we stick to unwrapping the onion incrementally, and just skip HMAC like roasbeef suggests, we do a whole lot of extra crypto operations, we don't get contiguous payloads and have to copy to reassemble, and we don't get any advantage from it imho
20:11:01 <roasbeef> as in how many bytes in a frame to consume (as the impl does), otherwise you end up passing extra zeros if you don't have a padding scheme
20:11:17 <roasbeef> no advantage? you get the extra space
20:11:29 <cdecker> No, I mean over my proposal
20:11:43 <roasbeef> it's simple imo, identical creation+processing
20:11:45 <rusty> roasbeef: 0 tlv is terminal, usually.
20:11:46 <roasbeef> simpler*
20:12:04 <rusty> roasbeef: that's why I want to see an implementation of cdecker's proposal.
20:12:09 <cdecker> And I'm saying mine is even simpler since all we do is changing the shift size
20:12:12 <cdecker> :-)
20:12:24 <roasbeef> but processing changes completely, no?
20:12:33 <roasbeef> this way you do the same processing as rn in a loop
20:12:37 <cdecker> I'm implementing it now and I'll publish on the ML
20:13:04 <cdecker> roasbeef: no, processing just gets variable shifts as indicated by the realm
20:13:10 <rusty> OK, we're 10 minutes over.  Any final topics worth a mention?
20:13:17 <roasbeef> yeh, it changes ;)
20:13:27 <cdecker> if the 4 MSB of the realm are 0, shift by 65, if they are 0b0001 then shift by 130 and so on
20:13:56 <cdecker> Nope, we'll bike-shed on the mailing list :-)
20:13:57 <roasbeef> creation too
20:14:18 <rusty> I would like to point people at https://github.com/rustyrussell/lightning-rfc/tree/guilt/tests which is my very alpha code and proposal for JSON-formatted test cases in the spec.
20:14:43 <cdecker> Yeah, but again that's specular, and way simpler than splitting the payload into 8 bytes + 64 bytes + 32 bytes later, and encrypting them a bunch of times
20:14:51 <rusty> I would also like a decision on feature bits, but we might have to defer to next mtg.
20:15:24 <rusty> (ie. combine "routing" vs "peer" bits, or expand node_announcement to have two bitmaps?)
20:15:58 <roasbeef> loops are pretty simple, and creation is the same once you get the "unrolled" route
20:16:04 <roasbeef> didn't see what was gained by splitting em
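A minimal sketch of the frame rule cdecker describes above (the 4 most significant bits of the realm byte give the number of additional 65-byte frames for a hop, with the per-hop HMAC as the last 32 bytes of that hop's span); names and offsets are illustrative only, the PR and ML post are authoritative:

    package sketch

    const frameSize = 65

    // hopSpan is how many bytes of the hops area belong to the current hop:
    // realm 0b0000xxxx -> 65 bytes, 0b0001xxxx -> 130 bytes, and so on.
    func hopSpan(realm byte) int {
        extraFrames := int(realm >> 4)
        return (1 + extraFrames) * frameSize
    }

    // hopPayloadLen is the usable per-hop payload once the realm byte and the
    // trailing 32-byte HMAC are subtracted from the hop's span.
    func hopPayloadLen(realm byte) int {
        return hopSpan(realm) - 1 - 32
    }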
20:17:33 <rusty> ETIMEDOUT I think.  Happy to stick around for a bit, but OK if we end meeting?
20:17:56 <cdecker> Sure
20:17:59 <rusty> #endmeeting