19:00:59 <cdecker> #startmeeting
19:00:59 <lightningbot> Meeting started Mon Apr  1 19:00:59 2019 UTC.  The chair is cdecker. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:59 <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
19:01:04 <araspitzu> hi
19:01:08 <cdecker> Absolutely sstone, loving the proposal
19:01:26 <cdecker> Ping AmikoPay_CJP bitconner BlueMatt cdecker Chris_Stewart_5 kanzure niftynei
19:01:27 <cdecker> ott0disk roasbeef rusty sstone johanth
19:01:57 <niftynei> ;wave
19:02:18 <Chris_Stewart_5> hi
19:02:20 <bitconner> hola
19:02:32 <cdecker> Heia, seems we have a quorum
19:02:44 <cdecker> Let's give people 2 more minutes to trickle in :-)
19:02:55 <BlueMatt> I'm around
19:02:56 <BlueMatt> kinda
19:03:45 <cdecker> Today's agenda: PR #557, PR #590, PR #593 and open floor discussion (including Trampoline hopefully)
19:04:44 <kanzure> hi
19:04:53 <cdecker> Ok, let's get started with #557
19:05:05 <cdecker> #topic BOLT7: extend channel range queries with optional fields (#557)
19:05:28 <rusty> OK, I am right now checking the test vectors.
19:05:30 <roasbeef> i still think this is optimizing the wrong thing, there's a ton that can be done on the implementation level to make gossip more efficient than it is already
19:05:35 <cdecker> rusty, sstone can you give a short summary of what changed, and what is holding the PR up?
19:05:50 <roasbeef> for example, not syncing from all your peers, using a smaller backlog for gossip timestamp, etc
19:06:25 <rusty> roasbeef: I somewhat agree, but OTOH it's fairly trivial to implement.
19:06:28 <roasbeef> this has the potential to waste even more bandwidth as well due to spammy channels
19:06:48 <roasbeef> sure, but why make protocol level changes for things that can be optimized at the implementation level?
19:07:06 <roasbeef> most nodes really don't need every up-to-date channel update; instead, clients know from past history which routes they're likely to traverse again
19:07:34 <rusty> roasbeef: well, it leads to INV-based gossip quite directly, too.
19:08:00 <sstone> it's optional, and has no impact if you don't want to use it. and I think you're somewhat missing the point: it's not about getting everything all the time
19:08:08 <roasbeef> inv yeh, trying to always query for up-to-date channel update policies on connection...no
19:08:44 <rusty> There are two parts to this PR: one just adds fine-grained queries so you don't need to get everything about every short_channel_id you want.  The other adds a summary of updates so you can avoid the query.
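A minimal sketch in Go of the fine-grained queries rusty describes; the flag names and bit assignments are hypothetical, whatever #557 finally assigns is authoritative:

```go
package main

import "fmt"

// Hypothetical per-short_channel_id query flags for
// query_short_channel_ids; the bit values are illustrative only.
const (
	wantChanAnn     uint8 = 1 << 0 // channel_announcement
	wantChanUpdate1 uint8 = 1 << 1 // channel_update from node 1
	wantChanUpdate2 uint8 = 1 << 2 // channel_update from node 2
)

func main() {
	// A node that already has the announcement but suspects stale
	// policies asks only for the two updates instead of everything.
	flags := wantChanUpdate1 | wantChanUpdate2
	fmt.Printf("query flags: %#02x\n", flags)
}
```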
19:08:47 <roasbeef> the impact is the responder having to send all that extra data from queries
19:09:48 <sstone> only if you want to support this option
19:10:25 <rusty> roasbeef: it's hard to argue against the flags array on query_short_channel_ids.
19:10:36 <roasbeef> also as this is a new optional field, perhaps it should be tlv based instead? since if we add something after this, you'd have to understand what the prior ones are in order to parse the new fields
19:11:24 <niftynei> +1 for tlv
19:11:34 <rusty> We can argue about the timestamp & checksums on reply_channel_range.
19:11:36 <cdecker> Do we have a TLV spec?
19:11:48 <roasbeef> i think there's some code lingering around...
19:11:59 <niftynei> tlv spec is in https://github.com/lightningnetwork/lightning-rfc/pull/572
19:12:03 * rusty looks at niftynei who has implemented TLV parsing for the spec.
19:12:05 <cdecker> Not talking about code, this needs to be specced first
19:12:13 <cdecker> Thanks Lisa :+1:
19:12:18 <roasbeef> well code is always a first step imo
19:12:32 <roasbeef> as you typically find things that sound like they would've worked, but may have weird edge cases
19:12:34 <cdecker> As long as you don't go ahead and deploy it, ok :-)
19:13:03 <bitconner> how will the checksum work with fields added in the future?
19:13:04 <roasbeef> this seems like the onion tlv? i'm speaking of wire tlv
19:13:10 <niftynei> (my implementation roughly adheres to this, minus the varint aspect for the length field)
19:13:14 <cdecker> Anyhow, what shall we do with #557? I see two options: postpone yet again, or take a vote now
19:13:15 <bitconner> in the past we discussed adding schnorr sigs, would those be included?
19:13:23 <sstone> I thought about using TLV and decided against it. I don't think it's essential
19:13:24 <niftynei> (which is a todo)
19:14:02 <rusty> sstone: I'm happy to take a shot at making it TLV.  It should be easy, though we need varint length.
19:14:08 <cdecker> Guys, can we stick to this proposal, and defer a TLV variant?
19:14:24 <bitconner> proposal uses a 0 value checksum to indicate that there is no update, what if the checksum value is actually 0?
19:14:35 <rusty> bitconner: timestamp can't be 0.
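A sketch of the checksum logic in Go; CRC32C over the channel_update with the signature and timestamp stripped is an assumption here, the exact checksum is whatever #557 specifies:

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// CRC32C (Castagnoli) is one plausible checksum choice.
var castagnoli = crc32.MakeTable(crc32.Castagnoli)

// updateChecksum hashes a channel_update with its signature and
// timestamp stripped, so a disable followed by a re-enable yields
// the same sum and triggers no re-download.
func updateChecksum(updWithoutSigOrTimestamp []byte) uint32 {
	return crc32.Checksum(updWithoutSigOrTimestamp, castagnoli)
}

// needsQuery: a real update can never have timestamp 0 (rusty's
// point above), so timestamp 0 plus checksum 0 safely encodes
// "no update known" with no ambiguity.
func needsQuery(theirTimestamp, theirChecksum, ourChecksum uint32) bool {
	if theirTimestamp == 0 {
		return false // peer has no update for this channel
	}
	return theirChecksum != ourChecksum
}

func main() {
	fmt.Println(needsQuery(0, 0, 0xdeadbeef)) // false: nothing to fetch
}
```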
19:14:41 <cdecker> We should first of all get a concept ACK / NACK and then we can worry whether we need serialization amendment
19:14:49 <sstone> yes please :) the point I'm trying to make is that current channel queries are almost unusable
19:15:02 <rusty> cdecker: agreed.
19:15:31 <sstone> it was focused on channel announcements but the real target should have been channel updates
19:15:42 <rusty> I think adding the flags to specify which of announce/updates you want is fairly much a no-brainer (and was trivial to implement).
19:16:16 <bitconner> i think sending timestamps makes a lot of sense, not totally convinced on checksum tho
19:16:19 <rusty> We can argue over checksum and timestamp (was tempted to try to squeeze some bits out, but I'm not sure it's important).
19:17:12 <bitconner> is there any data pointing to how many queries are actually avoided by including the checksum?
19:17:37 <rusty> bitconner: checksums may prove unnecessary if more nodes start throttling spam. sstone had some stats IIRC.
19:18:03 <sstone> yes in December it was saving more than 80% traffic
19:18:24 <niftynei> !!
19:18:31 <roasbeef> throttling spam seems like a reasonable thing to do, most updates I see these days are suuuuper spammy, i've seen some node operators start to complain about bandwidth usage
19:18:32 <sstone> and in one of our use cases (mobile nodes that are often online) it is still a huge win
19:18:38 <araspitzu> bitconner: the checksum lets you figure out if two updates with the same timestamp actually carry the same information, you may not want to download it again except to detect stale channels
19:18:39 <roasbeef> 80% traffic when trying to sync all channel updates?
19:19:01 <sstone> no when you've been offline for a few days and sync
19:19:24 <araspitzu> *with different timestamps
19:19:40 <sstone> for the "initial sync" there's not much that can be done
19:19:47 <bitconner> i've been under the impression that the majority of the spam is not actually real, and that it is a bug in the enable/disable state machines
19:20:14 <cdecker> bitconner: can you elaborate?
19:20:19 <roasbeef> iirc we recently saw a 95% bandwidth reduction just from not syncing updates from all peers, and instead picking 3 or so and rotating periodically
19:20:35 <roasbeef> so like a destructive mode where enable/disable isn't stable
19:20:44 <roasbeef> i think most of the updates rn are just enable/disable bit flipping
19:20:49 <niftynei> if updates are b/c of a disable/enable flag, wouldn't that change the checksum?
19:20:55 <bitconner> earlier this year i refactored our state machine for enable/disable https://github.com/lightningnetwork/lnd/pull/2411
19:21:02 <niftynei> i.e. that wouldn't explain the 80% reduction
19:21:07 <rusty> niftynei: yes, but if they disable and re-enable, the sum will be the same.
19:21:09 <bitconner> i'm curious if that spam still persists after that's widely deployed
19:21:10 <sstone> on mobile nodes it does not help since you have just a few peers to begin with
19:21:39 <sstone> the real point is: how do you actually use channel queries if all you have to filter with are channel ids?
19:21:45 <roasbeef> mobile nodes can trade off a bit of latency (getting the error back with the latest channel update) for bandwidth (vs trying to sync em all)
19:21:46 <cdecker> I think c-lightning currently withholds updates that disable until someone actually wants to use the channel IIRC
19:21:52 <roasbeef> sstone: can you elaborate?
19:22:08 <roasbeef> cdecker: so it'll cancel back with the disable, then broadcast?
19:22:20 <cdecker> Yep
19:22:29 <roasbeef> nice
19:22:35 <sstone> with the current queries, all you can do is get your peer's channel ids. what do you do then?
19:22:40 <cdecker> Active enable, latent disable
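A sketch of that behaviour in Go, with hypothetical types (not c-lightning's actual code): enabling updates are broadcast immediately, disabling updates are withheld until a forward through the channel actually fails:

```go
package main

import "fmt"

// Update is a hypothetical stand-in for a channel_update.
type Update struct {
	ChannelID string
	Disabled  bool
}

// Gossiper withholds disabling updates until they matter.
type Gossiper struct {
	pending map[string]Update // deferred disable updates
}

// QueueUpdate broadcasts enables eagerly and defers disables.
func (g *Gossiper) QueueUpdate(u Update) {
	if u.Disabled {
		g.pending[u.ChannelID] = u // latent disable
		return
	}
	delete(g.pending, u.ChannelID) // re-enabled before anyone noticed
	g.broadcast(u)                 // active enable
}

// OnForwardFailed releases a withheld disable only once a payment
// actually tried (and failed) to use the channel.
func (g *Gossiper) OnForwardFailed(chanID string) {
	if u, ok := g.pending[chanID]; ok {
		delete(g.pending, chanID)
		g.broadcast(u)
	}
}

func (g *Gossiper) broadcast(u Update) { fmt.Println("gossip:", u) }

func main() {
	g := &Gossiper{pending: make(map[string]Update)}
	g.QueueUpdate(Update{ChannelID: "559x1x0", Disabled: true}) // withheld
	g.OnForwardFailed("559x1x0")                                // now broadcast
}
```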
19:23:10 <cdecker> Ok, I think we've reached the time for this PR, otherwise we'll talk about this forever
19:23:34 <cdecker> Shall we defer to the ML / Issue tracker or would people want to vote on it now?
19:23:34 <roasbeef> sstone: what's the goal? you can intersect that against what you know and query for those that you don't know of
19:23:57 <cdecker> roasbeef: he's more concerned about updates than catching all channels
19:24:07 <sstone> yes!
19:24:29 <roasbeef> yeh you can get those only when you need them
19:24:36 <roasbeef> it's a waste of bandwidth to do it optimistically
19:25:02 <roasbeef> which is more precious for mobile phones: bandwidth or latency?
19:25:10 <cdecker> I mean, there's no point in catching a pikachu when you don't know whether it can do Thunderbolt :-)
19:25:20 <sstone> both. and this PR optimizes both
19:26:46 <sstone> if your strategy is don't ask if you already know the channel id then you get almost no updates
19:28:26 <cdecker> Any objections to moving this to the back of the meeting, or off to the ML? There seems to be quite a bit of discussion needed and we could get some things off the table before that
19:28:30 <roasbeef> updates can be obtained when you actually need them, of the 40k or so channels, how many of those do you actually frequently route through?
19:28:50 <roasbeef> sgtm
19:28:53 <bitconner> cdecker, if we have time at the end maybe we can circle back?
19:28:55 <sstone> it's terrible for latency
19:29:21 <cdecker> sstone: shall we address the other two topics first and come back to #557 in a few minutes?
19:29:35 <sstone> yes :)
19:29:40 <cdecker> Thanks
19:29:53 <cdecker> #topic BOLT 9: add features wumbo and wumborama #590
19:30:00 <cdecker> Ready set fight about naming :-)
19:30:13 <araspitzu> lol
19:30:24 <roasbeef> idk, it's a feature that no users will really see
19:30:27 <araspitzu> shall we leave that for last? perhaps there is some feedback
19:30:29 <roasbeef> one day, it'll just be the norm
19:30:57 <cdecker> I'll abstain from this vote :-)
19:31:05 <roasbeef> seems related to the feature bit modification proposal as well
19:31:11 <cdecker> Can't always be the naming stickler ^^
19:31:49 <cdecker> You mean #571?
19:31:57 <rusty> cdecker: OK, so there's some weirdness here.  wumbo HTLCs vs wumbo channels.
19:32:01 <araspitzu> roasbeef: yes, in fact we might want to discuss if we put option_wumborama in channel_announcement.features
19:32:28 <cdecker> Good catch rusty, I'd have missed that
19:32:38 <rusty> You need a way to say "you can make a wumbo channel with me", and another to say "this channel can pass a wumbo HTLC".
19:33:06 <bitconner> is this fb also sent on connections?
19:33:30 <rusty> So I think we need a local (aka peer) bit to say "let's wumbo!" and a global (aka channel) bit to say "this channel can wumbo".
19:33:40 <cdecker> bitconner: that was kind of the point of Rusty's #571 :-)
19:34:12 <bitconner> rusty, cdecker: awesome just double checking :)
19:34:33 <bitconner> at one point i recall some discussion of only making it global, but i may be mistaken
19:34:37 <araspitzu> rusty: could the channel_update.htlc_maximum_msat be interpreted for "wumbo HTLCs"?
19:34:51 <cdecker> araspitzu: I think so, yes
19:35:00 <roasbeef> yeh
19:35:06 <bitconner> why does wumbo HTLC matter?
19:35:23 <rusty> bitconner: vital if you want to send one, you need to know what channels can take it.
19:35:43 <bitconner> certain impls already put max htlc values much larger than those limits :)
19:35:57 <roasbeef> max uint64
19:36:12 <rusty> (FYI: feature bits get sent on init connect (local and global), advertized in channel_announcement and in node_announcement.)
19:36:14 <bitconner> if we just start honoring those more directly, is that limit not lifted?
19:36:14 <araspitzu> cdecker: if that is the case we should be good with using just 2 options as in the PR
19:36:20 <roasbeef> but that'd need to change by the time this is widespread
19:37:05 <rusty> So, I think we need a channel feature (this channel wumbos!), and a peer feature (let's wumbo!)
19:37:09 <cdecker> Meh, min(htlc_maximum_msat, funding_msat) will do the trick
19:37:10 <araspitzu> also PR #590 assumes issue #504
19:37:49 <roasbeef> seems only a peer feature is needed, if someone tries and you don't want to, you send a funding error back
19:37:58 <roasbeef> max htlc (if set properly) does the rest
19:38:01 <cdecker> araspitzu: #504 is probably a subset of #571
19:38:01 <rusty> araspitzu: sure, but we can pull single features at a time in separate PRs.
19:38:38 <araspitzu> sgtm
19:38:57 <rusty> roasbeef: sure, we could do that.  Should we start by slapping those setting max_htlc to infinity?
19:39:11 <bitconner> hehe
19:39:27 <bitconner> should we enforce that max htlc should not be larger than channel capacity?
19:39:33 <rusty> roasbeef: And meanwhile say max_htlc == UINT64_MAX implies it's capped at min(2^32, funding_capacity).
19:39:39 <cdecker> Just to get an overview, what are the open issues with #590? Naming and how to signal?
19:39:46 <bitconner> when we initially implemented it, we added this check but that assumption was quickly broken :)
19:39:46 <rusty> bitconner: yeah, we should add that.
19:39:47 <araspitzu> bitconner: agreed
19:39:54 <roasbeef> yeh we ended up doing that, but for example, I don't think eclair knows the funding size
19:40:07 <roasbeef> (clamping to funding size if set to max uint64)
19:40:28 <rusty> roasbeef: then clamp to u32max, assuming it's pre-wumbo?
19:41:09 <araspitzu> is there agreement on the features signalling?
19:41:17 <cdecker> #action cdecker to change the htlc_maximum_msat to match funding size
19:41:26 <roasbeef> araspitzu: i think only a single bit is needed
19:41:44 <cdecker> Ok, that's c-lightning's "let's wing it" strategy taken care of
19:42:12 <rusty> OK, so I propose we: 1) add language that you should not set htlc_max > channel capacity (duh!), or 2^32 if no wumbo negotiated. 2) if you see an htlc_max set to u64max, cap it to one of those. 3) assign the i_wumbo_you_wumbo bit.
19:42:24 <rusty> (peer bit, that is).
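A sketch of points 1) and 2) as receiver-side clamping, in Go; the function name and the msat units are assumptions:

```go
package main

import "fmt"

const preWumboMaxMsat uint64 = 1 << 32 // historic max HTLC value without wumbo

// effectiveMaxHTLC interprets a received htlc_maximum_msat: never
// above channel capacity, and the u64max "no limit" sentinel some
// implementations already send is capped at min(2^32, funding
// capacity) when wumbo wasn't negotiated.
func effectiveMaxHTLC(htlcMaxMsat, capacityMsat uint64, wumbo bool) uint64 {
	if htlcMaxMsat == ^uint64(0) {
		htlcMaxMsat = preWumboMaxMsat
	}
	if !wumbo && htlcMaxMsat > preWumboMaxMsat {
		htlcMaxMsat = preWumboMaxMsat
	}
	if htlcMaxMsat > capacityMsat {
		htlcMaxMsat = capacityMsat
	}
	return htlcMaxMsat
}

func main() {
	// u64max on a 2^24-1 sat pre-wumbo channel clamps to 2^32 msat.
	fmt.Println(effectiveMaxHTLC(^uint64(0), 16777215000, false))
}
```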
19:42:29 <araspitzu> having a global wumborama lets you connect to nodes that you know will support wumbo
19:43:01 <roasbeef> araspitzu: yeh only global
19:43:02 <bitconner> imo we probably don't need to have any special casing for uint64, we should just defer merging that validation until most of the network has upgraded to set it properly
19:43:02 <rusty> araspitzu: that's where we start advertising peer bits in node_announcement.
19:43:20 <cdecker> Shall we split the PR into two parts (peer and global) then we can discuss them separately
19:43:36 <rusty> bitconner: agreed, it's more an implementation note.
19:43:36 <cdecker> I think everybody is happy with the peer bit
19:43:48 <bitconner> hmm, actually i think we do need to set the connection feature bit
19:43:54 <bitconner> o/w private nodes can't wumbo
19:44:28 <rusty> bitconner: private nodes don't send connection features at all though?
19:44:55 <rusty> We may need to eventually do something about feature bits in routehints, but I don't think we need it for this.
19:45:07 <bitconner> i think my mental model of the new terminology is off :P
19:45:24 <niftynei> (is a connection feature the same as a 'channelfeature' from #571?)
19:45:24 <cdecker> Hehe, I know that feeling bitconner :-)
19:46:03 <araspitzu> roasbeef: what would it look like only with the global?
19:46:33 <rusty> Oh, I read bitconner's sentence as channelfeature. If it's a peer feature, sure, but you know if you wumbod.
19:46:46 <roasbeef> araspitzu: single global feature bit, you send that in the init message and use that in your node_announcement, based on that ppl know you'll make wumbo channels, you set max_htlc above the old limit to signal you'll forward larger HTLCs
19:47:01 <roasbeef> yeh still behind on the new feature bit split terms myself...
19:47:20 <araspitzu> it sounds quite good
19:47:43 <araspitzu> we can still crosscheck the globals in the init message with those from the node_announcement
19:48:11 <bitconner> sounds to me like we are all in rough agreement, and we can hash out the terminology (and our understandings thereof) on the pr? :)
19:48:16 <niftynei> wait so the reason for the channelfeature is so other peers can find you to create wumbo channels?
19:48:24 <rusty> BTW we haven't got a final decision on feature unification.  It's kinda tempting to only have one kind of feature, and drop the distinction between "features which prevent you routing" and "features which prevent you making a direct channel".
19:48:37 <niftynei> instead of discovering on connect?
19:48:50 <cdecker> Yeah, kind of miss #571 in the agenda, my bad, sorry :-(
19:48:53 <niftynei> idky but that seems like it'll get deprecated pretty quickly
19:49:16 <niftynei> i guess you could say the same for all of wumbo tho nvmd lol
19:49:31 <cdecker> So like bitconner said, we seem to agree to the concept, shall we hammer out the details in the PR?
19:49:34 <niftynei> ack
19:49:35 <rusty> niftynei: yeah, even local/peer features, people want to avoid scanning for them, so advertizing makes sense.  Initial plan was just to advertize stuff you needed for routing via a channel/node.
19:49:38 <rusty> cdecker: ack.
19:49:41 <bitconner> ack
19:50:00 <araspitzu> should this wait for the features unification? I hoped it was a low hanging fruit :)
19:50:00 <rusty> I agree with roasbeef that only one feature bit is needed, though.
19:50:13 <cdecker> #agreed The concept of wumbo channels was accepted, but more details about signalling need to be discussed on the PR
19:50:32 <cdecker> #topic bolt04: Multi-frame sphinx onion routing proposal #593
19:50:39 <cdecker> Yay, I made a PR ;-)
19:51:03 <cdecker> Basically this is the multi-frame proposal written up (along with some minor cleanups that I couldn't resist)
19:51:18 <roasbeef> yayy json!
19:51:45 <cdecker> It has two working implementations in C and Golang, and it currently blocks our progress on rendezvous routing, multi-part payments, spontaneous payments and trampoline routing :-)
19:52:03 <roasbeef> i'm behind on this but iirc i didn't see a clear way of being able to only use part of a frame by having an outer length
19:52:11 <cdecker> I should also mention that this is a competing proposal to roasbeef's implementation
19:52:31 <cdecker> Hm? How do you mean?
19:52:42 <cdecker> The payload always gets padded to a full number of frames
19:53:01 <cdecker> (though that is not strictly necessary, it made the proposal so much easier)
19:53:30 <roasbeef> i mean to extract only 8 bytes (for example) from the larger frame
19:53:54 <roasbeef> i also still think we need both type + tlv (with tlv being its own type)
19:53:59 <cdecker> Oh, that's the parsing of the payload, that is basically deferred to the TLV spec
19:54:25 <roasbeef> i also feel like when we talk about TLV, we seem to conflate the wire vs onion versions (which imo are distinct)
19:54:33 <cdecker> Right, a two frame payload with TLV would be realm 0x21
19:54:57 <cdecker> Sorry that's a three frame TLV payload
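A minimal sketch of that encoding in Go, assuming the high nibble counts additional frames and the low nibble carries the payload type (so 0x21 = two extra frames, three in total, TLV payload):

```go
package main

import "fmt"

const payloadTypeTLV = 0x01 // hypothetical type value for TLV payloads

// splitRealm unpacks the repurposed realm byte: 4 bits of
// additional-frame count, 4 bits of payload type. A legacy
// hop_data payload stays 0x00: one frame, type 0.
func splitRealm(realm byte) (totalFrames int, payloadType byte) {
	return int(realm>>4) + 1, realm & 0x0f
}

func main() {
	frames, typ := splitRealm(0x21)
	fmt.Println(frames, typ == payloadTypeTLV) // 3 true
}
```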
19:55:43 <cdecker> We can talk about TLV in a separate discussion, I feel we should stick to the pure multi-frame proposal here
19:55:46 <roasbeef> overloading the realm also clobbers over any prior uses for it, we can still have type+realm by leaving the realm as is, and using space in the padding for the type
19:56:19 <cdecker> Oh god, please no, I'm uncomfortable with the layering violation that the realm byte currently is
19:56:36 <roasbeef> how's it a layering violation?
19:56:51 <roasbeef> i mean it's unused right now, but the intention was to let you route to foochain or w/e
19:56:52 <cdecker> We only have realm 0x00 defined so far, if somebody was using something else it's their fault
19:57:24 <roasbeef> well they wouldn't need to ask anyone to use a diff byte, they just could, as long as the intermediate node knew what it was meant to be
19:57:35 <rusty> roasbeef: we decided at the summit that we'd use an explicit field for "route to chain foo" to avoid the problems of realm byte assignment.
19:57:47 <cdecker> During the meeting in Adelaide we decided that the realm byte is renamed to a type byte, i.e., it does not carry any other meaning than to signal how the payload is to be interpreted
19:57:56 <roasbeef> but now you conflate chain routing and tlv types
19:57:59 <cdecker> The chain can be signalled in the payload
19:58:09 <roasbeef> yeh i objected to that, as it combines namespaces when they can be distinct
19:58:28 <cdecker> No, the type byte just says "this is a hop_data payload" or "this is a TLV payload"
19:58:49 <roasbeef> yeh the type byte says that, realm byte is for which chain the target link should be over
19:59:55 <cdecker> Let's please not go back on what was agreed upon, otherwise we'll be here forever
19:59:58 <roasbeef> but stepping back a level, the main diff between this and the other proposal is that one is "white box" the other is "black box"
20:00:13 <cdecker> The type byte tells us how to interpret the payload, that's it
20:00:21 <roasbeef> uh...I had this same opinion then, there's no change in what i expressed then and now
20:00:47 <cdecker> Right, your concerns were heard, we decided for this change, let's move on :-)
20:01:07 <cdecker> How do you mean white box vs. black box?
20:01:08 <roasbeef> the Q is who decided? I think a few others had the same opinion as well
20:01:27 <rusty> changing framing seems much cleaner than squeezing into the existing framing, to me at least.
20:01:29 <roasbeef> this one has awareness of frame size when parsing, the other one just unpacked multiple frames
20:01:54 <rusty> roasbeef: true, but it's a fairly trivial calculation.
20:02:10 <roasbeef> yeah, just pointing out the differences
20:02:20 <cdecker> roasbeef: your proposal wastes loads of space, and burdens processing nodes with useless crypto operations...
20:02:23 <roasbeef> i think that's the most fundamental diff (others follow from that)
20:02:34 <roasbeef> well the hmac can be restored to be ignored just as this one does
20:02:43 <roasbeef> with that the space is more or less the same, and the q is white box vs black box
20:02:53 <cdecker> At which point your construction looks awfully close to mine, doesn't it?
20:02:55 <roasbeef> yeh the blackbox requires additional decryptions
20:03:13 <roasbeef> yeh they're very similar other than the distinction between the packet type and tlv
20:03:23 <cdecker> And you need to somehow signal that there is more to come in the blackbox
20:03:39 <roasbeef> i think you need to do the same in the whitebox too (my comment above about using a partial frame)
20:03:52 <roasbeef> if i just want to have 30 bytes, no tlv, just based on the type
20:04:12 <cdecker> Well, using the former realm byte to communicate number of frames and type of payload is a pretty clean solution
20:04:29 <cdecker> It doesn't even change the semantics of having a single frame with legacy hop_data
20:05:41 <cdecker> Anyhow, seems to me that not everybody is convinced yet, shall we defer to the issue?
20:06:33 <cdecker> Or would the others like to proceed and unblock all the things that are blocked on this?
20:06:35 <rusty> I'm reluctant to keep deferring it, but I'm not sure how to deal with the impasse, either.
20:06:50 <roasbeef> my main comment is that tlv should be optional in the payloads
20:07:02 <roasbeef> as there're cases where you'd need to use an additional hop due to the signalling data
20:07:11 <cdecker> Right, as it is (you can use whatever payload serialization you want)
20:07:41 <roasbeef> i only see reg and tlv right now
20:07:53 <cdecker> You can use the legacy hop_data format which is fully supported and append data you want to that
20:08:00 <roasbeef> so you'd need to have a length there
20:08:06 <rusty> roasbeef: it's smaller if it's used to replace existing payloads, but I'm not totally averse to it staying the same.
20:08:07 <cdecker> Well, those are the only ones that are "currently" defined
20:08:27 <cdecker> For hop_data the padding is defined as MUST be 0x00s
20:08:43 <cdecker> So you can pack anything in there to signal the presence of extra data
20:09:35 <cdecker> The proposal literally says "the following `realm`s are currently defined", so if we come up with the next great thing we can just add that to the list :-)
20:10:33 <cdecker> I just think that having the data all laid out consecutively without the need to do additional rounds of decryption is pretty neat and avoids a bunch of copies to reassemble the data
20:10:35 <roasbeef> using the realm though, which conflates realm and the actual packet type
20:11:09 <rusty> roasbeef: realm is dead.  It's now just a signalling byte for the contents, not an attempt to squeeze every possible chain into 8 bits.
20:11:15 <roasbeef> also maybe we should bump the onion type and commit to the version while we're at it?
20:11:20 <cdecker> It just makes good use of the space we have available: 4 bits to signal additional frames to be used, and 4 bits type information
20:11:37 <cdecker> That's another proposal altogether
20:11:49 <roasbeef> y'all seem to be the only ones that concluded the realm byte doesn't exist anymore....
20:11:52 <rusty> roasbeef, cdecker: agreed, but it's not a bad idea to have that on the table.
20:11:57 <rusty> roasbeef: see minutes.
20:12:08 <cdecker> Let's stick with this proposal, which I think is minimal to get the followup things unblocked
20:13:07 <roasbeef> i'll take a deeper look at this one, a bit behind with volume (some of it just tangential really?) on the ML these days
20:13:08 <cdecker> The realm byte was always conflating chain and payload format
20:13:23 <roasbeef> how so?
20:13:32 <roasbeef> it just said go to this chain/link identified by this chan id
20:13:57 <cdecker> Just a quick reminder what the current spec says: "The realm byte determines the format of the per_hop field; currently, only realm 0 is defined"
20:15:02 <rusty> OK, we're 15 over, is there anything else before we close?
20:15:14 <cdecker> I'm happy to defer, since I only opened the PR 3 days ago, I can understand that it hasn't had the desired scrutiny, but at some point we need to pull the trigger
20:15:59 <cdecker> #agreed Decision of PR #593 was deferred until the next meeting, discussion on the ML and the issue
20:16:02 <roasbeef> imo the realm can also determine exactly what types even make sense as well vs combining them under a single namespace
20:16:03 <rusty> sstone: test vectors seem mainly OK, but I'll try a TLV-variant anyway. I'm wanting to get back to my more ambitious JSON test framework, for which this would be perfect to integrate (specifically, if we know chain state we can test real gossip queries/answers).
20:16:38 <bitconner> cdecker, sgtm
20:16:41 <cdecker> Should we take another stab at #557?
20:17:38 <cdecker> Which I feel I have a responsibility to bring back up, since it was me who suggested postponing it till the end of the meeting
20:18:05 <bitconner> sure
20:18:16 <rusty> cdecker: I think if we make it TLV, which seems the new hotness, sstone and I can thrash it out.  It's clear that the checksums save significant startup latency in current network conditions.
20:18:29 <rusty> And TLV makes it easier to drop later, too.
20:18:41 <cdecker> #topic BOLT7: extend channel range queries with optional fields #557 (Round 2)
20:18:55 <sstone> sgtm
20:19:01 <cdecker> rusty: so you'd like to take another stab at it using the TLV format and re-propose?
20:19:45 <rusty> cdecker: yeah.  sstone and I were simply bikeshedding on the formatting/naming at this point anyway AFAICT.
20:19:49 <roasbeef> wire tlv right, not onion tlv?
20:19:58 <roasbeef> (main diff imo is the size of the type and length)
20:20:12 <rusty> roasbeef: we can use varints for both, TBH?
20:20:14 <cdecker> roasbeef: I still don't see why making two versions matters...
20:20:24 <cdecker> Varints sounds like it'd solve this pretty easily
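A sketch of one varint option in Go: a Bitcoin CompactSize-style encoding, written big-endian here to match the rest of the Lightning wire format (whether the spec picks exactly this is open):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeVarint: small values cost a single byte, so one format can
// serve both a TLV record inside the ~1.3 kB onion and a multi-kB
// wire message.
func encodeVarint(n uint64) []byte {
	switch {
	case n < 0xfd:
		return []byte{byte(n)}
	case n <= 0xffff:
		b := []byte{0xfd, 0, 0}
		binary.BigEndian.PutUint16(b[1:], uint16(n))
		return b
	case n <= 0xffffffff:
		b := []byte{0xfe, 0, 0, 0, 0}
		binary.BigEndian.PutUint32(b[1:], uint32(n))
		return b
	default:
		b := make([]byte, 9)
		b[0] = 0xff
		binary.BigEndian.PutUint64(b[1:], n)
		return b
	}
}

func main() {
	fmt.Printf("%x %x\n", encodeVarint(32), encodeVarint(1300)) // 20 fd0514
}
```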
20:20:31 <bitconner> for me, the bigger distinction is that wire doesn't need even/odd, while onion does
20:20:39 <roasbeef> well one can have a type that's KBs in length, the other is like 1.5kb max
20:20:43 <cdecker> Anyway, let's defer the #557 then
20:20:49 <rusty> bitconner: true...
20:21:12 <rusty> bitconner: though we still have the even/odd rule between peers, in case someone screws up.
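A sketch of that backstop during TLV parsing, in Go (hypothetical helper):

```go
package main

import "fmt"

// handleUnknownTLV applies the even/odd rule: an unknown even type
// is mandatory, so parsing must fail; an unknown odd type is
// optional and is silently skipped.
func handleUnknownTLV(tlvType uint64) error {
	if tlvType%2 == 0 {
		return fmt.Errorf("unknown even tlv type %d", tlvType)
	}
	return nil
}

func main() {
	fmt.Println(handleUnknownTLV(2)) // unknown even tlv type 2
	fmt.Println(handleUnknownTLV(3)) // <nil>: ignore and continue
}
```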
20:21:12 <cdecker> #action rusty and sstone to change format to be TLV based, so it becomes easier to change in the future
20:21:26 * rusty drags niftynei in for TLV support...
20:21:38 <cdecker> roasbeef: 1300 - 1 - 32 bytes exactly for the onion :-)
20:22:05 <roasbeef> cdecker: well depends on how many hops the data is distributed over tho right?
20:22:13 <roasbeef> trampoline tyme?
20:22:13 <bitconner> rusty, yes though should that not be an enforcement of feature bits, rather than reimplementing it again at the parsing level?
20:22:18 <cdecker> Ok, I think that concludes the official part (with all decisions deferred once again)
20:22:27 <roasbeef> (due to signalling overhead)
20:22:36 <cdecker> roasbeef: that's the maximum possible payload, yeah, you can split differently
20:23:11 <rusty> bitconner: Yeah, parsing pretty much has to be specific to which TLV you're pulling apart, AFAICT.
20:23:17 <niftynei> yes
20:23:34 <cdecker> Shall we conclude the meeting and free people from their screens? I'm happy to stick around for a bit of trampoline fun and some more bikeshedding ;-)
20:23:40 <rusty> Well, none of us are likely to be bored between now and next mtg :)
20:23:48 <rusty> cdecker: ack
20:23:51 <cdecker> Yeah, sounds like it :-)
20:24:08 <bitconner> rusty: i think it'd be possible to agree on a common format for the T the L and the V, and then have the parsing restrictions added on top (or not in the case of wire)
20:24:09 <sstone> ack
20:24:28 <cdecker> roasbeef, bitconner you also happy calling it a meeting here?
20:24:45 <bitconner> cdecker, sure thing, know it's getting late across the pond
20:24:52 <cdecker> #endmeeting