19:02:44 #startmeeting
19:02:44 Meeting started Mon Mar 4 19:02:44 2019 UTC. The chair is rusty. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:44 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:02:58 #link https://github.com/lightningnetwork/lightning-rfc/issues/577
19:03:16 And of course, https://github.com/lightningnetwork/lightning-rfc/labels/2019-03-04
19:04:07 Shall we start with an easy one? :)
19:04:10 #topic https://github.com/lightningnetwork/lightning-rfc/pull/550
19:04:31 We're basically waiting for roasbeef's call on whether the new wording is good...
19:05:59 We might be missing a representative from LL
19:06:05 Just got a msg, he's only just landed. So he's clearly going to be acking that on-issue :)
19:06:29 #action 550 to resolve on-issue, since other acks are there.
19:07:03 #topic https://github.com/lightningnetwork/lightning-rfc/pull/557
19:07:44 Sounds good to me, but roasbeef had voiced some concerns last time
19:07:55 Hi all
19:07:57 So it's probably up to him to give the final ack as well
19:08:11 I'm in between flights; other LL members might also still be traveling
19:08:45 afaik last time he had not really looked at it yet
19:09:54 cdecker: well, for this we need 2 implementations. I'm pretty sure sstone has an implementation; I might commit c-lightning to implement it too, for testing. It's pretty simple.
19:10:02 sstone: are you ready for some interoperation testing? That's the next step, AFAICT.
19:10:22 yes definitely
19:10:26 (I need to work on c-lightning's primitive gossip issues anyway).
19:10:31 Great!
19:10:53 #action rusty to implement 557 in c-lightning, provide feedback and interop testing with sstone.
19:11:32 #topic https://github.com/lightningnetwork/lightning-rfc/pull/571
19:11:57 Oh, I got them mixed up, it was #571 which was blocked on roasbeef :-)
19:12:54 Really two questions: 1) should we maintain two types of features, or completely unify?
And 2) should we start using those nominal feature assignments.
19:12:56 If we completely unify, it means that if you see a node with a feature you don't understand, you can't route through it, even though you might have actually been able to.
19:13:07 Similarly, you think you can't even gossip with it.
19:13:30 But compulsory features are pretty extreme, and imply the majority of the network has upgraded, so maybe we don't care.
19:14:51 Background: right now we have global (aka "channel") features and local (aka "peer") features. The former tells whether you can use a channel, the latter whether you can chat directly to a peer.
19:15:16 e.g. if someone insists on option_dataloss_protect, you can still route through their channels even if you have no idea WTF that is.
19:15:55 But if someone insists on scriptless_scripts for their channel, you can't. But you can still connect to them for gossip.
19:16:44 I'll stop typing now to let you all rush in... :)
19:17:07 lol, so it sounds like you're advocating not unifying global and local in your above description :p
19:17:19 Definitely a +1 for the tentative assignment of feature bits
19:17:34 And you just managed to unconvince me about the unification :-)
19:17:37 but tentative assignment sounds great
19:18:00 * BlueMatt is fine unifying the namespace, even if not actually ever setting the same bits on local/global
19:18:09 when you say "completely unify", does it mean not having features that don't overlap anymore?
19:18:13 would be somewhat wasteful, but oh well
19:18:58 sstone: Yes, we won't overlap. That's almost a requirement, since people want to be able to select for peer features before connecting, as we've seen.
19:19:23 (Technically, we could add another field to node_announce, but it's easier to just have no overlap.)
19:19:39 ok
19:19:40 We can always recycle bits by introducing a mega bit which supersedes previous bits.
19:19:59 (Note to self: this is a horrible idea rusty, WTF are you thinking)
19:20:32 OK, so while we won't merge the feature assignments, we'll take them as stable for the moment?
19:20:45 ack
19:20:55 Well, not having them overlap is useful; at least we need to keep less context in mind to map bits to features, but we should still only use them in a context in which they are sensible
19:21:06 agreed
19:21:08 SGTM
19:21:40 #action rusty to update accepted proposal index at https://github.com/lightningnetwork/lightning-rfc/wiki/Lightning-Specification-1.1-Proposal-States with nominal feature assignments.
19:21:42 sgtm
19:22:12 #topic https://github.com/lightningnetwork/lightning-rfc/pull/578
19:22:35 I first rejected this out of hand as non-backwards-compatible, but it's actually not that impossible.
19:22:36 This is an interesting one
19:23:01 1 hr was my initial guess, with high bitcoin price volatility. I actually think in practice it's been a PITA.
19:23:03 For one, I disagree that the fallback should be to open a new channel with the merchant (leads to preferential attachment)
19:23:20 cdecker: well, you may need to open a channel with *someone*.
19:23:32 Rather, we should fall back to an on-chain transaction and in parallel open a channel to someone
19:23:55 yea, I don't think "I tried to pay this guy once" is necessarily the best channel-open strategy
19:23:58 In my limited experience, 1 day would have been far preferable to 1 hour.
19:24:02 And we definitely should not do that in an automated way, which is what I read between the lines
19:25:02 Note: during transition, it would be unclear what a missing 'x' meant. Was it 1 hour, or 1 day? But if we assume 1 day and get it wrong, the other end will in fact reject. So it's not a disaster, just a bit of UX nastiness.
19:26:13 Yeah, but we should push merchants to opt in to longer timeouts, not have them opt out of them
19:26:13 So I suggest we accept this, with a suggestion that during transition, x be explicit.
19:26:13 why? why not just have a note suggesting that people should try to use more than 3600 if possible
19:26:14 and x should be explicit
19:26:14 Technically, the x default is an optimization; any implementation can choose whatever default they want. In practice, however, they saw bolt11 as guidance.
19:26:32 so split the guidance into two: default, and default-if-not-present
19:26:50 but 1 day is really a long time when your payment does not go through. I'm not sure it's better UX
19:27:27 sstone: 1 day covers all the timezones though.
19:27:45 at least with 1h you don't have to worry that much about paying for something that may not make sense anymore or may not even be available
19:28:17 sstone: yes, that was the original intent. But for online stores, like the Blockstream store, we would have been happier with a 1-day option.
19:28:35 Though it implies just-in-time inventory management.
19:28:40 Having invoices expire only after 1 day also means that the default shopping cart timeout is 1 day, which causes merchants to reserve stuff for way longer
19:29:47 cdecker: but 1 hour is still a long time for some orders. When I order concert tickets or book flights it's 15 minutes.
19:30:59 and with 1h you can retry safely, though it seems many shops don't actually enforce payment expiries :(
19:31:05 Good discussion; I think c-lightning might increase the default (my original idea was 1 hr + 10 minutes per block we require for channel confirmation, but I think cdecker will disagree with that).
19:32:19 Well, that puts us at 80 minutes in expectation, which is way more reasonable than 1440 :-)
19:32:30 OK, so I think the consensus is that there's no magic value; certainly not anything enough to make us break compat. So this becomes an implementation recommendation. Should I draft something?
19:32:50 SGTM :-)
19:33:24 #action rusty to draft words around how implementations should select expiry values; no default change.
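(Editor's note: the `x` field under discussion is the BOLT 11 invoice expiry, which defaults to 3600 seconds when absent. A minimal sketch of how a payer might apply that default, under the assumption that the transition keeps 3600 as the fallback; the function and parameter names are illustrative, not any implementation's API.)

```python
from typing import Optional

# BOLT 11: the 'x' (expiry) field defaults to 3600 seconds when omitted.
DEFAULT_EXPIRY = 3600


def invoice_expired(created_at: int, expiry: Optional[int], now: int) -> bool:
    """Return True once the invoice should no longer be paid.

    `expiry` is the decoded 'x' field, or None when the invoice omitted
    it. As noted above, setting 'x' explicitly during any transition
    avoids ambiguity about what a missing field means.
    """
    effective = expiry if expiry is not None else DEFAULT_EXPIRY
    return now >= created_at + effective
```

For example, an invoice without `x` is still payable at 3599 seconds after creation, but not at 3600.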
19:33:38 #topic https://github.com/lightningnetwork/lightning-rfc/pull/584
19:33:52 sstone: you have the floor :)
19:34:30 I thought that the channel queries are really close to what we need for INV-based gossip
19:34:43 so I went for it
19:35:33 Basically you announce the channel_updates that you have
19:36:44 when you receive an inv message you compare to what you have and query what you need. And I've reused the "checksum" from the extended channel queries to filter on content as well
19:37:01 It's a nice and clean proposal imho
19:37:33 sstone: Again, this looks elegant and simple. Should we try implementing this once we've got 557 working between us?
19:37:42 Am I correct in assuming that the checksums can be computed on demand (since we don't query with them, rather we only check for changes)?
19:38:12 Just want to make sure we don't actually need yet another index into our gossip data
19:39:08 (BTW, my Raspberry Pi node with a USB-connected spinning-rust drive is now up to date with the main chain, so I will be playing with optimizations like this on that node. It validates almost 3 blocks per minute!)
19:39:13 for each channel update you compare your local checksum to the one you've received, and if they match you may decide not to ask for it even if it is newer
19:40:10 cdecker: yes, we'd probably just store the 4 bytes, but yes, no index needed.
19:40:26 sstone: do you have an implementation yet?
19:41:25 not yet, but I can have something very soon, so if you're game I can test with c-lightning when you're ready. I just wanted some feedback before I started implementing
19:41:46 sstone: OK, let's do 557 first, then aim for this.
19:42:12 # action rusty,sstone to implement between implementations after 557.
19:42:22 #action rusty,sstone to implement between implementations after 557.
19:42:30 Excellent :-)
19:42:31 In case the ' ' mattered...
19:43:26 #topic https://github.com/lightningnetwork/lightning-rfc/issues/586
19:44:00 ....
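(Editor's note: the INV/checksum filtering sstone describes for #584 can be sketched as follows. The `(timestamp, checksum)` pairs and the function name are illustrative, not the wire encoding proposed in the PR.)

```python
def should_query_update(local, remote):
    """Decide whether to fetch a channel_update advertised in an INV.

    `local` and `remote` are (timestamp, checksum) pairs for the same
    channel. Even when the remote update is newer, a matching checksum
    means the content is unchanged, so we can skip querying it.
    """
    local_ts, local_csum = local
    remote_ts, remote_csum = remote
    if remote_ts <= local_ts:
        return False  # we already have something at least as recent
    return remote_csum != local_csum  # newer, but fetch only if content differs
```

This matches the discussion above: checksums need not be indexed, only compared on receipt of an inv.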
what's that, roasbeef? You unconditionally approve this and want it merged immediately? Great!
19:44:07 OK, this is basically the linked-up version of the ML post I did 2 weeks ago
19:44:11 LOL :-)
19:44:41 cdecker: given roasbeef is MIA, I think we need to defer. But I reviewed the implementation, and it does seem straightforward.
19:45:02 Given the lack of discussion on the ML, I assume I should probably just write up the BOLT changes so we can discuss in detail next time (and we have an issue to put on the agenda)
19:45:44 s/lack of discussion/lack of criticism/g :-)
19:45:54 cdecker: Agreed. I didn't put my MPP proposal up for this meeting, since implementation depends on this decision, but I'd really like to move forward with that after this.
19:46:26 That, and the rendezvous routing and spontaneous payments, which are all pretty exciting imho
19:46:32 #action cdecker to write up formal BOLT 4 proposal.
19:46:51 SGTM
19:47:12 OK, any other business we should discuss? Like, how to find that kid from http://lightning.pictures who still hasn't sent me the picture?
19:48:03 so I found a strange corner case via fuzzing that may be of interest to folks:
19:49:40 in receiving an update_add_htlc, I was enforcing the reserve value based on all relevant inbound htlcs, which made sense, but the other side sometimes forgot about htlcs that it just needed to send a final raa to confirm removal for, resulting in htlcs getting rejected for reserve-value violations
19:50:18 ie the lesson is: make sure the reserve-value restriction applies to whatever their *next* commitment transaction will contain when applying it based on update_add_htlc, ie not including things which are sufficiently far along the removal process
19:50:53 of course, to hit this the fuzzer found some stupidly strange test cases with crazy in-flight htlcs in both directions generated at exactly the right time
19:51:05 but, hey, that's why I have protocol-level fuzzers :P
19:51:54 BlueMatt: there's a subtlety here; you're explicitly allowed to defer checks until commit, which is detectable as it's a bit looser in this case, I think
19:52:09 BlueMatt: (we check as you go, because debugging otherwise would be a PITA)
19:52:10 yes, if you deferred the checks it would be ok
19:52:23 BlueMatt: well, as long as you *remove* first, then add...
19:52:38 but if you don't (which I don't do, at least for reserve value), then you need to make sure that you're ignoring htlcs that you still see, but which are in-flight-to-be-removed-before-their-next-commitment-tx
19:52:50 BlueMatt: yeah.
19:53:11 anyway, yay protocol-level fuzzing!
19:53:22 found so many stupidly subtle bugs this way
19:53:36 BlueMatt: nice! I look forward to turning those into portable JSON tests one day when I get my act together.
19:53:55 Sounds awesome; fuzzing is still on our roadmap somewhere
19:54:19 rusty: they're all written as rust-lightning calls, but it should be reasonably doable to convert them
19:55:00 BlueMatt: yes, I need to get back to that project, which is buried in a branch on GH.
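(Editor's note: BlueMatt's lesson above — when enforcing the reserve eagerly on update_add_htlc rather than deferring to the commitment, count only HTLCs that will still appear in the peer's *next* commitment transaction — can be sketched as below. The state names are hypothetical, not taken from any implementation.)

```python
# Hypothetical states for HTLCs that are far enough along the removal
# process (e.g. only awaiting a final revoke_and_ack) that they will not
# appear in the peer's next commitment transaction.
REMOVING_STATES = {"fulfill_acked", "fail_acked", "awaiting_final_raa"}


def htlcs_for_reserve_check(inbound_htlcs):
    """Filter the HTLC set used for an eager reserve-value check.

    Sketch only: exclude HTLCs whose removal is effectively committed,
    so they don't spuriously trip a reserve-value violation when a new
    update_add_htlc arrives.
    """
    return [h for h in inbound_htlcs if h["state"] not in REMOVING_STATES]
```

As noted in the log, an implementation that defers all checks to commitment_signed (removing before adding) avoids needing this filter at all.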
It will allow nice sharing of these kinds of corner cases.
19:55:01 Speaking of JSON tests: the c-lightning frame-onion PR contains a tool and JSON onion specs for future tests
19:55:20 cdecker: nice, I missed that!
19:56:01 cdecker: sadly, the only real reason rust-lightning has protocol-level fuzzing is that you can run multiple of them in a single process and make them talk to each other...
19:56:59 Hehe, c-lightning still has a test of the state machine that Rusty wrote at the very beginning that basically has two nodes talk to each other
19:57:52 Oh, BTW, lnd still has the "send an empty update under stress" bug; c-lightning is going to have to allow it, as it breaks channels.
19:58:13 Yep, and we are officially in quirks-mode territory :-)
19:58:28 ugh
19:58:28 rusty: you mean send a sig when there are no changes?
19:58:38 sstone: yeah.
19:59:56 OK, if no more topics, I'll close the meeting?
20:00:03 +1
20:00:09 yea, sorry for the tangent
20:00:15 just figured it may be of interest
20:00:30 It definitely is interesting
20:00:32 BlueMatt: no, it's great!
20:00:37 Thanks ^^
20:00:40 it was!
20:00:48 #endmeeting