19:06:59 #startmeeting
19:06:59 Meeting started Mon Feb 3 19:06:59 2020 UTC. The chair is cdecker. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:06:59 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:07:25 #link https://github.com/lightningnetwork/lightning-rfc/issues/731
19:07:55 Any follow-ups from last week? I think we executed on all actions we decided last time, or am I missing something?
19:08:23 There was some discussion on https://github.com/lightningnetwork/lightning-rfc/issues/728
19:08:36 I don't know if we reached something satisfactory, but we do have mitigations
19:08:52 thanks to BlueMatt and halseth
19:09:00 I think BlueMatt wanted to have a wider discussion of #726 (which features to announce where), and we decided to make a new issue for that discussion
19:09:37 Great, I love seeing discussions on GH ^^
19:09:56 t-bast: yea, we should probably run with the idea there and make some pr to specify clients implement it, but I don't think that needs discussion here.
19:10:07 maybe you can write up what it appears y'all implemented in eclair?
19:10:08 BlueMatt: agreed
19:10:41 BlueMatt: we haven't implemented anything yet, but we will and I'll do a write-up to append to the issue and maybe to the spec too
19:10:45 Ok, so a brief explanation of what Eclair does on issue #728?
19:10:48 Sorry I'm late, needed coffee.
19:11:04 rusty: same here ^^
19:11:11 #action t-bast: implement proposed mitigation for #728 and document that
19:11:22 * t-bast waves at Rusty
19:11:31 cdecker: yea, I think last meeting's discussion on 726 (which I was not here for, apologies) captured some of the issues, but by no means all of them - there is a ton of ambiguity in how flat features (which wasn't really introduced by flat features, but it brought it to the forefront) should work moving forward
19:11:44 Ok, that seems to conclude the pending items from last time, shall we get started with today's agenda?
19:12:16 No problem BlueMatt, better to address the fundamental problem, without the distraction of a concrete case that is easily settled ^^
19:12:24 cdecker: sgtm
19:12:29 #topic Single-option large channel proposal #596
19:12:36 #link https://github.com/lightningnetwork/lightning-rfc/pull/596
19:12:40 WUMBO
19:13:09 This has been pending for quite a while, I don't think there's anything major, just needs acks
19:13:18 i just committed a fix for the last nit by t-bast
19:13:31 t-bast: how do y'all handle scaling confirmations? particularly if there's a push amt?
19:14:10 roasbeef: what do you mean by scaling confirmations? I might be missing some context
19:14:10 roasbeef: currently we don't, we keep it at 6
19:14:25 Ah, I see, sorry for the dumb question
19:14:25 you mean waiting for more confirmations if the channel is huge?
19:14:50 yeh so like a 100 btc channel
19:15:02 that's not #reckless at all
19:15:06 * cdecker hopes LNBig isn't listening xD
19:15:07 I'd do it with 6 confs
19:15:25 6 confs is....wayyyy too low for 100 btc
19:15:34 right now we're sticking to 6 confs, but the wumbo channels we're seeing are more around 1 BTC
19:15:48 for larger amounts it's true that we should adapt the confirmation window
19:15:49 anyway, that should be up to the implementation?
19:16:02 BlueMatt: yeah I was trolling on that one ;)
19:16:03 Does it really matter for now? Wumbo is quite orthogonal to negotiating the number of confirmations, isn't it?
19:16:10 yes
19:16:16 Yes this is true, feels implementation-specific
19:16:30 I can just unilaterally defer `funding_locked` and don't need to communicate that at all, do I?
19:16:57 cdecker: you could but it's not very nice :)
19:17:00 yeh it's implementation specific, was wondering what they've done since iirc they use them in production
19:17:01 you can also specify your delay?
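The scaling question above (6 confs being "wayyyy too low" for a 100 BTC channel) has a natural heuristic, spelled out a bit later in the meeting: wait until the block rewards an attacker would forfeit by reorging exceed the channel amount. A minimal sketch of that idea, purely illustrative — the 12.5 BTC subsidy (the value at the time of this Feb 2020 meeting, fees ignored) and the min/max bounds are assumptions, not anything the spec prescribes:

```python
import math

# Assumption: block subsidy at the time of the meeting (pre-May-2020 halving),
# ignoring transaction fees in each block.
BLOCK_SUBSIDY_SAT = int(12.5 * 100_000_000)

def funding_confirmations(funding_sat, min_confs=6, max_confs=100):
    """Scale funding confirmations so that rewriting the chain would cost
    an attacker more in forfeited block rewards than the channel is worth."""
    needed = math.ceil(funding_sat / BLOCK_SUBSIDY_SAT)
    return max(min_confs, min(max_confs, needed))

# A 1 BTC channel stays at the 6-conf floor; a 100 BTC channel needs 8 blocks
# of subsidy to cover its value.
```

As noted in the discussion, a real policy would probably also consider total exposure across all channels with a peer, not just the one being opened.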
19:17:09 but even then, we may want to have an advisory section
19:17:10 as you should
19:17:17 particularly as ppl are accepting zero-conf channels these days
19:17:23 roasbeef: agreed, an advisory section would be useful
19:17:56 Worthy consideration, though the amount you can steal by opening and reorging a channel is limited to the total outgoing capacity of the node. But yeah, there's a logic which says "wait until the block rewards exceed the amount of the payment"...
19:17:56 Ok, so a new PR with an advisory section, and merge #596?
19:18:19 cdecker: SGTM
19:18:35 BlueMatt: what kind of calculation would you recommend to scale the waiting period?
19:18:43 i think an advisory section would be useful/nice especially if turbo channels make it to the spec
19:18:53 Any objections? Any volunteers to write the advisory (possibly someone with an idea for a sensible algo)?
19:19:02 cdecker: feature numbers are weird? They should be the next one, right? So 18/19?
19:19:20 rusty's right, we should merge with the latest available feature bits
19:19:20 t-bast: I don't think we can reasonably capture a Good recommendation in an advisory section, but having one that notes that you should do *something* is a good idea :)
19:19:33 BlueMatt: that's a start :D
19:19:36 as for how to scale, i think nodes need to be considering their total value across all channels, not just the one.
19:19:39 Well, it's just 3 more bytes in the features, but we can compress
19:19:57 Yeah, mention the problem, imply there's a clever solution. Makes us look smart without actually making us responsible if it goes wrong :)
19:20:12 a-la Fermat
19:20:14 yes those are from the waiting room, action point for me to change the PR to use 18/19?
19:20:24 #action cdecker to change the featurebits to 18/19 and merge #596
19:20:48 Anyone taking the lead for the advisory?
We can hash out details on the PR I guess
19:21:12 If it's just mentioning the issue without providing the exact algo, araspitzu or I can do it
19:21:37 cdecker: "Note: an implementation may want to vary the number of confirmations required for a large channel."
19:21:42 "a-la Fermat" ahaha :thinking_face:
19:21:56 #action t-bast and araspitzu to brainstorm an advisory section in a new PR explaining how confirmations should scale with the funding amount
19:22:08 niftynei: I don't always make maths jokes, but when I do...
19:22:09 cdecker +1
19:22:23 (I will personally tweet-slap any person who places a margin note in one of the BOLTs!)
19:22:26 rusty: They should be inversely proportional xD
19:22:43 Awesome, any last words for the wumbo PR?
19:22:49 Let's make it official ^^
19:22:53 ACK!
19:23:02 Ack!
19:23:10 I feel like we're missing the ability of nodes to advertise "how large"
19:23:32 Well, that goes more into liquidity providers I guess
19:23:41 as rn they'd just try and fail and not really get any feedback
19:23:59 As a node opening a channel I am in control, and as a fundee I can fail HTLCs that exceed my risk tolerance
19:24:11 alises aren't really used for anything atm, so we could possibly use that for signalling
19:24:23 t-bast: also do y'all just allow infinite size, or is there a sort of config soft cap?
19:24:26 Ouch, aliases
19:24:32 have we added TLVs to node ann's yet?
19:24:54 roasbeef: I believe there's a cap, I don't remember what it is yet
19:25:03 roasbeef: would you like to create a proposal for this sort of signaling?
19:25:18 niftynei: that's one of the next topics: TLV everywhere to simplify, then we can add one there for capacity?
19:25:45 Yep, that's #714, down 2 items in the agenda ^^
19:26:09 It's common to have a minimum accepted channel size, but not sure if it's worth advertising a range...
19:26:20 So let's move on down the list and we can brainstorm a bit at the end, sounds good?
19:26:23 was thinking just a max
19:26:47 sure, seems either a new tlv field for the node ann or overloading the alias are possible candidates
19:26:52 roasbeef: I think for max that could be in your reply to a first open that's too big (in a tlv)
19:27:14 reply?
19:27:32 if we can avoid putting that in node_anns which are gossiped and consume a lot of bandwidth that'd be nice
19:27:40 Hm, signaling up front is a way nicer experience
19:27:53 cdecker: agreed
19:28:08 Let's brainstorm later / on GH
19:28:10 #topic BOLT 7: be more aggressive about sending our own gossip. #684
19:28:16 But we have actual experience with minima IRL. Has it proved a problem to have an error msg which says what the min is?
19:28:22 #link https://github.com/lightningnetwork/lightning-rfc/pull/684
19:28:42 cdecker: Wow, is that still open? OK, Ack.
19:29:02 It is, and requires a rebase :-)
19:29:09 Happy to rebase once we have all the acks
19:29:31 I think we need a sign-off from rust-lightning and lnd
19:29:45 I believe lnd is already doing that, isn't it?
19:30:10 we do a weaker version of this rn, for 0.10 we aim to implement a stronger version of it
19:30:12 I think so too, mainly looking for formulation acks in the spec at this point
19:30:26 but yeh on board w/ the general idea
19:30:39 Yeah, it's a subtle trap :(
19:30:51 RL doesn't implement any of the gossip filtering stuff as of yet...gotta love optional features. so I dont have a horse in this race
19:31:01 though note that RL also doesnt use timestamp to mean timestamp
19:31:17 the original spec did not require it be timestamps (only monotonically increasing), and we dont make syscalls, so we dont have a time source
19:31:28 Ok, so no objections I think
19:31:45 cdecker: sgtm
19:31:52 BlueMatt: yes, that is my fault, I wanted logical timestamps but then also came up with the dumb auto-pruning...
19:31:57 (and assuming all lightning nodes know the current time in any more precision than the block header sucks)
19:32:03 #action cdecker to rebase and merge #684
19:32:16 eventually we need to move to header-based timestamps, but I dont think we do yet
19:32:32 in any case, please avoid breaking low-precision (+/- 2 hours) timestamps!
19:32:38 for actions I think we should still give ppl a chance after the meeting to glance at the PR and hit the approve button on gh
19:32:45 BlueMatt: with gossipv2 I want to use block numbers, for exactly this reason.
19:32:50 Coincidence has it that more advanced gossip sync mechanisms are our research topic today
19:32:50 right.
19:32:52 :-)
19:33:05 don't think any of us rely on time in a strong fashion at all, beyond how bitcoin does
19:33:05 #topic Bolt 1: Specify that extensions to existing messages must use TLV #714 (@t-bast)
19:33:06 better-gossip is the reason we never bothered to implement the current gossip filter stuff
19:33:16 #link https://github.com/lightningnetwork/lightning-rfc/pull/714
19:33:19 roasbeef: gossip filters definitely does.
19:33:30 t-bast: would you do the honors of walking us through this one?
19:33:36 depends on how you interpret "strong" ;)
19:33:45 cdecker: my pleasure
19:34:10 So the idea is to make it explicit that all messages can be extended via a TLV stream
19:34:20 perhaps let's do anchor outputs before this since it's more immediate and concrete (spec pr, working impl)
19:34:24 However there are a few optional fields in some messages that make this not-so-obvious
19:34:35 t-bast: "- If it doesn't include `extension` fields: - MUST omit the `extension` TLV stream entirely." this doesn't make sense any more (now there's no length prefix), since an empty TLV is identical to a missing one?
19:35:07 rusty: yes, I can probably improve that one
19:35:19 * BlueMatt doesnt get why we need this - is anyone planning on implementing logic to read any extra message bytes as TLV right now?
otherwise it seems like future guidance that just ties us to something unnecessarily?
19:35:43 So my main concern is that it limits us to only use TLV in future, which might be ok
19:35:57 BlueMatt: we are adding TLV streams in many messages already, and it made us realize that some existing extensions (upfront_shutdown_script) make it hard
19:35:57 right, but, unless there's a reason to, why do so?
19:36:14 So I think it's good to make that explicit in the spec to avoid future work from making this switch painful
19:36:46 The ability to add such TLV streams allows us (and others) to experiment with new features easily without forking from the network
19:36:48 t-bast: I like the *idea* of making those fields compulsory so TLV is easier, but this will break older Eclair if done today, right?
19:36:50 i mean we can add a non-normative "future extensions should heavily consider TLV", but my question is why make it normative, and why worry about such messages until we have to?
19:37:09 rusty: as of 0.3.3 released friday, we're good to go ;)
19:37:12 BlueMatt: and such advice is meta, thus belongs in CONTRIBUTING.md.
19:37:15 also, for those messages, we can just say "they are required if you want to go past them"
19:37:20 rusty: agreed
19:37:23 t-bast: it's premature :)
19:37:34 rusty: and it won't break older eclair, only potentially Phoenix which won't see those messages anyway
19:37:40 like, if you want to add new fields, then those fields become mandatory, but no need to break it?
19:37:59 t-bast: um, the ACINQ node got this wrong until 3? days ago?
19:38:33 rusty: yes, because we were waiting to deploy the updated version because this PR wasn't making progress on the spec, but now the ACINQ node runs 0.3.3 which fixes it
19:38:41 BlueMatt: technically there's an issue about parsing TLV fields (and breaking if there's an even one) vs ignoring them. But since even-fields-I-dont-expect is a Shouldn't Happen, it's hard to care.
19:38:42 rusty: other eclair nodes don't run the custom extensions that broke this
19:38:50 t-bast: OK.
19:39:27 (ignore my last msg lol thought it was a diff subject)
19:39:28 BlueMatt: this is coupled with the test message range, to allow people to experiment with features before proposing them in the spec
19:39:29 So, the only spec change here is changing "ignore additional data" to "parse additional data as TLV".
19:39:55 yeh, basically we need to make all the existing not-really-optional fields explicitly mandatory to build any new extensions on top
19:40:01 rusty: mostly, with the only exception being how we clarify the case of upfront_shutdown_script
19:40:31 I've got a suggestion to retro-fit upfront_shutdown_script as a TLV field with type 0 in the last comments on the PR
19:40:42 the proposal for open_channelv2 moves upfront_shutdown_script to a TLV ;)
19:40:47 * BlueMatt doesnt really want to implement "parse as TLV to validate, and then ignore everything you just parsed and store it as-is so that you can re-serialize as-is for signature verification"
19:40:57 niftynei: yaay
19:41:00 at least for the announcement messages
19:41:01 t-bast: what about just making it be all zeroes if you don't want it?
19:41:41 roasbeef: that's exactly how it ends up being, but I think it makes the spec less messy if that can also be understood as a TLV field
19:41:44 ahh i see the encoding thing now
19:42:18 hmmm but then this isn't really backwards compatible, and now we have 2 ways to parse the same field?
19:42:34 We always send an upfront_shutdown_script (may be a 0 byte). Don't mess with it until open_channel2. Happy to make it compulsory.
19:42:51 no it is fully backwards-compatible because the encoding ends up being the same
19:42:54 mandatory feels like an easier update path, less cleverness
19:43:00 (by sheer luck)
19:43:35 we also have other "not really optional fields" we need to handle as well
19:43:43 Ok, I can make this field mandatory if you prefer, without transforming it into a TLV
19:44:15 Yes but the others are more easily handled, all implementations already include them all the time I believe
19:44:18 BlueMatt: that requirement to preserve unknown msg tails already exists though...
19:44:23 I listed that in the last commit message
19:44:55 roasbeef: the only other optional field is the commitment point in channel_reestablish
19:45:04 rusty: yes, my point is that the diff here is that you'd parse that data as tlv, then ignore all the parse results unless it fails
19:45:05 this one can easily be made mandatory IMO
19:45:07 which, ugh
19:45:10 t-bast: but if I start parsing open_channel insisting that there be a (maybe 1 zero byte) upfront_shutdown_script, I think old eclair will no longer open channels with me?
19:45:29 t-bast: iirc max htlc as well in chan upd
19:45:33 BlueMatt: well, that now applies for every msg you receive, so it's not so bad?
19:46:09 rusty: you wouldn't, because we don't set the upfront_shutdown_script feature bit, so you're not allowed to send a non-empty script :)
19:46:12 rusty: no, i mean you'll still have different parsing for the announce messages, but, whatever
19:46:29 Ok, seems that there is still quite a lot to discuss about this one, do we want to hash it out now (and forego the other topics) or do we want to defer?
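For context on the #714 discussion above: a BOLT 1 TLV stream is a sequence of records, each a BigSize type, a BigSize length, and a raw value, with types required to be strictly increasing. A minimal illustrative parser (not any implementation's actual code); the even/odd rule — unknown *even* types must be rejected, unknown odd types may be ignored ("it's ok to be odd") — is left to the caller, since it depends on which types a message knows:

```python
def read_bigsize(buf, i):
    """Decode a BOLT 1 BigSize integer at offset i; return (value, next_offset)."""
    tag = buf[i]
    if tag < 0xfd:
        return tag, i + 1
    size = {0xfd: 2, 0xfe: 4, 0xff: 8}[tag]
    return int.from_bytes(buf[i + 1:i + 1 + size], "big"), i + 1 + size

def parse_tlv_stream(buf):
    """Parse a TLV stream into {type: value}. An empty buffer yields an
    empty dict -- which is why an empty stream is indistinguishable from
    a missing one, as rusty points out above."""
    records, i, last_type = {}, 0, -1
    while i < len(buf):
        t, i = read_bigsize(buf, i)
        length, i = read_bigsize(buf, i)
        if t <= last_type:
            raise ValueError("TLV types must be strictly increasing")
        if i + length > len(buf):
            raise ValueError("TLV value truncated")
        records[t] = bytes(buf[i:i + length])
        i += length
        last_type = t
    return records
```

The retro-fit idea discussed for upfront_shutdown_script works because a length-prefixed field at the front of the extension bytes happens to serialize the same as a TLV record with type 0.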
19:46:32 I dont feel *super* strongly here (sorry if it's coming across as such), just noting that it seems like a bit of jumping the gun
19:46:34 roasbeef: this one is gated on another byte in the message, so the logic can stay as it is today (channel_flags)
19:46:56 t-bast: ah yeh...that was kinda weird in retrospect
19:47:05 roasbeef: heh yeah it was
19:47:06 t-bast: exactly, if you open a channel with me, your msg will lack that field and be malformed (once I remove the old logic which didn't read that field if you didn't set the option)
19:47:20 i'm on board w/ just making it mandatory though, we have a PR for lnd that is going in this direction
19:47:39 cdecker: agreed, thanks for the feedback, if you're ok we can keep discussing that on the PR? I'll make the upfront_script mandatory instead of my TLV hack
19:48:12 rusty: I don't think so, right now the spec allows me to send an empty upfront_script regardless of the features advertised
19:48:12 #action everyone to continue the discussion of #714 on GH
19:48:25 t-bast: and does old eclair do that?
19:48:42 * cdecker is sorry for being so pushy, but these meetings are too short to hash out every detail
19:48:54 #topic BOLT7: reply_channel_range parameter #560
19:49:02 #link https://github.com/lightningnetwork/lightning-rfc/pull/560
19:49:08 Now this should be an easy one
19:49:48 rusty: let's take this to the PR, thanks for the feedback!
19:49:49 It's a clarification on how the parameters should be interpreted when building and receiving range queries
19:50:44 rusty: would this require a clarification of what complete=1 means?
19:50:59 "MUST set with `chain_hash` [...]" -> should that "with" really be there? I don't understand this sentence
19:51:40 Good catch, I think that "with" needs dropping
19:52:41 cdecker: ack
19:52:53 Is that the only issue people have with this PR?
If that's the case I'll fix it up and merge
19:53:30 I must admit the two next conditions aren't very clear to me
19:53:49 the "checking" is confusing me a bit
19:53:53 It's a bit redundant, yes
19:53:57 heh yeh, they had the same interpretation of complete as we did in the past
19:54:01 OK, so high level: you can send multiple reply_channel_range in response to a query_channel_range (due to 64k limit, mainly).
19:54:30 You could choose to implement these responses as precanned 1024-block chunks, for example.
19:54:57 But your answers have to cover the range they asked for.
19:55:28 rusty: that sounds good
19:55:41 yes, but cover exactly, or can you overlap?
19:55:43 So we're allowed to send a channel for height b even though they asked for [b+3, b+1000]?
19:55:44 e.g. if they ask for first_blocknum = 500,000 and number_of_blocks = 10,000, you can theoretically give 10,000 responses, one per block. (DO NOT DO THIS)
19:56:29 sstone: we allow an overlap of one block iirc
19:56:32 sstone: that should have been specified. Wasn't. We recently loosened the requirements so that each one must give *some* new information, but may overlap IIRC.
19:56:33 I think it would be useful to have a quick look at https://github.com/lightningnetwork/lightning-rfc/pull/730 in the same review process, this is tightly linked
19:56:59 Shouldn't it be "MUST overlap"?
19:57:06 the current spec implies you can send "more" as long as you cover what the request asks for; #560 seems to imply that replies should cover the exact range that was queried, which is imho wrong
19:57:26 wpaulino: ^
19:57:32 there is also a difference between lnd and cl/eclair
19:57:38 cdecker: MUST overlap the area asked, yes. I was implying that we should have said MUST NOT overlap other responses.
19:57:49 Ah ok, I see
19:57:59 sstone: as of 0.9?
iirc we're now all in sync (other than that issue we reported to eclair)
19:58:11 So since #730 is indeed very closely coupled with this one, let's put that on the table as well
19:58:13 cl/eclair allow gaps between successive channel_range_replies, lnd does not (?)
19:58:43 Yes, the intent was that you can over-send. c-lightning keeps a bitmap; it would have been nicer to assert replies must be in ascending order :(
19:59:25 basically the question is: do we allow gaps between replies?
20:00:01 we changed to always send the complete range, so no gaps
20:00:07 as before we weren't compatible w/ cl
20:00:14 sstone: hmm, c-lightning will not. We keep a bitmap of blocks, and clear as we receive responses. We need to clear all bits to consider it complete.
20:00:40 If everyone sends in block-ascending order, that would simplify the spec?
20:00:55 But on the receiver side (not sender-side) we've seen that discrepancy in what's accepted
20:01:31 lnd has special cases to accept the old way we did things, but we'll send things "properly" now and do a little more validation on what we recv (if we don't detect it's the legacy case)
20:02:03 if the query is first_blocknum = 500,000 and number_of_blocks = 10,000, but there's no result in blocks 510,000 to 520,000: our first range result is blocks between 500,000 and 510,000, then the second range result is for blocks between 520,000 and onwards. Should the second message's `first_block_num` be 510,000 or 520,000?
20:02:38 520
20:02:39 (you can ignore the `number_of_blocks`, irrelevant to the example and misleading)
20:03:16 I think it makes sense not to have gaps, and it's not explicit, hence #730
20:03:29 the two changes we implemented recently: https://github.com/lightningnetwork/lnd/pull/3785 https://github.com/lightningnetwork/lnd/pull/3836
20:03:42 which was triggered by a bug report by CL: https://github.com/lightningnetwork/lnd/issues/3728
20:03:54 perhaps we should continue this in the issue?
or the PR ref'd above
20:04:06 Ouch, yes, c-lightning gets upset if you overlap your responses, i.e. every block you cover must not have been covered by a previous response. That's sane, but too strict for the current spec.
20:04:14 I believe it's worth clarifying in the spec since implementations diverged (can be done on Github instead of IRC)
20:04:22 t-bast: agreed.
20:04:34 this seems like something the protocol test framework rusty's been working on would help with
20:04:41 So defer #560 and #730 to GH?
20:04:56 agreed with niftynei and cdecker ;)
20:05:04 My preference would be to allow over-response, but require ascending order. Then detecting the last response is simpler than keeping a bitmap.
20:05:06 (in addition to better clarification in the spec)
20:05:13 Oh right, we need to talk about rusty's proto tests, rusty would you volunteer to present that next time?
20:05:19 (Assuming that everyone today *does* ascending order)
20:05:25 cdecker: ack!
20:05:41 #action everyone to discuss the minutiae of #560 and #730 on GH
20:05:54 rusty: yes everyone does :)
20:06:10 sstone: great, then we can cheat!! :)
20:06:17 Ok, that brings us to the Long Term updates. t-bast and joostjgr have volunteered to give a brief update
20:06:25 #topic Current status of the trampoline routing proposal (@t-bast)
20:06:34 #action rusty to rework 560 and 730 to assume ascending order.
20:06:39 Maybe we can start with anchor outputs, which should be quicker?
20:06:50 * cdecker passes the mic to t-bast
20:07:01 Sure, let's do anchor first
20:07:10 #topic Current state of the anchor output proposal (@joostjager)
20:07:14 ok, the update:
20:07:21 two main points open atm:
20:07:46 one is the channel open parameters. whether to hard code and how to negotiate. we haven't come to an agreement on that.
20:08:09 we remain of the opinion that we shouldn't lock ourselves in a corner.
20:08:21 * cdecker nods furiously
20:08:39 the fixed anchor size of 294 sats is already obsolete because we needed 330 sats with the p2wsh outputs on the commitment tx :)
20:09:01 point two is about the elimination of gaming with the symmetric csv delay
20:09:22 wut? I thought we had a whole call about this issue and agreed we should just fix it to a reasonable amount?
20:09:30 we realized that there still is a game present in the htlc resolution. there the symmetry isn't present. so you could use that to circumvent a to_remote csv delay
20:09:50 (whatever happened to the audio of this, connor?)
20:10:01 you'd still be tied to the htlc cltv value, but that can be much lower than the to_self delay
20:10:43 so the solution doesn't actually address all griefing vectors
20:11:04 at this point, do we want to try and patch this (even more script changes), or just leave the to_remote non-delayed as things are rn
20:11:17 that's a good catch
20:11:40 imo it comes from trying to introduce symmetry into what is fundamentally an asymmetric commitment scheme
20:11:41 I think we don't want to leave the to_remote non-delayed though
20:11:45 we can't leave it completely non-delayed, because of the carve-out rules. it would at least need a csv delay of 1
20:12:00 sure yeh, otherwise it would just be a csv of one
20:12:13 Good question, at some point we'll get too many changes and apparent security against everything, but only because we ourselves don't understand all the implications anymore
20:12:28 Is there a point in making incremental changes towards a good solution?
20:12:28 but if we try to patch this, then we go back into the territory where cltv+csv depend on each other, which we worked to remove w/ the 2-stage htlc scheme
20:12:39 that is indeed the question.
take it further possibly by also requiring the remote to use 2nd-level txes, or stay close to what we have now and don't address the gaming
20:13:33 the incremental would be to leave the csv at 1 (for carve out) for now, to later revisit once we think we've addressed all the gaming vectors (but imo as I said above, we're trying to shoehorn symmetric redemption into an asymmetric commitment scheme)
20:13:34 Is there a summary of that somewhere on the PR? I didn't see it
20:13:36 joostjgr: could you lay out the venues for gaming we have in the current proposal?
20:13:39 we don't need the symmetric csv for anchors. it was more in the category of 'now that we are fixing the format anyway', iirc
20:13:56 t-bast: iirc they discovered this earlier this evening, but yeh we should add it to the PR
20:13:57 not on the pr, it came up this afternoon
20:14:21 mhmm they're independent: having better fee control vs trying to "fix" possible griefing vectors
20:14:23 joostjgr: indeed. And if that doesn't achieve what we want, clearly the minimal incremental change is the Right Thing.
20:14:26 gotcha, could you summarize this on the PR so that we can think about it more?
20:14:30 joostjgr: seems like an action to go write it up, then?
20:14:40 ok, will do
20:15:50 Ok, sounds like we're at a crossroads: is it worth deferring this just to "quickly clean up" a venue for gaming, or do we make a minimal change with a well-defined scope, but requiring a later patch for the gaming?
20:16:21 It sounds to me like we should go the incremental route, since we're still finding new venues for gaming
20:16:22 it is also a question what the priority of addressing the gaming is compared to other outstanding issues with the LN
20:16:23 as it'll already be csv-based, it won't be too big of a change to modify it from having a value of 1
20:16:35 implementation-wise, we're just about finished on our end
20:16:45 cdecker: we'll break this all with eltoo anyway, so I doubt we'll address the gaming separately short of that.
20:17:11 ok, sounds like we'll just up the to_remote csv from 0 to 1 then
20:17:19 and leave everything else unchanged.
20:17:20 well eltoo runs into other issues that are slightly related to this, since the cltv has a relationship with the csv once again
20:17:30 Who knows when we'll get eltoo, so I prefer having a fix, rather than hoping for future magic :-(
20:17:32 right, iterative seems like a sensible approach...moving forward on anchor outputs is pretty critical for a post-0-fee world
20:17:35 then the only remaining hot topic is ... variable anchor size
20:18:04 Let's not pull eltoo into the discussion just yet, but a recent ML post addressed just that issue roasbeef
20:18:27 orly? guess i'm behind lol, there's kind of a lot of cross chatter on the ML these days
20:18:39 bluematt: yes, we've had a meeting and exchanged opinions, but no firm acks were given
20:18:41 * rusty is months behind on lightning-dev too....
20:18:52 rusty: lol ok cool, I'm not the only one :p
20:18:53 roasbeef: I know, I just get really excited when people find new uses for noinput / anyprevout :-)
20:19:13 joostjgr: you mean the amount of satoshis we should give the anchor?
20:19:16 joostjgr: you implied above that 330 was the new minimum?
20:19:21 cdecker: yeh
20:19:21 yes
20:19:30 because our anchor out is p2wsh, 330 is the dust limit
20:19:32 joostjgr: wut? no, iirc the meeting was pretty firm.
20:19:37 we assumed p2wkh before
20:19:52 sure, if the scripts change, it'll have to change appropriately, but we were pretty firm on "minimal-ish value"
20:19:58 i think ultimately, we don't want to put ourselves in a corner, so we may just make it a tlv field to allow lnd nodes to update the value later in the future w/o having to roll out a new version, if we don't end up making it a param in the spec
20:20:27 it's just a matter of managing uncertainty in the future imo
20:20:39 roasbeef: if you want to negotiate it, you can, but that seems pretty nonsensical, and the discussion on the call seemed that everyone largely agreed there.
20:20:55 Ok, let's prescribe a constant for now, and add flexibility later, when needed/useful
20:21:02 (aka rusty's mantra)
20:21:10 I dont really want to reopen this discussion, but there's no sense "negotiating" this kind of value; if you want more flexibility, both sides have to upgrade *either way*
20:21:14 i wasn't on the call BlueMatt, so I can't speak for others, but people seem to not be strongly tied to their past statements
20:21:20 we've beaten this horse to death 5 times.
20:21:39 well it seems it has re-opened after a surprise came up during implementation
20:22:05 does output size affect the dust limit because you're considering the size of the script in the spending tx?
20:22:23 we've had this discussion ~5 times, but the basic premise that everyone on the call agreed on was "if we need to change it, negotiation doesn't help because both sides need to upgrade anyway, so writing negotiation code is just complexity when you'll only accept a fixed value, so negotiation is useless"
20:22:23 niftynei: yeah, p2wsh spends are bigger than p2wkh spends
20:22:40 BlueMatt: you don't need to upgrade if it's just a command line or even a param to the open call
20:22:43 so the dust limit is being applied to the spending tx, not the anchor output tx
20:22:45 so adding spec complexity is just shooting ourselves in the foot
20:22:47 To vary it you need a new msg to vary existing channels, too, which is where it really gets nasty...
20:22:52 you can have things be fixed now, but allow negotiation in the future
20:23:01 rusty: vary existing channels?
20:23:11 you still need both sides to accept a different value?
20:23:24 so might as well just fix it and let both sides assume it
20:23:25 I think this is proving to be a rather good discussion piece :-) roasbeef it's important we learn about these findings so we can follow along with the development. BlueMatt a decision taken in a vacuum is unlikely to hold up when confronted with real implementations. I think we made good progress on this one ^^
20:23:38 roasbeef: if this is insufficient after some Crisis Event, surely we need a way to fix established channels too?
20:23:52 cdecker: agreed that decisions in a vacuum are useless, but roasbeef didn't bring up a new issue we weren't aware of here, afaict?
20:24:39 Agreed, I also don't quite understand the need to make it configurable, hence my call to explain :-)
20:24:59 if you need both sides to agree to a different value, just assume it and move on, you don't have to negotiate anything if both sides have to opt to switch the value to X?
20:26:24 t-bast: you ok with bumping the update on trampoline to next meeting?
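The 294 vs 330 sat figures above come from Bitcoin Core's dust threshold, which (as niftynei's question gets at) charges an output for its own serialized size *plus* the witness-discounted size of the input that will spend it, at a dust relay fee of 3 sat/vbyte. An illustrative sketch modeled on Core's `GetDustThreshold` for segwit outputs — the constants are Core policy defaults, not spec requirements, and the 1-byte script-length prefix assumed here only holds for scripts under 253 bytes:

```python
DUST_RELAY_FEE_SAT_PER_VB = 3  # Bitcoin Core default dust relay feerate

def dust_threshold(script_pubkey_len):
    """Dust threshold in sats for a segwit output with the given scriptPubKey size."""
    # Serialized output: 8-byte amount + 1-byte script length + script itself.
    out_size = 8 + 1 + script_pubkey_len
    # Spending input: 32 (txid) + 4 (vout) + 1 (scriptSig len) + 4 (sequence),
    # plus an assumed 107-byte witness discounted by the witness scale factor.
    spend_size = 32 + 4 + 1 + 4 + 107 // 4
    return (out_size + spend_size) * DUST_RELAY_FEE_SAT_PER_VB

# P2WPKH (22-byte script) -> 294 sats; P2WSH (34-byte script) -> 330 sats,
# matching the anchor amounts discussed above.
```

This is why switching the anchor from p2wkh to p2wsh moved the minimum sane amount from 294 to 330: the bigger scriptPubKey raises the output's own share of the threshold.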
20:26:49 cdecker: of course, no hurry, this is good discussion on anchor outputs
20:27:34 And the research topic (improved gossip sync) will also have to move :-(
20:27:45 Sure, no problem
20:27:57 * cdecker will need to leave soon
20:27:59 * BlueMatt is somewhat starving...
20:28:02 lets call it?
20:28:12 sgtm
20:28:14 joost/roasbeef: have you also looked a bit into the issue of reserving UTXOs to protect against attacks?
20:28:26 we can schedule another call (this time with roasbeef, I guess, unless connor can just share his thoughts from the meeting with him so we don't have to repeat it)?
20:28:27 ok nevermind, I can get that update offline
20:28:49 #action everyone to keep an eye on the anchor outputs proposal and help wherever possible :-)
20:29:16 Yeah, we can set up a call to discuss anchor output sizes more in detail
20:29:31 * BlueMatt is still missing the audio connor theoretically recorded on the last one
20:29:39 rusty: sure, i'm also working on a way to update commitment types in-flight, so ppl don't need to close out channels to get all the new fancy features
20:29:55 BlueMatt: would you like to schedule it?
20:29:55 i need to head out, thanks for a great meeting everyone
20:29:57 SGTM, joost/roasbeef can you propose a few times for a call?
20:30:00 t-bast: our current impl doesn't make the fees minimal, so this is used mostly as a fee boost rather than providing all the fees
20:30:51 roasbeef: even with that, you'll potentially need to reserve UTXOs to avoid being caught off-guard when channels close one after another, right?
20:31:29 roasbeef: I'm thinking about big node cases, with thousands of channels
20:32:29 t-bast: sure, but i mean w/o UTXO reservation, things fall back to how they are now (we still have fees on the commitment w/ update_fee); we also have some utxo management fan-out code for a system we run that we can use here
20:32:35 Ok, let's call it an end, but everybody is welcome to stick around and continue discussing (I will while cooking dinner)
20:32:39 #endmeeting