20:08:11 <rusty> #startmeeting
20:08:11 <lightningbot> Meeting started Mon Aug 17 20:08:11 2020 UTC.  The chair is rusty. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:08:11 <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
20:08:29 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/785
20:09:12 <rusty> I think we reached consensus last meeting on this, but let's just confirm?  The "must specify" + "assume 18 if not specified" combo seems to have made everyone "not unhappy"?
20:10:09 <roasbeef> yeh I think so, everyone should specify and it's safe to assume 18 otherwise, as they should still accept the payment w/ the larger value
20:10:23 <roasbeef> but _decreasing_ would require a more uniform update path
20:10:40 <rusty> roasbeef: yep, I agree.  OK, anyone object to applying it?
20:11:27 <rusty> #action Apply https://github.com/lightningnetwork/lightning-rfc/pull/785
20:11:33 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/767
20:12:17 <roasbeef> overall change sgtm, but haven't reviewed the wording or anything yet
20:12:25 * roasbeef glances at the PR....
20:12:54 <rusty> roasbeef: it's literally "if a channel's latest `channel_update`s `timestamp` is older than two weeks" -> "if a channel's oldest `channel_update`s `timestamp` is older than two weeks"
20:13:26 <niftynei> is it oldest or the newest?
20:13:34 <roasbeef> rusty: diff context?
20:13:38 <roasbeef> ahh ok switched PRs
20:13:42 <roasbeef> approved 785
20:14:14 <niftynei> ah i see, that's the whole thing isn't it lol
20:14:20 <rusty> niftynei: oldest... basically if either end lets the channel go stale, it's dead.
20:14:26 <roasbeef> yeh the actual diff is tiny
20:14:31 <niftynei> yep, makes sense to me.
20:14:40 <roasbeef> I guess it's clarification in a sense, as "latest" can be kinda ambiguous
20:15:12 <roasbeef> I think the end goal here is also more aggressive pruning, as nodes need to make sure they aren't caught slipping w.r.t. how old their updates are
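A rough sketch of the pruning rule under discussion, assuming a gossip store that remembers the latest `channel_update` timestamp per direction (the object and field names are illustrative, not any implementation's code):

    import time

    TWO_WEEKS = 14 * 24 * 3600

    def should_prune(chan, now=None):
        # chan.update_timestamps is assumed: latest channel_update timestamp
        # seen for each direction, or None if no update was ever seen.
        now = now if now is not None else int(time.time())
        stamps = [ts for ts in chan.update_timestamps if ts is not None]
        if len(stamps) < 2:
            return False  # half-announced channels handled elsewhere
        # PR 767 wording: key off the *oldest* of the two updates, so the
        # channel goes stale as soon as either side stops refreshing.
        return now - min(stamps) > TWO_WEEKS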
20:15:32 <rusty> aside: next lightning version I'm going to change routing to ignore channels which are disabled in *either* dir.  Turns out that (of course, but I'm slow) if one side goes offline, it doesn't send an update with disable.  This leads to loads of unidir channels which aren't actually usable.
20:15:57 <rusty> s/lightning/c-lightning/
20:16:23 <sstone> roasbeef: can you please clarify the "y'alls continues to send fresh channel_updates" bit ?
20:16:38 <roasbeef> yeh we've been looking at making some changes in that area ourselves too, lnd doesn't have an easy api to enable/disable, so we've seen some nodes start to advertise super high fees as a substitute if they don't want the chan to be used for w/e reason
20:16:49 <rusty> (Could refine this to take the *latest* update, so if one side has just marked it enabled it'll be counted, but in practice that's rare).
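A minimal sketch of that routing tweak, assuming a per-direction `disabled` flag taken from each side's latest `channel_update` (names are hypothetical, not c-lightning internals):

    def usable_for_routing(chan):
        # Skip channels that are disabled in *either* direction: an offline
        # peer never broadcasts a disabling update for its own side, so a
        # channel disabled one way is usually not usable at all.
        return not (chan.half[0].disabled or chan.half[1].disabled)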
20:17:16 <roasbeef> sstone: I think he means that rn lnd nodes will continue to send the update even if the other person is never online
20:17:30 <roasbeef> which I think all other nodes do too? as in we always sign before the "deadline" regardless of if the other peer is active or not
20:17:33 <rusty> sstone: currently nodes will refresh a channel's update, even if they haven't seen the peer all that time.
20:17:50 <rusty> Yeah, us too.
20:18:14 <rusty> (We don't close without explicit instructions, but "been away for 2 weeks" would be a pretty good heuristic...)
20:18:49 <sstone> mmm... what is the rationale for publishing updates for a channel for which the remote peer is offline ?
20:18:50 <niftynei> right, this proposed PR would remove those channels from gossip propagation
20:19:28 <rusty> sstone: we simply didn't check.  We don't actually keep records on when we last saw a peer (at least, not inside our gossip logic)
20:20:14 <rusty> So, meanwhile, are people happy with https://github.com/lightningnetwork/lightning-rfc/pull/767?  Seems harmless, and useful, though we could have another PR to fix the refresh-for-dead-peers logic.
20:20:28 <niftynei> hmm wait maybe that's not true... this is just a recommendation eh
20:21:43 <rusty> niftynei: yeah, and one we should probably implement.
20:22:48 <rusty> No objections in 1 minute and I'll action to apply it?
20:22:55 <niftynei> well it sounds like we'll already be de-facto implementing it by ignoring these channels for routing :)
20:23:18 <rusty> (Next: https://github.com/lightningnetwork/lightning-rfc/pull/695 because why not)
20:23:21 <niftynei> does 'prune' here, in bolt 07, mean 'remove from gossip'?
20:23:46 <roasbeef> sstone: in our case, the systems just don't communicate
20:23:50 <niftynei> or is this just 'fyi, you should probs do this to your internal representation'
20:24:05 <rusty> niftynei: yes, you no longer propagate it either.
20:24:16 <rusty> "This channel is dead to me"
20:24:29 <niftynei> gotcha :thumbs_up:
20:24:46 <roasbeef> one is the "link" the other is the "gossiper"
20:24:57 <sstone> ok we have a different design but I guess this PR is harmless, so I won't object
20:25:11 <rusty> #action apply https://github.com/lightningnetwork/lightning-rfc/pull/767
20:25:18 <roasbeef> ours is basically a cron job internally that runs and checks if it needs to send out another one, and sleeps if it doesn't
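Roughly that shape, sketched in pseudocode (the wake-up interval and helper names are assumptions, not lnd's actual code):

    import time

    REFRESH_INTERVAL = 24 * 3600         # assumed wake-up period
    PRUNE_HORIZON = 14 * 24 * 3600       # two-week staleness used for pruning

    def refresh_job(own_channels, rebroadcast_update):
        while True:
            now = int(time.time())
            for chan in own_channels():
                # Re-sign and re-send our channel_update before peers would
                # consider it stale and prune the channel.
                if now - chan.last_update_timestamp > PRUNE_HORIZON - REFRESH_INTERVAL:
                    rebroadcast_update(chan)
            time.sleep(REFRESH_INTERVAL)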
20:25:26 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/695
20:26:01 <rusty> OK, this is really rough (should prob be a TLV).  But we find ourselves probing for capacities now with MPP, which seems silly.
20:26:04 <roasbeef> hmmm, isn't this information already available if clients look at their path finding history?
20:26:19 <roasbeef> and nodes should be setting max_htlc properly to signal the largest htlc they're willing to accept
20:26:47 <roasbeef> it also effectively undoes the philosophy behind "temp channel failure"
20:26:47 <rusty> roasbeef: you currently end up bouncing off the channel until you get low enough.  That's a pretty poor use of resources.
20:27:20 <roasbeef> sure but you can also remember your past attempts, and discount any failures temporarily, so like decay the probability
20:27:38 <roasbeef> fwiw there've also been a buncha papers that show how mapping certain sections of the network is super easy via probing
20:27:59 <roasbeef> tho there're a series of defenses against that which require no protocol changes, but which aren't super widely implemented
20:28:08 <rusty> roasbeef: this simply shortcuts the process, esp when you're going to fail anyway.
20:28:19 <roasbeef> oh this is on the channel_update....somehow I read it as a new failure message
20:28:30 <roasbeef> err no it's a failure message kek
20:29:28 <rusty> Yeah, it's after the channel_update... trying to avoid the current inefficient probing, or at least reduce it.
20:30:10 <roasbeef> is capacity_order meant to be like 3 bits? "ok, send more, send less?"
20:30:17 <roasbeef> or 2
20:30:30 <rusty> roasbeef: naah, it's "you are out by this many orders of magnitude"...
20:30:36 <roasbeef> ahhh
20:30:42 <niftynei> seems like it's you're within 2*2, you're within 2*16, etc?
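For illustration only, since the PR is admittedly rough: one way a sender could use such a hint (the field name, base-10 granularity, and semantics here are all assumptions from this discussion, not the PR text):

    def shrink_for_retry(attempted_msat, capacity_order):
        # capacity_order: hypothetical hint saying roughly how many orders of
        # magnitude the attempted HTLC overshot what the channel could carry.
        # One hint replaces a series of temporary_channel_failure bounces
        # while the sender binary-searches downwards.
        return attempted_msat // (10 ** capacity_order)

e.g. a 10,000,000 msat attempt that comes back with capacity_order = 2 would be retried (or MPP-split) at around 100,000 msat.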
20:31:04 <bitconner> hey y'all 👋  missed the discussion about 767 but happy to circle back later if people have more questions
20:31:36 <niftynei> hello bitconner !
20:31:41 <rusty> bitconner: we actioned it for merge, assume that's OK?
20:32:01 <bitconner> sgtm!
20:32:27 <niftynei> seems like the real question here is "are hints ok or should we keep you guessing?"
20:32:30 <rusty> Dunno if the capacity_order should be relative to their HTLC, or simply absolute...
20:33:26 <niftynei> absolute meaning 'hints for the total channel capacity' vs 'hints for capacity missing for this htlc'?
20:33:35 <rusty> niftynei: yeah.
20:33:40 <roasbeef> relative to the htlc leaks more info I guess?
20:33:54 <bitconner> main q i think is whether to just return the value explicitly or to actually fuzz. fuzzing only adds a logarithmic factor to the htlcs one needs to send to binary search a channel capacity no?
20:33:55 <roasbeef> err less info*
20:34:05 <niftynei> right, the htlc amount allows you to probe for a more exact amount
20:34:11 <rusty> roasbeef: I'm not sure... I feel it's true, but I can't really prove it either.
20:34:42 <rusty> bitconner: fuzzing avoids them *accidentally* probing, at least...
20:35:41 <rusty> Anyway, discuss on issue?  The whole idea makes me kinda nervous, but I do feel that not including it is just a fig leaf and not real privacy.
20:36:30 <roasbeef> yeh I think discuss on issue, I can bring up some of the recent papers in this area, so we can examine some of their suggestions
20:36:41 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/688 (yay!)
20:36:45 <niftynei> sgtm. it does get into interesting questions about what the point of (semi)obscuring current capacity is
20:37:12 <rusty> OK, we have a successful interop test!  Any objections to applying this?
20:37:52 <roasbeef> niftynei: mhmm, many of the papers argue that it isn't really effective, given ppl don't set max_htlc or limit the outstanding amt, and you can kinda paint parts of the sub-graph given probing is "free"
20:38:01 <roasbeef> yay interop!
20:38:43 <rusty> #action apply https://github.com/lightningnetwork/lightning-rfc/pull/688
20:39:05 <roasbeef> seems the fee thing has been resolved which is nice
20:39:08 <rusty> Nice!  I'm also running it on my testnet and mainnet nodes, but haven't actually interop tested yet because I'm slack :)
20:39:47 <roasbeef> so we need to make a small change on our end re the fee thing, but we should be able to bundle it in our next major release (about to release 0.11, so would be 0.12) or possibly a minor release so we can make it more widespread on the network sooner
20:39:59 <rusty> I think joost should do the honors of merging it, since he did all the hard work...
20:40:12 <roasbeef> he's back from vacation just in time ;)
20:40:37 <roasbeef> also johan back too in time
20:41:02 <roasbeef> it's late-ish for them, so tomorrow my time they should be able to do the final sniff and hit the merge button
20:41:09 <rusty> roasbeef: yeah, our implementation is config-experimental only.  It just forgets the anchor outputs on unilateral close.  But maybe that's OK, hmm...
20:41:16 <rusty> roasbeef: nice!
20:42:12 <rusty> OK, that's the substantive part done I think.  Some minor things:
20:43:07 <rusty> #738 (ln.dev) I have found some people to try to wrangle the site.  The main thing I want is a matrix of major implementation versions and what features they support/require (e.g. payment secrets, MPP, anchor outputs).
20:43:24 <rusty> That would give implementors an idea of when they can deprecate...
20:44:47 <roasbeef> cool, we have a checklist in our readme, but it's super outdated
20:44:59 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/672
20:45:04 <roasbeef> like the rest of our readme really lol
20:45:30 <rusty> roasbeef: we haven't even been tracking that, I'll have to dig through changelogs at the very least... :(
20:45:30 <roasbeef> woah, this one is old, I'd forgotten about it lol
20:45:50 <rusty> Seems like we should assign a feature bit and move to interop testing?
20:45:53 <roasbeef> wasn't there something on the bitcoind side related to this?
20:45:58 <roasbeef> the anysegwit thing
20:46:07 <roasbeef> like something about modifying relay behavior...? I'm behind on irc lol
20:46:25 <rusty> roasbeef: I think it was simply awaiting 0.19.0 being widely deployed...
20:46:51 <bitconner> previously the mempool only allowed version 0, but that was relaxed iirc
20:46:59 <roasbeef> this is like related to taproot if I'm vaguely remembering it properly
20:47:42 <rusty> bitconner: yeah... so if they propagate, there's no reason to ban them.  However, current clients do, so we need a feature bit.
20:48:26 <jeremyrubin> hi
20:48:37 <jeremyrubin> roasbeef: are you asking about wtxid relay?
20:48:37 <rusty> I propose 22/23 as feature bits for this: that seems to be the next cab off the rank?
20:48:39 <roasbeef> maybe jeremyrubin  knows?
20:48:45 <roasbeef> idk maybe, lol
20:49:15 * bitconner waves
20:49:22 <rusty> jeremyrubin: my understanding is that, regardless, a tx with a segwit v(1-16) output will currently propagate?
20:49:25 <roasbeef> fwiw, rn I think we just limit on size...of the addr, which is also diff for the next segwit version
20:49:31 <roasbeef> in terms of our validation of close addrs
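A sketch of what accepting those addresses might look like for the close/shutdown script — this covers only the native-segwit case the PR adds (length bounds per BIP141; the helper name is hypothetical):

    def acceptable_shutdown_script(script: bytes) -> bool:
        # Version opcode (OP_0 or OP_1..OP_16) followed by a single push of
        # the witness program.  v0 programs must be 20 or 32 bytes; future
        # versions allow 2..40 bytes.
        if len(script) < 4 or len(script) > 42:
            return False
        version_op, push_len = script[0], script[1]
        if push_len != len(script) - 2:
            return False
        if version_op == 0x00:
            return push_len in (20, 32)
        if 0x51 <= version_op <= 0x60:
            return 2 <= push_len <= 40
        return False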
20:49:41 <jeremyrubin> Output maybe.
20:49:44 <jeremyrubin> Input no
20:49:48 <jeremyrubin> Let me check
20:50:22 <rusty> roasbeef: yes, I suspect it was our discussion of how we have to handle this that led to that bitcoind PR, but I might be wrong.
20:51:18 <rusty> (The idea was "upgrade once and your software will handle all future segwits too", but that proved fraught with subtle problems like this one...)
20:52:19 <bitconner> https://github.com/bitcoin/bitcoin/pull/15846
20:52:41 <jeremyrubin> TX_WITNESS_UNKNOWN seems to be a standard output
20:53:15 <bitconner> > This makes sending to future Segwit versions via native outputs (bech32) standard for relay, mempool acceptance, and mining. Note that spending such outputs (including P2SH-embedded ones) remains nonstandard, as that is actually required for softfork safety.
20:53:48 <jeremyrubin> I'm not sure where we reject the inputs though, will have to look
20:54:05 <jeremyrubin> ah
20:54:11 <jeremyrubin> discourage upgradeable witness
20:54:25 <jeremyrubin> So we only reject unknown witness versions when the script is run for mempool acceptance
20:54:28 * jeremyrubin carry on
20:55:14 <rusty> jeremyrubin: thanks for confirm.  So, let's assign a feature bit?  Anyone object?
20:55:37 <jeremyrubin> There is one issue
20:55:52 <jeremyrubin> Which is that you can get weird cross version pinning stuff
20:56:09 <jeremyrubin> E.g., v2 witness looks unspendable to an old client
20:56:20 <jeremyrubin> but then to a new client, v2 witness is spendable
20:56:33 <jeremyrubin> so new nodes could have another child tx causing pinning on new but not old
20:56:44 <jeremyrubin> Which might be confusing for LNDs to deal with
20:57:02 <rusty> jeremyrubin: in this case, it's a mutual close so pinning isn't really an issue AFAICT.
20:57:43 <jeremyrubin> cool
20:58:08 <bitconner> rusty: changes look good to me, i'd vote for proceeding with feature bit assignment
20:58:25 <rusty> OK, I'll assign 22/23 as features on the issue.  Interop testing on testnet should be pretty easy with some hacks...
20:59:43 <rusty> OK, let's sneak in one more: blast from the past: https://github.com/lightningnetwork/lightning-rfc/pull/641
21:00:12 <rusty> #action Assign features 22/23 for https://github.com/lightningnetwork/lightning-rfc/pull/672 for interop testing.
21:00:22 <rusty> #topic https://github.com/lightningnetwork/lightning-rfc/pull/641
21:00:41 <rusty> This is about not giving historical data when timestamp filtering.
21:00:43 <bitconner> hehe wow that is a blast
21:01:27 <bitconner> for those catching up, the idea is that just setting really old timestamps makes this initial dump equivalent to `initial_routing_sync`
21:01:52 <bitconner> we opted to ditch `initial_routing_sync` in favor of gossip queries because it proved to be inefficient
21:02:13 <rusty> Unf, c-lightning uses timestamp 0 for initial startup on the very first peer, precisely to get the firehose.
21:02:44 <rusty> I definitely think that after the initial gossip_timestamp_filter you shouldn't be able to ask for historicals.
21:03:04 <rusty> But the bootstrap case needs something.
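For context, the bootstrap request being described looks roughly like this (the peer/send helpers are hypothetical; the message and field names are BOLT 7's `gossip_timestamp_filter`):

    def request_full_dump(peer, chain_hash):
        # first_timestamp = 0 with a maximal range asks the peer for
        # essentially all the gossip it has -- the "firehose" bootstrap
        # behaviour PR 641 would restrict.
        peer.send_gossip_timestamp_filter(
            chain_hash=chain_hash,
            first_timestamp=0,
            timestamp_range=0xFFFFFFFF,
        )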
21:03:35 <bitconner> why can't it just use the same set reconciliation protocol?
21:03:55 <rusty> bitconner: it's more efficient to ask for everything if you know you know nothing?
21:04:20 <bitconner> in the past we observed cases on mobile where a mobile phone connected to CL would be asked to dump 40+MB every time the app was opened
21:04:30 <rusty> bitconner: yeah, that was quite a while ago.
21:04:57 <bitconner> i think some of those bugs have been fixed, but i think the goal is to curb reliance on asking for dumps in general
21:05:12 <rusty> bitconner: I'm not sure asking for all possible scids is much better though.
21:05:16 <BlueMatt> why? dumps are...the expected thing on first startup
21:05:47 <bitconner> it ends up being < 10 control messages, which is still dwarfed by the thousands of messages for channels and nodes
21:06:22 <BlueMatt> it's not an efficiency question, though, it's a "why make a common case deliberately inscrutable"?
21:07:01 <bitconner> so that people don't lean on it as a catch all, which has already happened
21:07:11 <roasbeef> BlueMatt: it persisted on every connect, not just the first start up
21:07:24 <BlueMatt> right, and that's a bug that rusty indicated was fixed?
21:07:25 <roasbeef> like it requested a dump even if it had everything, or just did a recent dump
21:07:52 <rusty> BlueMatt: it was a year ago, but it still hurts to think about it.
21:08:22 <BlueMatt> ok, so...problem solved? no need to update the spec? I'm confused. are you *still* seeing this issue roasbeef/bitconner?
21:08:31 <bitconner> iirc RL hasn't fully implemented gossip queries? so i see why you wouldn't want to remove this
21:08:33 <rusty> Now, we ask the first peer for everything.  Once that's done (or stalled), we ask another peer for every scid (because, I guess, if the first peer is LND it didn't honor the timestamp?)
21:08:40 <BlueMatt> or just the pr was opened then and hasnt been reconsidered since?
21:09:04 <rusty> BlueMatt: it *is* easier not to implement the rewind behavior.
21:09:32 <rusty> roasbeef/bitconner: I assume this PR reflects current lnd behavior?
21:09:57 <bitconner> currently we have a config option that allows you to toggle whether you reply or not
21:11:22 <rusty> bitconner: well, we'll survive if it's disabled.  I can change the algo not to rely on it at all: in practice, practice beats theory, so if this is existing behavior we should document it.
21:11:47 <bitconner> if you want a dump from the first peer, why not set `initial-routing-sync` instead of doing so via historical timestamps?
21:12:02 <bitconner> why does that functionality need to exist twice
21:13:17 <rusty> bitconner: because you have to know that at connect time :(
21:14:25 <rusty> bitconner: And it doesn't work with gossip timestamps extension anyway:
21:14:25 <rusty> A node:
21:14:25 <rusty> - if the `gossip_queries` feature is negotiated:
21:14:25 <rusty> - MUST not send gossip it did not generate itself, until it receives `gossip_timestamp_filter`.
21:14:59 <rusty> (i.e. initial-routing-sync doesn't really exist anymore)
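The receiving side of that quoted rule, sketched (the per-peer filter object and names are assumptions):

    def should_forward_gossip(peer, msg_timestamp, we_generated_it):
        # With gossip_queries negotiated: send nothing we didn't generate
        # ourselves until the peer has sent a gossip_timestamp_filter; after
        # that, forward only messages whose timestamp falls in its window.
        if peer.gossip_filter is None:
            return we_generated_it
        lo = peer.gossip_filter.first_timestamp
        hi = lo + peer.gossip_filter.timestamp_range
        return lo <= msg_timestamp < hi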
21:15:23 <bitconner> don't you know you know nothing before connecting?
21:15:34 <bitconner> (in the very fist sync example)
21:15:39 <bitconner> first*
21:16:04 <bitconner> this is true, lnd removed initial-routing-sync several versions ago
21:16:51 <bitconner> BlueMatt: i haven't noticed the issue recently, but probably because i run with `ignore-historical-gossip-filters` on
21:17:25 <bitconner> perhaps we should continue this discussion on the issue given we are over
21:17:38 <rusty> bitconner: agreed.
21:18:06 <rusty> #action discuss https://github.com/lightningnetwork/lightning-rfc/pull/641 on issue
21:18:11 <rusty> Any Other Business?
21:20:33 <rusty> BTW, I've been doing some research on traceability of payments by intermediate hops, will share soon.  Seems like it's bad (immediate neighbors can guess within a handful of potential src/dst), but not awful (other intermediates have possible set sizes about 1/3 of the entire network).  Will post something soon.
21:20:38 <rusty> #closemeeting
21:20:48 <rusty> #endmeeting