19:06:45 <t-bast> #startmeeting
19:06:45 <lightningbot> Meeting started Mon Oct 12 19:06:45 2020 UTC.  The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:06:45 <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
19:06:48 <rusty> Yes, it's my fault.  I refuse to get up for a 4:30am call, but in DST the 5:30am is more civilized for everyone else.
19:06:52 <niftynei> let's goooo :)
19:06:56 <t-bast> Let's start recording :)
19:07:10 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/issues/802
19:07:23 <t-bast> I set up a tentative agenda, with feedback from cdecker and ariard
19:07:32 <t-bast> We have a couple of simple PRs to warm up
19:07:43 <t-bast> #topic Bolt 4 clarification
19:07:54 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/801
19:08:22 <t-bast> A simple clarification around tlv_streams, I think we should let the PR author answer the comments
19:09:12 <rusty> t-bast: yes.  Your point about it breaking tooling was correct, though it's in the lightning-rfc tree, not a c-lightning specific thing.
19:09:30 <roasbeef> yeh idk if we should worry about that tool breaking when making spec changes
19:09:42 <t-bast> rusty: true, but I think you're the only ones using it xD, worth maintaining though
19:10:00 <rusty> t-bast: lnprototest also uses it, FWIW.  I guess that's kinda me too :)
19:10:01 <roasbeef> I think this PR is a toss-up, no strong feelings w.r.t whether this actually clarifies things, kinda in typo territory
19:10:02 <t-bast> I don't think the variable name needs to change TBH
19:10:22 <t-bast> Let's wait for the author to comment and move to the next PR?
19:10:32 <rusty> roasbeef: it does stop people breaking the spec parsing though, or at least makes them fix up the tool if they do.
19:10:33 <roasbeef> their point about the bolt 4 wording is more worthy of a change imo
19:10:35 <rusty> t-bast: ack
19:10:40 <rusty> roasbeef: ack
19:10:52 <t-bast> agreed
19:11:08 <t-bast> #topic Claiming revoked anchor commit txs in a single tx
19:11:11 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/803
19:11:30 <t-bast> Worth having a look and bikeshedding the wording on this PR
19:11:46 <t-bast> It tells implementers to be careful with batching claiming revoked outputs
19:12:32 <roasbeef> if we're advising w.r.t best practices when sweeping revoked outputs, we may also want to add that depending on the latest state and the broadcasted state, the "defender" can siphon off the attacker's funds to chain fees to speed up the confirmation of the justice transaction
19:13:03 <t-bast> maybe a whole section of recommended best practices for sweeping could be considered?
19:13:03 <roasbeef> but also even if they pin the second level, that needs to confirm, then they still need to wait for csv, and there's another revocation output there
19:13:14 <t-bast> I'm sure ariard would love writing a section like that
19:13:28 <t-bast> Where we'd quickly explain the subtleties and attacks
19:13:36 <roasbeef> it's one of those "maybe does something, but can delay things at times" vectors from my pov, in that they pay a lot of fees to jam things up, and that either confirms or it doesn't
19:13:53 <roasbeef> if it does, then they paid a buncha fees, if it doesn't, then it's likely the case that the "honest" one would
19:14:07 <rusty> roasbeef: yeah, ZmnSCPxj recently implemented scorched earth for c-lightning.  But you've always had this problem where a penalty could be a single giant tx but then you need to quickly divide and conquer as they can play with adding HTLC txs...
19:14:30 <roasbeef> mhmm, another edge case here is if they spend the htlc transactions within distinct transactions
19:14:47 <roasbeef> if you're sweeping them all in a single tx, you need to watch for spends of each in order to update your transaction and breach remedy strategy
19:14:52 <rusty> (So c-lightning just makes indep txs, because it's simple and nobody really seems to care).
19:15:24 <t-bast> yes, TBH eclair also makes indep txs, if you're claiming a revoked tx you're going to win money anyway, you don't need to bother optimizing the fees
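A minimal sketch of the batching pitfall roasbeef raises above, with hypothetical names throughout: if the justice tx sweeps every revoked output at once, each input the cheater manages to spend first (e.g. via an HTLC tx) invalidates the whole batch, so the sweeper must watch every input and re-sign a smaller sweep without the ones that were taken. `build_tx` stands in for whatever re-signing routine an implementation has.

```python
# Hypothetical sketch: rebuild a batched justice sweep when the cheater
# spends some of its inputs first. Not any implementation's actual code.
def rebuild_sweep(inputs, spent_outpoints, build_tx):
    # keep only the revoked outputs the cheater hasn't managed to spend yet
    remaining = [i for i in inputs if i["outpoint"] not in spent_outpoints]
    if not remaining:
        return None  # every input was taken; watch the second level instead
    return build_tx(remaining)  # re-sign a smaller justice tx
```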
19:15:32 <roasbeef> mhmm, according to jonny1000's "breach monitor" there was a "suspected breach" for the first time this year (?) a few weeks ago
19:15:51 <t-bast> roasbeef: didn't hear about that, can you share a link?
19:16:08 <roasbeef> t-bast: so not always (that you'd win money), if things are super lopsided then they may just be going back to a better state for them, and one that you're actually in a worse position for
19:16:32 <roasbeef> sure, sec....
19:16:42 <roasbeef> https://twitter.com/BitMEXResearch/status/1305794706702569472?s=20
19:16:52 <t-bast> roasbeef: true, if everything was on your side they may force you to spend on fees, that's right
19:17:00 <roasbeef> so it was a detected justice transaction, ofc you can't detect failed defense attempts
19:17:06 <roasbeef> https://forkmonitor.info/lightning
19:17:26 <t-bast> 55$...interesting :)
19:17:42 <roasbeef> ok yeh was wrong about "first time this year", but they're relatively uncommon and usually for "small amounts", which could mean just ppl testing to see "if this thing actually works"
19:18:01 <roasbeef> no wumbo defenses yet ;)
19:18:16 <t-bast> going back to the PR, do you think it's worth creating a section dedicated to sweeping subtleties? Or just bikeshed the current wording?
19:18:29 <t-bast> roasbeef: if it's never used, it doesn't work xD
19:18:34 <roasbeef> hehe
19:18:52 <roasbeef> I think this prob stands on its own, wouldn't want to slow it down to add more stuff to it that can be done in a diff, more focused PR (on the best practices section)
19:19:11 <roasbeef> gonna comment on it to mention that even if they pin, you can play at the second level if that ever confirms
19:19:48 <t-bast> great, too bad ariard isn't there today, we'll continue the discussion on github
19:19:58 <roasbeef> fsho
19:19:58 <t-bast> #action discuss wording on the PR
19:20:28 <t-bast> #topic clarify message ordering post-reconnect
19:20:31 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/issues/794
19:20:54 <t-bast> This issue of clarifying the order of replaying "lost" messages on reconnection has probably been discussed at length before
19:21:05 <t-bast> But I couldn't find past issues or ML threads to reference
19:21:20 <lndbot> <eugene> Hi, I made that PR
19:21:44 <roasbeef> yeh we discovered this when tracking down some spurious force closes and de-syncs in lnd, that would even happen lnd <-> lnd
19:22:05 <roasbeef> imo this is one of the areas of the spec that's the most murky, but also super critical to "get right" to ensure you don't have a ton of unhappy users
19:22:42 <roasbeef> (this area == how to properly resume a channel that may have had inflight updates/signatures/dangling-commitments)
19:23:08 <t-bast> I agree, this is important to get right as this leads to channel closures, so it's worth ensuring we all agree on the expected behavior!
19:24:14 <roasbeef> mhmm, almost feels like we need to step waaaaaay back and go full on PlusCal/TLA+ here, as idk about y'all but even years later we've found some new edge cases, but could just be us lol, also ofc something like that can be pretty laborious
19:24:26 <rusty> roasbeef: indeed.  If we were doing it again I'd go back to being half duplex which removes all this shit.
19:24:27 <roasbeef> I think a stop gap there would just be stating exactly what you need to commit to disk each time you recv a message
19:24:54 <roasbeef> but what I'm referring to rn (not writing all the state you need to) is distinct from what eugene found, which is ambiguous retransmission _order_
19:25:00 <roasbeef> and the order of sig/revoke here is make or break
19:25:08 <roasbeef> eugene have anything you want to add?
19:25:26 <t-bast> it feels to me that your outgoing messages are a queue, that you need to be able to persist/replay on reconnections
19:25:36 <lndbot> <eugene> Yes that's the crux of the issue
19:25:50 <roasbeef> t-bast: is that how y'all implement it?
19:25:53 <t-bast> we should keep that queue on disk until the commit_sig/revoke_and_ack lets us know we can forget about those
19:26:17 <t-bast> roasbeef: I don't think we have that explicitly, but I'm thinking that we could :)
19:26:27 <t-bast> I may be missing some subtleties/edge cases though
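A minimal sketch of the queue t-bast describes here, not any implementation's actual code (all names hypothetical): outgoing updates are persisted before hitting the wire and only dropped once the peer's revoke_and_ack shows they were processed; on reconnect, the pending tail is replayed in its original order.

```python
# Hypothetical persistent retransmission queue for outgoing channel updates.
import json
import os

class RetransmitQueue:
    def __init__(self, path):
        self.path = path
        self.pending = []  # messages sent but not yet acknowledged
        if os.path.exists(path):
            with open(path) as f:
                self.pending = json.load(f)

    def send(self, conn, msg):
        self.pending.append(msg)
        self._persist()  # write to disk *before* the wire
        conn.write(msg)

    def on_revoke_and_ack(self, commitment_number):
        # the peer has irrevocably committed: forget everything that
        # was covered by that commitment
        self.pending = [m for m in self.pending
                        if m["commitment_number"] > commitment_number]
        self._persist()

    def on_reconnect(self, conn):
        # replay in the original order (the crux of issue #794)
        for msg in self.pending:
            conn.write(msg)

    def _persist(self):
        with open(self.path, "w") as f:
            json.dump(self.pending, f)
```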
19:26:44 <roasbeef> also iirc rust-lightning has something in their impl to force a particular ordering based on a start up flag?
19:26:47 <t-bast> This is really the kind of issues where I only feel confident once I enumerate all possible cases to verify it works...
19:26:57 <rusty> But I think this is wrong (though this is my first read of this issue).  Either ordering should work, but I will test.
19:26:59 <lndbot> <eugene> Yes you really do need to enumerate all cases, so that's what we did
19:27:04 <roasbeef> t-bast: hehe yeh, hence going waaaay up to concurrent protocol model checking...
19:27:10 <roasbeef> eugene did it more or less by hand in this case tho
19:27:22 <roasbeef> rusty: so I think we have a code level repro on our end, right eugene?
19:27:27 <lndbot> <eugene> There are a limited number of cases really, so it should be possible to enumerate
19:27:34 <roasbeef> I think the examples in that PR also spell out a clear scenario as well
19:27:49 <t-bast> rusty: either ordering works? I'm surprised, I'm not sure eclair will handle both orders (but haven't tested E2E)
19:27:54 <lndbot> <eugene> We can trigger the current state de-sync yeah
19:28:10 <roasbeef> check out this comment for the unrolled de-sync scenario: https://github.com/lightningnetwork/lightning-rfc/issues/794#issuecomment-687337732
19:28:46 <rusty> Well, OK.  We assume atomicity between receiving sig and sending rev.  Thus, if Alice says she has received the sig (via the reconnect numbers) the rev must go first.
19:29:09 <roasbeef> but there're scenarios where you need to retransmit _both_ those messages
19:29:26 <roasbeef> typically it occurs when there're in-flight concurrent updates
19:29:39 <t-bast> rusty: interesting, I don't think that's how eclair works, I'll check in details
19:29:39 <roasbeef> (so both sides send an add, both may also send a sig afterwards, with one of those sigs being lost)
19:30:37 <rusty> roasbeef: the reconnect numbers should indicate what happened there, though.  At least, that's the intent.
19:30:39 <t-bast> rusty: I read it wrong, I agree with what you said
19:31:15 <roasbeef> ok if y'all are able to provide a "counter-example" w.r.t why eugene's example doesn't result in a de-sync then I think that'd be an actionable step that would let us either move forward to correct this, or determine it isn't actually an issue in practice
19:31:24 <rusty> Sigh, there is a simpler scheme which I discarded under pressure to optimize this.  It would also simplify edge cases with fees.
19:31:25 <roasbeef> there very well could just be something wrong w/ lnd's assumptions here as well
19:31:36 <roasbeef> but I think we did a CL <-> lnd repro too, eugene?
19:31:37 <rusty> roasbeef: no, I've changed my mind.  If you ack the sig, you need to immediately revoke.
19:31:44 <roasbeef> fsho
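A toy encoding of rusty's invariant above, assuming (as he states) that receiving a sig and sending the corresponding revoke_and_ack are atomic. The helper and its boolean inputs are hypothetical; in a real node they would be derived from the peer's channel_reestablish numbers.

```python
# Hypothetical: decide retransmission order from the peer's reestablish state.
def retransmit_order(need_rev, need_sig, peer_acked_our_last_sig):
    if need_rev and need_sig:
        # if the peer already saw our commitment_signed, the pending
        # revoke_and_ack logically precedes it, so it is replayed first
        return (["revoke_and_ack", "commitment_signed"]
                if peer_acked_our_last_sig
                else ["commitment_signed", "revoke_and_ack"])
    msgs = []
    if need_rev:
        msgs.append("revoke_and_ack")
    if need_sig:
        msgs.append("commitment_signed")
    return msgs
```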
19:32:19 <lndbot> <eugene> We did a CL <-> lnd repro but that didn't show this, that showed a different CL-specific issue with HTLCs being out of order.
19:32:33 <roasbeef> ah yeh that's another thing....htlc retranmission ordering
19:32:47 <roasbeef> seems some impls send them in increasing htlc id order, while others do something else
19:32:57 <rusty> eugene: yeah, I've got one report of that, I suspect that we're relying on db ordering which sometimes doesn't hold.
19:33:03 <roasbeef> but the way lnd works, we'll re-add those to the state machine, which enforces that ids aren't skipped
19:33:08 <roasbeef> ok cool that y'all are aware/looking into it
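A sketch of the HTLC-id ordering point, with illustrative names only: since lnd re-injects replayed update_add_htlc messages into a state machine that rejects skipped ids, a safe retransmitter sorts by id and checks contiguity before resending.

```python
# Hypothetical: replay un-acked HTLC adds in increasing id order.
def replay_htlc_adds(conn, unacked_adds, peer_expected_next_id):
    for add in sorted(unacked_adds, key=lambda a: a["id"]):
        if add["id"] != peer_expected_next_id:
            raise ValueError(f"htlc id gap: expected "
                             f"{peer_expected_next_id}, got {add['id']}")
        conn.write(add)
        peer_expected_next_id += 1
```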
19:33:26 <t-bast> I'll dive into eclair's behavior in details and will report back on the issue
19:33:41 <t-bast> Let's check eclair and c-lightning and post directly on github, sounds good?
19:33:50 <roasbeef> sgtm
19:33:53 <t-bast> And ping ariard/BlueMatt to have them check RL
19:34:12 <roasbeef> on our end we fixed a ton of "we didn't write the proper state so we failed the reconnection" issues in 0.11
19:34:20 <roasbeef> a ton being like 2/3 lol
19:34:40 <rusty> roasbeef: that's still quite a few, given how long this has been around :(  Maybe I'll hack up the simplified scheme, see what it looks like in practice.
19:34:44 <roasbeef> we plan on doing a spec sweep to see how explicit some of these requirements were
19:34:51 <roasbeef> fwiw the section on all this stuff is like a paragraph lol
19:35:09 <t-bast> it should fit on a t-shirt, otherwise it's too complex
19:35:14 <roasbeef> rusty: yeh....either we try to do something simpler, or actually try to formalize how we "think" the current version works w/ something like pluscal/tla+
19:35:31 <roasbeef> t-bast: mhmm, I think it can be terse, just that possibly what's there rn may be insufficient
19:35:36 <BlueMatt> oops, today's a us holiday so forgot there was a meeting. I'd have to dig, but we have fuzzers explicitly to test these kinds of things so I'd be very surprised to find if we have any desync issues left for just message ordering.
19:35:39 <roasbeef> I also wonder how new impls like Electrum handle this stuff
19:35:57 <t-bast> roasbeef: yeah, I was teasing the extreme, I think this needs to be expanded to help understanding
19:36:08 <roasbeef> BlueMatt: is it correct that RL has a flag in the code to force a particular ordering of sig/revoke on transmission?
19:36:54 <BlueMatt> roasbeef: yes. we initially did not, but after fighting with the fuzzers for a few weeks we ended up having to, to get them to be happy (note that it is only partially for disconnect/reconnect - it's also because we have a mechanism to pause a channel while waiting on I/O to persist new state)
19:37:22 <BlueMatt> (and I'd highly recommend writing state machine fuzzers for those who have not - they beat the crap out of our state machine, especially around the pausing feature, and forced a much more robust implementation)
19:38:39 <lndbot> <eugene> What exactly do you test for?
19:39:00 <t-bast-official> damn I got dc-ed
19:39:06 <roasbeef> t-bast: and t-bast-official will the real t-bast plz stand up?
19:39:23 <t-bast-official> it's an official account, you're safe
19:39:24 <BlueMatt> create three nodes, make three channels, interpret fuzz input as a list of commands to send payments, disconnect, reconnect, deliver messages asynchronously, pause channels, unpause channels, etc.
19:39:28 * rusty looks at the c-lightning reconnect code and I'm reliving the nightmare right now.  We indeed do a dance to get the order right :(
19:39:57 <lndbot> <eugene> And make sure the channel doesn't de-sync?
19:40:24 <BlueMatt> right, if any peer decides to force-close that is interpreted as a failure case and we abort();
19:40:31 <BlueMatt> the relevant command list is here, for the curious https://github.com/rust-bitcoin/rust-lightning/blob/main/fuzz/src/chanmon_consistency.rs#L663
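For the curious, a heavily stripped-down analogue of what BlueMatt describes (the real thing is chanmon_consistency.rs in rust-lightning; the node and payment primitives below are hypothetical stand-ins, not rust-lightning's API): fuzz bytes drive a command list over the nodes, and any force-close aborts the run as a de-sync.

```python
# Hypothetical state-machine fuzzer harness in the spirit described above.
def run_case(fuzz_bytes, nodes):
    COMMANDS = {
        0: lambda: nodes[0].send_payment(nodes[2]),  # route a payment
        1: lambda: nodes[1].disconnect(nodes[2]),
        2: lambda: nodes[1].reconnect(nodes[2]),
        3: lambda: nodes[0].deliver_one_pending_message(),  # async delivery
        4: lambda: nodes[2].pause_channel(),    # simulate pending I/O persist
        5: lambda: nodes[2].unpause_channel(),
    }
    for b in fuzz_bytes:
        COMMANDS[b % len(COMMANDS)]()
        # any force-close means the peers de-synced: treat as failure
        for n in nodes:
            assert not n.force_closed, "de-sync: a peer force-closed"
```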
19:42:55 <t-bast-official> Shall we move on? What about everyone checks their implementation this week and we converge on the issue?
19:43:28 <roasbeef> sgtm, a good place to start is to check out that example of a de-sync scenario then double check against actual behavior
19:43:29 <rusty> t-bast-official: ack, I've already commented on-issue.  TIA to whoever does a decent write up of this in the spec though...
19:43:46 <roasbeef> we should prob also note the htlc ordering thing more explicitly somewhere too
19:44:00 <t-bast-official> #action check implementations behavior in the scenario described in the issue
19:44:06 <ja> btw according to https://wiki.debian.org/MeetBot only the chair can do endmeeting, topic, agreed
19:44:33 <roasbeef> and the-real-t-bast is no more?
19:44:35 <t-bast-official> #action figure out a good writeup to clarify the spec
19:44:48 <t-bast-official> Daaamn, I can manually reconnect, be back
19:44:52 <roasbeef> lol
19:44:56 <roasbeef> "the lost meeting"
19:45:17 <rusty> roasbeef: the meeting that never ended...
19:45:26 <t-bast> chair is back, back, back, back again
19:45:44 <t-bast> #topic Evaluate historical min_relay_fee to improve update_fee in anchors
19:46:04 <t-bast> rusty / BlueMatt, did you have time to dig up the historical number on this topic?
19:46:10 <rusty> t-bast: I got sidetracked trying to optimize bitcoin-iterate, sorry.  Will re-visit today.
19:46:24 <t-bast> no worries, we can go to the next topic then
19:46:40 <t-bast> blinded paths or upfront payments/DoS?
19:47:32 <rusty> t-bast: hmm, blinded paths merely needs review IIRC?
19:47:59 <BlueMatt> t-bast: lol i thought the meeting was tomorrow and had a free day today :/ sorry.
19:48:09 <BlueMatt> stupid 'muricans
19:48:16 <t-bast> Yes, I've asked around to get people to review, hopefully we should see some action soon
19:48:21 <t-bast> BlueMatt: no worries!
19:48:32 <roasbeef> BlueMatt: now we even have *two* holidays in one day!
19:48:37 <t-bast> Let's do upfront payments/DoS mechanisms then?
19:48:52 <t-bast> #topic Upfront payments / DoS protection
19:48:54 <rusty> DoS is an infinite well, but I have been thinking about it.  AFAICT we definitely need two parts: something for fast probes, something for slow probes.
19:49:14 <t-bast> Just as we set the topic, a wild joost appears
19:49:19 <roasbeef> yeh that's the thing, DoS is also mainly griefing, and some form of it exists in just about every protocol
19:49:21 <t-bast> he *knew*
19:49:29 <roasbeef> vs like something that results in _direct_ attacker gain
19:49:30 <joostjgr> Haha, I was off by one hour because of timezone and just catching up
19:49:50 <niftynei> two for one holiday deal, sounds very american haha
19:49:57 <roasbeef> then endless defence and ascend
19:50:02 <roasbeef> niftynei: lolol
19:50:15 <roasbeef> "now with a limited time offer!"
19:50:25 <rusty> For slow probes I still can't beat a penalty.  There are some refinements, but they basically mean someone closes a channel in return for tying up funds.  You can still grief, but it's not free any more.
19:50:45 <roasbeef> well ofc there's the button we've all been looking at for a while: pay for htlc attempts
19:51:01 <roasbeef> but that has a whole heap of tradeoffs, but maybe also makes an added incentive for better path finding in general
19:51:11 <roasbeef> "spray and pray" starts to have an actual cost
19:51:13 <rusty> For fast probes, up-front payment.  I've been toying with the idea of node-provided tokens which you can redeem for future attempts.
19:51:29 <roasbeef> but then again, that could be a centralization vector: ppl just outsource to some mega mind db that knows everything
19:51:37 <rusty> (which lets you fuzz the amounts much more: either add some to buy some tokens, or spend tokens to reduce cost)
19:51:41 <roasbeef> rusty: yeah I've thought of that too....but that's also kinda dangerous imo...
19:51:50 <rusty> roasbeef: oh yeah, for sure!
19:52:21 <rusty> roasbeef: ideally you go for some tradable chaumian deal, where you can automatically trade some with neighbors for extra obfuscation/
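A toy model of the token idea as rusty describes it in this exchange, entirely hypothetical and not part of any spec: a node sells prepaid forward-attempt tokens, and a sender can over-pay to bank tokens or spend them to reduce the visible upfront amount, fuzzing the observable cost.

```python
# Hypothetical prepaid-token ledger for upfront HTLC-attempt payments.
class ForwardTokens:
    def __init__(self, price_msat):
        self.price = price_msat
        self.balances = {}  # peer_id -> tokens held

    def attempt(self, peer_id, paid_msat, tokens_spent):
        """Charge one forward attempt: tokens and msat together must cover
        the price; any surplus msat is banked as new tokens."""
        bal = self.balances.get(peer_id, 0)
        if tokens_spent > bal:
            return False  # can't spend tokens you don't hold
        total = paid_msat + tokens_spent * self.price
        if total < self.price:
            return False  # attempt not covered
        surplus_tokens = (total - self.price) // self.price
        self.balances[peer_id] = bal - tokens_spent + surplus_tokens
        return True
```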
19:52:32 <roasbeef> imo the lowest hanging, low-risk fruit here is just dynamic limits as t-bast brought up again on the ML recently
19:52:45 <roasbeef> rusty: yeh...but I'm more worried about like "forwarding passes".....
19:52:53 <t-bast> it would be really helpful to have a way to somewhat quickly check a proposal against known abuses, as every complex mechanism we introduce may be abused and end up worse
19:53:10 <roasbeef> hmm interesting, may need some fleshing out there to enumerate what properties you think that can give us rusty
19:53:31 <roasbeef> t-bast: yeh if there was an ez no brainer solution, we'd have done it by now
19:53:49 <roasbeef> in the end, maybe there's just a series of heterogeneous policies ppl deploy, as all of them are "not so great"
19:54:22 <roasbeef> but again it's also "just" griefing/DoS, restricted operating modes in various flavors are ofc possible (with their own can of worms that worry me even more than the griefing tbh)
19:54:30 <rusty> roasbeef: yeah, OK, let's focus on up-front payment.  Main issue is that you now provide  a distance-to-failure/termination oracle, so my refinement attempts have been to mitigate that.
19:54:34 <t-bast> at least having a living document somewhere with all the approaches we tried that were broken may be helpful - I can draft that if you find it useful
19:55:10 <roasbeef> t-bast: sure, part of the issue is that there've been like 20+ proposals split up across years of ML traffic
19:55:40 <roasbeef> rusty: I was proposing going from the other direction: dynamic limits to give impls the ability to try out their own policies, with some of them eventually becoming The Word
19:56:00 <rusty> roasbeef: I think we need both, TBH.
19:56:05 <roasbeef> if we had upfront payments magically solved, how many of us would deploy it by default today on impls given all the tradeoffs?
19:56:08 <roasbeef> yeh :/
19:56:22 <roasbeef> it could just be part of some reduced operating model, where you then signal ok you need to pay to pass
19:56:30 <roasbeef> but then again that can be gamed by making things more expensive for everyone else
19:56:31 <joostjgr> on the "just griefing/DoS": The ratio of attacker effort vs grief is quite different compared to, for example, trying to overload a website. And the fact that the attack can be routed across the network doesn't make it better
19:56:48 <rusty> joostjgr: indeed.
19:57:00 <roasbeef> joostjgr: it depends, overload a platform/infrastructure provider instead, and you get the same leverage
19:57:28 <roasbeef> and depending on who that is, actually do tangible damage, but ofc the big bois these days have some nice force fields
19:57:59 <roasbeef> > if we had upfront payments magically solved, how many of us would deploy it by default today on impls given all the tradeoffs?
19:58:02 <roasbeef> ?
19:58:14 <roasbeef> taking into account all the new gaming vectors it introduces
19:58:20 <t-bast> depends on the trade-offs :D
19:58:27 <roasbeef> lolol ofc ;)
19:58:43 <rusty> I would.  But then, I don't really *like* my users :)
19:58:47 <roasbeef> also you'd need to formulate it into a model of a "well funded" attacker as well, that just wants to mix things up
19:58:51 <roasbeef> lololol
19:59:05 <joostjgr> if the upfront payment pays for exactly the cost the htlc represents, I'd say it is fair and would be deployed.
19:59:07 <roasbeef> as in: if you're a whale, only a very high value would actually be a tangible "cost" to you
19:59:20 <roasbeef> joostjgr: deployment is one thing, _efficacy_ is another
19:59:36 <roasbeef> whales don't feel small waves...
20:00:53 <cdecker> Wait, did I fall into the DST trap... again.... ?
20:00:54 <joostjgr> Wasn't the question about deployment? Just saying I would deploy it :)
20:01:05 <t-bast> Good evening cdecker xD
20:01:08 <roasbeef> cdecker: welcome to the future ;)
20:01:14 <cdecker> Hi everyone :-)
20:01:34 <roasbeef> joostjgr: yeh but digging deeper, how do you determine if something is effective after deployment? should "ineffective" mitigations be deployed?
20:01:36 <cdecker> That's what I get for writing my own calendar...
20:01:41 <cdecker> Sorry for being late
20:02:03 <roasbeef> "efficacy" also depends on the profile of a theoretical attacker and their total budget, etc, etc
20:02:14 <joostjgr> If it makes the attacker pay for the attack, I think it can be effective
20:02:17 <t-bast> No worries cdecker, we're right in the middle of upfront payment/DoS, if you want to jump in
20:02:24 <rusty> Anyone have ideas about cost?  Would we add YA parameter in gossip, or make it a flat fee, or make it a percentage of the successful payment?
20:02:27 <roasbeef> just like a really "well funded" attacker can clog up bitcoin blocks for weeks with just their own transactions
20:02:39 <roasbeef> rusty: should be set and dynamic for the forwarding node imo
20:02:46 <joostjgr> Yes, there are always attackers with bigger pockets of course. But may not be a reason to let anyone execute these for free
20:02:53 <t-bast> rusty: I kinda liked your proposal where it wasn't only sats but also some grinding
20:03:27 <rusty> t-bast: I have a better one now.  You can buy/use credits.  Ofc it's trust, but you're only really using it for amount obfuscation.
20:03:31 <roasbeef> joostjgr: yeh, just getting at the point that stuff like this is never really "solved", just mitigated based on some attacker profile/budget, like just look into all the shenanigans that miners can get into against other miners in an adversarial env
20:03:54 <roasbeef> rusty: there's a danger zone there....
20:04:11 <roasbeef> centralization pressures also need to be weighed
20:04:50 <niftynei> roasbeef, is centralization in this case based on the 'decay function of service providerism' or something inherent to the proposal?
20:05:08 <rusty> roasbeef: yep, at least one.  Hence I really wanted them to be tradable, but that's more complex.
20:05:32 <roasbeef> not sure what you mean by decay there (there're less ppl?), niftynei, more like "ma'am, do you have the proper papers for this transit?"
20:05:32 <t-bast> niftynei: can you explain what you mean by "decay function of service providerism"?
20:05:55 <roasbeef> "ahh, I see, then no can do"
20:06:14 <niftynei> number of people providing the service is smaller than total set of people using the service
20:06:15 <t-bast> you mean big barrier to entry for newcomers/small nodes?
20:06:49 <niftynei> e.g. everyone needs liquidity, only so many people run 'liquidity providing' services
20:07:08 <ariard> hello
20:07:16 <niftynei> and over time those tend to fall offline/consolidate etc because 'running a service' is Work
20:07:24 <roasbeef> prob also introduces some other bootstrapping problems as well, but imo fragmenting the network is an even bigger risk....if y'all get where I'm going with that above example....
20:07:36 <t-bast> hey ariard, the daylight savings got you too (we started an hour ago)
20:07:52 <ariard> ah damn
20:08:05 <ariard> -> reading the logs
20:08:14 <roasbeef> lotta logs lol
20:08:31 <cdecker> Nah, 250 lines, halfway through already ^^
20:08:52 <t-bast> roasbeef: yeah I agree, but it feels like all the potential solutions for this problem have to rely on some "trust" that is slowly built between peers (dynamically adjusting cost based on the "relationship" you had with your peer)
20:09:17 <t-bast> roasbeef: which definitely centralizes (why spend time building trust with people you don't know?)
20:09:24 <roasbeef> t-bast: I think what I'm talking about above is an entirely distinct, more degenerate class of "failure" mode
20:09:50 <t-bast> yeah because when taken to the extreme, you split and others start their own network
20:10:01 <roasbeef> even then, if you consider a "dynamic adversary" that can "corrupt" nodes after the fact, then that starts to fall short as well
20:10:18 <roasbeef> (playing devil's advocate a bit here, but we'd need to really settle on a threat model)
20:10:19 <t-bast> I'll have my own LN, with blackjack and hookers <insert futurama meme>
20:10:23 <roasbeef> lololol
20:10:31 <roasbeef> yeh that's always an option xD
20:10:40 <roasbeef> the freedom to assemble!
20:11:45 <t-bast> Well at least I'll start centralizing all the ideas that have been proposed in that space in a repo which we can update as new ideas come, I think it will save time in the long run
20:11:53 <roasbeef> gotta be going soon myself....great meeting though, also nice that with the time change I no longer have an actual meeting-meeting during this time as well
20:11:57 <roasbeef> t-bast: +1
20:12:07 <t-bast> #action t-bast summarize existing proposals / attacks / threat models
20:12:26 <joostjgr> I think that threat model should at least include the trivial channel jamming that is possible today.
20:12:34 <t-bast> #action rusty can you detail your token idea?
20:12:45 <ariard> also issues with watchtowers
20:12:47 <rusty> t-bast: yeah, I'll post to ML.
20:12:53 <niftynei> ooh excited to see a summary/list of threat models to contemplate +1
20:12:56 <ariard> you should talk with Sergi, he spotted a few
20:13:00 <t-bast> joostjgr: yes definitely, there needs to be different threat models for different kinds of attackers
20:13:40 <t-bast> ariard: yeah I saw some of your discussions, where a watchtower may be spammed as well, I'll ping him to contribute to the doc
20:14:20 <t-bast> Shall we discuss one last topic or shall we end now (sorry for the unlucky ones who missed the DST change...)?
20:14:22 <joostjgr> t-bast: looking forward to that overview too :+1:
20:14:23 <roasbeef> idk if that's new? that's why all towers today are basically "white listed"
20:14:43 <ariard> roasbeef: for a future with public towers
20:14:44 <t-bast> roasbeef: I agree, I think the watchtower model today is very easily spammable
20:14:55 <roasbeef> at least it's like that in lnd, since we hadn't yet implemented rewards other than upon justice
20:15:02 <rusty> Yeah, the intention was always that you'd pay the watchtower.
20:15:03 <ariard> so yeah not the model of today but the model where you pay per update
20:15:04 <t-bast> roasbeef: but we can list the requirements/threat model to build the watchtowers of the future
20:15:17 <roasbeef> yeh, lol just like you don't open up your network attached filesystem to the entire world
20:15:18 <ariard> and thus someone can force you to pay the watchtower for nothing until you exhaust your "credit"
20:15:40 <ariard> the entire world includes your channel counterparty
20:15:47 <roasbeef> yeh depends entirely on how a credit system would work
20:16:08 <roasbeef> yeh ofc, basic isolation is a requirement
20:16:21 <roasbeef> but also one needs to assume the existence of a private tower at all times as well
20:16:29 <roasbeef> depends on the situation tho really, very large design space
20:16:41 <ariard> yeah but for mobile? but agree the design space is so large
20:17:08 <ariard> the latency cost of the watchtower might be a hint of the presence of one
20:17:09 <roasbeef> depends really, also would assume mobile chan sizes <<<<<< actual routing node chan sizes
20:17:22 <ariard> especially if we have few of them
20:17:25 <roasbeef> and it's mobile also, so don't store your mattress savings in there
20:17:52 <roasbeef> but lots of things to consider, which is why we just implemented white listed towers to begin with
20:17:53 <ariard> t-bast: what was this story with people storing huge amounts on mobile lol?
20:18:03 <ariard> sure easier to start with private towers
20:18:19 <roasbeef> oh yeh we know from experience as well, ppl just really like to sling around large amts of sats to "test" :p
20:18:51 <t-bast> some people are crazy with their bitcoin, they probably have too much of those
20:18:57 <roasbeef> g2g
20:19:04 <t-bast> see you roasbeef
20:19:16 <cdecker> cu roasbeef :-)
20:19:22 <t-bast> let's end for today, already a lot to do for the next meeting ;)
20:19:25 <t-bast> #endmeeting