19:06:45 #startmeeting
19:06:45 Meeting started Mon Oct 12 19:06:45 2020 UTC. The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:06:45 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:06:48 Yes, it's my fault. I refuse to get up for a 4:30am call, but in DST the 5:30am is more civilized for everyone else.
19:06:52 let's goooo :)
19:06:56 Let's start recording :)
19:07:10 #link https://github.com/lightningnetwork/lightning-rfc/issues/802
19:07:23 I set up a tentative agenda, with feedback from cdecker and ariard
19:07:32 We have a couple of simple PRs to warm up
19:07:43 #topic Bolt 4 clarification
19:07:54 #link https://github.com/lightningnetwork/lightning-rfc/pull/801
19:08:22 A simple clarification around tlv_streams, I think we should let the PR author answer the comments
19:09:12 t-bast: yes. Your point about it breaking tooling was correct, though it's in the lightning-rfc tree, not a c-lightning specific thing.
19:09:30 yeh idk if we should worry about that tool breaking when making spec changes
19:09:42 rusty: true, but I think you're the only ones using it xD, still worth maintaining though
19:10:00 t-bast: lnprototest also uses it, FWIW. I guess that's kinda me too :)
19:10:01 I think this PR is a toss-up, no strong feelings on whether this actually clarifies things, kinda in typo territory
19:10:02 I don't think the variable name needs to change TBH
19:10:22 Let's wait for the author to comment and move to the next PR?
19:10:32 roasbeef: it does stop people breaking the spec parsing though, or at least makes them fix up the tool if they do.
19:10:33 their point about the bolt 4 wording is more worthy of a change imo
19:10:35 t-bast: ack
19:10:40 roasbeef: ack
19:10:52 agreed
19:11:08 #topic Claiming revoked anchor commit txs in a single tx
19:11:11 #link https://github.com/lightningnetwork/lightning-rfc/pull/803
19:11:30 Worth having a look and bikeshedding the wording on this PR
19:11:46 It tells implementers to be careful when batch-claiming revoked outputs
19:12:32 if we're advising w.r.t best practices when sweeping revoked outputs, we may also want to add that depending on the latest state and the broadcasted state, the "defender" can siphon off the attacker's funds towards chain fees to speed up the confirmation of the justice transaction
19:13:03 maybe a whole section of recommended best practices for sweeping could be considered?
19:13:03 but also even if they pin the second level, that needs to confirm, then they still need to wait for csv, and there's another revocation output there
19:13:14 I'm sure ariard would love writing a section like that
19:13:28 Where we'd quickly explain the subtleties and attacks
19:13:36 it's one of those "maybe does something, but can delay things at times" vectors from my pov, in that either they pay a lot of fees to jam things up, and that either confirms or it doesn't
19:13:53 if it does, then they paid a buncha fees, if it doesn't, then it's likely the case that the "honest" one would
19:14:07 roasbeef: yeah, ZmnSCPxj recently implemented scorched earth for c-lightning. But you've always had this problem where a penalty could be a single giant tx but then you need to quickly divide and conquer as they can play with adding HTLC txs...
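A rough sketch of that divide-and-conquer point, in Python with made-up types and helpers (build_justice_tx, on_block) rather than code from any of the implementations discussed here: batch every revoked output into one justice tx, and whenever the cheater gets an HTLC transaction confirmed that spends one of those outputs, drop that input and queue a claim on the HTLC transaction's own revocable, CSV-delayed output instead.

    from dataclasses import dataclass, field
    from typing import List, Optional, Set, Tuple

    Outpoint = Tuple[str, int]  # (txid, output index)

    @dataclass
    class Tx:
        txid: str
        inputs: List[Outpoint]   # outpoints this confirmed tx spends
        n_outputs: int = 1

    @dataclass
    class Block:
        txs: List[Tx] = field(default_factory=list)

    def build_justice_tx(outpoints: Set[Outpoint], feerate: int) -> Optional[dict]:
        # Stand-in for real tx construction: one input per remaining revoked
        # outpoint (all spendable with the revocation key), one output to us.
        if not outpoints:
            return None
        return {"inputs": sorted(outpoints), "feerate": feerate}

    def on_block(block: Block, pending: Set[Outpoint],
                 second_level: List[Outpoint], feerate: int) -> Optional[dict]:
        # If the cheater confirmed an HTLC tx spending an output we were
        # batching, stop sweeping that output directly and instead chase the
        # HTLC tx's own output, which also has a revocation path after its CSV.
        for tx in block.txs:
            for spent in tx.inputs:
                if spent in pending:
                    pending.discard(spent)
                    second_level.append((tx.txid, 0))
        # Rebuild the (smaller) batched justice tx with whatever is left.
        return build_justice_tx(pending, feerate)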
19:14:30 mhmm, another edge case here is if they spend the htlc transactions within distinct transactions
19:14:47 if you're sweeping them all in a single tx, you need to watch for spends of each in order to update your transaction and breach remedy strategy
19:14:52 (So c-lightning just makes indep txs, because it's simple and nobody really seems to care).
19:15:24 yes, TBH eclair also makes indep txs, if you're claiming a revoked tx you're going to win money anyway, you don't need to bother optimizing the fees
19:15:32 mhmm, according to jonny1000's "breach monitor" there was a "suspected breach" for the first time this year (?) a few weeks ago
19:15:51 roasbeef: didn't hear about that, can you share a link?
19:16:08 t-bast: so not always (that you'd win money), if things are super lopsided then they may just be going back to a better state for them, and one that you're actually in a worse position for
19:16:32 sure, sec....
19:16:42 https://twitter.com/BitMEXResearch/status/1305794706702569472?s=20
19:16:52 roasbeef: true, if everything was on your side they may force you to spend on fees, that's right
19:17:00 so it was a detected justice transaction, ofc you can't detect failed defense attempts
19:17:06 https://forkmonitor.info/lightning
19:17:26 $55... interesting :)
19:17:42 ok yeh was wrong about "first time this year", but they're relatively uncommon and usually for "small amounts", which could mean just ppl testing to see "if this thing actually works"
19:18:01 no wumbo defenses yet ;)
19:18:16 going back to the PR, do you think it's worth creating a section dedicated to sweeping subtleties? Or just bikeshed the current wording?
19:18:29 roasbeef: if it's never used, it doesn't work xD
19:18:34 hehe
19:18:52 I think this prob stands on its own, wouldn't want to slow it down to add more stuff to it that can be done in a diff more focused PR (on the best practices section)
19:19:11 gonna comment on it to mention that even if they pin, you can play at the second level if that ever confirms
19:19:48 great, too bad ariard isn't here today, we'll continue the discussion on github
19:19:58 fsho
19:19:58 #action discuss wording on the PR
19:20:28 #topic clarify message ordering post-reconnect
19:20:31 #link https://github.com/lightningnetwork/lightning-rfc/issues/794
19:20:54 This issue of clarifying the order of replaying "lost" messages on reconnection has probably been discussed at length before
19:21:05 But I couldn't find past issues or ML threads to reference
19:21:20 Hi, I made that PR
19:21:44 yeh we discovered this when tracking down some spurious force closes and de-syncs in lnd, that would even happen lnd <-> lnd
19:22:05 imo this is one of the areas of the spec that's the most murky, but also super critical to "get right" to ensure you don't have a ton of unhappy users
19:22:42 (this area == how to properly resume a channel that may have had inflight updates/signatures/dangling-commitments)
19:23:08 I agree, this is important to get right as this leads to channel closures, so it's worth ensuring we all agree on the expected behavior!
19:24:14 mhmm, almost feels like we need to step waaaaaay back and go full on PlusCal/TLA+ here, as idk about y'all but even years later we've found some new edge cases, but could just be us lol, also ofc something like that can be pretty laborious
19:24:26 roasbeef: indeed. If we were doing it again I'd go back to being half duplex which removes all this shit.
19:24:27 I think a stopgap there would just be stating exactly what you need to commit to disk each time you recv a message
19:24:54 but what I'm referring to rn (not writing all the state you need to) is distinct from what eugene found, which is ambiguous retransmission _order_
19:25:00 and the order of sig/revoke here is make or break
19:25:08 eugene, have anything you want to add?
19:25:26 it feels to me that your outgoing messages are a queue, that you need to be able to persist/replay on reconnections
19:25:36 Yes that's the crux of the issue
19:25:50 t-bast: is that how y'all implement it?
19:25:53 we should keep that queue on disk until the commit_sig/revoke_and_ack lets us know we can forget about those
19:26:17 roasbeef: I don't think we have that explicitly, but I'm thinking that we could :)
19:26:27 I may be missing some subtleties/edge cases though
19:26:44 also iirc rust-lightning has something in their impl to force a particular ordering based on a startup flag?
19:26:47 This is really the kind of issue where I only feel confident once I enumerate all possible cases to verify it works...
19:26:57 But I think this is wrong (though this is my first read of this issue). Either ordering should work, but I will test.
19:26:59 Yes you really do need to enumerate all cases, so that's what we did
19:27:04 t-bast: hehe yeh, hence going waaaay up to concurrent protocol model checking...
19:27:10 eugene did it more or less by hand in this case tho
19:27:22 rusty: so I think we have a code-level repro on our end, right eugene?
19:27:27 There are a limited number of cases really, so it should be possible to enumerate
19:27:34 I think the examples in that PR also spell out a clear scenario as well
19:27:49 rusty: either ordering works? I'm surprised, I'm not sure eclair will handle both orders (but haven't tested E2E)
19:27:54 We can trigger the current state de-sync yeah
19:28:10 check out this comment for the unrolled de-sync scenario: https://github.com/lightningnetwork/lightning-rfc/issues/794#issuecomment-687337732
19:28:46 Well, OK. We assume atomicity between receiving sig and sending rev. Thus, if Alice says she has received the sig (via the reconnect numbers) the rev must go first.
19:29:09 but there're scenarios where you need to retransmit _both_ those messages
19:29:26 typically it occurs when there're in-flight concurrent updates
19:29:39 rusty: interesting, I don't think that's how eclair works, I'll check in detail
19:29:39 (so both sides send an add, both may also send a sig afterwards, with one of those sigs being lost)
19:30:37 roasbeef: the reconnect numbers should indicate what happened there, though. At least, that's the intent.
19:30:39 rusty: I read it wrong, I agree with what you said
19:31:15 ok if y'all are able to provide a "counter-example" w.r.t why eugene's example doesn't result in a de-sync then I think that'd be an actionable step that would let us either move forward to correct this, or determine it isn't actually an issue in practice
19:31:24 Sigh, there is a simpler scheme which I discarded under pressure to optimize this. It would also simplify edge cases with fees.
19:31:25 there very well could just be something wrong w/ lnd's assumptions here as well
19:31:36 but I think we did a CL <-> lnd repro too, eugene?
19:31:37 roasbeef: no, I've changed my mind. If you ack the sig, you need to immediately revoke,
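A minimal sketch of the persisted-queue idea t-bast describes above, in Python with a hypothetical on-disk format and field names (not how eclair, lnd, c-lightning or rust-lightning actually store this): un-acked updates stay on disk until the peer's revoke_and_ack lets us drop them, update_add_htlc is replayed in increasing id order, and the relative order of a retransmitted revoke_and_ack vs commitment_signed is deliberately left as a parameter, since that is exactly the ambiguity issue #794 is about.

    import json, os

    class OutgoingQueue:
        """Un-acked outgoing updates, kept on disk until the peer acknowledges
        the commitment that contains them."""
        def __init__(self, path: str):
            self.path = path
            self.msgs = []
            if os.path.exists(path):
                with open(path) as f:
                    self.msgs = json.load(f)

        def push(self, msg: dict) -> None:
            self.msgs.append(msg)
            self._flush()

        def on_their_revoke_and_ack(self, acked_commitment: int) -> None:
            # everything signed into a commitment they have revoked-and-acked
            # can now be forgotten
            self.msgs = [m for m in self.msgs
                         if m.get("commitment_number", 0) > acked_commitment]
            self._flush()

        def _flush(self) -> None:
            tmp = self.path + ".tmp"
            with open(tmp, "w") as f:
                json.dump(self.msgs, f)
            os.replace(tmp, self.path)  # atomic rename so a crash can't half-write

    def retransmissions(queue: OutgoingQueue, resend_commit: bool,
                        resend_revoke: bool, revoke_first: bool) -> list:
        """Whether to resend commitment_signed / revoke_and_ack is derived from
        the peer's channel_reestablish counters per BOLT 2; their relative order
        (`revoke_first`) is the debated bit, so it is an explicit choice here."""
        replay = []
        if resend_revoke and revoke_first:
            replay.append({"type": "revoke_and_ack"})
        if resend_commit:
            # replay the updates covered by the lost commitment_signed; put
            # update_add_htlc in increasing htlc id order (other update_* types
            # elided here for brevity), then the signature itself
            adds = sorted((m for m in queue.msgs if m["type"] == "update_add_htlc"),
                          key=lambda m: m["id"])
            replay.extend(adds)
            replay.append({"type": "commitment_signed"})
        if resend_revoke and not revoke_first:
            replay.append({"type": "revoke_and_ack"})
        return replay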
19:31:44 fsho
19:32:19 We did a CL <-> lnd repro but that didn't show this, that showed a different CL-specific issue with HTLCs being out of order.
19:32:33 ah yeh that's another thing....htlc retransmission ordering
19:32:47 seems some impls send them in increasing htlc id order, while others do something else
19:32:57 eugene: yeah, I've got one report of that, I suspect that we're relying on db ordering which sometimes doesn't hold.
19:33:03 but the way lnd works, we'll re-add those to the state machine, which enforces that ids aren't skipped
19:33:08 ok cool that y'all are aware/looking into it
19:33:26 I'll dive into eclair's behavior in detail and will report back on the issue
19:33:41 Let's check eclair and c-lightning and post directly on github, sounds good?
19:33:50 sgtm
19:33:53 And ping ariard/BlueMatt to have them check RL
19:34:12 on our end we fixed a ton of "we didn't write the proper state so we failed the reconnection" issues in 0.11
19:34:20 a ton being like 2/3 lol
19:34:40 roasbeef: that's still quite a few, given how long this has been around :( Maybe I'll hack up the simplified scheme, see what it looks like in practice.
19:34:44 we plan on doing a spec sweep to see how explicit some of these requirements were
19:34:51 fwiw the section on all this stuff is like a paragraph lol
19:35:09 it should fit on a t-shirt, otherwise it's too complex
19:35:14 rusty: yeh....either we try to do something simpler, or actually try to formalize how we "think" the current version works w/ something like pluscal/tla+
19:35:31 t-bast: mhmm, I think it can be terse, just that possibly what's there rn may be insufficient
19:35:36 oops, today's a us holiday so forgot there was a meeting. I'd have to dig, but we have fuzzers explicitly to test these kinds of things so I'd be very surprised if we have any desync issues left for just message ordering.
19:35:39 I also wonder how new impls like Electrum handle this stuff
19:35:57 roasbeef: yeah, I was teasing the extreme, I think this needs to be expanded to help understanding
19:36:08 BlueMatt: is it correct that RL has a flag in the code to force a particular ordering of sig/revoke on transmission?
19:36:54 roasbeef: yes. we initially did not, but after fighting with the fuzzers for a few weeks we ended up having to in order to get them to be happy (note that it is only partially for disconnect/reconnect - it's also because we have a mechanism to pause a channel while waiting on I/O to persist new state)
19:37:22 (and I'd highly recommend writing state machine fuzzers for those who have not - they beat the crap out of our state machine, especially around the pausing feature, and forced a much more robust implementation)
19:38:39 What exactly do you test for?
19:39:00 damn I got dc-ed
19:39:06 t-bast: and t-bast-official will the real t-bast plz stand up?
19:39:23 it's an official account, you're safe
19:39:24 create three nodes, make three channels, interpret fuzz input as a list of commands to send payments, disconnect, reconnect, deliver messages asynchronously, pause channels, unpause channels, etc.
19:39:28 * rusty looks at the c-lightning reconnect code and I'm reliving the nightmare right now. We indeed do a dance to get the order right :(
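A toy sketch of the command-driven harness shape BlueMatt describes above (this is not rust-lightning's chanmon_consistency harness, and the "channel model" here is an empty stub where the real state machine would sit): fuzz bytes become a list of actions against two peers, and any force-close counts as a failure.

    import os

    ACTIONS = ("send_payment", "disconnect", "reconnect", "deliver_one")

    class ToyPeer:
        def __init__(self):
            self.connected = True
            self.outbox = []            # sent but not yet delivered messages
            self.force_closed = False   # a real model sets this on invalid input

    def run_case(data: bytes) -> None:
        a, b = ToyPeer(), ToyPeer()
        pairs = {0: (a, b), 1: (b, a)}
        for byte in data:
            sender, receiver = pairs[byte & 1]
            cmd = ACTIONS[(byte >> 1) % len(ACTIONS)]
            if cmd == "send_payment" and sender.connected:
                sender.outbox.append("update_add_htlc")
            elif cmd == "disconnect":
                a.connected = b.connected = False
                a.outbox.clear(); b.outbox.clear()   # in-flight messages are lost
            elif cmd == "reconnect":
                a.connected = b.connected = True     # real code replays per BOLT 2 here
            elif cmd == "deliver_one" and sender.outbox and receiver.connected:
                sender.outbox.pop(0)                 # real code feeds the peer's state machine
            # the invariant being fuzzed: none of this should cause a force-close
            assert not a.force_closed and not b.force_closed, "channel de-sync"

    if __name__ == "__main__":
        run_case(os.urandom(256))   # a real fuzzer (AFL/libFuzzer) supplies the bytes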
19:39:57 And make sure the channel doesn't de-sync?
19:40:24 right, if any peer decides to force-close that is interpreted as a failure case and we abort();
19:40:31 the relevant command list is here, for the curious https://github.com/rust-bitcoin/rust-lightning/blob/main/fuzz/src/chanmon_consistency.rs#L663
19:42:55 Shall we move on? How about everyone checks their implementation this week and we converge on the issue?
19:43:28 sgtm, a good place to start is to check out that example of a de-sync scenario then double check against actual behavior
19:43:29 t-bast-official: ack, I've already commented on-issue. TIA to whoever does a decent write up of this in the spec though...
19:43:46 we should prob also note the htlc ordering thing more explicitly somewhere too
19:44:00 #action check implementations' behavior in the scenario described in the issue
19:44:06 btw according to https://wiki.debian.org/MeetBot only the chair can do endmeeting, topic, agreed
19:44:33 and the-real-t-bast is no more?
19:44:35 #action figure out a good writeup to clarify the spec
19:44:48 Daaamn, I can manually reconnect, be back
19:44:52 lol
19:44:56 "the lost meeting"
19:45:17 roasbeef: the meeting that never ended...
19:45:26 chair is back, back, back, back again
19:45:44 #topic Evaluate historical min_relay_fee to improve update_fee in anchors
19:46:04 rusty / BlueMatt, did you have time to dig up the historical number on this topic?
19:46:10 t-bast: I got sidetracked trying to optimize bitcoin-iterate, sorry. Will re-visit today.
19:46:24 no worries, we can go to the next topic then
19:46:40 blinded paths or upfront payments/DoS?
19:47:32 t-bast: hmm, blinded paths merely needs review IIRC?
19:47:59 t-bast: lol i thought the meeting was tomorrow and had a free day today :/ sorry.
19:48:09 stupid 'muricans
19:48:16 Yes, I've asked around to get people to review, hopefully we should see some action soon
19:48:21 BlueMatt: no worries!
19:48:32 BlueMatt: now we even have *two* holidays in one day!
19:48:37 Let's do upfront payments/DoS mechanisms then?
19:48:52 #topic Upfront payments / DoS protection
19:48:54 DoS is an infinite well, but I have been thinking about it. AFAICT we definitely need two parts: something for fast probes, something for slow probes.
19:49:14 Just as we set the topic, a wild joost appears
19:49:19 yeh that's the thing, DoS is also griefing mainly, and some form of it exists in just about every protocol
19:49:21 he *knew*
19:49:29 vs like something that results in _direct_ attacker gain
19:49:30 Haha, I was off by one hour because of timezone and just catching up
19:49:50 two for one holiday deal, sounds very american haha
19:49:57 then endless defence and ascend
19:50:02 niftynei: lolol
19:50:15 "now with a limited time offer!"
19:50:25 slow probes: I still can't beat a penalty. There are some refinements, but they basically mean someone closes a channel in return for tying up funds. You can still grief, but it's not free any more.
19:50:45 well ofc there's the button we've all been looking at for a while: pay for htlc attempts
19:51:01 but that has a whole heap of tradeoffs, but maybe also makes an added incentive for better path finding in general
19:51:11 "spray and pray" starts to have an actual cost
19:51:13 fast probes: up-front payment. I've been toying with the idea of node-provided tokens which you can redeem for future attempts.
19:51:29 but then again, that could be a centralization vector: ppl just outsource to some mega mind db that knows everything
19:51:37 (which lets you fuzz the amounts much more: either add some to buy some tokens, or spend tokens to reduce cost)
19:51:41 rusty: yeah I've thought of that too....but that's also kinda dangerous imo...
19:51:50 roasbeef: oh yeah, for sure!
19:52:21 roasbeef: ideally you go for some tradable chaumian deal, where you can automatically trade some with neighbors for extra obfuscation
19:52:32 imo the lowest-hanging, low-risk fruit here is just dynamic limits as t-bast brought up again on the ML recently
19:52:45 rusty: yeh...but I'm more worried about like "forwarding passes".....
19:52:53 it would be really helpful to have a way to somewhat quickly check a proposal against known abuses, as every complex mechanism we introduce may be abused and end up worse
19:53:10 hmm interesting, may need some fleshing out there to enumerate what properties you think that can give us, rusty
19:53:31 t-bast: yeh if there was an ez no-brainer solution, we'd have done it by now
19:53:49 in the end, maybe there's just a series of heterogeneous policies ppl deploy, as all of them are "not so great"
19:54:22 but again it's also "just" griefing/DoS, restricted operating modes in various flavors are ofc possible (with their own can of worms that worry me even more than the griefing tbh)
19:54:30 roasbeef: yeah, OK, let's focus on up-front payment. Main issue is that you now provide a distance-to-failure/termination oracle, so my refinement attempts have been to mitigate that.
19:54:34 at least having a living document somewhere with all the approaches we tried that were broken may be helpful - I can draft that if you find it useful
19:55:10 t-bast: sure, part of the issue is that there've been like 20+ proposals split up across years of ML traffic
19:55:40 rusty: I was proposing going from the other direction: dynamic limits to give impls the ability to try out their own policies, with some of them eventually becoming The Word
19:56:00 roasbeef: I think we need both, TBH.
19:56:05 if we had upfront payments magically solved, how many of us would deploy it by default today on impls given all the tradeoffs?
19:56:08 yeh :/
19:56:22 it could just be part of some reduced operating model, where you then signal ok you need to pay to pass
19:56:30 but then again that can be gamed by making things more expensive for everyone else
19:56:31 on the "just griefing/dos": The ratio of attacker effort vs grief is quite different compared to for example trying to overload a website. And the fact that the attack can be routed across the network doesn't make it better
19:56:48 joostjgr: indeed.
19:57:00 joostjgr: it depends, overload a platform/infrastructure provider instead, and you get the same leverage
19:57:28 and depending on who that is, actually do tangible damage, but ofc the big bois these days have some nice force fields
19:57:59 > if we had upfront payments magically solved, how many of us would deploy it by default today on impls given all the tradeoffs?
19:58:02 ?
19:58:14 taking into account all the new gaming vectors it introduces
19:58:20 depends on the trade-offs :D
19:58:27 lolol ofc ;)
19:58:43 I would. But then, I don't really *like* my users :)
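For concreteness only, one possible (entirely speculative) reading of the buy/spend-tokens idea above, with made-up names and numbers since nothing like this is specified anywhere: a forwarding node lets a peer overpay the upfront fee on some attempts to bank credits, then spend those credits on later attempts, which blurs what any individual forward actually cost.

    class ForwardingCredits:
        """Toy per-peer accounting for prepaid HTLC forwarding attempts."""
        def __init__(self, upfront_fee_msat: int = 10):
            self.upfront_fee_msat = upfront_fee_msat
            self.credits = {}   # peer id -> number of prepaid attempts

        def on_forward_request(self, peer: str, paid_msat: int) -> bool:
            banked = self.credits.get(peer, 0)
            if paid_msat >= self.upfront_fee_msat:
                # overpayment is banked as whole prepaid attempts for later use
                extra = (paid_msat - self.upfront_fee_msat) // self.upfront_fee_msat
                self.credits[peer] = banked + extra
                return True
            if banked > 0:
                self.credits[peer] = banked - 1   # spend a prepaid attempt instead
                return True
            return False   # unpaid attempt: reject it (or fall back to rate-limiting)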
19:58:47 also you'd need to formulate it into a model of a "well funded" attacker as well, that just wants to mix things up
19:58:51 lololol
19:59:05 if the upfront payment pays for exactly the cost the htlc represents, I'd say it is fair and would be deployed.
19:59:07 as in: if you're a whale, only a very high value would actually be a tangible "cost" to you
19:59:20 joostjgr: deployment is one thing, _efficacy_ is another
19:59:36 whales don't feel small waves...
20:00:53 Wait, did I fall into the DST trap... again.... ?
20:00:54 Wasn't the question about deployment? Just saying I would deploy it :)
20:01:05 Good evening cdecker xD
20:01:08 cdecker: welcome to the future ;)
20:01:14 Hi everyone :-)
20:01:34 joostjgr: yeh but digging deeper, how do you determine if something is effective after deployment? should "ineffective" mitigations be deployed?
20:01:36 That's what I get for writing my own calendar...
20:01:41 Sorry for being late
20:02:03 "efficacy" also depends on the profile of a theoretical attacker and their total budget, etc, etc
20:02:14 If it makes the attacker pay for the attack, I think it can be effective
20:02:17 No worries cdecker, we're right in the middle of upfront payment/DoS, if you want to jump in
20:02:24 Anyone have ideas about cost? Would we add YA parameter in gossip, or make it a flat fee, or make it a percentage of the successful payment?
20:02:27 just like a really "well funded" attacker can clog up bitcoin blocks for weeks with just their own transactions
20:02:39 rusty: should be set and dynamic for the forwarding node imo
20:02:46 Yes, there are always attackers with bigger pockets of course. But that may not be a reason to let anyone execute these for free
20:02:53 rusty: I kinda liked your proposal where it wasn't only sats but also some grinding
20:03:27 t-bast: I have a better one now. You can buy/use credits. Ofc it's trust, but you're only really using it for amount obfuscation.
20:03:31 joostjgr: yeh, just getting at the fact that stuff like this is never really "solved", just mitigated based on some attacker profile/budget, like just look into all the shenanigans that miners can get into against other miners in an adversarial env
20:03:54 rusty: there's a danger zone there....
20:04:11 centralization pressures also need to be weighed
20:04:50 roasbeef, is centralization in this case based on the 'decay function of service providerism' or something inherent to the proposal?
20:05:08 roasbeef: yep, at least one. Hence I really wanted them to be tradable, but that's more complex.
20:05:32 not sure what you mean by decay there (there're less ppl?), niftynei, more like "ma'am, do you have the proper papers for this transit?"
20:05:32 niftynei: can you explain what you mean by "decay function of service providerism"?
20:05:55 "ahh, I see, then no can do"
20:06:14 number of people providing the service is smaller than total set of people using the service
20:06:15 you mean big barrier to entry for newcomers/small nodes?
20:06:49 e.g. everyone needs liquidity, only so many people run 'liquidity providing' services
20:07:08 hello
20:07:16 and over time those tend to fall offline/consolidate etc because 'running a service' is Work
20:07:24 prob also introduces some other bootstrapping problems as well, but imo fragmenting the network is an even bigger risk....if y'all get where I'm going with that above example....
20:07:36 hey ariard, the daylight savings got you too (we started an hour ago)
20:07:52 ah damn
20:08:05 -> reading the logs
20:08:14 lotta logs lol
20:08:31 Nah, 250 lines, halfway through already ^^
20:08:52 roasbeef: yeah I agree, but it feels like all the potential solutions for this problem have to rely on some "trust" that is slowly built between peers (dynamically adjusting cost based on the "relationship" you had with your peer)
20:09:17 roasbeef: which definitely centralizes (why spend time building trust with people you don't know?)
20:09:24 t-bast: I think what I'm talking about above is an entirely distinct, more degenerate class of "failure" mode
20:09:50 yeah because when taken to the extreme, you split and others start their own network
20:10:01 even then, if you consider a "dynamic adversary" that can "corrupt" nodes after the fact, then that starts to fall short as well
20:10:18 (playing devil's advocate a bit here, but we'd need to really settle on a threat model)
20:10:19 I'll have my own LN, with blackjack and hookers
20:10:23 lololol
20:10:31 yeh that's always an option xD
20:10:40 the freedom to assemble!
20:11:45 Well at least I'll start centralizing all the ideas that have been proposed in that space in a repo which we can update as new ideas come, I think it will save time in the long run
20:11:53 gotta be going soon myself....great meeting though, also nice that with the time change I no longer have an actual meeting-meeting during this time as well
20:11:57 t-bast: +1
20:12:07 #action t-bast summarize existing proposals / attacks / threat models
20:12:26 I think that threat model should at least include the trivial channel jamming that is possible today.
20:12:34 #action rusty can you detail your token idea?
20:12:45 also issues with watchtowers
20:12:47 t-bast: yeah, I'll post to ML.
20:12:53 ooh excited to see a summary/list of threat models to contemplate +1
20:12:56 you should talk with Sergi, he spotted a few ones
20:13:00 joostjgr: yes definitely, there need to be different threat models for different kinds of attackers
20:13:40 ariard: yeah I saw some of your discussions, where a watchtower may be spammed as well, I'll ping him to contribute to the doc
20:14:20 Shall we discuss one last topic or shall we end now (sorry for the unlucky ones who missed the DST change...)?
20:14:22 t-bast: looking forward to that overview too :+1:
20:14:23 idk if that's new? that's why all towers today are basically "whitelisted"
20:14:43 roasbeef: for a future with public towers
20:14:44 roasbeef: I agree, I think the watchtower model today is very easily spammable
20:14:55 at least it's like that in lnd, since we hadn't yet implemented rewards other than upon justice
20:15:02 Yeah, the intention was always that you'd pay the watchtower.
20:15:03 so yeah not the model of today but the model where you pay per update
20:15:04 roasbeef: but we can list the requirements/threat model to build the watchtowers of the future
20:15:17 yeh, lol just like you don't open up your network attached filesystem to the entire world
20:15:18 and thus someone can force you to pay the watchtower for nothing until you exhaust your "credit"
20:15:40 the entire world includes your channel counterparty
20:15:47 yeh depends entirely on how a credit system would work
20:16:08 yeh ofc, basic isolation is a requirement
20:16:21 but also one needs to assume the existence of a private tower at all times as well
20:16:29 depends on the situation tho really, very large design space
20:16:41 yeah but for mobile? but agree the design space is so large
20:17:08 the latency cost of the watchtower might be a hint of the presence of one
20:17:09 depends really, also would assume mobile chan sizes <<<<<< actual routing node chan sizes
20:17:22 especially if we have few of them
20:17:25 and it's mobile also, so don't store your mattress savings in there
20:17:52 but lots of things to consider, which is why we just implemented whitelisted towers to begin with
20:17:53 t-bast: what was this story with people putting huge amounts on mobile lol?
20:18:03 sure easier to start with private towers
20:18:19 oh yeh we know from exp as well ppl just really like to sling around large amts of sats to "test" :p
20:18:51 some people are crazy with their bitcoin, they probably have too much of those
20:18:57 g2g
20:19:04 see you roasbeef
20:19:16 cu roasbeef :-)
20:19:22 let's end for today, already a lot to do for the next meeting ;)
20:19:25 #endmeeting