20:09:38 #startmeeting
20:09:38 Meeting started Mon May 25 20:09:38 2020 UTC. The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:09:38 Useful Commands: #action #agreed #help #info #idea #link #topic.
20:09:44 right, it was just to inform people :)
20:10:05 agreed, please all have a look at ariard's PR!
20:10:27 #topic Update test vectors to static_remotekey
20:10:30 #link https://github.com/lightningnetwork/lightning-rfc/pull/758
20:10:39 Ah, found the PR
20:10:58 Has someone been able to test these updated vectors? Eclair still doesn't support static_remotekey, so I'm bailing out on this one :)
20:11:22 Not that I know of for c-lightning, we're knee-deep in refactorings at the moment
20:11:39 bitconner maybe on the lnd side?
20:11:57 But I think we can count ACKs on the PR and apply directly, it looks like an innocent enough change
20:12:05 i know we have tests, but can't say for sure these have been verified
20:12:26 Alright, so let's ACK on the PR once implementations are able to test these
20:12:31 sgtm
20:12:43 #action ACK on PR when test vectors have been verified by other implementations, then apply
20:13:02 #topic Clarify TLV record uniqueness in streams
20:13:05 #link https://github.com/lightningnetwork/lightning-rfc/pull/777
20:13:19 This is a small clarification of the fact that TLV records can occur at most once in a stream
20:13:38 I think the rationale paragraphs are too dense and unnecessary, but apart from that it looks good to me
20:13:44 For a second there I was confused why bolt11 advocates for multiple `r` fields, but then I realized it's concatenated in the value
20:13:46 requirement changes lgtm, the rationale section should probably be cut down tho
20:14:05 the last paragraph is just an expansion of the final bullet above
20:14:13 Agreed
20:14:32 yes, agree too, the wording should be clarified
20:14:53 maybe i should leave some feedback as to how to condense the first paragraph into another bullet?
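[Editor's note] The uniqueness rule discussed above falls out of BOLT #1's ordering requirement: types in a TLV stream must appear in strictly increasing order, which implies each type occurs at most once. A minimal validation sketch in Python, with hypothetical helper names (this is not any implementation's real API, and it omits other checks such as minimal BigSize encodings and unknown-even-type handling):

```python
import hashlib  # not needed here, kept out; stream parsing is pure byte slicing


def read_bigsize(data, i):
    """Decode a BigSize integer at offset i, return (value, new_offset)."""
    prefix = data[i]
    if prefix < 0xfd:
        return prefix, i + 1
    sizes = {0xfd: 2, 0xfe: 4, 0xff: 8}
    n = sizes[prefix]
    return int.from_bytes(data[i + 1:i + 1 + n], "big"), i + 1 + n


def parse_tlv_stream(data):
    """Return {type: value}; raise ValueError on duplicate or unordered types."""
    records, i, last_type = {}, 0, -1
    while i < len(data):
        t, i = read_bigsize(data, i)
        length, i = read_bigsize(data, i)
        if t <= last_type:
            # strictly increasing order implies the at-most-once rule
            raise ValueError("TLV types must be strictly increasing")
        records[t] = data[i:i + length]
        i += length
        last_type = t
    return records
```

For example, a stream containing type 1 twice fails, which is exactly the case PR #777 spells out.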
20:14:56 In general I think we should keep rationale to a minimum, and rather have it in proposal texts when necessary due to large changes or interdependencies
20:15:07 bitconner: SGTM
20:15:09 Sounds good bitconner ^^
20:15:18 👍
20:15:37 #action bitconner to comment, condense rationale paragraphs and then apply
20:15:57 #topic varint / bigsize clarification
20:15:59 #link https://github.com/lightningnetwork/lightning-rfc/pull/778
20:16:10 This one is almost a spelling one IMO
20:16:19 Feels like a no-brainer
20:16:46 Yes, simple enough
20:16:54 c-lightning trying to have their u8 types invade the spec :D
20:17:24 Hehe, it makes our wire-code generation easy if we express the spec in C xD
20:17:42 xD
20:18:21 Since we're on the topic, I've been brainstorming a new description format with Maxim
20:18:24 yeah, changes lgtm
20:18:42 i think there's one paragraph currently removed that we should keep (first one in bolt-01)
20:19:11 The goal is to move away from weird markdown list and code markup, and instead move towards a more structured approach, such as yaml code-blocks, that are easier to parse and allow for more expressiveness
20:19:27 bitconner: why not, it felt fine to me to remove it, but I don't mind keeping it either, can you comment on the PR?
20:19:55 cdecker: yaml? yikes, that has created issues for many projects
20:20:03 sure, i can do that
20:20:11 yaml > json
20:20:36 I know, but the current format is just hell, and anything where we can add type, length, name, and so on is an improvement
20:20:48 I just landed on yaml because it is slightly more human-readable
20:21:11 do you have a sample of what it would look like?
20:21:18 (also nesting datatypes, such as TLVs, becomes really easy to do)
20:21:23 I agree that the current format can definitely be improved
20:21:39 I'll write up a small meta-proposal and see what the reaction is :-)
20:22:12 great, thanks!
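[Editor's note] For context on #778: the format being renamed from "varint" to "BigSize" is Bitcoin's CompactSize prefix scheme, except that multi-byte values are big-endian and must use the minimal encoding. A minimal encoder sketch (per BOLT #1; a real decoder would also reject non-minimal encodings):

```python
def write_bigsize(value: int) -> bytes:
    """Encode an unsigned integer as a BigSize (big-endian CompactSize)."""
    if value < 0xfd:
        return value.to_bytes(1, "big")          # single-byte form
    if value <= 0xffff:
        return b"\xfd" + value.to_bytes(2, "big")
    if value <= 0xffffffff:
        return b"\xfe" + value.to_bytes(4, "big")
    return b"\xff" + value.to_bytes(8, "big")
```

The big-endian byte order is the only difference from Bitcoin's on-wire varint, which is exactly why the spec wants a distinct name.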
20:22:34 #action Decide whether we want to keep the paragraph mentioning Bitcoin varint or not, then apply
20:22:50 t-bast: actually i think moving the first paragraph is fine now that i read it more in depth
20:23:08 approved
20:23:11 bitconner: alright, I'll give you some time
20:23:14 that was fast
20:23:27 Hehe, the branch is called `die-varint-die`, I think Rusty might not like `varint`s
20:23:47 haha, I always love rusty's branch names
20:24:01 #action Apply #778
20:24:18 #topic flip(B) is incorrectly implemented, so we instead fix the spec xD
20:24:33 #link https://github.com/lightningnetwork/lightning-rfc/pull/779
20:25:01 I discovered this part of our codebase when reacting to this PR, and indeed we don't implement what the spec says
20:25:08 But it looks like no-one does :D
20:26:04 That's ... uhm ... a very ancient part of the spec xD
20:26:48 Haha, looks like it, do you remember coordinating to implement the same thing? Or did all three implementations mis-implement the spec the same way?
20:27:55 probably one of those "you know what i mean, not what i say" instances
20:27:57 And what about rust-lightning, ariard? How did you find out that what you had to implement was actually not what was spec-ed?
20:28:01 Usual "the spec is probably right, let's fiddle until it matches" I think
20:28:18 the obvious way to do this in english != the obvious way to do it in code lol
20:28:20 haha indeed, tweak our code until we match the test vectors
20:28:21 t-bast: I'm checking right now, I think like bitconner said
20:28:56 really likely Matt copy-pasted from c-lightning on this
20:29:33 And the rust compiler trusted rusty's code :-)
20:30:10 lol, the secondary spec is c-lightning when the spec isn't clear enough
20:30:30 I think the clarification in rusty's PR is good enough, WDYT?
20:30:34 Yes, we're doing it right, i.e. enforcing the spirit of the spec
20:31:03 Anyhow, if we all agree that what we actually implemented is described by the change (and the intended characteristics are still met) then I think we should apply
20:31:03 https://imgflip.com/i/42sldk
20:31:17 I don't think the secondary spec being c-lightning is in the spec t-bast :)
20:31:25 wrt to this I found a case recently where c-lightning doesn't quote the BOLT like for everything else
20:31:31 i stumbled upon this a few years ago: https://stackoverflow.com/q/49928131/309483
20:31:49 michaelfolkson: but it looks like it works in practice! ;)
20:32:00 michaelfolkson: if you take the first letter of each 13th word of the theme song it spells out that c-lightning is the reference xD
20:32:18 ja: yes, it looks like this was the old thread rusty resurrected and decided to fix :)
20:32:48 shall we apply this PR?
20:33:00 Or do you want more time to check/review?
20:33:45 cdecker: that would be amazing steganography
20:34:22 I think it's safe to accept, if everyone else is already doing this, which seems to be the case
20:34:47 shows the scheme was a bit too clever...
20:34:54 * roasbeef recalls elkrem with longing
20:35:02 * t-bast waves hello at roasbeef
20:35:27 * cdecker thinks we're trying to be clever too often :-)
20:35:29 We need to fix the spellchecker before applying, let's defer to github
20:35:35 ACK
20:35:47 #action fix spellchecker, wait for ACKs and apply
20:36:10 Which long-term thing do you want to discuss?
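[Editor's note] The flip(B) in question lives in BOLT #3's per-commitment secret derivation: starting from a seed, for each set bit of the 48-bit index, flip that bit in the running value and hash. The ambiguity PR #779 resolves is which byte and which bit "bit B" denotes in a 32-byte array. The sketch below assumes the convention the implementations converged on (byte `B // 8` counting from the start of the array, bit `B % 8` within that byte); treat that as an assumption, the PR text is authoritative:

```python
import hashlib


def generate_from_seed(seed: bytes, index: int) -> bytes:
    """Derive a per-commitment secret from a 32-byte seed and 48-bit index."""
    p = bytearray(seed)
    for b in range(47, -1, -1):          # bits 47 down to 0
        if (index >> b) & 1:
            p[b // 8] ^= 1 << (b % 8)    # flip(B) — the ambiguous step
            p = bytearray(hashlib.sha256(p).digest())
    return bytes(p)
```

With index 0 nothing is flipped and the seed comes back unchanged, which is why "tweak the code until the test vectors match" was everyone's debugging strategy: the test vectors pin down the intended bit convention even where the prose didn't.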
20:36:42 I can talk on the mempool-pinning fix, I spent a bit of time thinking about the changes needed
20:36:43 I'd love to get feedback from lnd on route blinding vs the HORNET debate at some point ;)
20:37:06 sure, go for it ariard, the floor is yours
20:37:14 #topic mempool-pinning issues
20:37:29 anchor is more or less the same, we've deployed with lnd 0.10, also super timely imo since fees are pretty high rn, and you need to be able to bump at times to get things in when respecting the cltv delta
20:37:41 likely the case that cltv deltas should rise based on the current fee climate
20:37:47 t-bast: yes, i promise that's coming :)
20:37:54 cool, basically we want to avoid second-stage transaction malleability to avoid your counterparty leveraging the mempool for pinning
20:38:07 roasbeef: is it on by default? did you get some interesting user feedback?
20:38:26 t-bast: it's a flag on start up rn, if both have it set then you'll make anchor chans
20:38:34 roasbeef: what do you mean by anchor is more or less the same?
20:38:37 it's also behind a build flag atm
20:38:37 bitconner: I'm craving some cryptographic fun, too much implementation recently :)
20:38:54 ariard: I mean no major movement
20:39:32 to fix mempool pinning? But you still have the package size limit even if you somehow mitigate the absolute fee thing
20:40:20 Jeremy Rubin had some strong views on the mempool at the Socratic last week
20:40:23 I mean on the format itself spec-wise, pinning is kinda greater than the scheme itself, and there're a few diff problems being solved here
20:40:27 wrt anchor output, I'm actively working on the RL implementation _without_ the sighash introduction, have a few miscs to write on the spec
20:40:29 "The mempool needs a complete rearchitecting and it doesn't work in the way that anybody expects it to work."
20:41:00 roasbeef: yes, I think we can move forward with anchor output, just cutting the sighash thing for the htlc input-output pair
20:41:06 michaelfolkson: that looks worrying...
20:41:10 michaelfolkson: yeh, either we morph things to side-step issues, or some more fundamental fixes need to be rolled out on the mempool layer
20:41:32 ariard: why no sighash there? it lets you combine them and actually bump the fee of the htlc transaction
20:41:34 michaelfolkson: it's a bit more complex, the mempool hasn't been designed for multiple-txn offchain contracts with time requirements
20:41:55 roasbeef: because that's the opposite of where we want to go for fixing mempool pinning
20:42:29 current mempool stuff starts to really creak with any sort of transaction trees
20:42:52 ariard: missing context
20:43:22 with the sighash there, and the csv 1, they're forced to make an rbf-able transaction, so it's a race at that point, as was covered in that ML thread
20:43:25 roasbeef: okay, pinning relies on bip125 rule 3, aka a high absolute fee but low feerate to avoid eviction
20:43:46 Well, we can certainly lobby for longer-term bitcoin mempool behavior changes, but that's not a solution for now
20:43:46 or building a children branch to reach the package size limit wrt ancestors and therefore avoid eviction too
20:44:12 We probably need to go down both the mempool changes for the longer term, as well as find a good fix for the interim
20:44:12 cdecker: yeh def, the big win rn is just even being able to modify the fee when you broadcast, given that fees seem to be sitting at a high-ish level rn
20:44:16 roasbeef: it solves 1) not 2) https://bitcoinops.org/en/topics/transaction-pinning/
20:45:22 I'd love to see eltoo's bring-your-own-fees applied to LN, but even eltoo might be vulnerable to tx pinning...
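[Editor's note] The "bip125 rule 3" pin ariard describes is worth making concrete. Rule 3 says a replacement must pay, in absolute fees, at least the sum of the fees of everything it evicts (rule 4 adds an incremental relay fee for the replacement's own bandwidth). A simplified model with made-up numbers, purely for illustration:

```python
def rbf_rule3_ok(replacement_fee_sat: int, evicted_fees_sat: list,
                 replacement_vsize: int,
                 incremental_relay_feerate: float = 1.0) -> bool:
    """Simplified BIP 125 rule 3 + rule 4 check (illustrative only)."""
    required = sum(evicted_fees_sat) + incremental_relay_feerate * replacement_vsize
    return replacement_fee_sat >= required


# Attacker pins with a huge low-feerate descendant package:
# ~100 kvB of junk at 1 sat/vB = 100,000 sats of absolute fees.
pin_fees = [100_000]

# The honest party's small (~200 vB) replacement at a healthy 25 sat/vB
# pays only 5,000 sats, far below the pin's absolute fees: rejected.
print(rbf_rule3_ok(5_000, pin_fees, 200))
# Only an absurd absolute fee, exceeding the whole pinned package, works.
print(rbf_rule3_ok(101_000, pin_fees, 200))
```

This is the "high absolute fee but low feerate" trick: the pin is cheap in feerate terms (so miners don't mine it soon) yet expensive to displace in absolute terms.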
20:45:40 ariard: it's all super contextual imo still, as w/o miner cooperation, they can still end up just paying a high fee themselves which eventually _does_ confirm, if they can't thread the needle
20:45:45 roasbeef: well, if we add an anchor output to the htlc transaction you can bump via CPFP, assuming your htlc tx has a pre-committed feerate high enough to get into network mempools
20:46:09 ariard: mhmm, at the cost of a bigger htlc transaction, and the inability to aggregate them
20:46:29 which means more fees for htlcs, which are squeezed out even more size-wise
20:46:32 size as in amt
20:46:35 roasbeef: doing so via the maximum package size may have a low absolute fee
20:47:03 ariard: depends, right? they'd still need a high total fee for the package, in order to have it all be accepted
20:47:08 roasbeef: at the cost of a bigger htlc transaction, agree, until the mempool has an API clear enough to confidently turn the lockdown off
20:47:42 also we don't really know how symmetric mempools are in the wild, if a large miner is running full rbf then the situation is diff
20:47:42 roasbeef: no, you just need to get first into network mempools and that's it, feerate is subsumed as a rule here
20:47:59 "first"?
20:48:12 roasbeef: yes, but I don't think it's in LN's security model to have a broadcast path to a miner being full-rbf
20:48:21 we don't really know what "first" is here, there's no ordering
20:48:48 Well, if the tx doesn't make it into the mempool we can't bump it using CPFP
20:48:50 ariard: depends, there're a few problems being solved here, and it feels like we're focusing a lot on the tail end where it becomes more murky
20:48:54 roasbeef: I broadcast my preimage tx with a bigger child attached to reach the package limit before you broadcast your HTLC-timeout
20:49:17 ariard: yeh, I understand that scenario, but there're ordering assumptions here
20:49:20 roasbeef: if you're malicious and don't respect p2p broadcast rules you will likely be first in the majority of network mempools
20:49:49 roasbeef: right, there is an ordering scenario, but the malicious party moves first
20:50:16 roasbeef: I wouldn't make that many assumptions on miner mempools, it's sadly kinda a black box
20:50:32 cdecker: right, for this case you need package relay
20:50:33 you need to have the commitment confirm first, but all I'm getting at is that it's all pretty imprecise
20:51:11 roasbeef: the honest party broadcasts their local commitment, it gets confirmed
20:51:29 i understand the scenarios
20:51:46 the attacker broadcasts a preimage tx + a big child, bypassing honest p2p tx-relay rules to get an advantage
20:52:20 mhmm, and the safety here depends on how long it takes that to confirm
20:52:31 if I get the pre-image from the mempool, then my incoming htlc is ok
20:52:37 by safety you mean probabilities of success for the attack
20:52:50 if I time out, then I made money on the outgoing
20:53:01 roasbeef: yes, if I know your full-node, I can throw a conflict into your mempool to blind you
20:53:13 especially if we have full malleability on the 2nd-stage tx
20:53:18 "If I know your full node", quite the assumption
20:54:00 roasbeef: I know, but last time I checked there was a considerable number of full-nodes using the same IP address as an LN-node
20:54:12 lnd is planning on implementing mempool pre-image watching in our next major release
20:54:37 roasbeef: it would make the attack harder, not impossible
20:54:47 yeh, I understand, it's an easy mitigation
20:55:01 it then escalates things even further, to possibly require them to collude w/ a miner
20:55:03 I do think inter-layer mapping hasn't been studied that much and it's likely far easier than what we think
20:55:29 roasbeef: no, it doesn't add the requirement to collude with a miner, just to find your full-node
20:55:39 yeh, it's all pretty fuzzy in the end imo, but there're a few low-hanging fruits for mitigation, and let's not forget the #1 thing, which is to actually be able to bump fees
20:55:48 might be the case that cltv deltas are too low network-wide rn
20:55:54 and for a big LN node like ACINQ, you may probe by opening a zero-conf channel with them and watching for tx propagation on the base layer
20:56:22 we do get preimages from the mempool though :)
20:56:24 roasbeef: yes, I would recommend increasing cltv deltas a bit, that's an easy mitigation
20:56:40 t-bast: let me blind your mempool with a well-crafted conflict :)
20:57:00 ok, but circling back, you're implementing with an anchor output on the htlcs, ariard?
20:57:09 This brings back something I've been wondering: how hard would it be to bind two Bitcoin nodes together, so that one of them is only used as a "read" source by the second one?
20:57:17 roasbeef: the #1 thing is to actually be able to bump fees, right, you can RBF the CPFP on the anchor
20:57:19 t-bast: elaborate?
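[Editor's note] The "mempool pre-image watching" mentioned above can be sketched simply: scan mempool transactions' witness stacks for 32-byte elements whose SHA256 matches an outstanding payment hash. The data shapes below are hypothetical (a tx is modeled as a list of inputs, each input a list of witness elements as bytes); a real implementation would fetch these from its bitcoind via `getrawmempool` and `getrawtransaction`:

```python
import hashlib


def find_preimages(mempool_txs, payment_hashes):
    """Return {payment_hash: preimage} for any preimages revealed in the mempool."""
    wanted = set(payment_hashes)
    found = {}
    for tx in mempool_txs:
        for witness in tx:                 # one witness stack per input
            for element in witness:
                if len(element) != 32:
                    continue               # HTLC preimages are 32 bytes
                h = hashlib.sha256(element).digest()
                if h in wanted:
                    found[h] = element
    return found
```

This is why blinding a victim's mempool with a well-crafted conflict matters: the watcher only helps if the preimage-revealing spend actually reaches the mempool it is watching.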
20:57:38 That would allow you to have a secondary Bitcoin node that can't be easily linked to your lightning node and thus eclipsed
20:57:44 t-bast: that's easy, bitcoind -connect=
20:57:49 ariard: referring to the commitment, as in just being able to land that, but yeh, there're other ways to bump fees for the htlc transactions
20:58:22 harding: yes, but if I simply do that, it should be somewhat easy for an eclipse attacker to figure out that this is my secondary node, because my tx will show up very quickly on that secondary node, won't it?
20:58:23 t-bast: it's doable, we'd just need to add a few rpcs in core with traffic classes I guess, like opening a block-relay-only connection
20:58:38 t-bast: mhmm, there're a few network configs possible to further isolate a node, or just selectively choose its peers, in the end you still need e2e encryption and auth to have full confidence if it isn't all on a private network
20:58:48 I'd like to ensure that the primary node never directly sends its txes to the secondary node, it only reads from it
20:58:55 roasbeef: yes, agree we should move forward with anchor outputs on the commitment tx, and defer the sighash introduction
20:59:30 t-bast: don't broadcast your transactions to this node, they just do block traffic
20:59:36 ariard: in the end it's the tradeoff of having it there vs not, and whether ppl feel comfortable w/ the mitigations
21:00:04 ariard: that was a possibility, but if I just get block traffic I can't extract my preimages from the mempool :/
21:00:06 roasbeef: kind of the point, people don't feel comfortable with it
21:00:49 t-bast: -connect by default only makes that single connection; it doesn't connect to any other peers. So if A is your normal listening node and on B you do -connect=A, then B will only receive txes and blocks from A. It sounds to me like you want something like https://github.com/laanwj/bitcoin-submittx or a Tor node.
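[Editor's note] The topology harding describes can be written down as a config sketch (hostnames are placeholders; note this is the setup harding points out does NOT solve t-bast's eclipse concern, since B hears everything from A):

```shell
# Node A: the normal listening node with regular peer discovery.
bitcoind -listen=1

# Node B: -connect restricts B to this single peer; B receives blocks and
# txes only from A and never searches for other peers on its own.
bitcoind -connect=A.example.com
```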
21:00:57 roasbeef: it's better for security to bump commitment txn stuck in the mempool, I agree; for mitigations it may not be that many state machine changes, depending on the lockdown scope
21:01:37 cdecker: how practical are these attacks, is that your wondering?
21:01:55 batching htlc transactions together or w/ other transactions tho....
21:02:19 harding: thanks for the link, it's still just a random thought right now, I need to make it more precise. Can I send you a few questions directly once I've formalized my ideas?
21:02:31 roasbeef: long-term I hope the mempool will be clean enough to not lock down htlc txn and batch them together
21:03:22 ariard: not very, tbh, they take quite a bit of sophistication, but the fact that we're discussing them seems proof that people think they pose a risk (especially after blogs have picked it up and are reporting LN as being broken...)
21:03:23 t-bast: of course. You may also want to ping matt as I've heard many miners have similar protections (their concern is having their listening nodes DoS'd by other pools).
21:03:39 t-bast: yes, I think you need to come up with a better attack scenario you're trying to mitigate, because if you do what harding suggests B is useless for eclipse prevention
21:04:16 harding, ariard: thanks, I'll formalize that and share some thoughts with you and Matt soon-ish
21:04:20 cdecker: LN isn't broken, LN is in its bootstrap phase :)
21:04:35 * cdecker wonders if we could at least strengthen ourselves by gossiping about channel closes and preimages
21:05:00 It'd only require one node watching the mempool that raises the alarm for the network
21:05:20 ariard: damn, I always mix up nomenclature, right :-)
21:05:56 cdecker: that's clearly an extension of the security model, you now depend on your connection with them, I don't think it's a wrong fix but it needs to be analyzed
21:06:18 cdecker: it won't work with the PTLC introduction
21:06:36 Right, my thinking was more that the threat would likely already be sufficient to deter attackers...
21:06:55 btw, the mitigation I was envisioning should be compatible with PTLCs, we will likely need another round-trip
21:07:36 ariard: a round trip for what, outside of the extra interaction of dlog htlcs?
21:07:39 cdecker: just blind the mempool of the "preimage-watcher" who has a nice public IP known to let others connect to it
21:07:48 ariard: it works for PTLCs, we just get len(route)x the traffic, not optimal, but if in the meantime we make progress with the mempool proposals it might just work
21:08:07 roasbeef: you need to exchange partial_sigs before committing the new state IIRC?
21:08:19 ariard: that's sort of the point, there just need to be multiple, that can't all be blinded
21:08:25 ariard: you mean if you make all the htlc paths multi-sig?
21:09:05 Ouch, that'd be painful for high-latency links...
21:09:09 roasbeef: right, my point is we may have to do state machine changes now to fix mempool-pinning, but we wouldn't have to change them again when we introduce PTLCs
21:09:11 * t-bast thinks that PTLC will require many more roundtrips than today (multi-hop locks are already quite costly), hope we can engineer this to be efficient enough
21:09:19 just use new messages
21:09:31 ehh, yeh, idk if that'll even work, you need a lot of changes, as you need a new message before CommitSig
21:09:51 t-bast: depends on which sig scheme we end up using in the end
21:09:56 cdecker: why can't they all be blinded if we assume they are all public?
21:09:59 the schnorr version adds the least round trips
21:10:15 roasbeef: yes, but it's already one more round-trip than today, isn't it?
21:10:33 feels like we've been talking about dlog htlcs for years now, with several ways to deploy em (which keep getting better, but there's also the ideal)
21:10:39 roasbeef: yes, it depends if we lock down the HTLC-preimage only or also the remote HTLC-timeout txn?
21:10:51 t-bast: possibly we can piggy-back the extra state on other messages? haven't worked through it too much
21:10:58 ariard: I'm saying that some LN nodes (public and non-public) watch the mempool, and start a broadcast whenever they see something that looks like a closure with HTLCs
21:11:21 roasbeef: that's what I'm hoping for, hopefully we can bundle many features in each roundtrip
21:11:28 Broadcast as in gossip broadcast, staggered and deduplicated, but quick enough to tell the impacted node about the HTLC details
21:11:29 cdecker: how does an honest non-watching LN-node do peer discovery of these special nodes to connect to?
21:11:33 ariard: see my message on the mempool about this, if you want to lock down any other htlc path, then before you send a commit to the other party, you already need to have your version of that htlc sig
21:12:01 ariard: they're not connecting to those nodes, the nodes send out a broadcast, just like they do with node_announcements and channel_announcements
21:12:13 roasbeef: yes, which means you need update_add_htlc ->, <- update_countersign_htlc, -> commitment_signed
21:12:52 cdecker: I see, and we assume all routing nodes, which are the ones at risk, accept all gossip messages
21:12:52 but zooming out, IMO if an implementation is able to implement _any_ form of anchors, then they should as soon as possible, given the current fee rates, if a ton of htlcs expire, you're likely to lose many of them
21:13:00 cdecker: is this a DoS vector?
21:13:12 ariard: no, it's rate-limited by channel closures
21:13:24 And deduplicated at each intermediate hop
21:13:27 cdecker: it's probably quite cheap to implement once we agree on the message, but isn't it also a new way of DoS-ing nodes by sending them many preimages to check against their current payments?
21:13:40 ariard: not sure that fully works, but yeh, you need another message, not sure how that works in a concurrent setting too (that specific message)
21:13:49 you can't send the sig before you know what the commitment looks like
21:14:11 cdecker: oh, you first prove that this is linked to a recent channel_close, something like that?
21:14:18 cdecker: well, it may work, I can draft the preimage-watching proposal and the htlc lockdown one and we compare tradeoffs
21:14:36 we use the ack mechanism so I know what to sign based on what you've gotten, so it might require another level of synchronization, idk, but the more I think about it the more the possible set of changes swells
21:14:49 roasbeef: it should be good, we already assume A may commit state X and B commit state Y, then converge
21:14:49 Yep, we could combine it with an explicit channel_close message to inform nodes that they can remove it from their view at the same time
21:15:07 ariard: sounds very good ^^
21:15:11 ariard: the devil is in the details ;)
21:15:25 ariard: ACK, I'm ready to read these drafts ;)
21:15:31 cdecker: network-wise garbage-collecting preimages, hmmmm, let's try it
21:16:02 Might be a good use-case to try out the `custommsg` facility with ^^
21:16:06 okay, will do a draft for both :)
21:16:30 (I'm just not that comfortable delegating my security to some random peer in the wild)
21:16:52 #action ariard to draft preimage-watching and htlc lockdown proposals
21:16:55 seems to add more blocking either way, since I need to now wait before I can send a sig, and I need a distinct ack for each htlc added, they all need to be done at once, since you need a stable commitment
21:16:57 even a swarm of them, that really sounds like the "watchtower-swarm" idea :p
21:16:59 no_input makes it easier
21:17:10 since you can send an htlc sig w/o knowing the commitment for it
21:17:14 Totally agreed, hence the reliance on having anyone be honest in the network, not one specific node
21:17:18 let's get the ball rolling again on no_input!
21:17:31 100%
21:17:32 * roasbeef snaps his fingers
21:17:39 * roasbeef notices nothing happened...
21:17:44 roasbeef: likely to add more latency, you were saying the schnorr PTLC doesn't have a round-trip?
21:17:45 5 more years for no_input
21:17:49 kek
21:17:50 roasbeef: do your magic, I know you can do better than that
21:17:54 like, do you have a scheme showcase
21:17:55 lolol
21:18:06 Hehe, would love progress on noinput/anyprevout ^^
21:18:20 ariard: oh, idk about that, was just mulling that we could maybe piggy-back the state somewhere
21:18:30 cdecker: I think people expect to get taproot first, before throwing brainpower at eltoo
21:18:36 i haven't thought about that stuff too much since it just seems to creep further out into the future
21:19:04 ariard: yeah, that's my impression too, and I don't want to add noise about noinput that might delay schnorr + taproot
21:19:22 ariard: re blocking above, i was referring to the immediate implications of trying to lock down all htlc spend paths
21:19:25 roasbeef: I know, I was just saying "if we have to do state machine changes, let's be sure they fit with handwaving future features"
21:19:37 mhmm, gotcha ariard
21:19:45 Anyway, I need to drop off
21:20:09 This was a really productive meeting, thanks everybody, and in particular to t-bast for chairing ^^
21:20:18 yes, I'm going forward with implementing anchor outputs and will do drafts to have a better opinion on mitigations :)
21:20:24 Hope to see all of you in person some time soon :-)
21:20:29 Agreed, loved the discussion at the end on mempool issues, very enlightening
21:21:09 Thanks everyone, let's keep in touch on github and someday IRL ;)
21:21:14 #endmeeting