20:09:38 <t-bast> #startmeeting
20:09:38 <lightningbot> Meeting started Mon May 25 20:09:38 2020 UTC.  The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:09:38 <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
20:09:44 <ariard> right, it was just to inform people :)
20:10:05 <t-bast> agreed, please all have a look at ariard's PR!
20:10:27 <t-bast> #topic Update test vectors to static_remotekey
20:10:30 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/758
20:10:39 <cdecker> Ah found the PR
20:10:58 <t-bast> Has someone been able to test these updated vectors? Eclair still doesn't support static_remotekey, so I'm bailing out on this one :)
20:11:22 <cdecker> Not that I know of for c-lightning, we're knee-deep in refactorings at the moment
20:11:39 <t-bast> bitconner maybe on the lnd side?
20:11:57 <cdecker> But I think we can count ACKs on the PR and apply directly, looks like an innocent enough change
20:12:05 <bitconner> i know we have tests, but can't say for sure these have been verified
20:12:26 <t-bast> Alright, so let's ACK on the PR once implementations are able to test these
20:12:31 <bitconner> sgtm
20:12:43 <t-bast> #action ACK on PR when test vectors have been verified by other implementations, then apply
20:13:02 <t-bast> #topic Clarify TLV record uniqueness in streams
20:13:05 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/777
20:13:19 <t-bast> This is a small clarification on the fact that TLV records can occur at most once in a stream
20:13:38 <t-bast> I think the rationale paragraphs are too dense and unnecessary, but apart from that it looks good to me
20:13:44 <cdecker> For a second there I was confused why bolt11 advocates for multiple `r` fields, but then I realized it's concatenated in the value
20:13:46 <bitconner> requirement changes lgtm, the rationale section should probably be cut down tho
20:14:05 <bitconner> the last paragraph is just an expansion of the final bullet above
20:14:13 <cdecker> Agreed
20:14:32 <ariard> yes agree too, wording should be clarified
20:14:53 <bitconner> maybe i should leave some feedback as to how to condense the first paragraph into another bullet?
20:14:56 <cdecker> In general I think we should keep rationale to a minimum, and rather have it in proposal texts when necessary due to large changes or interdependencies
20:15:07 <t-bast> bitconner: SGTM
20:15:09 <cdecker> Sounds good bitconner ^^
20:15:18 <bitconner> đź‘Ť
20:15:37 <t-bast> #action bitconner to comment, condense rationale paragraphs and then apply
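(The uniqueness rule clarified in #777 follows from BOLT #1's ordering requirement; a minimal Python sketch of the check, with a hypothetical helper name:)

```python
def tlv_types_valid(types):
    # BOLT #1: record types in a TLV stream must appear in strictly
    # increasing order, which also guarantees each type occurs at
    # most once -- the uniqueness property #777 spells out.
    return all(a < b for a, b in zip(types, types[1:]))

assert tlv_types_valid([0, 1, 2])       # sorted, unique: valid
assert not tlv_types_valid([1, 1])      # duplicate type: invalid
assert not tlv_types_valid([2, 1])      # out of order: invalid
```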
20:15:57 <t-bast> #topic varint / bigsize clarification
20:15:59 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/778
20:16:10 <t-bast> This one is almost a spelling one IMO
20:16:19 <t-bast> Feels like a no-brainer
20:16:46 <cdecker> Yes, simple enough
20:16:54 <t-bast> c-lightning trying to have their u8 types invade the spec :D
20:17:24 <cdecker> Hehe, it makes our wire-code generation easy if we express the spec in C xD
20:17:42 <t-bast> xD
20:18:21 <cdecker> Since we're on the topic, I've been brainstorming a new description format with Maxim
20:18:24 <bitconner> yeah changes lgtm
20:18:42 <bitconner> i think there's one paragraph currently removed that we should keep (first one in bolt-01)
20:19:11 <cdecker> The goal is to move away from weird markdown list and code markup, and instead move towards a more structured approach, such as yaml code-blocks, that are easier to parse and allow for more expressiveness
20:19:27 <t-bast> bitconner: why not, it felt fine to me to remove it, but I don't mind keeping it either, can you comment on the PR?
20:19:55 <t-bast> cdecker: yaml? yikes, that has created issues for many projects
20:20:03 <bitconner> sure i can do that
20:20:11 <bitconner> yaml > json
20:20:36 <cdecker> I know, but the current format is just hell, and anything where we can add type, length, name, and so on is an improvement
20:20:48 <cdecker> I just landed on yaml because it is slightly more human-readable
20:21:11 <t-bast> do you have a sample of what it would look like?
20:21:18 <cdecker> (also nesting datatypes, such as TLVs becomes really easy to do)
20:21:23 <t-bast> I agree that the current format can definitely be improved
20:21:39 <cdecker> I'll write up a small meta-proposal and see what the reaction is :-)
20:22:12 <t-bast> great thanks!
20:22:34 <t-bast> #action Decide whether we want to keep the paragraph mentioning Bitcoin varint or not, then apply
20:22:50 <bitconner> t-bast: actually i think removing the first paragraph is fine now that i read it more in depth
20:23:08 <bitconner> approved
20:23:11 <t-bast> bitconner: alright, I'll give you some time
20:23:14 <t-bast> that was fast
20:23:27 <cdecker> Hehe, the branch is called `die-varint-die`, I think Rusty might not like `varint`s
20:23:47 <t-bast> haha, I always love rusty's branch names
20:24:01 <t-bast> #action Apply #778
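(For reference, the BigSize format #778 names is Bitcoin's CompactSize with big-endian multi-byte values; a minimal Python sketch of the encoder, function name hypothetical:)

```python
def bigsize_encode(n: int) -> bytes:
    # BOLT #1 BigSize: like Bitcoin's CompactSize/varint, but
    # multi-byte lengths are big-endian (network byte order).
    if n < 0xfd:
        return bytes([n])                       # 1-byte form
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "big")   # 0xfd + u16
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "big")   # 0xfe + u32
    return b"\xff" + n.to_bytes(8, "big")       # 0xff + u64
```

Each range must use the shortest form that fits, so a decoder should reject e.g. `fd00fc` as non-canonical.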
20:24:18 <t-bast> #topic flip(B) is incorrectly implemented, so we instead fix the spec xD
20:24:33 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/779
20:25:01 <t-bast> I discovered this part of our codebase when reacting to this PR, and indeed we don't implement what the spec says
20:25:08 <t-bast> But it looks like no-one does :D
20:26:04 <cdecker> That's ... uhm ... a very ancient part of the spec xD
20:26:48 <t-bast> Haha looks like it, do you remember coordinating to implement the same thing? Or did all three implementations mis-implement the spec the same way?
20:27:55 <bitconner> probably one of those "you know what i mean not what i say" instances
20:27:57 <t-bast> And what about rust-lightning ariard? How did you find out that what you had to implement was actually not what was spec-ed?
20:28:01 <cdecker> Usual "the spec is probably right, let's fiddle until it matches" I think
20:28:18 <bitconner> the obvious way to do this in english != the obvious way to do it in code lol
20:28:20 <t-bast> haha indeed, tweak our code until we match the test vectors
20:28:21 <ariard> t-bast: I'm checking right now, I think like bitconner said
20:28:56 <ariard> really likely Matt copy-pasted from c-lightning on this
20:29:33 <cdecker> And the rust compiler trusted rusty's code :-)
20:30:10 <t-bast> lol, the secondary spec is c-lightning when the spec isn't clear enough
20:30:30 <t-bast> I think the clarification in rusty's PR is good enough, WDYT?
20:30:34 <ariard> Yes, we're doing it right, i.e enforcing the spirit of the spec
20:31:03 <cdecker> Anyhow, if we all agree that what we actually implemented is described by the change (and the intended characteristics are still met) then I think we should apply
20:31:03 <t-bast> https://imgflip.com/i/42sldk
20:31:17 <michaelfolkson> I don't think the secondary spec being c-lightning is in the spec t-bast :)
20:31:25 <ariard> wrt to this I found a case recently where c-lightning doesn't quote the BOLT like for everything else
20:31:31 <ja> i stumbled upon this a few years ago: https://stackoverflow.com/q/49928131/309483
20:31:49 <t-bast> michaelfolkson: but it looks like it works in practice! ;)
20:32:00 <cdecker> michaelfolkson: if you take the first letter each 13th word of the theme song it spells out that c-lightning is the reference xD
20:32:18 <t-bast> ja: yes, it looks like this was the old thread rusty resurrected and decided to fix :)
20:32:48 <t-bast> shall we apply this PR?
20:33:00 <t-bast> Or do you want more time to check/review?
20:33:45 <t-bast> cdecker: that would be amazing steganography
20:34:22 <cdecker> I think it's safe to accept, if everyone else is already doing this, which seems to be the case
20:34:47 <roasbeef> shows the scheme was a bit too clever...
20:34:54 * roasbeef recalls elkrem with longing
20:35:02 * t-bast waves hello at roasbeef
20:35:27 * cdecker thinks we're trying to be clever too often :-)
20:35:29 <t-bast> We need to fix the spellchecker before applying, let's defer to github
20:35:35 <cdecker> ACK
20:35:47 <t-bast> #action fix spellchecker, wait for ACKs and apply
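(For context, the derivation being clarified is BOLT #3's per-commitment secret generation. A Python sketch, assuming the bit-numbering convention the implementations converged on, where flip(B) toggles bit B%8 of byte B//8:)

```python
import hashlib

def derive_secret(seed: bytes, index: int) -> bytes:
    # BOLT #3 per-commitment secret derivation, sketched:
    # for each of the 48 index bits, from most- to least-significant,
    # if the bit is set, flip the matching bit of the running value
    # and hash it with SHA256.
    p = bytearray(seed)
    for b in range(47, -1, -1):
        if (index >> b) & 1:
            p[b // 8] ^= 1 << (b % 8)  # flip(B), as implemented
            p = bytearray(hashlib.sha256(p).digest())
    return bytes(p)
```

An implementation should be checked against the official BOLT #3 test vectors rather than this sketch, since the bit convention is exactly what the spec and code disagreed on.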
20:36:10 <t-bast> Which long-term thing do you want to discuss?
20:36:42 <ariard> I can talk on mempool-pinning fix, spent a bit of time thinking about the changes needed
20:36:43 <t-bast> I'd love to get feedback from lnd on route blinding vs the HORNET debate at some point ;)
20:37:06 <t-bast> sure, go for it ariard, the floor is yours
20:37:14 <t-bast> #topic mempool-pinning issues
20:37:29 <roasbeef> anchor is more or less the same, we've deployed with lnd 0.10, also super timely imo since fees are pretty high rn, and you need to be able to bump at times to get things in when respecting the cltv delta
20:37:41 <roasbeef> likely the case that cltv deltas should rise based on the current fee climate
20:37:47 <bitconner> t-bast: yes i promise that's coming :)
20:37:54 <ariard> cool, basically we want to avoid second-stage transaction malleability to prevent your counterparty leveraging the mempool for pinning
20:38:07 <t-bast> roasbeef: is it on by default? did you get some interesting user feedback?
20:38:26 <roasbeef> t-bast: it's a flag on start up rn, if both have it set then you'll make anchor chans
20:38:34 <ariard> roasbeef: what do you mean by anchor is more or less the same?
20:38:37 <bitconner> it's also behind a build flag atm
20:38:37 <t-bast> bitconner: I'm craving for some cryptographic fun, too much implementation recently :)
20:38:54 <roasbeef> ariard: I mean no major movement
20:39:32 <ariard> to fix mempool pinning? But you still have the package size limit even if you somehow mitigate the absolute fee thing
20:40:20 <michaelfolkson> Jeremy Rubin had some strong views on the mempool at the Socratic last week
20:40:23 <roasbeef> I mean on the format itself spec wise, pinning is kinda greater than the scheme itself, and there're a few diff problems being solved here
20:40:27 <ariard> wrt to anchor output, I'm actively working on the RL implementation _without_ sighash introduction, have a few misc notes to write on the spec
20:40:29 <michaelfolkson> "The mempool needs a complete rearchitecting and it doesn’t work in the way that anybody expects it to work."
20:41:00 <ariard> roasbeef: yes I think we can move forward with anchor output, just cutting the sighash thing for htlc input-output pairs
20:41:06 <t-bast> michaelfolkson: that looks worrying...
20:41:10 <roasbeef> michaelfolkson: yeh either we morph things to side step issues, or some more fundamental fixes need to be rolled out on the mempool layer
20:41:32 <roasbeef> ariard: why no sighash there? lets you combine them and actually bump the fee of the htlc transaction
20:41:34 <ariard> michaelfolkson: it's a bit more complex, the mempool hasn't been designed for multi-transaction offchain contracts with time requirements
20:41:55 <ariard> roasbeef: because that's the opposite of where we want to go for fixing mempool pinning
20:42:29 <roasbeef> current mempool stuff starts to really creak with any sort of transaction trees
20:42:52 <roasbeef> ariard: missing context
20:43:22 <roasbeef> with the sighash there, and the csv 1, they're forced to make a rbf-able transaction, so it's a race at that point as was covered in that ML thread
20:43:25 <ariard> roasbeef: okay, pinning relies on bip125 rule 3, aka a high absolute fee but low feerate to avoid eviction
20:43:46 <cdecker> Well, we can certainly lobby for longer term bitcoin mempool behavior changes, but that's not a solution for now
20:43:46 <ariard> or building a child branch to reach the package size limit wrt the ancestor and therefore avoid eviction too
20:44:12 <cdecker> We probably need to go down both the mempool changes for the longer term, as well as find a good fix for the interim
20:44:12 <roasbeef> cdecker: yeh def, big win rn is just even being able to modify the fee when you broadcast, given that fees seem to be sitting at a high-ish level rn
20:44:16 <ariard> roasbeef: it solves 1) not 2) https://bitcoinops.org/en/topics/transaction-pinning/
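(The rule-3 economics ariard describes can be made concrete; the numbers below are hypothetical, only the comparison is BIP 125's:)

```python
def rbf_rule3_ok(replacement_fee: int, evicted_fees: list) -> bool:
    # BIP 125 rule 3: a replacement must pay strictly more in
    # absolute fees than everything it evicts, regardless of feerate.
    return replacement_fee > sum(evicted_fees)

# Pin: a large, low-feerate descendant with a high absolute fee.
pin_fee = 100_000 * 2     # 100k vbytes at 2 sat/vbyte = 200,000 sat
# Honest HTLC claim: small and high-feerate, but low absolute fee.
honest_fee = 300 * 50     # 300 vbytes at 50 sat/vbyte = 15,000 sat
# Despite a 25x better feerate, the honest tx cannot replace the pin.
assert not rbf_rule3_ok(honest_fee, [pin_fee])
```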
20:45:22 <cdecker> I'd love to see eltoo's bring-your-own-fees applied to LN, but even eltoo might be vulnerable to tx pinning...
20:45:40 <roasbeef> ariard: it's all super contextual imo still, as w/o miner cooperation, they can still end up just paying a high fee themselves which eventually _does_ confirm, if they can't thread the needle
20:45:45 <ariard> roasbeef: well if we add an anchor output to htlc transactions you can bump via CPFP, assuming your htlc tx has a pre-committed feerate high enough to get into network mempools
20:46:09 <roasbeef> ariard: mhmm at the cost of a bigger htlc transaction, and inability to aggregate them
20:46:29 <roasbeef> which means more fees for htlcs, which are squeezed out even more size wise
20:46:32 <roasbeef> size as in amt
20:46:35 <ariard> roasbeef: doing so via maximum package size may have a low absolute fee
20:47:03 <roasbeef> ariard: depends right? they'd still need a high total fee for the package, in order to have it all be accepted
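(The CPFP bumping under discussion, a child spending an anchor output to lift a stuck parent, is package-feerate arithmetic; a sketch with hypothetical numbers and function names:)

```python
def package_feerate(parent_fee, parent_vsize, child_fee, child_vsize):
    # Miners evaluate parent+child as one package; the child's fee
    # lifts the effective feerate of the stuck parent.
    return (parent_fee + child_fee) / (parent_vsize + child_vsize)

def child_fee_for_target(parent_fee, parent_vsize, child_vsize, target):
    # Fee a CPFP child must pay so the package reaches `target` sat/vbyte.
    return max(0, target * (parent_vsize + child_vsize) - parent_fee)

# A 700-vbyte commitment paying 500 sat, bumped by a 150-vbyte child
# to a 10 sat/vbyte package feerate:
fee = child_fee_for_target(500, 700, 150, 10)
assert package_feerate(500, 700, fee, 150) == 10.0
```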
20:47:08 <ariard> roasbeef: at the cost of bigger htlc transaction, agree until mempool has an API clear enough to lockdown-off confidently
20:47:42 <roasbeef> also we don't really know how symmetric mempools are in the wild, if a large miner is running full rbf then the situation is diff
20:47:42 <ariard> roasbeef: no, you just need to get first into network mempools and that's it, feerate is subsumed as a rule here
20:47:59 <roasbeef> "first" ?
20:48:12 <ariard> roasbeef: yes but I don't think the LN security model should rely on having a broadcast path to a miner being full-rbf
20:48:21 <roasbeef> we don't really know what "first" is here, there's no ordering
20:48:48 <cdecker> Well, if the tx doesn't make it into the mempool we can't bump it using CPFP
20:48:50 <roasbeef> ariard: depends, there're a few problems being solved here, and feels like we're focusing a lot on the tail end when it becomes more murky
20:48:54 <ariard> roasbeef: I broadcast my preimage tx with a bigger child attached to reach the package limit before you broadcast your HTLC-timeout
20:49:17 <roasbeef> ariard: yeh I understand that scenario, but there're ordering assumptions here
20:49:20 <ariard> roasbeef: if you're malicious and don't respect p2p broadcast you will likely be first in the majority of network mempools
20:49:49 <ariard> roasbeef: right, there is an ordering scenario but the malicious party moves first
20:50:16 <ariard> roasbeef: I wouldn't make that much assumptions on miner mempools, it's sadly kinda a black box
20:50:32 <ariard> cdecker: right for this case you need package relay
20:50:33 <roasbeef> you need to have the commitment confirm first, but all I'm getting at is that it's all pretty imprecise
20:51:11 <ariard> roasbeef: the honest party broadcasts their local commitment, it gets confirmed
20:51:29 <roasbeef> i understand the scenarios
20:51:46 <ariard> the attacker broadcasts a preimage tx + a big child, bypassing honest p2p tx-relay rules to get an advantage
20:52:20 <roasbeef> mhmm, and the safety here depends on how long it takes that to confirm
20:52:31 <roasbeef> if I get the pre-image from the mempool, then my incoming htlc is ok
20:52:37 <ariard> by safety you mean probabilities of success for the attack
20:52:50 <roasbeef> if I time out, then I made money on the outgoing
20:53:01 <ariard> roasbeef: yes if I know your full-node, I can throw a conflict in your mempool to blind you
20:53:13 <ariard> especially if I have full malleability on the 2nd-stage tx
20:53:18 <roasbeef> "If I know your full node", quite the assumption
20:54:00 <ariard> roasbeef: I know, but last time I checked there was a significant number of full-nodes using the same IP address as a LN node
20:54:12 <roasbeef> lnd is planning on implementing mempool pre-image watching in our next major release
20:54:37 <ariard> roasbeef: it would make attack harder, not impossible
20:54:47 <roasbeef> yeh I understand, it's an easy mitigation
20:55:01 <roasbeef> it then escalates things even further to possibly req them to collude w/ a miner
20:55:03 <ariard> I do think inter-layers mapping hasn't been studied that much and it's likely far easier than we think
20:55:29 <ariard> roasbeef: no it doesn't add the requirement to collude with a miner, just to find your full-node
20:55:39 <roasbeef> yeh it's all pretty fuzzy in the end imo, but there're a few low hanging fruits for mitigation, and let's not forget the #1 thing which is to actually be able to bump fees
20:55:48 <roasbeef> might be the case that cltv deltas are too low network wide rn
20:55:54 <ariard> and for a big LN node like ACINQ, you may probe by opening a zero-conf channel with them and watch tx propagation on the base layer
20:56:22 <t-bast> we do get preimages from the mempool though :)
20:56:24 <ariard> roasbeef: yes I would recommend to increase cltv deltas a bit, that's an easy mitigation
20:56:40 <ariard> t-bast: let me blind your mempool with a well-crafted conflict :)
20:57:00 <roasbeef> ok but circling back, you're implementing with an anchor output on the htlcs, ariard ?
20:57:09 <t-bast> This brings back something I've been wondering: how hard would it be to bind two Bitcoin nodes together, so that one of them is only used as a "read" source by the second one?
20:57:17 <ariard> roasbeef: the #1 thing is to actually be able to bump fees, right you can RBF the CPFP on anchor
20:57:19 <roasbeef> t-bast: elaborate?
20:57:38 <t-bast> That would allow you to have a secondary Bitcoin node that can't be easily linked to your lightning node and thus eclipsed
20:57:44 <harding> t-bast: that's easy, bitcoind -connect=<ip of listening node>
20:57:49 <roasbeef> ariard: referring to the commitment, as in just being able to land that, but yeh there're other ways to bumps fees for the htlc transactions
20:58:22 <t-bast> harding: yes but if I simply do that, it should be somewhat easy for an eclipse attacker to figure out that this is my secondary node because my tx will show up very quickly on that secondary node, won't it?
20:58:23 <ariard> t-bast: it's doable, just need to add a few RPCs in Core with traffic classes I guess, like open a block-relay-only connection
20:58:38 <roasbeef> t-bast: mhmm, there're a few network configs possible to further isolate a node, or just selectively choose its peers, in the end still need e2e and auth to have full confidence if it isn't all on a private network
20:58:48 <t-bast> I'd like to ensure that the primary node never directly sends its txes to the secondary node, it only reads from it
20:58:55 <ariard> roasbeef: yes agree we should move forward with anchor output on the commitment tx, and defer sighash introduction
20:59:30 <ariard> t-bast: don't broadcast your transaction to this node, they just do block traffic
20:59:36 <roasbeef> ariard: in the end it's the tradeoff of having it there vs not and if ppl feel comfortable w/ the mitigations
21:00:04 <t-bast> ariard: that was a possibility, but if I just get block traffic I can't extract my preimages from the mempool :/
21:00:06 <cdecker> roasbeef: kind of the point, people don't feel comfortable with it
21:00:49 <harding> t-bast: -connect by default only makes that single connection; it doesn't connect to any other peers.  So if A is your normal listening node and on B you do -connect=A, then B will only receive txes and blocks from A.  It sounds to me like you want something like https://github.com/laanwj/bitcoin-submittx or a Tor node.
21:00:57 <ariard> roasbeef: it's better for security to bump commitment txn stuck in the mempool I agree, for mitigations it may not be that many state machine changes depending on lockdown scope
21:01:37 <ariard> cdecker: you're wondering how practical these attacks are?
21:01:55 <roasbeef> batching htlc transactions together or w/ other transactions tho....
21:02:19 <t-bast> harding: thanks for the link, it's still just a random thought right now, I need to make it more precise. Can I send you a few questions directly once I've formalized my ideas?
21:02:31 <ariard> roasbeef: long-term I hope the mempool gets clean enough to leave htlc txn lock-free and batch them together
21:03:22 <cdecker> ariard: not very tbh, they take quite a bit of sophistication, but the fact that we're discussing them seems proof that people think they pose a risk (especially after blogs have picked it up and are reporting LN as being broken...)
21:03:23 <harding> t-bast: of course.  You may also want to ping matt as I've heard many miners have similar protections (their concern is having their listening nodes DoS'd by other pools).
21:03:39 <ariard> t-bast: yes I think you need to come up with a clearer attack scenario you're trying to mitigate, because if you do what harding suggests B is useless for eclipse prevention
21:04:16 <t-bast> harding, ariard: thanks, I'll formalize that and share some thoughts with you and Matt soon-ish
21:04:20 <ariard> cdecker: LN isn't broken, LN is in bootstrap phase :)
21:04:35 * cdecker wonders if we could at least strengthen ourselves by gossiping about channel closes and preimages
21:05:00 <cdecker> It'd only require one node watching the mempool, that raises the alarm for the network
21:05:20 <cdecker> ariard: damn, I always mix up nomenclature, right :-)
21:05:56 <ariard> cdecker: that's clearly an extension of the security model, you now depend on your connection with them, I don't think it's a wrong fix but it needs to be analyzed
21:06:18 <ariard> cdecker: won't work with PTLC introduction
21:06:36 <cdecker> Right, my thinking was more that the threat would likely already be sufficient to deter attackers...
21:06:55 <ariard> btw, the mitigation I was envisioning should be compatible with PTLCs, we will likely need another round-trip
21:07:36 <roasbeef> ariard: round trip for what, outside of the extra interaction of dlog htlcs?
21:07:39 <ariard> cdecker: just blind the mempool of the "preimage-watcher" who has a nice public IP known to let others connect to
21:07:48 <cdecker> ariard: it works for PTLCs, we just get len(route)x the traffic, not optimal but if in the meantime we make progress with the mempool proposals it might just work
21:08:07 <ariard> roasbeef: you need to exchange partial_sigs before committing the new state IIRC?
21:08:19 <cdecker> ariard: that's sort of the point, there just need to be multiple, that can't all be blinded
21:08:25 <roasbeef> ariard: you mean if you make all the htlc paths multi-sig?
21:09:05 <cdecker> Ouch, that'd be painful for high-latency links...
21:09:09 <ariard> roasbeef: right, my point is we may have to do state machine changes now to fix mempool-pinning but we wouldn't have to change them again when we introduce PTLCs
21:09:11 * t-bast thinks that PTLC will require many more roundtrips than today (multi-hop locks is already quite costly), hope we can engineer this to be efficient enough
21:09:19 <ariard> just use new messages
21:09:31 <roasbeef> ehh yeh idk if that'll work even, you need a lot of changes, as you need a new message before CommitSig
21:09:51 <roasbeef> t-bast: depends on which sig scheme we end up using in the end
21:09:56 <ariard> cdecker: why can't they all be blinded if we assume they are all public?
21:09:59 <roasbeef> schnorr version adds the least round trips
21:10:15 <t-bast> roasbeef: yes but it's already one more round-trip than today, isn't it?
21:10:33 <roasbeef> feels like we've been talking about dlog htlcs for years now, with several ways to deploy em (which keep getting better, but there's also the ideal)
21:10:39 <ariard> roasbeef: yes it depends if we lockdown the HTLC-preimage only or also the remote HTLC-timeout txn?
21:10:51 <roasbeef> t-bast: possibly we can piggy back the extra state on other messages? haven't worked thru it too much
21:10:58 <cdecker> ariard: I'm saying that some LN nodes (public and non-public) watch the mempool, and start a broadcast whenever they see something that looks like a closure with HTLCs
21:11:21 <t-bast> roasbeef: that's what I'm hoping for, hopefully we can bundle many features in each roundtrip
21:11:28 <cdecker> Broadcast as in gossip broadcast, staggered and deduplicated, but quick enough to tell the impacted node about the HTLC details
21:11:29 <ariard> cdecker: how does an honest non-watching LN node do peer discovery of these special nodes to connect to?
21:11:33 <roasbeef> ariard: see my message on the mempool about this, if you want to lock down any other htlc path, then before you send a commit to the other party, you already need to have your version of that htlc sig
21:12:01 <cdecker> ariard: they're not connecting to those node, the nodes send out a broadcast, just like they do with node_announcements and channel_announcements
21:12:13 <ariard> roasbeef: yes which means you need update_add_htlc ->, <- update_countersign_htlc, -> commitment_signed
21:12:52 <ariard> cdecker: I see, and we assume all routing nodes, which are the ones at risk, accept all gossip messages
21:12:52 <roasbeef> but zooming out, IMO if an implementation is able to implement _any_ form of anchors, then they should as soon as possible, given the current fee rates, if a ton of htlcs expire, you're likely to lose many of them
21:13:00 <ariard> cdecker: is this a DoS vector ?
21:13:12 <cdecker> ariard: no, it's rate limited by channel closures
21:13:24 <cdecker> And deduplicated at each intermediate hop
21:13:27 <t-bast> cdecker: it's probably quite cheap to implement once we agree on the message, but isn't it also a new way of DoS-ing nodes by sending them many preimages to check against their current payments?
21:13:40 <roasbeef> ariard: not sure that fully works, but yeh you need another message, not sure how that works in a concurrent setting too (that specific message)
21:13:49 <roasbeef> you can't send the sig before you know what the commitment looks like
21:14:11 <t-bast> cdecker: oh you first prove that this is linked to a recent channel_close, something like that?
21:14:18 <ariard> cdecker: well it may work, I can draft the preimage-watching proposal and the htlc lockdown and we compare tradeoffs
21:14:36 <roasbeef> we use the ack mechanism so I know what to sign based on what you've gotten, so it might require another level of synchronization, idk, but the more I think about it the more the possible set of changes swells
21:14:49 <ariard> it should be fine, we already assume A may commit state X and B commit state Y then converge
21:14:49 <cdecker> Yep, we could combine it with an explicit channel_close message to inform nodes that they can remove it from their view at the same time
21:15:07 <cdecker> ariard: sounds very good ^^
21:15:11 <roasbeef> ariard: the devil is in the details ;)
21:15:25 <t-bast> ariard: ACK I'm ready to read these drafts ;)
21:15:31 <ariard> cdecker: network-wise garbage-collecting of preimages, hmmmm let's try it
21:16:02 <cdecker> Might be a good use-case to try out `custommsg` facility with ^^
21:16:06 <ariard> okay will do a draft for both :)
21:16:30 <ariard> (I'm just not that much comfortable delegating my security to some random peer in the wild)
21:16:52 <t-bast> #action ariard to draft preimage-watching and htlc lockdown proposals
21:16:55 <roasbeef> seems to add more blocking either way, since I need to now wait before I can send a sig, and I need a distinct ack for each htlc added, they all need to be done at once, since you need a stable commitment
21:16:57 <ariard> even a swarm of them, that really sounds like the "watchtower-swarm" idea :p
21:16:59 <roasbeef> no_input makes it easier
21:17:10 <roasbeef> since you can send an htlc sig w/o knowing the commitment for it
21:17:14 <cdecker> Totally agreed, hence the reliance on having anyone be honest in the network, not one specific node
21:17:18 <t-bast> let's get the ball rolling again on no_input!
21:17:31 <cdecker> 100%
21:17:32 * roasbeef snaps his fingers
21:17:39 * roasbeef notices nothing happened...
21:17:44 <ariard> roasbeef: likely to add more latency, you were saying schnorr PTLC doesn't have a round-trip?
21:17:45 <roasbeef> 5 more years for no_input
21:17:49 <roasbeef> kek
21:17:50 <t-bast> roasbeef: do your magic, I know you can do better than that
21:17:54 <ariard> like do you have a scheme to showcase
21:17:55 <roasbeef> lolol
21:18:06 <cdecker> Hehe, would love progress on noinput/anyprevout ^^
21:18:20 <roasbeef> ariard: oh idk about that, was just mulling that we could maybe piggy back the state somewhere
21:18:30 <ariard> cdecker: I think people expect to get taproot first, before throwing brainpower at eltoo
21:18:36 <roasbeef> i haven't thought about that stuff too much since it just seems to creep further out into the future
21:19:04 <cdecker> ariard: yeah, that's my impression too, and I don't want to add noise about noinput that might delay schnorr + taproot
21:19:22 <roasbeef> ariard: re blocking above, i was referring to the immediate implications of trying to lock down all htlc spend paths
21:19:22 <ariard> roasbeef: I know, I was just "if we have to do state machine changes just let's be sure they fit with handwavy future features"
21:19:37 <roasbeef> mhmm gotcha ariard
21:19:45 <cdecker> Anyway, I need to drop off
21:20:09 <cdecker> This was a really productive meeting, thanks everybody, and in particular to t-bast for chairing ^^
21:20:18 <ariard> yes, I'm going forward with implementing anchor outputs and will do drafts to have a better opinion on mitigations :)
21:20:24 <cdecker> Hope to see all of you in person some time soon :-)
21:20:29 <t-bast> Agreed, loved the discussion at the end on mempool issues, very enlightening
21:21:09 <t-bast> Thanks everyone, let's keep in touch on github and someday IRL ;)
21:21:14 <t-bast> #endmeeting