20:08:30 <cdecker> #startmeeting
20:08:30 <lightningbot> Meeting started Mon May 11 20:08:30 2020 UTC.  The chair is cdecker. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:08:30 <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
20:08:38 <cdecker> #link https://github.com/lightningnetwork/lightning-rfc/issues/774
20:08:44 <roasbeef> heh, should've tried to open a chan in that block or the halving one
20:08:44 <cdecker> The agenda for today
20:08:58 <cdecker> roasbeef: that would have been epic ^^
20:09:37 <cdecker> #topic Adding a responsible disclosure document for the spec and pointers to the various implementations (#772)
20:09:48 <cdecker> #link https://github.com/lightningnetwork/lightning-rfc/issues/772
20:10:02 <rusty> "faultive" is not a word.  But damn, it should be.
20:10:12 <cdecker> Hehe, agreed
20:10:25 <t-bast> It's our project, we can make our own words
20:10:32 <cdecker> #action Add "faultive" to the merriam-webster dictionary
20:10:48 <t-bast> is ariard around?
20:11:29 <cdecker> ariard took the initiative and has proposed having a document describing (a possible) way to contact the spec maintainers in case of a spec issue, and pointing to implementations in case it only impacts a specific implementation
20:11:36 <cdecker> (my interpretation)
20:11:55 <BlueMatt> ariard is out
20:12:01 <t-bast> I think this makes sense, but what would be the process for such a spec vulnerability?
20:12:01 <BlueMatt> but i think we can discuss it more on the pr
20:12:15 <cdecker> Sounds good :+1:
20:12:15 <roasbeef> one critical addition would be contact info for the various impls, but still a WIP so prob need to hash it out more on the PR itself
20:12:19 <BlueMatt> imo "here's a list of each prominent implementation, please email all of them"
20:12:25 <rusty> Yeah, approve in principle: I think we should all nominate a contact both for cross-impl issues and for direct ones.
20:12:36 <BlueMatt> if we *really* want, we can set up a lightning-security@ private mailing list, but, ehhh
20:12:36 <t-bast> rusty: agreed
20:12:54 <cdecker> Well, we could also have a single address on the @ln.dev domain that fans out to a representative of each team
20:13:59 <cdecker> BlueMatt: a ML would be one example (they never really work in my experience); having an alias that fans out to the teams, which can then triage, seems reasonable though
20:14:17 <BlueMatt> right, thats what i meant
20:14:22 <BlueMatt> (like security@bitcoincore.org does)
20:14:30 <cdecker> Ok, gotcha
20:14:44 <rusty> security@ln.dev could work, be a huge spam magnet of course, but what isn't?
20:15:01 <BlueMatt> security@bitcoincore.org is...surprisingly quiet
20:15:02 <cdecker> So does this seem like something we should pursue? If yes we'd defer to the PR to hammer out details, ok?
20:15:08 <t-bast> BlueMatt: does security@bitcoincore.org have a shared key to encrypt the disclosure and have multiple people be able to read it? or is it unencrypted emails only?
20:15:17 <BlueMatt> the occasional "I got scammed, can you go look up who this address is? kthx" but thats about it
20:15:34 <BlueMatt> its, sadly, mostly unencrypted, but the website *does* list a set of pgp keys to encrypt to
20:15:43 <BlueMatt> anyway, I'm not sure its worth it
20:15:50 <BlueMatt> hopefully spec-level bugs keep going down over time
20:15:58 <BlueMatt> and if its antoine reporting all of them I think he can handle it lol
20:16:02 <rusty> cdecker: ack, let's hammer out on PR.  Move on?
20:16:10 <roasbeef> if it just sends out to a pre set list of addrs it's manageable imo
20:16:18 <t-bast> BlueMatt: sounds like a plan, let's get ariard full time on finding those bugs :D
20:16:22 <cdecker> #agreed everyone to discuss the details on the pull request
20:16:46 <cdecker> #topic Channel announcement features (#769, #770, #773)
20:17:03 <BlueMatt> t-bast: well, are y'all gonna fund him to do it from paris? :p
20:17:17 <cdecker> #769 describes the problem, and #770 + #773 are two potential solutions
20:17:43 <t-bast> BlueMatt: would love to, if he wants to do some Scala on eclair on the side it would be perfect!
20:17:44 * BlueMatt votes for 773
20:18:02 <cdecker> Issue being that there was a bit of a disagreement on 1) whether wumbo should be in the channel_announcement and 2) whether it should be optional or mandatory
20:18:10 <rusty> Yeah, facts on the ground and all that.  773 is the only real option.
20:18:18 <BlueMatt> note that I disagree strongly with cdecker's comment, though
20:18:22 <roasbeef> really just needs to be in the node ann
20:18:23 <t-bast> ACK 773, no code changes to do, perfect laziness
20:18:26 <BlueMatt> relying on on-chain data to figure out the channel size is bad
20:18:28 <roasbeef> max_htlc governs everything route-ability wise
20:18:42 <rusty> The spec was wrong (compulsory was dumb), but putting it in the channel_announce was just showing off.
20:18:54 <roasbeef> well not all node types verify that, max_htlc is the main thing you should look at
20:18:54 <cdecker> BlueMatt: we can add it to the announcement, but having a binary flag doesn't solve it either :-)
20:18:57 <rusty> (Though, as first feature there, it made us write some code).
20:18:58 <BlueMatt> but using htlc_maximum_msat is great
20:19:25 <cdecker> roasbeef, bitconner is #773 ok with you?
20:19:26 <bitconner> yeah doesn't make much sense for the channel ann to advertise it after its already been created
20:19:33 <bitconner> 773 lgtm
20:19:36 <cdecker> Ok
20:19:38 <rusty> And also, even 770 is surprising: if two wumbo-supporting nodes create a channel which *isn't* wumbo, it still gets the feature bit.
20:19:44 <BlueMatt> cdecker: we already did! (htlc_maximum_msat)
20:19:51 <cdecker> #action cdecker to merge #773, and close #770
20:20:06 <bitconner> unless you set max_htlc to max_uint_64 :)
20:20:18 <cdecker> BlueMatt: that doesn't really have to be the channel capacity, that is just an upper limit on the HTLC size, which can be smaller
20:20:32 <BlueMatt> cdecker: true, but its also all you ever need to know for routing
20:20:48 <t-bast> exactly
20:21:07 <cdecker> Well, if you want to start being clever, you can bias against smaller channels since they have a higher chance of being depleted
20:21:25 <cdecker> Or if you want to do MPP the maximum HTLC doesn't tell you whether you can send another or not
20:21:29 <roasbeef> yeh can pre-filter if one wishes
20:21:35 <t-bast> But you can be clever-er and figure out clever people will do that and those small channels will thus be full
20:21:43 <roasbeef> cdecker: that's in the chan ann too iirc
20:21:53 <cdecker> Anywho, let's continue on the merry tour through the spec ^^
20:21:54 <BlueMatt> heh, using mpp to get around maximum_htlc seems superrrrr antisocial
20:22:06 <roasbeef> lol that's a given tho
20:22:17 <roasbeef> the link itself has the max outstanding value
20:22:23 <cdecker> Well, but it isn't prohibited, and you can't really do much about it anyway (especially once we get PTLCs)
20:22:37 <BlueMatt> yea, totally. not disagreeing, just noting
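A minimal sketch of the routing point above: for pathfinding you only need htlc_maximum_msat from the channel_update, not the on-chain capacity. The ChannelUpdate shape and helper below are illustrative, not any implementation's API.

```python
from dataclasses import dataclass

@dataclass
class ChannelUpdate:
    short_channel_id: str
    htlc_maximum_msat: int
    disabled: bool = False

def usable_for(amount_msat: int, updates: list) -> list:
    """Pre-filter candidate hops: a channel whose advertised maximum HTLC
    is below the payment amount can never carry it, whatever its actual
    on-chain capacity is."""
    return [u for u in updates
            if not u.disabled and u.htlc_maximum_msat >= amount_msat]

updates = [ChannelUpdate("630000x1x0", htlc_maximum_msat=1_000_000_000),
           ChannelUpdate("630000x2x1", htlc_maximum_msat=50_000_000)]
print(usable_for(100_000_000, updates))  # keeps only the first channel
```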
20:22:53 <cdecker> #topic Clarify Bolt 7 intro #760
20:23:36 <cdecker> Sounds to me like a good clarification
20:24:17 <cdecker> Did anyone see a blocker in this one?
20:24:42 <rusty> Sure.  The order was because you need to announce a channel before you can announce a node, but I can see how that's confusing.
20:24:49 <rusty> Ack from me
20:24:59 <t-bast> ACK #760
20:25:26 <cdecker> BlueMatt, roasbeef, bitconner any objections to apply this?
20:25:27 <t-bast> Agreed the channel / node ordering was a bit confusing, we can consider the first channel ever to be the exception, not the rule
20:25:31 <roasbeef> would file under like the typo/formatting rule
20:25:46 <roasbeef> hmm well node discovery can't happen w/o channel discovery
20:25:51 <roasbeef> since you don't keep a node unless it has channels
20:26:06 <cdecker> Yep, was tempted to apply it as a spelling thing, but it's a bit bigger than a typo, so wanted to make sure ^^
20:26:24 <BlueMatt> eh, i think its fine.
20:26:44 <BlueMatt> (doesn't need discussion, i dont think, but in between topics, figured I'd note it'd be nice to get some reviews on 758)
20:26:48 <cdecker> I think keeping the ordering this way is ok, we're describing what happens, not how it happens in all its minutiae
20:27:20 <cdecker> #action cdecker to apply #760
20:27:45 <bitconner> 760 lgtm
20:27:45 <roasbeef> BlueMatt: so joost is planning on moving things over to json for those test vectors, to make em easier to consume/generate
20:27:54 <t-bast> Agreed that we can cover #758 as well, eclair can't comment much since we still don't support static_remotekey (because we already had a deterministic derivation that's equivalent), but we'll have it soon
20:27:56 <roasbeef> there's also a q of if all the old ones should be kept around as well for commitments
20:28:05 <cdecker> Oh, I see #758 needs some cross-verification, can we put it on next time's agenda?
20:28:34 <rusty> #action Rusty to test #758
20:28:51 <cdecker> +1 on moving these to JSON btw
20:29:48 <cdecker> #action all implementations to verify the new test vectors in #758
20:29:54 <cdecker> Does that sound ok BlueMatt?
20:31:01 <cdecker> Any last comments for #760 / #758 before we proceed?
20:31:17 <cdecker> #topic Bolt 11 test cases #736
20:31:24 <BlueMatt> roasbeef: moving to json would be great! but letting them go stale while "they're about to be rewritten" is also not ok
20:31:25 <joostjgr> yes indeed, i am planning to write down the anchor test vectors in json
20:31:28 <BlueMatt> cdecker: yep, thats all I wanted
20:31:41 <joostjgr> still looking for the original remote private key to produce signatures...
20:31:49 <cdecker> Great, so #736 is more test vectors
20:32:15 <t-bast> I think this one only needs a rebase, conner and I validated them for eclair/lnd IIRC
20:32:49 <cdecker> Sounds good, so we're missing an OK from rust-lightning and that's it I think
20:33:29 <cdecker> rusty: would you do the honors of rebasing or shall I?
20:33:37 <joostjgr> rusty: do you still have that private key that you used to generate test vectors? i can also do a new one
20:33:51 <rusty> cdecker: will do.
20:34:01 <cdecker> #action rusty to rebase and merge #736
20:34:03 <rusty> joostjgr: huh... which ones?
20:34:17 <cdecker> Making good progress everyone ^^
20:34:18 <joostjgr> the remote private key for the commit tx test vectors in bolt 03
20:34:36 <joostjgr> the local priv key is there, but not the remote one. i want to generate new remote sigs for anchors
20:35:01 <rusty> joostjgr: hmm, that's a bad omission.  Let me check the usual candidates...
20:35:36 <joostjgr> also useful to test sig generation in our test suite
20:35:41 <cdecker> #topic Network pruning #767
20:35:43 <bitconner> the privkey isn't H(rusty)?
20:35:58 <cdecker> #link https://github.com/lightningnetwork/lightning-rfc/pull/767
20:36:17 <joostjgr> haha, okay, so we are supposed to leave hidden messages :)
20:36:20 <cdecker> bitconner: it's H(H(rusty | 1)) :-)
20:36:41 <joostjgr> ok thanks
20:36:42 <sstone> joostjgr: I think they used to be in comments that you see when you display the "raw" .md file
20:37:11 <cdecker> So #767 is the biggest airplane in Boeing's lineup... ahem, I mean it wants to clarify the pruning semantics due to lack of updates
20:37:24 <bitconner> cdecker: a solid KDF indeed lol
20:38:03 <bitconner> ah yes, so since the original pr, i've run some more numbers and threw them in a gist
20:38:08 <cdecker> So I think "latest update" was definitely confusing, but it shouldn't be the oldest one either
20:38:10 <bitconner> see the link in the PR body
20:38:22 <joostjgr> sstone: interesting, see them now INTERNAL: "remote_privkey: 8deba327a7cc6d638ab0eb025770400a6184afcba6713c210d8d10e199ff2fda01"
20:38:24 <rusty> joostjgr: looks like they got lost.  See the original commit fb5e8667bbd02f086dc9fb16038a9f3d4434d241 ?
20:38:39 <bitconner> i mean at any time you only have two updates for the channel, the proposal is to use the older of the two
20:38:55 <bitconner> as niftynei said, it could be more clear as the older of the most recent channel updates for each node
20:39:20 <cdecker> But, that'd mean that I can force a channel I have with you to become pruned despite you wanting to forward payments, right?
20:39:30 <bitconner> the main thing that sticks out is that 26.24% of the channels have at least one end older than 2 weeks
20:39:38 <t-bast> bitconner: did you check the channels that should have been pruned by that?
20:39:49 <cdecker> I still think that as long as one side sends updates, and the channel isn't closed on-chain, we should not prune it
20:39:55 <t-bast> It ended up pruning some very active channels from BlueWallet in my case
20:40:14 <bitconner> t-bast: yes to some extent, but i don't think it matters much
20:40:21 <t-bast> Either a bug on the BlueWallet node (in which case I need to re-run the simulation and check others) or another issue
20:40:31 <bitconner> if those nodes are falling behind that points to some other bug that shouldn't be happening
20:41:10 <bitconner> but those bugs can be fixed
20:41:11 <t-bast> but then we should investigate those bugs :)
20:41:12 <cdecker> For example c-lightning will not send updates if there is not enough capacity to forward anything, which might cause us to get pruned...
20:41:26 <t-bast> I think it's an opportunity to find and fix some gossip bugs
20:41:33 <bitconner> either way, the current heuristic allows buggy/malicious nodes to pollute your routing table
20:41:35 <niftynei> this seems like a dumb question, but what's the trigger for a node to send out an updated channel_update msg?
20:41:42 <cdecker> Being silent is not a sign of misbehavior imho, but rather being respectful of everyone's bandwidth
20:41:53 <rusty> cdecker: really?  I don't think this is true?
20:42:00 <bitconner> the proposal is meant to eliminate that by using a local heuristic, rather than relying on the good will/correctness of other peers
20:42:01 <rusty> (re: c-lightning)
20:42:04 <roasbeef> cdecker: anything being zero here?
20:42:15 <cdecker> Well, we'll defer the first channel update if we are the fundee for example
20:42:30 * niftynei goes to read the spec
20:42:38 <rusty> cdecker: I didn't think we implemented that because it was too much of a giveaway.
20:42:45 <roasbeef> niftynei: yeh I guess "outdated" should never happen, in that the time stamp should be increasing
20:42:48 <bitconner> cdecker: why wouldn't it? also one update every two weeks is essentially silent wrt bandwidth
20:42:50 <cdecker> Oh, I thought we had
20:42:52 <roasbeef> (if i'm following)
20:43:30 <bitconner> the recency of your channel update proves liveness of the channel peers, and rn we are keeping channels where only one side proves liveness
20:43:33 <t-bast> right now I've collected metrics on channel_update frequencies, let me dig up the numbers
20:43:37 <rusty> niftynei: implementation dependent.  We have a 13-day timer which refreshes everything if necessary.  We also update lazily: if a channel is disabled (i.e. peer disconnected) and we try to route through it, we'll produce a channel update.
20:43:55 <bitconner> the channels don't necessarily have to be deleted, in lnd they would move into the zombie state and could be resurrected if a new update comes in later
20:44:07 <cdecker> So let's quickly recap: if your channel peer is dead, are you sending updates for those channels? (hint: you shouldn't)
20:44:16 <BlueMatt> we should at least add a note that nodes SHOULD renegotiate the channel_update every two weeks
20:44:19 <bitconner> then why do you have the channel?
20:44:19 <BlueMatt> which I think is missing
20:44:22 <cdecker> And if your channel is active are you sending keepalive updates?
20:44:34 <t-bast> here are the numbers: 50th percentile is to refresh channel_update every 24 hours
20:44:47 <rusty> I do worry we're seeing weird gossip propagation issues.  Some may be production bugs, but some may be actual prop issues.  (Cue rusty to dig up the minisketch proposal again)
20:44:49 <t-bast> 90th percentile is a bit more than 24 hours
20:45:05 <roasbeef> cdecker: does disable count as an update? ;)
20:45:19 <roasbeef> also depends what period of inactivity counts as "dead"
20:45:27 <t-bast> and only the 99th percentile is 3 days, so clearly all nodes are aggressively sending channel_updates (probably too aggressively)
20:45:32 <cdecker> roasbeef: I wouldn't count it as a keepalive update, so it should be allowed
20:46:15 <cdecker> What we do IIRC is disable slowly (if a payment tries to go through a channel with a dead peer), but enable quickly (as soon as the peer reconnects we enable)
20:46:20 <bitconner> okay, so as far as next steps (i can run some more numbers if people think of better heuristics), what would people like to see?
20:46:22 <BlueMatt> also, this needs updating in channel_update processing either way
20:46:28 <t-bast> note that my numbers are only for channel_updates that have a timestamp updated, so given these numbers I'm guessing that lnd by default re-emits a channel_update every 24h, doesn't it?
20:46:30 <BlueMatt> it says "if the fields below timestamp are equal" SHOULD ignore this message
20:46:44 <BlueMatt> obviously if we expect keepalives we MUST NOT ignore those messages
20:46:47 <BlueMatt> otherwise we have propagation issues
20:46:53 <rusty> cdecker: yes.  But we also have a 13-day refresh timer which refreshes everything (even if disabled)
20:47:04 <BlueMatt> it seems strange to me to change the spec to introduce propagation issues for nodes that faithfully implement today's protocol
20:47:10 <BlueMatt> we'd at least need some kind of waiting period
20:47:31 <bitconner> wdym by introduce propagation issues?
20:47:44 <BlueMatt> the spec *today* says you SHOULD ignore keepalive updates
20:47:51 <BlueMatt> and not relay them, and not accept them
20:48:06 <roasbeef> ....where exactly?
20:48:08 <BlueMatt> but pruning old entries means that you'd prune
20:48:11 <BlueMatt> err, sorry
20:48:18 <BlueMatt> - if the fields below `timestamp` are equal:
20:48:19 <BlueMatt> - SHOULD ignore this message
20:48:23 <roasbeef> keep alive would have a newer timestamp
20:48:29 <roasbeef> or you just +1 the prior one
20:48:39 <BlueMatt> you SHOULD ignore any messages where the fields other than timestamp are the same
20:48:42 <BlueMatt> according to the spec
20:48:59 <cdecker> When did we add that? Seems weird tbh
20:49:02 <bitconner> iirc that wording was changed recently, i'd confirm that's what the original wording said
20:49:08 <BlueMatt> thats been in there for a long time
20:49:13 <BlueMatt> but, yea, someone should check
20:49:21 <rusty> BlueMatt: true, we actually allow redundant updates every 7 days.
20:49:41 <roasbeef> above that, it says only apply that if the timestamps are equal
20:49:49 <roasbeef> the keep alive would have a new timestamp
20:50:08 <roasbeef> so spec is fine here
20:50:20 <cdecker> Seems that requirement was added in https://github.com/lightningnetwork/lightning-rfc/pull/621
20:50:23 <BlueMatt> oh, oops, indeed, i misread
20:50:25 <BlueMatt> sorry about that
20:50:28 <BlueMatt> thanks roasbeef
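A short sketch of the receiver rule as settled here: the SHOULD-ignore clause only applies when timestamps are equal, so a keepalive (same fields, newer timestamp) is accepted and relayed. The field names below are hypothetical, assuming a parsed channel_update.

```python
def should_ignore(new, last) -> bool:
    """Receiver-side handling per the reading confirmed above; `fields`
    stands for everything below `timestamp` in the channel_update."""
    if new.timestamp < last.timestamp:
        return True                           # stale update, drop it
    if new.timestamp == last.timestamp:
        return new.fields == last.fields      # redundant duplicate
    return False  # newer timestamp: accept and relay, keepalives included
```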
20:50:50 <BlueMatt> back to my original note, we should at least add a requirement in the sender section there that says you have to generate keepalives
20:50:55 <bitconner> so keepalive is legal, and the spec recommends keeping not-alive channels
20:51:16 <cdecker> Anyhow, seems we can go on discussing the pros and cons of using the earlier rather than the later of the two latest updates, shall we move that to the PR?
20:51:30 <BlueMatt> bitconner: where does it do that? I only see reference in the non-normative section
20:51:40 <BlueMatt> the normative section needs to be updated here
20:52:06 <bitconner> by using the latest of the two updates, you only require one side to be alive
20:52:09 <t-bast> cdecker: ACK
20:52:21 <bitconner> but yes, we can continue on the PR
20:52:38 <cdecker> bitconner: yes, that's my point, as long as one node cares for a channel we shouldn't throw it away
20:52:56 <cdecker> But to be honest it's not a firm opinion on this
20:53:03 <bitconner> also we have some time before this would see any action, until the propagation issues are investigated/resolved
20:53:12 <niftynei> pruning it means you stop propagating it; if you keep it in your gossip store you could have a different internal heuristic that throws them away for routing table operations, no?
20:53:17 <t-bast> It would be good to clarify if you observe the same statistics as I do from my node; if we agree on these numbers we can very easily divide the bandwidth used by a factor of at least 10 with less aggressive channel_update keep-alive
20:53:20 <cdecker> Ok, let's move the discussion on the PR (I'll need to look at Conner's numbers a bit more I think)
20:53:37 <cdecker> #action everyone to discuss the pros and cons of using one update over the other on the PR
20:54:05 <niftynei> ack
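For reference, a compact sketch of the two pruning heuristics being compared on #767, assuming ts_a/ts_b are the timestamps of the most recent channel_update seen from each side (illustrative names only):

```python
TWO_WEEKS = 14 * 24 * 3600  # seconds

def prune_today(now: int, ts_a: int, ts_b: int) -> bool:
    """Current behaviour: keep the channel while *either* side has
    refreshed recently -- one live side is enough."""
    return max(ts_a, ts_b) < now - TWO_WEEKS

def prune_767(now: int, ts_a: int, ts_b: int) -> bool:
    """#767 proposal: use the older of the two most recent updates, so
    *both* sides must prove liveness (lnd would zombie, not delete)."""
    return min(ts_a, ts_b) < now - TWO_WEEKS
```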
20:54:17 <cdecker> That's the PR based topics
20:54:19 <bitconner> niftynei: yes that is also true, that's closer to what lnd does w/ its zombie index. my assumption is that what's in the spec is what's recommended for your routing table tho
20:54:24 <cdecker> On to the discussion topics ^^
20:55:01 <cdecker> BlueMatt, joostjgr or t-bast, do we have any volunteers to present their discussion topic first?
20:55:04 <rusty> Oh, I actually have an update on the protocol tests.  Kinda.
20:55:18 <cdecker> Great, let's do that then ^^
20:55:31 <t-bast> Great, let's do protocol tests for a change :)
20:55:37 <cdecker> #topic Protocol testing framework
20:55:45 <t-bast> Trampoline and Route Blinding are simply waiting for some love on the PR itself
20:56:03 <t-bast> I've done a lot of work to add diagrams and stuff to make it easy to understand
20:56:13 <rusty> OK, so the current tests use this DSL to define them.  That's proven painful, and keeps needing new features (e.g. when we started doing optimized signatures)
20:56:20 <cdecker> ack t-bast, I'll give trampoline a look this week
20:56:24 * roasbeef whispers json into rusty's ear
20:56:39 <rusty> roasbeef: "no".
20:56:42 <roasbeef> kek
20:56:48 * t-bast thinks roasbeef is becoming a javascript dev or something
20:56:53 <roasbeef> lolol
20:56:58 <niftynei> hehehe
20:57:00 <rusty> So I've stepped back, and the new tests are straight python, with various libraries to support making messages, txs etc.
20:57:57 <rusty> So you can say things like "make a commitment tx with keys foo and bar".  then "add htlc x".  "sig should be valid for key K on the tx".
20:58:01 <cdecker> If you need additional inspiration btw I think starlark is a rather nice python DSL (it's used for the bazel build tool)
20:58:35 <rusty> Which means I have a lot more stuff to (re) implement in python, but the result is we use standard pytest which does tests in parallel etc and is easy to cut & paste.
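To give a feel for the shape rusty describes, a hypothetical pytest-style test; `runner` and its methods are invented for illustration and are not pyln.proto's actual API.

```python
def test_commitment_signature(runner):
    # "make a commitment tx with keys foo and bar"
    commit = runner.make_commitment_tx(local_key="foo", remote_key="bar")
    # "add htlc x"
    commit.add_htlc(amount_msat=10_000, payment_hash=b"\x00" * 32)
    # "sig should be valid for key K on the tx"
    sig = runner.expect_msg("commitment_signed").signature
    assert commit.sig_valid(sig, key="K")
```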
20:58:39 <niftynei> that sounds nice rusty
20:58:59 <niftynei> speaking as perhaps your sole guinea pig on the old format :P
20:59:02 <t-bast> that sounds nice, what's the interface each implementation needs to add to plug into the python DSL?
20:59:28 <cdecker> It should eventually be talking over the P2P proto with the implementation, right?
21:00:01 <niftynei> and a lot faster to get things done in -- i ended up writing a python generator for the tests that i wanted for some of the early dual-funded protocol tests
21:00:14 <t-bast> It probably needs some hooks into the node under test to get into specific situations, doesn't it?
21:00:29 <niftynei> ('generator' here being 'script that prints tests' not an actual python generator thing)
21:00:33 <rusty> t-bast: currently it requires you to implement several methods, like "generate a block" and "open a channel" in a separate python file, and run that.  I'm going to change it, but the idea of one python module for each impl remains.
21:00:39 <roasbeef> rusty: is this much diff than decker's old tests?
21:00:48 <cdecker> Probably, but a small shim around the RPC and a way to mock the bitcoin backend should suffice
21:00:50 <rusty> cdecker: yes, it currently uses a C shim, but am reimplementing in native python.
21:00:56 <roasbeef> or this is like wire level message matching?
21:01:05 <niftynei> it's wire level message matching
21:01:18 <rusty> roasbeef: very.  It is wire-level (with more smarts to match sigs etc).
21:01:29 <cdecker> roasbeef: yes, with my integration tests we were running full implementations against each other, giving us many many combinations, while this just tests a single node for its behaviour
21:01:31 <rusty> You create a DAG and it runs through all the paths
21:01:34 <roasbeef> how w/o inserting decryption keys?
21:01:41 <roasbeef> nodes need to be modified to run against it?
21:01:59 <roasbeef> ah ok single node
21:02:15 <rusty> roasbeef: current implementation requires nasty invasive impl hacks for keys.  Hoping to loosen that so you provide the test runner with your priv keys, rather than vice versa.
21:02:17 <niftynei> no, c-lightning is not modified. there's a runner that wraps RPC commands to tie into the required DSL actions
21:02:35 <rusty> niftynei: no, c-lightning has a heap of dev-only hooks to override keys for this!
21:02:41 <niftynei> oh. right.
21:03:11 <rusty> '--dev-force-tmp-channel-id=0000000000000000000000000000000000000000000000000000000000000000',
21:03:12 <rusty> '--dev-force-privkey=0000000000000000000000000000000000000000000000000000000000000001',
21:03:12 <rusty> '--dev-force-bip32-seed=0000000000000000000000000000000000000000000000000000000000000001',
21:03:12 <rusty> '--dev-force-channel-secrets=0000000000000000000000000000000000000000000000000000000000000010/0000000000000000000000000000000000000000000000000000000000000011/0000000000000000000000000000000000000000000000000000000000000012/0000000000000000000000000000000000000000000000000000000000000013/0000000000000000000000000000000000000000000000000000000000000014/FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',
21:03:44 <rusty> There'll still be some of that I imagine, but ideally much less.
21:04:06 <niftynei> right so the python impl could be more dynamic wrt keys, e.g. injectable
21:04:28 <cdecker> Depends on how much of the logic we want to reimplement in python I guess
21:04:44 <rusty> So, status: the current tests are really ugly to write (you have to manually figure out what txs look like, etc), and the DSL is... um... a bit special.  So I'm considering that the "one I throw away".
21:04:47 <niftynei> i hear electrum has a python implementation ;)
21:05:02 <BlueMatt> we have python bindings
21:05:08 <BlueMatt> (or, will, Soon (tm) )
21:05:13 <cdecker> Right, both good options ^^
21:05:35 <rusty> I'm currently reworking the entire Python stack, currently doing the message generation (direct from the spec) in a nicer, generic way.
21:05:40 <BlueMatt> (but, we *do* have C bindings)
21:06:11 <cdecker> However if we can get away with some mock data rather than reimplementing it'd probably be easier to debug as well
21:06:23 <rusty> It's going under cdecker's pyln Python namespace.  In particular, much is under pyln.proto right now (e.g. pyln.proto.bolts).
21:07:01 <cdecker> Speaking of which I wanted to add a couple of things to that myself
21:07:05 <rusty> Once I have converted the existing tests, I'll let you know we've reached MVP.
21:07:28 <cdecker> Great, thanks rusty for the update
21:07:42 <joostjgr> Wouldnt it be nice to have multiple threads to discuss things in parallel? Just start all agenda items at once
21:07:42 <rusty> But lots of recent interop problems would be avoided by this existing, hence my effort.
21:07:55 <rusty> EOF :)
21:07:58 <t-bast> great that definitely sounds useful
21:08:14 <joostjgr> There is one thing I want to bring up about anchors. The outstanding discussion on whether to encumber all spend paths of the htlc outputs with CSV 1 rather than all paths except the revocation path. Isn't it an advantage if the other party or a watchtower can act on an unconfirmed breach? Anchor it down using the revocation path.
21:08:14 <roasbeef> def
21:08:18 <cdecker> joostjgr: I'm having loads of difficulties following as it is, multiple discussions at once seems counterproductive to me
21:08:26 <jkczyz> yeah, having seen the old grammar, looking forward to how it turns out
21:08:30 <joostjgr> at once, but separated in channels
21:08:51 <cdecker> #topic  Anchor outputs #688 (@joostjager)
21:09:03 <cdecker> Last topic for today, I need to get some sleep soon ^^
21:09:16 <BlueMatt> joostjgr: last I looked at it there were strange attack cases without it
21:09:21 <joostjgr> I rebased the pr and addressed some of the smaller comments. Now working on test vectors
21:09:24 <BlueMatt> I described one that makes implementation triply tricky
21:09:24 <roasbeef> joostjgr: yeh they can insta act which is nice, and also start to siphon off funds from the cheating party into miner's fees to bump up the conf
21:09:26 <cdecker> Ah that makes more sense joostjgr
21:09:51 <roasbeef> also past revocation land, all bets are off more or less
21:09:59 <BlueMatt> given we're gonna change the htlc txn format later for anchor-but-secure-htlcs-this-time, it also isnt critical to make it as flexible as possible, but getting it secure I'd prefer
21:10:18 <BlueMatt> given we've seen that mempool relay policy has a tendency to bite us in the ass, lets not risk it
21:10:52 <roasbeef> the scenario described there doesn't seem like much of an attack, the party that's able to take the revoked funds has an incentive to do it as quickly as possible
21:11:18 <roasbeef> swift justice
21:11:29 <joostjgr> bluematt: yes i saw your scenario where the breaching party is punished extra by forcing them to rbf the fee upward
21:11:29 <bitconner> if the revocation path has CSV 1 then a tower can't bump the fee for a client?
21:11:40 <roasbeef> bitconner: that too
21:11:51 <BlueMatt> roasbeef: as we've learned over the last few weeks...."incentive to do it as quickly as possible" isn't always as simple as it seems :p
21:12:06 <roasbeef> well this is a specific context, you're playing for all the chips here
21:12:07 <bitconner> then the commitment txn needs to carry full fees to be tower-able
21:12:28 <BlueMatt> wait, why cant the tower use your anchor?
21:12:36 <roasbeef> joostjgr: the true punishment is them just losing all their funds in the end tho
21:12:39 <BlueMatt> its there...well, to allow CPFP
21:12:48 <BlueMatt> if the tower cant use the anchor, that seems like a bug in the anchor construction
21:13:35 <roasbeef> BlueMatt: it needs the multi-sig keys for that, before the block period or w/e it is
21:13:41 <bitconner> the tower can spend using presigned txns, but what are the inputs? now nodes need to keep a reserve of UTXOs just for the tower
21:14:09 <roasbeef> yeh could be presigned when sent over
21:14:37 <BlueMatt> right, so you're saying the tower should be able to burn your htlc value for you to create cpfp
21:14:42 <roasbeef> but if this is really "the breaching party is punished extra by forcing them to rbf the fee upward", that's insignificant as the breaching party is about to lose all their money, seems minor if Alice can toy w/ them for w/e reason
21:15:13 <BlueMatt> roasbeef: thats the Obvious Issue I see, lets please not take "lack of obvious issue" as a reason to risk it
21:15:31 <roasbeef> risk what?
21:15:40 <roasbeef> as long as the revocation path is fine, not sure what the issue is
21:15:45 <joostjgr> there is the obvious or not so obvious issue on the one hand and watchtower implications on the other hand
21:16:34 <BlueMatt> so, if you're suggesting the watchtower just burn your in-flight htlc value to create cpfp txn, i dont really see how that solves it either, though
21:16:40 <BlueMatt> cause you'd need to always maintain an htlc balance
21:17:06 <bitconner> not sure where htlc balances are coming in here
21:17:06 <BlueMatt> in the hopefully-common-case you wouldn't have an htlc and you're back to square one
21:17:25 <BlueMatt> bitconner: maybe I'm misunderstanding, but let me restate the way I've understood this:
21:17:25 <bitconner> ideally the tower can bump with arbitrary utxos
21:17:45 <BlueMatt> the desire to have htlcs not have a CSV 1 is so that the watchtower can bump the commitment transaction by burning your htlc value.
21:17:53 <BlueMatt> what did i misunderstand?
21:19:33 <bitconner> i think we're only referring to the revocation path, not htlcs as a whole
21:19:39 <BlueMatt> right
21:20:26 <BlueMatt> so if I didnt misunderstand anything, can someone restate the motivation for revocation path htlcs being non-csv'd?
21:21:17 <cdecker> Would it be ok if I ended the meeting notes? Feel free to continue discussing, but I really need to get some sleep ;-)
21:21:23 <roasbeef> the tower can revoke in the same block as the commitment is mined, they can also start to shift funds from the party breaching to miner's fees
21:21:49 <joostjgr> yes sure, the discussion has been started and can continue on the pr. thanks
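A minimal sketch (hypothetical helper, not any implementation's logic) of the trade-off discussed above: a CSV of 1 on the revocation path would stop a tower from acting while the breach commitment is still unconfirmed.

```python
CSV_DELAY = 1

def spendable_with_unconfirmed_parent(output_csv: int) -> bool:
    """A relative locktime of 1 block (BIP 68) means the spending tx is
    only valid once the commitment has one confirmation, so it cannot be
    packaged with, or CPFP, a still-unconfirmed breach commitment."""
    return output_csv == 0

# CSV 1 on every path, revocation included: the tower must wait a block.
assert not spendable_with_unconfirmed_parent(CSV_DELAY)
# Revocation path exempt from CSV 1: the tower can sweep immediately,
# anchoring the breach down and shifting the cheater's funds to miner fees.
assert spendable_with_unconfirmed_parent(0)
```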
21:21:59 <cdecker> #endmeeting