12019-11-19T00:03:34  *** Chris_Stewart_5 has joined ##taproot-bip-review
  22019-11-19T00:20:32  <sipa> pinheadmz: nice, that'll be very useful to generate actual test vectors to include in the bip
  32019-11-19T00:31:08  *** daniel__ has quit IRC
  42019-11-19T00:55:29  *** Chris_Stewart_5 has quit IRC
  52019-11-19T01:02:13  *** Chris_Stewart_5 has joined ##taproot-bip-review
  62019-11-19T01:18:26  *** Chris_Stewart_5 has quit IRC
  72019-11-19T01:57:05  *** rottensox has quit IRC
  82019-11-19T01:57:24  <evoskuil> The Abstract of bip-143 states: "This proposal defines a new transaction digest algorithm for signature verification in version 0 witness program" and the Specification goes on to remove FindAndDelete (for v0 scripts). bip-taproot is v1 and silent on FindAndDelete. I assume the expectation is that FindAndDelete should never rear its ugly head in a versioned script, but if so this should be made explicit, including whether
  92019-11-19T02:02:09  <aj> evoskuil: cut off after "including whether"
 102019-11-19T02:02:59  <evoskuil> ...  this can be applied retroactively to reserved versions.
 112019-11-19T02:03:39  <evoskuil> @aj: thanks, not sure what happened, it echoed fine.
 122019-11-19T02:05:03  <sipa> bip-taproot specifies a new signature hashing scheme, the one from BIP143 isn't used anymore in taproot spends
 132019-11-19T02:05:12  <sipa> perhaps that can be made explicit
 142019-11-19T02:11:42  <sipa> evoskuil: ah, perhaps this clarifies things better: the bip-taproot/tapscript sighash scheme doesn't include a "scriptCode" anymore (where the FindAndDelete used to be applied to)
 152019-11-19T02:12:13  <evoskuil> Actually I think it's probably sufficient as is. It would be nice to explicitly exclude it from all versioned scripts in the older code, but not necessary.
 162019-11-19T02:12:28  <sipa> instead there is the actual scriptPubKey, and for tapscript, the hash of the leaf that is chosen leaf
 172019-11-19T02:12:32  <sipa> -leaf
 182019-11-19T02:14:00  <evoskuil> Yeah, this is just a consequence of working through the spec in stages. It would have become obvious later.
 192019-11-19T02:14:30  <sipa> Ah, you mean a clarification in BIP143 like "Note that this scheme is not used in v1 witnesses", if bip-taproot ends up being adopted?
 202019-11-19T02:18:07  <evoskuil> No, I don't think any further clarification is required. It's just that when working through the places where version is used I hit the FindAndDelete condition, and instead of explicitly checking for v0 I wanted to check for "versioned", but this code won't execute for taproot.
 212019-11-19T02:18:20  <sipa> Right.
 222019-11-19T02:28:17  *** HighOnBtc has quit IRC
 232019-11-19T02:54:42  *** ZmnSCPxj_ has quit IRC
 242019-11-19T03:53:45  *** ZmnSCPxj has joined ##taproot-bip-review
 252019-11-19T03:55:00  <ZmnSCPxj> elichai2, instagibbs: regarding MuSig-in-MuSig, this is used in my Nodelets proposal: https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-October/002236.html
 262019-11-19T03:55:32  <ZmnSCPxj> Obviously, Schnorr-based channels will use MuSig between the channel participants
 272019-11-19T03:55:57  <ZmnSCPxj> Nodelets considers the possibility that one of the nodes in the Lightning network is actually composed of multiple participants that have to be online 100% of the time
 282019-11-19T03:56:40  <ZmnSCPxj> It seems MuSig-in-MuSig resolves down to a 2-phase MuSig, which has a security proof that was shown to be flawed
 292019-11-19T03:57:28  <ZmnSCPxj> I am in communication with MuSig authors and Tim Ruffing regarding composable MuSig.
 302019-11-19T03:58:06  <ZmnSCPxj> It seems to me that there should not be any issue if a participant in any MuSig-using protocol is itself an aggregate.
 312019-11-19T03:58:34  <ZmnSCPxj> One may observe that typical sentiences are primarily composed of multiple nonsentient agents, for example.
 322019-11-19T03:58:47  <ZmnSCPxj> (unless access to your design is not available to humans?)
 332019-11-19T04:00:07  <ZmnSCPxj> Regarding composable MuSig, it seems possible to use ElGamal commitments in the first phase of 3-phase MuSig.
 342019-11-19T04:01:16  <ZmnSCPxj> But part of ElGamal commitment is showing a point q * G in the commitment, which leads me to wonder why it would not be subject to the same flaw as 2-phase MuSig
 352019-11-19T04:03:27  <ZmnSCPxj> In any case, I am now checking through the logs kept by aj on erisian.com.au, so please respond at your leisure
 362019-11-19T04:03:50  *** ZmnSCPxj has quit IRC
 372019-11-19T05:03:13  *** hebasto has quit IRC
 382019-11-19T05:07:16  *** hebasto has joined ##taproot-bip-review
 392019-11-19T07:01:17  *** tecnovert has quit IRC
 402019-11-19T07:14:04  *** tecnovert has joined ##taproot-bip-review
 412019-11-19T07:52:27  *** daniel__ has joined ##taproot-bip-review
 422019-11-19T09:27:02  *** jonatack has quit IRC
 432019-11-19T10:02:36  *** jonatack has joined ##taproot-bip-review
 442019-11-19T10:11:38  *** jonatack has quit IRC
 452019-11-19T10:13:07  *** jonatack has joined ##taproot-bip-review
 462019-11-19T10:51:20  *** sipa has quit IRC
 472019-11-19T10:52:40  <waxwing> i finished this over the weekend, about what wagner's algorithm/attack is, it occurs to me that the few people that may be interested are likely to be here :) https://joinmarket.me/blog/blog/avoiding-wagnerian-tragedies/
 482019-11-19T10:56:54  *** sipa has joined ##taproot-bip-review
 492019-11-19T11:15:34  *** ZmnSCPxj_ has joined ##taproot-bip-review
 502019-11-19T11:28:27  *** Chris_Stewart_5 has joined ##taproot-bip-review
 512019-11-19T11:28:55  <jonatack> waxwing: ty :)
 522019-11-19T11:35:31  *** jonatack has quit IRC
 532019-11-19T11:39:01  *** andytoshi has joined ##taproot-bip-review
 542019-11-19T13:01:25  *** HighOnBtc has joined ##taproot-bip-review
 552019-11-19T14:15:35  *** pyskell has joined ##taproot-bip-review
 562019-11-19T14:19:08  *** rottensox has joined ##taproot-bip-review
 572019-11-19T15:13:38  *** andytoshi has quit IRC
 582019-11-19T15:59:54  *** davterra has joined ##taproot-bip-review
 592019-11-19T16:22:37  *** Guest61_ has joined ##taproot-bip-review
 602019-11-19T16:25:17  *** Guest61 has quit IRC
 612019-11-19T16:33:06  *** Guest61 has joined ##taproot-bip-review
 622019-11-19T16:36:30  *** Guest61_ has quit IRC
 632019-11-19T16:59:57  <waxwing> " A product of two numbers is a square when either both or none of the factors are squares." i must be dumb i don't get it .. a product of two not square factors isn't always a square (i.e the "none" case), like 3x7 is not a square even though both factors are not squares.
 642019-11-19T17:10:06  <waxwing> i think it's a statement correct modulo p, isn't that it?
 652019-11-19T17:11:18  <waxwing> oh and distinguishes when p=3 mod 4 vs 1 mod 4 according to wiki, because of the difference of whether -1 is a square or not.
 662019-11-19T17:14:31  <sipa> waxwing: yeah, modulo a prime this is correct
 672019-11-19T17:14:47  <sipa> (it's even true when -1 is a square)
 682019-11-19T17:16:53  <waxwing> right that specific sentence, true
 692019-11-19T17:20:04  <sipa> put otherwise: this follows from the legendre symbol being a completely multiplicative function
 702019-11-19T17:20:27  <sipa> maybe we should add "modulo a prime" to that sentence?
 712019-11-19T17:21:45  <sipa> i think this became confused when we changed the "quadratic residue" terminology for "square"
 722019-11-19T17:21:48  <waxwing> i think so yes.
 732019-11-19T17:22:05  <waxwing> yes quadratic residue is particularly unfortunate, but hard to argue with Gauss :)
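[Editor's note: a minimal Python sketch of the claim discussed above, assuming only the standard library. is_square uses Euler's criterion over the secp256k1 field prime, and the assert checks the "both or neither" rule, which holds for any nonzero residues modulo a prime; the sample pairs are arbitrary.]

    # Euler's criterion: for prime p and a not divisible by p,
    # a is a square mod p iff a^((p-1)/2) == 1 (mod p).
    p = 2**256 - 2**32 - 977  # the secp256k1 field prime

    def is_square(a):
        return pow(a, (p - 1) // 2, p) == 1

    # The Legendre symbol is completely multiplicative, so a product of two
    # nonzero elements is a square mod p iff both or neither factor is one.
    # 3*7 = 21 is not a square over the integers, but modulo p the rule holds:
    for a, b in [(3, 7), (2, 5), (10, 21)]:
        assert is_square(a * b % p) == (is_square(a) == is_square(b))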
 742019-11-19T17:22:38  <sipa> it's all Square's fault
 752019-11-19T17:23:07  <waxwing> lol. reminds me i always say Gaussian not normal because let's face it the former is just cool sounding.
 762019-11-19T17:24:47  <sipa> "standard normal" is even worse
 772019-11-19T17:24:54  <sipa> how boring a name can you come up with
 782019-11-19T17:25:16  <waxwing> i note you correctly say this is a new standard but it might be worth adding a footnote about how it's also widely used in slightly different forms, like what is it EDDSA is basically a Schnorr variant.
 792019-11-19T17:26:04  <waxwing> probably there are others but meh it's only a suggestion no need to go overboard. i just have occasionally encountered people thinking this is a new untried piece of cryptography, which is pretty far from accurate really.
 802019-11-19T17:26:08  <sipa> yup, worth citing
 812019-11-19T17:26:33  <sipa> even the hash(R || P || m) argument order matches eddsa
 822019-11-19T17:27:00  <waxwing> oh, right, presumably we will now bikeshed that for the next hour or two :)
 832019-11-19T17:27:27  <sipa> i prebikeshedded it with real_or_random already
 842019-11-19T17:28:07  <waxwing> yeah i saw that. kinda interesting discussion, maybe i'll link it here
 852019-11-19T17:29:58  <waxwing> cancel that i have no idea where i read it, now.
 862019-11-19T17:39:44  *** andrewtoth has joined ##taproot-bip-review
 872019-11-19T18:15:36  <aj> waxwing: https://github.com/sipa/bips/pull/62 ?
 882019-11-19T18:18:25  <sipa> that's it
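[Editor's note: a hedged sketch of the challenge computation with the R-first argument order sipa mentions above. The tag string "BIPSchnorr" and the 32-byte x-only point serialization are assumptions about the draft at the time and should be checked against bip-schnorr itself; the function name is invented for the example.]

    import hashlib

    n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # curve order

    def tagged_hash(tag, msg):
        tag_hash = hashlib.sha256(tag.encode()).digest()
        return hashlib.sha256(tag_hash + tag_hash + msg).digest()

    def challenge(r_x32, pubkey_x32, msg32, tag="BIPSchnorr"):
        # e = int(H_tag(R.x || P.x || m)) mod n -- nonce point first, then the
        # public key, then the message, matching the EdDSA argument order.
        return int.from_bytes(tagged_hash(tag, r_x32 + pubkey_x32 + msg32), "big") % n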
 892019-11-19T18:50:17  <moneyball> 10 minutes until Q&A!
 902019-11-19T18:50:51  <sipa> i'll be 5-10 minutes late
 912019-11-19T18:52:51  *** pyskell has quit IRC
 922019-11-19T18:58:42  *** Moller40 has joined ##taproot-bip-review
 932019-11-19T19:00:22  <aj> #startmeeting
 942019-11-19T19:00:22  <lightningbot> Meeting started Tue Nov 19 19:00:22 2019 UTC.  The chair is aj. Information about MeetBot at http://wiki.debian.org/MeetBot.
 952019-11-19T19:00:22  <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
 962019-11-19T19:00:35  <kabaum> Hi
 972019-11-19T19:00:38  <amiti> hi
 982019-11-19T19:00:38  <moneyball> hi
 992019-11-19T19:00:39  <nickler> hi
1002019-11-19T19:00:48  <nothingmuch> hi
1012019-11-19T19:00:50  <aj> hi
1022019-11-19T19:01:37  <kabaum> It's only me from group 5 here today, so my questions are not filtered through our pre-Q&A-meeting.
1032019-11-19T19:01:43  <fanquake> hi
1042019-11-19T19:02:03  <kabaum> Q 1/3: Why isn't implicit Y a reduction in security? I don't get the explanation given in last paragraph of "Implicit Y coordinates" or in footnote 6. Is it so that given a privkey q for a pubkey P with explicit Y, one can easily calculate the privkey p' for the pubkey -P?
1052019-11-19T19:03:33  <sipa> kanzure: https://medium.com/blockstream/reducing-bitcoin-transaction-sizes-with-x-only-pubkeys-f86476af05d7
1062019-11-19T19:03:38  <sipa> eh, kabaum ^
1072019-11-19T19:03:47  <fjahr> hi
1082019-11-19T19:04:33  <sipa> kabaum: also https://bitcoin.stackexchange.com/a/90120/208
1092019-11-19T19:04:35  <nickler> kabaum: but basically yes. Inverting the public key is just computing the negation of a 256 bit number (the y coordinate) so that's very easy.
1102019-11-19T19:04:40  <kanzure> hi
1112019-11-19T19:05:03  *** pglazman has joined ##taproot-bip-review
1122019-11-19T19:05:07  <sipa> nickler: you keep bringing that up as argument, but i don't think that's relevant
1132019-11-19T19:05:23  <sipa> even if it cost an EC multiplication to do so, x-only would still not be a reduction in security
1142019-11-19T19:06:25  *** willcl_ark has quit IRC
1152019-11-19T19:07:07  <nickler> still I want to point out that a third party can easily do that, no access to the secret key is required
1162019-11-19T19:07:47  <sipa> right
1172019-11-19T19:08:34  <kabaum> sipa: Got it. The stackexchange answer helped.
1182019-11-19T19:08:47  <nothingmuch> Q: both e and R style signatures seem to be rationalizable by appealing to orphan rates and network latency for the former, and validation cpu cost for the latter, but it's not clear how to compare these. what is the argument for favoring R and batch verification, especially on a longer time horizon? is it empirical?
1192019-11-19T19:09:05  <sipa> kabaum: another way to look at it: the fact that you can negate a public key (which negates the corresponding private key) is indeed going to help an attacker... but it *always* does that, whether public keys are x-only or full x/y
1202019-11-19T19:09:40  <kabaum> sipa: thanks.
1212019-11-19T19:10:17  <sipa> kabaum: FWIW, secp256k1 has another "efficiently computable endomorphism" (which is what this negation gives you), namely multiplication with a specific constant lambda... which also always helps an attacker
1222019-11-19T19:10:34  <nickler> nothingmuch: the argument is that batch verification is a significant speedup. It's empirical in the sense that we've implemented and measured it (that's the graph in the bip)
1232019-11-19T19:12:45  <sipa> kabaum: the expected number of EC additions an attacker needs to do with a DLP breaking algorithm is proportional to sqrt(p/m) where p is the largest prime factor of the group order, and m is the number of efficiently computable endomorphisms (m=6 for secp256k1, but the efficient negation is part of it)
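[Editor's note: an illustrative sketch (not the BIP reference code) of the point sipa and nickler make above: negating a secp256k1 point, i.e. moving from private key d to n-d, only flips the y coordinate, so an x-only key withholds nothing an attacker could not already compute. point_add/point_mul are textbook affine formulas written out for this example; the curve constants are the standard secp256k1 parameters.]

    p = 2**256 - 2**32 - 977
    n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
         0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def point_add(P1, P2):
        # Affine addition; None is the point at infinity.
        if P1 is None: return P2
        if P2 is None: return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                  # P + (-P)
        if P1 == P2:
            lam = 3 * x1 * x1 * pow(2 * y1, p - 2, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def point_mul(P, k):
        # Plain double-and-add scalar multiplication.
        R = None
        for i in reversed(range(256)):
            R = point_add(R, R)
            if (k >> i) & 1:
                R = point_add(R, P)
        return R

    d = 12345                                            # any private key
    x1, y1 = point_mul(G, d)
    x2, y2 = point_mul(G, n - d)                         # the negated private key
    assert x1 == x2 and y2 == p - y1                     # same x, mirrored y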
1242019-11-19T19:14:21  <sipa> nothingmuch: the "be made as small as 16 bytes" should probably get a qualifier "in some security models"
1252019-11-19T19:17:38  <kabaum> nickler: My next Q is related to nickler's answer.
1262019-11-19T19:17:39  <kabaum> Q 2/3: Is the graph "Batch signature verification in libsecp256k1" made using the bip-schnorr scheme or using some other/generic Schnorr scheme?
1272019-11-19T19:19:03  <nickler> kabaum: it's the bip-schnorr implementation in libsecp
1282019-11-19T19:19:12  *** willcl_ark has joined ##taproot-bip-review
1292019-11-19T19:19:34  <nickler> this one https://github.com/bitcoin-core/secp256k1/pull/558
1302019-11-19T19:20:05  <aj> nickler: well, it's from a pre-x-only version of that PR, right?
1312019-11-19T19:20:18  <sipa> here is a graph that's based on more lowlevel benchmarking: http://bitcoin.sipa.be/speedup-batch.png
1322019-11-19T19:20:38  <sipa> which only takes the multi-exp into account and not the other overhead of verifying a signature
1332019-11-19T19:20:41  <nickler> aj: right, but it shouldn't make a visible difference
1342019-11-19T19:21:15  <nickler> but I should try to reproduce the numbers just to be sure :)
1352019-11-19T19:22:11  <nickler> perhaps s/I/someone to reduce systematic bias
1362019-11-19T19:22:12  <kabaum> nickler: that'd be interesting.
1372019-11-19T19:22:30  <kabaum> Q 3/3: Why is batch verification of u signatures more efficient than verifying them separately? I counted the number of point multiplications and additions. For individual verification I get 2u mult and u add. For batch I get 2u mult and 2u-1 add. Also the amount of hashing seems similar in batch and separate verification. What am I missing?
1382019-11-19T19:22:56  <sipa> kabaum: Strauss' algorithm
1392019-11-19T19:23:05  <sipa> (and for larger numbers, Pippenger's algorithm)
1402019-11-19T19:23:39  <sipa> it turns out that x*A + y*B can actually be computed more efficiently in one go than computing (x*A) and (y*B) separately and adding them up
1412019-11-19T19:24:34  *** Moller40_ has joined ##taproot-bip-review
1422019-11-19T19:25:34  <sipa> kabaum: very broadly, the number of EC additions you need to do to implement n EC multiplications grows with n/log(n), rather than n
1432019-11-19T19:25:44  <sipa> (if you only care about their sum)
1442019-11-19T19:27:14  <nickler> kabaum: https://cdn.preterhuman.net/texts/cryptography/Hankerson,%20Menezes,%20Vanstone.%20Guide%20to%20elliptic%20curve%20cryptography%20(Springer,%202004)(ISBN%20038795273X)(332s)_CsCr_.pdf section 3.3.3
1452019-11-19T19:27:26  <sipa> i can try to give an intuition for that here too, if there are no other questions
1462019-11-19T19:27:51  <nickler> "Guide to Elliptic Curve Cryptography" by Hankerson, Menezes, Vanstone
1472019-11-19T19:27:51  *** Moller40 has quit IRC
1482019-11-19T19:28:13  <kabaum> sipa: I'd love that.
1492019-11-19T19:29:16  <kabaum> nickler: Strauss isn't mentioned in that document.
1502019-11-19T19:29:16  <nothingmuch> i jumped the gun with my question, but it's still outstanding, but an intuition for batch verification seems relevant to it anyway
1512019-11-19T19:29:23  <sipa> imagine you're trying to compute 19*P
1522019-11-19T19:30:05  <nickler> kabaum: section 3.3.3
1532019-11-19T19:30:15  <sipa> one way to do that is to compute 2P = P+P, 4P = 2P+2P, 8P = 4P+4P, 16P = 8P+8P, and then sum P+2P+16P
1542019-11-19T19:30:24  <nickler> they don't call it strauss but shamir's trick
1552019-11-19T19:30:42  <sipa> right?
1562019-11-19T19:30:57  *** Moller40_ has quit IRC
1572019-11-19T19:31:00  <kabaum> yes!
1582019-11-19T19:31:54  <waxwing> key prefixing means we lose the pubkey recovery property, right?
1592019-11-19T19:32:00  <sipa> waxwing: correct
1602019-11-19T19:32:14  <waxwing> how do we feel about that.
1612019-11-19T19:32:37  <sipa> i think key aggregation is more important than pubkey recovery
1622019-11-19T19:32:38  <waxwing> oh i see it's in footnote 3
1632019-11-19T19:33:28  <sipa> kabaum: so you're doing 4 doublings (because the largest power of two is 16P), and then 1 addition for every 1-bit in the binary representation of 19
1642019-11-19T19:34:24  <sipa> but say you want to compute 19P + 23Q, you're just going to do twice the amount of work now (two times 4 doublings, and an addition for every 1 bit in 19 and 23)
1652019-11-19T19:34:40  <sipa> agree?
1662019-11-19T19:34:53  <kabaum> Agree!
1672019-11-19T19:35:03  <sipa> so that's not a good approach
1682019-11-19T19:35:09  <aj> waxwing: you can do pubkey recovery with partial sigs with the same construction though; claim the complete key is P+-P+G, so you're going for sG=R+H(R,G,m)*G and calculate a partial signature s1,R1   s1*G = R + H(R,G,m)*P  and P is recoverable
1692019-11-19T19:36:06  *** Moller40 has joined ##taproot-bip-review
1702019-11-19T19:36:15  <sipa> kabaum: an alternative is working backwards: write 19 as 11001 in binary, and compute it as dbl(dbl(dbl(dbl(P)))+P)+P)
1712019-11-19T19:36:25  <sipa> this is the exact same number of additions/doublings
1722019-11-19T19:37:16  <sipa> but instead of precomputing all power-of-two's times P first, and then adding them up, you use an "accumulator" that you add P into every time a "1" bit is encountered, and you double for every bit
1732019-11-19T19:37:45  <waxwing> aj, hmm interesting, that's confusing, so you're talking about like a multisig scenario? but would that work with musig?
1742019-11-19T19:37:46  <sipa> this is actually the same algorithm as https://en.wikipedia.org/wiki/Exponentiation_by_squaring, except with addition/doubling instead of multiply/square
1752019-11-19T19:38:10  <kabaum> sipa: did you write 19 backwards in binary?
1762019-11-19T19:38:10  <sipa> kabaum, nothingmuch: does this make sense?
1772019-11-19T19:38:23  <sipa> no
1782019-11-19T19:38:25  <aj> waxwing: just abusing the api needed for partial sigs to allow for a protocol that wants pubkey recovery
1792019-11-19T19:38:44  <sipa> kabaum: maybe it's easier to see if i write it as dbl(dbl(dbl(dbl(0+P)))+P)+P)
1802019-11-19T19:38:47  <waxwing> aj, i guess it depends on what situation you want to do pubkey recovery .. the one i was made aware of was people working in a constrained environment wanting to transfer without being explicit about a key.
1812019-11-19T19:38:53  <waxwing> constrained as in something like mesh network.
1822019-11-19T19:39:40  <sipa> kabaum: oh, i'm confusing myself!
1832019-11-19T19:40:05  <sipa> it's dbl(dbl(dbl(dbl(0+P)+P)))+P
1842019-11-19T19:40:17  <sipa> the outer bits (11) become the inner part
1852019-11-19T19:40:38  <sipa> maybe IRC isn't the best medium for this
1862019-11-19T19:40:40  <nothingmuch> sipa: roughly makes sense to me but i doubt i will fully comprehend this here, i'm slow on the uptake and probably need to work it out on paper
1872019-11-19T19:40:58  <sipa> nothingmuch: i'm just giving background, i haven't explained it yet :)
1882019-11-19T19:41:10  <waxwing> there is a trailing ) on the last bullet point of "Default Signing"
1892019-11-19T19:41:10  <kabaum> Oh, it's 0x19, got that. Now I'm way behind you.... Catching up.
1902019-11-19T19:41:19  <sipa> kabaum: no, 19 = 16 + 8 + 1
1912019-11-19T19:41:27  <sipa> eh, 16 + 2 + 1
1922019-11-19T19:41:46  <sipa> i need a whiteboard :)
1932019-11-19T19:42:19  <waxwing> cancel that there is not a trailing )  . doh.
1942019-11-19T19:44:00  <sipa> kabaum: the number 19 in binary is 10011, so we're computing 10011 * P
1952019-11-19T19:44:07  <sipa> start with 0P = 0
1962019-11-19T19:44:25  <sipa> you see a 1 bit (the one in front), so you add P, and get 1P
1972019-11-19T19:44:38  <sipa> you double, and get 2P
1982019-11-19T19:44:39  <nickler> aj: fwiw the libsecp-zkp musig api wouldn't let you do that. Also, not committing to the pubkey allows an attacker to tweak the signature to be valid for another bip32 derived key
1992019-11-19T19:44:43  <sipa> you see a 0 bit, so don't do anything
2002019-11-19T19:44:51  <sipa> you double, and get 4P
2012019-11-19T19:44:56  <sipa> you see a 0 bit, so don't do anything
2022019-11-19T19:44:56  <sipa> you double, and get 8P
2032019-11-19T19:45:07  <sipa> you see a 1 bit, so you add P, and get 9P
2042019-11-19T19:45:11  <sipa> you double, and get 18P
2052019-11-19T19:45:14  <sipa> you see a 1 bit, so you add P, and get 19P
2062019-11-19T19:45:29  <sipa> done.
2072019-11-19T19:45:48  *** rottensox has quit IRC
2082019-11-19T19:45:55  <sipa> does this make sense?
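[Editor's note: the walkthrough above as a small Python sketch, with plain integers under addition standing in for EC points so the result is easy to check. For 19 = 10011 it performs 4 doublings and one addition per 1 bit, matching the correct nested form dbl(dbl(dbl(dbl(0+P)))+P)+P given earlier in the discussion. double_and_add and its counters are names invented for this example.]

    def double_and_add(k, P, zero, add, dbl):
        # Left-to-right double-and-add: double for every bit of k (MSB first),
        # add P whenever the bit is 1.
        acc, doublings, additions = zero, 0, 0
        for bit in bin(k)[2:]:
            if acc != zero:
                acc, doublings = dbl(acc), doublings + 1
            if bit == '1':
                acc, additions = add(acc, P), additions + 1
        return acc, doublings, additions

    P = 1000                                     # any "point"; integers add like points do
    result, dbls, adds = double_and_add(19, P, zero=0,
                                        add=lambda a, b: a + b, dbl=lambda a: 2 * a)
    assert (result, dbls, adds) == (19 * P, 4, 3)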
2092019-11-19T19:46:10  <waxwing> nickler, yeah i haven't figured it out, as you have, but makes sense that that is not compatible with musig, as it's kind of a related key attack thing.
2102019-11-19T19:46:26  <aj> nickler: that means the api only does musig, not simple pubkey addition (with knowledge of discrete log) right?
2112019-11-19T19:47:01  <waxwing> aj, so you're basically arguing for people to still use base bip schnorr but have their own protocols (not in bitcoin itself, per se) that could do that kind of stuff?
2122019-11-19T19:47:30  <nothingmuch> sipa: so then to do both, it's just adding Q or P based on whether their respective bits are set into the same accumulator, and doubling once?
2132019-11-19T19:47:31  <nickler> well in our musig api you have the concept of a session which keeps your nonce, others nonces and your secret key. If you call partial sig it will use mostly session information
2142019-11-19T19:47:40  <sipa> nothingmuch: exactly!
2152019-11-19T19:47:44  <sipa> and what did you gain?
2162019-11-19T19:47:49  <aj> waxwing: that it's conceivable yeah, not necessarily a good idea :)
2172019-11-19T19:47:54  <waxwing> :)
2182019-11-19T19:50:01  <kabaum> sipa: Didn't you just describe normal "adding and doubling" method?
2192019-11-19T19:50:20  <sipa> kabaum: yes indeed
2202019-11-19T19:50:27  <kabaum> ok, then I follow.
2212019-11-19T19:51:27  <sipa> so the next step is what nothingmuch said: if you want to compute say 19P + 25Q, where 19 is 10011 in binary, and 25 is 11001, you can merge the two computations
2222019-11-19T19:51:45  <sipa> you start with 0P+0Q = 0
2232019-11-19T19:51:53  <sipa> you see a 1 bit in both, so you add P and Q, and get 1P+1Q
2242019-11-19T19:52:03  <sipa> you double, and get 2P+2Q
2252019-11-19T19:52:32  <sipa> you see a 1 bit in 25, so you add Q, and get 2P+3Q
2262019-11-19T19:52:48  <sipa> you double, and get 4P+6Q
2272019-11-19T19:52:55  <sipa> you see 0 bits in both and do nothing
2282019-11-19T19:53:00  <sipa> you double, and get 8P+12Q
2292019-11-19T19:53:28  <sipa> you see 1 bit in 19, so you add P, and get 9P+12Q
2302019-11-19T19:53:35  <sipa> you double, and get 18P+24Q
2312019-11-19T19:53:47  <sipa> you see a 1 bit in both, you add P+Q, and get 18P+25Q
2322019-11-19T19:53:52  <sipa> *19P+25Q
2332019-11-19T19:54:02  <sipa> and in doing so, you've only doubled 4 times, not 8 times
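[Editor's note: the interleaved version of the previous sketch (Strauss, a.k.a. Shamir's trick), again with integers standing in for points. It shows the doublings being shared between the two terms: 4 doublings instead of 8, at the cost of one addition per set bit of either scalar. shamir_trick is a name invented for this example.]

    def shamir_trick(k1, P, k2, Q):
        # One shared doubling per bit position, plus an addition of P and/or Q
        # whenever the corresponding bit of k1/k2 is set.
        acc, doublings, additions = 0, 0, 0      # integers stand in for points again
        for i in reversed(range(max(k1.bit_length(), k2.bit_length()))):
            if acc:
                acc, doublings = acc + acc, doublings + 1
            if (k1 >> i) & 1:
                acc, additions = acc + P, additions + 1
            if (k2 >> i) & 1:
                acc, additions = acc + Q, additions + 1
        return acc, doublings, additions

    P, Q = 1000, 3000
    result, dbls, adds = shamir_trick(19, P, 25, Q)
    assert result == 19 * P + 25 * Q
    assert dbls == 4                             # shared, instead of 4 + 4 done separately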
2342019-11-19T19:56:53  <waxwing> i see you specify deterministic nonces but a very much simpler form than rfc6979 .. perhaps mention the latter in footnote 10?
2352019-11-19T19:57:23  <waxwing> debatable since it's not like rfc6979 was actually part of a bitcoin standard/bip before iirc
2362019-11-19T19:57:54  <sipa> we all like 289264 invocations of the SHA256 compression function to compute a nonce
2372019-11-19T19:57:59  *** Moller40_ has joined ##taproot-bip-review
2382019-11-19T19:58:06  <waxwing> also that bias being small is cool but maybe it would be nice to cite something. i remember 1 bit biases in nonces have been enough in theory before (lattice stuff)
2392019-11-19T19:58:27  <sipa> waxwing: they are
2402019-11-19T19:58:30  <waxwing> 6979 was kinda hard. but i mean ... Pornin is the man.
2412019-11-19T19:59:37  <sipa> it's a 0.0000000000000000000000000000000000000058 bit bias here
2422019-11-19T19:59:49  <waxwing> ah right yeah, thought i might have made that error :)
2432019-11-19T19:59:54  <sipa> i think we have a comment somewhere that the bias is unobservable
2442019-11-19T19:59:56  <waxwing> wait was it really 300K? lol
2452019-11-19T20:00:00  <kabaum> sipa: I got it! Thanks! You really made it simple.
2462019-11-19T20:00:08  <sipa> waxwing: no, more like 15
2472019-11-19T20:00:19  <sipa> waxwing: still, it's actually a nontrivial portion of the signing time
2482019-11-19T20:00:21  <waxwing> yeah you have that comment in 10 it's fine. and no need to comment rfc6979 i think.
2492019-11-19T20:00:40  <waxwing> i see. makes sense. i know there are some loops you can end up in but they're vanishingly unlikely.
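[Editor's note: a hedged sketch of the single-tagged-hash nonce derivation being contrasted with RFC6979 above. The tag string "BIPSchnorrDerive" and the exact (secret key || message) input ordering are assumptions about the draft and should be checked against the BIP text; derive_nonce is a name invented for this example.]

    import hashlib

    n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def tagged_hash(tag, msg):
        tag_hash = hashlib.sha256(tag.encode()).digest()
        return hashlib.sha256(tag_hash + tag_hash + msg).digest()

    def derive_nonce(seckey32, msg32, tag="BIPSchnorrDerive"):
        # One tagged-hash call over (secret key || message), reduced mod n.
        # RFC6979 instead instantiates an HMAC-SHA256-based DRBG with retry
        # loops, hence the extra compression-function invocations joked about
        # above; the reduction mod n is the source of the (negligible) bias.
        k = int.from_bytes(tagged_hash(tag, seckey32 + msg32), "big") % n
        if k == 0:
            raise ValueError("unusable nonce, fail")
        return k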
2502019-11-19T20:01:38  *** Moller40 has quit IRC
2512019-11-19T20:03:03  <nothingmuch> sipa: i'd like to reiterate my question about e vs. R based signatures, and the rationale for preferring batch verification over compact signatures, and add to it
2522019-11-19T20:03:24  <waxwing> i think you could argue you don't need either the batch verification nor the optimisation sections in the BIP.
2532019-11-19T20:03:51  <waxwing> albeit it helps a lot to understand design decisions.
2542019-11-19T20:03:53  <nothingmuch> namely, is it fair to say that batch verification is necessarily serial?
2552019-11-19T20:04:01  <sipa> nothingmuch: what does that mean?
2562019-11-19T20:05:16  <nothingmuch> sipa: can't be computed in parallel
2572019-11-19T20:05:25  <sipa> i don't know what that means
2582019-11-19T20:05:40  <waxwing> i also don't understand, surely the whole point of batching is to do stuff in parallel
2592019-11-19T20:05:54  <nothingmuch> i mean given a batch, that the algorithm to do the whole batch is not concurrent
2602019-11-19T20:06:00  <nothingmuch> and can't benefit from multiple cores
2612019-11-19T20:06:14  <sipa> nothingmuch: ah, in theory it could, but we don't have an implementation for that
2622019-11-19T20:06:28  <sipa> however, you can split up the batch in 4 batches, and verify each on one core easily
2632019-11-19T20:06:58  <waxwing> that loses some of the scaling, but then if you had a parallelisable algo, it might too, i guess
2642019-11-19T20:07:06  <nothingmuch> i'm trying to better understand the compute vs. network latency tradeoff, the benefit from batching is obvious to me, but i'm not sure how it holds up against bandwidth cost, especially since witness data is discounted
2652019-11-19T20:07:48  <nothingmuch> or is the difference in security model the main rationale for preferring R over e, with batching being an extra benefit?
2662019-11-19T20:08:30  <nothingmuch> (without witness discount i can see that batch verification should be strictly better for orphan rates, but slightly reduced tx throughput)
2672019-11-19T20:09:04  <waxwing> pretty sure it's batching, the 'e' version is usually what's proved in security proofs isn't it.
2682019-11-19T20:09:29  <waxwing> oh but shortened .. ok.
2692019-11-19T20:11:53  <sipa> so using a 128-bit e value works in some security models for Schnorr, but not all
2702019-11-19T20:12:03  <sipa> (depending on which other assumptions on the group/hash are made)
2712019-11-19T20:12:58  <sipa> nothingmuch: as for bandwidth, i think the relevant metrics are (a) worst case bandwidth usage and (b) worst case CPU per byte usage
2722019-11-19T20:13:24  <sipa> and worst case bandwidth isn't affected by any of these proposals - it's close to 4M of data per block
2732019-11-19T20:13:42  <sipa> while batch validation significantly improves CPU per vbyte
2742019-11-19T20:14:47  <nothingmuch> that makes sense
2752019-11-19T20:14:59  <sipa> specifically, i believe the ROM/DL/forkinglemma proof for Schnorr indeed works with 128-bit hashes... but the generic group + collision resistant hash proof needs 256 bit hashes
2762019-11-19T20:16:57  <nothingmuch> we had another question in group 16 yesterday, which i attempted to answer but wanted to verify - is the reason for H(tag)||H(tag) being the prefix so that the compression function can already be applied given that the block size is double the image size?
2772019-11-19T20:17:45  <sipa> yes, that's the reason for having a 64-byte prefix
2782019-11-19T20:17:57  <sipa> though any other 64-byte prefix could be picked
2792019-11-19T20:18:06  <sipa> the reason for doubling the hash of the tag is given in bip-taproot i think
2802019-11-19T20:20:18  <aj> second paragraph of Specification in bip-taproot
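[Editor's note: the tagged-hash construction under discussion, as a short Python sketch; the comment spells out the midstate-caching rationale for the 64-byte prefix. The example tag string at the end is arbitrary.]

    import hashlib

    def tagged_hash(tag, msg):
        # SHA256( SHA256(tag) || SHA256(tag) || msg ).  The two 32-byte tag
        # hashes fill exactly one 64-byte SHA256 block, so an implementation
        # can run the compression function once per tag, cache that midstate,
        # and only hash msg on top of it for every subsequent call.
        tag_hash = hashlib.sha256(tag.encode()).digest()
        return hashlib.sha256(tag_hash + tag_hash + msg).digest()

    digest = tagged_hash("ExampleTag", b"message bytes")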
2812019-11-19T20:20:32  <aj> any other questions or shall we call it?
2822019-11-19T20:20:47  <kabaum> Thank you all so much for your time today! My two suggestions for bip-schnorr are: 1) Specify the specific schnorr scheme used in the graph. 2) Add a footnote mentioning Strauss' algorithm or Shamir's trick as the reason for the efficiency gains of batch verification.
2832019-11-19T20:21:40  <waxwing> i think all of that including batch algo itself is kinda out of scope. but i'm conflicted because it's actually pretty cool to have it there :)
2842019-11-19T20:21:52  <waxwing> aj, don't let me stop you :)
2852019-11-19T20:21:57  <sipa> kabaum: i guess i should point out that Shamir's trick is only one of the ways in which multi-EC-multiplication is faster than single-EC-multiplication
2862019-11-19T20:22:58  <kabaum> Oh, shit. There's more. Maybe a followup on next Q&A.
2872019-11-19T20:23:10  <aj> #endmeeting
2882019-11-19T20:23:10  <lightningbot> Meeting ended Tue Nov 19 20:23:10 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
2892019-11-19T20:23:10  <lightningbot> Minutes:        http://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-19-19.00.html
2902019-11-19T20:23:10  <lightningbot> Minutes (text): http://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-19-19.00.txt
2912019-11-19T20:23:10  <lightningbot> Log:            http://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-19-19.00.log.html
2922019-11-19T20:23:11  <sipa> there is a much cooler algorithm called Pippenger's, which actually gives O(n/log(n)) speed (all Strauss does is get rid of the doubling cost for the 2nd and later multiplications, but never reduces additions)
2932019-11-19T20:23:46  <sipa> kabaum: i can explain yet another algorithm called Bos-Coster which is very simple to explain, but isn't faster in practice so we don't use it
2942019-11-19T20:23:47  <aj> can't ask for a better last line than "Oh shit. There's more." ...
2952019-11-19T20:24:12  <kabaum> aj: hehe
2962019-11-19T20:24:36  <nothingmuch> thank you!
2972019-11-19T20:24:58  <sipa> kabaum: interested?
2982019-11-19T20:25:00  <kabaum> sipa: If you have the time, please.
2992019-11-19T20:25:15  <kabaum> But I'm a bit overloaded.
3002019-11-19T20:25:26  <waxwing> i remember it's cool and not too hard to understand.
3012019-11-19T20:25:40  <sipa> so you want to compute n1*P1 + n2*P2 + n3*P3 + ..., and assume there are hundreds of terms
3022019-11-19T20:25:43  <waxwing> unfortunately i don't actually remember the algorithm, so please :)
3032019-11-19T20:26:04  <sipa> you sort them by descending value of n_i
3042019-11-19T20:26:18  <sipa> so n1 and n2 are the largest two n values
3052019-11-19T20:27:13  <sipa> now observe that n1*P1 + n2*P2 is actually the same as (n1-n2)*P1 + n2*(P1+P2)
3062019-11-19T20:27:46  <sipa> (it's subtracting n2*P1 from the first term, and adding it to the second term)
3072019-11-19T20:27:49  <sipa> right?
3082019-11-19T20:28:33  * waxwing is listening
3092019-11-19T20:28:37  <kabaum> You add and subtract n2P1
3102019-11-19T20:28:58  <sipa> so you can rewrite your list of EC multiplications to perform
3112019-11-19T20:29:05  <sipa> by doing 1 EC addition (the P1+P2)
3122019-11-19T20:29:39  <sipa> however, n1 and n2 are your two largest numbers... if they're approximately uniformly distributed, n1-n2 is going to be say 100 times smaller than n1, if you have 100 terms
3132019-11-19T20:30:46  <sipa> so you replace the P2 in the n2*P2 term with (P1+P2), delete the n1*P1 term, and re-insert (n1-n2)*P1 into your list... but probably not at the front anymore (as n1-n2 will very likely be a lot smaller than n1)
3142019-11-19T20:30:59  <sipa> right?
3152019-11-19T20:33:44  <kabaum> I vaguely follow.... now that n2 is probably the biggest value you can rearrange the terms again.
3162019-11-19T20:34:03  <sipa> indeed
3172019-11-19T20:34:32  <sipa> but the important thing is that we've reduced the multiplication factor of one of the terms by a factor 100, with just one EC multiplication
3182019-11-19T20:34:39  <sipa> *one EC addition
3192019-11-19T20:34:44  <sipa> and we can keep doing this
3202019-11-19T20:34:57  <sipa> with the new top two terms
3212019-11-19T20:35:35  <sipa> remember that in a naive EC multiplication algorithm (double and add), we're really only getting rid of 1 bit in the scalars for each EC addition
3222019-11-19T20:35:48  <sipa> here we're getting rid of 7 or 8 bits with high likelihood
3232019-11-19T20:36:11  <sipa> and the more terms there are, the more bits we get rid of in one go
3242019-11-19T20:37:39  <sipa> of course, at some point this must end-- we only have a finite number of bits in those scalars
3252019-11-19T20:37:43  <kabaum> That's fantastic. Like magic. When do you stop the rearranging procedure?
3262019-11-19T20:37:57  <sipa> how does it end? sometimes n1 will equal n2
3272019-11-19T20:38:21  <sipa> in which case you just rewrite n*P1 + n*P2 into a single term n*(P1+P2)
3282019-11-19T20:38:33  <waxwing> interesting that the worst case would be when the differences between the coefficients were the same as the coefficients themselves.
3292019-11-19T20:38:37  <waxwing> iiuc.
3302019-11-19T20:39:14  <sipa> so actually the end point of this algorithm is just a single term, always
3312019-11-19T20:39:19  <sipa> of the form n*P
3322019-11-19T20:39:35  <sipa> where n is the gcd of all input scalars, which with extremely high probability is just 1
3332019-11-19T20:39:50  <sipa> in which case you're done and can just return P
3342019-11-19T20:41:43  <kabaum> So no multiplication at all.
3352019-11-19T20:42:38  <sipa> well, multiplication is always just a bunch of additions
3362019-11-19T20:42:45  <sipa> the question is how :)
3372019-11-19T20:43:08  <kabaum> yes, sure :=)
3382019-11-19T20:44:03  <sipa> the reason this algorithm isn't used is because it interacts badly with another optimization we have
3392019-11-19T20:44:04  <kabaum> Roughly how many iterations would this take? O(number of terms)?
3402019-11-19T20:44:14  <sipa> O(n/log(n))
3412019-11-19T20:44:59  <sipa> or rather, O(n_terms*(1+bits_per_term/log(n_terms)))
3422019-11-19T20:45:23  <sipa> so it can't ever beat O(n_terms), but it can reduce the number of additions per term if the number of terms goes up
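[Editor's note: a toy sketch of the Bos-Coster procedure sipa just described, using a max-heap of (scalar, point) terms and integers in place of EC points so the result can be checked. bos_coster and its counter are names invented for this example; as noted above, the leftover scalar at the end is the gcd of the inputs, which is almost always 1 in practice.]

    import heapq

    def bos_coster(terms):
        # terms: list of (scalar, point) pairs; integers stand in for EC points,
        # so "point addition" is ordinary +.  Repeatedly rewrite the two largest
        # terms  n1*P1 + n2*P2  as  (n1-n2)*P1 + n2*(P1+P2)  until one term is left.
        heap = [(-n, P) for n, P in terms if n]      # max-heap via negated scalars
        heapq.heapify(heap)
        point_adds = 0
        while len(heap) > 1:
            n1, P1 = heapq.heappop(heap)
            n2, P2 = heapq.heappop(heap)
            n1, n2 = -n1, -n2                        # undo negation; n1 >= n2
            heapq.heappush(heap, (-n2, P1 + P2))     # one point addition
            point_adds += 1
            if n1 > n2:
                heapq.heappush(heap, (-(n1 - n2), P1))
        n, P = heap[0]
        # -n is the gcd of the input scalars, almost always 1, so the answer is
        # simply P; the multiply below only keeps this toy exact in the rare case
        # the gcd is larger.
        return -n * P, point_adds

    result, adds = bos_coster([(19, 1000), (25, 3000), (11, 7000)])
    assert result == 19 * 1000 + 25 * 3000 + 11 * 7000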
3432019-11-19T20:48:48  <kabaum> sipa: Thank you so much for the lecture! Hope to see you next Q&A!
3442019-11-19T20:48:55  <kabaum> Bye all!
3452019-11-19T20:49:29  <sipa> https://cryptojedi.org/peter/data/eccss-20130911b.pdf
3462019-11-19T20:49:29  *** pglazman has quit IRC
3472019-11-19T20:49:49  <sipa> this slide deck explains a lot of the ideas behind efficient multi-multiplication and more
3482019-11-19T20:50:08  <kabaum> Noted!
3492019-11-19T20:50:50  *** Chris_Stewart_5 has quit IRC
3502019-11-19T20:53:22  <instagibbs_> thick meeting notes :)
3512019-11-19T20:53:57  <instagibbs_> is there a name for the fixed sized signature encoding
3522019-11-19T20:56:30  <instagibbs_> bip-schnorr signature encoding I guess :P
3532019-11-19T21:04:00  <sipa> "64 byte sigs"
3542019-11-19T21:04:30  <sipa> "compact sigs"
3552019-11-19T21:04:36  <sipa> "non-braindead sigs"
3562019-11-19T21:05:26  <waxwing> was pretty disappointed you didn't use ASN.1 ngl
3572019-11-19T21:07:32  <instagibbs_> is secp's ECDSA compact serialization (e,s)?
3582019-11-19T21:07:39  <instagibbs_> looks like it's two scalars :)
3592019-11-19T21:08:04  <instagibbs_> in other words, it's not "entirely obvious" but the name is going to be BIPXXX
3602019-11-19T21:09:28  <sipa> no, it's (r,s) (where r = R.x)
3612019-11-19T21:09:45  <sipa> also, ECDSA doesn't have an equivalent for "e" (it has no hashes)
3622019-11-19T21:09:54  <instagibbs_> (r,s), right, :)
3632019-11-19T21:17:59  *** midnight has joined ##taproot-bip-review
3642019-11-19T21:18:02  <midnight> ahh..  here it is.
3652019-11-19T21:36:23  *** pglazman has joined ##taproot-bip-review
3662019-11-19T21:38:12  *** davterra has quit IRC
3672019-11-19T21:38:36  *** davterra has joined ##taproot-bip-review
3682019-11-19T21:55:01  *** pglazman has quit IRC
3692019-11-19T21:57:34  *** Moller40 has joined ##taproot-bip-review
3702019-11-19T21:58:53  *** Moller40_ has quit IRC
3712019-11-19T22:12:34  *** pglazman has joined ##taproot-bip-review
3722019-11-19T22:46:38  *** rottensox has joined ##taproot-bip-review
3732019-11-19T23:14:50  *** davterra has quit IRC
3742019-11-19T23:15:13  *** davterra has joined ##taproot-bip-review
3752019-11-19T23:52:13  *** pglazman has quit IRC