01:06:50 wow hot dog sarang xmrblocks is using 75% of the memory on that thing!
01:06:56 i love seeing resources utilized
01:23:03 ?
01:24:10 Oh on your speedyfast machine?
01:24:19 Yes! I'm running transaction stats
01:27:55 i *don't* think i updated onion-explorer on that one, but mainnet hasn't forked so i think you're good
07:20:18 how much memory is there on the thing?
13:53:15 64 gigs
15:31:41 Good day, all
15:31:57 Working on some MCCVR preparation, as well as Triptych/Arcturus modifications to support (n,k)-Gray codes
15:32:04 The latter is a fun challenge :D
15:45:47 hi i am reading the btc-xmr.pdf on atomic swaps. how do you make the monero address with only half the private key?
15:46:02 is gray code this thing? https://en.wikipedia.org/wiki/Gray_code
15:46:02 [WIKIPEDIA] Gray code | "The reflected binary code (RBC), also known just as reflected binary (RB) or Gray code after Frank Gray, is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit). For example, the representation of the decimal value "1" in binary would normally..."
15:51:58 Yep!
15:52:03 A generalized-base version
15:52:56 nothing like a 74-year-old optimization!
15:53:11 Heh
15:53:21 It's a little more involved =p
15:53:24 i mean, i know it's not... yeah
15:53:30 but just seeing the patent date
15:53:52 Has to do with using a Gray code to iteratively compute coefficients more efficiently than we are now, for both Triptych and Arcturus (would also apply to Lelantus, FWIW)
15:54:23 xxx1: the public key construction is homomorphic in the private keys
15:57:00 this means you can add the public keys of each private key to make a public key that works for the "sum" of the private keys?
16:00:21 If you form a composite private key via a sum, you form the corresponding public key via a sum too
16:00:36 Now, whether or not it's a good idea to use a sum for the composite depends on the use case
16:03:04 ohh ok, this is cool! tyvm sarang!
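The homomorphic key construction being described can be sketched directly. Below is a toy ed25519 in plain Python, purely for illustration (affine twisted-Edwards arithmetic, variable-time, no point validation, and not any production codebase): the map from a private scalar x to the public point x*B is a group homomorphism, so adding two public keys gives the public key of the summed private keys.

```python
# Toy ed25519 demo: public keys are homomorphic in the private keys,
# i.e. (a + b)*B == a*B + b*B. Illustrative only; not constant-time.
p = 2**255 - 19                                       # field prime
l = 2**252 + 27742317777372353535851937790883648493   # group order
d = (-121665 * pow(121666, p - 2, p)) % p             # curve constant

# Standard ed25519 base point B
B = (15112221349535400772501151409588531511454012693041857206046113283949847762202,
     46316835694926478169428394003475163141307993866256225615783033603165251855960)

def edwards_add(P, Q):
    # Unified (complete) addition law on -x^2 + y^2 = 1 + d*x^2*y^2
    (x1, y1), (x2, y2) = P, Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + x2 * y1) * pow(1 + t, p - 2, p) % p
    y3 = (y1 * y2 + x1 * x2) * pow(1 - t, p - 2, p) % p
    return (x3, y3)

def scalar_mult(k, P):
    # Plain double-and-add; fine for prototyping
    R = (0, 1)                                        # identity element
    while k:
        if k & 1:
            R = edwards_add(R, P)
        P = edwards_add(P, P)
        k >>= 1
    return R

a, b = 12345, 67890
A, C = scalar_mult(a, B), scalar_mult(b, B)
# Sum of the public keys equals the public key of the summed private keys
assert edwards_add(A, C) == scalar_mult((a + b) % l, B)
```

As noted in the discussion, whether a plain sum is a *safe* way to build a composite key depends on the use case; this only demonstrates the algebraic property.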
16:03:45 There's been a lot of good research on safe ways to robustly construct such composite keys, so caveat emptor
16:08:22 Gray code proof-of-concept completed... testing...
16:08:30 failed :(
16:08:53 Maybe one bit is off...
16:09:00 I'll... find the door
16:09:57 Heh that's probably what it is
16:10:12 The Python test code builds a Gray iterator and uses that to mess with coefficients
16:10:30 But the prover works a little differently, and that's likely where the problem is
16:10:54 it's also a little wonky since you don't actually care what the Gray code is... you care about which digit changes, and to what
16:37:33 Huzzah, it works now!
16:37:43 It was a problem with the multidimensional indexing that Triptych uses
16:37:54 It'll be even better in Arcturus, which adds another dimension to the indexing...
16:37:55 0_0
16:38:00 ^ cargodog[m]
16:38:28 moar dimensions!
16:38:33 Will push to my research repo after some code cleanup
17:35:51 Triptych update: https://github.com/SarangNoether/skunkworks/commit/f28be666141f0e74eeedd5f03198fbe119703c0d
17:36:08 Arcturus update: https://github.com/SarangNoether/skunkworks/commit/996e3f4a241d1f24f1bf70306475f2f6e5c3d384
17:36:27 Doesn't use batch inversion yet (need to write additional code because of the tensor structure)
17:36:36 ^ cargodog[m]
18:19:41 Oooh great work, sarang!
18:20:28 I keep dividing my time between code and paper, and neither is progressing as fast as I would like :(
18:22:20 For maximal efficiency, you'll want to collapse the `f` tensor into a list/array/whatever and do a single batch inversion
18:23:10 But the only thing this changes is where my code calls `f[u][j][i].invert()`-type operations
18:24:08 Another inefficiency is that defining `sigma` requires Gray codes for only the signing indices, and right now I do a silly thing and iterate until I reach that point
18:24:30 But later in the prover, you need all the Gray codes anyway... so you could compute them all from the start if you really wanted to
18:24:42 But prover efficiency isn't a big deal IMO anyway; not at that level, at least
18:25:09 The verifier still computes them all iteratively just once
18:25:49 Hopefully the way I structured the iterator makes sense
18:26:06 I used to feel like prover efficiency didn't matter, until my proofs started taking hours to generate :|
18:26:08 It can either return a single full Gray code, or else return only the single change from the last code
18:26:37 Well, you can have the prover compute all the Gray codes at once, and pay for it in memory
18:26:47 but then you have to compute the differences later
18:27:15 You only have to do the single Gray codes once for each signing index, fortunately
18:27:19 Indeed. Although I think for most realistic ring sizes it shouldn't be necessary
18:27:36 Yeah, seems silly to optimize for that initially
18:28:53 Anyway, hope this is useful to you
18:29:12 It certainly is. This is great work!
18:29:15 Thanks for sharing
18:29:18 np
18:29:51 It'll be very interesting to see your results using the dalek ristretto library
18:30:41 I think the method of iteratively tracking only the digit changes is optimal for efficiency
18:31:14 since the verifier never actually cares what the full Gray codes are (except for the initial one, which is always fixed at 00...0)
18:31:38 Yeah, that's how I see it too
18:32:19 Maybe there's a nice convenient way to generate a specific value without the iteration, but I don't really feel like working it out...
18:32:57 As I think about this more, I realize that more efficient curve libraries (e.g. lower relative cost of EC ops vs scalar ops) will feel this improvement more quickly, as scalar operations will overtake EC ops more quickly in batching
18:33:25 I believe there is a way, but I don't think it's very useful in our scenario
18:33:32 You're using multiscalar multiplication for the verifier EC operation, right?
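A minimal sketch of the kind of (n,k)-Gray iterator discussed above, assuming the standard "modular" n-ary Gray code g[i] = (b[i] - b[i+1]) mod n over the base-n digits b of the index (the actual repo code may be structured differently). Successive codes differ in exactly one digit, so the iterator can yield just (digit index, old value, new value), which is all the coefficient update needs.

```python
# (n, k)-Gray code sketch: k digits in base n, each step changes one digit.
def gray(x, n, k):
    # Full k-digit modular Gray code of integer x (least-significant first)
    b = [(x // n**i) % n for i in range(k + 1)]
    return [(b[i] - b[i + 1]) % n for i in range(k)]

def gray_changes(n, k):
    # Yields (digit index, old value, new value) for each step,
    # starting from the initial code [0]*k
    prev = gray(0, n, k)
    for x in range(1, n**k):
        cur = gray(x, n, k)
        i = next(j for j in range(k) if cur[j] != prev[j])
        yield i, prev[i], cur[i]
        prev = cur
```

As in the discussion, only the change stream matters (the initial code is always 00...0). This sketch recomputes the full code each step, which costs O(k); a production iterator could track the digits directly instead.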
18:33:41 I am
18:33:44 ok cool
18:33:57 `vartime_multiscalar_mul` in the verifier
18:34:03 yup yup
18:34:12 That's a huge savings (we use it too)
18:34:24 or rather, we use a similar approach under the hood
18:34:38 It's a brilliant bit of math :)
18:35:33 Do you happen to have comparative numbers for scalar-scalar multiplication versus scalar squaring?
18:35:45 IIRC the dalek library computes squaring differently
18:35:49 our library does not
18:36:17 This comes up a few times in verification, and is used frequently in the scalar inversion algorithm
18:36:25 (as part of the addition chain)
18:36:26 I don't. It's on my (ever-growing) todo list to run those numbers, but I haven't done it yet
18:36:46 OK; I assumed there was a benchmark for it, but I don't have an installation for it right now
18:37:01 Which library are you using?
18:37:19 I think there is an existing benchmark for that... let me dig
18:37:29 For C++, the Monero codebase
18:37:47 For Python, I wrote a silly inefficient ed25519 library that makes prototyping easy
18:37:49 Makes sense :)
18:38:18 The Python code isn't remotely suitable for benchmarking
18:38:36 But it's great for getting the algebra to work, and for communicating algorithms
18:39:03 Indeed. It's a great tool for that job
18:39:34 So `curve25519-dalek` does provide benchmarks: https://docs.rs/curve25519-dalek/3.0.0/curve25519_dalek/#performance
18:39:34 I still haven't found one specifically for that though
18:40:12 no worries; was just an idle curiosity
18:40:23 I'm not seeing one
19:05:42 moar compute: http://psc.edu/bridges-2/eup-apply
20:14:48 win /8
20:42:04 * kenshamir[m] sent a long message: < https://matrix.org/_matrix/media/r0/download/matrix.org/mklhpfpiAjxyoUOlKEDaXPfb/message.txt >
20:42:31 Not quite sure what the marginal times were referring to though
20:45:19 So the numbers seem to be within the ballpark of Groth16, with only the DL assumption.
20:45:20 That's quite nice.
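The batch inversion mentioned earlier is Montgomery's trick: trade one field inversion per element for a single inversion plus a handful of multiplications. A generic prime-field sketch (the flattened `f` tensor would be passed in as the list; names here are illustrative, not taken from any repo):

```python
# Montgomery batch inversion: invert a list of nonzero field elements
# mod prime p with ONE modular inversion and O(len) multiplications.
def batch_invert(xs, p):
    prefix = [1] * (len(xs) + 1)
    for i, x in enumerate(xs):
        prefix[i + 1] = prefix[i] * x % p     # running products x0*...*xi
    inv = pow(prefix[-1], p - 2, p)           # the single inversion
    out = [0] * len(xs)
    for i in range(len(xs) - 1, -1, -1):
        out[i] = prefix[i] * inv % p          # (x0*...*x_{i-1}) * (x0*...*x_i)^-1 = x_i^-1
        inv = inv * xs[i] % p                 # strip x_i from the running inverse
    return out
```

Applied to the earlier suggestion: collapse `f[u][j][i]` into one flat list, batch-invert it, and reshape, replacing the per-element `.invert()` calls.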
20:45:20 The block verification time seems to be an extra 1 ms per Sapling proof, so per 2-input/2-output in the block; didn't quite understand that part
20:45:54 Link to video: https://youtu.be/pCSjN9__mtc
20:46:02 Time stamp: 34:57
20:46:21 There were also questions at the end regarding Sean's presentation
20:59:36 If only it were possible to use some kind of public project management site to list details
20:59:41 some kind of hub that uses git
21:00:01 So transaction sizes will increase... adding at least 1 kB to each
21:00:16 Since those numbers would be for proofs only, I assume?
21:00:40 (obviously the Groth numbers do not include complete tx data)
21:02:00 I still really don't understand their reluctance to use their usual open GitHub-based discussion for this, which is usually quite good and fantastically detailed
21:02:33 That plus the license change makes me (cynically) think it's part of a broader strategy to restrict use of this and prevent scooping
21:23:42 Yeah, this is what I gathered. Proofs will increase by 1300 bytes from Groth16
21:24:19 Yeah, it's quite pedantic to be drip-fed information
21:25:59 Yeah, but I think they've given enough information on how to implement it, if one was determined
21:30:30 Except for the hash function, I think if their Pedersen hash comparison was based on the current state-of-the-art PLONK implementations, then this is a big win by itself.
21:30:30 Since currently PLONK uses ~256 constraints to do a simple Pedersen hash, P = aG + bH.
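As a quick illustration of the P = aG + bH shape of a Pedersen commitment and why it is homomorphic, here is a toy sketch written multiplicatively (so P = g^a * h^b); the modulus and generators below are arbitrary illustrative values, not secure parameters, and this says nothing about how any particular proving system arithmetizes it:

```python
# Toy Pedersen commitment demo: commit(a, b) = g^a * h^b (mod q).
# q, g, h are illustrative values only; a real instantiation needs a
# prime-order group and independent generators with unknown DL relation.
q = 2**127 - 1      # a Mersenne prime, used here as a toy modulus
g, h = 3, 5         # toy "generators" (independence is assumed, not proven)

def commit(a, b):
    # The commitment to the pair (a, b)
    return pow(g, a, q) * pow(h, b, q) % q

# Additively homomorphic in the openings: commitments multiply
assert commit(10, 20) * commit(30, 40) % q == commit(40, 60)
```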
21:30:30 I also think that this system might be applicable to Monero, as PLONK also has quite efficient range proofs
21:37:31 I think some of their ideas will be usable in Monero, but we'll have to wait again to hear more information
21:46:08 Yeah, until there's actual concrete information, it's all speculation
21:46:16 Perhaps informed speculation, but speculation nonetheless
21:46:26 and it's too easy for that to bleed over into marketing, which isn't useful
21:48:26 Yeah, speculation mixed with my hopes and dreams
21:48:42 * Inge- makes an ERC20 token
21:48:43 Oh totally; I'd love to get fast, small, trustless transactions
21:49:12 But I've learned to treat all press releases with extreme suspicion
21:49:27 and all technical claims with a wait-and-see attitude
21:49:43 The whole point of math and code is that you shouldn't need to take anyone's word for it
21:49:58 I hope it all works out as they claim
21:50:05 Very true
21:50:07 But I'm not going to assume they've done it