00:11:03 huzzah!
10:36:30 sarang: can 2of2SignCond from https://eprint.iacr.org/2019/595.pdf be replaced with 2-of-3 MLSTAG from Zero to Monero 2.0? The third participant can be the yG = Y, which if Bob figures out before block t, he can spend the escrowed output of the DLSAG
14:09:54 Ooh, interesting question... I'll need to think a bit about that
14:10:04 Once I finish some more Gray code stuff :D
15:15:33 Ah, C++... whatever would I do without your segfaults
15:17:03 Spend much more time debugging.
15:17:51 It's probably related to a rewrite I did on the batch inverter to support matrices
15:18:34 Without that, you lose a ton of the Gray code efficiency gains
15:18:47 (batch inversion only requires a single scalar inversion, which is so cool)
15:18:55 translating your python gray code to c++ already sarang?
15:19:00 aye
15:19:19 * moneromooo suggests ASAN
15:19:23 The Python Gray code stuff works great for Arcturus and Triptych (albeit without batch inversion, since that code isn't for timing)
15:20:37 I'll be very eager to see what cargodog[m] learns from their Rust code
15:20:52 Right now they only use a binary Gray code on a base Groth/Kohlweiss library
15:21:13 Generalized Triptych/Arcturus require n-ary Gray codes (though right now the Triptych/Arcturus base is fixed at 2)
15:21:26 so you _could_ use binary Gray codes right now, but that's not flexible
15:21:48 and FWIW the way cargodog[m] uses the binary Gray codes isn't iterative, so I expect it's not optimally efficient as is
15:44:03 Hooray, solved
15:44:21 Glad to hear your C++ implementation is coming along! Sorry, work/family life has been relentless these last few days, so I haven't made as much progress as I would've hoped. Pesky work, getting in the way of the important things!
15:45:12 Oh, no problem at all!
You aren't obligated to work on it, of course
15:45:47 I'm merely curious to see the relative speedups at different parameters between C++ and Rust since they use such different libraries
15:47:11 Also: my Monero-focused implementation assumes that only proofs within the same transaction share an input anonymity set for batching purposes
15:47:20 whereas yours assumes sharing the set much more broadly
15:47:25 So that's one major difference
15:47:29 No apologies necessary, I was just offering an excuse for leaving you in the dark. This is a work of love, no obligation :)
15:48:14 Yes, this optimization really has the most to gain if the set can be shared as broadly as possible
15:48:31 Sure, but as has been noted, this has usability consequences in practice
15:48:34 which, as we discussed before, is not an easy problem to solve
15:48:38 heh, yep
15:48:39 jinx
15:48:51 :D
15:49:11 But at any rate, I'm initially interested to see at what point we see benefits sharing only within transactions, which would use only 1 or 2 Triptych proofs, and 1 Arcturus proof
15:49:19 (since Triptych requires a proof per input)
15:49:43 I should dust off my (very poor) Rust skills and play around with your implementation
15:50:00 The Triptych C++ is working now, hooray, with benchmark results!
15:50:04 I am too. Once I get this write-up finished, I plan to build out a wide array of benches testing "real world" scenarios
15:50:22 Are you using the n-ary Gray code that I found?
15:50:24 Or another one?
15:50:27 Or a more general approach?
15:50:33 (since there isn't one n-ary Gray code)
15:51:18 I am using one of the papers linked in the Wiki, which I believe is the same paper you shared
15:51:34 it focuses on ternary, but lays the foundation for easy generalization
15:51:49 fyi, I am almost done with a first draft of the paper.
Once I started including the necessary background info for the uninitiated, things got a bit wordy
15:51:56 The paper I used had a general algorithm
15:52:08 But yeah, its initial example was base-3 IIRC
15:52:58 I used the algorithm in this paper: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.119.1344
15:53:27 Does Rust support anything akin to a Python generator?
15:53:33 Yeah, that's the paper. Sorry, I've been juggling too many papers lately. Indeed, it has a generalized algo
15:53:47 I built a generator for the Python example, but did a more hacky approach in C++ to do the same thing
15:54:24 It's just so darn convenient to iterate over a custom generator
15:54:46 I'm not as fluent in Python, but Rust iterators can be used to create "generators", which is what I am using in my binary impl
15:55:09 agreed. Bundling the iteration logic into the iterator makes things incredibly smooth
15:55:09 OK, meaning a custom iterator?
15:55:22 yes, sorry
15:55:24 custom iterators
15:55:38 In Python, the generator is a function that has a `yield` statement that basically maintains internal state but returns the current iterated value
15:55:44 cool
15:56:10 So you can iterate over the function call without any messy state variables that need to be maintained between calls
15:56:18 You can also define iterators that permute from past iterators, which works nicely here
15:56:20 In C++ I fake it with global variables
15:56:29 Why would you need that?
15:56:30 ah, that sounds very handy
15:56:36 The verifier only does a single iteration
15:56:41 er, single iterator
15:57:10 So in Python, you do `for some_result in custom_iterator(parameters)`
15:57:24 and `custom_iterator` has a `yield` when it needs to return its next result
15:57:27 I could have worded that better: One iterator, where each subsequent element is a permutation of the previous
15:58:02 What do you mean by permutation?
15:58:19 My iterator, for example, tracks the changed Gray digit, its old result, and its new result
15:58:36 and then the verifier uses that to identify which scalar-scalar mults to do
15:58:48 (there's a separate mechanism when it needs a given Gray code without iteration)
15:58:53 (the prover needs to do this)
15:59:13 Sorry for all the questions; I'm fairly new to Rust and don't use it often
15:59:36 Out of curiosity, did you find it really hard to read at first?
15:59:40 No worries. My internet is quite slow, so sorry for lagged responses :)
15:59:45 sarang: Rust? Yes, very
15:59:45 (rust)
16:00:20 How much time till you could mostly read it, to follow the basic structure?
16:00:24 I find Rust much easier to read
16:00:31 It was also tricky since the language itself seems to evolve quickly, meaning it was super irritating to get anything to run properly and have consistency between features
16:00:36 cargodog[m]: how so?
16:00:40 Some things are easier, sure
16:00:45 And I am also a C/C++ dev
16:00:48 but others seem easy only in hindsight
16:01:05 I suppose this is very person-dependent :)
16:01:26 I found it incredibly obscure every time I tried reading some.
16:01:46 For math stuff (since we are talking about math here), the biggest "readability" advantage is how Rust iterators and iterator adapters work
16:01:50 moneromooo: there are some things I still have trouble reading for structure
16:02:04 I can write a very complex mathematical relation, just as a short chain of iterator adapters, and it reads almost like reading LaTeX
16:02:06 cargodog[m]: how much of that is iterators in general?
16:02:18 Sounds like LISP...
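[editor's note] The changed-digit iteration described at 15:58:19 can be sketched as a Python generator. This is a hedged illustration only: it uses the standard modular n-ary Gray code g_j = (d_j - d_{j+1}) mod n, which may differ in details from the algorithm in the paper linked above; the function name is hypothetical.

```python
def gray_changes(n, k):
    """Yield (digit index, old value, new value) for each step of a
    modular n-ary Gray code over k digits, starting at the all-zero word.
    Sketch only -- not taken from either implementation discussed here."""
    prev = [0] * k
    for i in range(1, n**k):
        # base-n digits of i, least significant first (one extra for the formula)
        d = []
        x = i
        for _ in range(k + 1):
            d.append(x % n)
            x //= n
        # modular Gray code: g_j = (d_j - d_{j+1}) mod n
        g = [(d[j] - d[j + 1]) % n for j in range(k)]
        # exactly one digit differs from the previous code word
        j = next(j for j in range(k) if g[j] != prev[j])
        yield j, prev[j], g[j]
        prev = g
```

Iterating `gray_changes(3, 2)` yields 8 single-digit changes that visit all 9 two-digit ternary words, which is the property the verifier exploits to limit its scalar-scalar mults.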
16:02:33 Indeed, very much like Lisp
16:02:52 But with a much nicer performance/security tradeoff
16:03:07 I am not a Rust maximalist :)
16:03:14 Heh
16:03:17 I like the idea of Rust
16:03:18 But I do prefer it for this sort of work
16:03:23 I don't like using it to try out new things
16:03:42 It is slower for rapid prototyping, no doubt
16:03:48 But I appreciate how hard it makes it to shoot yourself in the foot, for sure
16:03:57 Yeah, I like Python for testing algorithms, since it's easy to iterate
16:03:57 haha!
16:04:00 (pun intended)
16:04:32 That's why I did my Gray stuff in Python first, and only then in C++
16:04:53 Python will run circles around Rust for quick prototyping
16:04:58 ya
16:05:22 but the way I use it is slow as molasses for curve operations
16:05:58 I'm sure that I'll use Rust more and more over time
16:05:59 :)
16:10:50 Hey gang
16:10:50 What does the core wallet use for the change address?
16:10:50 I spent a while perusing ZtM but this isn't mentioned (which makes sense since it's an implementation decision and not part of the protocol)
16:11:24 index 0 of the current account.
16:11:45 Cool, that makes sense.
16:11:46 ty mooo
16:59:07 cargodog[m]: for Triptych 2-in transactions, the Gray method is slower until you move from 64 to 128 for the anon set
16:59:37 For 128, it's 1% faster =p
17:00:24 For 256, it's 2.7% faster
17:02:01 Running bigger anon sets now...
17:18:31 I feel gingeropolous's excitement
17:19:39 6% faster for anon 1024
17:19:44 (2 inputs, again)
17:21:46 Running on 4 inputs, which implies better batching
17:41:27 Yeah with only 2 inputs, I expect you to see an improvement above a certain set size (sounds like that threshold is 128 for your curve library), but no real "overwhelming" improvement
17:42:34 Honestly, once I get all of this formalized, and once I've come up with some good performance numbers demonstrating the batching benefit, I want to research ways to maximize set overlap to improve batching
17:43:02 but there's a lot of steps before I get there, and that is a potentially never-ending black hole
17:43:26 sarang: are those results with your C++ code?
17:45:31 "more performant" curve libraries (e.g. one whose point operations are relatively fast) will notice more of a benefit, because scalar operations will have more of an effect
18:20:00 noted for next edition, thanks Isthmus
19:38:38 Arcturus finished as well
19:41:00 how does it look?
19:42:39 Testing now :)
19:42:39 Test failed
19:42:43 :(
19:42:44 noooo
19:54:01 For those interested:
19:54:15 Arcturus w/ Gray coefficients: https://github.com/SarangNoether/monero/tree/arcturus-gray
19:54:25 Triptych w/ Gray coefficients: https://github.com/SarangNoether/monero/tree/triptych-gray
19:55:43 The CI fails, but that appears to be a problem with the macOS configuration, not the code itself
20:00:46 Very nice!
20:01:29 cargodog[m]: for Triptych, there's definitely a clear threshold for when this starts being useful
20:01:57 Interesting. Why is it different?
20:02:07 Different from what?
20:02:20 Admittedly, I have spent more time focusing on Arcturus and base Groth
20:02:59 Are you implying it has a clearer threshold compared to Arcturus?
20:03:12 Perhaps I misunderstood
20:04:16 I mean that Gray is slower than non-Gray for smaller input sets, even for Arcturus
20:04:56 Ah understood.
I thought you meant there was a notable difference in the threshold between the two proof schemes
20:05:12 Well, in Triptych the code currently inverts matrices on a per-proof basis
20:05:16 * cargodog dons his dunce cap
20:05:18 In Arcturus, it does them all at once
20:05:27 Of course, for Arcturus batching you can choose how you do this too
20:05:35 IIRC you invert per proof?
20:05:39 Not per batch?
20:06:07 I believe I invert over the entire batch. Let me confirm
20:06:10 And again, you're handling the idea of batching differently than I am, because of the implied use cases
20:06:13 but I see what you mean now.
20:07:07 You are assuming that multiple proofs share an input set
20:07:18 correct
20:07:33 I assume that multiple Triptych proofs share an input set _per transaction only_, and that Arcturus proofs (one per transaction) do not otherwise overlap
20:07:43 so YMMV
20:08:11 That is a more realistic assumption.
20:08:35 I intend to modify my library to make the assumption selectable (or add a separate API)
20:08:59 righto
20:09:23 Well, under that assumption, you only see benefits for 2-input transactions between input set size 2^6 and 2^7 for Triptych and Arcturus
20:09:35 below that, standard decomposition wins
20:09:43 above that, Gray wins
20:09:49 but not by much
20:10:00 That's good to know. It's nice to put concrete numbers to this
20:10:11 for Triptych 4-input transactions, the crossover is between 2^5 and 2^6
20:10:56 same for Arcturus
20:11:01 I'll plot all this for tomorrow's meeting
20:11:14 fantastic. Can't wait :)
20:11:25 Feel free to build the code if you want to play around with it
20:11:42 If you do, I can point you to the right place for modifying the benchmarks
20:11:49 (not as easy as `cargo`...)
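[editor's note] To illustrate where these crossovers come from: when one Gray digit changes, each coefficient product can be updated with two scalar multiplications (one by the inverse of the old factor, one by the new factor) instead of being recomputed from all k factors. A minimal sketch with modular integers standing in for curve scalars; the matrix `f` and all names here are hypothetical, not taken from either implementation.

```python
# Ed25519 scalar group order, used here only as an example modulus
L = 2**252 + 27742317777372353535851937790883648493

k, n = 4, 3  # hypothetical digit count and base
f = [[(7 * j + v + 2) % L for v in range(n)] for j in range(k)]  # toy scalar matrix
f_inv = [[pow(x, L - 2, L) for x in row] for row in f]  # would be batch-inverted in practice

# coefficient for the all-zero Gray word, computed the slow way once
coeff = 1
for j in range(k):
    coeff = coeff * f[j][0] % L

def gray_update(coeff, j, old, new):
    """One Gray step: digit j moves from old to new -- an O(1) update."""
    return coeff * f_inv[j][old] % L * f[j][new] % L

# apply one step, then check against a full recomputation
coeff = gray_update(coeff, 2, 0, 1)
digits = [0, 0, 1, 0]
expected = 1
for j in range(k):
    expected = expected * f[j][digits[j]] % L
assert coeff == expected
```

This is why the scheme only pays off past a threshold: the O(1) updates are cheap, but they require the inverses `f_inv`, whose one-time cost must be amortized over enough iterations.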
20:12:20 Oh, I should note that these tests also assume commitment offsets and balance proofs too
20:12:29 So they're accurate for complete transactions
20:12:35 not just the input proofs
20:18:48 sarang_: does Mac CI always fail or just once?
20:19:17 It failed regularly, but I hadn't rebased in a while
20:22:48 looks like package manager connection issues
20:23:04 should fix itself after a bit
20:23:26 fair enough
20:23:31 the code works, that's the important part
20:23:36 the check mark is a side benefit :D
20:50:48 sarang_: I was just thinking about the performance cross-over between Gray/non-Gray, and I am surprised Gray doesn't win every time. I will look at your code later, but the Gray code iteration logic can be done very efficiently in native words (if we assume N < 2^64 =p), and then used in the polynomial evaluation of scalars
20:51:29 I will try to get some numbers to "put my money where my mouth is", but I thought you should know I'm not yet ready to say it's not helpful for small sets/batches
20:51:47 I don't think it has to do with the code computation
20:51:47 although, the benefit would of course be marginal compared to large sets/batches
20:51:50 but with the inversion
20:51:56 which implies a nontrivial number of scalarmults
20:52:14 How so? It should require only 2 scalar mults
20:52:19 per exponent
20:52:25 Inversion requires ~200 scalarmults
20:52:27 ahhh yes... inversion...
20:52:38 plus more for the batch inversion
20:52:44 I was forgetting about that cost
20:52:47 this isn't needed at all for the non-Gray version
20:53:00 indeed. Good point.
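[editor's note] For context on the "batch inversion" being weighed here: Montgomery's trick inverts m values with a single modular inversion plus roughly 3(m-1) multiplications, which is why only one expensive inversion is paid per batch. A sketch over the ed25519 scalar group order using plain modular integers, not an actual curve library; the function name is illustrative.

```python
# Ed25519 scalar group order (a prime), used as the example modulus
L = 2**252 + 27742317777372353535851937790883648493

def batch_invert(values, p=L):
    """Invert every element of `values` mod p using a single inversion
    (Montgomery's trick). All inputs must be nonzero mod p."""
    m = len(values)
    # prefix[i] = values[0] * ... * values[i-1] mod p
    prefix = [1] * (m + 1)
    for i, v in enumerate(values):
        prefix[i + 1] = prefix[i] * v % p
    # the single inversion, via Fermat's little theorem
    inv = pow(prefix[m], p - 2, p)
    # walk backwards, peeling off one inverse at a time
    out = [0] * m
    for i in reversed(range(m)):
        out[i] = prefix[i] * inv % p
        inv = inv * values[i] % p
    return out
```

The backward pass works because after handling index i, `inv` holds the inverse of the product of the first i values, so each element costs two multiplications beyond the prefix pass.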
20:53:22 So at some point, you gain more from the Gray iteration than you lose from the batch inversion
20:53:27 At any rate, I intend to explore this thoroughly, but that sounds likely to be the limiting factor
20:53:40 Yeah, and I don't see a way to eliminate it
20:53:54 ~200 mults seems to be the best known addition chain for ed25519 scalar inversion
20:54:10 That's what I have seen too
20:54:41 I was forgetting about inversion... I'm too tired to make more progress today
20:55:56 But hey, FWIW after the crossover (which is negligible for large batches, probably), Gray wins!
20:56:31 Hehe
20:56:53 For something like Lelantus, it's a no-brainer to switch
20:58:46 Indeed. Monero has been my first focus, but it would probably be a good idea to share this work with them. At minimum they may have useful suggestions/criticisms
20:59:28 Ok, I'm off for today. I will try to stop by the meeting tomorrow
21:03:14 I'll send my work to Aram, their cryptographer
21:03:20 We've collaborated before on some Lelantus improvements
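[editor's note] On the ~200-mult figure: scalar inversion mod the ed25519 group order is typically computed as x^(L-2) mod L via Fermat's little theorem, and the cost is the length of the exponentiation chain. The sketch below counts operations for plain left-to-right square-and-multiply, which gives an upper bound of a few hundred multiplications; tuned addition chains (the ~200 cited above) do better. Names are illustrative.

```python
# Ed25519 scalar group order (prime)
L = 2**252 + 27742317777372353535851937790883648493

def invert_counting(x, p=L):
    """Invert x mod p via Fermat's little theorem (x^(p-2) mod p),
    counting multiplications. Plain square-and-multiply; real
    implementations use shorter addition chains."""
    e = p - 2
    result = 1
    mults = 0
    for bit in bin(e)[2:]:
        result = result * result % p  # square
        mults += 1
        if bit == "1":
            result = result * x % p  # multiply
            mults += 1
    return result, mults
```

Since this fixed cost appears once per (batched) inversion but the Gray iteration saves work on every index, the crossover set sizes reported earlier in the log are exactly where the savings overtake it.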