03:31:09 so... potential noodle of a thought regarding fees. I would bet there's some number that can be derived for a minimum fee. We have to use the perspective of the blockchain, because the blockchain isn't aware of the value associated with it.
03:31:22 so what is the optimal cost for the unit of account to travel on the blockchain?
03:33:16 basically, perhaps there's some number that describes the optimal conditions for giving a unit the ability to move. Because a "monero", to think of it as an entity, has to effectively slice off a piece of itself to move
03:34:08 man who let this guy in
03:57:00 Draft of a blog post with Bulletproofs+ results: https://gist.github.com/SarangNoether/ee6367fa8b5500120b2a4dbe23b71694
03:58:18 The gist: smaller and faster
03:58:32 Typical transactions get around 5% smaller
03:58:45 Individual proofs verify slightly faster
03:59:11 Batched proof reduction is even better, reaching up to 10% in the 64-proof batches used in my tests
04:00:11 Note that the timing data has large variance... single-proof tests were run over 10K trials, while batch tests were run over 1K trials
04:00:44 You can run your own tests using the performance test framework on the branch linked in the gist
04:01:57 That branch was passing CI, but I just rebased and the new CI is failing on unrelated functional tests; not sure what's up with that
04:05:40 awesome sarang !
04:05:59 Exciting stuff, a 6.6% gain on the most common tx type is quite a win for further reducing chain growth!
04:06:38 Plus I would suspect the much quicker generation time will go a long way towards aiding mobile wallets over time, and maybe even hardware wallets (though not sure on that point).
04:07:14 Proving is still a fraction of a second
04:07:33 Hence not optimizing for it generally
04:07:45 Ah, so that's not the costly part of transaction creation, then?
04:56:49 sethsimmons: it's certainly the costly part of transaction creation ... but it's less important than verifying
04:57:00 because nobody needs to create millions of txs
04:57:10 while full nodes need to verify millions of txs
15:10:40 Yeah, so the batch improvements mean those numbers are likely more relevant than single-proof numbers in practice
15:11:10 I included single-proof numbers primarily to show that, statistically, there don't appear to be situations where BP+ ends up costing you in terms of verification time
15:11:47 makes sense
15:12:02 it would be nice if you could produce BPs on hwws at some point though
15:12:16 I encourage anyone who's interested to run the performance tests to see if they find similar results (note the built-in performance tests on the branch right now don't have the large trial numbers that I used)
15:12:25 do BP+ still separate the inner product argument from the "real" zkp? IIRC the innovation is entirely in the inner product argument, right?
15:13:02 andytoshi: IIRC there was some work on BPs for constrained devices
15:13:45 upstream I've been able to implement a BP prover with only a kilobyte or so of stack space (and I could get that down further if I needed to)
15:14:19 it offloads the inner product argument to the host, which is fine as that's not zero-knowledge, and it "streams" the proof, i.e. it can generate 64 bytes at a time, send the data and then reuse that buffer
15:14:38 well, sometimes it's 65 bytes rather than 64, because secp256k1 is annoying..
15:15:40 my benchmarks suggest that it takes 35-40% of the total proving time to do this, which gets us close to "something that can be reasonably done in commercial hw wallets"
15:16:13 and... most of the work is just const-time multiplying fixed generators by uniformly random blinding factors, and in principle this can be precomputed when the device is otherwise idle
15:16:34 performance_tests have a --loop-multiplier option to run more samples (and --stats for quantiles/median/etc)
15:16:51 moneromooo: nice; I should switch to that
15:20:44 andytoshi: here is the preprint that I was thinking of: https://eprint.iacr.org/2020/281
15:21:37 oh, and moneromooo, I definitely use the `stats` option so I can report median values
15:28:00 sarang: cool thanks
15:33:03 sarang: wow, this is really a lot of work these guys did
16:00:21 PoC for deterministic generation of ring members: https://github.com/tevador/igamma
16:27:58 tevador: interesting approach, reminiscent of the "squeeze-a-crowd" approach suggested a few years back
16:29:14 I'm certainly in favor of deterministic anonymity set selection if it can be done in a way that supports a safe distribution efficiently
16:30:05 It would be very enlightening to determine the practical verification hit using a good PRNG
16:31:52 I can run some benchmarks later on, but I think it's fairly fast, I'd say a couple of microseconds per ring member
16:33:38 When squeeze-a-crowd was first suggested, there was concern about the efficiency of a practical inversion
16:34:20 I assume this would also require eschewing the current method of redrawing within blocks, which is used to mitigate issues caused by block density variance
16:34:21 ?
16:34:53 (and, for the case of empty blocks, results in some definition of overselection of coinbase outputs)
16:37:16 are you referring to this paper? https://ieeexplore.ieee.org/document/8406559
16:37:30 indeed
16:37:40 Nice to see it got accepted!
16:37:41 is there a PDF available somewhere?
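The deterministic-selection idea behind the igamma PoC can be sketched roughly as follows. This is illustrative only, not tevador's code: the gamma parameters (shape 19.28, rate 1.61) are the ones Monero's wallet uses for decoy selection, but the seeded PRNG and the age-to-index mapping here are simplified stand-ins.

```python
# Toy sketch of deterministic ring-member selection from a public seed
# (illustrative, NOT the igamma implementation). A seeded PRNG drives
# exp-gamma draws over output "age", so a verifier can re-derive the
# same ring from the seed alone.
import hashlib
import math
import random

SHAPE, RATE = 19.28, 1.61  # Monero's decoy-selection gamma parameters

def ring_from_seed(seed: bytes, num_outputs: int, ring_size: int) -> list:
    rng = random.Random(hashlib.sha256(seed).digest())  # stand-in PRNG
    ring = set()
    while len(ring) < ring_size:
        # Sample an output "age" in seconds from the exp-gamma model...
        age = math.exp(rng.gammavariate(SHAPE, 1.0 / RATE))
        # ...and map it to a global output index (toy mapping: assume a
        # constant output rate, roughly one output every two minutes).
        idx = num_outputs - 1 - int(age / 120)
        if 0 <= idx < num_outputs:
            ring.add(idx)
    return sorted(ring)

# Anyone holding the seed reconstructs the identical ring:
assert ring_from_seed(b"example", 80_000_000, 11) == \
       ring_from_seed(b"example", 80_000_000, 11)
```

The point of the sketch is only the determinism: all randomness flows from the seed, so verification is a replay of the same draws.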
16:38:32 JHU has a copy of the preprint: https://isi.jhu.edu/~mgreen/mixing.pdf
16:38:44 not sure what was changed in the accepted version
16:39:40 I suppose you could use the same seed method (and same seed?) to do redraws within blocks
16:39:57 Which is important for selections where outputs in the same block have the same age
16:40:19 If this isn't the case, I think you need to ensure that output ordering within blocks is also deterministic, to avoid miners packing blocks nefariously
16:40:35 For older blocks it's irrelevant given the distribution
16:40:42 but every block starts out as new...
16:41:22 This idea of deterministic ordering was also used in Miller et al.'s work on binning anonymity set selection
16:41:46 but it was used for building bins, not for actual ordering within blocks
16:42:38 Reconstructing redraws would imply an additional verification hit that scales with the number of outputs in the anon set
16:47:32 yeah, my PoC doesn't address outputs with the same age, but it can be implemented
16:49:37 Right now, we use an average "output age" computed over a long window, do the exp-gamma draw on that set, and then do a uniform redraw to account for output age
16:49:58 So if it's as simple as using the PRNG to determine the redraw, the only hit is the verification to reconstruct that
16:50:16 and, of course, the overall hit in the distribution reconstruction
16:50:56 Worth noting that the output-age-and-redraw method is surely suboptimal, since its behavior inherently changes based on block density over time (hence the long window)
16:52:04 Other options exist, like suggestions to remove coinbase outputs from selection entirely (except for all-coinbase rings)
16:52:37 @tevador this is pretty neat
16:52:52 Isthmus: had you seen the squeeze-a-crowd paper as well?
16:53:11 Maybe I'm missing something, but doesn't gamma_invert() just encode the real output into the seed in some tricky way that can potentially be decoded back from the seed?
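The same-seed in-block redraw suggested above might look like this toy sketch. Everything here (the function name, the block-index layout, the uniform stand-in for the exp-gamma draw) is hypothetical, not Monero's selection code; the point is that one seeded PRNG performs both the block-level draw and the uniform redraw, so a verifier reconstructs both steps.

```python
# Toy sketch of a deterministic in-block redraw (illustrative only).
# A single seeded PRNG first picks a block via the age-based draw, then
# redraws uniformly among that block's outputs; since outputs in the
# same block share an age, the redraw removes intra-block bias.
import hashlib
import random

def pick_output(seed: bytes, block_first_index: list, total_outputs: int) -> int:
    """block_first_index[h] = global index of the first output in block h."""
    rng = random.Random(hashlib.sha256(seed).digest())
    # Stand-in for the exp-gamma age draw: pick a global output index.
    idx = rng.randrange(total_outputs)
    # Find the block containing idx (linear scan for clarity; empty
    # blocks are skipped automatically because max() takes the last
    # block whose first index is <= idx).
    h = max(b for b in range(len(block_first_index))
            if block_first_index[b] <= idx)
    lo = block_first_index[h]
    hi = (block_first_index[h + 1] if h + 1 < len(block_first_index)
          else total_outputs)
    # Deterministic uniform redraw within the chosen block.
    return rng.randrange(lo, hi)

blocks = [0, 3, 3, 7]  # block 1 is empty; block 2 holds outputs 3..6
a = pick_output(b"tx-seed", blocks, 10)
b = pick_output(b"tx-seed", blocks, 10)
assert a == b  # a verifier replays the identical two-step draw
```

As noted in the discussion, reconstructing this redraw is exactly the extra verification cost: one additional PRNG call per ring member.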
16:53:12 Skimmed it a while back, but need to re-read
16:53:38 Their method used a seed and an offset to hide the true index
16:53:50 and had some options for multiple selections as well
16:54:00 Can the floating point math be replaced with the first few terms of a series expansion?
16:54:44 FP math can be replaced by a fixed-point math library, that's not a problem. The problem is it really needs proof that the real output can't be recovered from the seed
16:54:47 sech1: yes, but the PRNG is reversible, so you cannot tell which of the seeds encode the real output
16:55:46 oh I'm sure there is a way
16:56:13 unless there is a bug in the code, there is no way to do that
16:56:14 starting from the bit representation of the actual FP value of the real output index and comparing it to randomly generated numbers
16:56:32 The real output will have some 0's in the low bits
16:57:39 not if you choose z close to, but not precisely equal to, the age of your output
16:58:40 you could choose z uniformly from the interval that maps to the required output
16:58:47 there's still a small difference between a chosen value and a random value. The security of this scheme will be limited by the security of the choosing method
16:59:15 If instead of floats we had integers modulo N (block height), it would be simpler to prove
16:59:40 it can all be done with integers, just slower
17:02:50 I can't think of anything that would identify the real output, except in cases where the real output is an obvious outlier
17:02:57 It still doesn't make me comfortable that a user-supplied value is directly encoded in a seed. In my mind, the only secure scheme is to randomly select a seed for the PRNG that produces the needed value
17:03:37 good luck bruteforcing a seed that selects your seed from the 80+ million outputs
17:03:48 your output*
17:04:25 80 million attempts on average. Maybe a few times more if it's not uniform
17:05:14 with gamma, it is substantially non-uniform; the chance of hitting an older output is nil
17:05:27 it may work for recent outputs, though
17:08:17 If it was a linear congruential PRNG modulo N, I wouldn't complain. But with this gamma stuff and 160 bits of information... How hard is it to prove that the 27-28 bits of the real output value didn't leak there?
17:10:45 I've already shown one case: the real output value is 0 bits in the end, random values do not. Shifting the gamma parameters a bit to remove 0's in the real value doesn't guarantee that it will be indistinguishable from random values...
17:10:56 *has 0 bits in the end
17:14:31 Take a look at how squeeze-a-crowd does it, using an offset
17:14:43 that provably hides the index
17:19:45 the problem here is a PRNG that generates too many excess binary bits of data. Side channel leak, implementation leak, call it whatever you want
17:20:06 The way I see it, it should be much, much simpler than this
17:20:29 sech1: ./igamma --gen --seed e689401479617835dcc65ab362724610c276cd78 and tell me which output is mine
17:21:05 tevador you don't understand, this is a cat and mouse game and you're the mouse who's doomed to lose
17:21:13 once it's in the blockchain it's there forever
17:21:23 I'm pretty sure this can be proven to be secure
17:21:26 until someone much more clever than you and me uncovers all outputs
17:21:59 I think I know how to make it provably (with an easy proof) secure
17:23:30 take a random uniformly distributed 128-bit number and use a function F(X) that maps the 2^128 integers into N outputs with the desired distribution
17:23:41 this function can be stored as a table of N pairs of values
17:23:55 and move between these 128-bit numbers using an AES round (10 rounds to be safe)
17:24:38 So you start by generating a random value for your output (any of the 2^128 numbers such that F(X) = real_output)
17:24:51 this is much easier to prove
17:25:59 The table will take N*16 bytes in memory
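The F(X)-table scheme just described could be sketched like this, scaled down from 128 bits to 32 bits, with an invertible LCG step standing in for the AES rounds. Every name, table value, and parameter here is illustrative, not a concrete proposal; the scheme's two key moves survive the scaling: the signer samples uniformly from the preimage interval F^-1(real_output), and a public invertible permutation walks between ring-member values.

```python
# Toy sketch of the table-based scheme (32-bit values; an invertible
# LCG replaces the AES walk; all parameters are made up).
import random

M = 2**32
A = 0x9E3779B1          # odd multiplier => invertible mod 2^32
A_INV = pow(A, -1, M)   # its modular inverse

# Hypothetical selection table: output i is chosen by any x in
# [table[i], table[i+1]); the interval widths encode the desired
# (here: made-up, skewed) distribution over 4 outputs.
table = [0, M // 2, 3 * M // 4, 7 * M // 8, M]

def F(x: int) -> int:
    """Map a 32-bit value to an output index via the threshold table."""
    return max(i for i in range(len(table) - 1) if table[i] <= x)

def step(x: int) -> int:     # public permutation (AES stand-in)
    return (x * A + 1) % M

def unstep(x: int) -> int:   # its inverse, used only by the signer
    return ((x - 1) * A_INV) % M

def make_seed(real_output: int, position: int, rng: random.Random) -> int:
    # Sample uniformly from the preimage of the real output, so the
    # value at `position` is distributed exactly like any other value
    # on an honest random walk.
    x = rng.randrange(table[real_output], table[real_output + 1])
    for _ in range(position):  # walk back so the real slot lands at `position`
        x = unstep(x)
    return x

def derive_ring(seed: int, ring_size: int) -> list:
    # Verifier side: replay the walk from the public seed. (Duplicate
    # outputs are possible and not handled in this toy version.)
    ring, x = [], seed
    for _ in range(ring_size):
        ring.append(F(x))
        x = step(x)
    return ring

seed = make_seed(2, 5, random.Random(1))  # real output 2 hidden at slot 5
assert derive_ring(seed, 11)[5] == 2      # verifier re-derives the ring
```

The hiding argument from the discussion applies unchanged: because the signer's starting value is uniform over the preimage interval, the published seed is distributed as if produced by honestly iterating the permutation, modulo the caveats raised above about non-random bit choices.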
17:26:15 quite a bit honestly, but I think you don't need to store it all in memory
17:26:33 just the part needed to generate the random number for your output; it can be generated on the fly
17:27:19 and nodes will need to regenerate this with each block
17:27:59 not the whole table, just the needed part for each output. They can use FP approximations to figure out which part of the table to generate
17:28:29 It's basically the same gamma function, I'm just describing the concept
17:28:45 it might also work
17:30:04 if you see 0.12345... (128 bits), you do the reverse gamma of that number to get the approximate output value (N) and generate the table for N-2 ... N+2 to be safe
17:30:11 so 4-5 values to generate
17:30:50 But this table generation has to be deterministic and use only integer maths
17:34:50 The proof is easy. Since you choose randomly from all 128-bit numbers representing your output (and you use a uniform distribution here), your 128-bit value for the real output has the same chance of occurring as if you had used AES repeatedly until you stumbled upon a 128-bit random value representing your output.
17:35:08 Of course, the PRNG you use for this choice has to be cryptographically secure
17:37:53 hmm, that still leaves room for different implementations. The output selection algo will be fixed here, but those 95-100 bits you choose don't have to be random. Some people might use them to write swear words in the blockchain :D
17:38:15 and uncover their outputs by doing so, lol
17:39:04 so it is a problem. If you're willing to show the real output (for whatever reason), you can just select all 0 bits there...
17:39:35 On the other hand, the current consensus doesn't even ban selecting 11 identical outputs = the real output
17:40:03 That's a bug, it's supposed to :/
17:40:41 I don't know, maybe it does. But you can still select whatever outputs you want. So you can select output indices 1,2,...,10,N where N is your real output. Every time
17:41:11 Yes, that's allowed.
17:41:18 So my scheme with random 128-bit numbers doesn't make things worse in this aspect
17:41:58 Anyway, tevador sarang moneromooo, you have food for thought now ^^^
23:02:41 3 XMR bounty for breaking igamma: https://github.com/tevador/igamma#bounty
23:03:57 👀
23:40:20 :D
23:40:55 Of course, there is certainly a difference between provably secure and bounty-but-unbroken (to say nothing of the fact that implementations imply risks too)