-
gingeropolous
so... potential noodle of a thought regarding fees. I would bet there's some number that can be derived for a minimum fee. We have to take the perspective of the blockchain, because the blockchain isn't aware of the value associated with it.
-
gingeropolous
so what is the optimal cost for the unit of account to travel on the blockchain.
-
gingeropolous
basically, perhaps there's some number that describes the optimal conditions for giving a unit the ability to move. Because a "monero", to think of it as an entity, has to effectively slice off a piece of itself to move
-
gingeropolous
man who let this guy in
-
sarang
-
sarang
The gist: smaller and faster
-
sarang
Typical transactions get around 5% smaller
-
sarang
Individual proofs verify slightly faster
-
sarang
Batched proof verification improves even more, with reductions of up to 10% in the 64-proof batches used in my tests
-
sarang
Note that the timing data has large variance... single-proof tests were run over 10K trials, while batch tests were run over 1K trials
-
sarang
You can run your own tests using the performance test framework on the branch linked in the gist
-
sarang
That branch was passing CI, but I just rebased and the new CI is failing on unrelated functional tests; not sure what's up with that
-
gingeropolous
awesome sarang !
-
sethsimmons
Exciting stuff, 6.6% gain on the most common TX type is quite a win for further reducing chain growth!
-
sethsimmons
Plus I would suspect the much quicker generation time will go a long way toward aiding mobile wallets over time, and maybe even hardware wallets (though I'm not sure on that point).
-
sarang
Proving is still a fraction of a second
-
sarang
Hence not optimizing for it generally
-
sethsimmons
Ah, so that’s not the costly part of transaction creation, then?
-
andytoshi
sethsimmons: it's certainly the costly part of transaction creation ... but it's less important than verifying
-
andytoshi
because nobody needs to create millions of txs
-
andytoshi
while full nodes need to verify millions of txs
-
sarang
Yeah, so the batch improvements mean those numbers are likely more relevant than single-proof numbers in practice
-
sarang
I included single-proof numbers primarily to show that, statistically, there don't appear to be situations where BP+ ends up costing you in terms of verification time
-
andytoshi
makes sense
-
andytoshi
it would be nice if you could produce BPs on hwws at some point though
-
sarang
I encourage anyone who's interested to run the performance tests to see if they find similar results (note the built-in performance tests on the branch right now don't have the large trial numbers that I used)
-
andytoshi
does BP+ still separate the inner product argument from the "real" ZKP? IIRC the innovation is entirely in the inner product argument, right?
-
sarang
andytoshi: IIRC there was some work on BPs for constrained devices
-
andytoshi
upstream i've been able to implement a BP prover with only a kilobyte or so of stack space (and i could get that down further if i needed)
-
andytoshi
it offloads the inner product argument to the host, which is fine as that's not zero-knowledge, and it "streams" the proof, i.e. it can generate 64 bytes at a time, send the data, and then reuse that buffer
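A minimal sketch of the streaming pattern andytoshi describes: emit the proof in fixed-size chunks, reusing a single buffer so peak memory stays near one chunk. The chunk contents here are stand-in hashes, not real Bulletproof data, and the fixed 64-byte size ignores the occasional 65-byte case mentioned below; all names are hypothetical.

```python
# Hypothetical sketch of a "streaming" prover: constant memory, one
# reusable buffer, fixed-size chunks handed to the host as produced.
# SHA-512 output is a stand-in for real proof data.
import hashlib

CHUNK = 64  # bytes emitted per round, as in the discussion

def stream_proof(secret: bytes, rounds: int, send):
    """Generate `rounds` chunks of CHUNK bytes each, calling send(buf)
    after each one. The buffer is reused, so the caller must copy or
    transmit each chunk before the next round overwrites it."""
    buf = bytearray(CHUNK)
    state = secret
    for i in range(rounds):
        state = hashlib.sha512(state + i.to_bytes(4, "little")).digest()
        buf[:] = state[:CHUNK]
        send(bytes(buf))

out = []
stream_proof(b"seed", 4, out.append)
assert len(out) == 4 and all(len(c) == 64 for c in out)
```

The point of the pattern is that stack usage is bounded by one chunk plus the hash state, which is what makes a kilobyte-scale prover plausible on constrained devices.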
-
andytoshi
well, sometimes it's 65 bytes rather than 64, because secp256k1 is annoying..
-
andytoshi
my benchmarks suggest that it takes 35-40% of the total proving time to do this, which gets us close to "something that can be reasonably done in commercial hw wallets"
-
andytoshi
and... most of the work is just const-time multiplying fixed generators by uniformly random blinding factors. and in principle this can be precomputed when the device is otherwise idle
-
moneromooo
performance_tests have a --loop-multiplier to run more samples (and --stats for quantiles/median/etc)
-
sarang
moneromooo: nice; I should switch to that
-
sarang
andytoshi: here is the preprint that I was thinking of
eprint.iacr.org/2020/281
-
sarang
oh and moneromooo I definitely use the `stats` option so I can report median values
-
andytoshi
sarang: cool thanks
-
andytoshi
sarang: wow this is really a lot of work these guys did
-
tevador
PoC for deterministic generation of ring members:
github.com/tevador/igamma
-
sarang
tevador: interesting approach, reminiscent of the "squeeze-a-crowd" approach suggested a few years back
-
sarang
I'm certainly in favor of deterministic anonymity set selection if it can be done in a way that supports a safe distribution efficiently
-
sarang
It would be very enlightening to determine the practical verification hit using a good PRNG
-
tevador
I can run some benchmarks later on, but I think it's fairly fast, I'd say a couple microsecs per ring member
-
sarang
When squeeze-a-crowd was first suggested, there was concern about the efficiency of a practical inversion
-
sarang
I assume this would also require eschewing the current method of redrawing within blocks, which is used to mitigate issues caused by block density variance?
-
sarang
(and, for the case of empty blocks, results in what could be considered overselection of coinbase outputs)
-
tevador
-
sarang
indeed
-
sarang
Nice to see it got accepted!
-
tevador
is there a PDF available somewhere?
-
sarang
JHU has a copy of the preprint:
isi.jhu.edu/~mgreen/mixing.pdf
-
sarang
not sure what was changed in the accepted version
-
sarang
I suppose you could use the same seed method (and same seed?) to do redraws within blocks
-
sarang
Which is important for selections where outputs in the same block have the same age
-
sarang
If this isn't the case, I think you need to ensure that output ordering within blocks is also deterministic, to avoid miners packing blocks nefariously
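One way to picture the deterministic within-block ordering sarang mentions: sort outputs by a hash keyed on the block hash, so the order is fixed by the block contents and cannot be chosen freely at packing time. This is a hypothetical illustration, not an actual consensus rule, and all names are invented.

```python
# Hedged sketch: deterministic ordering of outputs within a block by
# sorting on H(block_hash || output_key). Same inputs always yield the
# same order, independent of the order outputs were supplied in.
import hashlib

def canonical_order(block_hash: bytes, output_keys):
    """Return output keys sorted by the keyed hash of each key."""
    return sorted(output_keys,
                  key=lambda k: hashlib.sha256(block_hash + k).digest())

keys = [b"out-a", b"out-b", b"out-c"]
# insertion order is irrelevant to the resulting canonical order:
assert canonical_order(b"blk", keys) == canonical_order(b"blk", keys[::-1])
```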
-
sarang
For older blocks it's irrelevant given the distribution
-
sarang
but every block starts out as new...
-
sarang
This idea of deterministic ordering was also used in Miller et al.'s work on binning anonymity set selection
-
sarang
but it was used for building bins, not for actual ordering within blocks
-
sarang
Reconstructing redraws would imply an additional verification hit that scales with the number of outputs in the anon set
-
tevador
yeah, my PoC doesn't address outputs with the same age, but it can be implemented
-
sarang
Right now, we use an average "output age" computed over a long window, do the exp-gamma draw on that set, and then do a uniform redraw to account for output age
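The exp-gamma draw plus uniform redraw described here can be sketched roughly as follows. The shape/rate values are the commonly cited wallet fit; the per-block output counts and 120-second block time are illustrative, and this is not the wallet's actual code.

```python
# Hedged sketch of gamma-based decoy selection: sample log(age) from a
# gamma distribution, map the age to a block, then redraw uniformly
# among that block's outputs (which all share the same age).
import math
import random

SHAPE, RATE = 19.28, 1.61  # commonly cited fit over log(output age in seconds)

def pick_output(outputs_per_block, seconds_per_block=120, rng=random):
    """outputs_per_block: output count per block, index 0 = oldest.
    Returns (block_index, output_index_within_block)."""
    total_seconds = len(outputs_per_block) * seconds_per_block
    while True:
        # exp-gamma draw: sample log(age), exponentiate to get an age
        age = math.exp(rng.gammavariate(SHAPE, 1.0 / RATE))
        if age >= total_seconds:
            continue  # redraw ages older than the chain itself
        block = len(outputs_per_block) - 1 - int(age // seconds_per_block)
        if outputs_per_block[block] > 0:
            # uniform redraw among same-age outputs in the chosen block
            return block, rng.randrange(outputs_per_block[block])

rng = random.Random(1)
counts = [4] * 200000  # invented chain: 4 outputs in each of 200k blocks
b, i = pick_output(counts, rng=rng)
assert 0 <= b < len(counts) and 0 <= i < 4
```

The redraw in the last step is the part being discussed: if the PRNG also fixes that choice, a verifier can reconstruct it at the cost sarang notes.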
-
sarang
So if it's as simple as using the PRNG to determine the redraw, the only hit is the verification to reconstruct that
-
sarang
and, of course, the overall hit in the distribution reconstruction
-
sarang
Worth noting that the output-age-and-redraw method is surely suboptimal, since its behavior inherently changes based on block density over time (hence the long window)
-
sarang
Other options exist, like suggestions to remove coinbase outputs from selection entirely (except for all-coinbase rings)
-
Isthmus
@tevador this is pretty neat
-
sarang
Isthmus: had you seen the squeeze-a-crowd paper as well?
-
sech1
Maybe I'm missing something, but doesn't gamma_invert() just encode the real output into the seed in some tricky way that can potentially be decoded back from the seed?
-
Isthmus
Skimmed it a while back, but need to re-read
-
sarang
Their method used a seed and an offset to hide the true index
-
sarang
and had some options for multiple selections as well
-
Isthmus
Can the floating point math be replaced with the first few terms of a series expansion?
-
sech1
FP math can be replaced by a fixed-point math library, that's not a problem. The problem is it really needs a proof that the real output can't be recovered from the seed
-
tevador
sech1: yes, but the PRNG is reversible, so you cannot tell which of the seeds encodes the real output
-
sech1
oh I'm sure there is a way
-
tevador
unless there is a bug in the code, there is no way to do that
-
sech1
starting from the bit representation of the actual FP value of the real output index and comparing it to randomly generated numbers
-
sech1
Real output will have some 0's in the low bits
-
tevador
not if you choose z close to but not precisely equal to the age of your output
-
tevador
you could choose z uniformly from the interval that maps to the required output
-
sech1
there's still a small difference between the chosen value and a random value. The security of this scheme will be limited by the security of the choosing method
-
sech1
If instead of floats we had integers modulo N (block height), it would be simpler to prove
-
tevador
it can all be done with integers, just slower
-
tevador
I can't think of anything that would identify the real output except in cases where the real output is an obvious outlier
-
sech1
It still doesn't make me comfortable that a user-supplied value is directly encoded in the seed. In my mind, the only secure scheme is to randomly select a seed for the PRNG that produces the needed value
-
tevador
good luck bruteforcing a seed that selects your output from the 80+ million outputs
-
sech1
80 million attempts on average. Maybe a few times more if it's not uniform
-
tevador
with gamma, it is substantially non-uniform, the chance of hitting an older output is nil
-
tevador
it may work for recent outputs, though
-
sech1
If it were a linear congruential PRNG modulo N, I wouldn't complain. But with this gamma stuff and 160 bits of information... How hard is it to prove that the 27-28 bits of the real output value didn't leak there?
-
sech1
I've already shown one case: the real output value has 0 bits at the end, random values do not. Shifting the gamma parameters a bit to remove 0's in the real value doesn't guarantee that it will be indistinguishable from random values...
-
sarang
Take a look at how squeeze-a-crowd does it, using an offset
-
sarang
that provably hides the index
-
sech1
the problem here is a PRNG that generates too many excess binary bits of data. Side-channel leak, implementation leak, call it whatever you want
-
sech1
The way I see it, it should be much much simpler than this
-
tevador
sech1: ./igamma --gen --seed e689401479617835dcc65ab362724610c276cd78 and tell me which output is mine
-
sech1
tevador you don't understand, this is a cat and mouse game and you're the mouse who's doomed to lose
-
sech1
once it's in the blockchain it's there forever
-
tevador
I'm pretty sure this can be proven to be secure
-
sech1
until someone much more clever than you and me uncovers all outputs
-
sech1
I think I know how to make it provably secure (with an easy proof)
-
sech1
take a random uniformly distributed 128-bit number and use a function F(X) that maps the 2^128 integer values onto N outputs with the desired distribution
-
sech1
this function can be stored as a table of N pairs of values
-
sech1
and move between these 128-bit numbers using AES rounds (10 rounds to be safe)
-
sech1
So you start by generating a random value for your output (any of 2^128 numbers such that F(X) = real_output)
-
sech1
this is much easier to prove
-
sech1
The table will take N*16 bytes in memory
-
sech1
quite a bit honestly, but I think you don't need to store it all in memory
-
sech1
just the part needed to generate the random number for your output, it can be generated on the fly
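The scheme sech1 sketches over the last few messages can be illustrated roughly as follows: partition the 128-bit space into per-output intervals matching the desired distribution, pick the seed uniformly inside the real output's interval, and derive further ring members by a keyed walk. SHA-256 stands in for the AES rounds purely to keep the sketch dependency-free, and all sizes and names are illustrative.

```python
# Hedged sketch of the 128-bit table scheme: F maps a 128-bit integer to
# an output index via cumulative interval boundaries; the seed for the
# real output is drawn uniformly from its own interval.
import bisect
import hashlib
import itertools
import random

BITS = 128
SPACE = 1 << BITS

def make_table(weights):
    """Boundaries such that output k owns [bounds[k-1], bounds[k])."""
    total = sum(weights)
    return [c * SPACE // total for c in itertools.accumulate(weights)]

def F(x: int, bounds) -> int:
    """Map a 128-bit integer to an output index."""
    return bisect.bisect_right(bounds, x)

def step(x: int, key: bytes) -> int:
    """Keyed walk to the next 128-bit value (stand-in for AES rounds)."""
    digest = hashlib.sha256(key + x.to_bytes(16, "little")).digest()
    return int.from_bytes(digest[:16], "little")

def seed_for(real_output: int, bounds, rng=random) -> int:
    """Uniform seed inside the real output's interval."""
    lo = bounds[real_output - 1] if real_output > 0 else 0
    return rng.randrange(lo, bounds[real_output])

bounds = make_table([1] * 16)     # 16 equally likely outputs (illustrative)
x = seed_for(9, bounds)
ring = [F(x, bounds)]
for _ in range(10):               # derive 10 decoys by walking
    x = step(x, b"ring-key")
    ring.append(F(x, bounds))
assert ring[0] == 9
```

Because the seed is uniform over the real output's interval, it is distributed exactly like a walk value that happened to land on that output, which is the core of the easy-proof argument made below.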
-
tevador
and nodes will need to regenerate this with each block
-
sech1
not the whole table, just the needed part for each output. They can use FP approximations to figure out which part of the table to generate
-
sech1
It's basically the same gamma function, I'm just describing the concept
-
tevador
it might also work
-
sech1
if you see 0.12345... (128 bits), you do the reverse gamma of that number to get an approximate output value (N) and generate the table for N-2 ... N+2 to be safe
-
sech1
so 4-5 values to generate
-
sech1
But this table generation has to be deterministic and using only integer maths
-
sech1
The proof is easy. Since you choose randomly from all 128-bit numbers representing your output (and you use uniform distribution here), your 128-bit value for the real output has the same chance of happening as if you used AES repeatedly until you stumbled upon a 128-bit random value representing your output.
-
sech1
Of course the PRNG you use for this choice has to be cryptographically secure
-
sech1
hmm, that still leaves room for different implementations. The output selection algo will be fixed here, but those 95-100 bits you choose don't have to be random. Some people might use them to write swear words in the blockchain :D
-
sech1
and uncovering their own outputs by doing so, lol
-
sech1
so it is a problem. If you're willing to reveal the real output (for whatever reason), you can just select all 0 bits there...
-
sech1
On the other hand, current consensus doesn't even ban selecting the same output 11 times = the real output
-
moneromooo
That's a bug, it's supposed to :/
-
sech1
I don't know, maybe it does. But you can still select whatever outputs you want. So you can select output indices 1,2,...,10,N where N is your real output. Every time
-
moneromooo
Yes, that's allowed.
-
sech1
So my scheme with random 128-bit numbers doesn't make things worse in this aspect
-
sech1
Anyway, tevador sarang moneromooo you have food for thought now ^^^
-
tevador
3 XMR bounty for breaking igamma:
github.com/tevador/igamma#bounty
-
Isthmus
👀
-
sarang
:D
-
sarang
Of course, there is certainly a difference between provably secure and bounty-but-unbroken (to say nothing of the fact that implementations imply risks too)