-
gingeropolous
i wonder if there's any way to find out if large companies have started noticing increases in their power bills
-
gingeropolous
power usage
-
sech1
gingeropolous this botnet has been around for ages and they do things "properly", i.e. they hide the mining address on the mining machines (possible with a proxy), so this address hasn't surfaced anywhere but this IRC :D
-
sech1
Otherwise I'd have reported it already
-
gingeropolous
well, might not be a botnet, right?
-
gingeropolous
i mean, are entrepreneurial sysadmins considered botnets?
-
cohcho
gingeropolous: do you want to start an antivirus company whose first product is randomx-sniffer?
-
gingeropolous
maybe ... ? You think there's a market for that? I mean, it's open source, and it's already packaged into other antiviruses, right?
-
cohcho
not packaged yet
-
cohcho
or packaged, but at least not deployed
-
moneromooo
That'd look a bit like a mafia protection racket. Make software, then sell other software to detect the first one...
-
gingeropolous
that's called capitalism
-
moneromooo
Nah. Capitalism is earning money from money, rather than from work.
-
gingeropolous
i was kinda thinking about an "inoculation" style antivirus option, where the software runs a miner and prevents infections from other stuff
-
gingeropolous
so the end user can decide whether to pay upfront or down the line in electricity
-
gingeropolous
though in either setup, it would require coexisting with other antivirus software
-
gingeropolous
probably the most effective route for randomx-sniffer would be a package in debian / ubuntu.
-
cohcho
those who will benefit from checking their pc with randomx-sniffer don't use debian / ubuntu.
-
gingeropolous
right. I don't know the makeup of botnets, but I'm guessing a large % are like me in my early days of fiddling with this stuff, when I didn't even put fail2ban on the servers I propped up
-
gingeropolous
i haven't tried randomx-sniffer yet. does it just scan once? or does it daemonize and scan in the background for new infections?
-
cohcho
sech1, you need to report the blob from the job that was sent to the botnet node with that 15 MH/s
-
cohcho
it will be enough to prove that it was sent to a worker of that address
-
cohcho
since they don't use their own pool and xmrig-proxy doesn't touch anything except 4 bytes of nonce
-
cohcho
prepare a honeypot Windows machine, wait some time, collect enough mining malware, report their jobs to each public pool, profit
-
cohcho
but it doesn't work in my case, since I'm using monero-pool now
-
cohcho
ideally, pool operators should do this in order to ban all botnets from their pools
-
cohcho
but they are not interested in it for some reason
-
cohcho
it should be easy to modify any pool software to add an API method like search_pattern_in_active_jobs
-
cohcho
and this method isn't too expensive, so it can be exposed publicly and guarded by a reCAPTCHA
-
cohcho
in order to submit hidden miners' jobs automatically
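The proposed search_pattern_in_active_jobs method can be sketched as a plain byte search over a pool's active jobs. This is a hypothetical illustration only; `job_t` and its fields are invented for the sketch and don't correspond to any existing pool software:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical active-job registry entry: the blob the pool sent to a
 * worker, and the payout address that worker mines to. */
typedef struct
{
    const unsigned char *blob;
    size_t blob_len;
    const char *address;
} job_t;

/* Given a job blob captured from malware on an infected machine, find
 * which payout address the pool sent it to.  Returns the address, or
 * NULL if no active job contains the pattern. */
static const char *search_pattern_in_active_jobs(const job_t *jobs,
                                                 size_t njobs,
                                                 const unsigned char *pattern,
                                                 size_t plen)
{
    for (size_t i = 0; i < njobs; i++) {
        if (jobs[i].blob_len < plen)
            continue;
        for (size_t off = 0; off + plen <= jobs[i].blob_len; off++)
            if (memcmp(jobs[i].blob + off, pattern, plen) == 0)
                return jobs[i].address;
    }
    return NULL;
}
```

A linear scan like this is cheap for the small set of currently-active jobs, which is why the chat suggests it could be exposed behind a CAPTCHA.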
-
cohcho
wrong channel*
-
cohcho
jtgrassie: the comparator for share_t used in lmdb doesn't distinguish shares with the same tuple (address, timestamp), which are possible and not rare.
-
cohcho
A miner who submits such shares frequently will lose a significant portion of his payout.
-
moneromooo
What is the timestamp resolution?
-
cohcho
1 second
-
cohcho
It can be patched to sum shares in case of dups without any need to rewrite the comparator and migrate the db.
-
jtgrassie
cohcho: the shares db is indexed by height and allows dups. the comparator thus only applies to shares at the same height.
-
jtgrassie
e.g. you can have any number of shares at a given height.
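The dupsort mechanics under discussion can be sketched in C. This is a minimal illustration; the struct fields and comparator shape are assumptions for the sketch, not monero-pool's actual share_t:

```c
#include <stdint.h>
#include <string.h>
#include <time.h>

/* Hypothetical share record; monero-pool's real share_t differs. */
typedef struct share
{
    uint64_t height;
    uint64_t difficulty;
    time_t timestamp;
    char address[96];
} share_t;

/* A comparator in the style registered via mdb_set_dupsort: the db key
 * is the height, so this only orders duplicates at the same height, by
 * address and then timestamp.  Returning 0 when both match means lmdb
 * considers the two shares the same duplicate -- which is the bug being
 * discussed: a put with MDB_APPENDDUP then fails with MDB_KEYEXIST, and
 * the second share is silently dropped if the return code isn't
 * checked. */
static int compare_share(const share_t *va, const share_t *vb)
{
    int c = strcmp(va->address, vb->address);
    if (c != 0)
        return c < 0 ? -1 : 1;
    if (va->timestamp != vb->timestamp)
        return va->timestamp < vb->timestamp ? -1 : 1;
    return 0; /* same (address, timestamp): treated as one share */
}
```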
-
cohcho
hyc, what happens with duplicates in lmdb that the comparator set by mdb_set_dupsort doesn't distinguish?
-
jtgrassie
the sort then applies to those dups
-
cohcho
jtgrassie: my answer is: new dup will replace old one if the comparator doesn't distinguish them
-
jtgrassie
there is no replacing, just appending in this table
-
cohcho
your code doesn't check for errors when it calls mdb_cursor_put with a new share
-
cohcho
in fact it returns MDB_KEYEXIST
-
jtgrassie
`mdb_cursor_put(cursor, &key, &val, MDB_APPENDDUP);`
-
cohcho
and doesn't store the new share
-
jtgrassie
that is in store_share. it appends.
-
cohcho
s/new dup will replace old one/new dup will not be stored/
-
cohcho
make a small test: clean the lmdb db, start a testnet pool, connect xmrig to it with small diff, wait for at least 2 submits, wait until the pool initializes the randomx dataset and starts processing those 2 shares in the same second, check the db content
-
cohcho
jtgrassie: what's the result?
-
cohcho
btw, memory footprint after 24hrs is awesome (<10 MiB of VmRSS)
-
jtgrassie
I don't have time to test right now, but I never noticed failed appends before, and there are even logs which would catch a failed transaction when storing a share.
-
jtgrassie
I will double check later though for sure.
-
cohcho
monero-pool doesn't check the return code of mdb_cursor_put for shares, only mdb_txn_commit
-
cohcho
therefore these events are not being reported
-
jtgrassie
I think you're on to something.
-
jtgrassie
compare_share always returns -1 or 1, never 0 though.
-
jtgrassie
oh, I see, it can return 0!
-
jtgrassie
if timestamps are the same.
-
jtgrassie
(and address)
-
cohcho
Since there are some memory leaks in xmrig-proxy, or a fragmented heap after malloc/free, it isn't really a solution for handling 100k connections on a cheap VPS
-
cohcho
but monero-pool can handle 100k connections even with a separate block template for each client, without any need to split the nonce space
-
cohcho
I had 1000 MiB of VmRSS after 24hrs with xmrig-proxy
-
jtgrassie
Back to the share appending: I can see the issue, and a quick fix may be using the difficulty in the comparator too.
-
jtgrassie
So address->time->diff
-
cohcho
no, the easy and right fix is to handle MDB_KEYEXIST in store_share and store the new share with .difficulty updated to the sum of both
-
jtgrassie
It's quite an edge case though, no? Same address, multiple submissions within the same second?
-
cohcho
Imagine a few workers of the same person connected to monero-pool
-
cohcho
Is it too rare?
-
Snipa
Not impossible by any stretch of the imagination. Diff 5k with 15kh/s would suggest 3 shares in 1 second. *shrug*
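For context, Snipa's figure is just the expected-value estimate shares/s ≈ hashrate / difficulty (a sketch, assuming shares arrive at the average rate):

```c
#include <stdint.h>

/* Expected shares per second for a miner hashing at `hashrate` H/s
 * against a share difficulty of `difficulty`: on average, one share is
 * found per `difficulty` hashes. */
static uint64_t shares_per_second(uint64_t hashrate, uint64_t difficulty)
{
    return hashrate / difficulty; /* integer-truncated expectation */
}
```

So 15 kH/s at diff 5k gives about 3 shares per second, all carrying the same one-second timestamp.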
-
jtgrassie
wrt MDB_KEYEXIST: yes, of course handle that too, but the comparator needs fixing.
-
cohcho
no need to change comparator
-
jtgrassie
there is
-
cohcho
then push your commit and I'll point out the errors in it
-
jtgrassie
because the db is dupsort
-
cohcho
your key in db is tuple(height, address, timestamp)
-
cohcho
your task is to handle similar entries for the same key
-
cohcho
by summarizing their difficulty
-
jtgrassie
the "key" is height and value is a share_t
-
cohcho
de facto key for physically stored entries*
-
cohcho
I agree that logical key in lmdb abstraction is different
-
jtgrassie
the problem arises with multiple shares for the same address in the same second. I could just append (eg MDB_APPEND instead of MDB_APPENDDUP) but then there could be an integrity issue (eg each miner-submitted share must be unique).
-
jtgrassie
You wouldn't want a miner to be able to submit the exact same share multiple times.
-
jtgrassie
I'll look into this though. Thanks for the heads up.
-
cohcho
you check shares for uniqueness by storing tuple(share nonce, job extra nonce, ..., ...) in job->submissions
-
cohcho
db_shares is being used only for payout
-
cohcho
There is no new problem like the one you've described
-
jtgrassie
Yes I think that's right.
-
cohcho
I suppose no one has deployed monero-pool anywhere yet, since without this fix workers' high-frequency submissions will be robbed
-
moneromooo
That's a good way to incentivize not using diff settings that spam the pool.
-
cohcho
moneromooo: a few workers with the same address will suffer too
-
cohcho
so an incentive for a single worker too?
-
jtgrassie
So we cannot use MDB_APPEND instead of MDB_APPENDDUP, as it gets MDB_KEYEXIST too.
-
jtgrassie
I have to make use of the compare function it seems.
-
jtgrassie
which is a simple fix.
-
cohcho
on a failed try with MDB_APPENDDUP, retrieve the old value with MDB_GET_BOTH, add the old difficulty to the new share, store the share with flags == 0
-
cohcho
nothing to change in case of successful mdb_cursor_put
-
cohcho
no need to change comparator yet
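The recovery path outlined above could look roughly like this inside store_share. The struct and helper here are assumptions for the sketch; the lmdb flow (MDB_APPENDDUP put, MDB_GET_BOTH fetch, re-put) is summarized in the comment rather than executed:

```c
#include <stdint.h>

/* Hypothetical trimmed-down share record. */
typedef struct
{
    uint64_t timestamp;
    uint64_t difficulty;
} share_t;

/* Sketch of the suggested fix: when
 *   mdb_cursor_put(cursor, &key, &val, MDB_APPENDDUP)
 * returns MDB_KEYEXIST, fetch the colliding duplicate with
 * mdb_cursor_get(cursor, &key, &old, MDB_GET_BOTH), fold its
 * difficulty into the incoming share, and put again with flags == 0 so
 * the merged record replaces the stored duplicate.  The merge step
 * itself is just a sum, so no payout is lost: */
static void merge_duplicate_share(share_t *incoming, const share_t *stored)
{
    incoming->difficulty += stored->difficulty;
}
```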
-
jtgrassie
yes, that's a nice way. add diff to existing.
-
jtgrassie
it loses the tracking though.
-
cohcho
increase the time resolution if you want more detail
-
jtgrassie
I think I prefer the comparator approach: just never return 0.
-
cohcho
but the difficulty should be summed in any case
-
cohcho
I don't know whether lmdb requires a strict ordering for the comparator or not
-
jtgrassie
I'm testing now
-
cohcho
A looser comparator, when strict ordering is required, may cause data corruption
-
cohcho
And your comparator will only decrease the probability of such an event
-
cohcho
to eliminate it completely with the comparator, you would need to change share_t too
-
cohcho
and migrate db
-
jtgrassie
It seems to work fine with `return (va->timestamp < vb->timestamp) ? -1 : 1;`
-
jtgrassie
no migration needed
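In context, that one-liner would make the tail of the comparator look like this (a sketch; the struct fields are assumed). It deliberately never returns 0, so shares with equal (address, timestamp) still sort as distinct dups; as cohcho cautions, this is no longer a strict ordering, which lmdb appears to tolerate in jtgrassie's test:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical share record; monero-pool's real share_t differs. */
typedef struct
{
    uint64_t timestamp;
    char address[96];
} share_t;

static int compare_share_fixed(const share_t *va, const share_t *vb)
{
    int c = strcmp(va->address, vb->address);
    if (c != 0)
        return c < 0 ? -1 : 1;
    /* never return 0: two shares with equal timestamps are kept as
     * separate dups instead of colliding with MDB_KEYEXIST */
    return (va->timestamp < vb->timestamp) ? -1 : 1;
}
```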
-
cohcho
but extra space in db
-
jtgrassie
right, but it's still a tiny amount, and I think at some point we'll want that history of shares submitted.
-
cohcho
ok
-
cohcho
one more fix on the way to being a plug&play solution
-
cohcho
there are some other problems that I've fixed for myself, but I'm not sure about their importance for others
-
jtgrassie
same issue with blocks and payments btw
-
cohcho
so I've found a problem for you :D
-
jtgrassie
yes, ty
-
cohcho
broken daemon support in xmrig-proxy seems not to be important at all
-
cohcho
since this problem arises only with many workers (> 256) or frequent disconnect/reconnect of workers