03:53:32 holy cow https://xmr.nanopool.org/account/47NG3EVgM374qBkK2Cn4PpD5PwajyqDj41URpkrc9nj1CVRqUCFcwjCFv2j5KZehUVeo34GvBq7e1hFTohnjK9VG9cU3f1G
03:57:42 i wonder if there's any way to find out if large companies have started noticing increases in their power bills
03:57:45 power usage
07:07:17 gingeropolous this botnet has been around for ages and they do things "properly", i.e. they hide the mining address on the mining machines (possible with a proxy), so this address hasn't surfaced anywhere but this IRC :D
07:07:24 Otherwise I'd have reported it already
13:26:51 well, might not be a botnet, right?
13:27:15 i mean, are entrepreneurial sysadmins considered botnets?
13:37:45 gingeropolous: do you want to start an antivirus company with the first product randomx-sniffer?
13:39:23 maybe ... ? You think there's a market for that? I mean, it's open source, and it's already packaged into other antiviruses, right?
13:40:07 not packaged yet
13:40:35 or packaged, but at least not deployed
13:41:33 That'd look a bit like a mafia protection racket. Make software, then sell other software to detect the first one...
13:41:52 that's called capitalism
13:42:47 Nah. Capitalism is earning money from money, rather than from work.
13:43:09 i was kinda thinking about an "inoculation" style antivirus option, where the software runs a miner and prevents infections from other stuff
13:43:22 so the end user can decide whether to pay upfront or down the line in electricity
13:44:08 though in either setup, it would require coexisting with other antivirus software
13:45:07 probably the most useful route for randomx-sniffer would be a package in debian / ubuntu.
13:47:26 effective, not useful.
13:49:43 those who would benefit from checking their pc with randomx-sniffer don't use debian / ubuntu.
13:51:01 right. I don't know the makeup of botnets, but i'm guessing a large % are like me in my early days of fiddling with this stuff, where i didn't even put fail2ban on the servers i propped up
13:53:40 i haven't tried randomx-sniffer yet. does it just scan once? or does it daemonize and scan in the background for new infections?
14:05:51 sech1, you need to report a blob from a job that was sent to a botnet node of that 15 MH/s address
14:06:14 it will be enough to prove that it was sent to a worker of that address
14:06:39 since they don't use their own pool and xmrig-proxy doesn't touch anything except the 4 nonce bytes
14:07:46 prepare a honeypot windows machine, wait some time, collect enough mining malware, report their jobs to each public pool, profit
14:10:54 but it doesn't work in my case, since i'm using monero-pool now
14:18:57 ideally, pool operators should do this in order to ban all botnets from their pools
14:19:08 but they are not interested in it for some reason
14:49:43 it should be easy to modify any pool software to add an api method like search_pattern_in_active_jobs
14:50:15 and this method isn't too expensive, so it can be exposed and guarded by grecaptcha
14:50:43 in order to submit hidden miners' jobs automatically
14:51:04 wrong channel*
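A rough sketch in C of the search_pattern_in_active_jobs idea mentioned at 14:49:43: since xmrig-proxy only rewrites the 4-byte nonce (14:06:39), a pool could prove that a reported blob was issued to one of its workers by comparing it against every active job while ignoring the nonce window. Everything here is hypothetical — job_t, active_jobs, NONCE_OFFSET and the field names are illustrative assumptions, not monero-pool's actual structures.

```c
/*
 * Hypothetical sketch of search_pattern_in_active_jobs (14:49:43).
 * Given a hashing blob reported from a suspected botnet worker, check whether
 * it matches one of the pool's active jobs everywhere except the nonce bytes,
 * since xmrig-proxy only rewrites the 4-byte nonce (14:06:39).
 */
#include <stddef.h>
#include <string.h>

#define NONCE_OFFSET 39   /* assumed nonce position in the hashing blob */
#define NONCE_SIZE    4

typedef struct job {
    unsigned char blob[128];   /* job hashing blob as sent to the worker */
    size_t blob_len;
    char address[128];         /* wallet address the job was issued to */
    struct job *next;
} job_t;

/* Returns the wallet address the blob was issued to, or NULL if no active
 * job matches the reported blob outside the nonce window. */
static const char *
search_pattern_in_active_jobs(const job_t *active_jobs,
                              const unsigned char *blob, size_t blob_len)
{
    for (const job_t *j = active_jobs; j; j = j->next)
    {
        if (blob_len <= NONCE_OFFSET + NONCE_SIZE || j->blob_len != blob_len)
            continue;
        /* Compare everything before and after the nonce window. */
        if (memcmp(j->blob, blob, NONCE_OFFSET) == 0 &&
            memcmp(j->blob + NONCE_OFFSET + NONCE_SIZE,
                   blob + NONCE_OFFSET + NONCE_SIZE,
                   blob_len - NONCE_OFFSET - NONCE_SIZE) == 0)
            return j->address;
    }
    return NULL;
}
```

A pool exposing a rate-limited (or captcha-guarded, as suggested above) endpoint around such a check could then ban the matched address.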
21:00:19 jtgrassie: the comparator for share_t used in lmdb doesn't differentiate shares with the same (address, timestamp) tuple, which are possible and not rare.
21:01:34 A miner who submits such shares frequently will lose a significant portion of his payout.
21:04:45 What is the timestamp resolution?
21:05:19 1 second
21:10:49 It can be patched to sum shares in case of dups without any need to rewrite the comparator and migrate the db.
21:52:14 cohcho: the shares db is indexed by height and allows dups. the comparator thus only applies to shares at the same height.
21:53:36 e.g. you can have any number of shares at a given height.
21:53:53 hyc, what happens with duplicates in lmdb that are not differentiated by the comparator set with mdb_set_dupsort?
21:54:16 the sort then applies to those dups
21:54:49 jtgrassie: my answer is: the new dup will replace the old one if the comparator doesn't differentiate them
21:55:33 there is no replacing, just appending in this table
21:56:17 your code doesn't check for errors when it does mdb_cursor_put with a new share
21:56:26 in fact it returns MDB_KEYEXIST
21:56:43 `mdb_cursor_put(cursor, &key, &val, MDB_APPENDDUP);`
21:57:15 and doesn't store the new share
21:57:59 that is in store_share. it appends.
22:00:37 s/new dup will replace old one/new dup will not be stored/
22:01:44 make a small test: clean the lmdb db, start a testnet pool, connect xmrig to it with a small diff, wait for at least 2 submits, wait until the pool initializes the randomx dataset and starts processing those 2 shares in the same second, check the db content
22:05:10 jtgrassie: what's the result?
22:05:59 btw, memory footprint after 24hrs is awesome (<10MiB of VmRSS)
22:06:45 I don't have time to test right now, but I never noticed failed appends before, and there are even logs which would catch a failed transaction for storing a share.
22:07:00 e.g. https://github.com/jtgrassie/monero-pool/blob/c7984e0e5f914751c853ba2e04eab635c3eba9ab/src/pool.c#L2463
22:08:18 I will double check later though for sure.
22:11:01 monero-pool doesn't check the return code of mdb_cursor_put for shares, only mdb_txn_commit
22:11:22 therefore these events are not being reported
22:11:40 I think you're on to something.
22:13:07 compare_share always returns -1 or 1, never 0 though.
22:13:53 oh, I see, it can return 0!
22:14:16 if timestamps are the same.
22:14:27 (and address)
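A minimal sketch of the failure mode just confirmed, assuming a share_t and dupsort comparator shaped like the ones described above (the field names, ADDRESS_MAX and store_share_checked are illustrative, not monero-pool's code): two shares from the same address in the same second compare equal, so mdb_cursor_put with MDB_APPENDDUP returns MDB_KEYEXIST and the second share is silently lost unless the return code is checked.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <lmdb.h>

#define ADDRESS_MAX 128

/* Illustrative share record; the real layout lives in monero-pool. */
typedef struct share {
    uint64_t height;
    uint64_t difficulty;
    char     address[ADDRESS_MAX];
    time_t   timestamp;
} share_t;

/* dupsort comparator: orders shares under one height key by (address, timestamp).
 * Two shares from the same address in the same second compare equal. */
static int
compare_share(const MDB_val *a, const MDB_val *b)
{
    const share_t *va = (const share_t *)a->mv_data;
    const share_t *vb = (const share_t *)b->mv_data;
    int rc = memcmp(va->address, vb->address, ADDRESS_MAX);
    if (rc != 0)
        return rc;
    if (va->timestamp != vb->timestamp)
        return va->timestamp < vb->timestamp ? -1 : 1;
    return 0;   /* same address, same second -> "equal" as far as LMDB is concerned */
}

/* Store a share under its height key and surface MDB_KEYEXIST instead of
 * ignoring it: with MDB_APPENDDUP, a dup that compares equal to an existing
 * one is rejected rather than appended. */
static int
store_share_checked(MDB_txn *txn, MDB_dbi db_shares, share_t *s)
{
    MDB_cursor *cur;
    MDB_val key = { sizeof(s->height), &s->height };
    MDB_val val = { sizeof(*s), s };
    int rc = mdb_cursor_open(txn, db_shares, &cur);
    if (rc != 0)
        return rc;
    rc = mdb_cursor_put(cur, &key, &val, MDB_APPENDDUP);
    if (rc == MDB_KEYEXIST)
        fprintf(stderr, "duplicate share dropped: %s @ %ld\n",
                s->address, (long)s->timestamp);
    mdb_cursor_close(cur);
    return rc;
}
```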
22:18:22 Since there are some memory leaks in xmrig-proxy, or a fragmented heap after malloc/free, it isn't really a solution for handling 100k connections on a cheap vps
22:18:53 but monero-pool can handle 100k connections even with a separate block template for each client, without any need to split the nonce space
22:19:46 I had 1000MiB of VmRSS after 24hrs with xmrig-proxy
22:21:48 Back to the share appending: I can see the issue, and a quick fix might be using the difficulty in the comparator too.
22:22:13 So address->time->diff
22:23:28 no, the easy and right fix is to handle MDB_KEYEXIST in store_share and store the new share with .difficulty updated to the sum of both
22:23:57 It's quite an edge case though, no? Same address, multiple submissions within the same second?
22:24:23 Imagine a few workers of the same person connected to monero-pool
22:24:30 Is it too rare?
22:24:34 Not impossible by any stretch of the imagination. Diff 5k with 15kh/s would suggest 3 shares in 1 second. *shrug*
22:24:34 wrt MDB_KEYEXIST, yes of course handle that too, but the comparator needs fixing.
22:24:50 no need to change the comparator
22:25:13 there is
22:25:34 then push your commit, i'll point out the errors in it
22:25:37 because the db is dupsort
22:26:21 your key in the db is the tuple (height, address, timestamp)
22:26:35 your task is to handle similar entries for the same key
22:26:46 by summing their difficulty
22:28:08 the "key" is height and the value is a share_t
22:29:22 de facto key for physically stored entries*
22:29:42 I agree that the logical key in the lmdb abstraction is different
22:34:04 the problem arises with multiple shares for the same address in the same second. I could just append (eg MDB_APPEND instead of MDB_APPENDDUP) but then there could be an integrity issue (eg each miner-submitted share must be unique).
22:34:59 You wouldn't want a miner to be able to submit the exact same share multiple times.
22:35:22 I'll look into this though. Thanks for the heads up.
22:37:38 you check shares for being unique by storing the tuple (share nonce, job extra nonce, ..., ...) in job->submissions
22:38:10 db_shares is only used for payout
22:38:44 there is no new problem of the kind you've described
22:41:15 Yes I think that's right.
22:47:23 I suppose no one has deployed monero-pool anywhere yet, since without this fix workers with high-frequency submissions will be robbed
22:48:01 That's a good way to incentivize not using diff settings that spam the pool.
22:56:36 moneromooo: a few workers with the same address will suffer too
22:56:57 so an incentive for single workers too?
23:19:06 So we cannot use MDB_APPEND instead of MDB_APPENDDUP as it gets MDB_KEYEXIST too.
23:25:11 I have to make use of the compare function it seems.
23:25:39 which is a simple fix.
23:25:41 on a failed put with MDB_APPENDDUP, retrieve the old value with MDB_GET_BOTH, add the old difficulty to the new share, store the share with flags == 0
23:25:49 nothing to change in case of a successful mdb_cursor_put
23:26:38 no need to change the comparator yet
23:27:26 yes, that's a nice way. add diff to existing.
23:27:57 it loses the tracking though.
23:28:23 update the time resolution in case you want more details
23:28:25 I think I prefer the comparator approach: just never return 0.
23:28:29 but the difficulty should be summed in any case
23:29:20 I don't know whether lmdb requires strict ordering for the comparator or not
23:29:28 I'm testing now
23:29:58 A looser comparator, if strict ordering is required, may cause data corruption
23:30:33 And your comparator will only decrease the probability of such an event
23:30:48 to eliminate it completely with the comparator you would need to change share_t too
23:30:56 and migrate the db
23:37:30 It seems to work fine with `return (va->timestamp < vb->timestamp) ? -1 : 1;`
23:38:02 no migration needed
23:38:42 but extra space in the db
23:40:27 right, but it's still a tiny amount. and i think at some point we'll want that history of shares submitted.
23:41:36 ok
23:45:21 one more fix on the way to be a plug&play solution
23:46:24 there are some other problems that i've fixed for myself but i'm not sure about their importance for others
23:46:27 same issue with blocks and payments btw
23:47:59 so i've found a problem for you :D
23:50:09 yes, ty
23:58:07 broken daemon support in xmrig-proxy seems to be not important at all
23:58:27 since this problem only arises after many workers (> 256) or frequent disconnects/reconnects of workers
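For completeness, a sketch of the merge handling proposed at 23:25:41, reusing the illustrative share_t and compare_share from the previous sketch: on MDB_KEYEXIST, locate the stored dup with MDB_GET_BOTH, add its difficulty to the new share, and replace it. Where the log suggests re-storing with flags == 0, this version deletes the old dup and re-inserts the merged record, so it does not depend on LMDB overwriting a dup whose bytes differ but compare equal. Treat it as an illustration, not monero-pool's actual fix.

```c
/* Merge a colliding share into the stored one instead of silently dropping it.
 * Requires share_t, ADDRESS_MAX and the compare_share dupsort comparator from
 * the sketch above, registered on db_shares via mdb_set_dupsort(). */
static int
store_or_merge_share(MDB_txn *txn, MDB_dbi db_shares, share_t *s)
{
    MDB_cursor *cur;
    MDB_val key = { sizeof(s->height), &s->height };
    MDB_val val = { sizeof(*s), s };
    int rc = mdb_cursor_open(txn, db_shares, &cur);
    if (rc != 0)
        return rc;

    rc = mdb_cursor_put(cur, &key, &val, MDB_APPENDDUP);
    if (rc == MDB_KEYEXIST)
    {
        /* Position on the stored dup that compares equal to the new share
         * (same address, same second), then read it back and fold its
         * difficulty into the new share. */
        MDB_val probe = { sizeof(*s), s };
        MDB_val stored;
        rc = mdb_cursor_get(cur, &key, &probe, MDB_GET_BOTH);
        if (rc == 0)
            rc = mdb_cursor_get(cur, &key, &stored, MDB_GET_CURRENT);
        if (rc == 0)
        {
            const share_t *old = (const share_t *)stored.mv_data;
            s->difficulty += old->difficulty;

            /* Replace the stored dup with the merged share. */
            rc = mdb_cursor_del(cur, 0);
            if (rc == 0)
            {
                val.mv_size = sizeof(*s);
                val.mv_data = s;
                rc = mdb_cursor_put(cur, &key, &val, 0);
            }
        }
    }
    mdb_cursor_close(cur);
    return rc;
}
```

The alternative that was actually tested above (23:37:30) instead makes compare_share never return 0, keeping every share row at the cost of a little extra space in the db.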