00:55:13 howdy folks, i've git cloned and built the xmrig miner on linux, is there some simple way for me to cap how much CPU it uses on the box?
00:58:43 I assume you checked --help. If there's nothing, then cgroups I guess.
01:03:03 `u=$(whoami); limit_pct=50; cg=xmrig_cpu; sudo cgcreate -t $u:$u -a $u:$u -g cpu:$cg; echo $(( limit_pct * 100000 / 100 )) | sudo tee /sys/fs/cgroup/cpu/$cg/cpu.cfs_quota_us; cgexec --sticky -g cpu:$cg xmrig --algo rx/0 --url=stratum+tcp://donate.v2.xmrig.com:3333 -u '0000000000000000000000000000000000000000000000000000000000000000' --no-color`
01:04:00 change limit_pct to whatever you want, and install libcgroup
01:05:28 thanks cohcho
01:06:02 use --threads if you're ok with discrete limit values
01:14:49 randomx is the default CPU algo at the moment?
01:15:28 There is no such thing as a default algo I suppose
01:15:33 I don't know
03:40:18 the comparator should yield a strict ordering, since lookups use binary search
03:40:27 and can fail if not strictly ordered
05:26:07 hyc, my tests seemed to work with the less strict comparator. I just needed a way to append exact duplicates, and with the comparator returning -1 or 1 (never 0) the append works.
05:39:54 I can't see any other way to have duplicate KV items (MDB_DUPSORT is needed for dup keys, MDB_APPEND needs sorted keys and MDB_APPENDDUP needs sorted data). Return 0 from a comparator and MDB_KEYEXIST occurs.
05:46:19 The 1/-1 comparator is only used for data (mdb_set_dupsort); I'm still using strict comparators for keys.
14:28:38 jtgrassie: I would expect the -1/1 (not 0) comparator to fail on lookup though
14:33:09 although searching with MDB_SET_RANGE should still work
16:19:11 vtnerd: you mean lookup based on data value, right? because I'm not doing that. it's all iterating based on key (which has a strict comparator).
16:24:06 e.g.
MDB_SET and MDB_NEXT_DUP
19:02:14 ah, if you're only iterating with NEXT_DUP then binary search is irrelevant, so you should be safe
22:10:28 cool, thanks for confirming
22:10:38 so yes, all calls are either MDB_SET->MDB_NEXT_DUP or MDB_FIRST->MDB_NEXT when iterating
22:11:20 then writes are either appends (using MDB_APPENDDUP)
22:11:57 or overwrites of the current item (MDB_CURRENT)
22:13:49 When I have a change that necessitates a db migration, I'll revert the data comparators and migrate the data items to be unique.
22:19:04 jtgrassie: src/webui.c:start_web_ui() uses MHD_USE_SELECT_INTERNALLY, why not EPOLL?
22:21:03 It's what's recommended in the docs when using a single thread.
22:21:05 API requests stop working as soon as many clients are connected or many connects/disconnects occur.
22:21:55 I've changed it to MHD_USE_EPOLL_INTERNAL_THREAD in order to have a working API.
22:22:03 I recommend using a proxy (a la apache/nginx) for the API. That's what I use on the reference pool.
22:22:34 That is then set to cache responses.
22:23:04 I need realtime stats from monero-pool, I don't see any need to cache anything here.
22:23:36 I'm talking about like a 15 second cache, not a huge timeout.
22:24:36 Have you tested monero-pool with > 256 clients?
22:24:57 The API connection will just get an fd > FD_SETSIZE and nothing will be able to use the API.
22:25:26 caching will not help, unless it's using a persistent socket
22:26:48 api or pool?
22:27:03 pool yes, over 10k tested
22:27:28 the api uses an apache front-end with cache https://paste.debian.net/1124412/
22:28:39 I do think using epoll probably makes sense, I assume the docs just suggest MHD_USE_SELECT_INTERNALLY for portability.
22:29:02 either way, you really should cache and proxy the web api
22:29:19 and that's not the job of the pool.
22:31:32 Answer a simple question: is it possible that a new connection to the webui port will always get a socket fd > FD_SETSIZE if the number of active clients is > FD_SETSIZE + some margin?
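The never-returns-zero data comparator discussed above (registered with mdb_set_dupsort) can be sketched roughly as follows. This is an illustration, not monero-pool's actual code: `val_t` just mirrors LMDB's `MDB_val` so the sketch compiles stand-alone (real code would include `<lmdb.h>` and use `MDB_val` directly), and `dup_cmp_no_zero` is a hypothetical name.

```c
#include <stddef.h>
#include <string.h>

/* mirrors LMDB's MDB_val for a stand-alone sketch */
typedef struct { size_t mv_size; void *mv_data; } val_t;

/* memcmp-style ordering over data items, except that exact duplicates
 * are reported as "less than" instead of equal.  Because the put path
 * never sees equality, MDB_APPENDDUP accepts duplicate data items and
 * MDB_KEYEXIST never fires.  This is only safe because reads iterate
 * with MDB_SET/MDB_NEXT_DUP and never binary-search the data values. */
static int dup_cmp_no_zero(const val_t *a, const val_t *b)
{
    size_t min = a->mv_size < b->mv_size ? a->mv_size : b->mv_size;
    int r = memcmp(a->mv_data, b->mv_data, min);
    if (r == 0 && a->mv_size != b->mv_size)
        r = a->mv_size < b->mv_size ? -1 : 1;
    return r == 0 ? -1 : r; /* never return 0 */
}
```

With a real LMDB handle this would be passed as the comparator argument to mdb_set_dupsort on a database opened with MDB_DUPSORT.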
22:32:41 I don't know the probability of this event, but it occurred here a few times in a row.
22:48:07 a persistent connection to the webui port established at the beginning works without any problem; so this issue probably doesn't matter outside of my environment
23:02:14 as I say, you really gotta use a proxy and cache for the webui. It's the most efficient way to let the pool have lots of connections and the webui not affect pool performance.
23:05:40 MHD_USE_EPOLL_LINUX_ONLY would definitely help on Linux, but not on other platforms.
23:07:08 FD_SETSIZE isn't a valid constant anymore, you should be using getdtablesize()
23:07:50 the number of descriptors depends on the nofile ulimit anyway
23:16:23 the BUGS section of `man 2 select` mentions the 1024 limit for select(); getdtablesize() equals `ulimit -n` here, which is really high (999999); I've never had 999999 active sockets, so 1024 looks like a realistic limit, at least for select().
23:19:45 and the webui defers to MHD (it doesn't even look at FD_SETSIZE). and if MHD is behind a caching proxy this is all redundant talk anyway.
23:21:10 the microhttpd lib will fail somewhere inside itself on each attempt to start select() with an fd > FD_SETSIZE
23:21:17 your code will not report anything
23:21:31 the pool does look at FD_SETSIZE though, so yes, per hyc's point I should change this at least.
23:22:21 no need to change the libevent listener since it uses the best available poller, epoll in my case
23:22:31 but the webui is using select() since you've set it explicitly
23:23:26 w.r.t. MHD, I could feature-test EPOLL and only use that if available for the webui, I just never bothered because I always intended the webui to be behind a caching proxy.
23:24:04 That would be portable and fix my issue, good
23:25:49 ok I'll make the change, but please, use a proxy cache, they are better suited to the webui's needs.
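The mismatch discussed above can be shown in a few lines: select() can only monitor descriptors numbered below the compile-time FD_SETSIZE cap (1024 with glibc), while the per-process descriptor table (getdtablesize(), i.e. `ulimit -n`) can be far larger. A minimal sketch, with illustrative function names:

```c
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

/* Returns 1 if descriptor fd can be represented in an fd_set and thus
 * monitored by select(); 0 if fd >= FD_SETSIZE, the case where a
 * select()-based server (e.g. MHD_USE_SELECT_INTERNALLY) silently
 * cannot wait on a newly accepted socket. */
static int selectable_fd(int fd)
{
    return fd >= 0 && fd < FD_SETSIZE;
}

static void print_fd_limits(void)
{
    /* FD_SETSIZE is fixed at compile time; getdtablesize() reflects the
     * runtime nofile limit and is what epoll-based code is bounded by */
    printf("FD_SETSIZE      = %d\n", FD_SETSIZE);
    printf("getdtablesize() = %d\n", getdtablesize());
}
```

A process holding more than roughly FD_SETSIZE sockets will hand out new fds above the cap, which is consistent with the failure described above.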
23:26:15 I need the api mostly to monitor what's going on with workers
23:26:25 It isn't exposed to others
23:26:43 Also I've added a few methods to get worker stats
23:27:32 my environment is too dynamic to have a caching timeout > 1 second
23:28:25 You speak of lots of changes you are making. Can you show/share?
23:28:36 They are ugly but they work
23:28:41 I will after some testing