11:08:06 "tevador: because nobody is hashing for about 4 seconds every 2048 blocks"
11:08:24 What's the most optimal way to use next_seed_hash?
13:28:51 total miners 32385
13:29:55 d'you have a scheduled script posting that count for you? :P
13:32:33 I'm waiting for "total miners: -32645" in a few hundred...
13:35:53 lol
13:52:45 i am the bot
13:52:47 i am the script
13:52:55 all your base are belong to us
13:53:21 i only post if its higher than the last i can see in the chat scrollback
13:54:14 someday someone that cares could parse their irc logs for the history. though my scrollback doesn't show day stamps, just time stamps. so, could require some fancy bashing
13:56:52 awk '/Day changed/{D=$5" "$6" "$7}/ginger.*total miners/{print D, $0}'
13:57:33 (should work with irssi, "Day changed" is what it says)
13:58:28 hm yeah, finch doesn't show me when days change either
15:11:48 aren't we coming up on 3 months soon? ASICs ought to be visible soon
15:12:17 /e
15:12:32 /s
15:13:13 I heard there's a company out of Santa Clara working on one
15:17:51 good luck to them
15:18:13 more Sillycon Valley VC idiocy
15:18:37 Their CEO is an MIT graduate electrical engineer, so I think it would be dangerous to dismiss her at this stage
15:19:53 Still nonsense. If she knows any cool tricks, she'd make more money poaching on Intel's turf.
15:20:10 I think that's what they're currently doing ;)
15:20:54 as I said before - good luck to them ;)
15:22:15 Intel themselves are quite vulnerable at the moment, of course https://semiaccurate.com/2020/02/06/intel-officially-craters-xeon-pricing/
15:23:09 Actually I really wish Mill or REX Computing would get off their collective behinds and make some RandomX processors
15:24:17 It would be an excellent way to demo their ideas
15:24:40 is anyone actually building Mill designs?
15:24:48 AFAIK no
15:24:53 didn't think so
15:25:48 I think they're falling into the researcher's trap, designing ideas which are 10 years out rather than bringing 3-years-out ideas to mass production
15:26:48 Something I really appreciate about Tesla is they're not fooling around with solid state batteries, instead they're scaling up mass production of current technology
15:27:15 yes, Tesla was quite pragmatic
15:27:32 You can sink a lot of time and money into ideas which are "just 10 years away"
15:27:41 though now they've built up enough capital they can afford to churn out gigafactories and do some novel battery research
15:28:27 I think they're playing a cynical game of letting the other auto manufacturers fund solid state research and they're just going to license it once it's ready and then focus on mass production IP
15:29:03 which other manufacturers are researching it? Why spend the money when they can just license Tesla's tech for free?
15:29:50 https://www.navigantresearch.com/news-and-views/is-2020-the-year-of-solid-state-batteries-for-evs
15:30:02 > Major automakers like Ford, Hyundai, Nissan, Toyota, and Volkswagen are investing in solid-state battery research, as are some Chinese EV companies and Fisker.
15:32:51 sounds like severe low-temperature downsides
15:33:13 I think the other auto manufacturers feel like current-gen battery tech is already tapped out so the only place to have impact is on next-gen... It's a bad look to buy your hardware from Tesla, investors are going to start asking "why not just invest in Tesla?"
15:33:22 also previous research into lithium-metal batteries has usually ended in large explosions
15:33:58 Tesla is probably looking at this thinking "one of these projects will win, that one we'll license and the other 9 will fail and we get that research for free"
15:34:06 the more practical improvement I see for the near term is replacing carbon with silicon in the anodes
15:34:16 yup, most definitely
15:37:21 And meanwhile Tesla is developing autonomous car technology so that even in a worst case scenario where someone discovers excellent battery technology but makes impossible demands to license it, Tesla will continue with current technology and upset the entire auto market with autonomous vehicles
15:37:56 What they have that Uber doesn't have is cameras on all of the cars, so just by driving a Tesla you're helping train the AI
15:48:56 i hope at some point the demand for cpu efficiency will start pushing the 4-5 GHz limits.
15:49:31 at some point parallelization doesn't get you what you need. mainly, realtime stuff for AI etc
15:49:37 well, perhaps not efficiency, but performance
15:53:01 yeah, higher clocks will trash power efficiency
15:53:13 Intel already learned that with the Pentium 4 NetBurst
15:57:05 "I heard there's a company out of Santa Clara working on one" "Their CEO is an MIT graduate electrical engineer, so I think it would be dangerous to dismiss her at this stage"
15:57:16 lol, that was very subtle
15:58:12 Lisa Su?
15:58:27 Advanced Micro Devices, Inc. (AMD) is an American multinational semiconductor company based in Santa Clara County, California
15:58:59 there's plenty of small semiconductor companies in silicon valley. that's literally why it's named that
16:05:44 At the height of the GPU mining craze, AMD could easily have jumped in themselves. They didn't, they just continued churning out GPUs as always
16:17:55 I want to see a fully async processor with an inward-feeding (reverse of the Mill) operand belt
16:21:34 Anyway, one reason why I'm highly positive toward Monero and the RandomX algorithm is because it acts as a challenge grant to push microprocessor research forward
16:25:11 15:48 < gingeropolous> i hope at some point the demand for cpu efficiency will start pushing the 4-5 GHz limits. <-- You can reduce heat output a bit by going to different processes (gallium arsenide), but really if we just write better code, lots of things can in fact be done at the same time
16:30:43 lol. yes, writing better code can get us at least an order of magnitude improvement already
16:31:39 IMO the real limitation right now is the assembly language, it's designed for a single core that just stalls every time it needs a memory access
16:32:03 Compilers are not thinking that way, and processors aren't either, but it's the way we communicate
16:36:33 WDYmean? You can't hide or avoid memory stalls
16:36:53 SPARC T1000 tried too
16:37:03 processors are reordering instructions pretty hard to hide them
16:37:28 and compilers are trying to emit instructions in such a way as to help processors achieve that
16:37:43 .... x86 mainly is reordering. ARM OOOE is pretty limited
16:37:54 x86 needs to because its ISA sucks so bad
16:38:09 reordering isn't really a critical requirement in a well-designed ISA
16:38:55 although I suppose, as you say, the model only works well for single-thread
16:39:20 compilers can't be relied on for this, with multiple threads the compiler can't account for the CPU state
16:39:41 this was the too-late realization of the Itanium designers
16:40:36 Well, I think Itanium fell into the 10-years-out trap
16:40:41 or maybe 20 years out at that time
16:40:45 lol
16:40:45 compilers weren't there yet
16:41:05 compilers can never account for the current realtime state of the CPU running multiple threads/processes
16:41:15 it's impossible, and was foolish to ever rely on that.
16:41:18 right indeed
16:41:53 My estimation is that the best thing to do is cut the code up into pieces which do not depend on memory access
16:42:11 difficult to do except in very specific algorithms
16:42:15 So a memory access is an async operation and the next chunk of code will only run after the memory comes back
16:42:35 we wrestled with this quite a lot at JPL. Large FFTs across big frame buffers.
16:42:56 Well you can do it mechanically with any code, you're just going to get a lot of very small pieces
16:42:57 you basically had to hand-tune the algo for each CPU's cache size, and break up the frame into tiles
16:44:02 FFTs are a fairly straightforward algorithm but most problems don't slice'n'dice so conveniently
16:44:19 But if you start thinking about LOAD as an async operation, then the CPU can interleave quite a large number of threads
16:44:43 however branch ops throw a wrench in the works
16:45:09 I'm told that in the GPU space, this type of interleaving is already happening
16:45:47 But I don't know that well myself so I just go on what I've been told
16:47:10 like I said, in scientific computing, we've been doing this for at least 20+ years
16:47:18 *nod*
16:47:48 I worked on optimized FFTs for the Intel i860, same story - load-to-use latency, reorder instructions so they don't try to operate before operands arrive
16:48:00 yup
16:49:37 but that's static reordering in the assembly code. not dynamic inside the CPU
17:40:45 cjd: memory load instructions on AMD GPUs return immediately, you have to explicitly issue an s_waitcnt vmcnt(N) instruction to use their results
17:41:01 and you can insert as many other instructions in between as you want
17:41:23 hmm, that's clever
17:41:41 I actually use it in the RandomX implementation for AMD GPUs
17:41:59 reorder memory loads up, s_wait* down in the code flow
17:42:14 *nod*
17:42:26 You're not using cache, IIRC
17:42:53 So basically every scratchpad hit is a memory access
17:43:35 I don't use it, and maybe putting the first 16 KB into local memory could give better performance
17:43:49 but it complicates every memory access to the scratchpad
17:45:48 yeah, hard to beat straight-line perf, adding special cases tends to make lots of things worse
17:49:39 https://git.dero.io/DERO_Foundation/RandomX
17:49:55 they claim RandomX is copyright Dero lol
17:50:50 I get a 500
17:50:58 502
17:51:02 bad gateway
17:51:37 it's a Go rewrite but still AFAIK you can't change copyright like this?
17:52:17 hmm it worked 1 minute ago
17:54:33 they can claim copyright on their Go implementation, but they have to credit tevador
17:55:48 how about taking square roots of negative numbers? https://github.com/xmrig/xmrig/pull/1553
17:57:02 LOL
17:58:32 17:43 < sech1> I don't use it, and maybe putting first 16 KB into local memory can give better performance <-- so you can interleave quite a large number of computations, I suppose
17:58:58 they only provided x86 JIT, not arm64
17:59:46 sounds like they didn't test adequately, would've uncovered that problem on different platforms
17:59:53 actually on Navi the GPU core is underutilized, it can run at half the speed (900 MHz) with the same hashrate
18:00:17 so maybe using local memory there to reduce memory bandwidth could really help
18:00:17 ahh ok, so it's memory bandwidth bottlenecked
18:01:07 the thing is, I don't want to invest time in this knowing it can only boost hashrate probably 2x
18:01:07 still not very competitive
18:04:32 wtf, a PR with a message that's a marketing pitch. Fucking people...
18:05:09 sigh
18:05:10 Gotta admire that hustle
18:06:40 this is such crap. I feel we've probably discussed it before https://medium.com/deroproject/analysis-of-randomx-dde9dfe9bbc6
18:07:10 "An important piece in the security of RandomX depends on the CFROUND instruction. It is also hardware dependent, which means that a chip manufacturer (Intel, AMD, ARM, etc.) can alter or remove their implementation, resulting in unexpected behavior that could lead to forcing invalid proofs on the chain."
18:07:29 eh actually no, rounding is part of the IEEE-754 spec, no vendor can arbitrarily choose to withdraw it
18:08:08 can?
18:08:42 nobody will get anywhere today selling a chip without 754-compliant FP math
18:08:54 that might have worked 50 years ago. not today.
18:09:15 That's kind of sad though, because posits can be more efficient
18:09:29 But I agree with your analysis
18:10:59 posits have their own edge cases where they're much worse than IEEE-754 floats
18:11:09 hyc: they are doing this all the time. Last time they copied Bulletproofs, started to call them "Rocket Bulletproofs" and claimed their version is 10x faster without any proof.
18:11:34 they also had a huge premine. I'd say most of the shills on Twitter are bought or sockpuppets.
18:11:45 not surprising
18:15:33 I wonder how big their team is. Probably more brain cells in my cat than in the entire Dero Foundation.
18:20:21 at least Kristy kinda called them out. Curious how they'll spin it now :P
18:21:07 yeah, she was pretty incensed
19:56:45 sure, more efficient code could do it. but i'm still.... slack-jawed at the idea of using live machine learning to control the tokamak for fusion reactors
19:57:06 and i think that's ultimately gonna require the next level of computation
19:59:53 AI-controlled nuclear reactor, hm
20:00:11 I have a question, can you raise my allowance?
20:07:47 i mean, it's just a matter of putting just the right amount of power into electromagnets at just the right time.... i don't think we'll get full-blown AI from that :)
20:13:01 i mean, you gotta get the input from the sensors, run the calcs, and then pump the proper current to the proper electromagnet in less time than it takes for one of them magnetic fields to snap
20:13:55 "I wonder how big their team is. Probably more brain cells in my cat than in the entire Dero Foundation." <--- their website mentions 3 fulltime guys at the beginning, and that the "original developers" are still working (not surprising since this is a premined coin)
20:16:08 From their website: "Dero Atlantis uses pre-compute tables to convert 64*2 Base Scalar multiplications into doublings and additions." I don't know what they mean by pre-compute, but otherwise it just seems to describe how all multiplications are done in ECC, no? Ie, irrelevant gibberish
20:55:34 yah it all sounds like gibberish
20:55:49 I've had to stop responding. no point giving them any more oxygen.
20:56:43 always wonder if it's better to not respond at all 🤷‍♂️
20:58:01 it strikes me as harmful to allow single wrong statements to go unchallenged. but also harmful to get dragged into a long pointless dialog
20:58:56 I don't think there is any universal right answer to that question
20:59:39 If one adopts a rule of never answering incorrect statements then FUDsters will abuse it by making them appear to be afraid of the truth
21:00:03 If one adopts a rule of always answering incorrect statements then spammers will use it to get free advertising
21:00:13 yeah
21:00:22 Being unpredictable is the only robust solution
21:00:27 it generally seems to me that the world is stacked against being the good guy
21:04:20 i think the issue is the medium and location. if they want to spout off in their own corner it's best to ignore them, but if they come to your "house" then a response seems warranted.
21:04:44 This concept of "good guy" makes me uneasy