Your current Server version (located at the bottom of any Hashtopolis webpage): 0.10.1
Current Client version: Latest
Your current Hashcat version: 5.0.0
The exact task command you are trying to run: mask attack on SHA1 wordlist
I'm currently testing on three different setups before rolling out to a 72-GPU mining rig, and I've run into an issue where one of the CPU clients' benchmarks is off by roughly 10x. The other two clients (one dual-GPU, one single-CPU) work as expected: they typically show ~1-2% progress per update and take roughly the full 600 seconds to finish a chunk. The bad machine shows about 10-20% progress per update and finishes a chunk in about 10-15 seconds.
The misbehaving CPU instance behaves as expected if I manually assign the GPU's benchmark value to it (around 40 MH/s, give or take).
I have tried both benchmark options and all clients perform the same (2 good, 1 bad).
I'm worried that once this gets rolled out to the bigger rig, clients will spend more time reading and re-reading the hashlist than doing real work, especially if the benchmark ends up something like 100x off the mark.
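To illustrate the concern, here is a minimal sketch (not Hashtopolis source, and the function names are hypothetical) of the usual chunking arithmetic: if the server sizes a chunk as reported benchmark speed × desired chunk time, then a benchmark reported 10x too low yields a chunk the real hardware burns through in a tenth of the intended time, multiplying per-chunk overhead accordingly.

```python
# Hypothetical model of benchmark-based chunk sizing (assumed behavior,
# not actual Hashtopolis code): chunk keyspace = reported speed * chunk time.

def chunk_keyspace(reported_speed_hps: float, chunk_time_s: float = 600) -> float:
    """Keyspace a server would hand out so the chunk runs ~chunk_time_s."""
    return reported_speed_hps * chunk_time_s

def actual_chunk_time(keyspace: float, real_speed_hps: float) -> float:
    """Wall-clock time the agent actually needs to exhaust that keyspace."""
    return keyspace / real_speed_hps

real_speed = 4_000_000            # assumed real CPU speed: 4 MH/s
reported = real_speed / 10        # benchmark underreported by 10x (the bug)

ks = chunk_keyspace(reported)                 # chunk sized off the bad benchmark
t = actual_chunk_time(ks, real_speed)         # finishes in 60 s, not 600 s
print(f"chunk finishes in {t:.0f} s instead of 600 s")
```

With a 100x-low benchmark the same arithmetic gives 6-second chunks, so fixed per-chunk costs (fetching the chunk, reading the hashlist) start to dominate actual cracking time.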