This is a feature suggestion based on a discussion between s3inlc and Thor.
Fact: Hashcat benchmarking is bad
When running large hash lists, different hash types, or when there is an issue on the client side, the benchmark produces widely varying results. In this test case we had a 250k vBulletin list (mode 2611) from hashes.org that ran at subpar speeds:
9MH/s on 4x 1080 Ti
12MH/s on 1x 1080 Ti
Each chunk covers about 3k of the total wordlist attack keyspace of 1,464,244,267 and completes in approximately 50 seconds to 1 minute 20, which is far short of the 600-second target (by roughly a factor of 8 to 12).
The goal of this feature is to adjust the benchmark result based on the target chunk time and have the chunk size adapt accordingly, resulting in greater speeds and fewer chunks without reduced functionality or performance.
The formula proposed by s3inlc is:

<new chunk size> = 600s / <time needed> * <old chunk size>
The formula scales the chunk size up as long as the time needed for the last benchmarked chunk is below the ideal chunk time (600 seconds). This allows for higher utilization when the benchmark turned out too low, and it can equally be used to reduce utilization / chunk time when the benchmark turned out too high.
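To make the adjustment concrete, below is a minimal Python sketch of the proposed formula. The function name, the guard against a zero measurement, and the example chunk time of 65 seconds are illustrative assumptions; only the 600-second target and the formula itself come from the proposal above.

```python
# Minimal sketch of the proposed chunk-size adjustment.
# Assumption: chunk size is expressed in keyspace units and the target
# chunk duration is 600 seconds, as described in the issue.

TARGET_CHUNK_TIME = 600  # seconds


def adjust_chunk_size(old_chunk_size: int, time_needed: float) -> int:
    """Scale the chunk size so the next chunk should take ~600 seconds.

    <new chunk size> = 600s / <time needed> * <old chunk size>
    """
    if time_needed <= 0:
        # Guard against bogus or missing measurements (illustrative choice).
        return old_chunk_size
    return max(1, int(TARGET_CHUNK_TIME / time_needed * old_chunk_size))


if __name__ == "__main__":
    # Example with the numbers reported above: a ~3k chunk that finished
    # in roughly 65 seconds would be scaled up to about 27k keyspace units.
    print(adjust_chunk_size(3000, 65))  # -> 27692
```

Because the same ratio scales the chunk down when a chunk overruns the target, applying the adjustment after each completed chunk should keep the chunk duration converging toward the 600-second target.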