Conversation
dl_bench/utils.py
Outdated
for i, x in enumerate(test_loader):
while True:
    step += 1
    sample = next(iter(test_loader))
This will force us to always use the first batch. Why don't you just increase min_batches? How many do you need? Which combination of parameters is the problem?
Yes, you are right: this will force us to always use the first batch, which is not what I want to do in this PR. I will revert this part. I can just increase min_batches to get stable performance data.
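As a side note, the reason the reviewed line always yields the first batch is that `next(iter(test_loader))` builds a fresh iterator on every call. A minimal sketch of the pitfall, using a plain list as a stand-in for the real DataLoader (the names here are illustrative, not from the PR):

```python
# Stand-in for a DataLoader: any iterable of batches.
test_loader = [f"batch{i}" for i in range(4)]

# Buggy pattern from the diff: iter() is called every step, so next()
# always returns the first batch.
first_only = [next(iter(test_loader)) for _ in range(3)]
print(first_only)  # every element is "batch0"

# Creating the iterator once (or just looping over the loader)
# visits each batch in turn.
it = iter(test_loader)
distinct = [next(it) for _ in range(3)]
print(distinct)  # "batch0", "batch1", "batch2"
```

This is why the reviewer's suggestion to raise `min_batches` instead keeps the benchmark covering varied data.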
2103a14 to 3fa7177 (Compare)
@@ -415,7 +417,7 @@ def inference(self, backend: Backend):
    y = self.net(x)
else:
    y = self.net(x)

if i < 3: continue
But the point of this PR is that the current warmup does not seem to be working. According to the oneDNN verbose output, the first 3 steps of the benchmarking (duration_s) period show quite poor performance. I think this may be the same issue as Issue#66. @Egor-Krivov Will you follow up on Issue#66? Or may I just skip the first 3 steps of the benchmarking period?
I don't have a problem with this part. You can skip the first 3 steps.
Let me test this code and then I can merge.
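For reference, the `if i < 3: continue` change in the diff amounts to discarding the first few timed iterations so warmup effects (e.g. oneDNN kernel caching) don't skew the results. A minimal sketch of that pattern, with hypothetical names (`benchmark`, `SKIP_STEPS`, `step_fn` are illustrative, not the PR's actual API):

```python
import time

SKIP_STEPS = 3  # hypothetical constant mirroring `if i < 3: continue` in the diff

def benchmark(step_fn, n_steps=10, skip=SKIP_STEPS):
    """Time n_steps calls of step_fn, discarding the first `skip` timings,
    which are assumed to be warmup (kernel compilation, cache fills)."""
    timings = []
    for i in range(n_steps):
        start = time.perf_counter()
        step_fn()
        elapsed = time.perf_counter() - start
        if i < skip:
            continue  # skip warmup steps, as in the reviewed change
        timings.append(elapsed)
    return timings

# Example: time a trivial workload; only 7 of the 10 steps are kept.
timings = benchmark(lambda: sum(range(1000)))
print(len(timings))  # 7
```

Whether 3 steps is enough depends on the backend; the thread above leaves the exact count to follow-up in Issue#66.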
This PR is an alternative to PR#73.