The best e_precision_recall_at_1 I can achieve is about 75, which falls short of the 80.6 reported in the paper. Following the SOP configuration given in the paper, I set the memory ratio to 1, the initial learning rate to 3e-4 (multiplied by 0.1 after 24k iterations), and the total number of iterations to 34k. The other configurations are given as follows
Did you come across the same issue? Any advice will be appreciated!
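For clarity, the learning-rate schedule described above (base LR 3e-4, multiplied by 0.1 after 24k of 34k total iterations) can be sketched as a plain function. This is a minimal illustration only; the function name and parameter names are hypothetical, and the actual training code may implement the same schedule via a framework scheduler such as PyTorch's MultiStepLR.

```python
def lr_at_iteration(it, base_lr=3e-4, decay_at=24_000, gamma=0.1):
    """Return the learning rate used at training iteration `it`.

    Implements the step schedule described above: a constant base
    learning rate that is multiplied by `gamma` once, at `decay_at`.
    Training is assumed to stop at 34k iterations.
    """
    return base_lr * gamma if it >= decay_at else base_lr
```

If the reproduction gap persists under this exact schedule, the remaining discrepancy presumably comes from the other configurations rather than the learning rate.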
Hi, I have the same problem here; my R@1 is stuck at 74.69. Has anyone managed to reproduce the results on the SOP dataset?
I cannot reproduce the results either.
BTW, I'd like to introduce our work to you guys: https://arxiv.org/pdf/2011.08877.pdf. We released the code at https://github.com/XinyiXuXD/DGML-master. The repository provides the configurations for the three datasets in three separate txt files, which makes reproducing the results much easier. On the SOP dataset, we achieve 79.0% R@1. We will do our best to answer any questions about our code!