Hello, I have a small question regarding the muP proxy model sweeps. In the proxy models mentioned in Appendix F.4 (GPT-3), did you decay the learning rate fully over the 4B or 16B proxy tokens? Or did you set the decay schedule for the "real" number of tokens to be used in the target model (so that the proxy sweeps effectively decayed very little)?
It would be interesting to know what you did in the experiments in appendix 4.3 (GPT-3) and, more generally, whether this has any effect at all on transferability (perhaps you have some empirical or theoretical insights). Recommendations would be very welcome :)
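For concreteness, the two options I'm asking about can be sketched as follows (a minimal illustration with made-up step counts and peak learning rate; the cosine schedule is just an example decay shape):

```python
import math

def cosine_lr(step, total_steps, peak_lr, min_lr=0.0):
    """Standard cosine decay from peak_lr down to min_lr over total_steps."""
    frac = min(step / total_steps, 1.0)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * frac))

peak_lr = 3e-4          # hypothetical peak learning rate
proxy_steps = 4_000     # stand-in for the proxy run's 4B-token budget
target_steps = 300_000  # stand-in for the target model's full token budget

# Option A: decay fully within the proxy run -- LR reaches min_lr by its end.
lr_end_full_decay = cosine_lr(proxy_steps, proxy_steps, peak_lr)

# Option B: schedule set for the target horizon -- the proxy run only covers
# the first ~1.3% of the cosine, so the LR has barely moved off the peak.
lr_end_target_horizon = cosine_lr(proxy_steps, target_steps, peak_lr)

print(lr_end_full_decay, lr_end_target_horizon)
```

Under option B the proxy sweep is effectively run at near-constant learning rate, which is why I wonder whether the transferred optimum could differ between the two choices.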
Same question here. We also found that the optimal learning rate differs across training steps for a given width: early in training a larger learning rate performs better, but as training progresses a smaller learning rate gradually overtakes it.