Problem
It is challenging to test that distributed training is perfectly consistent. I.e., if I change the number of GPUs I'm training with but keep the total number of processes the same, do I get exactly the same gradients?
Desired solution
We should create an experiment config in AI2-THOR that is seeded so that the agents see the same scenes and take the same actions regardless of the number of GPUs. With such a config, we can test that the gradients are exactly the same across different GPU configurations; a rough sketch of the gradient comparison is included below.
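As a starting point, here is a minimal sketch of how the gradient check itself could work, assuming a standard PyTorch model. The helper names (`seed_everything`, `dump_gradients`, `compare_gradient_dumps`) and the dump file names are hypothetical, not existing AllenAct/AI2-THOR APIs; the seeded experiment config would still be responsible for making the two runs see identical data.

```python
# Sketch (not AllenAct/AI2-THOR API): dump per-parameter gradients after one
# seeded training step, then compare dumps from two runs that used different
# numbers of GPUs but the same total number of processes.
import random

import numpy as np
import torch


def seed_everything(seed: int) -> None:
    """Seed all RNGs so both runs start from identical weights and data order."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


def dump_gradients(model: torch.nn.Module, path: str) -> None:
    """Save a name -> gradient tensor mapping, called right after loss.backward()."""
    grads = {
        name: p.grad.detach().cpu().clone()
        for name, p in model.named_parameters()
        if p.grad is not None
    }
    torch.save(grads, path)


def compare_gradient_dumps(path_a: str, path_b: str, atol: float = 0.0) -> None:
    """Assert two gradient dumps are identical (exactly, by default)."""
    grads_a, grads_b = torch.load(path_a), torch.load(path_b)
    assert grads_a.keys() == grads_b.keys(), "parameter sets differ"
    for name in grads_a:
        a, b = grads_a[name], grads_b[name]
        if atol == 0.0:
            assert torch.equal(a, b), f"gradient mismatch in {name}"
        else:
            assert torch.allclose(a, b, atol=atol), f"gradient mismatch in {name}"


# Usage (hypothetical file names): run training once on 2 GPUs and once on
# 4 GPUs with the same total process count, call dump_gradients(...) after
# the first backward pass in each run, then:
# compare_gradient_dumps("grads_2gpu.pt", "grads_4gpu.pt")
```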
Additional context
Issue based on the observation by @marlohmann that training curves may differ depending on the number of GPUs used.