Hyperparameters not the same in paper and config #19
To add to my worry, I indeed find that these hyperparameters do not lead to successful training. Below you can find the current training loss and dev Smatch, respectively. While the config aims to train for 30 epochs, the run clearly starts to deteriorate after less than one epoch. So if you have the exact hyperparameters that were used to fine-tune text2amr, I would be grateful.
Hi, I am not sure what is happening here. We have retrained SPRING several times and we have always reproduced the same results, up to some variation due to the random seed, using the same exact config you are using. Would you mind giving us some additional context? Your run seems to be diverging: would you mind pasting a plot of the training/dev loss? What is the training data? What is your setup? I have seen runs diverge irrecoverably, but to be honest it was very rare, and usually restoring a checkpoint or setting a different seed fixed everything. The beam size of 5 was used only outside of the training run, for efficiency (beam search is very slow). This was hinted at in Table 1 of the Appendix, but I can see how it can be confusing. We don't care about warmup because the scheduler is constant; in my experience warmup is not crucial with pre-trained BART. If there are other discrepancies, the config is the final truth. (@rexhinab comments?)
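To make the beam-size point concrete, here is a minimal sketch with plain Hugging Face BART (not SPRING's actual evaluation code; the example sentence and max_length are placeholders): beam search only matters at decoding time, so it can be switched on for the final checkpoint without ever touching the training run.

```python
# Minimal sketch with plain Hugging Face BART, not SPRING's evaluation code:
# beam size is purely a decoding-time choice, so it can differ between the
# cheap validation decoding done during training and the final scoring run.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Placeholder input sentence; SPRING would use its own graph-aware tokenizer.
inputs = tokenizer(["The boy wants the girl to believe him."], return_tensors="pt")

# beam_size: 1 (greedy) -- fast enough to run on the dev set during training.
during_training = model.generate(**inputs, num_beams=1, max_length=512)

# Beam size 5, as reported in the paper's appendix -- run once on the final
# checkpoint, outside the training loop, because it is much slower.
final_eval = model.generate(**inputs, num_beams=5, max_length=512)
```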
The training was exactly as described in this repository, with the AMR 3.0 dataset. I have put the training loss graphs and dev Smatch scores in the comment above. Perhaps it was a bad seed. I will try again if I find the time/compute and keep you posted.
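To be explicit about what I will change on the retry: since I am not sure whether the training script exposes a seed option, I will simply reseed everything manually before launching, along these lines (a generic PyTorch sketch, not SPRING-specific code; the seed value is arbitrary):

```python
# Generic reseeding sketch (not SPRING-specific); a different seed is one of
# the things suggested above when a run diverges.
import random
import numpy as np
import torch

def set_seed(seed: int) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(1337)  # arbitrary new seed for the retry
```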
@mbevila I tried again with a fresh install of Python and SPRING on our cluster and unfortunately I still cannot reproduce your results. Again, after some 10k steps, the training loss already starts to increase. It would seem that the learning rate is too high, or that a constant LR does not work well. Config (same as the one in this repo except for …):

name: baseline+smart_init
model: facebook/bart-large
# <--------------
# Linearizations
# Comment DFS and uncomment the relevant block if you want to use a different linearization scheme
# DFS
penman_linearization: True
use_pointer_tokens: True
raw_graph: False
# BFS
# penman_linearization: False
# use_pointer_tokens: True
# raw_graph: False
# PENMAN
# penman_linearization: True
# use_pointer_tokens: False
# raw_graph: False
# BART baseline
# penman_linearization: True
# use_pointer_tokens: False
# raw_graph: True
remove_wiki: False
dereify: False
collapse_name_ops: False
# Hparams
batch_size: 500
beam_size: 1
dropout: 0.25
attention_dropout: 0.0
smart_init: True
accum_steps: 10
warmup_steps: 1
training_steps: 250000
weight_decay: 0.004
grad_norm: 2.5
scheduler: constant
learning_rate: 0.00005
max_epochs: 30
save_checkpoints: True
log_wandb: True
warm_start: True
use_recategorization: False
best_loss: False
remove_longer_than: 1024
# <------------------
# Data: replace DATA below with the root of your AMR 2/3 release folder
train: multilingual-text-to-amr/data/amr_annotation_3.0/data/amrs/split/training/*.txt
dev: multilingual-text-to-amr/data/amr_annotation_3.0/data/amrs/split/dev/*.txt
test: multilingual-text-to-amr/data/amr_annotation_3.0/data/amrs/split/test/*.txt

Environment:
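For reference, this is how I read the scheduler settings above, as a rough sketch with plain PyTorch and transformers (not the actual SPRING trainer; the 1000-step warmup in the commented-out line is my own guess of something to try, not a value from the paper). With a constant schedule, warmup_steps: 1 effectively means no warmup, which is why I suspect the LR itself.

```python
# Rough sketch of the scheduler settings, with plain PyTorch + transformers
# (not the actual SPRING trainer). With a constant schedule, warmup_steps: 1
# effectively means "no warmup": the LR reaches 5e-5 after a single step.
import torch
from transformers import (
    get_constant_schedule_with_warmup,
    get_linear_schedule_with_warmup,
)

model = torch.nn.Linear(8, 8)  # stand-in for BART
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.004)

# What the shipped config (scheduler: constant, warmup_steps: 1) amounts to:
scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=1)

# What I may try next, in case the constant LR really is too aggressive
# (the 1000-step warmup is my own guess, not a value from the paper):
# scheduler = get_linear_schedule_with_warmup(
#     optimizer, num_warmup_steps=1000, num_training_steps=250000)
```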
Thank you for open-sourcing your repo! I am trying to reproduce your results but found it difficult to reach the same scores. I then found that the hyperparameters in the config are not the same as those discussed in the paper's appendix. Specifically, you mention a beam size of 5 in the paper, but the config has beam_size: 1. Could you please clarify which of these is correct? I also find that the config sets warmup_steps: 1, which seems out of place and is a very uncommon value. Can you confirm that this is indeed correct?