OOM error during checkpoint saving in large-node training #788

Open
SimonSuster opened this issue Jan 21, 2025 · 1 comment

I'm continuing training from an OLMo checkpoint on LUMI. When scaling up to 64 nodes (with global_train_batch_size=512), training runs fine until the first checkpoint save, at which point one of the nodes hits an OOM error:

...
nid005143:0 out: 2025-01-20 10:29:10.590        nid005143:0     olmo.train:967  INFO    [step=50/65355,epoch=0]
nid005143:0 out:     train/masked_instances_local_rank=0
nid005143:0 out:     optim/total_grad_norm=0.3484
nid005143:0 out:     train/CrossEntropyLoss=2.633
nid005143:0 out:     train/Perplexity=13.91
nid005143:0 out:     train/ZLoss=0.0035
nid005143:0 out:     throughput/total_tokens=104,857,600
nid005143:0 out:     throughput/total_training_Gflops=684,340,122
nid005143:0 out:     throughput/total_training_log_Gflops=20.34
nid005143:0 out:     throughput/device/tokens_per_second=174.8
nid005143:0 out:     throughput/device/batches_per_second=0.0427
nid005143:0 out:     System/Peak GPU Memory (MB)=42,394
nid005143:0 out: 2025-01-20 10:29:11.183        nid005143:0     olmo.train:1259 INFO    Saving checkpoint...
nid005143:0 out: 2025-01-20 10:29:11.786        nid005143:0     olmo.checkpoint:1922    INFO    Saving model and optim state...
slurmstepd: error: Detected 1 oom_kill event in StepId=9178265.0. Some of the step tasks have been OOM Killed.
srun: error: nid006190: task 277: Out Of Memory
srun: Terminating StepId=9178265.0
slurmstepd: error: Detected 1 oom_kill event in StepId=9178265.0. Some of the step tasks have been OOM Killed.
slurmstepd: error: *** STEP 9178265.0 ON nid005143 CANCELLED AT 2025-01-20T10:34:05 ***

Have you encountered this issue before? Do you have any suggestions for getting around it?

Note that in another setup (32 nodes, global_train_batch_size=256), checkpointing works as expected.

My job script and config file are appended below.

test-mling.sh
#!/bin/bash
#SBATCH --job-name=test-mling
#SBATCH --account=project_PROJECT_ID
#SBATCH --output=/scratch/project_PROJECT_ID/logs/%j.log
#SBATCH --nodes=64              # Total number of nodes
#SBATCH --ntasks-per-node=8
#SBATCH --gpus-per-node=8       # Allocate one gpu per MPI rank
#SBATCH --cpus-per-task=7
#SBATCH --exclusive=user
#SBATCH --hint=nomultithread
#SBATCH --mem=480G  # max on lumi-g
#SBATCH --time=02:00:00
#SBATCH --partition=standard-g

module load LUMI/22.08 partition/G
export OLMO_CONTAINER=lumi-flash_latest.sif
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export MPICH_GPU_SUPPORT_ENABLED=1
export NCCL_SOCKET_IFNAME=hsn
export NCCL_NET_GDR_LEVEL=3
export MIOPEN_USER_DB_PATH=/tmp/${USER}-miopen-cache-${SLURM_JOB_ID}
export MIOPEN_CUSTOM_CACHE_DIR=${MIOPEN_USER_DB_PATH}
export CXI_FORK_SAFE=1
export CXI_FORK_SAFE_HP=1
export FI_CXI_DISABLE_CQ_HUGETLB=1
# We need to set this to avoid "Cassini Event Queue overflow detected." errors.
export FI_CXI_DEFAULT_CQ_SIZE=131072
#export NCCL_DEBUG=INFO
export PYTHONPATH=.:${PYTHONPATH}
export ROCM_PATH=/opt/rocm
export SINGULARITYENV_LD_LIBRARY_PATH=/usr/local/lib:/opt/cray/libfabric/1.15.2.0/lib64
# Try playing with max_split_size_mb if you run into OOM errors.
export PYTORCH_HIP_ALLOC_CONF='max_split_size_mb:512'
#export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True
srun \
  --cpus-per-task=$SLURM_CPUS_PER_TASK \
  --distribution=block:block \
  --kill-on-bad-exit \
  scripts/run_with_environment.sh \
    singularity exec \
    -B"$PROJECT_DIR:$PROJECT_DIR" \
    -B"$SCRATCH_DIR:$SCRATCH_DIR" \
    -B"$FLASH_DIR:$FLASH_DIR" \
    -B"$USER_DIR:$USER_DIR" \
    -B /opt/cray:/opt/cray \
    -B /usr/lib64/libcxi.so.1:/usr/lib64/libcxi.so.1 \
    -B /usr/lib64/libjson-c.so.3:/usr/lib64/libjson-c.so.3 \
    $PROJECT_DIR/containers/$OLMO_CONTAINER \
    python scripts/train.py configs/test-mling.yaml --run_name=${SLURM_JOB_ID} ${@}

test-mling.yaml
run_name: test-mling-run-001
seed: 6198
dry_run: false

model:
  d_model: 4096
  n_heads: 32
  n_layers: 32
  mlp_hidden_size: 22016
  weight_tying: false
  alibi: false
  rope: true
  rope_theta: 500000
  flash_attention: true
  attention_dropout: 0.0
  include_bias: false
  #block_type: sequential
  layer_norm_type: rms
  layer_norm_with_affine: true
  layer_norm_eps: 1e-6
  bias_for_layer_norm: false
  attention_layer_norm: true
  attention_layer_norm_with_affine: true
  norm_after: true
  activation_type: swiglu
  residual_dropout: 0.0
  embedding_dropout: 0.0
  max_sequence_length: 4096
  vocab_size: 100278
  embedding_size: 100352
  eos_token_id: 100257
  pad_token_id: 100277
  init_device: meta
  init_fn: normal
  init_std: 0.02
  init_cutoff_factor: 3

softmax_auxiliary_loss: true
auxiliary_loss_multiplier: 1e-5
#fused_loss: true

compile: null

optimizer:
  name: adamw
  learning_rate: 0.000061499
  weight_decay: 0.1
  eps: 1e-8
  decay_norm_and_bias: true
  decay_embeddings: false
  betas:
  - 0.9
  - 0.95
  metrics_log_interval: 1

scheduler:
  name: linear_with_warmup
  t_warmup: 0
  alpha_f: 0

tokenizer:
  identifier: tokenizers/allenai_dolma2.json
  truncate_direction: right

save_folder: ${path.choose:${oc.env:SCRATCH_DIR,no_exist}/checkpoints,/results}/test-mling-run-001
#save_folder: ${path.choose:${oc.env:SCRATCH_DIR,no_exist}/checkpoints,/results}/${oc.env:SLURM_JOB_ID,${run_name}}
save_overwrite: false
save_interval: 50
#save_interval_ephemeral: 250
save_num_checkpoints_to_keep: -1
sharded_checkpointer: olmo_core

save_interval_unsharded: null
save_num_unsharded_checkpoints_to_keep: -1

load_path: ${oc.env:SCRATCH_DIR}/checkpoints/OLMo-2-1124-7B/step928000-unsharded
#load_path: https://olmo-checkpoints.org/ai2-llm/peteish7/step928646-unsharded  # last Olmo2-7b-stage1 checkpoint
#load_path: null

max_duration: 1ep
#max_duration:  10  # n of steps

try_load_latest_save: false
restore_dataloader: false
no_pre_train_checkpoint: true

global_train_batch_size: 512
device_train_microbatch_size: 1

precision: amp_bf16

fsdp:
  wrapping_strategy: by_block_and_size
  precision: mixed

max_grad_norm: 1.0
max_grad_norm_ratio: null

speed_monitor:
  window_size: 1

gen1_gc_interval: 1

eval_interval: 1000
eval_subset_num_batches: -1
device_eval_batch_size: ${device_train_microbatch_size}
evaluators:
  - label: arc_challenge
    type: downstream
data:
  pad_direction: right
  num_workers: 6
  drop_last: true
  pin_memory: true
  prefetch_factor: 8
  persistent_workers: true
  memmap_dtype: uint16
  timeout: 0
  instance_filter:
    repetition_max_period: 13
    repetition_min_period: 1
    repetition_max_count: 32
  paths:
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-00-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-01-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-02-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-03-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-04-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-05-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-06-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-07-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-08-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-09-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-10-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-11-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-12-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-14-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-15-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-16-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-17-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-18-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-19-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-20-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-21-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-22-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-23-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-24-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-25-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-26-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-27-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-28-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-29-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-30-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-31-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-32-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-33-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-34-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-36-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-37-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-38-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-39-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-40-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-41-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-42-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-43-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-44-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-45-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-46-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-47-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-48-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-49-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-50-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-51-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-52-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-53-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-54-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-55-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-56-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-57-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-58-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-59-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-60-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-61-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-62-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-63-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-64-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-65-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-66-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-67-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-68-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-69-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-70-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-71-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-72-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-73-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-74-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-75-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-76-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-77-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-78-00000.npy
    - ${oc.env:SCRATCH_DIR}/pretraining_data/preprocessed/test-stage1/olmo2-1124-7b-test-v1/part-79-00000.npy
@dirkgr (Member) commented Jan 31, 2025

This is a little puzzling, but I noticed this line: SBATCH --mem=480G. Do you need that? Does it restrict you? I don't think we set that when we ran on LUMI.
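
If that memory cap is what the step is hitting, one quick experiment would be to drop the explicit limit and let Slurm allocate all of the memory on each node. A minimal sketch; whether LUMI's standard-g partition actually permits --mem=0 is an assumption worth checking against the partition limits:

#SBATCH --mem=0   # Slurm convention: request all available memory on the node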

You might already know this, but you're running out of CPU memory, not GPU memory. It probably has something to do with 8 ranks per node all assembling the model, or too much of the model, at the same time. You can't fit 8 7B models in CPU memory at the same time. You could try a different checkpointing scheme. We had good luck with local checkpointing on LUMI before, but this was before a bunch of software updates.
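
For rough intuition, assuming each rank materializes a full fp32 copy of the model while assembling state: 7B parameters × 4 bytes ≈ 28 GB per rank, so 8 ranks per node gathering at the same time is already ≈ 224 GB before any optimizer state or dataloader workers, which is uncomfortably close to the 480 GB cap. Below is a minimal config sketch for trying the local sharded checkpointer instead, where each rank writes only its own shard; the value name is an assumption based on OLMo's ShardedCheckpointerType options, so verify it against olmo/config.py in your checked-out version:

# test-mling.yaml (sketch): each rank saves only its local shard
sharded_checkpointer: local
save_interval: 50
save_num_checkpoints_to_keep: -1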
