Using FDS on CSC (taito.csc.fi)

These notes provide basic information about using FDS on taito.csc.fi, one of the two supercomputers operated by CSC in Finland. It is assumed that you already have an account and have logged in at least once.

  1. Add git to your environment

    module load git
    
  2. Clone the firemodels/fds-smv repo into your $USERAPPL folder as you would on any Linux cluster (see the general cloning notes elsewhere in this wiki). Follow GitHub's instructions for generating SSH keys.
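
    For example, assuming SSH keys are already set up, the clone might look like this (the exact remote URL depends on whether you use SSH or HTTPS access):

    cd $USERAPPL
    git clone git@github.com:firemodels/fds-smv.git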

  3. Make sure your environment contains both the Intel Fortran compiler and Intel MPI:

    $ module list
    
    Currently Loaded Modules:
       1) intel/16.0.0   3) intelmpi/5.1.1   5) git/1.9.2
       2) mkl/11.3.0     4) StdEnv
    
  4. Modify the FDS makefile entry as follows:

    mpi_intel_linux_64 : FCOMPL = mpiifort
    

    mpifort is replaced by mpiifort to ensure that Intel MPI is used. InfiniBand is used automatically whenever you run the code on more than one node.
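
    As a quick sanity check (not part of the original notes), you can confirm that the Intel MPI wrapper is on your path and see which underlying compiler and flags it invokes:

    $ which mpiifort
    $ mpiifort -show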

  5. In the directory mpi_intel_linux_64, modify the make_fds.sh file as follows:

    #!/bin/bash
    dir=`pwd`
    target=${dir##*/}
    
    echo Building $target
    make -j4 VPATH="../../FDS_Source" -f ../makefile $target
    

    There is no need to set any path or environment variables pointing to the compiler or the MPI library; the loaded modules take care of that. Run the script to compile.
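
    For example (the directory path below matches the srun command in step 6):

    cd $USERAPPL/fds-smv/FDS_Compilation/mpi_intel_linux_64
    ./make_fds.sh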

  6. Prepare a SLURM script like this in a folder under your $WRKDIR (in this example, the name of the script is job_name_script.sh):

    #!/bin/bash -l
    #SBATCH -J job_name
    #SBATCH -o job_name.log
    #SBATCH -e job_name.err
    #SBATCH -p parallel
    #SBATCH --mem-per-cpu=1000
    #SBATCH -n 18
    #SBATCH --tasks-per-node 5
    #SBATCH -t 4000
    #SBATCH --cpus-per-task=3
    export KMP_AFFINITY=compact
    export OMP_NUM_THREADS=3
    srun $USERAPPL/fds-smv/FDS_Compilation/mpi_intel_linux_64/fds_mpi_intel_linux_64 job_name.fds
    
    

    This script allocates 18 MPI tasks (one per mesh, for an 18-mesh case) and three OpenMP threads for each task. With --tasks-per-node 5 and --cpus-per-task=3, at most 5 tasks, i.e. 5 meshes (15 cores), are placed on any one node. The -t option sets the requested walltime (a plain number is interpreted as minutes), and you are told immediately upon submission if the requested time is not allowed. Taito has two types of nodes: those with 16 cores and those with 24 cores.
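
    When adapting the script to another case, match -n to the number of meshes in your FDS input and keep tasks-per-node times cpus-per-task within the core count of one node. For instance (an illustrative setting, not from the original notes), a 12-mesh input using 2 threads per mesh could fill one 24-core node:

    #SBATCH -n 12
    #SBATCH --tasks-per-node 12
    #SBATCH --cpus-per-task=2
    export OMP_NUM_THREADS=2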

    Add the following command to the end of job_name_script.sh to print the requested memory and the maximum memory actually used by the job:

    sacct -o reqmem,maxrss -j $SLURM_JOBID
    
  7. Run the job by submitting the script:

    $ sbatch job_name_script.sh
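
    The job can be monitored and, if necessary, cancelled with the standard SLURM commands (general SLURM usage, not specific to Taito; <jobid> is the ID reported by sbatch):

    $ squeue -u $USER
    $ scancel <jobid>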
    
  8. Download the results to your local workstation using sftp or, if working on Windows, use the Win-SSHFS program to mount your $WRKDIR folder as a drive (host=taito.csc.fi, Directory=/wrk/your_username). With Win-SSHFS you can read the FDS result files directly from your Windows PC.
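
    For example, a command-line transfer from a Linux or macOS workstation might look like this (user name, directory, and file names are placeholders):

    $ sftp your_username@taito.csc.fi
    sftp> cd /wrk/your_username
    sftp> get job_name*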
