segfault #144
Comments
In my experience with R, the first error may be related to the INFO messages of Singularity vs. Apptainer. I will need to look into that further to see if we have seen that error on our side. For the second error, I think it's related to package versions (or the version(s) of R present on your cluster). I've seen that error when the wrong versions are grabbed by the environment, or when one package/dependency is updated but not another.
Hi @VICTO160, as you have been able to use the larger bin sizes successfully for these two samples, it's less likely to be a conflict between package versions and more likely that the 1 kbp bin size is not compatible with the data you're trying to process. We have published some guidelines here re: bin size, and as I mentioned on the other issue, we are planning to introduce a new CNV caller which may be better suited to your needs, so please keep following for updates.
@VICTO160 something else to note is that the most recent version of the workflow has updated memory requirements for each process, so it's possible that this will allow you to run it successfully with the 1 kbp bins; however, this is not guaranteed, as there may be other factors which will cause the Rscript to error for this particular dataset/bin size combination.
The most recent version of the workflow did not fix either of these issues. I've now run this script with 1000 bp bins on ~66 patient samples. The error that traced back to '7' happened in about 5 samples. Adding force=TRUE to the 'Smooth outliers' line in the ./.nextflow/assets/epi2me-labs/wf-human-variation/bin/run_qdnaseq.r file seemed to fix this issue. I have not solved the error that traced back to '36', but two more samples have now had this error. Neither of these errors seemed to correlate with read depth, number of reads, or N50, despite meeting the advised minimum number of reads.
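For reference, here is a minimal sketch of that workaround in the context of a standard QDNAseq run. The bin annotation, BAM path, and object names below are illustrative assumptions based on the QDNAseq vignette rather than the exact contents of run_qdnaseq.r; the only deliberate change from a default run is force=TRUE on the smoothOutlierBins() call (the "Smooth outliers" step).

library(QDNAseq)

# Illustrative inputs (not copied from run_qdnaseq.r): 1 kbp precomputed bins
# and a hypothetical BAM path, matching the --bin_size 1 run described above.
bins <- getBinAnnotations(binSize=1)
readCounts <- binReadCounts(bins, bamfiles="sample.sorted.aligned.bam")

readCountsFiltered <- applyFilters(readCounts)
readCountsFiltered <- estimateCorrection(readCountsFiltered)
copyNumbers <- correctBins(readCountsFiltered)
copyNumbersNormalized <- normalizeBins(copyNumbers)

# "Smooth outliers" step with the reported workaround: force=TRUE lets the
# smoothing proceed even when the object's state would otherwise make it stop.
copyNumbersSmooth <- smoothOutlierBins(copyNumbersNormalized, force=TRUE)

copyNumbersSegmented <- segmentBins(copyNumbersSmooth)

Note that force=TRUE bypasses a check rather than addressing the root cause, so CNV calls produced this way at 1 kbp bins are probably worth extra sanity-checking.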
Hi @VICTO160, I'm really sorry for the delay in responding. There is now a new default CNV caller, Spectre, included in the workflow, so please try it out and let us know if you have any feedback.
Operating System
CentOS 7
Other Linux
No response
Workflow Version
v1.10.1
Workflow Execution
Command line
EPI2ME Version
No response
CLI command run
$NEXTFLOW -c .threads.txt run epi2me-labs/wf-human-variation -revision v1.10.1 \
    -profile singularity \
    -w ${OUTPUT}/variation_workspace \
    --bam ${SAMPLE}.sorted.aligned.bam \
    --ref ${REFS} \
    --tr_bed $TRBED \
    --sample_name ${SAMPLE} \
    --out_dir ${OUTPUT} \
    --threads 36 \
    --cnv \
    --bin_size 1 \
    --depth_intervals
Workflow Execution - CLI Execution Profile
None
What happened?
I'm running into two segfault errors, consistently on the same two samples. The same pipeline with the same options has worked on other samples, and these two samples have been processed without errors using larger bin sizes (5 and up). The threads.txt file limits the workflow to 36 of the 38 CPUs and 480 of the 499 GB of memory allocated to the VM.
Since doubling the stack size seemed to fix a segmentation fault while running Clair3 in another run of this workflow, here is the original output of ulimit -a on the VM:

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 8252695
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) 523239424
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Relevant log output
Application activity log entry
No response
Were you able to successfully run the latest version of the workflow with the demo data?
yes
Other demo data information
No response