Description of the bug
Dear developer,
I am constantly experiencing OOM errors with GetPileupSummaries. The process keeps hitting the 250 GB limit I allocated for it. How much memory should I allocate for this run? This seems excessive for whole-exome samples. I also tried the interval options, but that produced 106091 processes and kept running forever. Is there a way to lower the number of intervals? Please find the attached log and base.config.
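To illustrate the memory question: is a per-process override like the sketch below the intended way to raise the limit, and if so, what value is reasonable? (The withName pattern and the 64.GB value are assumptions on my part; I have not confirmed the exact process name the pipeline uses for GetPileupSummaries.) This would go into the custom config I pass with -c nextflow.config in the command below.

process {
    // Assumed name pattern for the GATK4 GetPileupSummaries step;
    // check .nextflow.log or the pipeline source for the real process name.
    withName: '.*GETPILEUPSUMMARIES.*' {
        memory        = 64.GB      // placeholder value, not a recommendation
        errorStrategy = 'retry'
        maxRetries    = 2
    }
}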
Command used and terminal output
nextflow run nf-core/sarek -r 3.7.0 -profile conda --input ./sample_sarek.csv --step mapping \
    --outdir ./results --aligner bwa-mem2 --genome GATK.GRCh38 \
    --tools mutect2,strelka,freebayes,deepvariant -with-report \
    --dbsnp ../database/resources_broad_hg38_v0_Homo_sapiens_assembly38.dbsnp138.vcf.gz \
    --known_indels ../database/Homo_sapiens_assembly38.known_indels.vcf.gz \
    --pon ../database/PON.sorted.vcf.gz --pon_tbi ../database/PON.sorted.vcf.gz.tbi \
    --germline_resource ../database/af-only-gnomad.hg38.vcf.gz --wes \
    --intervals ../database/GRCh38_exome.bed --only_paired_variant_calling \
    --dbsnp_tbi ../database/resources_broad_hg38_v0_Homo_sapiens_assembly38.dbsnp138.vcf.gz.tbi \
    --skip_tools baserecalibrator,baserecalibrator_report --no_intervals -c nextflow.config \
    --snv_consensus_calling --consensus_min_count 2 --no_intervals --normalize_vcfs
Relevant files
No response
System information
No response