An alignment task may fail for a variety of reasons; it is retried several times and finally ignored so that the workflow can run to completion. Nextflow captures metadata on all tasks, including failed/ignored ones, in the trace file and the HTML execution report, but this information is not currently included in our rendered report. Consequently, tables and figures will simply lack some entries rather than indicate that, e.g., aligner X failed to complete a given task within the set time/memory limits.
Is task metadata for already completed tasks available before the pipeline finishes?

- If it is:
  - include it in the rendered report
  - capture the reason for failure (time? memory? other?)
- If it is not:
  - additional channels can be used to indicate expected output entries to the data-collecting processes.
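The extra-channel fallback could look roughly like the sketch below: a channel of planned (aligner, sample) pairs is joined against actual results, and any entry left unmatched corresponds to a failed/ignored task. Channel contents, process names, and tuple shapes here are hypothetical, not our actual pipeline code.

```nextflow
// Hypothetical sketch: flag expected-but-missing outputs via an outer join.
planned = Channel.of(
    ['bwa',  'sampleA'],
    ['bwa',  'sampleB'],
    ['star', 'sampleA']
)

// Assumed to emit tuples of (aligner, sample, bam) for tasks that succeeded.
results = ALIGN.out

planned
    .join(results, by: [0, 1], remainder: true)
    .map { aligner, sample, bam ->
        // bam is null for planned tasks that never produced output
        [aligner, sample, bam ?: 'FAILED']
    }
    .set { report_entries }
```

With `remainder: true` the join keeps unmatched entries from either side, so the data-collecting process sees an explicit `FAILED` placeholder instead of a silently missing row.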
If we want failures to be captured directly alongside or within the evaluation summary JSON, we could take over failure handling from Nextflow: produce a dummy SAM/BAM, and perhaps capture the contents of `.command.err` and embed or link to it in the generated report.
👍 Failed-task info is in plain sight, and we can still render plots with the relevant data points simply missing.
👎 NF would no longer re-submit the failed tasks, which is really useful for ironing out glitches in HPC/Cloud execution. We could probably get around this by setting/outputting an appropriate `validExitStatus` in conjunction with `task.attempt`.
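One way to keep retries while still emitting dummy outputs on the final attempt is to branch on `task.attempt` inside the script block, as sketched below. The `run_aligner` command, file names, and retry count are illustrative assumptions; the same effect could also be achieved with the `validExitStatus` directive mentioned above.

```nextflow
// Hypothetical sketch: retain NF retries, but let the last attempt
// "succeed" with placeholder outputs so downstream collection still
// receives an entry for the failed task.
process ALIGN {
    tag "${aligner}:${sample}"
    errorStrategy 'retry'
    maxRetries 2            // attempts 1 and 2 retry; attempt 3 is final

    input:
    tuple val(aligner), val(sample), path(reads)

    output:
    tuple val(aligner), val(sample), path('out.bam'), path('align.err')

    script:
    """
    touch align.err
    if run_aligner ${reads} > out.bam ; then
        :   # success: real BAM, empty error file
    elif [ ${task.attempt} -ge 3 ]; then
        # Final attempt: emit a dummy BAM and keep stderr for the report,
        # then exit 0 so NF records the task as completed.
        touch out.bam
        cp .command.err align.err || true
    else
        exit 1  # earlier attempts: fail so NF re-submits the task
    fi
    """
}
```

Because the final attempt exits 0, the report-generating process can distinguish real from dummy outputs by checking whether `align.err` is non-empty.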