revert demuxing strategy to initial split2 method (#213)
Conversation
📝 Walkthrough

This pull request updates how read output files are handled. In the Python script, R1 and R2 reads are now written directly to uncompressed .fq files; the Snakefile workflow then concatenates and gzip-compresses these partitions in the merge_partitions rule and passes a thread setting to the assess_quality rule.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Script as demultiplex_gen1.py
    participant FS as File System
    participant Snake as Snakefile Workflow
    participant Merge as merge_partitions Rule
    participant Quality as assess_quality Rule
    Script->>FS: Write R1 & R2 read data to uncompressed .fq files
    Snake->>FS: Read .fq files for demultiplexing
    Snake->>Merge: Trigger merge of partitions
    Merge->>FS: Concatenate .fq files, then compress output with gzip
    Snake->>Quality: Execute assess_quality with threads=1
```
Actionable comments posted: 1
🧹 Nitpick comments (1)
harpy/scripts/demultiplex_gen1.py (1)
139-140: Simplify direct file writes to improve readability.

The current implementation writes formatted records to the output files. It's correct, but could be more readable. Consider simplifying:

```diff
- R1_output[sample_name].write(f"{r1_rec}\n")
- R2_output[sample_name].write(f"{r2_rec}\n")
+ print(r1_rec, file=R1_output[sample_name])
+ print(r2_rec, file=R2_output[sample_name])
```

This approach is more Pythonic and makes the intention clearer.
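For illustration only (the record content below is made up, not from the script), the two forms produce identical output, since `print` supplies the trailing newline that the f-string otherwise has to add:

```python
import io

# A sample FASTQ record; in the real script r1_rec/r2_rec come from parsing.
r1_rec = "@read1\nACGT\n+\nFFFF"

# Style 1: explicit write with a trailing newline in the f-string.
buf_write = io.StringIO()
buf_write.write(f"{r1_rec}\n")

# Style 2: print appends the newline itself.
buf_print = io.StringIO()
print(r1_rec, file=buf_print)

# Both buffers hold the same bytes.
assert buf_write.getvalue() == buf_print.getvalue()
```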
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- harpy/scripts/demultiplex_gen1.py (2 hunks)
- harpy/snakefiles/demultiplex_gen1.smk (5 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
harpy/scripts/demultiplex_gen1.py
113-113: Use a context manager for opening files
(SIM115)
114-114: Use a context manager for opening files
(SIM115)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: demux gen1
- GitHub Check: Rebuild Container
🔇 Additional comments (6)
harpy/snakefiles/demultiplex_gen1.smk (6)
28-28: Improved comment clarity.

The updated comment provides a clearer, more conversational explanation of the error-handling approach.
107-107: File extension change aligns with the demultiplexing implementation.

The output file extension has been changed from `.fq.gz` to `.fq`, which correctly aligns with the changes in demultiplex_gen1.py, where reads are now written directly to uncompressed files.
123-123: Input file extension update for consistency.

The input file extension has been changed to match the output from the demultiplex rule (`.fq` instead of `.fq.gz`). This maintains consistency with the modifications in the demultiplex rule.
131-131: Shell command updated to compress output after concatenation.

The shell command now pipes the concatenated `.fq` files through gzip to produce compressed `.fq.gz` output files. This approach allows demultiplexing to work with uncompressed files (potentially faster) while still producing compressed final outputs for storage efficiency.
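As a side note, the concatenate-then-compress step the rule performs with `cat ... | gzip` can be sketched in plain Python; the function name and file paths here are illustrative, not taken from the workflow:

```python
import gzip
import shutil

# Hypothetical stand-in for `cat part1.fq part2.fq ... | gzip > out.fq.gz`:
# stream each uncompressed partition into one gzip-compressed output file.
def merge_partitions(part_paths, out_path):
    with gzip.open(out_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as fh:
                shutil.copyfileobj(fh, out)
```

Streaming with `shutil.copyfileobj` keeps memory use constant regardless of partition size, which mirrors why the shell pipeline is attractive here.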
173-174: Added thread configuration to improve flexibility.

Adding the `threads` parameter allows configurable thread usage in the assess_quality rule and ensures the thread value is properly passed to the falco command on line 179.
179-179: Updated falco command to use the thread parameter.

The falco command now correctly uses the thread parameter value, ensuring the thread configuration is properly utilized by the underlying tool.
Actionable comments posted: 0
♻️ Duplicate comments (1)
harpy/scripts/demultiplex_gen1.py (1)
113-114: 🛠️ Refactor suggestion: Use context managers for file handling.

The code opens output files for each sample but doesn't use context managers (`with` statements), which can lead to resource leaks if an exception occurs between opening and closing these files:

```diff
- R1_output[sample] = open(f"{outdir}/{sample}.{part}.R1.fq", 'w')
- R2_output[sample] = open(f"{outdir}/{sample}.{part}.R2.fq", 'w')
```

While you've added explicit `.close()` calls below, using a context manager would be more robust. Consider `contextlib.ExitStack` to manage multiple files:

```python
import contextlib

# At the beginning of the function
with contextlib.ExitStack() as stack:
    R1_output = {}
    R2_output = {}
    for sample in samples:
        R1_output[sample] = stack.enter_context(open(f"{outdir}/{sample}.{part}.R1.fq", 'w'))
        R2_output[sample] = stack.enter_context(open(f"{outdir}/{sample}.{part}.R2.fq", 'w'))

    # Rest of your processing...
# No need for explicit close() calls
```
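As a runnable illustration of this pattern (the directory handling, sample names, and written records below are hypothetical, not the script's actual logic):

```python
import contextlib
import os

# Open one output handle per sample under a single ExitStack; every handle
# is closed when the with-block exits, even if an exception is raised inside.
def write_demo(outdir, samples):
    with contextlib.ExitStack() as stack:
        handles = {
            s: stack.enter_context(open(os.path.join(outdir, f"{s}.R1.fq"), "w"))
            for s in samples
        }
        for s, fh in handles.items():
            print(f"@{s}_read1", file=fh)
    # By this point the stack has closed every handle.
    return [fh.closed for fh in handles.values()]
```

Calling `write_demo(tmpdir, ["sampleA", "sampleB"])` returns `[True, True]`, confirming all handles were closed without any explicit `.close()` calls.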
🧹 Nitpick comments (1)
harpy/scripts/demultiplex_gen1.py (1)
156-158: File closing is good, but context managers would be better.

While explicitly closing files is necessary with the current approach, using context managers (as suggested above) would handle this automatically, even in case of exceptions.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- harpy/scripts/demultiplex_gen1.py (3 hunks)
🔇 Additional comments (1)
harpy/scripts/demultiplex_gen1.py (1)
139-140: LGTM: File writing approach simplified.

The direct file writing is cleaner than using subprocess pipes. This change aligns with the PR objective of reverting to the initial split2 method.