Hello,
In my Illumina NovaSeq data, I have many reads consisting of G and C homopolymers. I used the fastp --trim_poly_g option.
However, this option only detects reads ending in at least 10 Gs and trims those trailing Gs. If a read is made up entirely of Gs, it still remains, just 10 base pairs shorter. In addition, if G homopolymers appear in the middle of a read, this option does not remove them.
I could easily write a Python script to filter reads based on GC%, but given that I have 300 million reads, it would probably take forever to finish.
Is there a more efficient way you would suggest to do this filtering?
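For reference, this is a rough sketch of the kind of Python filter I have in mind. The thresholds (GC fraction, homopolymer run length) and file names are placeholders, not validated values, and a plain single-threaded loop like this is exactly what I worry will be too slow at 300 million reads:

```python
#!/usr/bin/env python3
"""Sketch: drop FASTQ reads that look like G/C homopolymer artifacts."""
import gzip
import re
import sys

MAX_GC_FRACTION = 0.90      # placeholder: discard reads that are almost entirely G/C
MAX_HOMOPOLYMER_RUN = 15    # placeholder: discard reads containing a long G or C run

HOMOPOLYMER_RE = re.compile(
    r"G{%d,}|C{%d,}" % (MAX_HOMOPOLYMER_RUN, MAX_HOMOPOLYMER_RUN)
)


def keep(seq: str) -> bool:
    """Return True if the read passes both the GC% and homopolymer filters."""
    if not seq:
        return False
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    if gc > MAX_GC_FRACTION:
        return False
    if HOMOPOLYMER_RE.search(seq):
        return False
    return True


def main(in_path: str, out_path: str) -> None:
    opener = gzip.open if in_path.endswith(".gz") else open
    with opener(in_path, "rt") as fin, gzip.open(out_path, "wt") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]  # FASTQ: 4 lines per read
            if not record[0]:
                break
            if keep(record[1].strip()):
                fout.writelines(record)


if __name__ == "__main__":
    # usage (hypothetical paths): filter_gc.py input.fastq.gz filtered.fastq.gz
    main(sys.argv[1], sys.argv[2])
```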