Abstract
The rapid growth of large-scale next-generation sequencing (NGS) data has created an increasing demand for efficient preprocessing tools. Removal of polymerase chain reaction (PCR) duplicates is a critical step in many NGS data processing pipelines, as it reduces amplification bias. Numerous de novo PCR duplicate removal tools are currently available that cluster identical or highly similar reads without reference genome alignment. However, their practical application to large-scale NGS data (100 GB and more) is hampered by significant computational requirements, in particular main memory (RAM) usage comparable to the original file sizes. Processing large datasets generated by modern high-throughput techniques such as Hi-C or RNA-chromatin interaction sequencing can therefore require hundreds of gigabytes of RAM. Here, we present Fastq-dupaway, a new tool for efficient PCR duplicate removal from both single-end and paired-end sequencing data (https://github.com/AndrewSigorskih/fastq-dupaway). Its key innovation lies in its primary operational modes, which are designed to use a small, parameterizable amount of RAM (2-10 GB) independent of input data size, at the cost of requiring approximately 2× the input file size in disk space. This enables the processing of very large datasets even on standard personal computers. Fastq-dupaway matches or exceeds (by up to threefold) the processing speed of major de novo deduplication tools while maintaining the same level of duplicate removal.