Abstract
BACKGROUND: QRS complex detection is a key processing step in automated ECG analysis and determines its overall quality. The purpose of this paper is to study the detection performance of probably the most frequently implemented ready-to-use QRS detector in the presence of noise and with a tightened temporal tolerance of detection points.

METHODS: We applied the commonly used detection statistics (Detection Error Rate, Sensitivity, Positive Predictive Value, and F1 score), but re-defined a true positive detection in terms of a variable time jitter allowed between detected and reference points. We also applied controlled levels of mixed noise to assess the detector's performance under realistic conditions.

RESULTS: We found the following: (1) the detector under test showed a considerable drop in quality when the allowed jitter was reduced from 97.23 ms (DER = 8.08%) to 86.12 ms (DER = 67.22%), which means that the time series of detection points is not accurate enough to be used directly for ECG time analysis; (2) with the allowed jitter set to 163.90 ms and an increasing noise level (SNR from 20 dB down to -7.96 dB), the detection quality dropped (DER from 0.98% to 57.13%, respectively); however, an analysis of individual files revealed records in which the algorithm performed better in the presence of noise; (3) a step-by-step code-execution analysis of the ECG strips where this better performance was most prominent showed that an imprecise definition of the local maximum was the cause of the DER errors.

CONCLUSIONS: Our research clearly indicates that selecting a QRS-detection algorithm based solely on the DER, Se, and PPV detection statistics may be incorrect. Two equally important detection quality parameters are the change in DER as the jitter requirement is tightened, and the robustness of DER, Se, and PPV to variations in noise level (the immunity of the algorithm's detection points to noise).
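The jitter-tolerant true-positive definition and the detection statistics named in METHODS can be sketched as follows. This is a minimal illustration under our own assumptions: annotations are given in milliseconds, matching is greedy nearest-neighbor within the tolerance, and the function names are hypothetical; the paper's exact matching procedure may differ.

```python
def match_detections(ref_ms, det_ms, tol_ms):
    """Greedily pair each reference point with the nearest unused detection
    within tol_ms (the allowed jitter); return (TP, FN, FP) counts."""
    det = sorted(det_ms)
    used = [False] * len(det)
    tp = 0
    for r in sorted(ref_ms):
        best, best_dist = None, tol_ms
        for i, d in enumerate(det):
            if used[i]:
                continue
            dist = abs(d - r)
            if dist <= best_dist:
                best, best_dist = i, dist
        if best is not None:
            used[best] = True
            tp += 1
    fn = len(ref_ms) - tp   # reference beats with no detection within tolerance
    fp = len(det) - tp      # detections that matched no reference beat
    return tp, fn, fp

def detection_stats(tp, fn, fp):
    """Standard QRS detection statistics: DER, Se, PPV, F1."""
    total = tp + fn                                   # total reference beats
    se = tp / total if total else 0.0                 # Sensitivity
    ppv = tp / (tp + fp) if tp + fp else 0.0          # Positive Predictive Value
    der = (fp + fn) / total if total else 0.0         # Detection Error Rate
    f1 = 2 * se * ppv / (se + ppv) if se + ppv else 0.0
    return der, se, ppv, f1
```

Tightening `tol_ms` with fixed annotations reproduces the effect described in RESULTS: detections that land near, but not exactly on, the reference points flip from true positives to paired FP/FN errors, so DER rises even though the detector's output has not changed.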