Abstract
Background/Objectives: Variations in image clarity across different optical coherence tomography (OCT) devices, along with inconsistent delineation of retinal nerve fiber layer (RNFL) boundaries, pose a challenge to achieving consistent glaucoma diagnoses. Recently, deep learning methods such as generative adversarial networks (GANs) for image translation have been gaining attention. This paper introduces deep learning methods that transform low-clarity images from one OCT device into high-clarity images matching those of another device, while concurrently estimating the RNFL segmentation lines in the enhanced images. Methods: We applied two deep learning methods, pix2pix and CycleGAN, and compared their performance by evaluating the similarity between the generated and actual images, as well as by comparing the generated RNFL boundary delineations with the actual boundaries. Results: Image conversion performance was compared using two criteria: Fréchet Inception Distance (FID) and curve dissimilarity. In the comparison of FID values, the CycleGAN method showed significantly lower values than the pix2pix method (p-value < 0.001). In terms of curve similarity, the CycleGAN method also yielded boundaries closer to the actual curves than both the curves manually annotated on the original low-clarity images and those produced by the pix2pix method (p-value < 0.001). Conclusions: We demonstrated that the CycleGAN method produces more consistent and precise outcomes in the converted images than the pix2pix method. The resulting segmentation lines showed a high degree of similarity to those manually annotated by clinical experts in high-clarity images, surpassing the boundary accuracy observed in the original low-clarity scans.
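The FID criterion mentioned above is, at its core, the Fréchet distance between two Gaussians fitted to image feature embeddings (in practice, Inception-v3 activations of the real and generated images). As a minimal illustrative sketch, not the evaluation pipeline used in this study, the distance between two fitted Gaussians can be computed as follows; the helper name `frechet_distance` is hypothetical:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2))."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; disp=False also
    # returns an error estimate, which we discard here.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # drop negligible imaginary parts
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Identical distributions give distance 0; shifting the mean by a
# vector of length 5 gives distance 25 (squared Euclidean term).
mu, sigma = np.zeros(2), np.eye(2)
print(frechet_distance(mu, sigma, mu, sigma))                      # 0.0
print(frechet_distance(mu, sigma, np.array([3.0, 4.0]), sigma))    # 25.0
```

A lower value indicates that the generated-image feature distribution is closer to the real-image distribution, which is why a significantly lower FID favors one translation method over another.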