Abstract
Imbalanced datasets pose a significant challenge in machine learning, degrading model performance and predictive accuracy. Classifiers tend to favor the majority class, leading to biased training and poor generalization on minority classes. An initial version of the generative model incorrectly treated the target variable as an independent feature during data generation, producing suboptimal synthetic samples; the model was therefore adjusted to handle target variable generation more effectively and mitigate this issue. This study employed advanced techniques for synthetic data generation, namely the Synthetic Minority Oversampling Technique (SMOTE) and Adaptive Synthetic Sampling (ADASYN), to improve the representation of minority classes by generating synthetic samples. In addition, data augmentation with Deep Conditional Tabular Generative Adversarial Networks (Deep-CTGANs) integrated with ResNet was used to improve model robustness and overall generalizability. For classification, TabNet, a model tailored specifically to tabular data, proved highly effective: its sequential attention mechanism dynamically selects salient features at each decision step, making it well suited to complex and imbalanced datasets. Model performance was evaluated under the train-on-synthetic, test-on-real (TSTR) protocol. The framework was validated on the COVID-19, Kidney, and Dengue datasets, achieving test accuracies of 99.2%, 99.4%, and 99.5%, respectively. Furthermore, similarity scores between the real and synthetic data of 84.25%, 87.35%, and 86.73% for the COVID-19, Kidney, and Dengue datasets, respectively, confirmed the reliability of the synthetic data. TabNet consistently achieved substantially higher F1-scores than baseline models such as Random Forest, XGBoost, and KNN, underscoring the importance of selecting appropriate synthetic data augmentation techniques and classifiers.
Additionally, SHapley Additive exPlanations (SHAP)-based explainable AI tools were used to interpret model behavior, providing insights into feature importance and its influence on predictions. These findings confirm that the proposed approach enhances accuracy, robustness, and interpretability, offering a valuable solution for addressing data imbalance in classification tasks.