Abstract
The central obstacle in Open Set Recognition (OSR) is balancing the minimization of classification error on known data against the open space risk posed by unknown data. To address this trade-off, we present three frameworks: the Positive-Negative Prototypes Fusion Framework (PNPFF), its adversarial extension (APNPFF), and an enhanced version, APNPFF++. PNPFF incorporates multiple positive prototypes to capture intra-class variability, together with a single negative prototype that strengthens intra-class compactness and inter-class separation. This design reduces the classification risk on known data while reserving feature space for unknowns, partially mitigating open space risk. Building on this, APNPFF and APNPFF++ employ Generative Adversarial Networks (GANs) and manifold mixup, respectively, to simulate two complementary portions of unknown-class data, further reducing open space risk and improving the model's generalization. Comprehensive experiments on multiple benchmark datasets demonstrate the effectiveness and robustness of the proposed methods, highlighting their superiority in recognizing both known and unknown classes.