Abstract
In the evolving landscape of artificial intelligence (AI), the assumption that more data lead to better models has driven unchecked reliance on synthetic data to augment training datasets. Although synthetic data address crucial shortages of real-world training data, their overuse might propagate biases, accelerate model degradation, and compromise generalisability across populations. A concerning consequence of the rapid adoption of synthetic data in medical AI is the emergence of synthetic trust: an unwarranted confidence in models trained on artificially generated datasets that fail to preserve clinical validity or demographic realities. In this Viewpoint, we advocate for caution in using synthetic data to train clinical algorithms. We propose actionable safeguards for synthetic medical AI, including standards for training data, fragility testing during development, and deployment disclosures of synthetic origins to ensure end-to-end accountability. These safeguards uphold data integrity and fairness in clinical applications using synthetic data, offering new standards for responsible and equitable use of synthetic data in health care.