Abstract
OBJECTIVE: The integration of modern AI systems into clinical practice is transforming how medicine is practiced, yet most research focuses on the efficacy and safety of the AI itself. The interplay between human agents and AI systems will equally shape the actual impact of such systems.

METHODS: This study simulated human decision-making using 27 agents characterized by varying levels of competence, certainty, and trust. Agents completed binary and three-option decision tasks, both with and without AI assistance. AI models varied in competence (0.3-0.9) and, in some simulations, expressed confidence signals that influenced human trust dynamically. Each scenario comprised 10,000 simulated decisions per agent. In AI-assisted conditions, decisions were modulated by the agent's baseline trust and, in the conditional-trust setting, by the AI's expressed confidence.

RESULTS: AI support significantly improved decision accuracy for most agents, especially those with high competence but low certainty. In binary tasks, agents achieved up to 150% relative improvement in decision accuracy when AI competence was ≥0.6. In three-option tasks, even lower-performing AI (e.g., 0.4 competence) improved decision outcomes. Conditional-trust simulations showed further gains, particularly among agents with moderate baseline trust, because dynamic trust adjustment based on AI confidence reduced over-reliance on poor AI recommendations.

DISCUSSION: The results demonstrate that AI assistance, particularly when paired with confidence calibration, enhances human decision-making, especially for uncertain or moderately skilled users. However, over-trusting low-competence AI can impair outcomes for high-performing agents. Tailored AI-human collaboration strategies are essential for optimizing clinical decision support.
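As an illustration of the simulation design described in METHODS, the following minimal sketch models one AI-assisted binary-decision scenario. The function name, parameterization, and the specific deferral rule (an uncertain agent defers to the AI with probability equal to its trust) are assumptions for illustration, not the study's actual implementation:

```python
import random

def simulate_agent(agent_comp, agent_cert, trust, ai_comp, n=10_000, seed=0):
    """Sketch of one AI-assisted binary-decision scenario.

    agent_comp : probability the agent's own choice is correct
    agent_cert : probability the agent sticks with its own choice
    trust      : probability an uncertain agent adopts the AI's
                 recommendation when the two disagree
    ai_comp    : probability the AI's recommendation is correct
    Returns the fraction of correct decisions over n trials.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        own_right = rng.random() < agent_comp
        ai_right = rng.random() < ai_comp
        if own_right == ai_right:          # agreement: same outcome either way
            correct += own_right
        elif rng.random() < agent_cert:    # certain agent keeps its own answer
            correct += own_right
        elif rng.random() < trust:         # uncertain agent defers to the AI
            correct += ai_right
        else:                              # uncertain but distrustful: keep own
            correct += own_right
    return correct / n
```

Under this rule, a fully uncertain, fully trusting agent (certainty 0, trust 1) paired with a 0.9-competence AI converges toward the AI's accuracy, while a fully certain agent is unaffected by AI support, mirroring the pattern in RESULTS that gains concentrate among uncertain agents.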