Abstract
According to the results of our study, ChatGPT shows variable performance across different aspects of medical practice. It did not reach physician-level performance in communicating with patients or conducting in-depth questioning, although in most cases it did inquire about patients' past psychiatric illnesses and medical histories. In cases requiring risk assessment, however, it fell short of the expected level of success and lacked the depth required for an effective evaluation. While it functions effectively in informing differential diagnoses, its ability to make complex clinical decisions and reach definitive diagnoses is limited. It generally performed adequately in determining which medications to prescribe and at what dosages, but its capacity to warn about medication side effects was relatively weak. Although ChatGPT performs strongly in providing supportive treatment recommendations, it shows a marked inability in more complex clinical processes such as deciding on patient hospitalization. These results suggest that AI-based systems can be useful as assistive tools in medical practice, but should be used with an awareness of their limitations.