Abstract
Amidst the recent application of ChatGPT (Chat Generative Pre-trained Transformer) to scientific writing and biomedical research projects, this AI-powered chatbot is prone to "artificial hallucinations", as the information it provides at times does not correspond to real-world scenarios and requires continuous monitoring and re-evaluation through human intervention. Though it has been mooted as a new panacea for students, researchers, and academicians seeking to produce credible scientific writing, the issue remains: "Is the information provided trustworthy?" Identifying this research gap, we intended to develop a concept-centric synthesis matrix framework (SMF) to identify the uses and perils of employing ChatGPT as a credible tool in scientific writing for academics and research. The Google Scholar and PubMed databases were searched using strings with Boolean terms such as "ChatGPT", "Scientific writing", "ChatGPT and scientific writing", "ChatGPT and biomedical research", "Artificial hallucinations and ChatGPT", "Advantages and ChatGPT", "Disadvantages and ChatGPT", and "Accuracy of ChatGPT and scientific writing". We sourced articles, chiefly full text, written in the English language. The review was further assessed using a three-category rubric applying specific parameters: coverage, synthesis, and significance of the included studies. This review highlighted inferences derived from evidence-based studies in which ChatGPT raised serious concerns related to plagiarism, ethics, bias, incorrect content, and similar issues when used for scientific writing. Hence, it cannot be considered a reliable tool and requires human supervision.
This review additionally emphasized the importance of applying the SMF as a guiding principle in scientific writing; accordingly, it should be incorporated into curriculum design by educationalists and policy reformers in higher education as a quality-enhancement initiative in the discipline of research.