Explainable AI chatbots towards XAI ChatGPT: A review


Abstract

Advances in artificial intelligence (AI) have had a major impact on natural language processing (NLP), all the more so with the emergence of large language models such as ChatGPT. This paper provides a critical review of explainable AI (XAI) methodologies for AI chatbots, with a particular focus on ChatGPT. Its main objectives are to investigate methods that improve the explainability of AI chatbots, identify their challenges and limitations, and explore future research directions. These goals reflect the need for transparency and interpretability in AI systems, which build user trust and enable accountability. Interdisciplinary approaches, such as hybrid methods that combine knowledge graphs with ChatGPT, not only enhance explainability but also address industry demands for explainable, user-centred design. The review then discusses the trade-off between explainability and performance, the role of human judgement, and the future of verifiable AI. These insights can guide the development of transparent, reliable and efficient AI chatbots.
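To make the knowledge-graph hybrid idea mentioned above concrete, the sketch below shows one way such grounding might work: facts are retrieved from a small in-memory triple store and prepended to the prompt so that the chatbot's answer can be traced back to explicit, citable statements. This is a hypothetical illustration, not an implementation from the paper; the triple store, matching rule, and prompt format are all assumptions, and a real system would query an external knowledge base and call an actual language-model API.

```python
# Hypothetical sketch: grounding a chatbot prompt in knowledge-graph triples
# so that each claim in the answer is traceable to an explicit fact.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

# Toy in-memory knowledge graph; a real system would query an external
# knowledge base (e.g. Wikidata) instead.
KG: List[Triple] = [
    ("ChatGPT", "developed_by", "OpenAI"),
    ("ChatGPT", "based_on", "GPT architecture"),
    ("SHAP", "is_a", "feature-attribution method"),
]

def retrieve_triples(question: str, kg: List[Triple]) -> List[Triple]:
    """Return triples whose subject appears in the question (naive match)."""
    q = question.lower()
    return [t for t in kg if t[0].lower() in q]

def build_grounded_prompt(question: str, kg: List[Triple]) -> str:
    """Prepend retrieved facts so the model is asked to cite them."""
    facts = retrieve_triples(question, kg)
    fact_lines = "\n".join(f"- {s} {r} {o}" for s, r, o in facts)
    return (
        "Answer using ONLY the facts below and cite each fact you use.\n"
        f"Facts:\n{fact_lines}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt("Who developed ChatGPT?", KG)
print(prompt)
```

Because the supporting triples appear verbatim in the prompt, a reviewer can check every cited fact against the graph, which is the kind of traceability that distinguishes this hybrid design from an opaque end-to-end model.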
