Survey on Explainable AI: Techniques, challenges and open issues
Published in Expert Systems with Applications, 2024
Recommended citation: Abusitta, Adel, Miles Q. Li, and Benjamin CM Fung. "Survey on Explainable AI: Techniques, challenges and open issues." Expert Systems with Applications (2024): 124710. https://www.sciencedirect.com/science/article/pii/S095741742401577X
Artificial Intelligence (AI) has become an important component of many software applications and has reached a point where it can make complex and critical decisions in our lives. However, the success of most AI-powered applications rests on black-box approaches (e.g., deep neural networks) that produce learned models capable of prediction and decision-making. While these advanced models can achieve high accuracy, they are generally unable to explain their decisions (e.g., predictions) to users. As a result, there is a pressing need for explainable machine learning systems so that they can be trusted by governments, organizations, industries, and users. This paper classifies and compares the main findings in the domain of explainable machine learning and deep learning. We also discuss the application of Explainable AI (XAI) in sensitive domains such as cybersecurity. In addition, we characterize each reviewed article on the basis of the methods and techniques used to achieve XAI. This, in turn, allows us to discern the strengths and limitations of existing XAI techniques. We finally discuss some substantial challenges and future research directions related to XAI.