Security-based explainable artificial intelligence (XAI) in healthcare system
Citation
Guruler, H., Islam, N., & Din, A. (2023). Security-based explainable artificial intelligence (XAI) in healthcare system. Explainable Artificial Intelligence in Medical Decision Support Systems, 229.
Abstract
Explainable Artificial Intelligence (XAI) is one of the most active research areas within Artificial Intelligence (AI). The main objective of XAI is to explain deep learning (DL) models, that is, to make artificial models understandable to humans, including users, developers, and policymakers. XAI is especially important in critical domains such as security and healthcare. Its purpose is to provide a clear answer to the question of how a model reached its decision. Such an explanation matters before any system decision is acted upon: if a system produces a decision, inside knowledge of how the model arrived at it is required. The decision may be positive or negative, but it is more important to know which characteristics it was based on. A model's decision can be trusted only when the internal structure of the DL model is understood. DL models are generally black-box models, so for security purposes it is essential to explain a system's internal reasoning behind any decision. Security is crucial in healthcare, as in any other domain. The objective of this research is to support security decisions with XAI, which is a significant challenge; XAI can take security systems to the next level. In medical and healthcare security, when human actions are recognized using transfer learning, one pre-trained model may perform well for a given action while another pre-trained model achieves lower accuracy on the same action. This is the black-box model problem: the internal mechanisms of both models must be examined for the same action. Why does one model handle an action well while another model handles the same action poorly? A model-specific, post-hoc interpretability approach is needed here to reveal the internal structure and characteristics of both models for the same action.
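To illustrate the kind of model-specific, post-hoc comparison the abstract describes, the sketch below applies a hand-rolled Grad-CAM to two ImageNet-pretrained backbones on the same input frame and contrasts their spatial evidence. It is a minimal illustration, not the chapter's actual code: the file name "action_frame.jpg", the choice of ResNet-18 and MobileNetV2 as the two pre-trained models, and the use of Grad-CAM as the post-hoc method are all assumptions, and the torchvision weights argument assumes torchvision 0.13 or newer.

```python
# Illustrative sketch (assumed setup, not the authors' implementation):
# compare post-hoc Grad-CAM explanations from two pre-trained backbones
# on the same action frame to see which spatial evidence each relies on.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image


def grad_cam(model, target_layer, image_tensor, class_idx=None):
    """Compute a Grad-CAM heatmap for one image and one model."""
    activations, gradients = [], []

    def forward_hook(module, inp, out):
        activations.append(out)
        # Capture gradients flowing back into the target layer's output.
        out.register_hook(lambda grad: gradients.append(grad))

    handle = target_layer.register_forward_hook(forward_hook)
    model.eval()
    logits = model(image_tensor.unsqueeze(0))            # (1, num_classes)
    idx = class_idx if class_idx is not None else logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, idx].backward()                            # gradient of target class score
    handle.remove()

    acts, grads = activations[0], gradients[0]           # (1, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)       # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1))            # (1, H, W)
    cam = cam / (cam.max() + 1e-8)                       # normalize to [0, 1]
    return cam.squeeze(0).detach(), idx


# Hypothetical input frame from an action-recognition dataset.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
frame = preprocess(Image.open("action_frame.jpg").convert("RGB"))

# Two pre-trained backbones that a transfer-learning pipeline might reuse.
resnet = models.resnet18(weights="IMAGENET1K_V1")
mobilenet = models.mobilenet_v2(weights="IMAGENET1K_V1")

cam_resnet, cls_r = grad_cam(resnet, resnet.layer4[-1], frame)
cam_mobilenet, cls_m = grad_cam(mobilenet, mobilenet.features[-1], frame)

# Differences between the two heatmaps show which image regions each model
# treats as evidence, which helps explain why their accuracies diverge.
print("ResNet-18 prediction:", cls_r, "CAM shape:", tuple(cam_resnet.shape))
print("MobileNetV2 prediction:", cls_m, "CAM shape:", tuple(cam_mobilenet.shape))
```

Inspecting where the two heatmaps disagree is one concrete way to probe why one pre-trained model recognizes an action reliably while the other does not, in the spirit of the model-specific post-hoc interpretability discussed above.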