Abstract: Medical image data are accumulating rapidly, and traditional image analysis based on manual reading has imposed a heavy burden on doctors. By providing automatic or semi-automatic auxiliary diagnostic methods, computer vision has played an important role in relieving the pressure of manual reading, improving diagnostic accuracy, and promoting the standardization of medical procedures. At present, deep convolutional neural networks have achieved outstanding performance in various medical image processing tasks, but the unexplainability of the deep learning "black box" has become a major obstacle to fully exploiting the potential of intelligent medical diagnosis. This paper summarizes recent research progress on deep learning interpretability in medical image processing. First, we describe the current applications and problems of deep learning in the medical field and discuss what interpretability means for neural networks. Then, starting from common deep learning interpretability methods, we review the research progress on interpretability in medical image data processing. Finally, we discuss development trends in interpretable medical image processing.