Hybrid Deep Learning Model for Answering Visual Medical Questions
Abstract
Due to the growing volume of electronic documents containing medical information, searching for specific information is often complex and time-consuming, which has prompted the development of new tools to address this issue. Automated visual question answering (VQA) systems, which take an image and a question as input and combine them to generate a text-based answer, are increasingly challenging to develop. Because of the enormous number of questions and the limited number of specialists, many questions remain unanswered. One way to address this problem is to use automatic question classifiers that route queries to experts according to their subject preferences. To this end, we propose a VQA approach based on a hybrid deep learning model. The approach consists of three steps: (1) classification of medical questions using a BERT model; (2) extraction of medical image features using a hybrid deep learning model; and (3) extraction of text features using a Bi-LSTM model. Finally, to predict the appropriate answer, our approach uses a KNN model. In addition, this study examines the influence of the Adam, AdaGrad, stochastic gradient descent (SGD), and RMSProp optimization algorithms on network performance; the experiments showed that the Adam and SGD optimizers consistently produced the best results. Experiments on the ImageCLEF 2019 dataset show that the proposed method considerably improves BLEU and WBSS scores.
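
As a minimal illustration of the pipeline described above (not the authors' implementation), the following sketch pairs a pretrained BERT encoder with a VGG16 backbone standing in for the hybrid image extractor, runs a Bi-LSTM over the BERT token embeddings, and predicts answers with a scikit-learn KNN classifier over the fused features. All layer sizes, the choice of VGG16, and the encode_pair helper are illustrative assumptions; the paper additionally uses BERT to classify questions, whereas here it only supplies token embeddings.

    # Illustrative sketch of the described VQA pipeline.
    # Image branch: pretrained CNN features (VGG16 is an assumption).
    # Text branch: Bi-LSTM over BERT token embeddings.
    # Answer prediction: KNN over the fused (image + text) features.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from transformers import BertTokenizer, BertModel
    from sklearn.neighbors import KNeighborsClassifier
    from PIL import Image

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    bert = BertModel.from_pretrained("bert-base-uncased").eval()

    # Image branch: keep VGG16 up to its 4096-d penultimate layer.
    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
    vgg.classifier = nn.Sequential(*list(vgg.classifier.children())[:-1])
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Text branch: Bi-LSTM over 768-d BERT embeddings (512-d output).
    bilstm = nn.LSTM(input_size=768, hidden_size=256,
                     batch_first=True, bidirectional=True)

    def encode_pair(image_path: str, question: str) -> torch.Tensor:
        """Return one fused feature vector for an (image, question) pair."""
        with torch.no_grad():
            img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
            img_feat = vgg(img)                      # (1, 4096)
            toks = tokenizer(question, return_tensors="pt",
                             truncation=True, max_length=32)
            hidden = bert(**toks).last_hidden_state  # (1, seq_len, 768)
            _, (h_n, _) = bilstm(hidden)             # h_n: (2, 1, 256)
            txt_feat = torch.cat([h_n[0], h_n[1]], dim=1)  # (1, 512)
        return torch.cat([img_feat, txt_feat], dim=1).squeeze(0)  # (4608,)

    # Fit KNN on fused features of a labeled training set.
    # train_pairs / train_answers are hypothetical placeholders.
    train_pairs = [("img1.jpg", "is there a fracture?")]
    train_answers = ["yes"]
    X = torch.stack([encode_pair(p, q) for p, q in train_pairs]).numpy()
    knn = KNeighborsClassifier(n_neighbors=1).fit(X, train_answers)
    print(knn.predict([encode_pair("img1.jpg", "is there a fracture?").numpy()]))

Concatenating the two branch outputs into a single vector before the KNN step is one simple fusion choice; the paper's hybrid extractor and its exact fusion scheme may differ.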