Exposure Fusion Framework in Deep Learning-Based Radiology Report Generator

Hilya Tsaniya, Chastine Fatichah, Nanik Suciati


Writing a radiology report is time-consuming and requires experienced radiologists, so a technology that could generate reports automatically would be beneficial. The key problem in developing an automated report-generation system is producing coherent predictive text. To accomplish this, the input image must be of good quality so that the model can learn to interpret its regions, especially in medical images, which tend to be noise-prone during acquisition. This research uses the Exposure Fusion Framework (EFF) method to enhance the quality of medical images and thereby improve the model's performance in producing coherent predictive text. The model is an encoder-decoder architecture with visual features extracted by a pre-trained CheXNet, text features from Bidirectional Encoder Representations from Transformers (BERT) embeddings, and a Long Short-Term Memory (LSTM) network as the decoder. With EFF enhancement, the model obtained a 7% better result than without enhancement, evaluated using the Bilingual Evaluation Understudy (BLEU) metric with n-gram 4. It can be concluded that the enhancement method effectively increases the model's performance.
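The BLEU metric with n-gram 4 used for evaluation scores a generated report against a reference by combining modified n-gram precisions (up to 4-grams) with a brevity penalty. The following is a minimal pure-Python sketch of sentence-level BLEU-4; the add-one smoothing used here is an assumption for illustration, since the paper does not state its exact smoothing settings.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu4(candidate, reference):
    """Sentence-level BLEU with uniform weights over 1- to 4-grams.

    Uses add-one smoothing on each precision so that a missing
    n-gram order does not zero out the whole score (an illustrative
    choice, not necessarily the paper's configuration).
    """
    precisions = []
    for n in range(1, 5):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # clipped overlap: each candidate n-gram counts at most
        # as often as it appears in the reference
        overlap = sum((cand_counts & ref_counts).values())
        total = max(sum(cand_counts.values()), 1)
        precisions.append((overlap + 1) / (total + 1))

    # geometric mean of the four precisions
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / 4)

    # brevity penalty punishes candidates shorter than the reference
    if len(candidate) > len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * geo_mean
```

For a perfect match the score is 1.0; a partial overlap such as comparing "the heart is normal" against "the heart size is within normal limits" yields a score strictly between 0 and 1, which is the range in which the paper's 7% improvement is measured.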


Exposure Fusion Framework; ChexNet; Medical Report Generator; LSTM; Learning-Based Radiology




DOI: http://dx.doi.org/10.12962/j20882033.v33i2.13572




IPTEK Journal of Science and Technology by Lembaga Penelitian dan Pengabdian kepada Masyarakat, ITS is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://iptek.its.ac.id/index.php/jts.