Explainable Artificial Intelligence (XAI) towards Model Personality in NLP task

Dimas Adi, Nadhila Nurdin


In recent years, deep learning for Natural Language Processing, especially sentiment analysis, has achieved significant progress and success. This is due to the availability of large amounts of text data and the ability of deep learning techniques to produce sophisticated predictions from diverse data features. However, sophisticated predictions that are not accompanied by sufficient information about what is happening inside the model are a major setback. Therefore, the rapid development of deep learning models must be accompanied by the development of XAI methods, which help reveal what drives a model to its predictions. The present research proposes a simple Bidirectional LSTM and a complex Bi-GRU-LSTM-CNN model for sentiment analysis. Both models were analyzed further using three different XAI methods (LIME, SHAP, and Anchor), which were applied to and compared across the two proposed models, showing that XAI is not limited to giving information about what happens inside a model but can also help us understand and distinguish models' personality and behaviour.
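To illustrate the perturbation idea behind explainers such as LIME, the sketch below masks one word at a time and measures how much each omission shifts a classifier's score. This is a minimal sketch, not the paper's implementation: `toy_sentiment_score`, its lexicon, and `word_importance` are hypothetical stand-ins for the trained Bi-LSTM and the LIME library.

```python
# Sketch: LIME-style word importance via leave-one-out perturbation.
# The lexicon model below is a hypothetical stand-in for a trained
# sentiment classifier; real LIME fits a local linear surrogate instead.

LEXICON = {"great": 1.0, "love": 0.8, "terrible": -1.0, "delay": -0.6}

def toy_sentiment_score(text):
    """Toy classifier: mean lexicon score over the words of the text."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(LEXICON.get(w, 0.0) for w in words) / len(words)

def word_importance(text, predict=toy_sentiment_score):
    """Importance of each word = score(full text) - score(text without it)."""
    words = text.split()
    base = predict(text)
    importance = {}
    for i, w in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        importance[w] = base - predict(reduced)
    return importance

imp = word_importance("great flight despite the delay")
# "great" receives a positive importance, "delay" a negative one,
# mirroring how an XAI method attributes a sentiment prediction to words.
```

Comparing such per-word attributions between two models on the same inputs is what lets the explanations expose differences in model "personality".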


Deep learning, Explainable artificial intelligence, Natural language processing, Sentiment analysis





DOI: http://dx.doi.org/10.12962/j23378557.v7i1.a8989




This work is licensed under a Creative Commons Attribution 4.0 International License. IPTEK The Journal of Engineering is published by Pusat Publikasi Ilmiah, Institut Teknologi Sepuluh Nopember.


Please contact us for orders or further information: email iptek.joe[at]gmail.com, Fax/Tel.: 031 5992945. Editorial Office Address: Pusat Riset Building, 6th floor, ITS Campus, Sukolilo, Surabaya 60111, Indonesia.