Research Article

A New Model-Agnostic Method and Implementation for Explaining the Prediction on Finance Data

Year 2022, Issue: 38, 32 - 39, 31.08.2022
https://doi.org/10.31590/ejosat.1079145

Abstract

Artificial neural networks (ANNs) are widely used in mission-critical systems that directly affect human life, such as healthcare, self-driving vehicles, and the military, and in making predictions on the data these systems produce. However, the black-box nature of ANN algorithms hinders their adoption in mission-critical applications and raises ethical and forensic concerns that erode trust. As Artificial Intelligence (AI) continues to develop and occupies an ever larger place in our lives, the results produced by these algorithms need to become more explainable and understandable. Explainable Artificial Intelligence (XAI) is the field of AI comprising tools, techniques, and algorithms that generate high-quality, interpretable, intuitive, human-understandable explanations of AI decisions. In this study, a new model-agnostic method that can be applied in the financial sector is developed, using stock market data as the case for explainability. The method reveals the relationship between the inputs given to the trained model and the outputs obtained from it. All inputs are evaluated both individually and in combination, and the evaluation results are presented in tables and graphs. The proposed approach also helps build an explainable layer for other machine learning algorithms and application areas.
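
As a rough illustration of what evaluating inputs "individually and in combination" against a black-box model can look like, the sketch below perturbs each input feature alone and in pairs and measures how much the model's prediction error changes. This is a minimal, assumption-laden sketch: the feature names, the synthetic data, the MLP regressor, and the permutation-based scoring are illustrative choices, not the exact procedure described in the paper.

```python
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Toy stand-in for stock-market features (names are hypothetical, for illustration only).
feature_names = ["open", "high", "low", "volume"]
X = rng.normal(size=(500, len(feature_names)))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=500)  # synthetic target

# Any black-box predictor works here; the analysis never inspects its internals.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
baseline = mean_absolute_error(y, model.predict(X))

def permutation_effect(columns):
    """Shuffle the given input columns and report the increase in prediction error."""
    X_perturbed = X.copy()
    for c in columns:
        X_perturbed[:, c] = rng.permutation(X_perturbed[:, c])
    return mean_absolute_error(y, model.predict(X_perturbed)) - baseline

# Single-input evaluation.
for i, name in enumerate(feature_names):
    print(f"{name:>7}: {permutation_effect([i]):+.4f}")

# Combined (pairwise) evaluation.
for i, j in itertools.combinations(range(len(feature_names)), 2):
    print(f"{feature_names[i]}+{feature_names[j]}: {permutation_effect([i, j]):+.4f}")
```

A larger positive effect suggests the model leans more heavily on that input or input combination; tabulating and plotting such scores is one way to produce the kind of tables and graphs the abstract refers to.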

References

  • Samek, W., Wiegand, T., & Müller, K. R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.
  • Holzinger, A. (2018). From Machine Learning to Explainable AI, World Symposium on Digital Intelligence for Systems and Machines, DISA, 11(2), 55–66. https://doi.org/10.1109/DISA.2018.8490530
  • Guo, T., Lin, T., & Antulov-Fantulin, N. (2019). Exploring interpretable LSTM neural networks over multi-variable data. International Conference on Machine Learning, 2494-2504.
  • Peng, J., Zou, K., Zhou, M., Teng, Y., Zhu, X., Zhang, F., & Xu, J. (2021). An Explainable Artificial Intelligence Framework for the Deterioration Risk Prediction of Hepatitis Patients. Journal of Medical Systems, 45-61. https://doi.org/10.1007/s10916-021-01736-5
  • Howard, D., & Edwards, M. A. (2018). Explainable A.I.: The promise of Genetic Programming Multi-run Subtree Encapsulation. International Conference on Machine Learning and Data Engineering (iCMLDE), 158–159. https://doi.org/10.1109/iCMLDE.2018.00037
  • Pierrard, R., Poli, J. & Hudelot, C. (2018). Learning Fuzzy Relations and Properties for Explainable Artificial Intelligence, IEEE International Conference on Fuzzy Systems, FUZZ-IEEE, 1–8. https://doi.org/10.1109/FUZZ-IEEE.2018.8491538
  • Fernandez, A., Herrera, F., Cordon, O., Jesus, M. J., & Marcelloni, F. (2019). Evolutionary Fuzzy Systems for Explainable Artificial Intelligence: Why, When, What for, and Where to? IEEE Computational Intelligence Magazine, 14(1), 69–81. https://doi.org/10.1109/MCI.2018.2881645
  • Zhou, Z., Sun, M., & Chen, J. (2019). Model-Agnostic Approach for Explaining the Predictions on Clustered Data. 2019 IEEE International Conference on Data Mining (ICDM), 1528–1533. https://doi.org/10.1109/ICDM.2019.00202
  • Turek, M. (2021). Defense Advanced Research Projects Agency (DARPA). Retrieved August 11, 2021, from https://www.darpa.mil/program/explainable-artificial-intelligence

Details

Primary Language Turkish
Subjects Engineering
Journal Section Articles
Authors

Samet Öztoprak 0000-0002-0878-5979

Zeynep Orman 0000-0002-0205-4198

Early Pub Date July 26, 2022
Publication Date August 31, 2022
Published in Issue Year 2022 Issue: 38

Cite

APA Öztoprak, S., & Orman, Z. (2022). Finansal Verilere İlişkin Tahminleri Açıklamaya Yönelik Yeni bir Model-Agnostik Yöntem ve Uygulaması. Avrupa Bilim Ve Teknoloji Dergisi(38), 32-39. https://doi.org/10.31590/ejosat.1079145