New models developed for detection of misconceptions in physics with artificial intelligence


Demirezen M. U., Yilmaz Ö., Ince E.

NEURAL COMPUTING & APPLICATIONS, vol.35, no.12, pp.9225-9251, 2023 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 35 Issue: 12
  • Publication Date: 2023
  • DOI: 10.1007/s00521-023-08414-2
  • Journal Name: NEURAL COMPUTING & APPLICATIONS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Biotechnology Research Abstracts, Compendex, Computer & Applied Sciences, Index Islamicus, INSPEC, zbMATH
  • Page Numbers: pp.9225-9251
  • Keywords: Artificial intelligence, Physics misconceptions, Deep learning, Transformer, TSDAE, Natural language processing, CHEMISTRY, KAPPA, CONCEPTIONS, GUIDANCE, FEEDBACK, ONLINE, ESSAYS, LEARN, ATOMS
  • Affiliated with Istanbul University-Cerrahpaşa: Yes

Abstract

Students' misconceptions about various topics in physics have been investigated by many researchers. Detecting misconceptions manually is difficult and time-consuming for a human. Our aim in this study is to identify students' misconceptions about the concept of the atom by machine instead of by human experts. This study proposes two novel methods, a Transformer model and the fastText algorithm, to classify students' answers. Since no Turkish language model currently exists for physics-related questions or the physics domain, we trained a Transformer model from scratch for this domain using transfer learning and domain adaptation techniques. In the second part of this research, we propose an unsupervised learning approach to accurately understand and identify the reasons behind the misconceptions. For this purpose, we used sentence transformers with transformer-based sequential denoising autoencoder (TSDAE) training to obtain vector representations of the sentences. We then applied two clustering algorithms, an agglomerative one and a density-based one, to group similar sentences in a high-dimensional vector space. Again for the first time, unsupervised TSDAE training of sentence transformers was employed for the Turkish language to provide domain adaptation for sentence transformers. Finally, we compared human performance (experts' opinions) with the proposed methods' results for both the classification and the clustering tasks using the kappa metric. According to our results, the proposed methodology distinguished misconceptions with a high accuracy of between 0.97 and 1.00.
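The clustering-and-evaluation step of the pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy vectors stand in for TSDAE sentence embeddings of student answers, and the expert labels, cluster counts, and DBSCAN `eps` value are hypothetical placeholders chosen for the example.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, DBSCAN
from sklearn.metrics import cohen_kappa_score

# Toy stand-ins for TSDAE sentence embeddings of student answers:
# two well-separated groups of 8-dimensional vectors.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=0.1, size=(10, 8))
group_b = rng.normal(loc=5.0, scale=0.1, size=(10, 8))
embeddings = np.vstack([group_a, group_b])

# Agglomerative (hierarchical) clustering into two groups.
agg_labels = AgglomerativeClustering(n_clusters=2).fit_predict(embeddings)

# Density-based clustering; eps is a hypothetical value that would be
# tuned per dataset in practice.
db_labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(embeddings)

# Compare machine clusters against (hypothetical) expert annotations
# with Cohen's kappa, as the paper does for its agreement analysis.
expert = np.array([0] * 10 + [1] * 10)
kappa = cohen_kappa_score(expert, agg_labels)
# |kappa| rather than kappa, since cluster IDs are arbitrary (0/1 may be swapped).
print(abs(kappa))
```

On these cleanly separated toy vectors both algorithms recover the two groups exactly, so the absolute kappa is 1.0; on real embeddings of free-text answers the agreement would be lower and the distance threshold would need tuning.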