Automatic assessment of short answer questions: Review

https://doi.org/10.55214/25768484.v8i6.3956

Authors

  • Salma Abdullbaki Mahmood, Department of Computer Science, Basra University, Basra, Iraq
  • Marwa Ali Abdulsamad, Department of Computer Science, Basra University, Basra, Iraq

Abstract

With the rapid development of technology, automated assessment systems have become an essential tool for facilitating the correction of short answers. This research aims to explore ways to improve the accuracy of automated assessment using natural language processing techniques, such as Latent Semantic Analysis (LSA) and the Longest Common Subsequence (LCS) algorithm, while highlighting the challenges associated with the scarcity of Arabic-language datasets. These methodologies facilitate the assessment of both lexical and semantic congruence between student submissions and benchmark answers. The overarching objective is to establish a scalable and precise grading mechanism that reduces the temporal cost and subjectivity of manual evaluation. Notwithstanding significant advancements, obstacles such as the scarcity of Arabic datasets persist as a principal impediment to effective automated grading in languages other than English. This research scrutinizes contemporary strategies within the domain, highlighting the need for more sophisticated models and larger datasets to bolster the precision and adaptability of automated grading frameworks, particularly for Arabic textual content.
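To illustrate the lexical-matching side of such systems, the following is a minimal sketch of an LCS-based similarity score between a student answer and a reference answer. It is illustrative only, not the implementation reviewed in the paper; the tokenization (simple lowercased whitespace splitting) and the normalization by reference length are assumptions for the example.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists,
    computed with the standard dynamic-programming table."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]


def lcs_similarity(student, reference):
    """Normalize the LCS length by the reference answer's length,
    yielding a score in [0, 1]. Tokenization here is a simplifying
    assumption (lowercase + whitespace split)."""
    s = student.lower().split()
    r = reference.lower().split()
    if not r:
        return 0.0
    return lcs_length(s, r) / len(r)


reference = "photosynthesis converts light energy into chemical energy"
student = "photosynthesis turns light energy into chemical energy"
score = lcs_similarity(student, reference)  # 6 of 7 reference tokens matched in order
```

A score near 1.0 indicates high word-order overlap with the model answer; a semantic method such as LSA would be layered on top to credit paraphrases that share few surface tokens.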

How to Cite

Mahmood, S. A., & Abdulsamad, M. A. (2024). Automatic assessment of short answer questions: Review. Edelweiss Applied Science and Technology, 8(6), 9158–9176. https://doi.org/10.55214/25768484.v8i6.3956


Section

Articles

Published

2024-12-28