AssignMatch: Intelligent Matching of Student Submissions
Aman Kumar Verma, Kumail Mujtaba, Asim Kaif, Syed Tabish Sajjad
Recent advances in digital learning environments and AI-assisted content generation have significantly changed how academic assignments are created and evaluated. Conventional manual grading approaches are increasingly difficult to scale, often requiring substantial instructor effort while remaining vulnerable to inconsistency and undetected semantic copying. This paper introduces AssignMatch, an intelligent assignment evaluation framework designed to automate assessment through the integration of Optical Character Recognition (OCR), Natural Language Processing (NLP), large language models (LLMs), and semantic similarity analysis. The system accepts submissions in multiple formats, including scanned documents and images, extracts textual content using OCR, and performs structured preprocessing before evaluating responses against reference solutions using meaning-aware comparison techniques. Beyond automated scoring, AssignMatch incorporates embedding-based plagiarism detection and cross-document similarity analysis to identify paraphrased or highly similar submissions. Experimental observations indicate that the proposed system substantially reduces grading time while maintaining strong agreement with instructor evaluations. The framework provides a scalable and adaptable solution for institutions seeking efficient, transparent, and AI-supported academic assessment.
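The cross-document similarity analysis described above can be illustrated with a minimal sketch. The paper does not specify an embedding model, so the `embed` function here is a hypothetical stand-in that uses simple term-frequency vectors; a real deployment of the kind AssignMatch describes would substitute semantic embeddings (e.g., from an LLM or sentence-encoder), with the cosine comparison and pairwise flagging logic unchanged.

```python
import math
from collections import Counter
from itertools import combinations

def embed(text):
    """Toy embedding: term-frequency vector over lowercase tokens.
    (A stand-in for the semantic embeddings the paper describes.)"""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_similar(submissions, threshold=0.8):
    """Return (id1, id2, score) for submission pairs above the threshold."""
    vecs = {sid: embed(txt) for sid, txt in submissions.items()}
    return [(i, j, cosine(vecs[i], vecs[j]))
            for i, j in combinations(vecs, 2)
            if cosine(vecs[i], vecs[j]) >= threshold]

subs = {
    "s1": "gravity pulls objects toward earth",
    "s2": "gravity pulls objects toward the earth",
    "s3": "photosynthesis converts light into energy",
}
flagged = flag_similar(subs)  # only the near-identical pair is flagged
```

The threshold of 0.8 is an illustrative choice, not a value reported in the paper; in practice it would be tuned against instructor judgments of what counts as suspicious overlap.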

