
SEOUL — Yonsei University's Sinchon campus has been thrown into turmoil by the revelation of a mass cheating scandal involving the misuse of generative artificial intelligence (AI) tools, such as ChatGPT, during a remote (non-face-to-face) midterm examination. The incident highlights the growing challenges and ethical dilemmas facing academic institutions as reliance on AI technology rapidly increases.
The Course and the Exam
The controversy centers on the 'Natural Language Processing (NLP) and ChatGPT' course, a third-year offering focused on generative AI, including NLP and Large Language Models (LLMs). The course has a large enrollment of approximately 600 students.
The non-face-to-face midterm exam took place on October 15, administered via an online platform with objective questions. In an attempt to maintain academic integrity, the supervising professor required all students to submit a video recording of their computer screen, hands, and face for the entire duration of the exam.
Evidence of Widespread Misconduct
Despite the stringent monitoring measures, numerous instances of academic misconduct were detected. The professor, along with a team of 16 teaching assistants, initiated a full review of the submitted videos.
The evidence of cheating included:
Screen Capture: Capturing the exam questions on screen.
Averted Gaze: Periodically looking away toward blind spots or areas outside the camera's view.
Screen Manipulation: Rapidly switching between windows and programs on the computer screen.
Obscured View: Intentionally cropping the video frame to hide other running programs.
Crucially, numerous accounts and subsequent findings pointed directly to the unauthorized use of AI tools, particularly ChatGPT, to solve the exam questions. One student, identified as A (25), stated, "Most of them used ChatGPT to take the exam. I saw many people around me using it."
The University's Response and Student Confessions
On October 29, the professor publicly addressed the "numerous cases of student misconduct" and announced a firm response. Students were given a window for self-confession: those who admitted to cheating would receive a zero on the midterm exam but face no further disciplinary action for the time being. Students who denied the allegations, however, would face more severe penalties, including potential disciplinary probation in accordance with the university's academic regulations.
While the total number of students who cheated is unconfirmed, rumors within the campus community suggested that over half of the course's 600 enrollees were involved. A poll posted anonymously on the university community board 'Everytime' further indicated the scale of the issue: out of 387 verified student respondents, 211 admitted to "cheating," while 176 claimed to have "solved the problems directly."
So far, approximately 40 students have formally confessed to academic misconduct. Yonsei University has stated it will review the severity of each case and consider appropriate university-wide disciplinary action.
A Campus in Crisis: The AI Dilemma
This incident at Yonsei, a leading institution in South Korea, underscores a profound crisis of academic integrity in the age of generative AI. The very course designed to teach the capabilities of LLMs became the site of their most egregious misuse.
The widespread nature of the cheating suggests a growing reliance on AI not just as a study aid, but as a direct means to circumvent the educational process itself. For many students, the pressure to perform, coupled with the ease and effectiveness of tools like ChatGPT, has blurred the line between acceptable resource use and outright academic fraud.
The university now faces the challenging task of enforcing ethical standards while navigating a rapid technological shift. While strong disciplinary action is necessary to uphold academic values, institutions must also fundamentally re-evaluate their assessment methods. Exams and assignments must evolve beyond simple objective questions that can be easily outsourced to an AI, shifting focus instead to critical thinking, application, and original synthesis of knowledge that resists algorithmic bypass. The 'chaos on campus' is not just about a few dishonest students; it is a signal that the traditional educational model is struggling to adapt to the reality of the AI-dependent future.
[Copyright (c) Global Economic Times. All Rights Reserved.]