Ethical Awareness of ChatGPT use in Academic Writing: Integrity Concerns, Policy Support, and Assessment Trust among Tertiary Students

by Alia Nabella Fateha Zolkifli, Sharina Saad

Published: January 6, 2026 • DOI: 10.47772/IJRISS.2025.91200182

Abstract

Generative AI tools such as ChatGPT are rapidly entering students’ academic writing practices, yet institutions continue to face uncertainty about academic integrity, disclosure expectations, and fairness in assessment. This study reports a focused analysis of tertiary students’ ethical awareness regarding AI-assisted academic writing. A cross-sectional online survey was administered to 41 tertiary students with prior experience using AI tools. Three closed-ended indicators captured (i) concern about academic integrity, (ii) support for institutional AI-use guidelines, and (iii) acceptability of AI-supported grading. An open-ended prompt elicited perceived ethical risks and recommendations for responsible use. Descriptive statistics and inductive thematic analysis were applied. Results show high levels of integrity concern (92.7% somewhat or very concerned) and strong endorsement of institutional guidelines (75.6% agree or strongly agree). Regarding AI-supported grading, most respondents (56.1%) preferred a hybrid approach combining AI and human evaluation, while 19.5% believed AI could assess their work fairly. Qualitative responses triangulated these patterns, emphasising plagiarism and originality risks, overreliance on AI, and distrust of fully automated evaluation. The findings support the need for explicit university policies and ethics-oriented AI literacy that strengthens students’ judgement and responsible decision-making in AI-assisted writing.