Real-Time AI-Assisted Error Feedback System and Copilot for MATLAB Programming in Core Engineering Courses: A Comparative Analysis

Authors

  • Mahadevaswamy, Associate Professor, Dept. of ECE, Vidyavardhaka College of Engineering, Mysuru
  • Sathyanarayana N., Associate Professor, Dept. of ISE, Vemana Institute of Technology, Bengaluru
  • B. P. Pradeep Kumar, Professor, Dept. of CSD, Atria Institute of Technology, Bengaluru
  • Jagadeesh B., Associate Professor, Dept. of ECE, Vidyavardhaka College of Engineering, Mysuru
  • Swapnil S. Ninawe, Assistant Professor, Dept. of ECE, Dayananda Sagar College of Engineering, Bengaluru

DOI:

https://doi.org/10.16920/jeet/2026/v39is2/26001

Keywords:

Debugging; error feedback; MATLAB; programming education; real-time guidance; learning gain

Abstract

New MATLAB learners often get stuck because they cannot read or fix early errors. We tested a real-time system that catches run-time exceptions, extracts the error message and the nearby code, and presents a short explanation plus a small corrected example inside the editor. We compared this with MATLAB’s built-in Copilot. In a classroom study with 60 students, both groups completed the same lessons and tasks; only the feedback differed. The treatment group, which saw the real-time explanations, scored higher on the post-test (16.8 vs. 13.5), showed a much larger learning gain (0.58 vs. 0.29), and fixed errors faster. Across about 300 total errors, total repair time dropped from 1,500 s to 450 s, retries fell from 2 to 1 per error, and successful fixes rose from 80% to 96%. Beginners improved the most, and repeat errors fell by roughly 47%. Students rated the messages as clear and useful. Overall, turning MATLAB error messages into short, direct guidance improved accuracy, speed, and retention without leaving the MATLAB workspace.
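The loop the abstract describes — catch the run-time exception, pull out the message and the offending line, and attach a short plain-language hint — can be sketched in general-purpose terms. The snippet below is a hypothetical Python analogue of that loop, not the authors' MATLAB implementation; `GUIDANCE` and `run_with_feedback` are illustrative names.

```python
# Hypothetical sketch of a catch-and-explain feedback loop
# (illustration only; the paper's system works inside the MATLAB editor).
import traceback

# Illustrative mapping from error type to a short, novice-facing hint.
GUIDANCE = {
    "IndexError": "You asked for an element outside the array. "
                  "Check the index against the array length.",
    "ZeroDivisionError": "A denominator was zero. Guard the division "
                         "or validate the input first.",
}

def run_with_feedback(snippet):
    """Run student code; on failure, return message + nearby code + hint."""
    try:
        exec(snippet, {})
        return "OK"
    except Exception as err:
        frame = traceback.extract_tb(err.__traceback__)[-1]   # innermost frame
        line = snippet.splitlines()[frame.lineno - 1]         # offending line
        hint = GUIDANCE.get(type(err).__name__, "Read the message above.")
        return (f"{type(err).__name__}: {err}\n"
                f"  near: {line}\n"
                f"  hint: {hint}")

print(run_with_feedback("x = [1, 2, 3]\ny = x[5]"))
```

The design point mirrors the study's intervention: the raw exception is never shown alone; it is always paired with the code context and a short, actionable explanation.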


Published

2026-02-18

How to Cite

Mahadevaswamy, Sathyanarayana, N., Pradeep Kumar, B. P., Jagadeesh, B., & Ninawe, S. S. (2026). Real-Time AI-Assisted Error Feedback System and Copilot for MATLAB Programming in Core Engineering Courses: A Comparative Analysis. Journal of Engineering Education Transformations, 39(Special Issue 2), 1–11. https://doi.org/10.16920/jeet/2026/v39is2/26001

