A discussion about Artificial Intelligence and Academic Misconduct was held on Oct. 16, hosted by Associate Dean of Students Shawna Casperson.
The event took the form of an open discussion aimed primarily at faculty members. During the event, Casperson posed thought-provoking questions about professors’ opinions on AI.
Over the past year, generative AI programs like ChatGPT have seen increasing use among students. Professors and other university staff have begun to see essays and homework assignments completed with these tools and are concerned about academic integrity. Of last year’s 30 academic misconduct cases, five involved generative AI usage.
Many professors see AI usage as a problem because it allows students to complete work without practicing the skills they need for their careers or lives after college. Many English classes emphasize research and grammar, but tools like ChatGPT can produce that work without students ever learning those skills. The point of these classes is lost, and the value of a degree is diminished.
A faculty member compared the use of AI to the use of calculators. The two are similar in some ways: both are tools used to take shortcuts. Calculators sparked similar arguments when they first appeared in schools, but now most classes require them. Solving the problem with AI means finding the line between educational enhancement and academic dishonesty.
Students often use ChatGPT because they don’t care about their assignments or are short on time. The tool not only offers quick answers to academic questions but also gives students a way to explore various topics. However, problems quickly arise when students begin using it to cheat on assignments. Many professors have begun outlining their expectations for AI usage in their syllabi.
When faced with suspicious work, some professors find it hard to determine whether a student used AI. UW-Platteville uses tools like Turnitin to detect AI-generated content, but they are hardly ever accurate.
If an assignment is suspected to be AI-generated, it is common for the professor to have a conversation with the student. While the academic misconduct process can seem daunting for students, it is essential for them to learn that AI should not be used to cheat on assignments.
Professors can also take this opportunity to fine-tune their courses to reduce the likelihood of students exploiting AI.