Strategies for Navigating Ethical AI Use in College Courses


Todd Zakrajsek, Director, ITLC Lilly Conferences on College and University Teaching

 

If I were asked to identify the choppiest of waters in higher education right now, my pick would be guiding students in using AI ethically rather than catching academic misconduct. Discussions at teaching conferences have been dominated by concerns about how to restructure assignments so that students cannot use AI to complete the work for them and how to catch students when they do. Overall, the focus has too frequently been on students’ “inappropriate” behavior, rather than on how we might mitigate some of the issues causing concern.


Following are four suggestions for consideration in the quest to have students do their own work, to be adapted as you see fit for your own institution/classroom context.



1. Keep in mind the historical literature on academic misconduct.


Some of the factors that prompt a student to “cheat” on homework or a written assignment are likely similar regardless of the method used to cheat—buying a term paper written by another student, cutting and pasting material without citation, or using ChatGPT to write the paper.


Don McCabe (McCabe & Treviño, 1997; McCabe et al., 2001; McCabe et al., 2012), a researcher in the field of academic misconduct for nearly 30 years, and others found increased student willingness to resort to academic misconduct when there was:

  1. A perception that many students engaged in the behavior (normalized academic misconduct)

  2. Pressure to do well

  3. A perception that the work was meaningless (busywork)

  4. A low likelihood of being caught


Mitigating or managing these issues—such as requiring intermediate drafts when writing a paper so that all of the work is not left to the last minute, or requiring phases of the work to be done in class without technological aid—will not eliminate the use of AI (or other forms of academic misconduct). But it can only help to avoid creating the situations in which students have long been more inclined to complete work dishonestly.



2. Set clear expectations as to what level of AI use is appropriate.


It is important to be explicitly clear about what we expect with regard to students using AI when completing coursework. I am becoming increasingly convinced that although there are certainly times when it is best for work to be a 100% human effort, that expectation is less reasonable in the ever-changing world in which we live. For nearly three decades, faculty have used spell-checkers and grammar-checkers, or used the internet to search for articles and elucidate concepts that were poorly understood. These are early forms of AI already in nonnegotiable use.


I often turn to AI to find articles for me—even for this article. For example, when writing the section above on cheating, I opened my ChatGPT5 account and asked it to “find me the study done by Don McCabe right around the year 2000 that brought together a great deal of research about cheating/academic dishonesty.” I was quickly presented with exactly the article I had in mind. There seems little academic value in my spending an hour digging through research databases looking for the article rather than getting on with writing this post.

It seems reasonable for students to also use AI in some capacity for some work. The challenge is for them to know when, and how much, especially given that different faculty have different thresholds for appropriate use. This does not seem like a time to say to a student, “You should have known it was wrong to use AI the way you did.”


This is not a new concept. It has always been a good idea to be clear with students about how tasks can be completed in the course, what resources are appropriate, when it is okay to collaborate, and so on. A long time ago, when I was teaching at Southern Oregon University, I recall a faculty member who accused two students of cheating on a take-home exam. Their answers were very similar, and when the instructor asked if they had worked on the exam together, they apparently stated right away that they had done so. He explained they would both receive a failing grade and be reported for academic misconduct. The students protested the accusation and stated that they had never been told not to collaborate on the take-home test. The students pointed out that the entire class was encouraged to do all homework assignments with classmates throughout the semester, so when a take-home test was given with no instructions regarding working with others, they assumed that the same rules applied to exams as to homework.


Regardless of whether one agrees with the faculty member or students in this example, the situation could have been avoided with clear instructions when the take-home exam was given. The same is true of AI. You might even list several items to demonstrate what is acceptable and not acceptable, such as the following examples:

  • It is okay to use AI to

    • edit a paper for mechanics, not to rewrite it to sound better.

    • help locate references or cited works for your paper, but it is your responsibility to read and accurately cite those works.

    • ask whether there are any glaring areas you have missed, provided you then write about those areas yourself.

  • It is not okay to use AI to

    • write the first draft of your paper.

    • identify the major points of your paper or to develop an outline for you.



3. Require an AI Disclosure Statement for all written work.


In an AI disclosure statement, a writer must state whether and how they used AI. If a use is inappropriate and a student would be uncomfortable admitting it, the disclosure requirement makes it much harder for them to convince themselves that what they are doing is okay. As a modeling opportunity at the University of North Carolina at Chapel Hill, faculty members are expected to disclose to students how AI was used in the course:

The use of AI should be open and documented. It is essential to be transparent and document the use of AI in your work. Inform your students about the use of AI in generating course assignments, exam questions, and other relevant materials. Provide explanations or demonstrations of how AI is employed, helping students understand the technology’s role and limitations. (Retrieved from: https://ai.unc.edu/teaching-generative-ai-usage-guidance/)

 


Checking Students’ Work


One additional consideration, which I hesitate to call a strategy, is to talk to students about honor and integrity and share that you will be checking work periodically to assess the extent to which the material turned in was written by them or a machine.


There is a growing consensus in higher education that AI checkers are not reliable enough to accuse a student of cheating on a written assignment, let alone fail a student because an AI detection system flags a paper. Even a few years ago, when AI-generated text was much more machine-like than it is today, detection tools rarely exceeded 80% accuracy, with most closer to 70% (Weber-Wulff, 2023). Since then, machine-generated text has become increasingly difficult to distinguish from human writing, and cases of false positives—where a paper is flagged as AI-generated when it was actually written by a human—are increasing. Particularly troubling is that non-native speakers and neurodivergent learners are especially prone to false positive results (https://teaching.unl.edu/ai-exchange/challenge-ai-checkers/).


This additional consideration needs to be used carefully. If you say you are going to check all papers with an AI plagiarism checker, then check them as you stated you would. My suggestion is simply to point out that you will periodically check the authenticity of the writing. One could argue you do that every time you grade a paper. The point here is simply to let students know their papers may be checked, in the hope that this serves as a deterrent to using AI inappropriately. This last suggestion seems, to me, very similar to strategies used all the time to keep behavior in check. For example, in many countries, customs officials announce that they will check bags periodically. You know not all bags will be checked, but you don’t want to be the one pulled aside if you have something in your bag you shouldn’t.

 

Conclusion


As AI becomes increasingly sophisticated, it is important to think critically about how we interact with AI in an educational environment. We are not without options to help put guardrails around the use of AI in our classrooms. We can look at the historical factors that indicate a higher probability of academic misconduct, including inappropriate use of AI, and strive to minimize those factors in our courses. We can be much clearer about what is (and is not) acceptable AI use for different assignments in the course. Particularly as we work with first-year students and students from underrepresented groups, who may be less aware of what is implicitly expected as college behavior, there is certainly no downside to being clear about our expectations. Finally, and this is something I feel will eventually be needed for all writing, we can ask writers to state clearly how they used AI in their work through a disclosure statement.


We are certainly not going to stop all students from using AI inappropriately. That said, 30 years ago, when I was teaching full time, I knew it was not possible to stop all students from cheating, and we still looked for ways to discourage academic misconduct whenever we could. We shouldn’t throw our hands up in the air and surrender on this issue with AI. There are many strategies we are not yet using, and some have yet to be developed. This is where critical thinking and collaboration will once again prove their value.

 

Discussion Questions 

 

  1. AI detection tools are becoming less reliable, yet many faculty still lean on them as deterrents. How might we balance deterrence with fairness, especially considering that false positives disproportionately affect non-native speakers and neurodivergent learners (https://teaching.unl.edu/ai-exchange/challenge-ai-checkers/)?

  2. McCabe’s research highlights long-standing factors that contribute to academic misconduct (e.g., pressure, meaningless work, normalization of cheating). How might these insights help us design assignments that reduce—not just detect—misuse of AI?

  3. Faculty expectations around AI use vary widely. If you had to create an “AI use policy” for your course tomorrow, what would you allow, forbid, and require disclosure about—and why?

 


References


McCabe, D. L., & Treviño, L. K. (1997). Individual and contextual influences on academic dishonesty: A multicampus investigation. Research in Higher Education, 38(3), 379–396. https://doi.org/10.1023/A:1024954224675


McCabe, D. L., Treviño, L. K., & Butterfield, K. D. (2001). Cheating in academic institutions: A decade of research. Ethics & Behavior, 11(3), 219–232. https://doi.org/10.1207/S15327019EB1103_2


McCabe, D. L., Butterfield, K. D., & Treviño, L. K. (2012). Cheating in college: Why students do it and what educators can do about it. Johns Hopkins University Press.


University of Nebraska–Lincoln. (n.d.). The challenge of AI checkers. Center for Transformative Teaching. https://teaching.unl.edu/ai-exchange/challenge-ai-checkers/


University of North Carolina at Chapel Hill. (n.d.). Teaching with generative AI: Usage guidance. AI at UNC. https://ai.unc.edu/teaching-generative-ai-usage-guidance/


Weber-Wulff, D. (2023). Can AI-generated text be detected? Journal of Academic Ethics, 21(2), 295–308. https://doi.org/10.1007/s10805-023-09463-1

 

AI Disclosure Statement: For this article, I used AI to locate articles that I had read previously and to put the citations in APA 7th edition format. I also used AI to draft the discussion questions and help me figure out the analogy at the end of the piece. Finally, I had AI edit the manuscript for spelling, consistent use of words, and mechanics, but told it to not change content when copy editing.  All other aspects of this piece were completed without the use of AI.

