Stop blaming students for AI cheating. Your assessment is what’s broken.
If a chatbot aces your MCQ in three clicks, it isn’t a test — it’s a form. A contrarian take.
For two years now, staff rooms have been circling the same sentence: "students cheat with ChatGPT." Not wrong. Also lazy.
If a general-purpose model passes your assessment with no context, no material, no knowledge of you — your assessment wasn’t measuring learning. It was measuring the ability to regurgitate what’s already public. A search engine would’ve cheated the same way in 2005. We just didn’t notice.
The fix isn’t banning AI. It’s writing quizzes AI can’t short-circuit: contextual questions tied to your specific lesson, a video you showed in class, a reading you assigned. Questions that require reasoning over a situation the model doesn’t have in its corpus.
Ironic twist: those are exactly the quizzes AI helps you produce quickly. Feeding your own material to a model so it drafts a contextual quiz isn't cheating; it frees the teacher to write ten of them.
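To make "tied to your own material" concrete, here is a minimal sketch of the prompt you might hand a model. Everything in it is hypothetical (the function name, the sample excerpt); the point is only that the instruction forces questions to depend on material the model's corpus doesn't contain:

```python
def build_quiz_prompt(lesson_material: str, n_questions: int = 5) -> str:
    """Assemble a prompt that makes every question depend on the supplied
    lesson material rather than on general public knowledge."""
    return (
        f"Using ONLY the lesson material below, write {n_questions} quiz "
        "questions that cannot be answered without having read it. "
        "Each question must reference a specific detail, example, or "
        "situation from the material.\n\n"
        f"--- LESSON MATERIAL ---\n{lesson_material}"
    )

# Hypothetical excerpt from a class activity, not a real lesson
material = "In Tuesday's lab, group B's bridge failed at 12 kg because..."
prompt = build_quiz_prompt(material, n_questions=3)
```

A generic question ("What causes bridges to fail?") is answerable from the open web; a question about group B's bridge on Tuesday is not. That asymmetry is the whole trick.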