Believe Your Assignment is AI-Immune? Let's Put It to the Test
Professors submit their assignments, trying to prove that we can't plagiarize them with AI tools.
[image created with Midjourney]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
Let’s have some fun and put our money where our mouth is.
This week, I throw down the gauntlet to my fellow professors to submit their best AI-immune writing assignments. We will rise to the occasion and try to complete each of them in less than an hour, using the latest generative AI tools.
I have argued that the AI plagiarism problem is deep, so let’s see if I am right.
After my recent pieces on the depth of the AI plagiarism problem and on how I think we should conceptualize solutions to it, I received some feedback online from a fellow professor. They told me that their solution is to create assignments where students work on successive/iterative drafts, improving each one on the basis of novel instructor feedback.
Iterative drafts seem like a nice solution, at least for those fields where the core assignments are written work like papers. After all, working one-on-one with students in a tutorial setting to build relationships and give them personalized feedback is a proven way to spark strong growth.
The problem, though, is that if the student writes the first draft at home — or, more generally, unsupervised on their computer — then they could use AI tools to plagiarize it. And they could use AI tools to plagiarize the later drafts, too.
When I asserted to my internet interlocutor that they would have to make the drafting process AI-immune, they responded as follows (note: I won’t link their response because the point here isn’t to put them on blast):
In my view, this is a perfect example of a professor not grasping the depth of the AI plagiarism problem.
The student just needs to tell the AI tool that their first draft — which they provide to the AI tool, whether the tool created the draft or not — was met with response X from the professor.
In other words, they can give the AI tool all of the information an honest student would have, were they to be working on their second draft. The AI tool can take their description of X, along with their first draft, and create a new draft based on the first that is sensitive to X.
Not much work is required of the student, and they certainly do not need to learn how to input the suggested changes or about the relevant concepts. After all, the AI tools have been trained on countless resources concerning these very concepts and how to create text responsive to them.
This exchange indicates to me that the professor simply has not engaged with recent iterations of generative AI tools with any seriousness.
Given that they did not assert that they have tried this sort of assignment on AI tools, I take it they would admit as much, at least with respect to this sort of assignment. Their confidence came from their general understanding of how AI tools’ capabilities interfaced with their assignment’s parameters.
Solving the AI plagiarism problem from the armchair runs contrary to a crucial takeaway of my original piece on solutions: for every assignment, professors need to either decide it can/should be completed in an AI-free environment; make it AI-immune by experimenting a lot with how different AI tools respond to its instructions; or pair it with an AI-immune assignment.
Here is my original diagram of the assignment design process:
The AutomatED AI-immune assignment design flowchart.
But maybe I am wrong. Maybe my interlocutor is right that their iterative draft assignment is AI-immune. Maybe, in general, I am too optimistic (or pessimistic, depending on your perspective) about the power of these AI tools and thus about how professors must experiment with them.
Ironically, though, the best way to find out if I am wrong is to run an experiment.
🎯 🏅 The Challenge
The experiment comes in the form of a challenge. I hereby challenge professors from universities around the world to submit assignments that they believe are AI-immune. Here is the process.
Professors: you can submit to the challenge by subscribing to this newsletter and then responding to the welcome email (or to this email if you are already subscribed). Your response should contain your assignment and your grading rubric for it. We will then pick five assignments and attempt to crack them with AI tools — and nothing else.
We will also ask humans — of the appropriate skill level in the relevant domains — to complete the five assignments. Professors who submit assignments for the challenge will not be told whether the submissions they grade are AI-generated or not.
Here are the rules for the submitted assignments:
Each assignment must be standalone — not part of a pair or series;
The submissions for the assignment must be capable of being typewritten in their entirety (e.g., no oral exams, handwritten essays, dramatic performances, etc.); and
You, the professor, must provide in advance the rubric by which the submissions will be graded.
And here are the parameters for us when we try to complete the assignments with AI tools:
Every sentence that we submit must be AI-generated — no edits allowed, except for formatting if needed;
We must use only publicly available AI tools;
We cannot spend more than 1 hour on each assignment;
Each of our efforts must be documented and described in a future piece; and
Each of our efforts must be recorded, and the recording must be made available upon request (in fact, we are thinking of making a YouTube channel and uploading them there anyway — let us know if this sort of content would interest you).
Finally, here are the parameters for you, the professor, when you grade the submissions:
You must provide a grade and a rationale relative to the originally submitted rubric; and
You must note strengths and weaknesses of the submission.
We will post pieces on the five assignments that we pick, including discussions of how we used the AI tools, what grades we earned, what we learned, and what our graders reported that they learned.
I may even throw in an iterative draft assignment for good measure…
"I want to encourage academics and administrators not to be intimidated by AI, either by its technical nature or by the complexity of the issues it raises for us. There are many unknowns, but there are also simple steps we can take to move forward."
That’s Anna R. Mills (College of Marin) in The Chronicle of Higher Education. Mills provides a lot of great background and tips on how we should grapple with AI in the university setting.
Computer scientist Sam Bowman (NYU) shared a draft entitled “Eight Things to Know about Large Language Models.” Many media pieces cover LLMs’ technology in misleading ways, rendering this piece an essential snapshot of where we’re at.
Here they are:
— Sam Bowman (@sleepinyourhat)
Apr 2, 2023
I wrote about students using LLMs to plagiarize, but perhaps reviewers were the real risk all along:
A reviewer rejected my paper, and instead suggested I familiarize myself with the following readings. I could not find them anywhere. After a control in GPT-2, my fears were confirmed. Those sources were 99% fake...generated by AI.
— Robin Bauwens (@BauwensRobin)
Mar 27, 2023