Got an AI-Immune Assignment? Let Us Try to Crack It!

We double down on April's AI-immunity challenge. Calling all professors!

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In today’s edition, I double down on our AI-immunity challenge.

The Background

Back in April, we first announced our AI-immunity challenge. The idea was simple: we thought professors were overconfident about the resistance of their take-home written assignments to AI plagiarism, so we challenged them to submit their best ones for us to try to crack in an hour using the latest AI tools. We promised to submit answers created entirely by a combination of these tools’ outputs, with no modifications by us.

So far, we have publicly taken on two assignments. We ran into a bit of trouble when we took on an economics project, but we had better luck with a clinical research exam. (The pieces describing our efforts are deep dives, so be ready for a lot of reading if you haven’t read them already. Takeaways are at the bottom.)

Next up is a philosophy assignment, and our report on it should be coming out soon.

We are still impressed by the power of AI tools. Even in the case of the challenging economics project, AI tools helped us a lot, and there is reason to think that future iterations of these tools — combined with a little bit of insight from the student — will be able to short-circuit a wide range of take-home assignments.

Furthermore, we have received several submissions that were easily cracked; we ultimately did not report on them because their vulnerabilities and strengths offered less insight.

Another reason we remain confident is that too many professors are still uninformed about recent iterations of AI tools, both generative and otherwise. When we discuss these tools’ capabilities at conferences and online, we encounter a lot of misconceptions and wishful thinking.

So, we think we need to expand the AI-immunity challenge, in part to display to professors how these tools can be used. Today, we double down.

But you might wonder: why not instead offer more (and updated) general advice about AI immunity?

General advice can be helpful, of course, and we plan to continue offering it, including in a comprehensive guide for the fall semester that we will release to AutomatED learning community members on August 16 (TL;DR: refer two of your colleagues to AutomatED to get access to the community).

Yet, solving the AI plagiarism problem from the armchair runs contrary to a crucial takeaway of my original piece on solutions and my expanded piece on making room for AI training in our teaching: for every take-home assignment that a professor worries may be plagiarized with AI, they must either make it AI-immune by experimenting extensively with how different AI tools respond to its instructions, or pair it with an AI-immune assignment.

Here is the diagram of the assignment design process that I created for the expanded piece:

The process of assignment design in the age of AI.

Although a take-home assignment could be paired with an AI-immune in-class assignment, many pedagogically appropriate — and even pedagogically ideal — assignments need to, or at least ought to, be completed at home.

Enter experimentation. We professors need more information on (our) specific assignments and the ways in which they interface with the specific AI tools available to students.

🎯 🏅 The Challenge

As in April, I hereby challenge professors from universities around the world to submit assignments that they believe are AI-immune. Here is the process.

Professors: you can submit to the challenge by subscribing to this newsletter and then responding to the welcome email (or to this email if you are already subscribed). Your response should contain your assignment and your grading rubric for it. We will then pick three assignments and attempt to crack them with AI tools — and nothing else.

Here are the rules for the submitted assignments:

  1. Each assignment must be standalone — not part of a pair or series;

  2. The submissions for the assignment must be capable of being typewritten in their entirety (e.g., no oral exams, hand-written essays, dramatic performances, etc.); and

  3. You, the professor, must provide in advance the rubric by which the submissions will be graded.

And here are the parameters for us when we try to complete the assignments with AI tools:

  1. Every sentence that we submit must be AI-generated — no edits allowed, except for formatting if needed;

  2. We must use only publicly available AI tools;

  3. We cannot spend more than 1 hour on each assignment; and

  4. Each of our efforts must be documented and described in a future piece.

Finally, here are the parameters for you, the professor, when you grade the submissions:

  1. You must provide a grade and a rationale relative to the originally submitted rubric; and

  2. You must note strengths and weaknesses of the submission.

We will post pieces on the three assignments that we pick, including discussions of how we used the AI tools, what grades we earned, what we learned, and what our graders reported that they learned.

We plan to accelerate our publication schedule with the fall semester looming.

💭🗨️ 1-on-1 Consultations with the AutomatED Team

The purpose of AutomatED’s 1-on-1 tech and AI consultations is to provide professors (and others in higher education) with personalized guidance on integrating tech and AI into their teaching.

Our default format is a one-hour 1-on-1 consulting session on Zoom. Afterwards, we will provide you with a custom, actionable plan so that you are well-equipped to supercharge your fall courses.

Alternatively, if you're keen on exploring various possibilities or considering a different consultation format (we offer group/team consultations as well), why not schedule a complimentary 15-minute exploratory Zoom call through Calendly? Just tap the button below to set it up: