Professors of the Gaps

Should professors fill only the shrinking gaps left by science and tech, just like the gods before them?

[image created with Dall-E 3 via ChatGPT Plus]

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s edition, I reflect on some lessons of our recent consulting work.

It has long been claimed that modern science repeatedly reduces the role of deities. Before we understood the weather, the motion of the planets, and the origin of species, we attributed them to deities directly. As we have come to understand more about these processes, the role of any deities has become more and more limited. God is reduced to a “god of the gaps” — filling any gaps in our understanding, at least until there is nothing left to fill.

This line of concern about theism parallels a line of concern about the growing power of AI.

As Kate Crawford writes in Atlas of AI, human workers in Amazon’s fulfillment centers are surrounded by an “army” of whirring robots and machines guided by AI that complete most of the thousands of logistical tasks necessary to keep packages flowing. The humans’ role is limited: they are “there to complete the specific, fiddly tasks that robots cannot.” These are humans of the gaps. As soon as the gaps can be efficiently filled by robots, machines, and/or AI, the humans’ role will shrink even further.

As with the worry about theism, you might think that this isn’t really a worry at all. Perhaps we never should have thought of deities as directly micromanaging the weather like magicians. Perhaps we never should have thought that humans should be doing repetitive tasks like taping boxes for 12 hours straight.

I don’t want to argue about any of these issues here. Instead, my point will be more limited.

Given professors’ strained time and energy, AI tools and their integrations with other software offer countless ways for them to rearrange their work to have a better impact on their students and to create more time for their research — or for themselves.

I leave it up to each professor to decide whether they should occupy their time with tasks AI could complete or, instead, whether they should fill only those gaps it cannot. However, I do maintain that professors should spend time reflecting on this decision in an informed way given its wide-ranging implications.

Let me explain how I think professors should approach this decision, using some examples from our recent tech and AI consultations.

📝 Two Lessons from Our Work

In my one-on-one consultations, I find that most professors are curious about ways that AI can help them save time while producing comparable or improved pedagogical outputs. Unsurprisingly, everyone is swamped. Some seek to claw back some work-life balance, while others just want to spend less time on tasks they find boring.

For instance, some professors want our help using AI to respond faster to student emails while remaining as sensitive to students’ needs as they were before using AI — or more sensitive, as the case may be. Questions that a professor’s syllabus already answers can be handled with unique, engaging responses by an AI tool that has access to that specific syllabus. No need for the generic “See the syllabus…”
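To make this concrete, here is a minimal sketch of what such a syllabus-aware email drafter could look like under the hood. It assumes the OpenAI Python SDK and a plain-text copy of the syllabus saved locally; the file name, model, and prompt wording are illustrative placeholders, not a prescription for any particular setup.

```python
# A rough sketch of a syllabus-aware email drafter, assuming the OpenAI Python SDK
# (`pip install openai`). The file path, model name, and prompt wording below are
# placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Load the course syllabus so the model can ground its answer in it.
with open("syllabus_PHIL101.txt", "r", encoding="utf-8") as f:
    syllabus = f.read()

student_email = """Hi Professor, I missed Tuesday's class.
Is the first essay still due Friday, and can I submit it late?"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model would work
    messages=[
        {
            "role": "system",
            "content": (
                "You draft warm, concise replies to student emails on behalf of a professor. "
                "Answer only from the syllabus below; if it does not contain the answer, "
                "say the professor will follow up personally.\n\n" + syllabus
            ),
        },
        {"role": "user", "content": student_email},
    ],
)

# The professor reviews the draft before anything is sent.
print(response.choices[0].message.content)
```

The point is not this particular code but the shape of the workflow: the AI drafts from the professor’s own materials, and the professor stays in the loop as reviewer.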

Other professors want to use AI to provide their students with faster high-quality feedback so that their students learn better. There’s nothing worse than super late feedback — other than useless feedback, of course!

In almost every consultation, I have found that the first step for any professor considering the role of AI in their workflow is this: for each task you need to complete in a given semester, consider what portion of it must be completed by you and you alone, and what portion could be completed by an undergraduate student TA, an administrative assistant, or a graduate student TA trained in your field of study.

Many of a professor’s tasks have significant portions that could be completed by someone else, even if some component must still be verified or reviewed by the professor.

For example, the professor who wants to spend less time on student emails while still providing high-quality responses might find that a significant portion of those emails could be handled by someone else. So long as this person has sufficient expertise and information about the professor’s courses, they could fill in for the professor on email. The professor could check in periodically to make sure they are doing a good job.

The next step is to consider whether AI could do these portions of the tasks.

On this front, I have found that almost every portion of a task that could be completed by an appropriately situated undergraduate student TA, administrative assistant, or graduate student TA could also be completed by AI tools, at least once they have been configured and integrated properly.

Thus, if we reimagine a professorial task in terms of which of its components could be completed by these other people, we have thereby gone a long way toward mapping which could be completed by AI. The power of AI tools is so significant that many professorial tasks have the potential to be automated through them, even if — like that of an undergraduate or graduate student TA — the AI’s work must be occasionally checked or monitored by the professor.

🧪 Two Challenges

Correspondingly, there are two challenges:

  1. Professors must find the time and energy to shift their perspective from “I have too much to do to worry about whether my tasks could be completed by someone else!” to “Which of my tasks have the potential to be done by an assistant?” Perhaps the best time to do this is between semesters, but we all know how easy it is to put off important work to a window that is both too short and a time when we need a break.

  2. Either professors need to figure out which assistants specific AI tools can simulate and how those tools can be integrated with one another (and with other software), or they need to offload this process onto someone in their department, at their institution, or beyond who is more knowledgeable. Experimentation and reading are crucial for any professor who wants to learn and implement these tools themselves.

The upshot is this: even if a professor emerges from this process deciding that their current workflow needs no change, that position is reasonable only if they have grappled with these two challenges.

In other words, even if a given professor rightly decides that their role in their workflow should be maximal — that they should do all portions of all their professorial tasks — this decision depends on someone getting a grip on what percentage of their workflow could be completed by AI, and at what cost.

Even if the theist is rightly unperturbed by the role their deity fills in the natural world, an informed perspective on science and its explanations of the natural world is necessary to reach a reasonable conclusion about the contours of that role.

Professors should be informed about the degree to which they could be a “professor of the gaps” — and what it would take to get to that point.

🔗 Links