Google Duet Now Available to Higher Ed

I explain the big news, and I advocate for "pairing" to address AI misuse and hallucination.


[image created with Dall-E 3 via ChatGPT Plus]

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s piece, I discuss how “pairing” assignments enables professors to align incentives to discourage student AI misuse and encourage students to check for hallucinations, and I explain how universities and colleges can now purchase Google Duet AI at scale.

💡 Idea of the Week:
Aligning Incentives Through Pairing

This coming week, I will be speaking to PhD students in a pedagogy seminar at the University of Notre Dame about ways to address and leverage artificial intelligence (AI) in their teaching. One of the core teaching methods that I will discuss and advocate for is what I have called “pairing.” I have found this method to be very effective in my own teaching, and it belongs to a family of methods with a long tradition of success in pedagogy (see here for a more comprehensive treatment). Here is how I will describe it…

If it is likely that students will misuse AI in completing an assignment of yours — intentionally or not — you should look for ways to incentivize them not to do so. In particular, you should look for other assignments you can pair with this vulnerable assignment that are designed to incentivize students to complete the vulnerable assignment without misusing AI.

Take the case of an assignment that is easy to complete satisfactorily with ChatGPT without the student needing to learn or work to meet its objectives. Suppose they can simply paste the professor’s prompt and rubric into ChatGPT, upload a reading or two, and — voilà — they get an essay ready to submit that will earn an A by the rubric’s parameters.

The strategy of many professors has been to go ahead and assign this sort of essay, along with stern warnings about AI misuse. Then, when they suspect a student of submitting ChatGPT-generated work, they call the student into their office and query them in ways that reveal whether the student did in fact offload their work onto the LLM.

Pairing builds this sort of check or layered approach into the assignment design from the beginning. Instead of assigning the essay alone, the professor using this method assigns the essay in conjunction with an oral exam or tutorial that — just like the professor-student meeting from the above scenario — allows the professor to determine the extent to which the student has learned what the essay was designed to teach them.

Aware of the link between the essay and the oral exam, the student is incentivized to complete the essay in a way that actually satisfies its learning objectives, even if this still involves some reliance on AI. (In my experience, the oral exam needs to be weighted more heavily than the essay to incentivize students properly.)

Pairing also works for cases unrelated to academic dishonesty.

On LinkedIn this week, Ethan Mollick discussed why it doesn’t work for professors to adopt a “use AI but you are responsible for errors” policy. His reason was that errors from more recent LLMs, like GPT-4, are much harder to detect, so it is not in professors’ interest to try to hunt them down. Ethan recommended that professors “need to make a conscious choice to adjust to an AI-filled world” by “adopting AI broadly, banning it (but not using AI writing detectors which don’t work), etc.”

As I commented, if subtle AI-induced errors — “hallucinations” — in student submissions are a professor’s concern with an assignment, then the assignment needs to be graded in a way that incentivizes students to detect and address the errors before submitting their work. Pairing gives the professor a powerful way to align incentives in such a case. A second assignment can be paired with the initial assignment, such that the second assignment is expressly designed to evaluate the degree to which the student’s AI-produced content contains errors. This second assignment, like the professor-student meeting, should be precisely the sort of test one would design if one wanted to root out the student’s errors.

For example, it could involve a random sampling of the submission’s sources to check that they actually support the factual claims the student says they support. Such sampling could also be built into the first assignment’s grading, but a paired second assignment — properly weighted — puts the right emphasis on the student avoiding AI misuse and errors, rather than treating error-checking as a mere aspect of the first assignment. (And you could assign the second assignment to a peer to leverage the benefits of peer evaluation while aligning incentives...)
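
To make the spot-check mechanics concrete, here is a minimal sketch in Python. The claim-source pairs, the sample size, and the helper function are hypothetical placeholders, not part of any particular rubric, and the verification of each sampled pair is still done by hand by the grader (or a peer).

```python
import random

# Hypothetical claim-source pairs pulled from one student's submission.
citations = [
    ("Claim A about the reading", "Smith 2021, p. 14"),
    ("Claim B about the reading", "Jones 2019, p. 88"),
    ("Claim C about the reading", "Smith 2021, p. 47"),
    ("Claim D about the reading", "Lee 2023, p. 5"),
]

SAMPLE_SIZE = 2  # how many citations to spot-check per submission


def spot_check_sample(pairs, k, seed=None):
    """Return a random sample of claim-source pairs to verify by hand."""
    rng = random.Random(seed)
    return rng.sample(pairs, min(k, len(pairs)))


for claim, source in spot_check_sample(citations, SAMPLE_SIZE, seed=42):
    print(f"Verify that {source!r} actually supports: {claim!r}")
```

Because the sample is random (and can differ per submission), students cannot predict which citations will be checked, which is exactly the incentive-aligning feature the paired assignment relies on.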

In general, AI puts pressure on professors to better align the incentives their students face, especially if Duke University student Aaron Price is correct that “only a small group of students choose to attend college primarily out of love for intellectual discovery.”

📝 How to Sign Up for Our
“Build Your Own GPT” Webinar

Spots are going fast for our upcoming "Build Your Own GPT" webinar, a Zoom-based learning experience led by me and designed for professors, instructional designers, and anyone else who wants to incorporate custom GPTs into their pedagogy this year. With growing evidence of the effectiveness of this sort of intervention, now is the time to jump on the opportunity afforded by recent advances from OpenAI.

We have only 9 slots left in the first cohort. We want to keep each webinar small to maximize value for all attendees.

Now, you might be thinking: “wait, Graham, what are custom GPTs?” In brief, they are AI chatbots powered by GPT-4 that you supply with instructions and files, thereby enabling them to play a unique, tailored role. I have discussed using GPTs for in-class activities, as they can enable you to coach your students at scale. Whether you're looking to improve your in-class activities, enhance student engagement, streamline course creation, or empower subject matter experts, this webinar is your gateway to unlocking the full potential of custom GPTs.
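
For the technically curious, here is a rough, hypothetical sketch of what that "instructions plus files" recipe amounts to, expressed with the OpenAI Python SDK. Custom GPTs themselves are configured in ChatGPT's no-code GPT builder rather than in code (and the builder lets you upload whole files, whereas this sketch simply inlines an excerpt); the instructions, reading excerpt, and model name below are placeholders, not recommendations.

```python
# A rough API-based analogue of a custom GPT: the no-code GPT builder in ChatGPT
# does this configuration for you, but the ingredients are the same. The
# instructions, course excerpt, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COACH_INSTRUCTIONS = """You are a Socratic coach for an intro ethics course.
Never give students the answer outright; instead, ask guiding questions
and point them back to the assigned reading excerpt provided below."""

COURSE_EXCERPT = "…paste the relevant reading excerpt or rubric here…"


def coach_reply(student_message: str) -> str:
    """Send a student's message to the 'coach' persona and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model your institution licenses
        messages=[
            {"role": "system", "content": COACH_INSTRUCTIONS + "\n\n" + COURSE_EXCERPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content


print(coach_reply("I'm stuck on how utilitarianism handles promise-breaking."))
```

The takeaway is that the real work is designing the instructions and choosing the supporting material; the rest is plumbing that the GPT builder handles for you.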

The webinar will occur on Zoom on Saturday, February 17th, from 12pm to 2pm Eastern Standard Time. By the end of the webinar, you can expect to have learned how to develop your own GPTs — but you will also walk away with one already developed!

The price is $99. All Premium subscribers ($5/month or $50/year, with prices going up soon) get a 10% discount code, included immediately below.

To sign up, click this link or the button below:

This past weekend, we updated our college/university course design GPT so that it can produce better assignment rubrics. Give it a try to produce assignments, assignment sequences, rubrics, and AI course policies to see how it performs for you!

🔊 Ed Tech News Highlight:
Google Duet AI Now Widely Available

In connection with Bett 2024 in the UK — one of the world’s largest ed-tech events that ran from January 22 to 24 in London — Google announced a range of features for educators.

The full list is included in the images below (click either to enlarge), but the important bit for professors and others in higher education is as follows:

Duet AI is now globally available as a paid add-on to any Google Workspace for Education edition. Duet AI brings the power of Google’s generative AI tools to the various parts of the Workspace — including Gmail, Docs, Slides, Sheets, and Meet — thereby enabling educators to integrate these applications with Bard-like functionality without needing to do so via a dedicated tab open to bard.google.com.

With Duet, you can generate text, edit, and proofread in Docs; you can write emails in Gmail that draw on content stored elsewhere in the Workspace; you can visualize data and create Slides; you can organize and analyze data in Sheets; and you can generate backgrounds, translate captions, and more in Meet.

Until now, even trialing Duet (which has been available to other Workspace customers since August) was not an option for Education customers.

In our February 21st Premium guide (noted at the bottom of this piece), we will provide a comprehensive set of professorial AI use-cases leveraging Google’s Bard and Duet.

In the meantime, see whether your institution is looking to get Duet as an add-on. If they aren’t, it might be worth encouraging them to do so soon!

Linda Nordling on Research Managers’ Use of AI

- Nordling highlights the experiences of several research managers incorporating generative AI into their workflow. Mads Lykke Berggreen, a research adviser at VIA University College in Aarhus, Denmark, utilizes ChatGPT for drafting research proposals, significantly reducing the time taken from days to hours. Yolanda Davids from the University of the Witwatersrand in Johannesburg employs ChatGPT for drafting letters and reports, customizing the output to suit regional linguistic nuances. Similarly, Kelly Basinger at the University of North Texas uses the tool to enhance the readability of complex texts for a broader audience. James Shelley from Western University in Ontario emphasizes developing bespoke AI tools for administrative tasks, moving beyond basic ChatGPT usage.

- Nordling discusses the potential risks associated with using generative AI like ChatGPT in grant writing and research management. Ellen Schenk, a research-funding consultant, cautions against the tool's tendency to "hallucinate" or generate misleading content, highlighting an instance where ChatGPT produced non-existent references for a funding proposal. Mads Lykke Berggreen notes the importance of understanding ChatGPT's requirements to avoid "word salad" outputs, advocating for a trial and error approach.

Summary courtesy of GPT-4.

Jeffrey Watson: “I kind of miss plagiarism”

- Watson reflects on the challenges posed by AI writing tools like ChatGPT in evaluating students' work. Traditional indicators of effort and learning, such as writing quality and use of citations, are no longer reliable due to AI's ability to efficiently produce high-quality text. This has made it difficult to discern genuine student effort from AI-assisted work, undermining traditional grading metrics.

- Watson explores new methods to ensure academic integrity and genuine learning, such as oral defenses of written essays and a three-part exam model combining take-home essays, live proctored exams, and optional oral defenses. These approaches aim to ensure that students engage with the material and comprehend their submissions, rather than relying solely on AI.

- Watson expresses concerns about the broader implications of AI in education, including the potential for increased teaching loads and depersonalization of learning. He advocates for a more holistic evaluation of students as thinkers and learners, emphasizing the importance of personal interaction and intellectual development over mere textual output.

Summary courtesy of GPT-4.

📬 From Our Partners:
An AI Short Course from MIT

Artificial Intelligence online short course from MIT

Study artificial intelligence and gain the knowledge to support its integration into your organization. If you're looking to gain a competitive edge in today's business world, then this artificial intelligence online course may be the perfect option for you.

  • Key AI management and leadership insights to support informed, strategic decision making.

  • A practical grounding in AI and its business applications, helping you to transform your organization into a future-forward business.

  • A road map for the strategic implementation of AI technologies in a business context.

🔜 What’s Next for Premium Subscribers

Late in the fall of 2023, we started posting Premium pieces every two weeks, consisting of comprehensive guides, releases of exclusive AI tools like AutomatED-built GPTs, Q&As with the AutomatED team, in-depth explanations of AI use-cases, and other deep dives.

Our upcoming Premium pieces will be released on the following dates and will cover these topics:

  1. February 7th - an AI use-case deep dive into how professors and others in higher ed can best leverage Microsoft Copilot and Microsoft 365 Copilot.

  2. February 21st - an AI use-case deep dive similar to the above but focused on Google’s Bard and Duet.

If your college or university uses Windows and the Microsoft 365 suite or Google Workspace, you won’t want to miss these deep dives.

So far, we have published four Premium pieces.

To get access to Premium, you can upgrade for $5/month or $50/year, or you can get one free month for every two (non-Premium) subscribers that you refer to AutomatED (note: we expect to raise prices this spring, so now is the time to lock in a lower rate).

To get credit for referring subscribers to AutomatED, you need to click the button below or copy/paste the included link into an email to them.

(They need to subscribe after clicking your link; otherwise, their subscription won’t count for you. If you cannot see the referral section immediately below, you need to subscribe first and/or log in.)