How to Use ChatGPT to Learn

A new paper on LLMs in the classroom that contains heuristics for their effective use.

[image created with Midjourney]

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, we share what we’ve learned about AI and tech in the university classroom. What works, what doesn't, and why.

In this piece, Caleb Ontiveros applies tips from a recent paper on teaching economics with ChatGPT.

How Do You Teach Students to Use AI Well?

That’s one question that economists Tyler Cowen and Alex Tabarrok ask in their paper “How to Learn and Teach Economics with Large Language Models, including GPT.” They note that, in a few months, this paper may be out of date. Maybe so, but it’s useful now, and I expect key lessons to remain.

In this piece, I’ll highlight their nine heuristics for using ChatGPT (or other LLMs) well.

I’ll illustrate each of their suggestions with examples from GPT-4. This should give you a good sense of how to use these tools yourself and what heuristics to impart to students.

🦾 Understanding the Strengths and Limitations of LLMs

Cowen and Tabarrok start with what we need to know about LLMs.

First, there are a number of misconceptions. LLMs are not just “autocomplete” engines. It looks increasingly likely that they possess internal models. Nor are they tools that simply read from a database or allow one to “talk” with the internet. Instead, they generate novel sentences.

But how useful are these sentences? Or to be more precise, what are LLMs actually good for?

Cowen and Tabarrok answer that LLMs:

  • Read, transform and manipulate text

  • Improve writing

  • Improve coding

  • Summarize ideas

  • Answer analytical questions with short causal chains

  • Solve simple models

  • Write exams

  • Generate hypotheses and ideas

Cowen and Tabarrok

On the other hand, their performance is mixed when it comes to:

  • Finding data

  • Directing you to sources

  • Summarizing papers and books

Cowen and Tabarrok

And there are some things they are, in my view, not good at:

  • Answering precise empirical questions (“What famous people were born June 11, 1963?”).

  • Answering math questions (Such as: “Give an example of a finite abelian (i.e. commutative) monoid that is not a group, or prove that such a structure cannot exist.”).

  • Completing sophisticated and lengthy chains of reasoning.

It’s important to remember that each of these assessments may change; the models are improving rapidly.

👑 Mastering the Art of Prompt Engineering

Some people complain that LLMs are misleading, inaccurate, and illogical.

These complaints have merit: ChatGPT, for example, is not good at sourcing data. But LLMs are tools, and we can be better or worse at using them.

Using Google well is an art (“Google-fu”). Performing literature reviews is a skill. Learning with LLMs like ChatGPT is no different: what you ask for is what you get, and learning how to ask is crucial to getting better answers.

Cowen and Tabarrok give nine tips for prompting LLMs (“prompt engineering”). These tips, more than anything else, may be what you want to impart to your students.

1. 🖼️ Include lots of detail and specific keywords

Note the difference in the performance of these image generation queries:

Context and details matter!
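
The same principle carries over to text prompts. For readers who like to experiment programmatically, here is a minimal sketch using OpenAI’s Python library (openai >= 1.0); the model name and both prompts are my own illustrations, not examples from the paper.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# A vague prompt versus one loaded with detail and specific keywords.
vague = "Explain economic growth."
detailed = (
    "Explain the main drivers of Ireland's economic growth since 1990 "
    "for a second-year undergraduate economics class. Address foreign "
    "direct investment, EU membership, corporate tax policy, and "
    "education, and flag which explanations are contested."
)

for label, prompt in [("vague", vague), ("detailed", detailed)]:
    response = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content[:400])
```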

2. 🧠 Make your question sound smart

Users need to “set the tone” when they're interacting with an LLM. Ask for an answer that exhibits expertise and mastery in the subject matter – if that’s what you want.

For example, one can ask: “What is a good criticism of microeconomics?”

The first entry is, indeed, a common criticism of microeconomics and economics in general. Here’s a slightly better question:

This question received a slightly better answer. However, it can still be improved. Cowen and Tabarrok’s other tips detail better ways to set the tone.

3. 👤 Ask for answers in the voice of various experts

This is essentially an extension of tip #2.

Given some familiarity with Cowen, I expect him to stress the third point in particular, so I’d consider this a reasonable answer – especially the ending summary:

A follow-up question shows how mixed ChatGPT is when it comes to sourcing:

Most of this is correct, but it may be too vague to be useful.
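
If you are scripting these experiments, the same heuristic can be applied with a system message that sets the persona. A sketch, with an assumed model name and an illustrative persona prompt:

```python
from openai import OpenAI

client = OpenAI()

question = "What is a good criticism of microeconomics?"

# The system message sets the persona; swap in whichever expert voice you want.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer in the voice of an academic economist such as Tyler Cowen: "
                "cite concrete mechanisms, weigh trade-offs, and note where "
                "economists disagree."
            ),
        },
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```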

4. 🆚 Ask for compare and contrast

Compare the performance of the following:

5. 📜 Have it make lists

LLMs love lists. Even prompts that don’t explicitly ask for a list often get one anyway; the model can’t help itself.

6. 🛞 Keep on asking lots of sequential questions on a particular topic

In their paper, Cowen and Tabarrok suggest following up a query about Irish economic growth with “What about the importance of demographics?” Interestingly, in my case, demographics was the first factor named!

Instead, we can follow up with the following:

A metaphor that may be useful: instead of looking for an answer with LLMs, use them to help explore the space of ideas.
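
If you want to script this kind of exploration, the key is to keep the whole conversation in the messages list so that each follow-up builds on the previous answers. A minimal sketch, again with an assumed model name and illustrative questions:

```python
from openai import OpenAI

client = OpenAI()

def ask(transcript, question, model="gpt-4"):
    """Append a question, get an answer, and keep both in the running transcript."""
    transcript.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model=model, messages=transcript)
    answer = reply.choices[0].message.content
    transcript.append({"role": "assistant", "content": answer})
    return answer

transcript = []
print(ask(transcript, "What explains Ireland's economic growth since 1990?"))
print(ask(transcript, "How important was EU membership relative to corporate tax policy?"))
print(ask(transcript, "What would a skeptic of that explanation emphasize?"))
```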

7. 📓 Ask it to summarize doctrines

I will note that this tactic may misfire. This heuristic is best used with the follow-up suggested in tip #8 – either for better understanding or for generating new ideas.

8. 🎧 Ask it to vary the mode of presentation

If you seek understanding, you can always try:

Alternatively, you can ask it to explain the concept to a historical figure.

Following tip #1 and adding detail gets us further:
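
For those scripting their prompts, varying the mode of presentation is just a matter of rephrasing the request. The concept and phrasings below are my own illustrations:

```python
from openai import OpenAI

client = OpenAI()

concept = "comparative advantage"

# The same concept, requested in three different modes of presentation.
modes = [
    f"Explain {concept} as a short Socratic dialogue between a student and a teacher.",
    f"Explain {concept} to a curious 19th-century merchant who has never studied economics.",
    f"Explain {concept} with a concrete numerical example involving two countries and two goods.",
]

for prompt in modes:
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content[:400], "\n---")
```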

9. 🐣 Use it to generate new ideas and hypotheses

This is one of the best uses of LLMs. Cowen and Tabarrok suggest asking the following:

A number of assertions in this answer need to be fact-checked; that’s to be expected.

Nonetheless, following this line of inquiry could be fruitful. Additional questions come to mind: Is there actually a correlation between progressive work culture and reports of sexual harassment? To what extent can that factor be teased apart from the diversity of a workforce and its education level?

If we followed up with questions that detailed expert opinions, compared different theories, and separately verified the empirical data, we might crystallize an interesting, important, and plausible hypothesis.
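
One way to bake that fact-checking step into the workflow is to ask for hypotheses together with the evidence that would support or undermine each. A sketch, with a prompt of my own devising rather than Cowen and Tabarrok’s:

```python
from openai import OpenAI

client = OpenAI()

prompt = (
    "Generate five hypotheses about why reports of sexual harassment vary "
    "across industries. For each hypothesis, list the empirical evidence "
    "that would support it, the evidence that would undermine it, and one "
    "dataset or study a researcher could check."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```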

🤖 LLMs as Smart and Imperfect Interlocutors

Cowen and Tabarrok’s tips are useful, but perhaps most important is the implicit model they put forth:

Think of GPTs not as a database but as a large collection of extremely smart economists, historians, scientists and many others whom you can ask questions.

Imagine getting Ken Arrow, Milton Friedman, and Adam Smith in a room and asking them economics questions. If you ask them what was Germany’s GDP per capita in 1975 do you expect a perfectly correct answer? Well, maybe not. But asking them this question isn’t the best use of their time or yours. A little bit of knowledge about how to use GPT models more powerfully can go a long way.

Tyler Cowen and Alex Tabarrok

🏆 An Update on the AI-Immunity Challenge

As Graham noted last week, we have received many excellent submissions for our AI-immunity challenge. We are very excited to see the creativity of our community in coming up with AI-immune assignments for us to try to crack with the latest AI tools.

We plan to continue to receive and review assignment submissions for the next several weeks, so please check out the challenge parameters and submit an assignment if you believe you have a good one.

We hope to start posting pieces on the results of the challenge by early May.

