Teaching with Large Language Models

Under Construction – Parts 2 and 3 to be completed by 3/28

Much has been written about how students are using Large Language Models (LLMs) to do (or avoid doing) work. Less frequently discussed, however, is how instructors might use LLMs to aid in their work. Here, I’ve compiled several concrete ways that I have incorporated LLMs into my workflow as a teaching professor. These fall into three main categories:

  • Using LLMs to assist in the creation of visual aids
  • Formulating and checking high level ideas
  • Classroom management

While my own field of computational neuroscience is particularly amenable to the techniques suggested here (as I have developed them for my classes), I provide examples from other fields as well.

Part 1: Using LLMs to create visual aids

The value of this is easier to show than tell. Below is a quick video of me explaining a concept in computational neuroscience. It is explained at a broad level and should be accessible to students who know fairly basic concepts in biology – namely, that brains are composed of neurons that communicate by sending pulses. To note, essentially all of the non-anatomical figures and animations in the presentation were created using ChatGPT from relatively few prompts. At the end of the video, I also show the original figures that inspired the presentation.

[Insert video]

While the original figures (Dayan and Abbott, 2001) are excellent, LLMs allow for the rapid creation of visual aids that fit the pedagogical goals of the situation, rather than forcing the learning environment to fit the available visual aids. This is particularly true of animations, which can be time-consuming to make. The initial animation used to explain how neurons fire in response to moving stimuli carries much of the scaffolding students need to understand the more complicated ideas in the video. While the concept could be explained on a whiteboard or with textbook illustrations in a slideshow, I hope you agree that the animation shown is much more efficient than those alternatives. The main issue with creating something like this for every presentation is that it is difficult to justify 45 minutes of prep work for 2 minutes of lecture content. LLMs bring the initial figure creation down to approximately 5 minutes of work.

To see exactly how I created these animations, you can review the conversation here. Many instructors might initially just ask for a figure or animation, and the LLM will create one directly. For a variety of reasons, these figures often look science-y but contain fundamental flaws that make them useless in academic contexts. The primary trick for using LLMs this way is to ask for the figure to be made programmatically (e.g., with Python or MATLAB). While this may be daunting for instructors who do not know how to program (or haven’t done so in some time), I strongly encourage people to try anyway. For simpler tasks, most LLMs are now able to run code themselves, and the user does not need to program anything; rather, the LLM will simply output the figure in question, and you can comment on it directly, asking for the changes you need (e.g., make the bars green, increase the font size on the axes, etc.). For more complicated tasks, or for making simple figures faster, an understanding of the basics of programming is useful and often still necessary. By being able to at least generally read the code and follow how it flows, you will be able to 1) understand what can be done and what to ask for, and 2) pinpoint where things might go wrong. I have set aside a section below for people who are uncomfortable with coding, describing how you might approach it successfully with the aid of LLMs.
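To make this concrete, here is a minimal sketch of the kind of programmatic figure code an LLM typically produces when asked this way. The example is a hypothetical orientation tuning curve; all parameters are invented for illustration, not taken from any real dataset.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display (unnecessary in Colab)
import matplotlib.pyplot as plt

# Hypothetical example: a neuron's firing rate modeled as a Gaussian
# tuning curve over stimulus orientation. Parameters are illustrative.
orientation = np.linspace(-90, 90, 181)    # stimulus orientation (degrees)
preferred, width, peak_rate = 20, 25, 40   # assumed tuning parameters
rate = peak_rate * np.exp(-0.5 * ((orientation - preferred) / width) ** 2)

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(orientation, rate, linewidth=2)
ax.set_xlabel("Stimulus orientation (deg)", fontsize=14)
ax.set_ylabel("Firing rate (Hz)", fontsize=14)
ax.set_title("Orientation tuning curve", fontsize=16)
fig.tight_layout()
fig.savefig("tuning_curve.png", dpi=150)
```

Because the figure is defined by code rather than pixels, follow-up requests ("shift the preferred orientation", "add a second neuron") become one-line changes rather than a fresh image generation.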

The possibilities of programmatic figures

I teach in a field with a lot of math and coding, and the example video at the beginning demonstrated how these ideas may be put into figures for easier consumption. However, I want to stress that the utility of LLMs goes well beyond fields like mine. Below are several examples that might be pertinent to other fields. Notably, I gave myself a challenge in making these – all of them were done in less than 15 minutes, often starting from my knowing nearly nothing about the tools available.

Anatomical Figures

Above is the result of a conversation where I asked ChatGPT to use an anatomical toolkit (nilearn) to create a diagram showing the hippocampus in blue and the left caudate nucleus in red. While it got the colors slightly wrong, it did correctly highlight the two regions in question. I am sure that with a few more minutes beyond my self-imposed clock I could get the color coding correct. The conversation for this image is here.

Visualizing Chemical Structure

Stepping beyond my usual field of neuroscience, I asked GPT to visualize chemical structures. I began with something simple, asking it to visualize the Diels-Alder reaction. After some fine-tuning (seen in the conversation), I was able to get it to create the following image (note that I added the orange arrow).

This particular conversation highlights a basic issue with using LLMs: their outputs need some outside verification. Initially, it created a diagram for cyclohexane, not cyclohexene, which is an important distinction. For an instructor in the field, this is not a major issue – a query of literally “This seems off?” caused the system to correct itself. However, this is where domain expertise comes in, and it shows where students may go off the rails.

Once the workflow for setting up an initial diagram was created, making more complicated molecular shapes became quite simple.

 

Mapping Waterways of Houston

Branching out to the non-biological sciences, I had GPT help map out the waterways of Houston using publicly available databases. To make it easier to match locations that a local student would recognize, I also marked our university with a purple star and the beltway around the city in red.


Mapping Wealth in Houston

In an attempt to move beyond the standard STEM fields, I asked GPT to make a map of Houston wealth by zip code. This is the first task where I failed my self-imposed 15-minute deadline to create a figure. I believe I could finish it with a few more minutes of work, but I want to be honest about the capabilities of LLMs as a tool.

The primary hurdle, however, was this:

It looks like the Census Bureau is blocking Colab or Python’s default urllib user-agent from accessing the .zip directly. This is a known issue for some federal data hosts.

Let’s fix it by faking a browser user-agent string so we can download the file anyway

There are limitations on who can access these databases, mostly to keep automated systems from consuming all of the organization’s bandwidth (the same reason many websites have some variation of a ‘Prove that you’re human’ task). GPT initially tried to create code that faked being a person, but was unable to do so in the time allotted.
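For readers curious what the proposed fix looks like, here is a minimal sketch of the browser user-agent workaround GPT was attempting. The URL below is a placeholder, not the actual Census endpoint, and the header string is one common browser-style value.

```python
import urllib.request

# Placeholder URL -- substitute the actual file you are trying to download.
url = "https://example.gov/data/archive.zip"

# Default urllib requests identify themselves as "Python-urllib/3.x",
# which some hosts block. Supplying a browser-style User-Agent header
# makes the request look like it came from an ordinary web browser.
req = urllib.request.Request(
    url,
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
)
# data = urllib.request.urlopen(req).read()  # uncomment to perform the download
```

Whether this works depends on the host; as noted above, some organizations intentionally block automated downloads, and respecting those limits is the better course.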

Suggestions for People Uncomfortable with Programming

Many instructors are not comfortable with coding, sometimes from lack of experience and sometimes precisely because of bad experiences. I strongly suggest that everyone take another look at programming with the help of an LLM – it is an entirely different experience, and one that is much friendlier and more productive for people entering the field.

My first suggestion is to use Google Colab, as it requires almost no buy-in to get started and uses the beginner-friendly Python language (which is also the default for most LLMs). Simply go to www.colab.research.google.com and start a new notebook. Copy and paste the figure code your LLM suggests into the code cell, and hit the play button (noted with a red arrow below).

Below the cell, Colab will output either the figure or an error message. If there is an error message, simply copy it back into your LLM with a note: “Below is the error message I received: [pasted error message]”. The LLM will attempt to diagnose it. Likewise, if the figure is not quite what you want, paste it into the LLM and explain how you want it changed (see examples in the conversations above). This process of feeding outputs back to an LLM and having it make corrections with little input from the user has become known as ‘vibe coding’. To note, you should not do this with data that needs to be secure or for anything that must be absolutely correct (e.g., student grades, accounting, safety equipment, published research, etc.). While AI can assist with those things, each line of code should be verified by experienced professionals. However, for the needs of a classroom setting, where we often elide details to get across general points, vibe coding is excellent. If the above map of Houston waterways misses a small creek that we aren’t discussing in class that day, there is no real issue – the pedagogical need is for general points of reference, not absolute accuracy. If the city of Houston used that map for flood planning, ignoring that creek could cost millions of dollars and even lives.
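If you want something safe to practice the paste-and-play loop on, a first snippet might look like the following. The data are made up purely for practice, and the commented lines mark exactly the kinds of small tweaks (colors, font sizes) you can request in plain English.

```python
import matplotlib
matplotlib.use("Agg")  # not needed in Colab, which displays figures inline
import matplotlib.pyplot as plt

# Made-up category counts, purely for practice.
labels = ["A", "B", "C", "D"]
counts = [12, 7, 15, 9]

fig, ax = plt.subplots()
ax.bar(labels, counts, color="green")   # e.g., "make the bars green"
ax.tick_params(labelsize=14)            # e.g., "increase the axis font size"
ax.set_ylabel("Count", fontsize=14)
plt.show()
```

Try changing `"green"` to another color name or editing a number, re-running the cell, and seeing the result – that tight loop is most of what the workflow requires.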
Once you are used to this process and have practical results for your classroom – and only at this point – I would suggest trying to learn to code more formally. This may seem like an odd order, but I find that almost no one learns to program unless they have a project they find interesting or useful. Stick with Python unless you have a good reason not to, and do a few lessons at www.learnpython.org. After you get the basics, have conversations with your LLM about the code and focus on where you are confused. While you will probably never code something from scratch for your class, knowing how programming works will let you make rapid tweaks (e.g., how big a figure is, how fast an animation runs, etc.) without the longer process of going through an LLM. It will also give you a better feel for what you can ask of LLMs, and what might be accomplished programmatically.

 

Suggestions and final remarks on visual aids

  • Don’t ask an LLM for images; ask it for figures made from code
  • Figures are best when generated from data or models, not from requests like “Make a diagram of a cell”
  • Your expertise is still needed to check the veracity of the outputs and fine-tune them for your classroom situation
  • When asking for a figure, provide context. Upload PDFs and describe the classroom setting the figure will be used in. In general, keep a chat going as long as you’re on the same subject.
  • For more detailed tasks, before having it create the code, ask ‘Do you have any questions before you begin?’

In every field, there are brilliant tools for making figures that are under-utilized because the learning curve is so high. One of the greatest things about LLMs is that the friction of using those tools is now low enough that they can be used in a classroom setting without hesitation.


Part 2: Ideation and Communication

A large part of being an instructor is thinking about how to effectively communicate complicated ideas, and then actually doing so. LLMs are quite helpful in both regards.

 

Working out an explanation

 

Practice giving an explanation

  • Avoiding sycophancy

 

  • Practicing lecture snippets

  • Note taking and ideation

 

Part 3: Classroom management

  • Canvas automation

  • Integrating Google Forms


[1] Dayan, P., & Abbott, L. F. (2001). Theoretical Neuroscience. MIT Press.