9 Tips, tricks, and useful prompts
Finally, here is a summary of some relevant things to keep in mind when using LLMs for teaching and learning.
9.1 Useful considerations
9.1.1 Give enough context
Include information in your prompt about your role and the purpose of your request.
i am teaching an upper division course for math majors in a higher education institution. check the following problems for clarity in language. [attach or copy-paste the problem statements...]
9.1.2 Follow-ups
Just as with writing, LLMs provide better results with edits and follow-ups. Instead of crafting a single comprehensive prompt, ask for improvements or clarifications in follow-up prompts, as in the example below.
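For example, after an initial request to draft practice problems, a possible follow-up could look something like:
the second problem is too advanced for this course. rewrite it so it only uses concepts we have already covered, and make the wording more concise.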
9.1.3 Feedback and rephrasing
Ask LLMs to give feedback on a given text. You can take it a step further by assigning the LLM an editorial role:
for the text below, give me feedback as if you were an editor. focus on clarity of the text, as well as the language used. this is for a lower division chemistry course. check consistency, grammar, and technical language. keep my voice.
9.1.4 Summaries and outlines
LLMs can be very useful for summarizing text and identifying its important elements. However, what counts as important is contextual and course dependent: the same text can be read with different emphases depending on the focus of a given course or instructor. Including that focus in the prompt improves the usefulness of the output, as in the example below.
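For example, a summary prompt with an explicit focus could look like:
summarize the following article for a lower division environmental science course. focus on the methodology and the policy implications, and keep the summary under 300 words. [attach or copy-paste the article...]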
9.1.5 Uncovered reasoning
A good way to improve the quality of LLM outputs is to prompt the model to explain its reasoning. When you use an LLM to solve problems or analyze situations, including an instruction to explain or show its reasoning usually leads to better results, as in the example below.
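For instance, you could add something like this to a problem-solving prompt:
solve the following optimization problem. explain your reasoning step by step, and at the end summarize the key steps of your solution.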
9.1.6 Tutor/disciple (for learners)
You can endow an LLM with a (temporary) role. Use it as a tutor if you want it to help you clarify concepts related to a class:
i am a sophomore student taking BIO 20 at UCSC. you are a biology tutor helping me with my homework. i will ask you some questions related to concepts that are not very clear to me. help me understand them and guide me through these homework questions.
A very effective way to learn a topic is to teach it. Consider also giving the LLM the role of a disciple: prompt it to forget everything it knows about a topic and tell it that you will explain the topic yourself.
forget everything you know about projectile motion. i will explain to you the important concepts related to this. ask me clarifying questions when something is not clear.
9.1.7 Test student (for instructors)
You can give the LLM the role of a test student in your class and ask it to check an assignment for clarity and level.
you are a student in my calculus 1 class. at this moment, we are covering the chain rule. check the following assignment for clarity (for first year students in STEM) and estimate the level of the problems.
9.2 Careful considerations
On the other hand, some uses can be problematic, less than ideal, or even dangerous for teaching and learning.
9.2.1 Writing essays or solving problems (learners)
LLMs are becoming better at these tasks; however, careful consideration is needed here. As a learner, one of the main goals of writing an essay or solving a question or problem is the process of doing it yourself. Learning happens in thinking about how to structure the essay, in working out how to approach the question or problem, and in the trial and error it takes to finally reach the goal. Generating essays or problem solutions without paying attention to that process skips the learning altogether.
9.2.2 Grading (instructors)
Especially because LLMs behave as black boxes, using them for high-stakes tasks such as grading assignments can be inaccurate and problematic. For classification-type tasks (such as grading), LLMs are biased towards their training data, and, due to their probabilistic nature, their classifications may not be replicable. The results can be unfair and inconsistent. In addition, using LLMs for grading raises privacy concerns when sharing student data.
9.2.3 Hallucinations
In general, LLMs are stochastic models that produce text. As such, they are prone to generating non-factual outputs. This issue is improving; however, it is important to always double-check the outputs and not take them as completely accurate.
9.2.4 Calculations
LLMs are language models and, in general, do not have computational engines. As such, they are not well suited for performing calculations, such as arithmetic or calculus. It is much better to use a calculator instead, where the results are reliable and reproducible.
9.2.5 Short answers
Prompting LLMs to produce short answers (sometimes a single word) can increase the chances of hallucinations. One of the best strategies to improve accuracy is to have the model explain its reasoning; going in the opposite direction and prompting it to reply only with a short answer can produce noisy output.