5  Interacting with LLMs

Large Language Models are designed to generate the most probable sequence of tokens to follow a given input sequence. We can think of this input sequence as the context given to the LLM. Just as in conversations with strangers, context plays a big role. One way of providing context is by constructing complex and detailed prompts that try to fully describe the background information of the request. Another, more conversational, way is through back-and-forth interactions.

Currently, most publicly available LLMs are designed as chatbots. Their user interfaces are built for conversational interactions, and the back-end models are optimized to perform better in these interactive settings.

5.1 Conversations vs. questions

When we ask a single question, we are expected to front-load all the relevant information needed for an accurate response. However, this can be challenging when tasks or queries are complex and lengthy. As most current web-based LLMs have a conversational design, it is often more effective to interact with them in a conversational structure.

Follow-ups are a great way to validate and refine LLM outputs. They are the perfect complement to prompting. Instead of meticulously crafting the perfect prompt, we can initiate the task and then provide follow-up information, refine requests, or suggest actions.

– > give me a summary of Newton's Laws of Motion
– > make it two paragraphs
– > frame it on a first-year college level
– > give me an example
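The exchange above can be sketched as a growing message history in the role/content chat format that most LLM APIs use. This is an illustrative sketch, not a real API integration: `fake_llm` is a hypothetical stand-in for an actual model call, and simply reports how much conversational context it received.

```python
# A minimal sketch of how a conversational exchange accumulates context.
# The role/content message format mirrors the chat format used by most
# LLM APIs; `fake_llm` is a hypothetical stand-in for a real model call.

def fake_llm(messages):
    """Placeholder for a real LLM API call; echoes how much context it saw."""
    return f"(response informed by {len(messages)} messages of context)"

def follow_up(history, user_turn):
    """Append the user's turn, get a reply, and record it in the history."""
    history.append({"role": "user", "content": user_turn})
    reply = fake_llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
follow_up(history, "give me a summary of Newton's Laws of Motion")
follow_up(history, "make it two paragraphs")
follow_up(history, "frame it at a first-year college level")
follow_up(history, "give me an example")
# Each follow-up is answered with the full conversation so far as context.
```

The key point is that every follow-up carries the whole conversation with it, which is why short refining turns can stand in for one long, meticulously crafted prompt.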

The conversational structure is also a good way to spot mistakes or hallucinations. It also provides a way to focus on particular aspects of the LLM's output rather than others.

From the learning perspective, this conversational approach also helps the user engage in a dialectic method of understanding. By following up, asking clarifying questions, and providing refining context, the user is actively engaging with the topic rather than passively reading an output.

5.2 Chain of thought (CoT)

Users are not the only ones who benefit from this dialectic approach to interacting with LLMs. The models themselves generally perform better when they produce a step-by-step reasoning process.

Regular prompt: what is the best way to study for a midterm?

CoT prompt: what is the best way to study for a midterm? support your answer with pedagogical research and include pros and cons.
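The difference between the two prompts can be sketched as a small helper that appends requests for reasoning and support to a plain question. The wording of the added instructions is illustrative, not a fixed recipe.

```python
# A minimal sketch of augmenting a regular prompt into a CoT-style prompt
# by appending requests for reasoning and support. The wording of the
# added instructions is illustrative, not a fixed recipe.

def to_cot_prompt(question, supports):
    """Append support/reasoning requests to a plain question."""
    extras = " ".join(s.capitalize() + "." for s in supports)
    return question.rstrip("?") + "? " + extras

regular = "what is the best way to study for a midterm?"
cot = to_cot_prompt(
    regular,
    supports=[
        "support your answer with pedagogical research",
        "include pros and cons",
    ],
)
```

The underlying question is unchanged; the CoT version simply asks the model to show the grounds for its answer, which also gives the reader more to evaluate.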

This is especially true when queries are subjective or our own knowledge of the topic is limited: using CoT provides more context for evaluating the output of the LLM.

5.3 Agent-like behavior

An agent is an independent entity that is able to make decisions and take actions on its own. Even though LLMs don't present all the features of agents, it is useful to think of them as agent-like, since they are actively making decisions about which text, facts, or references to use in their outputs.

5.3.1 Context

Providing enough context about requests can be effective for obtaining better outputs. This includes giving a clear background of the task, topic, purpose, and user details.

Student: Make sure to include details about your course, your major, your level of expertise, and the purpose of the query (whether it is for an assignment, for studying, etc.).

Instructor: Include the background of your audience, the level of engagement, learning goals, desired outcomes, etc.

In this sense, it is useful to think of the LLM as a person who is assisting you but has no clue who you are, what your goal is, or what is valuable to you.

Pro-tip: Save all the context that you'll be using for your course as a text file for easy reuse. Alternatively, some LLM providers let you create special bots, agents, or projects (each company has its own terminology) that allow you to save general instructions that are pre-appended to all your prompts.
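The text-file half of this pro-tip can be sketched in a few lines: keep the course context in one file and prepend it to each one-off prompt. The file name and context text here are made up for illustration.

```python
# A minimal sketch of the pro-tip: store course context in a text file
# and pre-append it to every prompt. The file name and context text are
# made up for illustration.

import tempfile
from pathlib import Path

context_file = Path(tempfile.gettempdir()) / "course_context.txt"
context_file.write_text(
    "Course: introductory physics, first-year college level.\n"
    "Audience: students with little calculus background.\n"
    "Purpose: preparing study materials for a midterm.\n"
)

def with_context(prompt, path=context_file):
    """Prepend the saved course context to a one-off prompt."""
    return path.read_text() + "\n" + prompt

full_prompt = with_context("give me a summary of Newton's Laws of Motion")
```

Editing the one file then updates the context for every future prompt, which is the same idea behind the provider-side "projects" or custom instructions mentioned above.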

5.3.2 Process

It is key to focus on processes and not only on products. This not only improves the quality of the outputs, but also helps the user gain a deeper understanding of the task at hand.

Prompting the LLM to include intermediate steps or to support its responses is a good way to improve its effectiveness. Clarifying follow-ups and adding more context are also effective strategies for increasing accuracy and obtaining more useful outputs.

Dangerous practice

The opposite extreme is dangerous. When we ask LLMs to "reply in one word" or "reply in one sentence," we open the door to hallucinations: skipping context and reasoning steps increases the chances of errors and noise.

5.3.3 Supervising

Although LLMs are not fully autonomous, tasks can be fully or partially offloaded to them. In this sense, users can take on the role of a manager or supervisor when interacting with LLMs. This includes providing clear guidance and information, and evaluating and interpreting the results.

Make sure to constantly evaluate and reflect on the performance of your LLM. This can help you identify whether you need to provide more context, be clearer with your directions, or perhaps try a different model or implementation.