8  Relevant concepts

Invariably there are a few concepts that keep coming up in most artificial intelligence conversations, discussions, panels, and events. Beyond general or specific AI implementations, these concepts can be used as individual or institutional reflections that can help us navigate this emerging landscape, especially in higher education.

8.1 Trust

This is one of the most ubiquitous concepts appearing in AI discussions. We are experiencing a realignment of trust with respect to LLM companies and implementations. This includes how much we trust the models, the data used for training, the quality of the outputs, and the way that individuals and society are implementing this technology. In general, I would venture to say that, as a society, we are still figuring out how much trust we can place in LLM implementations.

Usually, trust and reputation take time to solidify. LLMs, however, are still very new to the general public, and it may take some time for their trust level to become established. At this moment, I consider it healthy to maintain a certain level of skepticism while this societal trust stabilizes.

8.2 Critical Thinking

Perhaps as a natural response to this evolving trust, the concept of critical thinking appears in every circle debating AI usage and its influence in higher education. Faculty, students, and even industry all concur that developing critical thinking is one of the most important qualities in this AI-enabled educational era.

However, critical thinking by itself can be challenging to define accurately. Focusing on what critical thinking means for you as a student or instructor, especially in relation to LLM usage, becomes crucial.

8.3 Agency

On the more practical side, when talking about general AI implementations and some more recent ones, the concept of agency is key to understanding and effectively promoting LLM usage. In general, agency refers to the capacity to make decisions. Most LLM implementations have minimal agency, limited to decisions about reasoning paths, which data sources to use, and what information is most relevant for the user. However, the more agency that is given to LLMs (or AI systems in general), the more important it becomes to define clear evaluation and oversight methods for these systems.

The more agency an AI system has, the more humans take on a supervisor or manager role. This concept also goes hand in hand with the amount of trust that is given to, or earned by, the AI system.
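The supervisor role described above can be made concrete in system design. The following is a minimal sketch, assuming a hypothetical agent whose proposed actions must pass through a human approval gate before execution; the function and action names are illustrative, not from any particular framework:

```python
def run_with_oversight(proposed_actions, approve):
    """Human-in-the-loop gate: each action an AI agent proposes
    is executed only if the human supervisor approves it."""
    executed = []
    for action in proposed_actions:
        if approve(action):  # the human (or a human-defined policy) decides
            executed.append(action)
    return executed

# Hypothetical policy: only low-stakes actions are approved.
safe = {"summarize syllabus", "draft email"}
print(run_with_oversight(["summarize syllabus", "submit grades"],
                         lambda a: a in safe))
# → ['summarize syllabus']
```

The design choice here is that greater agency does not remove the human from the loop; it changes the human's job from performing each action to reviewing which actions are allowed.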

8.4 Accountability

When decisions are made by AI, or by using AI (or LLMs) in the decision pipeline, it is important to define where accountability lies. Every time we make a decision based on the output of an LLM, there must be a clear line of accountability. For example, when submitting an assignment, the student bears accountability for false information, wrong deductions, and poor quality. Similarly, when creating slides or class materials, the instructor bears accountability for wrong or insensitive information and unhelpful descriptions.

8.5 Attribution

Depending on the level of LLM usage, attribution becomes a relevant concept supporting transparent implementations and decision making. Whether LLMs are used to search for information, as writing aids, or to organize content, attribution supports the trust and accountability of the implementation.

LLMs can be cited as a source of information or credited as a collaborator on a project. This range illustrates the spectrum of roles that LLMs can play in a project and the different levels of attribution that can be used.

8.6 Data

Finally, a relevant concept surrounding LLMs is data. This concerns not only the data used to train a specific model but also how user data is managed.

Awareness of training data can help manage possible output bias, either in the facts used or in the way information is presented. One challenge LLMs present is a tendency toward a kind of average voice; many instructors worry about students losing their writing voice.

A practical consideration concerns each LLM provider's data privacy policies. This becomes very relevant when sharing private information, for example, an instructor sharing student data. Including research data or copyrighted material can also be risky, since the LLM provider might use user input to improve its training, and this material could become part of the LLM's knowledge.
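One practical habit that follows from this concern is redacting obvious identifiers before sending text to an LLM. The sketch below is a minimal illustration only; the placeholder labels and the eight-digit student ID format are hypothetical, and real de-identification requires far more than regular expressions:

```python
import re

def redact(text):
    """Replace obvious personal identifiers with placeholders
    before text is shared with an LLM. Illustrative, not exhaustive."""
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Student ID numbers (hypothetical 8-digit format)
    text = re.sub(r"\b\d{8}\b", "[ID]", text)
    return text

print(redact("Contact Ana at ana@uni.edu, ID 12345678."))
# → Contact Ana at [EMAIL], ID [ID].
```

Even a simple pass like this makes the data-sharing decision explicit: what leaves the institution is a choice, not an accident.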