AI Literacy & Prompt Engineering Essentials (2026)
Master the core concepts of Generative AI, Large Language Models (LLMs), and advanced Prompt Engineering techniques. This deck covers essential terminology and framework-based prompting (like Chain-of-Thought and Few-Shot) to help you stay competitive in the AI-driven workforce.
Cards in this deck
What is a "System Prompt"?
A high-level instruction set that defines the AI's persona, boundaries, and tone before the user interacts with it.
Define "Chain-of-Thought" (CoT) Prompting.
A technique where the AI is asked to "think step-by-step" to improve its reasoning and accuracy in complex tasks.
What is "Hallucination" in LLMs?
A phenomenon where an AI generates confident but factually incorrect or nonsensical information.
Few-Shot Prompting vs. Zero-Shot Prompting
Zero-Shot: Asking the AI to perform a task with no examples. Few-Shot: Providing a few examples of the desired output within the prompt.
What does "Temperature" control in AI settings?
It controls the randomness of the output. Low temperature = focused and predictable; High temperature = creative and diverse.
What is an "AI Agent"?
An autonomous system that uses an LLM to plan tasks, use tools (like searching the web), and achieve a specific goal without constant human input.
Define "Token" in the context of LLMs.
The basic unit of text (chunks of characters or words) that an AI processes. Most models have a "Token Limit" for their memory/context window.
What is "RAG" (Retrieval-Augmented Generation)?
A method that gives an AI access to external, real-time data or specific documents to provide more accurate and up-to-date answers.
What is the "Context Window"?
The total amount of information (tokens) an AI can "remember" and consider at one time during a conversation.
Define "Multimodal AI"
An AI system that can process and generate multiple types of data, such as text, images, audio, and video simultaneously.
What is "Self-Consistency" in prompting?
A technique where you ask the AI to generate multiple different paths to an answer and then pick the most frequent (consistent) result to ensure accuracy.
Define "Tree-of-Thoughts" (ToT).
An advanced reasoning framework where the AI explores multiple branches of a problem simultaneously, evaluating and pruning them like a decision tree.
What is "Meta-Prompting"?
The act of using an AI to write, refine, or optimize a prompt for another AI (or itself) to achieve a better outcome.
Define "Prompt Chaining".
Breaking a complex task into smaller sub-tasks where the output of one prompt becomes the input for the next.
What is "ReAct" (Reason + Act)?
A framework where an AI generates both "reasoning traces" and "action steps," allowing it to use external tools (like a calculator or search engine) while thinking.
What is "Grounding" in AI?
Linking an AI's response to a specific, verifiable source of truth (like a company database) to prevent hallucinations.
What is a "Negative Prompt"?
Specific instructions telling the AI what not to do (e.g., "Do not use jargon" or "Exclude any mention of competitors").
Define "Constitutional AI".
A method of training or prompting AI to follow a specific set of "laws" or ethical principles (a "constitution") to guide its behavior.
What is "Prompt Leaking"?
A security vulnerability where a user cleverly prompts an AI to reveal its underlying system instructions or private developer notes.
What is "Delimiters" in prompting?
Special characters (like ###, """, or ---) used to clearly separate different parts of a prompt, such as instructions from reference text.
Define "Stochastic Parrots".
A critical term used to describe LLMs as systems that repeat patterns of language they’ve seen without a true "understanding" of the meaning.
What is "Emergent Ability"?
A skill or capability that an AI model develops only after reaching a certain size or complexity, which wasn't explicitly programmed into it.
Explain "Zero-Shot CoT".
Triggering an AI's reasoning simply by adding the phrase "Let's think step by step" to a prompt without providing any examples.
What is "Parameter Count"?
A measure of an AI's "brain size" (e.g., 70B, 400B); generally, more parameters allow for more nuanced understanding and knowledge.
Define "Adversarial Prompting".
The practice of testing an AI's limits by trying to trick it into breaking its rules or outputting harmful content (often used for safety testing).
What is "RLHF" (Reinforcement Learning from Human Feedback)?
A training method where humans rank AI responses to help the model learn what humans prefer and find helpful.
Define "Context Stuffing".
The (often poor) practice of cramming too much irrelevant information into a prompt, which can lead to the AI losing track of the main instruction.
What is "Fine-Tuning"?
The process of taking a pre-trained AI and training it further on a smaller, specialized dataset (like medical or legal records).
Explain "One-Shot Prompting".
A prompting style where exactly one example of the desired task is provided to the AI before asking it to perform the task itself.
What is "Data Contamination"?
When the data used to test an AI was actually included in its training set, leading to falsely high performance scores.
What is an "LLM Benchmark"?
Standardized tests (like MMLU or HumanEval) used to compare the performance of different AI models across logic, math, and coding.
Define "Recursive Prompting".
A process where an AI is asked to review its own previous output and improve it in an iterative loop.