Day 23 - How LLMs Learn and Respond: Training vs Inference

Context:

I wanted to understand the flow of data in Large Language Models (LLMs) — how they learn during training and respond during inference.

What I Learned:

  • Training Data:
    • Used during the training phase of the LLM.
    • Includes grammar, facts, reasoning, code, articles, books, etc.
    • Encoded in the model’s weights as its built-in knowledge.
  • Inference Data (Input):
    • The user’s prompt or question during inference time.
    • The model’s weights stay frozen; nothing from the prompt is stored or learned.
  • Output Data:
    • The model’s response to the user’s input during inference time.
    • The response is not stored or fed back into the model’s weights.
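The split above can be sketched with a toy stand-in (not a real LLM): a bigram "model" whose only knowledge is what it saw during its training phase, and whose inference phase is a read-only lookup. All names here (`train`, `infer`, the corpus) are invented for illustration.

```python
from collections import defaultdict, Counter

def train(corpus):
    """Training phase: statistics from the corpus become the model's 'weights'."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def infer(model, prompt_word):
    """Inference phase: read-only lookup -- nothing from the prompt is stored."""
    followers = model.get(prompt_word)
    if not followers:
        return "<unknown>"  # knowledge outside the training data is simply absent
    return followers.most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran")
print(infer(model, "cat"))     # learned during training
print(infer(model, "rocket"))  # never seen in training -> "<unknown>"
```

Real LLMs replace the counting with gradient descent over billions of parameters, but the data flow is the same: training writes the weights, inference only reads them.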

Key Insight:

Training builds the brain. Inference is the conversation.
Better inputs (prompt engineering) → Better outputs.

Why It Matters for QA / AI Testing:

  • Helps testers understand why models can’t “learn” from prompts in real-time.
  • Emphasizes the importance of prompt engineering for accurate and useful responses.
  • Critical for designing tests that separate training limitations from inference behavior.
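One way a QA suite can encode the "no learning at inference time" expectation is to snapshot model state before and after a generation call. A minimal sketch, assuming a hypothetical `FrozenModel` wrapper (you would substitute your real model interface):

```python
import copy

class FrozenModel:
    """Hypothetical stand-in for the model under test."""
    def __init__(self, weights):
        self.weights = weights  # fixed at training time

    def generate(self, prompt):
        # Inference only reads the weights; it must not mutate them.
        return f"response to: {prompt}"

def test_inference_does_not_update_model():
    model = FrozenModel(weights={"w": [0.1, 0.2]})
    before = copy.deepcopy(model.weights)
    model.generate("What is contract testing?")
    assert model.weights == before, "model state changed during inference"

test_inference_does_not_update_model()
```

A failure here would point at inference behavior (state leaking between calls), which is exactly the kind of defect this test separates from training-data limitations.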

My Takeaway:

LLMs learn during training, not during conversation. Prompt quality is the key to unlocking better outputs.


