# How to Fix 'prompt template error during development' in LlamaIndex (TypeScript)
If you’re seeing prompt template error during development in LlamaIndex TypeScript, it usually means the framework tried to render a prompt and one of the required variables was missing, misnamed, or passed in the wrong shape. In practice, this shows up when you call a query engine, chat engine, or custom PromptTemplate during local development and the prompt interpolation fails before the LLM request is sent.
The exact stack trace varies, but the common failure looks like:

```
Error: Missing value for input variable: context
```

or:

```
Error: Prompt template error: some variables are not provided
```
## The Most Common Cause
The #1 cause is a mismatch between the variables your PromptTemplate expects and the object you pass at runtime.
In LlamaIndex TS, this usually happens with PromptTemplate, ChatPromptTemplate, or anything that internally calls .format() on a template. If your template expects {context} and {question}, but you pass { query }, it fails immediately.
### Broken vs fixed

| Broken pattern | Fixed pattern |
|---|---|
| Template expects `context` and `question`, but code passes `query` | Template variables and runtime keys match exactly |
| Uses an object with nested fields that are never flattened | Passes a flat object with the exact names the template needs |
```typescript
import { PromptTemplate } from "llamaindex";

const prompt = new PromptTemplate({
  template: "Context: {context}\nQuestion: {question}",
});

// ❌ Broken: "query" does not match "question"
const formatted = prompt.format({
  context: "Policy docs...",
  query: "What is covered?",
});
```

```typescript
import { PromptTemplate } from "llamaindex";

const prompt = new PromptTemplate({
  template: "Context: {context}\nQuestion: {question}",
});

// ✅ Fixed: keys match template placeholders exactly
const formatted = prompt.format({
  context: "Policy docs...",
  question: "What is covered?",
});
```
This same issue appears inside higher-level APIs too. For example, if you override a default QA prompt in VectorStoreIndex or RetrieverQueryEngine, make sure your custom template matches what that component injects.
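One way to catch this before runtime is to extract the placeholders from your custom template and compare them against the names the component's default prompt uses. The sketch below is plain TypeScript, not a LlamaIndex API; the single-brace `{name}` syntax matches the examples in this article, and `context_str`/`query_str` are the names many default QA prompts use.

```typescript
// Extract unique `{placeholder}` names from a template string.
// Assumes simple single-brace placeholders, as in this article's examples.
function extractPlaceholders(template: string): string[] {
  const matches = template.matchAll(/\{(\w+)\}/g);
  return [...new Set([...matches].map((m) => m[1]))];
}

// Return the expected names that a custom template fails to include.
function checkAgainstDefaults(template: string, expected: string[]): string[] {
  const found = extractPlaceholders(template);
  return expected.filter((name) => !found.includes(name));
}

// A template written with {context}/{question} is missing both names the
// engine's default QA prompt injects.
const missing = checkAgainstDefaults("{context}\n{question}", [
  "context_str",
  "query_str",
]);
```

Running this against your overridden template during startup, or in a unit test, surfaces the mismatch long before a query reaches the engine.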
## Other Possible Causes
### 1) Passing undefined values into the template
If one field is present but evaluates to undefined, formatting still fails.
```typescript
const prompt = new PromptTemplate({
  template: "Answer using {context} and {userQuestion}",
});

const userQuestion = undefined;

// ❌ Broken: the key is present, but its value is undefined
prompt.format({
  context: "Claims policy text",
  userQuestion,
});
```
Fix it by validating inputs before formatting.
```typescript
if (!userQuestion) throw new Error("userQuestion is required");

prompt.format({
  context: "Claims policy text",
  userQuestion,
});
```
### 2) Using chat message roles incorrectly
With ChatPromptTemplate, each message needs the right content structure. A common mistake is mixing raw strings with message objects in a way that doesn’t match what the class expects.
```typescript
import { ChatPromptTemplate } from "llamaindex";

// ❌ Broken: bare strings instead of role-tagged messages
const chatPrompt = new ChatPromptTemplate({
  messageTemplates: [
    "You are a helpful assistant.",
    "{context}",
    "{question}",
  ],
});
```
Use properly shaped chat messages (objects with a `role` and `content`) instead of plain strings. Note that in LlamaIndex TypeScript, `ChatMessage` is a type describing this shape, not a class you instantiate.

```typescript
import { ChatMessage, ChatPromptTemplate } from "llamaindex";

// ✅ Fixed: each entry matches the ChatMessage shape
const chatPrompt = new ChatPromptTemplate({
  messageTemplates: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Context: {context}" },
    { role: "user", content: "Question: {question}" },
  ] as ChatMessage[],
});
```
### 3) Mismatched partial variables
If you pre-fill part of a prompt using partials, then later pass the same key again or forget one of the remaining keys, you can trigger formatting errors.
```typescript
const prompt = new PromptTemplate({
  template: "{system}\nContext: {context}\nQuestion: {question}",
});

// ❌ Broken if system isn't supplied elsewhere
prompt.format({
  context: "Internal claims notes",
  question: "Is this covered?",
});
```
Fix by setting partials explicitly or providing all required variables.
```typescript
const prompt = new PromptTemplate({
  template: "{system}\nContext: {context}\nQuestion: {question}",
});

// partialFormat returns a new template with `system` pre-filled,
// so capture the result instead of calling it and discarding it
const qaPrompt = prompt.partialFormat({
  system: "You are a compliance assistant.",
});

qaPrompt.format({
  context: "Internal claims notes",
  question: "Is this covered?",
});
```
### 4) Incorrect custom prompt injection into query engines
If you override prompts in components like RetrieverQueryEngine or ResponseSynthesizer, the engine may expect specific placeholders such as {context_str} rather than {context}.
```typescript
// ❌ Broken: the prompt key must be a quoted string, and the variable
// names may not match what the engine expects
queryEngine.updatePrompts({
  "response_synthesizer:text_qa_template":
    new PromptTemplate({ template: "{context}\n{question}" }),
});
```
Use the exact placeholder names expected by that component’s default prompt.
```typescript
// ✅ Example pattern: match the engine's expected variable names
new PromptTemplate({
  template:
    "Context information is below.\n---------------------\n{context_str}\n---------------------\nGiven the context information and not prior knowledge, answer the query.\nQuery: {query_str}\nAnswer:",
});
```
## How to Debug It
- **Print the final template before execution.** Confirm every placeholder has a matching runtime value, and watch for near-miss names like `{query}` vs `{question}` vs `{query_str}`.
- **Inspect the exact stack trace.** Search for classes like `PromptTemplate`, `ChatPromptTemplate`, or `ResponseSynthesizer`. If the error mentions `Missing value for input variable`, that's almost always a key mismatch.
- **Log the payload passed into `.format()`.** Dump the object right before rendering, and check for `undefined` values, nested objects, or typos in property names.
- **Compare against LlamaIndex defaults.** If you customized a retriever or synthesizer prompt, compare your placeholders with the default prompt for that class. Many internal prompts use names like `context_str`, `query_str`, or `chat_history`.
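The checklist above can be folded into one small debug helper that diffs a template's placeholders against the payload you are about to pass. A plain-TypeScript sketch, under the same single-brace `{name}` assumption as the examples in this article:

```typescript
// Report placeholders with no matching payload key ("missing") and payload
// keys with no matching placeholder ("extra"); together they point at the typo.
function diffPromptInputs(
  template: string,
  payload: Record<string, unknown>
): { missing: string[]; extra: string[] } {
  const placeholders = [...template.matchAll(/\{(\w+)\}/g)].map((m) => m[1]);
  const keys = Object.keys(payload);
  return {
    missing: placeholders.filter((p) => !keys.includes(p)),
    extra: keys.filter((k) => !placeholders.includes(k)),
  };
}

// The classic {question}-vs-query mismatch from earlier in this article:
const report = diffPromptInputs("Context: {context}\nQuestion: {question}", {
  context: "Policy docs...",
  query: "What is covered?",
});
// report.missing -> ["question"], report.extra -> ["query"]
```

Logging `report` right before calling `.format()` tells you both halves of the mismatch in one line, instead of only the missing variable the framework reports.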
## Prevention
- Keep prompt variable names consistent across your codebase.
- Wrap all prompt inputs in validation before calling `.format()`.
- When overriding LlamaIndex defaults, copy the original placeholder names first and only change wording after that.
- Add one unit test per custom prompt that renders it with sample inputs.
A good test catches this early:
```typescript
import { PromptTemplate } from "llamaindex";

test("custom QA prompt formats correctly", () => {
  const prompt = new PromptTemplate({
    template: "{context_str}\nQuery: {query_str}",
  });

  expect(
    prompt.format({
      context_str: "policy text",
      query_str: "What is covered?",
    })
  ).toContain("policy text");
});
```
If you’re debugging this in production code, start with variable names first. In LlamaIndex TypeScript, this error is rarely about model behavior — it’s almost always a bad prompt contract between your code and the template.
By Cyprian Aarons, AI Consultant at Topiax.