How to Fix 'prompt template error' in LlamaIndex (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

Tags: prompt-template-error, llamaindex, typescript

When you see `Error: prompt template error` in LlamaIndex TypeScript, it usually means the library tried to format a prompt and one of the required variables was missing or malformed. In practice, this shows up when building query engines, chat engines, or custom prompts with `PromptTemplate`, `ChatPromptTemplate`, or response synthesizers.

The message is vague, but the root cause is usually simple: your template placeholders do not match the variables you pass at runtime.

## The Most Common Cause

The #1 cause is a placeholder mismatch. LlamaIndex formats prompts with named variables, so if your template expects `{context}` and `{query_str}`, both must be present when the prompt is rendered.

Here’s the broken pattern:

Broken:

```ts
import { PromptTemplate } from "llamaindex";

const prompt = new PromptTemplate({
  template: "Answer the question: {question}\nContext: {context}",
});

const formatted = prompt.format({
  query_str: "What is PCI DSS?",
  context: "PCI DSS is a security standard.",
});
```

Fixed:

```ts
import { PromptTemplate } from "llamaindex";

const prompt = new PromptTemplate({
  template: "Answer the question: {query_str}\nContext: {context}",
});

const formatted = prompt.format({
  query_str: "What is PCI DSS?",
  context: "PCI DSS is a security standard.",
});
```

Or the reverse problem:

Broken:

```ts
const prompt = new PromptTemplate({
  template: "Answer the question: {query_str}\nContext: {context}",
});

await queryEngine.query({
  question: "What is PCI DSS?",
});
```

Fixed:

```ts
await queryEngine.query({
  queryStr: "What is PCI DSS?",
});
```

In LlamaIndex TypeScript, many built-in components expect specific variable names like `query_str`, `context_str`, or engine-specific fields. If you rename them casually, you get errors like:

- `Error: prompt template error`
- `Error: Missing value for input variable 'query_str'`
- `Error formatting prompt template`
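
A mismatch like this can be caught before LlamaIndex ever throws. The helper below is a sketch (not part of LlamaIndex): it extracts every `{placeholder}` from a template string and reports the ones that have no matching key in the values you plan to pass.

```typescript
// Sketch of a preflight check: extract every {placeholder} from a
// template string and report those with no matching key in `values`.
function missingVariables(
  template: string,
  values: Record<string, unknown>,
): string[] {
  const names = [...template.matchAll(/\{(\w+)\}/g)].map((m) => m[1]);
  return names.filter((name) => !(name in values));
}

const template = "Answer the question: {query_str}\nContext: {context}";

// Catches the mismatch from the broken example above:
console.log(missingVariables(template, { question: "?", context: "..." }));
// → ["query_str"]
```

Running this check in a unit test, or right before calling `.format()`, turns a vague runtime error into an explicit list of missing names.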

## Other Possible Causes

### 1) Passing the wrong shape to a chat prompt

`ChatPromptTemplate` expects messages with variables in the right places. If you pass plain strings where message objects are expected, formatting fails.

```ts
import { ChatPromptTemplate } from "llamaindex";

const chatPrompt = new ChatPromptTemplate({
  messageTemplates: [
    { role: "system", content: "You are a banking assistant." },
    { role: "user", content: "Summarize this: {text}" },
  ],
});

// Wrong: the template declares {text}, not {input}
chatPrompt.format({ input: "KYC policy text" });
```

Fix:

```ts
chatPrompt.format({ text: "KYC policy text" });
```

### 2) Missing optional variables that are actually required

Some prompts look optional in your code but are required by downstream components. A common example is custom response synthesis prompts.

```ts
import { PromptTemplate } from "llamaindex";

const prompt = new PromptTemplate({
  template: "{system_prompt}\n\nContext:\n{context}\n\nQuestion:\n{query_str}",
});

// Wrong if system_prompt isn't provided
prompt.format({
  context: "Internal policy text",
  query_str: "What does this mean?",
});
```

Fix:

```ts
prompt.format({
  system_prompt: "Answer only from the provided context.",
  context: "Internal policy text",
  query_str: "What does this mean?",
});
```
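
One way to guard against this is to merge caller-supplied values over a set of defaults, so variables like `system_prompt` are always present at format time. This is a sketch; `withDefaults` is a hypothetical helper, not a LlamaIndex API.

```typescript
// Hypothetical helper: merge caller-supplied values over defaults so
// "optional" variables are always present when the prompt is rendered.
function withDefaults<T extends Record<string, unknown>>(
  defaults: T,
  values: Partial<T>,
): T {
  return { ...defaults, ...values };
}

const vars = withDefaults(
  { system_prompt: "Answer only from the provided context." },
  {}, // caller passed nothing; the default fills the gap
);

console.log(vars.system_prompt);
// → "Answer only from the provided context."
```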

### 3) Using an incompatible LLM wrapper or model config

If your LLM client returns a non-standard completion shape, prompt rendering may succeed but the downstream call fails and surfaces as a template error.

```ts
import { OpenAI } from "llamaindex";

// Avoid passing invalid config values such as `temperature: undefined`;
// omit the field entirely instead.
const llm = new OpenAI({
  model: "gpt-4o-mini",
});
```

Watch for invalid values like:

- `undefined` where numbers are expected
- unsupported model names
- bad API keys causing fallback failures that look like formatting issues
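
One defensive pattern (a sketch, not a LlamaIndex feature) is to strip `undefined` entries out of a config object before handing it to the client, so values like `temperature: undefined` never reach the constructor:

```typescript
// Sketch: drop undefined entries from a config object before passing
// it to an LLM client constructor.
function pruneUndefined<T extends Record<string, unknown>>(
  config: T,
): Partial<T> {
  return Object.fromEntries(
    Object.entries(config).filter(([, value]) => value !== undefined),
  ) as Partial<T>;
}

const raw = { model: "gpt-4o-mini", temperature: undefined, topP: 0.9 };
console.log(pruneUndefined(raw));
// → { model: "gpt-4o-mini", topP: 0.9 }
```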

### 4) Mixing string templates with object-based APIs

This happens when you build a raw string but later treat it like a structured template.

```ts
// Wrong
const template = "Use this context:\n{context}";
template.format({ context: "..." }); // plain strings have no format() method
```

Fix:

```ts
import { PromptTemplate } from "llamaindex";

const template = new PromptTemplate({
  template: "Use this context:\n{context}",
});

template.format({ context: "..." });
```

## How to Debug It

1. Print the exact template before execution.
   - Log the final string or `PromptTemplate.template`.
   - Verify every `{placeholder}` has a matching runtime value.
2. Check the component contract.
   - Query engines often expect `query_str`, not `question`.
   - Response synthesizers may expect `context_str`, summaries, or other fixed names.
3. Inspect the stack trace.
   - Look for classes like `PromptTemplate`, `ChatPromptTemplate`, `ResponseSynthesizer`, or `BaseQueryEngine`.
   - The first internal frame usually points to the failing formatter.
4. Reduce to a minimal repro.
   - Remove retrievers, tools, memory, and custom callbacks.
   - Keep only one prompt, one input object, and one LLM call.

If the minimal version works, your bug is in variable plumbing, not LlamaIndex itself.

## Prevention

- Keep placeholder names aligned with LlamaIndex conventions like `query_str` and `context_str`.
- Wrap every custom prompt in tests that call `.format()` directly with sample inputs.
- Centralize prompts in one module so renaming variables does not break half your pipeline.

A good rule in production code: if you customize a prompt, treat its variable contract like an API. Once that contract drifts, LlamaIndex will fail at runtime with a generic prompt template error, and you’ll waste time debugging what is really just bad input shape.
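
That contract can also be enforced at compile time. The sketch below rolls a typed render function by hand (the names are illustrative, not LlamaIndex APIs): the compiler rejects any call that omits `query_str` or `context`, instead of failing at runtime.

```typescript
// Illustrative typed prompt contract: the type declares exactly which
// variables the template needs, and rendering is a plain substitution.
type QaPromptVars = { query_str: string; context: string };

const QA_TEMPLATE = "Answer the question: {query_str}\nContext: {context}";

function renderQaPrompt(vars: QaPromptVars): string {
  return QA_TEMPLATE.replace(
    /\{(\w+)\}/g,
    (_, name: string) => vars[name as keyof QaPromptVars],
  );
}

// Omitting query_str here would be a compile error, not a runtime one.
const rendered = renderQaPrompt({
  query_str: "What is PCI DSS?",
  context: "PCI DSS is a security standard.",
});
```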


By Cyprian Aarons, AI Consultant at Topiax.