How to Fix 'prompt template error' in LlamaIndex (Python)
A "prompt template error" in LlamaIndex usually means the framework tried to render a prompt, but the template variables and the values you passed did not line up. You typically hit it when building a query engine, custom prompt, chat prompt, or response synthesizer.
In practice, this shows up as an exception from PromptTemplate, ChatPromptTemplate, or a downstream component like RetrieverQueryEngine when LlamaIndex cannot fill placeholders such as {context_str} or {query_str}.
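LlamaIndex's `PromptTemplate` renders with Python's `str.format` under the hood, so you can list a template's declared placeholders with nothing but the standard library. This is a plain-Python sketch; `template_vars` is a hypothetical helper, not a LlamaIndex API:

```python
from string import Formatter

def template_vars(template: str) -> set[str]:
    """Return the placeholder names declared in a format-style template."""
    return {field for _, field, _, _ in Formatter().parse(template) if field}

template = "Context: {context_str}\nQuestion: {query_str}"
print(sorted(template_vars(template)))  # ['context_str', 'query_str']
```

If the names you get back do not match the variables the component injects, you have found the mismatch before LlamaIndex raises.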
The Most Common Cause
The #1 cause is a mismatch between the placeholders in your prompt template and the variables LlamaIndex expects to inject.
A common failure looks like this:
Broken code:

```python
from llama_index.core import PromptTemplate, VectorStoreIndex

template = """Use the context below to answer the question.
Context: {context}
Question: {question}"""

qa_prompt = PromptTemplate(template)

index = VectorStoreIndex.from_documents(docs)
query_engine = index.as_query_engine(text_qa_template=qa_prompt)
response = query_engine.query("What is our refund policy?")
```

Fixed code:

```python
from llama_index.core import PromptTemplate, VectorStoreIndex

template = """Use the context below to answer the question.
Context: {context_str}
Question: {query_str}"""

qa_prompt = PromptTemplate(template)

index = VectorStoreIndex.from_documents(docs)
query_engine = index.as_query_engine(text_qa_template=qa_prompt)
response = query_engine.query("What is our refund policy?")
```
Why this fails: `text_qa_template` expects LlamaIndex’s default variable names, usually `context_str` and `query_str`. If you use `{context}` and `{question}`, you will often get an error like:
```text
ValueError: Missing required variables for prompt template: {'context_str', 'query_str'}
```

or:

```text
KeyError: 'context_str'
```
If you want custom variable names, you must configure the component so it passes those exact names. For standard query engines, stick to LlamaIndex's expected placeholders.
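Recent LlamaIndex versions do support custom names via a `template_var_mappings` argument on `PromptTemplate`; the behavior amounts to renaming the injected variables before formatting, which you can sketch in plain Python (the helper below is illustrative, not a LlamaIndex API):

```python
def format_with_mappings(template: str, var_mappings: dict[str, str], **injected) -> str:
    """Rename injected variables (e.g. 'context_str') to the names the
    template actually uses (e.g. 'my_context'), then format."""
    renamed = {var_mappings.get(name, name): value for name, value in injected.items()}
    return template.format(**renamed)

template = "Context: {my_context}\nQuestion: {my_query}"
mappings = {"context_str": "my_context", "query_str": "my_query"}
print(format_with_mappings(template, mappings,
                           context_str="...", query_str="What is our refund policy?"))
```

Check your installed version's docs before relying on `template_var_mappings`, since the prompt APIs have moved between releases.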
Other Possible Causes
1. Passing a plain string where a prompt object is expected
Some APIs want a PromptTemplate, not raw text.
```python
# Broken: a raw string where a prompt object is expected
query_engine = index.as_query_engine(text_qa_template="""
Context: {context_str}
Question: {query_str}
""")
```

```python
# Fixed: wrap the template string in a PromptTemplate
from llama_index.core import PromptTemplate

prompt = PromptTemplate("""
Context: {context_str}
Question: {query_str}
""")
query_engine = index.as_query_engine(text_qa_template=prompt)
```
2. Mixing chat templates with completion templates
If your model wrapper expects chat messages and you pass a completion-style prompt, rendering can fail.
```python
# Broken: required variables split awkwardly across messages, passed
# to a setup that may expect a completion-style prompt
from llama_index.core.prompts import ChatPromptTemplate

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using {context_str}"),
    ("user", "{query_str}"),
])
query_engine = index.as_query_engine(text_qa_template=chat_prompt)
```

```python
# Fixed: keep both required variables in the rendered user message
from llama_index.core.prompts import ChatPromptTemplate

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using the provided context."),
    ("user", "Context:\n{context_str}\n\nQuestion:\n{query_str}"),
])
# Use it only where chat prompts are supported by that component/model setup.
```
If you are unsure, check whether the component expects a completion prompt or a chat prompt. This matters more with OpenAI-style chat models and newer LlamaIndex abstractions.
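The structural difference between the two styles can be shown in plain Python: a completion template renders to one string, while a chat template renders each message separately into role/content pairs. This is a simplified sketch; LlamaIndex's real classes add validation and model-specific handling:

```python
def render_completion(template: str, **variables) -> str:
    # One string in, one string out.
    return template.format(**variables)

def render_chat(messages: list[tuple[str, str]], **variables) -> list[dict[str, str]]:
    # Each message body is its own template; every required variable must
    # appear somewhere across the rendered messages.
    return [
        {"role": role, "content": content.format(**variables)}
        for role, content in messages
    ]

rendered = render_chat(
    [("system", "Answer using the provided context."),
     ("user", "Context:\n{context_str}\n\nQuestion:\n{query_str}")],
    context_str="Refunds within 30 days.",
    query_str="What is our refund policy?",
)
print(rendered[1]["content"])
```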
3. Forgetting required partial variables in custom prompts
Some prompts need values injected ahead of time through `partial_format`.
```python
# Broken: tenant_id is never provided
from llama_index.core import PromptTemplate

template = "You are answering for tenant {tenant_id}. Context: {context_str}"
prompt = PromptTemplate(template)
```

```python
# Fixed: inject tenant_id ahead of time with partial_format
prompt = PromptTemplate(template).partial_format(tenant_id="acme-bank")
```
This comes up in multi-tenant apps where you add metadata into prompts and assume the runtime will fill it automatically. It won’t.
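`partial_format` binds some variables now and leaves the rest for render time. A minimal plain-Python sketch of that behavior (the `PartialTemplate` class is illustrative, not LlamaIndex's implementation):

```python
class PartialTemplate:
    """Bind some template variables up front, fill the rest at render time."""

    def __init__(self, template: str, **bound: str):
        self.template = template
        self.bound = bound

    def partial_format(self, **more: str) -> "PartialTemplate":
        # Returns a new template with the extra variables pre-bound.
        return PartialTemplate(self.template, **{**self.bound, **more})

    def format(self, **rest: str) -> str:
        # Pre-bound values and render-time values are merged here.
        return self.template.format(**{**self.bound, **rest})

tmpl = PartialTemplate("You are answering for tenant {tenant_id}. Context: {context_str}")
tmpl = tmpl.partial_format(tenant_id="acme-bank")
print(tmpl.format(context_str="..."))
# You are answering for tenant acme-bank. Context: ...
```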
4. Using an outdated LlamaIndex API after upgrading
LlamaIndex has had breaking changes across versions. Code that worked with older imports or parameter names can fail with prompt-related errors after upgrade.
```python
# Broken on newer versions
from llama_index import GPTVectorStoreIndex

index = GPTVectorStoreIndex.from_documents(docs)
```

```python
# Fixed on current versions
from llama_index.core import VectorStoreIndex

index = VectorStoreIndex.from_documents(docs)
```
If you see odd prompt failures after an upgrade, check whether your code is mixing old package paths with new ones. That mismatch often surfaces during query engine construction, not just imports.
How to Debug It
- Print the actual template before passing it in. Confirm every placeholder matches what LlamaIndex expects, and look for typos like `{query}` instead of `{query_str}`.
- Inspect the exception text carefully:
  - `ValueError: Missing required variables...`
  - `KeyError: 'context_str'`
  - A `TypeError` around prompt construction usually means the wrong object type was passed.
- Reduce to the smallest failing query. Remove custom prompts first and try the default query engine: `query_engine = index.as_query_engine()`. If that works, your issue is in your custom prompt wiring.
- Check component expectations. `as_query_engine(text_qa_template=...)` expects a QA-style template, and some retrievers/synthesizers use different variable names. Verify whether your component wants `context_str`, `query_str`, or chat messages via `ChatPromptTemplate`.
Prevention
- Use LlamaIndex's default placeholder names unless you have a reason not to. For most QA flows, start with `{context_str}` and `{query_str}`.
- Keep prompt templates close to the component that uses them. Don't reuse one generic template across retrieval QA, summarization, and chat unless you've verified the variable contract.
- Pin your LlamaIndex version in production. Prompt APIs change enough that unpinned upgrades can break working code overnight.
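In practice, pinning means an exact version in your requirements file. The versions below are placeholders; pin whatever you have actually tested:

```text
llama-index-core==X.Y.Z
llama-index-llms-openai==X.Y.Z
```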
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit