Description
The `OpenInferenceSpanProcessor` for PydanticAI does not set `output.value` when the agent returns plain text (`output_type=str`, the default). It only sets `output.value` when a `final_result` tool call is present (structured output mode).
This causes downstream consumers that rely on `output.value` (e.g. Langfuse OTLP ingestion) to show `null` for the output field on LLM generation spans.
Steps to reproduce
- Create a PydanticAI agent with the default `output_type=str`
- Instrument with `OpenInferenceSpanProcessor`
- Run the agent — LLM responds with plain assistant text
- Inspect the exported span attributes
Expected: `output.value` is set to the assistant's text response
Actual: `output.value` is missing; only `llm.output_messages.0.message.content` is set
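The behavior above can be illustrated with a minimal, stdlib-only sketch. The extraction function below is a simplified stand-in for the real processor (not its actual code); the attribute names and message shape are taken from this issue:

```python
# Simplified model of the current output-side extraction, illustrating the bug.
def extract_output_attrs(output_messages):
    attrs = {}
    for i, msg in enumerate(output_messages):
        for part in msg.get("parts", []):
            if part.get("type") == "text":
                # Text parts: message content is recorded, but output.value is not.
                attrs[f"llm.output_messages.{i}.message.content"] = part["content"]
            elif part.get("type") == "tool_call" and part.get("name") == "final_result":
                # Only structured output (a final_result tool call) sets output.value.
                attrs["output.value"] = part["arguments"]
    return attrs

# Plain-text agent (output_type=str, the default):
plain = [{"role": "assistant", "parts": [{"type": "text", "content": "Hello!"}]}]
attrs = extract_output_attrs(plain)
assert "output.value" not in attrs  # the bug
assert attrs["llm.output_messages.0.message.content"] == "Hello!"
```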
Root cause
In `semantic_conventions.py`, `_extract_from_gen_ai_messages()` processes `gen_ai.output.messages`:
- For text parts (line ~824): yields `llm.output_messages.*.message.content` but does NOT set `output_value`
- For tool call parts with name `final_result` (line ~842): sets `output_value` ✅
So `output.value` is only emitted for structured output agents. Plain text agents (the default) get no `output.value`.
Note that `input.value` IS correctly set from the last user text part in `gen_ai.input.messages` — the output side is just missing the equivalent logic.
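A possible fix, sketched against the same simplified model rather than the real source: mirror the input side and fall back to the last assistant text part when no `final_result` tool call has set `output.value` (function and message-shape names here are assumptions for illustration):

```python
def extract_output_attrs_fixed(output_messages):
    """Sketch of the fix: also derive output.value from assistant text parts."""
    attrs = {}
    last_text = None
    for i, msg in enumerate(output_messages):
        for part in msg.get("parts", []):
            if part.get("type") == "text":
                attrs[f"llm.output_messages.{i}.message.content"] = part["content"]
                last_text = part["content"]  # remember for the output.value fallback
            elif part.get("type") == "tool_call" and part.get("name") == "final_result":
                attrs["output.value"] = part["arguments"]
    # Mirror the input side: if structured output didn't set output.value,
    # use the last assistant text part.
    if "output.value" not in attrs and last_text is not None:
        attrs["output.value"] = last_text
    return attrs

plain = [{"role": "assistant", "parts": [{"type": "text", "content": "Hello!"}]}]
assert extract_output_attrs_fixed(plain)["output.value"] == "Hello!"
```

Structured output agents are unaffected: `output.value` set by the `final_result` branch takes precedence over the text fallback.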
Environment
openinference-instrumentation-pydantic-ai==0.1.12
pydantic-ai==1.70.0