Is your feature request related to a problem? Please describe.
`finish_reason` is not captured in the OpenInference semantic conventions or any instrumentation library (e.g. `openinference-instrumentation-openai`, `openinference-instrumentation-langchain`). This makes it impossible to observe why a model stopped generating — e.g. distinguishing `stop`, `length`, `tool_calls`, or `content_filter` — which is critical for debugging truncation issues, tool call flows, and safety filtering.
Describe the solution you'd like
- Add `finish_reason` to the semantic conventions spec as a new attribute on output messages: `llm.output_messages.{i}.message.finish_reason` (String), e.g. `"stop"`, `"length"`, `"tool_calls"`.
- Collect it in `openinference-instrumentation-openai` from `choice.finish_reason` when processing `ChatCompletion` and `Completion` responses.
- Collect it in `openinference-instrumentation-langchain` from the `generation_info` field in LangChain run outputs.
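For illustration, here is a minimal sketch of what the extraction could look like. The attribute name follows the proposed convention above; the helper name and the dict-shaped response (mimicking OpenAI's `ChatCompletion` payload) are hypothetical, not existing library code.

```python
from typing import Any, Dict, Iterator, Tuple


def get_finish_reason_attributes(
    response: Dict[str, Any],
) -> Iterator[Tuple[str, str]]:
    """Yield (attribute, value) pairs for each choice's finish_reason."""
    for choice in response.get("choices", []):
        finish_reason = choice.get("finish_reason")
        if finish_reason is not None:
            index = choice.get("index", 0)
            # Proposed OpenInference attribute for output message {i}
            yield (
                f"llm.output_messages.{index}.message.finish_reason",
                finish_reason,
            )


# Example with a minimal ChatCompletion-like payload:
response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hi"},
            "finish_reason": "stop",
        },
    ],
}
print(dict(get_finish_reason_attributes(response)))
# {'llm.output_messages.0.message.finish_reason': 'stop'}
```

Indexing by `choice.index` keeps the attribute aligned with the existing per-message attributes (`message.role`, `message.content`) on the same span.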
Describe alternatives you've considered
Monkey-patching `_ResponseAttributesExtractor._get_attributes_from_chat_completion` to inject `finish_reason` manually — works but is fragile and requires users to maintain the patch across library upgrades.
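For reference, the workaround roughly looks like the sketch below. The exact signature of the private method is an assumption (it varies across versions, which is precisely why the patch is fragile); `patch_extractor` is a hypothetical helper, and the patch simply appends the proposed attributes to whatever the original method yields.

```python
import functools


def patch_extractor(extractor_cls: type) -> None:
    """Wrap the (private) attribute extractor to also emit finish_reason."""
    original = extractor_cls._get_attributes_from_chat_completion

    @functools.wraps(original)
    def patched(self, completion, *args, **kwargs):
        # Pass through everything the library already emits...
        yield from original(self, completion, *args, **kwargs)
        # ...then append the proposed finish_reason attributes.
        for choice in getattr(completion, "choices", None) or []:
            if getattr(choice, "finish_reason", None):
                yield (
                    f"llm.output_messages.{choice.index}.message.finish_reason",
                    choice.finish_reason,
                )

    extractor_cls._get_attributes_from_chat_completion = patched
```

Because it reaches into a private method, any rename or signature change in a new release silently breaks the patch — hence the request to support this upstream.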
Additional context
OpenAI's `choice.finish_reason` is a standard field in all Chat Completion responses. The OTel GenAI semantic conventions already track this via `gen_ai.response.finish_reasons`. Aligning OpenInference with this would improve interoperability and observability coverage.