
[feature request] Add finish_reason to LLM output message attributes #2901

@uu9

Description


Is your feature request related to a problem? Please describe.

finish_reason is not captured in the OpenInference semantic conventions or any instrumentation library (e.g. openinference-instrumentation-openai, openinference-instrumentation-langchain). This makes it impossible to observe why a model stopped generating — e.g. distinguishing stop, length, tool_calls, or content_filter — which is critical for debugging truncation issues, tool call flows, and safety filtering.

Describe the solution you'd like

  1. Add finish_reason to the semantic conventions spec as a new attribute on output messages:

    llm.output_messages.{i}.message.finish_reason  (String)  e.g. "stop", "length", "tool_calls"
    
  2. Collect it in openinference-instrumentation-openai from choice.finish_reason when processing ChatCompletion and Completion responses.

  3. Collect it in openinference-instrumentation-langchain from the generation_info field in LangChain run outputs.
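To make the proposal concrete, here is a minimal sketch of how an instrumentor might flatten finish_reason into the proposed per-message span attributes. The helper name `output_message_attributes` is hypothetical, and the response shape simply mirrors OpenAI's ChatCompletion JSON; this is not the actual extractor code in openinference-instrumentation-openai.

```python
def output_message_attributes(response: dict) -> dict:
    """Flatten each choice's message and finish_reason into span attributes."""
    attrs = {}
    for i, choice in enumerate(response.get("choices", [])):
        prefix = f"llm.output_messages.{i}.message"
        message = choice.get("message", {})
        if "role" in message:
            attrs[f"{prefix}.role"] = message["role"]
        if message.get("content") is not None:
            attrs[f"{prefix}.content"] = message["content"]
        # The new attribute proposed above, taken from choice.finish_reason:
        if choice.get("finish_reason") is not None:
            attrs[f"{prefix}.finish_reason"] = choice["finish_reason"]
    return attrs

response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hi!"}, "finish_reason": "stop"},
        {"message": {"role": "assistant", "content": None}, "finish_reason": "tool_calls"},
    ]
}
attrs = output_message_attributes(response)
```

For LangChain, the same flattening would apply, just sourcing the value from generation_info instead of choice.finish_reason.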

Describe alternatives you've considered

Monkey-patching _ResponseAttributesExtractor._get_attributes_from_chat_completion to inject finish_reason manually. This works, but it is fragile and requires users to maintain the patch across library upgrades.

Additional context

OpenAI's choice.finish_reason is a standard field in all Chat Completion responses. The OTel GenAI semantic conventions already track this via gen_ai.response.finish_reasons. Aligning OpenInference with this would improve interoperability and observability coverage.
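One possible bridge between the two conventions (an illustration, not code from either spec): OTel's gen_ai.response.finish_reasons is an array over all choices, so the proposed per-message OpenInference attributes could be collapsed into it by choice index.

```python
def to_otel_finish_reasons(openinference_attrs: dict) -> list:
    """Collect per-message finish_reason values in choice index order."""
    indexed = []
    for key, value in openinference_attrs.items():
        # Keys look like: llm.output_messages.{i}.message.finish_reason
        parts = key.split(".")
        if parts[0] == "llm" and parts[-1] == "finish_reason":
            indexed.append((int(parts[2]), value))
    return [value for _, value in sorted(indexed)]

attrs = {
    "llm.output_messages.0.message.finish_reason": "stop",
    "llm.output_messages.1.message.finish_reason": "tool_calls",
}
print(to_otel_finish_reasons(attrs))  # ['stop', 'tool_calls']
```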

Metadata

Assignees: No one assigned

Labels: enhancement (New feature or request), instrumentation (Adding instrumentations to open source packages)
