Description
_get_attributes_from_response_usage in openinference-instrumentation-openai crashes with AttributeError: 'NoneType' object has no attribute 'reasoning_tokens' when the OpenAI API returns output_tokens_details: None in usage data. This happens with non-reasoning models (e.g. GPT-4o) where output_tokens_details is not populated.
The same issue exists for input_tokens_details.cached_tokens.
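A minimal sketch of the failure mode, using a `SimpleNamespace` as a hypothetical stand-in for the `responses.response_usage.ResponseUsage` object (field values are illustrative):

```python
from types import SimpleNamespace

# Stand-in for the usage object a non-reasoning model (e.g. GPT-4o)
# produces: output_tokens_details and input_tokens_details are None.
usage = SimpleNamespace(
    total_tokens=100,
    input_tokens=80,
    output_tokens=20,
    output_tokens_details=None,
    input_tokens_details=None,
)

try:
    # Mirrors the failing attribute access in
    # _get_attributes_from_response_usage.
    usage.output_tokens_details.reasoning_tokens
except AttributeError as exc:
    error = exc

print(error)  # 'NoneType' object has no attribute 'reasoning_tokens'
```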
Stack trace
Traceback (most recent call last):
  File "openinference/instrumentation/openai/_attributes/_responses_api.py", line 31, in wrapper
    yield from wrapped(*args, **kwargs)
  File "openinference/instrumentation/openai/_attributes/_responses_api.py", line 720, in _get_attributes_from_response_usage
    obj.output_tokens_details.reasoning_tokens,
AttributeError: 'NoneType' object has no attribute 'reasoning_tokens'
Affected versions
Confirmed in 0.1.41 and 0.1.44 (latest). The code is unchanged between versions.
Impact
Low. @stop_on_exception catches the error and logs it, so requests still succeed, but it generates noisy ERROR logs and drops the reasoning/cache token count attributes from the telemetry.
Suggested fix
Add null guards before accessing nested attributes:
@classmethod
@stop_on_exception
def _get_attributes_from_response_usage(
    cls,
    obj: responses.response_usage.ResponseUsage,
) -> Iterator[Tuple[str, AttributeValue]]:
    yield SpanAttributes.LLM_TOKEN_COUNT_TOTAL, obj.total_tokens
    yield SpanAttributes.LLM_TOKEN_COUNT_PROMPT, obj.input_tokens
    yield SpanAttributes.LLM_TOKEN_COUNT_COMPLETION, obj.output_tokens
    if obj.output_tokens_details:
        yield (
            SpanAttributes.LLM_TOKEN_COUNT_COMPLETION_DETAILS_REASONING,
            obj.output_tokens_details.reasoning_tokens,
        )
    if obj.input_tokens_details:
        yield (
            SpanAttributes.LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_READ,
            obj.input_tokens_details.cached_tokens,
        )