Describe the bug
Since smolagents changed its internal message format from plain `dict`s to `ChatMessage` objects, `_llm_input_messages` in `_wrappers.py` silently drops all messages, and LLM input spans appear empty in the trace.
Root Cause
In `_wrappers.py`, `_llm_input_messages` checks:

```python
for i, message in enumerate(messages):
    if not isinstance(message, dict):
        continue  # ← ChatMessage objects are skipped here
```
smolagents' `generate()` now passes a `list[ChatMessage]` instead of a `list[dict]`. Because `ChatMessage` is not a `dict`, every message is skipped.
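A possible fix, sketched below (not the project's actual patch), is to normalize each message to a plain dict before the `isinstance` check. This assumes `ChatMessage` is a dataclass, as in current smolagents releases; the helper name `_to_message_dict` is hypothetical:

```python
import dataclasses
import enum
from typing import Any, Optional


def _to_message_dict(message: Any) -> Optional[dict]:
    """Best-effort conversion of one LLM message to a plain dict.

    Accepts both the legacy plain-dict format and dataclass-style
    ChatMessage objects (assumption: ChatMessage is a dataclass).
    """
    if isinstance(message, dict):
        return message
    if dataclasses.is_dataclass(message) and not isinstance(message, type):
        as_dict = dataclasses.asdict(message)
        # MessageRole is an enum; flatten it to its string value so the
        # span attribute reads "user" rather than "<MessageRole.USER: 'user'>".
        role = as_dict.get("role")
        if isinstance(role, enum.Enum):
            as_dict["role"] = role.value
        return as_dict
    return None
```

`_llm_input_messages` would then call `message = _to_message_dict(message)` and only `continue` when the result is `None`, instead of skipping every non-dict message outright.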
Environment
- `openinference-instrumentation-smolagents`: 0.1.25
- `smolagents`: 1.24.0
Expected Behavior
`llm.input_messages.*` span attributes are populated with the full conversation history.
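For illustration, under the OpenInference semantic conventions a two-message conversation should flatten into attributes along these lines (values here are made up):

```text
llm.input_messages.0.message.role    = "system"
llm.input_messages.0.message.content = "You are a helpful agent."
llm.input_messages.1.message.role    = "user"
llm.input_messages.1.message.content = "What is 2 + 2?"
```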
Actual Behavior
`llm.input_messages.*` attributes are empty. `input.value` falls back to `str(messages)`, producing Python object repr strings like:

```
ChatMessage(role=<MessageRole.USER: 'user'>, content=[{'type': 'text', 'text': '...'}])
```

instead of JSON.
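A minimal reproduction sketch, using an in-memory exporter to inspect the span attributes. The model class and task are placeholders (running it requires model credentials):

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter
from smolagents import CodeAgent, InferenceClientModel

from openinference.instrumentation.smolagents import SmolagentsInstrumentor

# Route spans to an in-memory exporter so we can assert on their attributes.
exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
SmolagentsInstrumentor().instrument(tracer_provider=provider)

agent = CodeAgent(tools=[], model=InferenceClientModel())
agent.run("What is 2 + 2?")

for span in exporter.get_finished_spans():
    message_keys = [k for k in span.attributes if k.startswith("llm.input_messages")]
    print(span.name, message_keys)  # LLM spans print [] instead of the message attributes
```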