max output token and knowledge cutoff for Mistral-8B-Instruct-2410 #243

@zhoumengbo

Description

Python -VV

Hello,

I would like to ask two questions about the Mistral-8B-Instruct-2410:

1. **What is the maximum number of output tokens** the model can generate during inference?
   - For example, is there a documented limit such as 2048 or 8192 tokens?

2. **What is the knowledge cutoff date** for this version?
   - Was the model trained on data up to a specific month or year (e.g., 2023-03 or 2023-08)?
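For context on the first question, the practical output budget is often not a single fixed number: it is typically the context window minus the prompt length, possibly further capped by a server-side `max_new_tokens` setting. The sketch below illustrates that relationship with placeholder numbers; neither the 32768 window nor the 4096 cap is a documented limit for Mistral-8B-Instruct-2410.

```python
def max_output_tokens(context_window, prompt_tokens, server_cap=None):
    """Remaining token budget for generation.

    context_window: total tokens the model can attend to (placeholder value).
    prompt_tokens:  tokens already consumed by the prompt.
    server_cap:     optional serving-side limit on generated tokens.
    """
    budget = max(context_window - prompt_tokens, 0)
    if server_cap is not None:
        budget = min(budget, server_cap)
    return budget


# Placeholder numbers, purely illustrative:
print(max_output_tokens(32768, 1200))        # 31568 — limited by the window
print(max_output_tokens(32768, 1200, 4096))  # 4096  — limited by the server cap
```

So even when a model card states only a context length, the answer to "how many output tokens?" may depend on how the model is served.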

I’ve searched the documentation but couldn’t find a definitive answer to these two questions.

Thank you in advance for your help!

Pip Freeze

None

Reproduction Steps

None

Expected Behavior

None

Additional Context

No response

Suggested Solutions

No response

Metadata

Labels

    bug (Something isn't working)
