[Feature Request] Auto-discover available models from custom providers #104

@MoooJianG

Description

Problem / Use Case Description

Users currently have to type the model ID by hand when configuring a custom AI provider, which means looking it up in the provider's documentation.

Feature Description

When configuring a custom AI provider, automatically fetch the available models from the provider's API and display them, so the user can select one instead of typing the ID manually.

Benefits

  • Reduces configuration errors
  • Improves user experience
  • Works with any OpenAI-compatible API

Proposed Solution

  1. Add a "Fetch Models" button next to the API endpoint field
  2. Call the provider's /v1/models endpoint to get the available models (see the sketch after this list)
  3. Display models in a searchable dropdown
  4. Allow users to select from the list instead of typing manually
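
To make this concrete, below is a rough sketch of what the "Fetch Models" handler could look like, written in TypeScript. It is only an illustration, not existing code in this project: fetchAvailableModels, baseUrl, and apiKey are placeholder names, and the sketch assumes the provider returns the OpenAI-style model list.

    // Rough sketch: query an OpenAI-compatible provider for its model list.
    // Assumes baseUrl is the provider root (e.g. http://localhost:11434 for Ollama).
    interface ModelEntry {
      id: string;        // model identifier shown in the dropdown
      object?: string;   // usually "model"
      owned_by?: string;
    }

    interface ModelListResponse {
      object: string;    // usually "list"
      data: ModelEntry[];
    }

    async function fetchAvailableModels(baseUrl: string, apiKey?: string): Promise<string[]> {
      const headers: Record<string, string> = {};
      if (apiKey) {
        headers["Authorization"] = `Bearer ${apiKey}`;
      }

      const url = `${baseUrl.replace(/\/+$/, "")}/v1/models`;
      const response = await fetch(url, { headers });
      if (!response.ok) {
        throw new Error(`Fetching models failed: ${response.status} ${response.statusText}`);
      }

      const body = (await response.json()) as ModelListResponse;
      return body.data.map((model) => model.id);
    }

The UI would then feed the returned IDs into the searchable dropdown (step 3) and fall back to the current free-text field if the request fails or the provider does not implement /v1/models.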

Feature Type

New functionality

Additional Context

Most OpenAI-compatible providers (Ollama, OpenRouter, etc.) support the /v1/models endpoint.
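
For reference, the model list these providers return typically has roughly the following shape (field names other than id vary by provider and are illustrative here); only the id values are needed for the dropdown:

    {
      "object": "list",
      "data": [
        { "id": "llama3", "object": "model", "owned_by": "library" },
        { "id": "mistral", "object": "model", "owned_by": "library" }
      ]
    }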

Contribution

  • I'd be willing to implement this feature
  • I'd be willing to test this feature once implemented
  • I can provide additional details or clarification if needed

Metadata

Assignees

No one assigned

Labels

enhancement (New feature or request)
