
[Feature] Support named configs #1588

Open
ferenc-hechler wants to merge 15 commits into Portkey-AI:main from ferenc-hechler:main

Conversation

@ferenc-hechler

Description:

Currently, configs have to be provided in plain text via the "x-portkey-config" header in the client request.
This makes the gateway hard to use standalone.
One main goal of the gateway is to hide the configuration details, especially secrets, from the clients.
A commonly proposed workaround is to add a reverse proxy such as nginx, which then injects the plain-text config,
but this adds another component that has to be deployed and maintained.

This PR adds the possibility to provide a named_configs.json file at startup time,
which defines multiple named configurations. The client then uses the name of a configuration
in the "x-portkey-config" header instead of providing the full configuration text.

Additionally, a configuration named "default" can be defined,
which is used when no "x-portkey-config" header is present.

An example configuration for ./named_configs.json:

{
    "named_configs": {
        "default": {
            "provider": "ollama",
            "custom_host": "http://localhost:11434",
            "api_key": "$OLLAMA_KEY"
        },
        "openai_dev": {
            "provider": "openai",
            "api_key": "$OPENAI_DEV_KEY"
        },
        "ollama_dev": {
            "provider": "ollama",
            "custom_host": "http://ollama-dev.example.com:11434"
        },
        "prod": {
            "strategy": { "mode": "fallback" },
            "retry": { "attempts": 2 },
            "targets": [
                { "provider": "openai", "api_key": "$OPENAI_KEY", "retry": { "attempts": 5 } },
                { "provider": "anthropic", "api_key": "$ANTHROPIC_KEY" },
                { "provider": "ollama", "custom_host": "https://selfhosted.example.com" }
            ]
        }
    }
}

To use named configurations, the environment variable NAMED_CONFIGS has to be set.
$NAMED_CONFIGS can contain either the JSON directly or the name of a file containing the JSON, e.g. "./named_configs.json".
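The dual behavior (inline JSON vs. file path) could be sketched roughly like this; the function name `loadNamedConfigs` and the detection heuristic are my assumptions, not the PR's actual code:

```typescript
import { readFileSync } from "fs";

// Hypothetical sketch: if the value looks like a JSON object, parse it
// directly; otherwise treat it as a path to a file containing the JSON.
function loadNamedConfigs(raw: string): Record<string, object> {
  const text = raw.trim().startsWith("{") ? raw : readFileSync(raw, "utf-8");
  const parsed = JSON.parse(text);
  // The top-level "named_configs" key holds the map of name -> config.
  return parsed.named_configs ?? {};
}
```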

Environment variables in the JSON values are resolved if available. Unresolved variables remain unchanged.
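The substitution described above (the PR's tests mention a `resolveEnvVars` helper) could look roughly like this; the regex and signature are my assumptions:

```typescript
// Hypothetical sketch of env-var substitution in config values:
// each $NAME is replaced by its environment value; unknown names stay as-is.
function resolveEnvVars(
  value: string,
  env: Record<string, string | undefined> = process.env
): string {
  return value.replace(/\$([A-Za-z_][A-Za-z0-9_]*)/g, (match: string, name: string) =>
    env[name] !== undefined ? (env[name] as string) : match
  );
}
```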

If the environment variable NAMED_CONFIGS is not set,
the gateway behaves the same way as before.
But if the environment variable is set, only config names can be provided in "x-portkey-config".
Invalid config names are handled as if no "x-portkey-config" header was given.
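The lookup rules above (valid name wins, invalid or missing name falls back to "default" if defined) could be sketched like this; `resolveConfigHeader` is a hypothetical name for illustration, not the PR's actual function:

```typescript
type NamedConfigs = Record<string, object>;

// Hypothetical sketch: resolve the "x-portkey-config" header value to a config.
// A valid name returns its config; an invalid or absent name falls back to
// the "default" entry, which may itself be undefined.
function resolveConfigHeader(
  header: string | undefined,
  configs: NamedConfigs
): object | undefined {
  if (header !== undefined && configs[header] !== undefined) {
    return configs[header];
  }
  return configs["default"];
}
```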

Tests Run/Test cases added: (required)

  • resolveEnvVars (tested indirectly via processNamedConfig)
  • namedConfig fallback to "default"
  • processNamedConfig – no config file
  • NAMED_CONFIGS environment variable

I am struggling to get the tests and the server running at the same time.
The problem seems to be related to "await import" in env.ts,
which is only allowed in ESM (the server), while "npx jest" runs with CommonJS and needs "require".
Maybe someone can give me a hint on how to get both working at the same time.

Manual Tests:

Using the ./named_configs.json from above, build and start the Docker container with:

docker build -t portkeyai-gateway:namedconfigs .
docker run --rm --name portkeyai-gateway \
    -p 8787:8787 \
    -e "OPENAI_DEV_KEY=Your OpenAI API Key" \
    -e NAMED_CONFIGS=./named_configs.json \
    -v $(pwd)/named_configs.json:/app/named_configs.json \
    portkeyai-gateway:namedconfigs

In the startup logs there is a new line:

✅ NAMED_CONFIGS loaded successfully.

Sending a curl request:

curl http://localhost:8787/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "x-portkey-config: openai_dev" \
    -d '{
        "model": "gpt-4o",
        "messages": [
            {"role": "user", "content": "Hello!"}
        ]
    }'

Currently there is debug logging showing the config resolution, so we can see which config is used:

namedConfig( openai_dev ) = {"provider":"openai","api_key":"Your OpenAI API Key"}

This means the request is internally handled as if {"provider":"openai","api_key":"Your OpenAI API Key"} had been given in "x-portkey-config".

Successfully tested the communication with a locally started Ollama server.

Type of Change:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Refactoring (no functional changes)
