Red Hat Developer Lightspeed (Developer Lightspeed) is a virtual assistant powered by generative AI that offers in-depth insights into Red Hat Developer Hub (RHDH) and its wide range of capabilities. You can interact with the assistant to explore RHDH in greater detail.
Developer Lightspeed provides a natural language interface within the RHDH console, helping you easily find information about the product, understand its features, and get answers to your questions as they come up.
Developer Lightspeed for Red Hat Developer Hub is available as a plug-in on all platforms that host RHDH, and it requires the use of Lightspeed Core.
Developer Lightspeed uses a Bring Your Own Model (BYOM) architecture: no inference provider is bundled by default.

> [!IMPORTANT]
> Developer Lightspeed starts in an unconfigured state with no inference provider. You must configure at least one external LLM provider before you can use the chatbot. The UI reflects the unconfigured state until a provider is set up.
Follow these steps to configure and launch Developer Lightspeed.
1. Load the Developer Lightspeed dynamic plugins

   Add the `developer-lightspeed/configs/dynamic-plugins/dynamic-plugins.lightspeed.yaml` file to the list of `includes` in your `configs/dynamic-plugins/dynamic-plugins.override.yaml` to enable the Developer Lightspeed plugins within RHDH. Example:

   ```yaml
   includes:
     - dynamic-plugins.default.yaml
     - developer-lightspeed/configs/dynamic-plugins/dynamic-plugins.lightspeed.yaml # <-- add to enable the Developer Lightspeed plugins

   # Below you can add your own custom dynamic plugins, including local ones.
   plugins: []
   ```
2. Copy the Lightspeed App Config example

   Start by creating a new local app config file for Lightspeed:

   ```bash
   cp developer-lightspeed/configs/app-config/app-config.lightspeed.local.example.yaml developer-lightspeed/configs/app-config/app-config.lightspeed.local.yaml
   ```
3. Set Environment Variables

   > [!NOTE]
   > If you intend to use any environment variables in the Lightspeed Core configuration file, `lightspeed-stack.yaml`, note that Lightspeed Core parses environment variables differently than is typical. Environment variables for this file must take one of the following forms:
   >
   > ```
   > ${env.VAR}
   > ${env.VAR:=default-value}
   > ${env.VAR:+value}
   > ```
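   For illustration, here is a hypothetical `lightspeed-stack.yaml` fragment using this syntax. The keys shown are placeholders rather than documented Lightspeed Core options, and each form is assumed to behave like the analogous shell parameter expansion:

   ```yaml
   # Hypothetical keys, shown only to illustrate the ${env.*} syntax:
   service:
     port: ${env.SERVICE_PORT:=8080}   # SERVICE_PORT, assumed to default to 8080 when unset
     api_key: ${env.PROVIDER_API_KEY}  # substituted verbatim from the environment
     tls: ${env.TLS_CERT:+enabled}     # assumed to expand to "enabled" only when TLS_CERT is set
   ```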
   In the root of this repository there is a `default.env` file. Copy its contents to `.env` and fill in the required values:

   ```bash
   cp default.env .env
   ```
   > [!IMPORTANT]
   > You must configure at least one inference provider in your `.env` file before starting the application. Without a configured provider, Lightspeed will start in an unconfigured state and the chatbot will not be functional.

   Configure at least one of the following providers in your `.env` file; you can enable multiple providers simultaneously.

   > [!NOTE]
   > Supported providers: Developer Lightspeed supports any service that is OpenAI API compatible, including but not limited to:
   >
   > - vLLM: A high-performance inference server (self-hosted or cloud)
   > - OpenAI: OpenAI's API (GPT-4, etc.)
   > - Ollama: A locally or remotely hosted Ollama instance
   > - Vertex AI: Google Cloud's Vertex AI service (experimental)
   Use vLLM for high-performance inference with self-hosted or cloud-based vLLM servers. This provider configuration also works with any service (Azure OpenAI, LM Studio, Mistral, Nvidia NIM, etc.) that exposes an OpenAI-compatible endpoint.
   ```env
   # Enable vLLM provider (or a generic OpenAI API compatible endpoint)
   ENABLE_VLLM=true

   # REQUIRED: URL to your server (must end with /v1)
   # Examples:
   #   - vLLM server: https://your-vllm-server.com/v1
   #   - Azure OpenAI: https://your-resource.openai.azure.com/v1
   #   - LM Studio: http://localhost:1234/v1
   #   - Any OpenAI-compatible endpoint
   VLLM_URL=https://your-server.com/v1

   # REQUIRED: API key for authentication (if your server requires it)
   # For Azure OpenAI, use your Azure API key
   # For LM Studio or local servers, you can use any value or leave as default
   VLLM_API_KEY=your-api-key-here

   # OPTIONAL: Maximum tokens per request (default: 4096)
   # VLLM_MAX_TOKENS=4096

   # OPTIONAL: TLS verification (default: true)
   # Set to false for local servers with self-signed certificates
   # VLLM_TLS_VERIFY=true
   ```
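   To sanity-check that an endpoint actually speaks the OpenAI API before wiring it in, you can query its model list. A minimal sketch, assuming `VLLM_URL` and `VLLM_API_KEY` are exported in your shell with the values from `.env`:

   ```bash
   # List available models; the URL already ends with /v1, so /models completes the path.
   curl -s "${VLLM_URL}/models" -H "Authorization: Bearer ${VLLM_API_KEY}"
   ```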
   > [!TIP]
   > Using Other OpenAI API Compatible Services:
   >
   > If you have an OpenAI API compatible endpoint that doesn't have its own provider configuration (like Azure OpenAI, LM Studio, Mistral, Nvidia NIM, etc.), you can use the vLLM provider configuration above. Simply:
   >
   > - Set `ENABLE_VLLM=true`
   > - Set `VLLM_URL` to your service's endpoint (must end with `/v1`)
   > - Set `VLLM_API_KEY` to your service's API key (if required)
   >
   > The `remote::vllm` provider type accepts any OpenAI API compatible endpoint, not just vLLM servers.

   Use OpenAI's API to access GPT models (GPT-4, etc.).
   ```env
   # Enable OpenAI provider
   ENABLE_OPENAI=true

   # REQUIRED: Your OpenAI API key
   OPENAI_API_KEY=sk-your-openai-api-key-here
   ```
   Use an externally hosted Ollama instance to serve models. You must run your own Ollama server separately; it is not bundled in the compose setup.
   ```env
   # Enable Ollama provider
   ENABLE_OLLAMA=true

   # REQUIRED: URL to your Ollama server (must end with /v1)
   # Examples:
   #   - Local Ollama: http://host.docker.internal:11434/v1
   #   - Remote Ollama: https://your-ollama-server.com:11434/v1
   OLLAMA_URL=http://host.docker.internal:11434/v1
   ```
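   Because the URL must be reachable from inside the containers (see the note below), it can help to test it from a throwaway container rather than from the host. A sketch assuming Podman and the `quay.io/curl/curl` image:

   ```bash
   # Ollama's OpenAI-compatible endpoint should answer on /v1/models.
   podman run --rm quay.io/curl/curl:latest -s http://host.containers.internal:11434/v1/models
   ```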
   > [!NOTE]
   > Since Ollama runs outside the compose stack, you need to ensure the URL is accessible from within the containers. For a locally running Ollama, use `host.docker.internal` (Docker) or `host.containers.internal` (Podman) instead of `localhost`.

   Use Google Cloud's Vertex AI service to run Gemini models.

   > [!WARNING]
   > Experimental Feature: Using Vertex AI to run Google models is experimental. Vertex AI provides an OpenAI-compatible API for Gemini models, which is why it can work with Developer Lightspeed (which supports OpenAI API implementations). This is provided as an alternative way to access Google models since `remote::gemini` is not yet fully supported.

   ```env
   # Enable Vertex AI provider
   ENABLE_VERTEX_AI=true

   # REQUIRED: Absolute path to your Google Cloud credentials JSON file
   VERTEX_AI_CREDENTIALS_PATH=/absolute/path/to/your/google-cloud-credentials.json

   # REQUIRED: Your GCP project ID
   VERTEX_AI_PROJECT=your-gcp-project-id

   # OPTIONAL: GCP location/region (default: us-central1)
   # VERTEX_AI_LOCATION=us-central1
   ```
   > [!NOTE]
   > To use Vertex AI, you need:
   >
   > - A Google Cloud Platform (GCP) project with Vertex AI API enabled
   > - A service account with appropriate permissions
   > - A service account key file (JSON) downloaded from GCP (see the sketch below)
   > - Set `VERTEX_AI_PROJECT` to your project ID
   > - Set `VERTEX_AI_CREDENTIALS_PATH` to the absolute path of your credentials JSON file
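   As a sketch of generating the key file with the gcloud CLI (the service account name below is a hypothetical placeholder):

   ```bash
   # Create and download a JSON key for an existing service account.
   gcloud iam service-accounts keys create ./google-cloud-credentials.json \
     --iam-account=lightspeed-sa@your-gcp-project-id.iam.gserviceaccount.com
   ```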
   Developer Lightspeed supports query validation, which restricts the chatbot to RHDH-related questions. When enabled, off-topic queries (e.g., asking about the weather) are rejected while development-related questions are allowed.
   ```env
   # Enable query validation
   ENABLE_VALIDATION=true

   # REQUIRED if validation is enabled: Must be one of your enabled providers
   # Example: if ENABLE_OPENAI=true, then set VALIDATION_PROVIDER=openai
   VALIDATION_PROVIDER=openai

   # REQUIRED if validation is enabled: Must be an available model for the chosen provider
   # Example: VALIDATION_MODEL_NAME=gpt-4o-mini
   VALIDATION_MODEL_NAME=gpt-4o-mini
   ```
   > [!NOTE]
   > The validation provider must be one of your enabled inference providers, and the model must be available on that provider.
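   Putting it together, a minimal `.env` that enables the OpenAI provider and reuses it for validation might look like this (the API key is a placeholder):

   ```env
   ENABLE_OPENAI=true
   OPENAI_API_KEY=sk-your-openai-api-key-here

   ENABLE_VALIDATION=true
   VALIDATION_PROVIDER=openai
   VALIDATION_MODEL_NAME=gpt-4o-mini
   ```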
4. Start the application

   To start Developer Lightspeed, run the following from the root of the repository:

   ```bash
   bash ./developer-lightspeed/scripts/start-lightspeed.sh
   ```

   The script will auto-detect your container runtime (podman or docker) and start the services. If auto-detection fails, it will prompt you to choose manually.

   > [!IMPORTANT]
   > Ensure you have configured at least one inference provider in your `.env` file (see step 3 above) before starting. Without a provider, the chatbot will not be functional.
5. Verify that all services are running

   After starting the application, make sure all services are running:

   ```bash
   podman compose ps
   # OR
   docker compose ps
   ```

   Look for all services to show `running` or `Up` in the Status column. You should see output similar to:

   ```
   CONTAINER ID  IMAGE                                              CREATED         STATUS                     NAMES
   31c3c681b742  quay.io/rhdh-community/rhdh:next                   16 seconds ago  Exited (0) 5 seconds ago   rhdh-plugins-installer
   f7b74b9f241e  quay.io/rhdh-community/rhdh:next                   4 seconds ago   Up 5 seconds (starting)    rhdh
   a4e2b1f38d90  quay.io/redhat-ai-dev/rag-content:release-1.9-...  16 seconds ago  Exited (0) 10 seconds ago  rag-init
   2860fc13b036  quay.io/lightspeed-core/lightspeed-stack:0.5.1     15 seconds ago  Up 5 seconds (starting)    lightspeed-core
   ```

   `rhdh-plugins-installer` and `rag-init` are init containers; they run once and exit with status `0`. `rhdh` and `lightspeed-core` should show `Up` or `running`.

   Note: If any service is not running, you can inspect the logs:

   ```bash
   podman logs <container-name>
   ```
6. Open http://localhost:7007/lightspeed in your browser to access Developer Lightspeed.
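   If you want to confirm the backend is up from the command line first, you can probe it with curl. This assumes RHDH exposes the standard Backstage `/healthcheck` endpoint on port 7007:

   ```bash
   # Expect a small JSON payload such as {"status":"ok"} once the backend is ready.
   curl -s http://localhost:7007/healthcheck
   ```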
The easiest way to stop Developer Lightspeed is to use the stop script:

```bash
bash ./developer-lightspeed/scripts/stop-lightspeed.sh
```

With volumes removal:

```bash
bash ./developer-lightspeed/scripts/stop-lightspeed.sh -v
# or
bash ./developer-lightspeed/scripts/stop-lightspeed.sh --volumes
```

The script will:

- Auto-detect your container runtime (podman or docker)
- Without the `-v` flag: stop containers only (preserves volumes for faster restart)
- With the `-v` flag: stop containers and remove volumes (complete cleanup)
If you prefer to stop containers manually:
Stop containers only (preserves volumes):
```bash
podman compose -f compose.yaml -f developer-lightspeed/compose.yaml down
```

Stop containers and remove volumes:

```bash
podman compose -f compose.yaml -f developer-lightspeed/compose.yaml down -v
```

Note: All instructions in this guide apply to both Podman and Docker. Replace `podman compose` with `docker compose` if you are using Docker.
If you encounter issues while setting up or running Developer Lightspeed, try the following solutions:
- Check container logs:

  Use the following command to view logs for a specific container:

  ```bash
  podman logs <container-name>
  # OR
  docker logs <container-name>
  ```

  Look for error messages that can help diagnose the problem.
- Common causes:
  - Port conflicts (another service is using the same port; see the check below)
  - Insufficient memory or CPU resources
  - Incorrect environment variables
  - Missing synced config files (run `sync-lightspeed-configs.sh`; see Syncing Lightspeed Configuration Files)
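To rule out a port conflict, check whether another process is already listening on RHDH's port (7007 by default); on Linux/macOS, `lsof` is one option:

```bash
# Any output means something is already bound to the port.
lsof -i :7007
```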
- Ensure you have the necessary permissions to access files and directories, especially when mounting volumes.
- On Linux/macOS, you may need to adjust permissions with `chmod` or run commands with `sudo`.
If the web UI is not accessible at http://localhost:7007/lightspeed:
- Make sure all containers are running:

  ```bash
  podman compose ps
  # OR
  docker compose ps
  ```

- Check for firewall or VPN issues that may block access to localhost ports.
- Developer Lightspeed starts unconfigured by default. You must configure at least one inference provider in your `.env` file (see step 3).
- Verify provider is enabled: Check that at least one of `ENABLE_VLLM=true`, `ENABLE_OPENAI=true`, `ENABLE_OLLAMA=true`, or `ENABLE_VERTEX_AI=true` is set in your `.env` file.
- Check required variables: Ensure all required variables for your chosen provider are set.
- Verify connectivity: Ensure the provider URL is accessible from within the container.
- Check logs: Review `lightspeed-core` container logs for provider connection errors:

  ```bash
  podman logs lightspeed-core
  # OR
  docker logs lightspeed-core
  ```

- Validate API keys: Ensure API keys are correct and have proper permissions.
- Double-check that your `.env` file is present and correctly configured.
- Restart the containers after making changes to environment files.
If you enabled query validation but it isn't filtering queries:
- Verify validation is enabled: Check that `ENABLE_VALIDATION=true` is set in your `.env` file.
- Check provider: Ensure `VALIDATION_PROVIDER` is set to one of your enabled inference providers.
- Check model: Ensure `VALIDATION_MODEL_NAME` is set to a model available on the validation provider.
- Try stopping and removing all containers, then starting again — see Cleanup.
If your issue persists, please open an issue on GitHub with details about your problem so we can help you troubleshoot.
Available configuration options:
```yaml
lightspeed:
  # OPTIONAL: Custom user prompts displayed to users
  # If not provided, the plugin uses built-in default prompts
  prompts:
    - title: <prompt_title> # REQUIRED: Display title for the prompt
      message: <prompt_message> # REQUIRED: The actual prompt text/question

  # OPTIONAL: Backend-only configurations
  servicePort: 8080 # OPTIONAL: Port for lightspeed service (default: 8080)
  systemPrompt: <custom_system_prompt> # OPTIONAL: Override default RHDH system prompt
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `prompts` | Array | No | Built-in prompts | Custom welcome prompts for users |
| `prompts[].title` | String | Yes* | - | Display title for the prompt (*required if the prompts array is provided) |
| `prompts[].message` | String | Yes* | - | The actual prompt text/question (*required if the prompts array is provided) |
| `servicePort` | Number | No | `8080` | Port for the lightspeed backend service |
| `systemPrompt` | String | No | RHDH default | Custom system prompt to override default behavior |
```yaml
lightspeed:
  prompts:
    - title: "Quick Start"
      message: "How do I enable a dynamic plugin?"
  servicePort: 8080
  systemPrompt: "You are a helpful assistant focused on Red Hat Developer Hub development."
```

By default, the compose setup uses `quay.io/lightspeed-core/lightspeed-stack:0.5.1`. To use a different image (e.g., a newer version or a custom build), set the `LIGHTSPEED_CORE_IMAGE` environment variable in your `.env` file:
```env
LIGHTSPEED_CORE_IMAGE=quay.io/lightspeed-core/lightspeed-stack:0.6.0
```

If you encounter out-of-memory issues with the Lightspeed Core container, you can increase the memory available to your Podman or Docker virtual machine:
```bash
podman machine stop
podman machine set --memory=8192
podman machine start
```

- The example above sets the memory to 8 GiB (`8192` MiB).
- Adjust the value as needed (e.g., `--memory=16384` for 16 GiB).
- Ensure your host system has enough free RAM.
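To confirm the new limit took effect, you can inspect the machine. A sketch assuming a recent Podman release, where the configured memory (in MiB) appears under `Resources` in the inspect output:

```bash
podman machine inspect --format '{{ .Resources.Memory }}'
```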
After increasing the memory, restart your containers to use the new limits.
The Lightspeed Core configuration files (`config.yaml`, `rhdh-profile.py`, `lightspeed-stack.yaml`) are maintained upstream in the redhat-ai-dev/lightspeed-configs repository. The sync script downloads them into `developer-lightspeed/configs/extra-files/`.
Sync from the default (main branch):

```bash
bash ./developer-lightspeed/scripts/sync-lightspeed-configs.sh
```

Sync from a specific ref:

```bash
bash ./developer-lightspeed/scripts/sync-lightspeed-configs.sh --ref v1.0.0
```

Sync from a different repository:

```bash
bash ./developer-lightspeed/scripts/sync-lightspeed-configs.sh --repo your-org/your-fork
```

Check if local files are up to date (dry run):

```bash
bash ./developer-lightspeed/scripts/sync-lightspeed-configs.sh --check
```
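As an illustration, the dry run could back a hypothetical CI guard. This assumes `--check` exits nonzero when the local files have drifted from upstream, which you should verify against the script:

```bash
# Hypothetical CI step: fail the build when synced configs are stale (assumed exit-code behavior).
if ! bash ./developer-lightspeed/scripts/sync-lightspeed-configs.sh --check; then
  echo "Lightspeed configs are out of date; re-run sync-lightspeed-configs.sh" >&2
  exit 1
fi
```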