README.md — 1 line changed: 0 additions & 1 deletion
```diff
@@ -145,7 +145,6 @@ Run `docker compose build` inside of the repository.
 |---|---|---|
 |--model_path|Directory with downloaded model files which can be processed by llama.cpp|/path/to/models|
 |--server_path|Path of the llama.cpp executable (on Windows: server.exe).|/path/to/llamacpp/executable/server|
-|--n_gpu_layers|How many layers of the model to offload to the GPU. Adjust according to model and GPU memory. Default: 80|-1 for all, otherwise any number|
 |--host|Hostname of the server. Default: 0.0.0.0|0.0.0.0 or localhost|
 |--port|Port on which this web app should be started. Default: 5001|5001|
 |--config_file|Custom path to the configuration file.|config.yml|
```
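For illustration, the flags in the table could be parsed with `argparse` roughly as below. This is a hedged sketch, not the app's actual code: the flag names and defaults are taken from the README rows, but the parser setup itself is an assumption.

```python
import argparse

# Hypothetical sketch of the web app's CLI, mirroring the README table.
# Flag names and defaults come from the table; everything else is assumed.
parser = argparse.ArgumentParser(description="llama.cpp web app (sketch)")
parser.add_argument("--model_path",
                    help="Directory with downloaded model files for llama.cpp")
parser.add_argument("--server_path",
                    help="Path of the llama.cpp executable (on Windows: server.exe)")
parser.add_argument("--host", default="0.0.0.0",
                    help="Hostname of the server")
parser.add_argument("--port", type=int, default=5001,
                    help="Port on which this web app should be started")
parser.add_argument("--config_file", default="config.yml",
                    help="Custom path to the configuration file")

# Parse an example command line; unspecified flags fall back to defaults.
args = parser.parse_args(["--model_path", "/path/to/models",
                          "--port", "5001"])
print(args.port)         # 5001
print(args.host)         # "0.0.0.0" (default)
print(args.config_file)  # "config.yml" (default)
```

Note that after the change shown in the diff, `--n_gpu_layers` is no longer listed among the documented flags, so it is omitted here as well.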