ollama/sysconfig.ollama
Eyad Issa 6334ea69a7
- Update to version 0.5.11:
  * No notable changes for Linux
- Update to version 0.5.10:
  * Fixed issue on multi-GPU Windows and Linux machines where
    memory estimations would be incorrect
- Update to version 0.5.9:
  * New model: DeepScaleR
  * New model: OpenThinker
- Update to version 0.5.8:
  * Ollama will now use AVX-512 instructions where available for
    additional CPU acceleration
  * Fixed indexing error that would occur when downloading a model
    with ollama run or ollama pull
  * Fixed cases where download progress would reverse

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=75
2025-02-15 01:36:40 +00:00

## Path: Network/Ollama
## Description: Ollama server access
## Type: string
## Default: "http://127.0.0.1:11434"
## ServiceRestart: ollama
#
# Set it to 0.0.0.0 to listen on all network interfaces
#
OLLAMA_HOST="http://127.0.0.1:11434"
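Since sysconfig files are plain shell variable assignments, the settings can be sourced and inspected directly, as a service wrapper would do before exporting them into the daemon's environment. A minimal sketch (the temporary path and value below are illustrative, not the installed configuration):

```shell
#!/bin/sh
# Sketch: write a minimal sysconfig fragment and source it. The path
# /tmp/ollama.sysconfig.example and the 0.0.0.0 value are hypothetical.
cfg=/tmp/ollama.sysconfig.example
cat > "$cfg" <<'EOF'
OLLAMA_HOST="http://0.0.0.0:11434"
EOF
. "$cfg"
# The variable is now set in this shell and can be exported to the service.
echo "OLLAMA_HOST=$OLLAMA_HOST"
```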
## Type: string
## Description: Ollama default quantization type for the K/V cache
## Default: "f16"
## ServiceRestart: ollama
OLLAMA_KV_CACHE_TYPE=f16
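The quantization types Ollama accepts for the K/V cache are, to my knowledge, f16, q8_0, and q4_0. A small sketch that validates a chosen value before it is written into the config (the function name is made up for illustration):

```shell
#!/bin/sh
# Sketch: reject unsupported K/V cache types before they reach the service.
# The accepted set {f16, q8_0, q4_0} is an assumption based on upstream docs.
valid_kv_cache_type() {
    case "$1" in
        f16|q8_0|q4_0) return 0 ;;
        *) return 1 ;;
    esac
}

valid_kv_cache_type "q8_0" && echo "q8_0: ok"
if ! valid_kv_cache_type "int8"; then echo "int8: rejected"; fi
```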
## Type: string
## Description: How long models stay loaded in memory (e.g. "5m", "0", "-1")
## Default: ""
## ServiceRestart: ollama
OLLAMA_KEEP_ALIVE=
## Type: string
## Description: Maximum number of parallel requests each model processes
## Default: ""
## ServiceRestart: ollama
OLLAMA_NUM_PARALLEL=
## Type: string
## Description: Maximum VRAM to be used
## Default: ""
## ServiceRestart: ollama
OLLAMA_MAX_VRAM=
## Type: string
## Description: Ollama runner directory
## Default: ""
## ServiceRestart: ollama
OLLAMA_RUNNERS_DIR=
## Type: string
## Description: Ollama temporary directory
## Default: ""
## ServiceRestart: ollama
OLLAMA_TMPDIR=
## Type: string
## Description: Directory where models are stored
## Default: ""
## ServiceRestart: ollama
OLLAMA_MODELS=
## Type: string
## Description: Comma-separated list of allowed browser origins (CORS)
## Default: ""
## ServiceRestart: ollama
OLLAMA_ORIGINS=
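OLLAMA_ORIGINS takes a comma-separated list of origins. A sketch of how such a list might be set and enumerated in shell (the example origins are hypothetical):

```shell
#!/bin/sh
# Sketch: a comma-separated origins list as OLLAMA_ORIGINS would hold it.
# The two origins below are illustrative placeholders.
OLLAMA_ORIGINS="http://localhost,https://example.org"

# Split on commas to enumerate the individual origins.
IFS=','
for origin in $OLLAMA_ORIGINS; do
    echo "allowed origin: $origin"
done
unset IFS
```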