3 Commits
bisect...main

Author SHA256 Message Date
7a4fbde4e7 Update ollama to 0.14.0
Signed-off-by: Egbert Eich <eich@suse.com>
2026-01-20 18:57:11 +01:00
57f263b6f8 Make sure we build for all architectures supported by CUDA
Signed-off-by: Egbert Eich <eich@suse.com>
2026-01-16 12:28:49 +01:00
2acc3720ee Update to version 0.13.5
Signed-off-by: Egbert Eich <eich@suse.com>
2026-01-12 17:36:44 +01:00
6 changed files with 119 additions and 9 deletions

_services

@@ -4,6 +4,6 @@
<service name="go_modules" mode="manual">
<param name="compression">zstd</param>
<param name="replace">golang.org/x/net=golang.org/x/net@v0.46.0</param>
<param name="replace">golang.org/x/net=golang.org/x/net@v0.48.0</param>
</service>
</services>
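
The replace parameter pins the vendored golang.org/x/net at v0.48.0 the next time the service regenerates vendor.tar.zstd. Outside OBS, a minimal sketch of the same pin with the plain Go toolchain (assuming a checked-out ollama source tree):

  # Pin golang.org/x/net to v0.48.0, then rebuild the vendor tree
  go mod edit -replace golang.org/x/net=golang.org/x/net@v0.48.0
  go mod tidy
  go mod vendor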

Binary file not shown.

BIN
ollama-0.14.0.tar.gz (LFS)

Binary file not shown.

ollama.changes

@@ -1,3 +1,108 @@
-------------------------------------------------------------------
Fri Jan 16 11:26:15 UTC 2026 - Egbert Eich <eich@suse.com>
- Make sure we build for all architectures supported by CUDA.
-------------------------------------------------------------------
Wed Jan 14 18:39:46 UTC 2026 - Eyad Issa <eyadlorenzo@gmail.com>
- Update to version 0.14.0:
* 'ollama run --experimental' now opens a new Ollama CLI
that includes an agent loop and the bash tool
* Anthropic API compatibility: support for the /v1/messages API
(see the sketch after this entry)
* A new REQUIRES command for the Modelfile allows declaring which
version of Ollama is required for the model
* For older models, Ollama will avoid an integer underflow on low
VRAM systems during memory estimation
* More accurate VRAM measurements for AMD iGPUs
* An error is now returned when embeddings produce NaN or -Inf
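
A minimal sketch of the new Anthropic-compatible endpoint, assuming it mirrors the shape of Anthropic's Messages API and that the server listens on Ollama's default port; the model name is illustrative:

  # Hypothetical request; body shape assumed from Anthropic's Messages API
  curl http://localhost:11434/v1/messages \
    -H 'content-type: application/json' \
    -d '{
          "model": "llama3.2",
          "max_tokens": 256,
          "messages": [{"role": "user", "content": "Hello"}]
        }'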
-------------------------------------------------------------------
Fri Dec 19 12:01:05 UTC 2025 - Glen Masgai <glen.masgai@gmail.com>
- Added 'Requires:' tag for subpackages to spec file
- Update to version 0.13.5:
* New models: FunctionGemma
* 'bert' architecture models now run on Ollama's engine
* Added built-in renderer & tool parsing capabilities for
DeepSeek-V3.1
* Fixed issue where nested properties in tools may not have been
rendered properly
-------------------------------------------------------------------
Wed Dec 17 11:48:24 UTC 2025 - Glen Masgai <glen.masgai@gmail.com>
- Update vendored golang.org/x/net/html to v0.48.0
- Update to version 0.13.4:
* New models: Nemotron 3 Nano, Olmo 3, Olmo 3.1
* Flash Attention is now enabled by default
* Fixed handling of long contexts with Gemma 3 models
* Fixed issue that would occur with Gemma 3 QAT models or
other models imported with the Gemma 3 architecture
- Update to version 0.13.3:
* New models: Devstral-Small-2, rnj-1, nomic-embed-text-v2
* Improved truncation logic when using /api/embed and
/v1/embeddings
* Extend Gemma 3 architecture to support rnj-1 model
* Fix error that would occur when running qwen2.5vl with image
input
- Update to version 0.13.2:
* New models: Qwen3-Next
* Flash attention is now enabled by default for vision models
such as mistral-3, gemma3, qwen3-vl and more. This improves
memory utilization and performance when providing images as
input (see the note after this entry).
* Fixed GPU detection on multi-GPU CUDA machines
* Fixed issue where deepseek-v3.1 would always think even when
thinking was disabled in Ollama's app
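
Flash attention has also been controllable explicitly for some time; a minimal sketch, assuming the documented OLLAMA_FLASH_ATTENTION switch still applies to this build:

  # Force flash attention on for the server process
  OLLAMA_FLASH_ATTENTION=1 ollama serve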
-------------------------------------------------------------------
Thu Dec 4 18:07:05 UTC 2025 - Eyad Issa <eyadlorenzo@gmail.com>
- Update to version 0.13.1:
* New models: Ministral-3, Mistral-Large-3
* nomic-embed-text will now use Ollama's engine by default
* Tool calling support for cogito-v2.1
* Ollama will now render errors better instead of showing
'Unmarshal:' errors
-------------------------------------------------------------------
Sat Nov 22 04:14:47 UTC 2025 - Glen Masgai <glen.masgai@gmail.com>
- Update to version 0.13.0:
* New models: DeepSeek-OCR, Cogito-V2.1
* DeepSeek-V3.1 architecture is now supported in Ollama's engine
* Fixed performance issues that arose in Ollama 0.12.11 on CUDA
* Fixed issue where Linux install packages were missing required
Vulkan libraries
* Improved CPU and memory detection while in containers/cgroups
* Improved VRAM information detection for AMD GPUs
* Improved KV cache performance to no longer require
defragmentation
- Update to version 0.12.11:
* Ollama's API and the OpenAI-compatible API now support
logprobs; see https://cookbook.openai.com/examples/using_logprobs
and https://github.com/ollama/ollama/releases/tag/v0.12.11
(see the sketch after this entry)
* Ollama's new app now supports WebP images
* Improved rendering performance in Ollama's new app, especially
when rendering code
* The "required" field in tool definitions will now be omitted if
not specified
* Fixed issue where "tool_call_id" would be omitted when using
the OpenAI-compatible API.
* Fixed issue where ollama create would import data from both
consolidated.safetensors and other safetensor files.
* Ollama will now prefer dedicated GPUs over iGPUs when
scheduling models
* Vulkan can now be enabled by setting OLLAMA_VULKAN=1.
For example: OLLAMA_VULKAN=1 ollama serve
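
A minimal sketch of requesting logprobs through the OpenAI-compatible endpoint, assuming the request shape follows the OpenAI chat completions API; the model name is illustrative:

  # Ask for per-token log probabilities plus the top 3 alternatives
  curl http://localhost:11434/v1/chat/completions \
    -H 'content-type: application/json' \
    -d '{
          "model": "llama3.2",
          "logprobs": true,
          "top_logprobs": 3,
          "messages": [{"role": "user", "content": "Hi"}]
        }'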
-------------------------------------------------------------------
Mon Nov 10 19:34:43 UTC 2025 - Egbert Eich <eich@suse.com>
@@ -25,6 +130,7 @@ Fri Nov 7 15:40:39 UTC 2025 - Glen Masgai <glen.masgai@gmail.com>
-------------------------------------------------------------------
Sun Nov 2 04:00:05 UTC 2025 - Glen Masgai <glen.masgai@gmail.com>
- Fixed issue with duplicated libraries (/usr/lib, /usr/lib64)
- Update to version 0.12.9

ollama.spec

@@ -1,7 +1,7 @@
#
# spec file for package ollama
#
-# Copyright (c) 2025 SUSE LLC and contributors
+# Copyright (c) 2026 SUSE LLC and contributors
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
@@ -35,7 +35,7 @@
%define cuda_version %{cuda_version_major}-%{cuda_version_minor}
Name: ollama
-Version: 0.12.10
+Version: 0.14.0
Release: 0
Summary: Tool for running AI models on-premise
License: MIT
@@ -102,18 +102,21 @@ can be imported.
%package vulkan
Summary: Ollama Module using Vulkan
+Requires: %{name} = %{version}-%{release}

%description vulkan
Ollama plugin module using Vulkan.

%package cuda
Summary: Ollama Module using CUDA
+Requires: %{name} = %{version}-%{release}

%description cuda
Ollama plugin module using NVIDIA CUDA.

%package rocm
Summary: Ollama Module using AMD ROCm
+Requires: %{name} = %{version}-%{release}

%description rocm
Ollama plugin module for ROCm.
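
The added Requires: tags lock each accelerator subpackage to the exact base package build. One way to confirm the dependency on an installed system (subpackage name taken from the spec):

  # Show the versioned dependency the cuda subpackage carries
  rpm -q --requires ollama-cuda | grep '^ollama'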
@@ -157,7 +160,8 @@ sed -i -e 's@"lib"@"%{_lib}"@' \
-UOLLAMA_INSTALL_DIR -DOLLAMA_INSTALL_DIR=%{_libdir}/ollama \
-UCMAKE_INSTALL_BINDIR -DCMAKE_INSTALL_BINDIR=%{_libdir}/ollama \
-DGGML_BACKEND_DIR=%{_libdir}/ollama \
-%{?with_cuda:-DCMAKE_CUDA_COMPILER=/usr/local/cuda-%{cuda_version_major}.%{cuda_version_minor}/bin/nvcc} \
+%{?with_cuda:-DCMAKE_CUDA_COMPILER=/usr/local/cuda-%{cuda_version_major}.%{cuda_version_minor}/bin/nvcc \
+ -DCMAKE_CUDA_ARCHITECTURES=all} \
%{?with_rocm:-DCMAKE_HIP_COMPILER=%rocmllvm_bindir/clang++ \
-DAMDGPU_TARGETS=%{rocm_gpu_list_default}} \
%{nil}
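
CMake accepts the special value all for CMAKE_CUDA_ARCHITECTURES (CMake 3.23 and newer), which compiles the CUDA backend for every architecture the installed toolkit supports rather than a hand-picked list. A standalone sketch of the same flag outside the spec, with the nvcc path assumed:

  # Configure a build that targets all CUDA architectures known to nvcc
  cmake -B build \
    -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc \
    -DCMAKE_CUDA_ARCHITECTURES=all
  cmake --build build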

BIN
vendor.tar.zstd (LFS)

Binary file not shown.