2 Commits
test ... main

7a4fbde4e7  Update ollama to 0.14.0
            Signed-off-by: Egbert Eich <eich@suse.com>
            2026-01-20 18:57:11 +01:00
57f263b6f8  Make sure we build for all architectures supported by CUDA
            Signed-off-by: Egbert Eich <eich@suse.com>
            2026-01-16 12:28:49 +01:00
5 changed files with 28 additions and 8 deletions


ollama-0.14.0.tar.gz (new binary file, stored in LFS; contents not shown)


@@ -1,3 +1,22 @@
+-------------------------------------------------------------------
+Fri Jan 16 11:26:15 UTC 2026 - Egbert Eich <eich@suse.com>
+- Make sure we build for all architectures supported by CUDA.
+-------------------------------------------------------------------
+Wed Jan 14 18:39:46 UTC 2026 - Eyad Issa <eyadlorenzo@gmail.com>
+- Update to version 0.14.0:
+  * ollama run --experimental CLI will now open a new Ollama CLI
+    that includes an agent loop and the bash tool
+  * Anthropic API compatibility: support for the /v1/messages API
+  * A new REQUIRES command for the Modelfile allows declaring which
+    version of Ollama is required for the model
+  * For older models, Ollama will avoid an integer underflow on low
+    VRAM systems during memory estimation
+  * More accurate VRAM measurements for AMD iGPUs
+  * An error will now return when embeddings return NaN or -Inf
 -------------------------------------------------------------------
 Fri Dec 19 12:01:05 UTC 2025 - Glen Masgai <glen.masgai@gmail.com>
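The "integer underflow on low VRAM systems" fix above can be illustrated with a minimal Go sketch (Ollama is written in Go). The function names and sizes here are hypothetical, not Ollama's actual code; the point is only how an unsigned subtraction wraps around when the reserved overhead exceeds free VRAM:

```go
// Minimal sketch, not Ollama's actual memory-estimation code: on a
// low-VRAM system, subtracting a fixed overhead from free VRAM with
// unsigned arithmetic wraps around to an enormous value.
package main

import "fmt"

// naiveUsableVRAM underflows when overhead > free: the result wraps
// to nearly 2^64, so the estimator believes almost unlimited VRAM
// is available.
func naiveUsableVRAM(free, overhead uint64) uint64 {
	return free - overhead
}

// safeUsableVRAM guards the subtraction and clamps to zero instead.
func safeUsableVRAM(free, overhead uint64) uint64 {
	if overhead >= free {
		return 0
	}
	return free - overhead
}

func main() {
	free := uint64(512 << 20)      // 512 MiB free VRAM
	overhead := uint64(1024 << 20) // 1 GiB reserved overhead
	fmt.Println(naiveUsableVRAM(free, overhead)) // wraps to a huge value
	fmt.Println(safeUsableVRAM(free, overhead))  // clamps to 0
}
```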


@@ -1,7 +1,7 @@
 #
 # spec file for package ollama
 #
-# Copyright (c) 2025 SUSE LLC and contributors
+# Copyright (c) 2026 SUSE LLC and contributors
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -35,7 +35,7 @@
 %define cuda_version %{cuda_version_major}-%{cuda_version_minor}
 Name: ollama
-Version: 0.13.5
+Version: 0.14.0
 Release: 0
 Summary: Tool for running AI models on-premise
 License: MIT
@@ -160,7 +160,8 @@ sed -i -e 's@"lib"@"%{_lib}"@' \
 -UOLLAMA_INSTALL_DIR -DOLLAMA_INSTALL_DIR=%{_libdir}/ollama \
 -UCMAKE_INSTALL_BINDIR -DCMAKE_INSTALL_BINDIR=%{_libdir}/ollama \
 -DGGML_BACKEND_DIR=%{_libdir}/ollama \
-%{?with_cuda:-DCMAKE_CUDA_COMPILER=/usr/local/cuda-%{cuda_version_major}.%{cuda_version_minor}/bin/nvcc} \
+%{?with_cuda:-DCMAKE_CUDA_COMPILER=/usr/local/cuda-%{cuda_version_major}.%{cuda_version_minor}/bin/nvcc \
+-DCMAKE_CUDA_ARCHITECTURES=all} \
 %{?with_rocm:-DCMAKE_HIP_COMPILER=%rocmllvm_bindir/clang++ \
 -DAMDGPU_TARGETS=%{rocm_gpu_list_default}} \
 %{nil}
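The added `-DCMAKE_CUDA_ARCHITECTURES=all` asks CMake to compile the CUDA sources for every architecture the installed toolkit supports, rather than a hand-maintained list. A minimal, hypothetical CMakeLists.txt using the same setting (this is not the packaged project's build system, and `stub.cu` is a placeholder source file):

```cmake
# Minimal sketch, assuming a project with one CUDA source file.
# The special value "all" requires CMake >= 3.23.
cmake_minimum_required(VERSION 3.23)
project(cuda_all_archs LANGUAGES CXX CUDA)

# Equivalent to passing -DCMAKE_CUDA_ARCHITECTURES=all on the command
# line: nvcc emits code for every architecture the CUDA toolkit
# supports, so the binary runs on any GPU the toolkit can target.
set(CMAKE_CUDA_ARCHITECTURES all)

add_library(cuda_stub STATIC stub.cu)
```

Building for all architectures trades longer compile times and larger binaries for portability across every GPU generation the toolkit knows about, which suits a distribution package better than a curated architecture list.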

vendor.tar.zstd (binary file, stored in LFS; contents not shown)