Accepting request 1189591 from science:machinelearning

OBS-URL: https://build.opensuse.org/request/show/1189591
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=14
Dominique Leuenberger 2024-07-26 14:15:22 +00:00 committed by Git OBS Bridge
commit 973749ceec
8 changed files with 37 additions and 11 deletions

_service

@@ -3,7 +3,7 @@
 <service name="obs_scm" mode="manual">
 <param name="url">https://github.com/ollama/ollama.git</param>
 <param name="scm">git</param>
-<param name="revision">v0.2.6</param>
+<param name="revision">v0.2.8</param>
 <param name="versionformat">@PARENT_TAG@</param>
 <param name="versionrewrite-pattern">v(.*)</param>
 <param name="changesgenerate">enable</param>
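The only hand-edited source change in this request is the `revision` bump in _service above; the refreshed ollama-0.2.8.obscpio, ollama.obsinfo, and _servicedata below are outputs of rerunning the obs_scm service. Because the service is declared with mode="manual", it only runs when triggered explicitly. A minimal local sketch of that step with osc (assuming a checked-out package working copy and the obs-service-obs_scm helper installed; exact package names may vary):

    osc service manualrun   # run the mode="manual" services from _service, regenerating the obscpio and .obsinfo
    osc addremove           # record the new archive and drop the old one
    osc commit -m "Update to version 0.2.8"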

_servicedata

@@ -1,4 +1,4 @@
 <servicedata>
 <service name="tar_scm">
 <param name="url">https://github.com/ollama/ollama.git</param>
-<param name="changesrevision">b2554455572b28c0e18423d6fe6896cf7137dbd6</param></service></servicedata>
+<param name="changesrevision">c0648233f2236f82f6830d2aaed552ae0f72379b</param></service></servicedata>

BIN
ollama-0.2.6.obscpio (Stored with Git LFS)

Binary file not shown.

BIN
ollama-0.2.8.obscpio (Stored with Git LFS) (new file)

Binary file not shown.

ollama.changes

@@ -1,3 +1,29 @@
+-------------------------------------------------------------------
+Wed Jul 24 14:28:08 UTC 2024 - adrian@suse.de
+
+- Update to version 0.2.8:
+  * api embed docs (#5282)
+  * convert: capture `head_dim` for mistral (#5818)
+  * Update llama.cpp submodule commit to `d94c6e0c` (#5805)
+  * server: collect nested tool call objects when parsing (#5824)
+  * Remove no longer supported max vram var
+  * Refine error reporting for subprocess crash
+  * Remove out of space test temporarily (#5825)
+  * llm: consider `head_dim` in llama arch (#5817)
+  * Adjust windows ROCm discovery
+  * add patch for tekken (#5807)
+  * preserve last assistant message (#5802)
+  * Fix generate test flakyness (#5804)
+  * server: validate template (#5734)
+  * OpenAI: Function Based Testing (#5752)
+  * adjust openai chat msg processing (#5729)
+  * fix parsing tool calls
+  * server: check for empty tools array too (#5779)
+  * always provide content even if empty (#5778)
+  * server: only parse tool calls if tools are provided (#5771)
+  * Fix context exhaustion integration test for small gpus
+  * Refine scheduler unit tests for reliability
+
 -------------------------------------------------------------------
 Thu Jul 18 13:09:10 UTC 2024 - Eyad Issa <eyadlorenzo@gmail.com>

ollama.obsinfo

@@ -1,4 +1,4 @@
 name: ollama
-version: 0.2.6
-mtime: 1721255711
-commit: b2554455572b28c0e18423d6fe6896cf7137dbd6
+version: 0.2.8
+mtime: 1721680628
+commit: c0648233f2236f82f6830d2aaed552ae0f72379b

ollama.spec

@@ -17,7 +17,7 @@
 Name: ollama
-Version: 0.2.6
+Version: 0.2.8
 Release: 0
 Summary: Tool for running AI models on-premise
 License: MIT

BIN
vendor.tar.zstd (Stored with Git LFS)

Binary file not shown.