
- Update to version 0.5.1:
- Update to version 0.5.0:
- Update to version 0.4.7:

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=69
Eyad Issa 2024-12-07 18:30:08 +00:00 committed by Git OBS Bridge
parent 46179bee73
commit 785c029f70
8 changed files with 43 additions and 13 deletions

_service

@@ -3,7 +3,7 @@
 <service name="obs_scm" mode="manual">
   <param name="url">https://github.com/ollama/ollama.git</param>
   <param name="scm">git</param>
-  <param name="revision">v0.4.6</param>
+  <param name="revision">v0.5.1</param>
   <param name="versionformat">@PARENT_TAG@</param>
   <param name="versionrewrite-pattern">v(.*)</param>
   <param name="changesgenerate">enable</param>

_servicedata

@@ -1,4 +1,4 @@
 <servicedata>
 <service name="tar_scm">
   <param name="url">https://github.com/ollama/ollama.git</param>
-  <param name="changesrevision">ce7455a8e1045ae12c5eaa9dc5bb5bdc84a098dc</param></service></servicedata>
+  <param name="changesrevision">de52b6c2f90ff220ed9469167d51e3f5d7474fa2</param></service></servicedata>

BIN
ollama-0.4.6.obscpio (Stored with Git LFS)

Binary file not shown.

BIN
ollama-0.5.1.obscpio (Stored with Git LFS) Normal file

Binary file not shown.

ollama.changes

@@ -1,3 +1,33 @@
+-------------------------------------------------------------------
+Sat Dec 07 18:24:04 UTC 2024 - Eyad Issa <eyadlorenzo@gmail.com>
+
+- Update to version 0.5.1:
+  * Fixed issue where Ollama's API would generate JSON output when
+    specifying "format": null
+  * Fixed issue where passing --format json to ollama run would
+    cause an error
+- Update to version 0.5.0:
+  * New models:
+    ~ Llama 3.3: a new state of the art 70B model.
+    ~ Snowflake Arctic Embed 2: Snowflake's frontier embedding
+      model.
+  * Ollama now supports structured outputs, making it possible to
+    constrain a model's output to a specific format defined by a
+    JSON schema. The Ollama Python and JavaScript libraries have
+    been updated to support structured outputs, together with
+    Ollama's OpenAI-compatible API endpoints.
+  * Fixed error importing model vocabulary files
+  * Experimental: new flag to set KV cache quantization to 4-bit
+    (q4_0), 8-bit (q8_0) or 16-bit (f16). This reduces VRAM
+    requirements for longer context windows.
+- Update to version 0.4.7:
+  * Enable index tracking for tools - openai api support (#7888)
+  * llama: fix typo and formatting in readme (#7876)
+  * readme: add SpaceLlama, YouLama, and DualMind to community
+    integrations (#7216)
+
 -------------------------------------------------------------------
 Sat Nov 30 19:47:23 UTC 2024 - Eyad Issa <eyadlorenzo@gmail.com>
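
A minimal sketch of the structured outputs feature described in the 0.5.0 entry above, using the Ollama Python library (which the entry notes was updated for it). It assumes a running local server, a pulled model, and pydantic for schema generation; the model name and schema are illustrative only, not part of this package change:

from ollama import chat
from pydantic import BaseModel

# Illustrative schema; any JSON schema works here.
class CountryInfo(BaseModel):
    name: str
    capital: str
    population: int

response = chat(
    model="llama3.3",  # any locally pulled model
    messages=[{"role": "user", "content": "Tell me about Canada."}],
    # format accepts "json" for free-form JSON mode, or a JSON schema
    # (here generated by pydantic) to constrain the output shape.
    format=CountryInfo.model_json_schema(),
)

# The reply content is a JSON string conforming to the schema.
country = CountryInfo.model_validate_json(response.message.content)
print(country)

This also touches the 0.5.1 fixes above: format="json" requests free-form JSON mode, while "format": null over the REST API now disables format constraints instead of still forcing JSON output. The experimental KV cache quantization from 0.5.0 is a server-side setting; upstream documents it as the OLLAMA_KV_CACHE_TYPE environment variable (f16, q8_0 or q4_0) read by ollama serve.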

ollama.obsinfo

@@ -1,4 +1,4 @@
 name: ollama
-version: 0.4.6
-mtime: 1732743657
-commit: ce7455a8e1045ae12c5eaa9dc5bb5bdc84a098dc
+version: 0.5.1
+mtime: 1733523195
+commit: de52b6c2f90ff220ed9469167d51e3f5d7474fa2

ollama.spec

@@ -17,7 +17,7 @@
 Name:           ollama
-Version:        0.4.6
+Version:        0.5.1
 Release:        0
 Summary:        Tool for running AI models on-premise
 License:        MIT
@@ -32,6 +32,8 @@ BuildRequires:  git
 BuildRequires:  sysuser-tools
 BuildRequires:  zstd
 BuildRequires:  golang(API) >= 1.22
+# 32bit seems not to be supported anymore
+ExcludeArch:    %{ix86} %{arm}
 %sysusers_requires
 %if 0%{?sle_version} == 150600
 BuildRequires:  gcc12-c++
@@ -39,8 +41,6 @@ BuildRequires:  libstdc++6-gcc12
 %else
 BuildRequires:  gcc-c++ >= 11.4.0
 %endif
-# 32bit seems not to be supported anymore
-ExcludeArch:    %ix86 %arm
 %description
 Ollama is a tool for running AI models on one's own hardware.

BIN
vendor.tar.zstd (Stored with Git LFS)

Binary file not shown.