diff --git a/_service b/_service
index 846db95..17cc185 100644
--- a/_service
+++ b/_service
@@ -3,7 +3,7 @@
   <param name="url">https://github.com/ollama/ollama.git</param>
   <param name="scm">git</param>
-  <param name="revision">v0.2.6</param>
+  <param name="revision">v0.2.8</param>
   <param name="versionformat">@PARENT_TAG@</param>
   <param name="versionrewrite-pattern">v(.*)</param>
   <param name="changesgenerate">enable</param>
diff --git a/_servicedata b/_servicedata
index e9fa81a..9a08a9c 100644
--- a/_servicedata
+++ b/_servicedata
@@ -1,4 +1,4 @@
 <servicedata>
 <service name="tar_scm">
   <param name="url">https://github.com/ollama/ollama.git</param>
-  <param name="changesrevision">b2554455572b28c0e18423d6fe6896cf7137dbd6</param></service></servicedata>
\ No newline at end of file
+  <param name="changesrevision">c0648233f2236f82f6830d2aaed552ae0f72379b</param></service></servicedata>
\ No newline at end of file
diff --git a/ollama-0.2.6.obscpio b/ollama-0.2.6.obscpio
deleted file mode 100644
index 1266c77..0000000
--- a/ollama-0.2.6.obscpio
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:391fad97bacee37e8fab00273fd5d5a0a20912fd47c51907131ee1f274c7d2bf
-size 161902606
diff --git a/ollama-0.2.8.obscpio b/ollama-0.2.8.obscpio
new file mode 100644
index 0000000..586056d
--- /dev/null
+++ b/ollama-0.2.8.obscpio
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1dfa7d3fc6d8dc35af4bd9a458a9f22ab613d07c1e5e48db2b2803ff7f77214
+size 151425038
diff --git a/ollama.changes b/ollama.changes
index 2824a0b..29a0647 100644
--- a/ollama.changes
+++ b/ollama.changes
@@ -1,3 +1,29 @@
+-------------------------------------------------------------------
+Wed Jul 24 14:28:08 UTC 2024 - adrian@suse.de
+
+- Update to version 0.2.8:
+  * api embed docs (#5282)
+  * convert: capture `head_dim` for mistral (#5818)
+  * Update llama.cpp submodule commit to `d94c6e0c` (#5805)
+  * server: collect nested tool call objects when parsing (#5824)
+  * Remove no longer supported max vram var
+  * Refine error reporting for subprocess crash
+  * Remove out of space test temporarily (#5825)
+  * llm: consider `head_dim` in llama arch (#5817)
+  * Adjust windows ROCm discovery
+  * add patch for tekken (#5807)
+  * preserve last assistant message (#5802)
+  * Fix generate test flakyness (#5804)
+  * server: validate template (#5734)
+  * OpenAI: Function Based Testing (#5752)
+  * adjust openai chat msg processing (#5729)
+  * fix parsing tool calls
+  * server: check for empty tools array too (#5779)
+  * always provide content even if empty (#5778)
+  * server: only parse tool calls if tools are provided (#5771)
+  * Fix context exhaustion integration test for small gpus
+  * Refine scheduler unit tests for reliability
+
 -------------------------------------------------------------------
 Thu Jul 18 13:09:10 UTC 2024 - Eyad Issa

diff --git a/ollama.obsinfo b/ollama.obsinfo
index 05ebd4d..bcc568f 100644
--- a/ollama.obsinfo
+++ b/ollama.obsinfo
@@ -1,4 +1,4 @@
 name: ollama
-version: 0.2.6
-mtime: 1721255711
-commit: b2554455572b28c0e18423d6fe6896cf7137dbd6
+version: 0.2.8
+mtime: 1721680628
+commit: c0648233f2236f82f6830d2aaed552ae0f72379b
diff --git a/ollama.spec b/ollama.spec
index 642dd45..4962fc8 100644
--- a/ollama.spec
+++ b/ollama.spec
@@ -17,7 +17,7 @@
 Name:           ollama
-Version:        0.2.6
+Version:        0.2.8
 Release:        0
 Summary:        Tool for running AI models on-premise
 License:        MIT
diff --git a/vendor.tar.zstd b/vendor.tar.zstd
index 37a1bf5..9e281da 100644
--- a/vendor.tar.zstd
+++ b/vendor.tar.zstd
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b9dabb1b28321cce2672e5b37eb792e904715539dad5ecabc0eee92d6b0b10e1
-size 5355343
+oid sha256:e4b96cf9ccbb2b5ac6750dd67375daafbdcda9e2db58d0673a2566478b776878
+size 5355002