Accepting request 1169871 from science:machinelearning

OBS-URL: https://build.opensuse.org/request/show/1169871
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=3
Commit c712805838 by Ana Guerrero, 2024-04-23 16:57:20 +00:00 (committed by Git OBS Bridge)
8 changed files with 106 additions and 17 deletions

_service

@@ -4,7 +4,7 @@
<service name="tar_scm" mode="manual">
<param name="url">https://github.com/ollama/ollama.git</param>
<param name="scm">git</param>
-<param name="revision">v0.1.31</param>
+<param name="revision">v0.1.32</param>
<param name="versionformat">@PARENT_TAG@</param>
<param name="versionrewrite-pattern">v(.*)</param>
<param name="changesgenerate">enable</param>
@@ -18,4 +18,5 @@
<service name="go_modules" mode="manual">
<param name="compression">xz</param>
</service>
<service name="set_version" mode="manual" />
</services>
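
All of these services run in mode="manual", so the packager regenerates the sources locally before committing; the newly added set_version service is what updates the spec file to the fetched version. A minimal sketch of that local workflow with osc (assuming osc is installed and the package is checked out; exact output files depend on the service configuration):

    # Run the mode="manual" services: tar_scm fetches the v0.1.32 tag,
    # go_modules rebuilds the Go vendor tarball, set_version bumps the spec Version.
    osc service manualrun
    # Review the regenerated _servicedata, tarballs and spec before committing.
    osc status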

_servicedata

@@ -1,4 +1,4 @@
<servicedata>
<service name="tar_scm">
<param name="url">https://github.com/ollama/ollama.git</param>
<param name="changesrevision">dc011d16b9ff160c0be3829fc39a43054f0315d0</param></service></servicedata>
<param name="changesrevision">fb9580df85c562295d919b6c2632117d3d8cea89</param></service></servicedata>

enable-lto.patch

@@ -1,7 +1,7 @@
-diff -rub ollama-0.1.27/llm/generate/gen_linux.sh ollama-0.1.27-patched/llm/generate/gen_linux.sh
---- ollama-0.1.27/llm/generate/gen_linux.sh 2024-02-22 23:41:43.000000000 +0100
-+++ ollama-0.1.27-patched/llm/generate/gen_linux.sh 2024-02-25 03:16:43.566940450 +0100
-@@ -48,7 +48,7 @@
+diff -rub ollama/llm/generate/gen_linux.sh ollama-patched/llm/generate/gen_linux.sh
+--- ollama/llm/generate/gen_linux.sh 2024-04-23 04:40:58.246062467 +0200
++++ ollama-patched/llm/generate/gen_linux.sh 2024-04-23 04:37:36.432294889 +0200
+@@ -51,7 +51,7 @@
export CUDACXX=$(command -v nvcc)
fi
fi
@@ -10,19 +10,19 @@ diff -rub ollama-0.1.27/llm/generate/gen_linux.sh ollama-0.1.27-patched/llm/gene
source $(dirname $0)/gen_common.sh
init_vars
git_module_setup
-@@ -59,7 +59,7 @@
+@@ -77,7 +77,7 @@
# llama.cpp, and we'll build only 1 CPU variant in that case as the default.
if [ -n "${OLLAMA_CUSTOM_CPU_DEFS}" ]; then
init_vars
echo "OLLAMA_CUSTOM_CPU_DEFS=\"${OLLAMA_CUSTOM_CPU_DEFS}\""
- CMAKE_DEFS="${OLLAMA_CUSTOM_CPU_DEFS} -DCMAKE_POSITION_INDEPENDENT_CODE=on ${CMAKE_DEFS}"
+ CMAKE_DEFS="${OLLAMA_CUSTOM_CPU_DEFS} -DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_LTO=on -DCMAKE_BUILD_TYPE=Release ${CMAKE_DEFS}"
BUILD_DIR="${LLAMACPP_DIR}/build/linux/${ARCH}/cpu"
BUILD_DIR="../build/linux/${ARCH}/cpu"
echo "Building custom CPU"
build
-@@ -75,7 +75,7 @@
+@@ -93,7 +93,7 @@
# -DLLAMA_AVX512_VBMI -- 2018 Intel Cannon Lake
# -DLLAMA_AVX512_VNNI -- 2021 Intel Alder Lake
- COMMON_CPU_DEFS="-DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off"
+ COMMON_CPU_DEFS="-DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_LTO=on -DCMAKE_BUILD_TYPE=Release -DLLAMA_NATIVE=off"
if [ -z "${OLLAMA_CPU_TARGET}" -o "${OLLAMA_CPU_TARGET}" = "cpu" ]; then

ollama-0.1.31.tar.gz Deleted file

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:2e55ac3bbb965f56c6920b793f254e814f4bf5fea77c81e8d8d867e850b15394
-size 80992137

ollama-0.1.32.tar.gz Normal file

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69b648bcafa46320c876a83a817f4fc4ed6c8a8acc961d62f4adb017fa7ad053
+size 70152034

ollama.changes

@@ -1,3 +1,91 @@
-------------------------------------------------------------------
Tue Apr 23 02:26:34 UTC 2024 - rrahl0@disroot.org
- Update to version 0.1.32:
* scale graph based on gpu count
* Support unicode characters in model path (#3681)
* darwin: no partial offloading if required memory greater than system
* update llama.cpp submodule to `7593639` (#3665)
* fix padding in decode
* Revert "cmd: provide feedback if OLLAMA_MODELS is set on non-serve command (#3470)" (#3662)
* Added Solar example at README.md (#3610)
* Update langchainjs.md (#2030)
* Added MindsDB information (#3595)
* examples: add more Go examples using the API (#3599)
* Update modelfile.md
* Add llama2 / torch models for `ollama create` (#3607)
* Terminate subprocess if receiving `SIGINT` or `SIGTERM` signals while model is loading (#3653)
* app: gracefully shut down `ollama serve` on windows (#3641)
* types/model: add path helpers (#3619)
* update llama.cpp submodule to `4bd0f93` (#3627)
* types/model: make ParseName variants less confusing (#3617)
* types/model: remove (*Digest).Scan and Digest.Value (#3605)
* Fix rocm deps with new subprocess paths
* mixtral mem
* Revert "types/model: remove (*Digest).Scan and Digest.Value (#3589)"
* types/model: remove (*Digest).Scan and Digest.Value (#3589)
* types/model: remove DisplayLong (#3587)
* types/model: remove MarshalText/UnmarshalText from Digest (#3586)
* types/model: init with Name and Digest types (#3541)
* server: provide helpful workaround hint when stalling on pull (#3584)
* partial offloading
* refactor tensor query
* api: start adding documentation to package api (#2878)
* examples: start adding Go examples using api/ (#2879)
* Handle very slow model loads
* fix: rope
* Revert "build.go: introduce a friendlier way to build Ollama (#3548)" (#3564)
* build.go: introduce a friendlier way to build Ollama (#3548)
* update llama.cpp submodule to `1b67731` (#3561)
* ci: use go-version-file
* Correct directory reference in macapp/README (#3555)
* cgo quantize
* no blob create if already exists
* update generate scripts with new `LLAMA_CUDA` variable, set `HIP_PLATFORM` to avoid compiler errors (#3528)
* Docs: Remove wrong parameter for Chat Completion (#3515)
* no rope parameters
* add command-r graph estimate
* Fail fast if mingw missing on windows
* use an older version of the mac os sdk in release (#3484)
* Add test case for context exhaustion
* CI missing archive
* fix dll compress in windows building
* CI subprocess path fix
* Fix CI release glitches
* update graph size estimate
* Fix macOS builds on older SDKs (#3467)
* cmd: provide feedback if OLLAMA_MODELS is set on non-serve command (#3470)
* feat: add OLLAMA_DEBUG in ollama server help message (#3461)
* Revert options as a ref in the server
* default head_kv to 1
* fix metal gpu
* Bump to b2581
* Refined min memory from testing
* Release gpu discovery library after use
* Safeguard for noexec
* Detect too-old cuda driver
* Integration test improvements
* Apply 01-cache.diff
* Switch back to subprocessing for llama.cpp
* Simplify model conversion (#3422)
* fix generate output
* update memory calculations
* refactor model parsing
* Add chromem-go to community integrations (#3437)
* Update README.md (#3436)
* Community Integration: CRAG Ollama Chat (#3423)
* Update README.md (#3378)
* Community Integration: ChatOllama (#3400)
* Update 90_bug_report.yml
* Add gemma safetensors conversion (#3250)
* CI automation for tagging latest images
* Bump ROCm to 6.0.2 patch release
* CI windows gpu builds
* Update troubleshooting link
* fix: trim quotes on OLLAMA_ORIGINS
- add set_version to automatically switch over to the newer version
-------------------------------------------------------------------
Tue Apr 16 10:52:25 UTC 2024 - bwiedemann@suse.com

ollama.spec

@@ -17,7 +17,7 @@
Name: ollama
-Version: 0.1.31
+Version: 0.1.32
Release: 0
Summary: Tool for running AI models on-premise
License: MIT

vendor.tar.xz

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:924b704ac695115d54330397b6d02d737de88d5f67fa760fd1065521357193fd
-size 3656224
+oid sha256:26f50ef1d227317f77b0a68eb9672f407c3bdd15ffcd3bf6011afdf9b7d3b5ff
+size 3669792