- Update to version 0.1.28:
  * Fix embeddings load model behavior (#2848)
  * Add Community Integration: NextChat (#2780)
  * prepend image tags (#2789)
  * fix: print usedMemory size right (#2827)
  * bump submodule to `87c91c07663b707e831c59ec373b5e665ff9d64a` (#2828)
  * Add ollama user to video group
  * Add env var so podman will map cuda GPUs
  * Omit build date from gzip headers
  * Log unexpected server errors checking for update
  * Refine container image build script
  * Bump llama.cpp to b2276
  * Determine max VRAM on macOS using `recommendedMaxWorkingSetSize` (#2354)
  * Update types.go (#2744)
  * Update langchain python tutorial (#2737)
  * no extra disk space for windows installation (#2739)
  * clean up go.mod
  * remove format/openssh.go
  * Add Community Integration: Chatbox
  * better directory cleanup in `ollama.iss`
  * restore windows build flags and compression

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=6
commit 9c6d1dfa92
parent 3e267704d5

_service (2 changed lines)

@@ -4,7 +4,7 @@
 <service name="tar_scm" mode="manual">
 <param name="url">https://github.com/ollama/ollama.git</param>
 <param name="scm">git</param>
-<param name="revision">v0.1.27</param>
+<param name="revision">v0.1.28</param>
 <param name="versionformat">@PARENT_TAG@</param>
 <param name="versionrewrite-pattern">v(.*)</param>
 <param name="changesgenerate">enable</param>
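
For reference, the tar_scm service block in _service should read roughly as follows once this change is applied. The enclosing <services> element and the closing </service> tag are the standard _service layout and are assumed here, since they fall outside the hunk above; any further services defined in the file are likewise omitted.

<services>
  <service name="tar_scm" mode="manual">
    <param name="url">https://github.com/ollama/ollama.git</param>
    <param name="scm">git</param>
    <param name="revision">v0.1.28</param>
    <param name="versionformat">@PARENT_TAG@</param>
    <param name="versionrewrite-pattern">v(.*)</param>
    <param name="changesgenerate">enable</param>
  </service>
</services>

The versionrewrite-pattern strips the leading "v" from the tag, which is why the generated archive is named ollama-0.1.28.tar.gz. Because changesgenerate is enabled, rerunning the manual services (typically with osc service manualrun) regenerates the tarball from the v0.1.28 tag and drafts the changelog entry shown below from the upstream commit subjects.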

ollama-0.1.28.tar.gz (new file, 3 added lines)

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30225f7d1a96b8a573e82810584950ed2f9c95dcd2157d794c278ca44a43861b
+size 75624882

@@ -1,3 +1,15 @@
+-------------------------------------------------------------------
+Wed Mar 06 23:51:28 UTC 2024 - computersemiexpert@outlook.com
+
+- Update to version 0.1.28:
+  * Fix embeddings load model behavior (#2848)
+  * Add Community Integration: NextChat (#2780)
+  * prepend image tags (#2789)
+  * fix: print usedMemory size right (#2827)
+  * bump submodule to `87c91c07663b707e831c59ec373b5e665ff9d64a` (#2828)
+  * Add ollama user to video group
+  * Add env var so podman will map cuda GPUs
+
 -------------------------------------------------------------------
 Tue Feb 27 08:33:15 UTC 2024 - Jan Engelhardt <jengelh@inai.de>
 

@@ -15,8 +15,9 @@
 # Please submit bugfixes or comments via https://bugs.opensuse.org/
 #
 
+
 Name: ollama
-Version: 0.1.27
+Version: 0.1.28
 Release: 0
 Summary: Tool for running AI models on-premise
 License: MIT

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1668fa3db9f05fbb58eaf3e9200bd23ac93991cdff56234fac154296acc4e419
-size 2995404
+oid sha256:548f8d5870f6b0b2881f5f68ef3f45c8b77bba282e10dc1aecafe14396213327
+size 2993296