Accepting request 1173462 from science:machinelearning
OBS-URL: https://build.opensuse.org/request/show/1173462
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=4
commit cc201b9d5d
_service
@@ -4,7 +4,7 @@
   <service name="tar_scm" mode="manual">
     <param name="url">https://github.com/ollama/ollama.git</param>
     <param name="scm">git</param>
-    <param name="revision">v0.1.32</param>
+    <param name="revision">v0.1.36</param>
     <param name="versionformat">@PARENT_TAG@</param>
     <param name="versionrewrite-pattern">v(.*)</param>
     <param name="changesgenerate">enable</param>
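The service runs in manual mode, so the tarball is regenerated locally rather than at build time; with changesgenerate enabled, tar_scm also records the fetched commit in _servicedata (the changesrevision updated below). A minimal sketch of the maintainer workflow, assuming a local osc checkout of the package:

    # after bumping <param name="revision"> to v0.1.36 in _service:
    osc checkout science:machinelearning/ollama
    cd science:machinelearning/ollama
    osc service manualrun    # runs the mode="manual" services: fetches the
                             # tag, rebuilds the tarball, updates _servicedata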
_servicedata
@@ -1,4 +1,4 @@
 <servicedata>
   <service name="tar_scm">
     <param name="url">https://github.com/ollama/ollama.git</param>
-    <param name="changesrevision">fb9580df85c562295d919b6c2632117d3d8cea89</param></service></servicedata>
+    <param name="changesrevision">92ca2cca954e590abe5eecb0a87fa13cec83b0e1</param></service></servicedata>
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:69b648bcafa46320c876a83a817f4fc4ed6c8a8acc961d62f4adb017fa7ad053
-size 70152034
ollama-0.1.36.tar.gz (new file)
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:285ea18c73f9d8cbebd19ed429fb691a84853fa06366a6d206e74a9a5cfd2243
+size 87336304
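Both source archives are stored as Git LFS pointers, where the oid is the SHA-256 digest of the real tarball; after pulling the LFS object, the archive can be checked against the pointer, e.g.:

    # the digest must match the oid recorded in the pointer file
    sha256sum ollama-0.1.36.tar.gz
    # expected: 285ea18c73f9d8cbebd19ed429fb691a84853fa06366a6d206e74a9a5cfd2243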
ollama.changes
@@ -1,3 +1,71 @@
-------------------------------------------------------------------
Sun May 12 01:39:26 UTC 2024 - Eyad Issa <eyadlorenzo@gmail.com>

- Update to version 0.1.36:
  * Fixed exit status 0xc0000005 error with AMD graphics cards on
    Windows
  * Fixed rare out of memory errors when loading a model to run
    with CPU

- Update to version 0.1.35:
  * New model: Llama 3 ChatQA, a model from NVIDIA based on Llama 3
    that excels at conversational question answering (QA) and
    retrieval-augmented generation (RAG)
  * Quantization: ollama create can now quantize models when
    importing them, using the --quantize or -q flag (see the sketch
    after this entry)
  * Fixed issue where inference subprocesses wouldn't be cleaned up
    on shutdown
  * Fixed a series of out of memory errors when loading models on
    multi-GPU systems
  * Ctrl+J characters will now properly add newlines in ollama run
  * Fixed issues when running ollama show for vision models
  * OPTIONS requests to the Ollama API will no longer result in
    errors
  * Fixed issue where partially downloaded files wouldn't be
    cleaned up
  * Added a new done_reason field in responses, describing why
    generation stopped (example below)
  * Ollama will now more accurately estimate how much memory is
    available on multi-GPU systems, especially when running
    different models one after another
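A minimal sketch of the new quantization flag, assuming a Modelfile that points at a local unquantized GGUF checkpoint (file and model names here are hypothetical):

    # Modelfile contains: FROM ./my-model-f16.gguf
    ollama create my-model:q4 --quantize q4_K_M -f Modelfile

And a sketch of where done_reason appears, querying the generate endpoint on the default port:

    curl http://localhost:11434/api/generate -d '{
      "model": "llama3", "prompt": "Why is the sky blue?", "stream": false
    }'
    # the final JSON object now includes "done_reason", e.g. "stop" when
    # generation ended naturally or "length" when the context filled up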

- Update to version 0.1.34:
  * New model: Llava Llama 3
  * New model: Llava Phi 3
  * New model: StarCoder2 15B Instruct
  * New model: CodeGemma 1.1
  * New model: StableLM2 12B
  * New model: Moondream 2
  * Fixed issues with LLaVA models where they would respond
    incorrectly after the first request
  * Fixed out of memory errors when running large models such as
    Llama 3 70B
  * Fixed various issues with Nvidia GPU discovery on Linux and
    Windows
  * Fixed a series of Modelfile errors when running ollama create
  * Fixed "no slots available" error that occurred when cancelling
    a request and then sending follow-up requests
  * Improved AMD GPU detection on Fedora
  * Improved reliability when using the experimental
    OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED flags
  * ollama serve will now shut down quickly, even if a model is
    loading

- Update to version 0.1.33:
  * New model: Llama 3
  * New model: Phi 3 Mini
  * New model: Moondream
  * New model: Llama 3 Gradient 1048K
  * New model: Dolphin Llama 3
  * New model: Qwen 110B
  * Fixed issues where the model would not terminate, causing the
    API to hang
  * Fixed a series of out of memory errors on Apple Silicon Macs
  * Fixed out of memory errors when running Mixtral architecture
    models
  * Added experimental concurrency features (see the sketch after
    this entry):
    ~ OLLAMA_NUM_PARALLEL: Handle multiple requests simultaneously
      for a single model
    ~ OLLAMA_MAX_LOADED_MODELS: Load multiple models simultaneously
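A sketch of the experimental flags from this release, set in the environment of the server process (the values are illustrative):

    # handle up to 4 concurrent requests per model, keep 2 models loaded
    OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=2 ollama serve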

-------------------------------------------------------------------
Tue Apr 23 02:26:34 UTC 2024 - rrahl0@disroot.org
ollama.spec
@@ -17,7 +17,7 @@
 
 
 Name:           ollama
-Version:        0.1.32
+Version:        0.1.36
 Release:        0
 Summary:        Tool for running AI models on-premise
 License:        MIT
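A quick local check that the version bump is consistent, assuming rpm's spec parser is installed:

    # print the Version: tag exactly as the build service will parse it
    rpmspec -q --srpm --qf '%{version}\n' ollama.spec
    # expected: 0.1.36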
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:26f50ef1d227317f77b0a68eb9672f407c3bdd15ffcd3bf6011afdf9b7d3b5ff
-size 3669792
+oid sha256:21390f2f5bbd12b7a6c134b3ced1bafe76b929f85077e273d6c8f378cb156eb2
+size 4310640