Commit Graph

5 Commits

Author SHA256 Message Date
Loren Burkholder
32d8d25838 Accepting request 1174682 from home:VaiTon:branches:science:machinelearning
- Update to version 0.1.38:
  * New model: Falcon 2: A new 11B-parameter causal decoder-only
    model built by TII and trained on 5T tokens.
  * New model: Yi 1.5: A new high-performing version of Yi, now
    licensed under Apache 2.0. Available in 6B, 9B, and 34B sizes.
  * Added ollama ps command
  * Added /clear command
  * Fixed issue where switching loaded models on Windows would take
    several seconds
  * Running /save will no longer abort the chat session if an
    incorrect name is provided
  * The /api/tags API endpoint will now correctly return an empty
    list [] instead of null if no models are installed (see the
    sketch after this list)
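
As a quick illustration of the /api/tags change above, the sketch below queries a locally running Ollama server and lists the installed models. It is illustrative only, not taken from the changelog: it assumes the default listen address 127.0.0.1:11434 and decodes only the "name" field of each entry.

```go
// Minimal sketch: list installed models via Ollama's /api/tags endpoint.
// With 0.1.38 the endpoint returns {"models": []} instead of null when no
// models are installed, so clients that distinguish the two cases no
// longer need a special case for null.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type tagsResponse struct {
	Models []struct {
		Name string `json:"name"`
	} `json:"models"`
}

func main() {
	resp, err := http.Get("http://127.0.0.1:11434/api/tags")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tags tagsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tags); err != nil {
		panic(err)
	}

	fmt.Printf("%d model(s) installed\n", len(tags.Models))
	for _, m := range tags.Models {
		fmt.Println(m.Name)
	}
}
```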

OBS-URL: https://build.opensuse.org/request/show/1174682
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=15
2024-05-16 20:34:11 +00:00
Loren Burkholder
cf9cf2a4df Accepting request 1173521 from home:VaiTon:branches:science:machinelearning
- Update to version 0.1.37:
  * Fixed issue where models with uppercase characters in the name
    would not show with ollama list
  * Fixed usage string for ollama create
  * Fixed finish_reason being "" instead of null in the OpenAI-
    compatible chat API (sketch below)
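
To show what the finish_reason fix means for clients, here is a minimal, illustrative Go sketch against Ollama's OpenAI-compatible chat endpoint; the *string field distinguishes a JSON null from an empty string "". It assumes the default server address and an installed model named "llama3" (a hypothetical choice), and is not part of the packaged sources.

```go
// Minimal sketch: call the OpenAI-compatible /v1/chat/completions
// endpoint and inspect finish_reason. Decoding into *string keeps the
// null-vs-"" distinction that the 0.1.37 fix restores.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body := []byte(`{"model": "llama3", "messages": [{"role": "user", "content": "Say hi"}]}`)
	resp, err := http.Post("http://127.0.0.1:11434/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Choices []struct {
			FinishReason *string `json:"finish_reason"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}

	for _, c := range out.Choices {
		if c.FinishReason != nil {
			fmt.Println("finish_reason:", *c.FinishReason)
		} else {
			fmt.Println("finish_reason: null")
		}
	}
}
```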

- Use obs_scm service instead of the deprecated tar_scm
- Use zstd for vendor tarball compression

OBS-URL: https://build.opensuse.org/request/show/1173521
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=13
2024-05-13 03:27:47 +00:00
Loren Burkholder
dfc1a9fa3a Accepting request 1173461 from home:VaiTon:branches:science:machinelearning
- Update to version 0.1.36:
- Update to version 0.1.35:
- Update to version 0.1.34:

OBS-URL: https://build.opensuse.org/request/show/1173461
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=11
2024-05-12 01:58:52 +00:00
Loren Burkholder
a4111a1692 Accepting request 1169791 from home:rrahl0:branches:science:machinelearning
- Update to version 0.1.32:
  * scale graph based on gpu count
  * Support unicode characters in model path (#3681)
  * darwin: no partial offloading if required memory greater than system
  * update llama.cpp submodule to `7593639` (#3665)
  * fix padding in decode
  * Revert "cmd: provide feedback if OLLAMA_MODELS is set on non-serve command (#3470)" (#3662)
  * Added Solar example at README.md (#3610)
  * Update langchainjs.md (#2030)
  * Added MindsDB information (#3595)
  * examples: add more Go examples using the API (#3599)
  * Update modelfile.md
  * Add llama2 / torch models for `ollama create` (#3607)
  * Terminate subprocess if receiving `SIGINT` or `SIGTERM` signals while model is loading (#3653)
  * app: gracefully shut down `ollama serve` on windows (#3641)
  * types/model: add path helpers (#3619)
  * update llama.cpp submodule to `4bd0f93` (#3627)
  * types/model: make ParseName variants less confusing (#3617)
  * types/model: remove (*Digest).Scan and Digest.Value (#3605)
  * Fix rocm deps with new subprocess paths
  * mixtral mem
  * Revert "types/model: remove (*Digest).Scan and Digest.Value (#3589)"
  * types/model: remove (*Digest).Scan and Digest.Value (#3589)
  * types/model: remove DisplayLong (#3587)
  * types/model: remove MarshalText/UnmarshalText from Digest (#3586)
  * types/model: init with Name and Digest types (#3541)
  * server: provide helpful workaround hint when stalling on pull (#3584)
  * partial offloading
  * refactor tensor query
  * api: start adding documentation to package api (#2878)

OBS-URL: https://build.opensuse.org/request/show/1169791
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=9
2024-04-23 12:00:24 +00:00
Loren Burkholder
8ef2b26afe Accepting request 1168020 from home:bmwiedemann:branches:science:machinelearning
- Update to version 0.1.31:
  * Backport MacOS SDK fix from main
  * Apply 01-cache.diff
  * fix: workflows
  * stub stub
  * mangle arch
  * only generate on changes to llm subdirectory
  * only generate cuda/rocm when changes to llm detected
  * Detect arrow keys on windows (#3363)
  * add license in file header for vendored llama.cpp code (#3351)
  * remove need for `$VSINSTALLDIR` since build will fail if `ninja` cannot be found (#3350)
  * change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347)
  * malformed markdown link (#3358)
  * Switch runner for final release job
  * Use Rocky Linux Vault to get GCC 10.2 installed
  * Revert "Switch arm cuda base image to centos 7"
  * Switch arm cuda base image to centos 7
  * Bump llama.cpp to b2527
  * Fix ROCm link in `development.md`
  * adds ooo to community integrations (#1623)
  * Add cliobot to ollama supported list (#1873)
  * Add Dify.AI to community integrations (#1944)
  * enh: add ollero.nvim to community applications (#1905)
  * Add typechat-cli to Terminal apps (#2428)
  * add new Web & Desktop link in readme for alpaca webui (#2881)
  * Add LibreChat to Web & Desktop Apps (#2918)
  * Add Community Integration: OllamaGUI (#2927)
  * Add Community Integration: OpenAOE (#2946)
  * Add Saddle (#3178)
  * tlm added to README.md terminal section. (#3274)
...

OBS-URL: https://build.opensuse.org/request/show/1168020
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=7
2024-04-17 00:53:52 +00:00