ollama/ollama.obsinfo
Eyad Issa da3e66a886 - Update to version 0.4.0-rc6:
  * Refine default thread selection for NUMA systems (#7322)
  * runner.go: Better abstract vision model integration
  * Soften windows clang requirement (#7428)
  * Remove submodule and shift to Go server - 0.4.0  (#7157)
  * Move windows app out of preview (#7347)
  * windows: Support alt install paths, fit and finish (#6967)
  * add more tests for getting the optimal tiled canvas (#7411)
  * Switch windows to clang (#7407)
  * tests: Add test for Unicode processing
  * runner.go: Better handle return NULL values from llama.cpp
  * add mllama image processing to the generate handler (#7384)
  * Bump to latest Go 1.22 patch (#7379)
  * Fix deepseek deseret regex (#7369)
  * Better support for AMD multi-GPU on linux (#7212)
  * Fix unicode output on windows with redirect to file (#7358)
  * Fix incremental build file deps (#7361)
  * Improve dependency gathering logic (#7345)
  * fix #7247 - invalid image input (#7249)
  * integration: harden embedding test (#7306)
  * default to "FROM ." if a Modelfile isn't present (#7250)
  * Fix rocm windows build and clean up dependency gathering (#7305)
  * runner.go: Merge partial unicode characters before sending
  * readme: add Ollama for Swift to the community integrations (#7295)
  * server: allow vscode-webview origin (#7273)
  * image processing for llama3.2 (#6963)
  * llama: Decouple patching script from submodule (#7139)
  * llama: add compiler tags for cpu features (#7137)
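
The #7250 entry above describes a convenience fallback: if the directory passed to model creation contains no Modelfile, creation proceeds as if the Modelfile were simply "FROM .", using the directory contents as the model source. The Go sketch below illustrates that fallback under those assumptions; modelfileOrDefault is a hypothetical helper written for this note, not Ollama's actual implementation.

package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// modelfileOrDefault returns the contents of dir/Modelfile, or the
// assumed default "FROM ." when no Modelfile exists (per #7250).
func modelfileOrDefault(dir string) (string, error) {
	data, err := os.ReadFile(filepath.Join(dir, "Modelfile"))
	if errors.Is(err, os.ErrNotExist) {
		return "FROM .\n", nil
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	content, err := modelfileOrDefault(".")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(content)
}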

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=59
2024-11-01 02:20:51 +00:00

name: ollama
version: 0.4.0-rc6
mtime: 1730325945
commit: 16f4eabe2d409b2b8a6e50fa08c8ce3a2a3b18d1
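
The "key: value" lines above are the .obsinfo metadata that accompanies the fetched source: package name, version, source mtime (Unix seconds), and the upstream git commit. As a minimal sketch, assuming this simple one-pair-per-line layout, the Go program below reads such a file into a map; parseObsinfo is a hypothetical helper for illustration, not part of the OBS tooling.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseObsinfo reads a .obsinfo file, assumed to hold one "key: value"
// pair per line, into a map keyed by the field name.
func parseObsinfo(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	info := make(map[string]string)
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" {
			continue
		}
		key, value, ok := strings.Cut(line, ":")
		if !ok {
			continue // skip anything that is not a key: value pair
		}
		info[strings.TrimSpace(key)] = strings.TrimSpace(value)
	}
	return info, scanner.Err()
}

func main() {
	info, err := parseObsinfo("ollama.obsinfo")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s %s (commit %s, mtime %s)\n",
		info["name"], info["version"], info["commit"], info["mtime"])
}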