ollama/_servicedata

- Update to version 0.4.0-rc6:
  * Refine default thread selection for NUMA systems (#7322)
  * runner.go: Better abstract vision model integration
  * Soften windows clang requirement (#7428)
  * Remove submodule and shift to Go server - 0.4.0 (#7157)
  * Move windows app out of preview (#7347)
  * windows: Support alt install paths, fit and finish (#6967)
  * add more tests for getting the optimal tiled canvas (#7411)
  * Switch windows to clang (#7407)
  * tests: Add test for Unicode processing
  * runner.go: Better handle return NULL values from llama.cpp
  * add mllama image processing to the generate handler (#7384)
  * Bump to latest Go 1.22 patch (#7379)
  * Fix deepseek deseret regex (#7369)
  * Better support for AMD multi-GPU on linux (#7212)
  * Fix unicode output on windows with redirect to file (#7358)
  * Fix incremental build file deps (#7361)
  * Improve dependency gathering logic (#7345)
  * fix #7247 - invalid image input (#7249)
  * integration: harden embedding test (#7306)
  * default to "FROM ." if a Modelfile isn't present (#7250)
  * Fix rocm windows build and clean up dependency gathering (#7305)
  * runner.go: Merge partial unicode characters before sending
  * readme: add Ollama for Swift to the community integrations (#7295)
  * server: allow vscode-webview origin (#7273)
  * image processing for llama3.2 (#6963)
  * llama: Decouple patching script from submodule (#7139)
  * llama: add compiler tags for cpu features (#7137)
  OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=59
2024-11-01 03:20:51 +01:00
<servicedata>
  <service name="tar_scm">
    <param name="url">https://github.com/ollama/ollama.git</param>
    <param name="changesrevision">9d71bcc3e2a97c8e62d758450f43aa212346410e</param>
  </service>
</servicedata>