pool/ollama
808a0b582d6df2b29f35a5a774c3fdf2ae2dcf5bcfc1c18cd53129e8579ce8cc
ollama/_servicedata


- Update to version 0.2.8:
  * api embed docs (#5282)
  * convert: capture `head_dim` for mistral (#5818)
  * Update llama.cpp submodule commit to `d94c6e0c` (#5805)
  * server: collect nested tool call objects when parsing (#5824)
  * Remove no longer supported max vram var
  * Refine error reporting for subprocess crash
  * Remove out of space test temporarily (#5825)
  * llm: consider `head_dim` in llama arch (#5817)
  * Adjust windows ROCm discovery
  * add patch for tekken (#5807)
  * preserve last assistant message (#5802)
  * Fix generate test flakyness (#5804)
  * server: validate template (#5734)
  * OpenAI: Function Based Testing (#5752)
  * adjust openai chat msg processing (#5729)
  * fix parsing tool calls
  * server: check for empty tools array too (#5779)
  * always provide content even if empty (#5778)
  * server: only parse tool calls if tools are provided (#5771)
  * Fix context exhaustion integration test for small gpus
  * Refine scheduler unit tests for reliability
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=37
2024-07-25 11:03:50 +00:00
<servicedata>
  <service name="tar_scm">
    <param name="url">https://github.com/ollama/ollama.git</param>
    <param name="changesrevision">c0648233f2236f82f6830d2aaed552ae0f72379b</param>
  </service>
</servicedata>
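The `<servicedata>` fragment above is maintained automatically by the OBS `tar_scm` source service: it records the last Git revision fetched from the upstream URL in `changesrevision`, so later service runs can detect new upstream commits and generate changelog entries. A minimal companion `_service` file driving such a setup could look like the following sketch; the `revision` and `versionformat` values here are assumptions for illustration, not taken from this package:

```xml
<services>
  <!-- Fetch the upstream Git repository and pack it into a tarball -->
  <service name="tar_scm">
    <param name="scm">git</param>
    <param name="url">https://github.com/ollama/ollama.git</param>
    <!-- Branch/tag to track; this package tracks upstream releases -->
    <param name="revision">main</param>
    <!-- Derive the package version from the nearest Git tag -->
    <param name="versionformat">@PARENT_TAG@</param>
  </service>
  <!-- Compress the resulting tarball -->
  <service name="recompress">
    <param name="file">*.tar</param>
    <param name="compression">gz</param>
  </service>
</services>
```

When the service runs, `tar_scm` writes the fetched commit hash back into `_servicedata`, which is why the file is committed alongside the package sources.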