ollama/_service
Loren Burkholder 9c6d1dfa92 - Update to version 0.1.28:
  * Fix embeddings load model behavior (#2848)
  * Add Community Integration: NextChat (#2780)
  * prepend image tags (#2789)
  * fix: print usedMemory size right (#2827)
  * bump submodule to `87c91c07663b707e831c59ec373b5e665ff9d64a` (#2828)
  * Add ollama user to video group
  * Add env var so podman will map cuda GPUs
  * Omit build date from gzip headers
  * Log unexpected server errors checking for update
  * Refine container image build script
  * Bump llama.cpp to b2276
  * Determine max VRAM on macOS using `recommendedMaxWorkingSetSize` (#2354)
  * Update types.go (#2744)
  * Update langchain python tutorial (#2737)
  * no extra disk space for windows installation (#2739)
  * clean up go.mod
  * remove format/openssh.go
  * Add Community Integration: Chatbox
  * better directory cleanup in `ollama.iss`
  * restore windows build flags and compression

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=6
2024-03-06 23:53:38 +00:00


<services>
  <service name="format_spec_file" mode="manual">
  </service>
  <service name="tar_scm" mode="manual">
    <param name="url">https://github.com/ollama/ollama.git</param>
    <param name="scm">git</param>
    <param name="revision">v0.1.28</param>
    <param name="versionformat">@PARENT_TAG@</param>
    <param name="versionrewrite-pattern">v(.*)</param>
    <param name="changesgenerate">enable</param>
    <param name="submodules">enable</param>
    <param name="package-meta">yes</param>
  </service>
  <service name="recompress" mode="manual">
    <param name="file">*.tar</param>
    <param name="compression">gz</param>
  </service>
  <service name="go_modules" mode="manual">
    <param name="compression">xz</param>
  </service>
</services>
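
Because every service above is declared with mode="manual", none of them run automatically on the OBS server; a maintainer triggers them in a local checkout with the osc CLI. A minimal sketch of that workflow (assuming osc is installed and configured for build.opensuse.org; project and package names are taken from the OBS-URL above):

```shell
# Check out the package from the science:machinelearning project
osc checkout science:machinelearning/ollama
cd science:machinelearning/ollama

# Run all services listed in _service, including mode="manual" ones:
# tar_scm fetches the v0.1.28 tag (with submodules) and regenerates
# the .changes entries, recompress turns the resulting tarball into
# a .tar.gz, and go_modules vendors the Go dependencies as vendor.tar.xz
osc service runall

# Review the generated sources, then commit them to OBS
osc status
osc commit -m "Update to version 0.1.28"
```

The manual mode is a deliberate choice here: fetching a git tag, vendoring Go modules, and rewriting the changelog are expensive and only need to happen when the packager bumps the revision parameter, not on every server-side rebuild.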