-------------------------------------------------------------------
Wed Mar 06 23:51:28 UTC 2024 - computersemiexpert@outlook.com

- Update to version 0.1.28:
  * Fix embeddings load model behavior (#2848)
  * Add Community Integration: NextChat (#2780)
  * prepend image tags (#2789)
  * fix: print usedMemory size right (#2827)
  * bump submodule to `87c91c07663b707e831c59ec373b5e665ff9d64a` (#2828)
  * Add ollama user to video group
  * Add env var so podman will map cuda GPUs
-------------------------------------------------------------------
Tue Feb 27 08:33:15 UTC 2024 - Jan Engelhardt <jengelh@inai.de>

- Edit description, answer _what_ the package is and use nominal
  phrase. (https://en.opensuse.org/openSUSE:Package_description_guidelines)
-------------------------------------------------------------------
Fri Feb 23 21:13:53 UTC 2024 - Loren Burkholder <computersemiexpert@outlook.com>

- Added the Ollama package
- Included a systemd service