forked from pool/ollama
9c6d1dfa92
* Fix embeddings load model behavior (#2848)
* Add Community Integration: NextChat (#2780)
* prepend image tags (#2789)
* fix: print usedMemory size right (#2827)
* bump submodule to `87c91c07663b707e831c59ec373b5e665ff9d64a` (#2828)
* Add ollama user to video group
* Add env var so podman will map cuda GPUs
* Omit build date from gzip headers
* Log unexpected server errors checking for update
* Refine container image build script
* Bump llama.cpp to b2276
* Determine max VRAM on macOS using `recommendedMaxWorkingSetSize` (#2354)
* Update types.go (#2744)
* Update langchain python tutorial (#2737)
* no extra disk space for windows installation (#2739)
* clean up go.mod
* remove format/openssh.go
* Add Community Integration: Chatbox
* better directory cleanup in `ollama.iss`
* restore windows build flags and compression

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=6
_service
.gitattributes
.gitignore
enable-lto.patch
ollama-0.1.27.tar.gz
ollama-0.1.28.tar.gz
ollama-user.conf
ollama.changes
ollama.service
ollama.spec
vendor.tar.xz