forked from pool/ollama
* Fix embeddings load model behavior (#2848)
* Add Community Integration: NextChat (#2780)
* prepend image tags (#2789)
* fix: print usedMemory size right (#2827)
* bump submodule to `87c91c07663b707e831c59ec373b5e665ff9d64a` (#2828)
* Add ollama user to video group
* Add env var so podman will map cuda GPUs
* Omit build date from gzip headers
* Log unexpected server errors checking for update
* Refine container image build script
* Bump llama.cpp to b2276
* Determine max VRAM on macOS using `recommendedMaxWorkingSetSize` (#2354)
* Update types.go (#2744)
* Update langchain python tutorial (#2737)
* no extra disk space for windows installation (#2739)
* clean up go.mod
* remove format/openssh.go
* Add Community Integration: Chatbox
* better directory cleanup in `ollama.iss`
* restore windows build flags and compression

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=6
4 lines
133 B
Plaintext
version https://git-lfs.github.com/spec/v1
oid sha256:30225f7d1a96b8a573e82810584950ed2f9c95dcd2157d794c278ca44a43861b
size 75624882
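The three lines above are a standard Git LFS pointer: the large artifact itself is not stored in the repository, only its SHA-256 object ID and byte size. As a minimal sketch (not part of the ollama codebase; the `Pointer` type and `ParsePointer` function are hypothetical names), such a pointer can be parsed in Go like this:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// Pointer holds the three key/value fields of a Git LFS pointer file.
type Pointer struct {
	Version string // spec URL, e.g. https://git-lfs.github.com/spec/v1
	OID     string // content hash, e.g. "sha256:3022..."
	Size    int64  // size of the real object in bytes
}

// ParsePointer scans the pointer text line by line and fills in the
// fields it recognizes; unknown lines are ignored.
func ParsePointer(text string) (Pointer, error) {
	var p Pointer
	sc := bufio.NewScanner(strings.NewReader(text))
	for sc.Scan() {
		key, val, ok := strings.Cut(sc.Text(), " ")
		if !ok {
			continue
		}
		switch key {
		case "version":
			p.Version = val
		case "oid":
			p.OID = val
		case "size":
			n, err := strconv.ParseInt(val, 10, 64)
			if err != nil {
				return p, err
			}
			p.Size = n
		}
	}
	return p, sc.Err()
}

func main() {
	pointer := "version https://git-lfs.github.com/spec/v1\n" +
		"oid sha256:30225f7d1a96b8a573e82810584950ed2f9c95dcd2157d794c278ca44a43861b\n" +
		"size 75624882\n"
	p, err := ParsePointer(pointer)
	if err != nil {
		panic(err)
	}
	fmt.Println(p.OID)
	fmt.Println(p.Size)
}
```

The `size 75624882` field here (about 75 MB) is consistent with the file being a packaged model/binary tarball tracked through LFS rather than committed directly.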