forked from pool/ollama
Commit Graph

8 Commits

Loren Burkholder
32d8d25838 Accepting request 1174682 from home:VaiTon:branches:science:machinelearning
- Update to version 0.1.38:
  * New model: Falcon 2: A new 11B-parameter causal decoder-only
    model built by TII and trained on over 5T tokens.
  * New model: Yi 1.5: A new high-performing version of Yi, now
    licensed as Apache 2.0. Available in 6B, 9B, and 34B sizes.
  * Added ollama ps command
  * Added /clear command
  * Fixed issue where switching loaded models on Windows would take
    several seconds
  * Running /save will no longer abort the chat session if an
    incorrect name is provided
  * The /api/tags API endpoint will now correctly return an empty
    list [] instead of null if no models are installed (see the
    sketch after this list)
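
The /api/tags change is easy to check from a client. Below is a minimal Go sketch, assuming an ollama server on the default localhost:11434; it lists locally installed models and relies on the endpoint returning [] rather than null when nothing is installed.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagsResponse mirrors only the part of the /api/tags payload this sketch needs.
type tagsResponse struct {
	Models []struct {
		Name string `json:"name"`
	} `json:"models"`
}

func main() {
	// Assumes a local ollama server on the default port 11434.
	resp, err := http.Get("http://localhost:11434/api/tags")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tags tagsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tags); err != nil {
		panic(err)
	}

	// As of 0.1.38 "models" is an empty list [] rather than null when no
	// models are installed; len() is safe either way in Go, but clients in
	// stricter languages no longer need to special-case null.
	fmt.Printf("%d model(s) installed\n", len(tags.Models))
	for _, m := range tags.Models {
		fmt.Println(" -", m.Name)
	}
}
```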

OBS-URL: https://build.opensuse.org/request/show/1174682
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=15
2024-05-16 20:34:11 +00:00
Loren Burkholder
cf9cf2a4df Accepting request 1173521 from home:VaiTon:branches:science:machinelearning
- Update to version 0.1.37:
  * Fixed issue where models with uppercase characters in the name
    would not show with ollama list
  * Fixed usage string for ollama create
  * Fixed finish_reason being "" instead of null in the
    OpenAI-compatible chat API (see the sketch after this list).
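
The finish_reason fix concerns ollama's OpenAI-compatible endpoint at /v1/chat/completions. A minimal Go sketch follows, assuming a local server on the default port 11434 and an already-pulled model; the model name is a placeholder, not something mandated by this changelog.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// The model name is a placeholder; use any model that is already pulled.
	body, _ := json.Marshal(map[string]any{
		"model": "llama3",
		"messages": []map[string]string{
			{"role": "user", "content": "Say hello in one word."},
		},
	})

	resp, err := http.Post("http://localhost:11434/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Choices []struct {
			FinishReason *string `json:"finish_reason"`
			Message      struct {
				Content string `json:"content"`
			} `json:"message"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}

	for _, c := range out.Choices {
		// With 0.1.37 an absent finish_reason arrives as JSON null (nil
		// here) rather than the empty string "" mentioned in the changelog.
		if c.FinishReason != nil {
			fmt.Println("finish_reason:", *c.FinishReason)
		}
		fmt.Println("content:", c.Message.Content)
	}
}
```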

- Use obs_scm service instead of the deprecated tar_scm
- Use zstd for vendor tarball compression

OBS-URL: https://build.opensuse.org/request/show/1173521
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=13
2024-05-13 03:27:47 +00:00
Loren Burkholder
dfc1a9fa3a Accepting request 1173461 from home:VaiTon:branches:science:machinelearning
- Update to version 0.1.36:
- Update to version 0.1.35:
- Update to version 0.1.34:

OBS-URL: https://build.opensuse.org/request/show/1173461
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=11
2024-05-12 01:58:52 +00:00
Loren Burkholder
a4111a1692 Accepting request 1169791 from home:rrahl0:branches:science:machinelearning
- Update to version 0.1.32:
  * scale graph based on gpu count
  * Support unicode characters in model path (#3681)
  * darwin: no partial offloading if required memory greater than system
  * update llama.cpp submodule to `7593639` (#3665)
  * fix padding in decode
  * Revert "cmd: provide feedback if OLLAMA_MODELS is set on non-serve command (#3470)" (#3662)
  * Added Solar example at README.md (#3610)
  * Update langchainjs.md (#2030)
  * Added MindsDB information (#3595)
  * examples: add more Go examples using the API (#3599)
  * Update modelfile.md
  * Add llama2 / torch models for `ollama create` (#3607)
  * Terminate subprocess if receiving `SIGINT` or `SIGTERM` signals while model is loading (#3653)
  * app: gracefully shut down `ollama serve` on windows (#3641)
  * types/model: add path helpers (#3619)
  * update llama.cpp submodule to `4bd0f93` (#3627)
  * types/model: make ParseName variants less confusing (#3617)
  * types/model: remove (*Digest).Scan and Digest.Value (#3605)
  * Fix rocm deps with new subprocess paths
  * mixtral mem
  * Revert "types/model: remove (*Digest).Scan and Digest.Value (#3589)"
  * types/model: remove (*Digest).Scan and Digest.Value (#3589)
  * types/model: remove DisplayLong (#3587)
  * types/model: remove MarshalText/UnmarshalText from Digest (#3586)
  * types/model: init with Name and Digest types (#3541)
  * server: provide helpful workaround hint when stalling on pull (#3584)
  * partial offloading
  * refactor tensor query
  * api: start adding documentation to package api (#2878)

OBS-URL: https://build.opensuse.org/request/show/1169791
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=9
2024-04-23 12:00:24 +00:00
Loren Burkholder
8ef2b26afe Accepting request 1168020 from home:bmwiedemann:branches:science:machinelearning
- Update to version 0.1.31:
  * Backport MacOS SDK fix from main
  * Apply 01-cache.diff
  * fix: workflows
  * stub stub
  * mangle arch
  * only generate on changes to llm subdirectory
  * only generate cuda/rocm when changes to llm detected
  * Detect arrow keys on windows (#3363)
  * add license in file header for vendored llama.cpp code (#3351)
  * remove need for `$VSINSTALLDIR` since build will fail if `ninja` cannot be found (#3350)
  * change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347)
  * malformed markdown link (#3358)
  * Switch runner for final release job
  * Use Rocky Linux Vault to get GCC 10.2 installed
  * Revert "Switch arm cuda base image to centos 7"
  * Switch arm cuda base image to centos 7
  * Bump llama.cpp to b2527
  * Fix ROCm link in `development.md`
  * adds ooo to community integrations (#1623)
  * Add cliobot to ollama supported list (#1873)
  * Add Dify.AI to community integrations (#1944)
  * enh: add ollero.nvim to community applications (#1905)
  * Add typechat-cli to Terminal apps (#2428)
  * add new Web & Desktop link in readme for alpaca webui (#2881)
  * Add LibreChat to Web & Desktop Apps (#2918)
  * Add Community Integration: OllamaGUI (#2927)
  * Add Community Integration: OpenAOE (#2946)
  * Add Saddle (#3178)
  * tlm added to README.md terminal section. (#3274)
...

OBS-URL: https://build.opensuse.org/request/show/1168020
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=7
2024-04-17 00:53:52 +00:00
Loren Burkholder
9c6d1dfa92 - Update to version 0.1.28:
  * Fix embeddings load model behavior (#2848)
  * Add Community Integration: NextChat (#2780)
  * prepend image tags (#2789)
  * fix: print usedMemory size right (#2827)
  * bump submodule to `87c91c07663b707e831c59ec373b5e665ff9d64a` (#2828)
  * Add ollama user to video group
  * Add env var so podman will map cuda GPUs
  * Omit build date from gzip headers
  * Log unexpected server errors checking for update
  * Refine container image build script
  * Bump llama.cpp to b2276
  * Determine max VRAM on macOS using `recommendedMaxWorkingSetSize` (#2354)
  * Update types.go (#2744)
  * Update langchain python tutorial (#2737)
  * no extra disk space for windows installation (#2739)
  * clean up go.mod
  * remove format/openssh.go
  * Add Community Integration: Chatbox
  * better directory cleanup in `ollama.iss`
  * restore windows build flags and compression

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=6
2024-03-06 23:53:38 +00:00
Loren Burkholder
e775b54f22 Accepting request 1152042 from home:jengelh:branches:science:machinelearning
factory review.

- Edit description: answer _what_ the package is and use a nominal
  phrase. (https://en.opensuse.org/openSUSE:Package_description_guidelines)

OBS-URL: https://build.opensuse.org/request/show/1152042
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=4
2024-02-27 12:34:45 +00:00
5a3ae9ab21 Accepting request 1150495 from home:LorenDB
I've created a package for Ollama (https://ollama.com) so that users don't have to use an install script. I will point out that this does not have CUDA support or ROCm enabled; we won't be able to package CUDA for obvious reasons, and ROCm is currently not packaged in Factory. However, for basic CPU-enabled use, this is better than curling a random script from the interwebs :)

OBS-URL: https://build.opensuse.org/request/show/1150495
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=1
2024-02-26 09:11:49 +00:00