* New safety models:
  ~ Llama Guard 3: a series of models from Meta, fine-tuned for content safety classification of LLM inputs and responses.
  ~ ShieldGemma: a set of instruction-tuned models from Google DeepMind for evaluating the safety of text prompt inputs and text output responses against a set of defined safety policies.
* Fixed an issue where ollama pull would leave connections open when encountering an error
* ollama rm will now stop a model if it is running prior to deleting it

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=55
version https://git-lfs.github.com/spec/v1
oid sha256:77731c90fc14e1507f16ad8604fbb3397c19f587f7b6ec3a8d7ef18c85b22f3d
size 162358798