143 Commits

Author SHA256 Message Date
8f6a537dea Accepting request 1329736 from science:machinelearning
- Updated to version 0.15.2:
- Updated to version 0.15.1:
- Updated to version 0.15.0:
- Updated to version 0.14.3:

OBS-URL: https://build.opensuse.org/request/show/1329736
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=55
2026-01-29 16:46:03 +00:00
a6be67967f * New ollama launch clawdbot command for launching Clawdbot

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=142
2026-01-28 23:21:33 +00:00
aca5577791 Fix version and remove duplicated copyright comment in .spec
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=141
2026-01-28 23:20:58 +00:00
96c17fa94a Accepting request 1329648 from home:mslacken:ml
- Updated to version 0.15.2:
  * New ollama launch clawdbot command for launching Clawdbot 
    using Ollama models
- Updated to version 0.15.1:
  * GLM-4.7-Flash performance and correctness improvements, fixing
    repetitive answers and tool calling quality
  * Fixed performance issues on arm64
  * Fixed issue where ollama launch would not detect claude and would
    incorrectly update opencode configurations
- Updated to version 0.15.0:
  * New command: ollama launch
    A new ollama launch command to use Ollama's models with Claude
    Code, Codex, OpenCode, and Droid without separate configuration.
  * Fixed issue where creating multi-line strings with """ would not
    work when using ollama run
  * Ctrl+J and Shift+Enter now work for inserting newlines in ollama run
  * Reduced memory usage for GLM-4.7-Flash models
- Updated to version 0.14.3:
  Image generation:  
  * Z-Image Turbo: 6 billion parameter text-to-image model from
    Alibaba’s Tongyi Lab. It generates high-quality photorealistic
    images.
  * Flux.2 Klein: Black Forest Labs’ fastest image-generation
    model to date.
  New models:
  * GLM-4.7-Flash: As the strongest model in the 30B
    class, GLM-4.7-Flash offers a new option for lightweight
    deployment that balances performance and efficiency.
  * LFM2.5-1.2B-Thinking: LFM2.5 is a new family of hybrid models

OBS-URL: https://build.opensuse.org/request/show/1329648
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=140
2026-01-28 23:19:26 +00:00
d1fcc05458 Accepting request 1328568 from science:machinelearning
- Update to version 0.14.2:
  * New models: TranslateGemma
  * Shift + Enter will now enter a newline in Ollama's CLI
  * Improve /v1/responses API to better conform to the
    OpenResponses specification
- Update to version 0.14.1:
  * Experimental image generation models are available on Linux
    (CUDA): `ollama run x/z-image-turbo`

- Update to version 0.14.0:
  * ollama run --experimental CLI will now open a new Ollama CLI
    that includes an agent loop and the bash tool
  * Anthropic API compatibility: support for the /v1/messages API
  * A new REQUIRES command for the Modelfile allows declaring which
    version of Ollama is required for the model
  * For older models, Ollama will avoid an integer underflow on low
    VRAM systems during memory estimation
  * More accurate VRAM measurements for AMD iGPUs
  * An error will now be returned when embeddings return NaN or -Inf
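As a sketch, the new REQUIRES Modelfile command described above might be used like this (the base model name and exact version syntax are assumptions, not taken from the release note):

```
# Hypothetical Modelfile declaring a minimum Ollama version
FROM llama3.2
REQUIRES 0.14.0
```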

OBS-URL: https://build.opensuse.org/request/show/1328568
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=54
2026-01-22 14:15:44 +00:00
26551ac3a3 - Update to version 0.14.2:
  * New models: TranslateGemma
  * Shift + Enter will now enter a newline in Ollama's CLI
  * Improve /v1/responses API to better conform to the
    OpenResponses specification
- Update to version 0.14.1:
  * Experimental image generation models are available on Linux
    (CUDA): `ollama run x/z-image-turbo`

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=138
2026-01-21 18:48:33 +00:00
a1aa13bec5 - Update to version 0.14.0:
  * ollama run --experimental CLI will now open a new Ollama CLI
    that includes an agent loop and the bash tool
  * Anthropic API compatibility: support for the /v1/messages API
  * A new REQUIRES command for the Modelfile allows declaring which
    version of Ollama is required for the model
  * For older models, Ollama will avoid an integer underflow on low
    VRAM systems during memory estimation
  * More accurate VRAM measurements for AMD iGPUs
  * An error will now be returned when embeddings return NaN or -Inf
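The NaN/-Inf check on embeddings mentioned above can be illustrated with a toy validator (purely illustrative; this is not Ollama's code, and the function name is invented):

```python
import math

def validate_embedding(vec):
    """Reject embedding vectors containing NaN or infinite values,
    mirroring the server-side check described in the release note."""
    for i, x in enumerate(vec):
        if math.isnan(x) or math.isinf(x):
            raise ValueError(f"invalid embedding value {x!r} at index {i}")
    return vec
```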

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=137
2026-01-14 18:41:04 +00:00
c5264cdec3 Accepting request 1324368 from science:machinelearning
OBS-URL: https://build.opensuse.org/request/show/1324368
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=53
2025-12-25 18:57:30 +00:00
5a22398f23 Accepting request 1323716 from home:mimosius:science:machinelearning
- Added 'Requires:' tag for subpackages to spec file
- Update to version 0.13.5:
  * New models: FunctionGemma
  * 'bert' architecture models now run on Ollama's engine
  * Added built-in renderer & tool parsing capabilities for
    DeepSeek-V3.1
  * Fixed issue where nested properties in tools may not have been
    rendered properly

OBS-URL: https://build.opensuse.org/request/show/1323716
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=135
2025-12-24 12:55:19 +00:00
d70048e75c Accepting request 1323403 from science:machinelearning
- Update vendored golang.org/x/net/html to v0.48.0
- Update to version 0.13.4:
  * New models: Nemotron 3 Nano, Olmo 3, Olmo 3.1
  * Enable Flash Attention automatically for models by default
  * Fixed handling of long contexts with Gemma 3 models
  * Fixed issue that would occur with Gemma 3 QAT models or
    other models imported with the Gemma 3 architecture
- Update to version 0.13.3:
  * New models: Devstral-Small-2, rnj-1, nomic-embed-text-v2
  * Improved truncation logic when using /api/embed and
    /v1/embeddings
  * Extend Gemma 3 architecture to support rnj-1 model
  * Fix error that would occur when running qwen2.5vl with image
    input
- Update to version 0.13.2:
  * New models: Qwen3-Next
  * Flash attention is now enabled by default for vision models
    such as mistral-3, gemma3, qwen3-vl and more. This improves
    memory utilization and performance when providing images as
    input.
  * Fixed GPU detection on multi-GPU CUDA machines
  * Fixed issue where deepseek-v3.1 would always think even when
    thinking is disabled in Ollama's app

OBS-URL: https://build.opensuse.org/request/show/1323403
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=52
2025-12-18 17:32:23 +00:00
c0713fb29e OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=133 2025-12-17 18:26:26 +00:00
6bb25745d3 Accepting request 1323340 from home:mimosius:science:machinelearning
- Update vendored golang.org/x/net/html to v0.48.0
- Update to version 0.13.4

OBS-URL: https://build.opensuse.org/request/show/1323340
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=132
2025-12-17 18:18:59 +00:00
6ab999dfe8 Accepting request 1321364 from science:machinelearning
OBS-URL: https://build.opensuse.org/request/show/1321364
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=51
2025-12-08 10:54:38 +00:00
8fce087f1b Accepting request 1321193 from home:VaiTon:branches:science:machinelearning
- Update to version 0.13.1:
  * New models: Ministral-3, Mistral-Large-3
  * nomic-embed-text will now use Ollama's engine by default
  * Tool calling support for cogito-v2.1
  * Ollama will now better render errors instead of showing
    Unmarshal: errors

OBS-URL: https://build.opensuse.org/request/show/1321193
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=130
2025-12-06 14:54:55 +00:00
81f96995e6 Accepting request 1319255 from science:machinelearning
OBS-URL: https://build.opensuse.org/request/show/1319255
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=50
2025-11-24 13:10:47 +00:00
bc0f28f53d Accepting request 1319192 from home:mimosius:science:machinelearning
Update to version 0.13.0

OBS-URL: https://build.opensuse.org/request/show/1319192
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=128
2025-11-22 16:05:11 +00:00
225f77e0f6 Accepting request 1317118 from science:machinelearning
OBS-URL: https://build.opensuse.org/request/show/1317118
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=49
2025-11-11 18:22:26 +00:00
e5457d3d02 Accepting request 1317097 from home:eeich:branches:science:machinelearning
- Consolidate spec file to build for CPU or GPUs from NVIDIA (CUDA)
  and AMD (ROCm). Both are presently disabled on openSUSE, ROCm
  will be available on Tumbleweed soon.
- Splitting Vulkan, CUDA and ROCm into separate packages. The
  Vulkan, CUDA and ROCm modules are recommended.

OBS-URL: https://build.opensuse.org/request/show/1317097
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=126
2025-11-11 14:19:36 +00:00
2dd2e711ba Accepting request 1316476 from science:machinelearning
OBS-URL: https://build.opensuse.org/request/show/1316476
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=48
2025-11-08 15:36:34 +00:00
ef07c70fef Accepting request 1316467 from home:mimosius:science:machinelearning
- Update to version 0.12.10
  * Fixed errors when running qwen3-vl:235b and
    qwen3-vl:235b-instruct
  * Enable flash attention for Vulkan (currently needs to be built
    from source)
  * Add Vulkan memory detection for Intel GPU using DXGI+PDH
  * Ollama will now return tool call IDs from the /api/chat API
  * Fixed hanging due to CPU discovery
  * Ollama will now show login instructions when switching to a
    cloud model in interactive mode
  * Fix reading stale VRAM data
  * 'ollama run' now works with embedding models

OBS-URL: https://build.opensuse.org/request/show/1316467
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=124
2025-11-07 16:45:55 +00:00
625a28aed6 Accepting request 1315207 from science:machinelearning
OBS-URL: https://build.opensuse.org/request/show/1315207
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=47
2025-11-03 17:54:58 +00:00
ffa132f9b9 Accepting request 1315028 from home:mimosius:science:machinelearning
- Fixed issue with duplicated libraries (/usr/lib, /usr/lib64)
- Update to version 0.12.9
  * Fix performance regression on CPU-only systems
- Update to version 0.12.8
  * qwen3-vl performance improvements, including flash attention
    support by default
  * qwen3-vl will now output less leading whitespace in the
    response when thinking
  * Fixed issue where deepseek-v3.1 thinking could not be disabled
    in Ollama's new app
  * Fixed issue where qwen3-vl would fail to interpret images with
    transparent backgrounds
  * Ollama will now stop running a model before removing it via
    ollama rm
  * Fixed issue where prompt processing would be slower on
    Ollama's engine
- Update to version 0.12.7
  * New model: Qwen3-VL: Qwen3-VL is now available in all parameter
    sizes ranging from 2B to 235B
  * New model: MiniMax-M2: a 230 Billion parameter model built for
    coding & agentic workflows available on Ollama's cloud
  * Model load failures now include more information on Windows
  * Fixed embedding results being incorrect when running
    embeddinggemma
  * Fixed gemma3n on Vulkan backend
  * Increased time allocated for ROCm to discover devices
  * Fixed truncation error when generating embeddings
  * Fixed request status code when running cloud models
  * The OpenAI-compatible /v1/embeddings endpoint now supports
    encoding_format parameter
  * Ollama will now parse tool calls that don't conform to
    {"name": name, "arguments": args} (thanks @rick-github!)
  * Fixed prompt processing reporting in the llama runner
  * Increase speed when scheduling models
  * Fixed issue where FROM <model> would not inherit RENDERER or
    PARSER commands
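The tolerant tool-call parsing noted in the 0.12.7 entry above can be illustrated with a toy normalizer (purely illustrative; this is not Ollama's parser, and the helper name is invented):

```python
def normalize_tool_call(call: dict) -> dict:
    """Coerce a model-emitted tool call into the canonical
    {"name": ..., "arguments": ...} shape.

    Accepts the canonical form as-is; for a non-conforming
    single-key mapping like {"get_weather": {"city": "Berlin"}},
    treats the key as the tool name and the value as its arguments.
    """
    if "name" in call:
        return {"name": call["name"], "arguments": call.get("arguments", {})}
    if len(call) == 1:
        name, args = next(iter(call.items()))
        return {
            "name": name,
            "arguments": args if isinstance(args, dict) else {"value": args},
        }
    raise ValueError(f"unrecognized tool call shape: {call!r}")
```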

OBS-URL: https://build.opensuse.org/request/show/1315028
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=122
2025-11-03 01:56:17 +00:00
68cb516a9e Accepting request 1313741 from science:machinelearning
OBS-URL: https://build.opensuse.org/request/show/1313741
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=46
2025-10-27 13:41:10 +00:00
dda32dab82 Accepting request 1313740 from home:Yoshio_Sato:branches:science:machinelearning
- Require groups video and render instead of providing them, to
  avoid conflicting with the system-group-hardware package

OBS-URL: https://build.opensuse.org/request/show/1313740
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=120
2025-10-26 19:51:04 +00:00
3c45ab8003 Accepting request 1312160 from science:machinelearning
OBS-URL: https://build.opensuse.org/request/show/1312160
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=45
2025-10-18 16:36:26 +00:00
98424f4344 Accepting request 1312121 from home:mimosius:science:machinelearning
- Update vendored golang.org/x/net/html to v0.46.0
- Update to version 0.12.6
  * Experimental Vulkan support
  * Ollama's app now supports searching when running DeepSeek-V3.1,
     Qwen3 and other models that support tool calling.
  * Flash attention is now enabled by default for Gemma 3,
    improving performance and memory utilization
  * Fixed issue where Ollama would hang while generating responses
  * Fixed issue where qwen3-coder would act in raw mode when using
    /api/generate or ollama run qwen3-coder <prompt>
  * Fixed qwen3-embedding providing invalid results
  * Ollama will now evict models correctly when num_gpu is set
  * Fixed issue where tool_index with a value of 0 would not be
    sent to the model
- Add ollama user to render group

OBS-URL: https://build.opensuse.org/request/show/1312121
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=118
2025-10-18 14:10:00 +00:00
944f54f239 - Update vendored golang.org/x/net/html to v0.45.0
  [boo#1251413] [CVE-2025-47911] [boo#1241757] [CVE-2025-22872]
- Update to version 0.12.5:
  * Fixed issue where "think": false would show an error instead of
    being silently ignored
  * Fixed deepseek-r1 output issues
- Update to version 0.12.4:
  * Flash attention is now enabled by default for Qwen 3 and Qwen 3
    Coder
  * Fixed an issue where keep_alive in the API would accept
    different values for the /api/chat and /api/generate endpoints
  * Fixed tool calling rendering with qwen3-coder
  * More reliable and accurate VRAM detection
  * OLLAMA_FLASH_ATTENTION can now be overridden to 0 for models
    that have flash attention enabled by default
  * Fixed crash where templates were not correctly defined
  * openai: always provide reasoning
  * Fixed issue when quantizing models with the Gemma 3n
    architecture
  * Ollama will now limit context length to what the model was
    trained against to avoid strange overflow behavior
  * Fixed issue where tool calls without parameters would not be
    returned correctly
  * Fixed issue where some special tokens would not be tokenized
    properly for some model architectures
- Allow to build for Package Hub for SLE-15-SP7
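The OLLAMA_FLASH_ATTENTION override noted above is applied when starting the server; a minimal sketch (value semantics assumed from the release note, deployment style is an example only):

```
# Force-disable flash attention even for models that enable it by default
# ("0" disables, per the 0.12.4 release note)
OLLAMA_FLASH_ATTENTION=0 ollama serve
```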

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=117
2025-10-11 15:15:50 +00:00
bf8ed313e6 Accepting request 1309021 from science:machinelearning
- Update to version 0.12.3:
- Update to version 0.12.2:
- Update to version 0.12.1:
- Update to version 0.12.0:
- Update to version 0.11.11:

OBS-URL: https://build.opensuse.org/request/show/1309021
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=44
2025-10-05 15:51:13 +00:00
65512c06ec - Update to version 0.12.3:
  * New models: DeepSeek-V3.1-Terminus, Kimi K2-Instruct-0905
  * Fixed issue where tool calls provided as stringified JSON
    would not be parsed correctly
  * ollama push will now provide a URL to follow to sign in
  * Fixed issues where qwen3-coder would output unicode characters
    incorrectly
  * Fix issue where loading a model with /load would crash
- Update to version 0.12.2:
  * A new web search API is now available in Ollama
  * Models with Qwen3's architecture including MoE now run in
    Ollama's new engine
  * Fixed issue where built-in tools for gpt-oss were not being
    rendered correctly
  * Support multi-regex pretokenizers in Ollama's new engine
  * Ollama's new engine can now load tensors by matching a prefix
    or suffix
- Update to version 0.12.1:
  * New model: Qwen3 Embedding: state of the art open embedding
    model by the Qwen team
  * Qwen3-Coder now supports tool calling
  * Fixed issue where Gemma3 QAT models would not output correct
    tokens
  * Fix issue where & characters in Qwen3-Coder would not be parsed
    correctly when function calling
  * Fixed issues where ollama signin would not work properly
- Update to version 0.12.0:
  * Cloud models are now available in preview
  * Models with the Bert architecture now run on Ollama's engine
  * Models with the Qwen 3 architecture now run on Ollama's engine

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=115
2025-10-04 21:24:33 +00:00
44b7a8907c Accepting request 1298233 from science:machinelearning
- Update to version 0.11.4:
  * openai: allow for content and tool calls in the same message
  * openai: when converting role=tool messages, propagate the tool
    name
  * openai: always provide reasoning 
  * Bug fixes

OBS-URL: https://build.opensuse.org/request/show/1298233
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=43
2025-08-08 13:12:48 +00:00
aaf1294c95 - Update to version 0.11.4:
  * openai: allow for content and tool calls in the same message
  * openai: when converting role=tool messages, propagate the tool
    name
  * openai: always provide reasoning 
  * Bug fixes

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=113
2025-08-07 23:21:08 +00:00
a33f11b76e Accepting request 1298006 from science:machinelearning
- Update to version 0.11.0:
  * New model: OpenAI gpt-oss 20B and 120B
  * Quantization - MXFP4 format

OBS-URL: https://build.opensuse.org/request/show/1298006
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=42
2025-08-07 14:48:46 +00:00
349d4de67b - Update to version 0.11.0:
  * New model: OpenAI gpt-oss 20B and 120B
  * Quantization - MXFP4 format

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=111
2025-08-06 12:51:20 +00:00
9bbc684da6 Accepting request 1297591 from science:machinelearning
- Update to version 0.10.1:
  * No notable changes.
- Update to version 0.10.0:
  * ollama ps will now show the context length of loaded models
  * Improved performance in gemma3n models by 2-3x
  * Parallel request processing now defaults to 1
  * Fixed issue where tool calling would not work correctly with
    granite3.3 and mistral-nemo models
  * Fixed issue where Ollama's tool calling would not work
    correctly if a tool's name was part of another one, such as
    add and get_address
  * Improved performance when using multiple GPUs by 10-30%
  * Ollama's OpenAI-compatible API will now support WebP images
  * Fixed issue where ollama show would report an error
  * ollama run will more gracefully display errors

OBS-URL: https://build.opensuse.org/request/show/1297591
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=41
2025-08-05 12:21:43 +00:00
172a9eec78 - Update to version 0.10.1:
  * No notable changes.
- Update to version 0.10.0:
  * ollama ps will now show the context length of loaded models
  * Improved performance in gemma3n models by 2-3x
  * Parallel request processing now defaults to 1
  * Fixed issue where tool calling would not work correctly with
    granite3.3 and mistral-nemo models
  * Fixed issue where Ollama's tool calling would not work
    correctly if a tool's name was part of another one, such as
    add and get_address
  * Improved performance when using multiple GPUs by 10-30%
  * Ollama's OpenAI-compatible API will now support WebP images
  * Fixed issue where ollama show would report an error
  * ollama run will more gracefully display errors

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=109
2025-08-05 00:09:03 +00:00
7624762a9a Accepting request 1290234 from science:machinelearning
- Update to version 0.9.5:
  * No notable changes.
- Update to version 0.9.4:
  * The directory in which models are stored can now be modified.
  * Tool calling with empty parameters will now work correctly
  * Fixed issue when quantizing models with the Gemma 3n
    architecture
- Update to version 0.9.3:
  * Ollama now supports Gemma 3n
  * Ollama will now limit context length to what the model was
    trained against to avoid strange overflow behavior
- Update to version 0.9.2:
  * Fixed issue where tool calls without parameters would not be
    returned correctly
  * Fixed "does not support generate" errors
  * Fixed issue where some special tokens would not be tokenized
    properly for some model architectures

OBS-URL: https://build.opensuse.org/request/show/1290234
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=40
2025-07-06 15:07:50 +00:00
9b2f052e10 - Update to version 0.9.5:
  * No notable changes.
- Update to version 0.9.4:
  * The directory in which models are stored can now be modified.
  * Tool calling with empty parameters will now work correctly
  * Fixed issue when quantizing models with the Gemma 3n
    architecture
- Update to version 0.9.3:
  * Ollama now supports Gemma 3n
  * Ollama will now limit context length to what the model was
    trained against to avoid strange overflow behavior
- Update to version 0.9.2:
  * Fixed issue where tool calls without parameters would not be
    returned correctly
  * Fixed "does not support generate" errors
  * Fixed issue where some special tokens would not be tokenized
    properly for some model architectures

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=107
2025-07-03 00:15:58 +00:00
b72896bbb7 Accepting request 1288227 from science:machinelearning
Automatic submission by obs-autosubmit

OBS-URL: https://build.opensuse.org/request/show/1288227
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=39
2025-06-24 18:50:15 +00:00
73961852fd - Update to version 0.9.1:
  * Tool calling reliability and performance have been improved for
    the following models: Magistral, Llama 4, Mistral,
    DeepSeek-R1-2508
  * Magistral now supports disabling thinking mode
  * Error messages that previously showed POST predict will now be
    more informative

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=105
2025-06-17 10:54:45 +00:00
d9f77ab949 Accepting request 1283893 from science:machinelearning
Automatic submission by obs-autosubmit

OBS-URL: https://build.opensuse.org/request/show/1283893
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=38
2025-06-10 07:05:27 +00:00
9b760ab447 - Update to version 0.9.0:
  * Ollama now has the ability to enable or disable thinking.
    This gives users the flexibility to choose the model’s thinking
    behavior for different applications and use cases.
- Update to version 0.8.0:
  * Ollama will now stream responses with tool calls
  * Logs will now include better memory estimate debug information
    when running models in Ollama's engine.
- Update to version 0.7.1:
  * Improved model memory management to allocate sufficient memory
    to prevent crashes when running multimodal models in certain
    situations
  * Enhanced memory estimation for models to prevent unintended
    memory offloading
  * ollama show will now show ... when data is truncated
  * Fixed crash that would occur with qwen2.5vl
  * Fixed crash on Nvidia's CUDA for llama3.2-vision
  * Support for Alibaba's Qwen 3 and Qwen 2 architectures in
    Ollama's new multimodal engine

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=103
2025-06-01 00:00:21 +00:00
b7464d2582 Accepting request 1279778 from science:machinelearning
- Cleanup part in spec file where build for SLE-15-SP6 and above
  is defined to make if condition more robust

- Allow to build for Package Hub for SLE-15-SP7 
  (openSUSE:Backports:SLE-15-SP7) with g++-12/gcc-12
  by checking for sle_version >= 150600 in spec file (bsc#1243438)

OBS-URL: https://build.opensuse.org/request/show/1279778
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=37
2025-05-26 16:32:37 +00:00
fa54a05c54 OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=101 2025-05-24 13:40:07 +00:00
c4d3594049 Accepting request 1279534 from home:bigironman:branches:science:machinelearning
- Cleanup part in spec file where build for SLE-15-SP6 and above
  is defined to make if condition more robust

- Allow to build for Package Hub for SLE-15-SP7 
  (openSUSE:Backports:SLE-15-SP7) with g++-12/gcc-12
  by checking for sle_version >= 150600 in spec file (bsc#1243438)

OBS-URL: https://build.opensuse.org/request/show/1279534
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=100
2025-05-23 12:19:15 +00:00
329d0b3b07 Accepting request 1279105 from home:bigironman:branches:science:machinelearning
-  Allow to build for Package Hub for SLE-15-SP7 
   (openSUSE:Backports:SLE-15-SP7) with g++-12/gcc-12
   by checking for sle_version >= 150600 in spec file (bsc#1243438)

OBS-URL: https://build.opensuse.org/request/show/1279105
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=99
2025-05-21 21:23:32 +00:00
cf4576f29f Accepting request 1278142 from science:machinelearning
- Update to version 0.7.0:
  * Ollama now supports multimodal models via Ollama’s new engine,
    starting with new vision multimodal models:
    ~ Meta Llama 4
    ~ Google Gemma 3
    ~ Qwen 2.5 VL
  * Ollama now supports providing WebP images as input to
    multimodal models
  * Improved performance of importing safetensors models via
    ollama create
  * Various bug fixes and performance enhancements

OBS-URL: https://build.opensuse.org/request/show/1278142
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=36
2025-05-20 07:36:41 +00:00
4bf7dbc507 - Update to version 0.7.0:
  * Ollama now supports multimodal models via Ollama’s new engine,
    starting with new vision multimodal models:
    ~ Meta Llama 4
    ~ Google Gemma 3
    ~ Qwen 2.5 VL
  * Ollama now supports providing WebP images as input to
    multimodal models
  * Improved performance of importing safetensors models via
    ollama create
  * Various bug fixes and performance enhancements

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=97
2025-05-17 14:49:57 +00:00
46e1c2bc3f Accepting request 1277233 from science:machinelearning
- Update to version 0.6.8
- Update to version 0.6.7
- Use source url (https://en.opensuse.org/SourceUrls)

OBS-URL: https://build.opensuse.org/request/show/1277233
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/ollama?expand=0&rev=35
2025-05-14 15:01:10 +00:00
f968f1aa72 - Use source url (https://en.opensuse.org/SourceUrls)
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=95
2025-05-13 17:03:33 +00:00
3f196feccd OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=94 2025-05-13 17:02:11 +00:00