diff --git a/_service b/_service
index cb9bbd3..f6e654f 100644
--- a/_service
+++ b/_service
@@ -4,7 +4,7 @@
https://github.com/ollama/ollama.git
git
- v0.1.32
+ v0.1.36
@PARENT_TAG@
v(.*)
enable
diff --git a/_servicedata b/_servicedata
index 43b619e..73f1d34 100644
--- a/_servicedata
+++ b/_servicedata
@@ -1,4 +1,4 @@
https://github.com/ollama/ollama.git
- fb9580df85c562295d919b6c2632117d3d8cea89
\ No newline at end of file
+ 92ca2cca954e590abe5eecb0a87fa13cec83b0e1
\ No newline at end of file
diff --git a/ollama-0.1.32.tar.gz b/ollama-0.1.32.tar.gz
deleted file mode 100644
index 1451f75..0000000
--- a/ollama-0.1.32.tar.gz
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:69b648bcafa46320c876a83a817f4fc4ed6c8a8acc961d62f4adb017fa7ad053
-size 70152034
diff --git a/ollama-0.1.36.tar.gz b/ollama-0.1.36.tar.gz
new file mode 100644
index 0000000..baa3564
--- /dev/null
+++ b/ollama-0.1.36.tar.gz
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:285ea18c73f9d8cbebd19ed429fb691a84853fa06366a6d206e74a9a5cfd2243
+size 87336304
diff --git a/ollama.changes b/ollama.changes
index 6085995..014fd4c 100644
--- a/ollama.changes
+++ b/ollama.changes
@@ -1,3 +1,71 @@
+-------------------------------------------------------------------
+Sun May 12 01:39:26 UTC 2024 - Eyad Issa
+
+- Update to version 0.1.36:
+ * Fixed exit status 0xc0000005 error with AMD graphics cards on Windows
+ * Fixed rare out of memory errors when loading a model to run with CPU
+
+- Update to version 0.1.35:
+ * New models: Llama 3 ChatQA: A model from NVIDIA based on Llama
+ 3 that excels at conversational question answering (QA) and
+ retrieval-augmented generation (RAG).
+ * Quantization: ollama create can now quantize models when
+ importing them using the --quantize or -q flag
+ * Fixed issue where inference subprocesses wouldn't be cleaned up
+ on shutdown.
+ * Fixed a series of out of memory errors when loading models on
+ multi-GPU systems
+ * Ctrl+J characters will now properly add newlines in ollama run
+ * Fixed issues when running ollama show for vision models
+ * OPTIONS requests to the Ollama API will no longer result in
+ errors
+ * Fixed issue where partially downloaded files wouldn't be
+ cleaned up
+ * Added a new done_reason field in responses describing why
+ generation stopped
+ * Ollama will now more accurately estimate how much memory
+ is available on multi-GPU systems especially when running
+ different models one after another
+
+- Update to version 0.1.34:
+ * New model: Llava Llama 3
+ * New model: Llava Phi 3
+ * New model: StarCoder2 15B Instruct
+ * New model: CodeGemma 1.1
+ * New model: StableLM2 12B
+ * New model: Moondream 2
+ * Fixed issues with LLaVa models where they would respond
+ incorrectly after the first request
+ * Fixed out of memory errors when running large models such as
+ Llama 3 70B
+ * Fixed various issues with Nvidia GPU discovery on Linux and
+ Windows
+ * Fixed a series of Modelfile errors when running ollama create
+ * Fixed no slots available error that occurred when cancelling a
+ request and then sending follow up requests
+ * Improved AMD GPU detection on Fedora
+ * Improved reliability when using the experimental
+ OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS flags
+ * ollama serve will now shut down quickly, even if a model is
+ loading
+
+- Update to version 0.1.33:
+ * New model: Llama 3
+ * New model: Phi 3 Mini
+ * New model: Moondream
+ * New model: Llama 3 Gradient 1048K
+ * New model: Dolphin Llama 3
+ * New model: Qwen 110B
+ * Fixed issues where the model would not terminate, causing the
+ API to hang
+ * Fixed a series of out of memory errors on Apple Silicon Macs
+ * Fixed out of memory errors when running Mixtral architecture
+ models
+ * Added experimental concurrency features:
+ ~ OLLAMA_NUM_PARALLEL: Handle multiple requests simultaneously
+ for a single model
+ ~ OLLAMA_MAX_LOADED_MODELS: Load multiple models simultaneously
+
-------------------------------------------------------------------
Tue Apr 23 02:26:34 UTC 2024 - rrahl0@disroot.org
diff --git a/ollama.spec b/ollama.spec
index a31ce21..b01357a 100644
--- a/ollama.spec
+++ b/ollama.spec
@@ -17,7 +17,7 @@
Name: ollama
-Version: 0.1.32
+Version: 0.1.36
Release: 0
Summary: Tool for running AI models on-premise
License: MIT
diff --git a/vendor.tar.xz b/vendor.tar.xz
index 851ca2e..8226d89 100644
--- a/vendor.tar.xz
+++ b/vendor.tar.xz
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:26f50ef1d227317f77b0a68eb9672f407c3bdd15ffcd3bf6011afdf9b7d3b5ff
-size 3669792
+oid sha256:21390f2f5bbd12b7a6c134b3ced1bafe76b929f85077e273d6c8f378cb156eb2
+size 4310640