forked from pool/ollama
ollama/ollama.spec
Loren Burkholder (9c6d1dfa92, 2024-03-06): Update to version 0.1.28:
  * Fix embeddings load model behavior (#2848)
  * Add Community Integration: NextChat (#2780)
  * prepend image tags (#2789)
  * fix: print usedMemory size right (#2827)
  * bump submodule to `87c91c07663b707e831c59ec373b5e665ff9d64a` (#2828)
  * Add ollama user to video group
  * Add env var so podman will map cuda GPUs
  * Omit build date from gzip headers
  * Log unexpected server errors checking for update
  * Refine container image build script
  * Bump llama.cpp to b2276
  * Determine max VRAM on macOS using `recommendedMaxWorkingSetSize` (#2354)
  * Update types.go (#2744)
  * Update langchain python tutorial (#2737)
  * no extra disk space for windows installation (#2739)
  * clean up go.mod
  * remove format/openssh.go
  * Add Community Integration: Chatbox
  * better directory cleanup in `ollama.iss`
  * restore windows build flags and compression

OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/ollama?expand=0&rev=6

#
# spec file for package ollama
#
# Copyright (c) 2024 SUSE LLC
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
# upon. The license for this file, and modifications and additions to the
# file, is the same license as for the pristine package itself (unless the
# license for the pristine package is not an Open Source License, in which
# case the license is the MIT License). An "Open Source License" is a
# license that conforms to the Open Source Definition (Version 1.9)
# published by the Open Source Initiative.
# Please submit bugfixes or comments via https://bugs.opensuse.org/
#
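# For context: once the service is running, the packaged tool is used
# roughly as follows (illustrative only; the REST API listens on
# localhost:11434 by default, and "llama2" stands in for any model):
#   ollama run llama2
#   curl http://localhost:11434/api/generate \
#        -d '{"model": "llama2", "prompt": "Why is the sky blue?"}'
# A minimal Modelfile, as mentioned in the description below:
#   FROM llama2
#   PARAMETER temperature 0.8
#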
Name:           ollama
Version:        0.1.28
Release:        0
Summary:        Tool for running AI models on-premises
License:        MIT
URL:            https://ollama.com
Source:         %{name}-%{version}.tar.gz
Source1:        vendor.tar.xz
Source2:        ollama.service
Source3:        %{name}-user.conf
Patch0:         enable-lto.patch
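# ollama.service (Source2) is expected to run `ollama serve` as the
# dedicated ollama user; %{name}-user.conf (Source3) is the matching
# sysusers.d file. A typical sysusers.d line (an assumption; the real
# content ships as Source3):
#   u ollama - "Ollama service user" /var/lib/ollama
# enable-lto.patch presumably turns on link-time optimization for the
# bundled llama.cpp build (inferred from the patch name).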
BuildRequires:  cmake >= 3.24
BuildRequires:  gcc-c++ >= 11.4.0
BuildRequires:  git
BuildRequires:  sysuser-tools
BuildRequires:  golang(API) >= 1.21
%{sysusers_requires}
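# %sysusers_requires pulls in the dependencies needed at install time by
# the user-creation scriptlet that %sysusers_generate_pre emits in %build.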

%description
Ollama is a tool for running AI models on one's own hardware.
It offers a command-line interface and a RESTful API.
New models can be created or existing ones modified in the
Ollama library using the Modelfile syntax.
Source model weights found on Hugging Face and similar sites
can be imported.

%prep
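# -a1 additionally unpacks Source1 (the vendored Go modules) inside the
# extracted source tree; -p1 applies Patch0 with a strip level of one.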
%autosetup -a1 -p1

%build
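# Generate the %{name}.pre scriptlet that creates the ollama user and
# group; it is attached below via `%pre -f %{name}.pre`.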
%sysusers_generate_pre %{SOURCE3} %{name} %{name}-user.conf
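# Build a position-independent executable from the vendored modules;
# ppc64 is excluded because Go lacks -buildmode=pie support there.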
%ifnarch ppc64
export GOFLAGS="-buildmode=pie -mod=vendor"
%endif
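# `go generate ./...` compiles the bundled llama.cpp libraries;
# OLLAMA_SKIP_PATCHING=1 keeps the generate scripts from applying their
# own patches on top of the already-prepared sources.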
export OLLAMA_SKIP_PATCHING=1
go generate ./...
go build .

%install
install -D -m 0755 %{name} %{buildroot}%{_bindir}/%{name}
install -D -m 0644 %{SOURCE2} %{buildroot}%{_unitdir}/%{name}.service
install -D -m 0644 %{SOURCE3} %{buildroot}%{_sysusersdir}/%{name}-user.conf
install -d %{buildroot}/var/lib/%{name}
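# /var/lib/%{name} created above is the service state directory (it
# holds downloaded models); %files assigns it to the ollama user.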

%pre -f %{name}.pre
%service_add_pre %{name}.service
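# %service_add_pre/%service_add_post and the %service_del_* calls below
# expand to the standard SUSE systemd scriptlets (preset on first
# install, restart handling on upgrade, cleanup on removal).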

%post
%service_add_post %{name}.service

%preun
%service_del_preun %{name}.service

%postun
%service_del_postun %{name}.service

%files
%doc README.md
%license LICENSE
%{_bindir}/%{name}
%{_unitdir}/%{name}.service
%{_sysusersdir}/%{name}-user.conf
%attr(-, ollama, ollama) /var/lib/%{name}

%changelog