ceph/_constraints

<?xml version="1.0"?>
<constraints>
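  <!-- Require a KVM sandbox: builds must run inside a KVM virtual machine
       on the worker rather than in a plain chroot. -->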
  <sandbox>kvm</sandbox>
<!--
2022-03-31 - Tim Serong <tserong@suse.com>
Builds of ceph 16.2.7 on IBS showed the following resource usage (in MB):
ceph       aarch64  max disk: 41568  max mem: 13698  (on ibs-centriq-6:3  disk: 65536   mem: 18432)
ceph       x86_64   max disk: 41621  max mem:  9852  (on sheep74:2        disk: 51200   mem: 12500)
ceph       ppc64le  max disk: 42005  max mem:  8754  (on ibs-power9-10:1  disk: 61440   mem: 20480)
ceph       s390x    max disk: 40698  max mem:  8875  (on s390zl36:1       disk: 51200   mem: 10240)
ceph-test  x86_64   max disk: 51760  max mem: 16835  (on sheep94:2        disk: 112640  mem: 16384)
Based on the above (and to provide a little wiggle room for the future
without being too demanding of workers), I've set the disk constraints
to 50GB for ceph and 60GB for ceph-test. Memory requirements remain at
8GB and 10GB respectively, as they were previously; despite the memory
usage shown above, AFAIK we haven't run out of memory during builds,
and this keeps the pool of possible workers noticeably larger than it
would be if we required 16GB.
Note to future hackers: please add comments here to describe any further
changes made. Thank you!
-->
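  <!-- Default constraints for building ceph: at least 50GB of scratch disk
       and 8GB of physical memory on the build worker. -->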
  <hardware>
    <disk>
      <size unit="G">50</size>
    </disk>
    <physicalmemory>
      <size unit="G">8</size>
    </physicalmemory>
  </hardware>
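  <!-- For the ceph-test package, overwrite the defaults above with larger
       values: at least 60GB of disk and 10GB of physical memory. -->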
  <overwrite>
    <conditions>
      <package>ceph-test</package>
    </conditions>
    <hardware>
      <disk>
        <size unit="G">60</size>
      </disk>
      <physicalmemory>
        <size unit="G">10</size>
      </physicalmemory>
    </hardware>
  </overwrite>
</constraints>