- Fixed spec file:
* operator.yaml no longer needs to be changed to use the SUSE images
- helm chart, manifests:
* Fixed tolerations
* Update SUSE documentation URL in NOTES.txt
- ceph: fix drive group deployment failure (bsc#1176170)
- helm chart, manifests:
* Add tolerations to cluster & CRDs (see the sketch after this list)
* Require kubeVersion >= 1.11
* Use rbac.authorization.k8s.io/v1
* Add affinities for label schema
* Set Rook log level to DEBUG
* Remove FlexVolume agent
* Require currentNamespaceOnly=true
* Replace NOTES.txt with SUSE specific version
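To illustrate the tolerations item above, a minimal sketch of what such a block can look
like in a CephCluster manifest. The key and effect shown here are assumptions for
illustration, not necessarily the exact values shipped in the chart:

  placement:
    all:
      tolerations:
        # Illustrative: allow Rook daemons to schedule onto tainted master nodes.
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule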
- Include operator and common yamls in manifest package
- Update to v1.4.3
* The Ceph-CSI driver was being unexpectedly removed by the garbage
collector in some clusters. For the steps to apply a fix during the
upgrade to this patch release, see the upstream release notes. (#616)
* Add storageClassDeviceSet label to osd pods (#6225)
* Fix DNS suffix issue for OBCs in custom DNS suffix clusters (#6234)
* Cleanup mon canary pvc if the failover failed (#6224)
* Only enable mgr init container if the dashboard is enabled (#6198)
* Stop the CephObjectStore monitoring goroutine during
uninstall (#6208)
* Remove NParts and Cache_Size from MDCACHE block in the NFS
configuration (#6207)
* Purge a down OSD with a job created by the admin (#6127); see the
first sketch after this list
* Do not use label selector on external mgr service (#6142)
* Allow uninstall even if volumes still exist, via a new CephCluster
setting (#6145); see the second sketch after this list
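For the OSD purge item (#6127), the admin creates a Kubernetes Job that runs the removal
command. A heavily trimmed sketch, assuming the 'rook ceph osd remove --osd-ids'
invocation from the upstream osd-purge.yaml example; the full example also wires up a
service account and several environment variables:

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: rook-ceph-purge-osd
    namespace: rook-ceph
  spec:
    template:
      spec:
        containers:
          - name: osd-removal
            image: rook/ceph:v1.4.3
            # Purge the OSD with ID 1; adjust --osd-ids to the down OSD(s).
            args: ["ceph", "osd", "remove", "--osd-ids", "1"]
        restartPolicy: Never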
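For the uninstall item (#6145), a minimal sketch of the new CephCluster setting, assuming
the field name 'allowUninstallWithVolumes' documented upstream for v1.4:

  apiVersion: ceph.rook.io/v1
  kind: CephCluster
  metadata:
    name: rook-ceph
    namespace: rook-ceph
  spec:
    cleanupPolicy:
      # Allow deleting the CephCluster even while PVs provisioned from it remain.
      allowUninstallWithVolumes: true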
-------------------------------------------------------------------
Thu Sep 10 21:16:34 UTC 2020 - Mike Latimer <mlatimer@suse.com>

- Update to v1.4.2
- Patch release focusing on small feature additions and bug fixes.
* Improve check for LVM on the host to allow installation of OSDs (#6175)
* Set the OSD prepare resource limits (#6118)
* Allow memory limits below recommended settings (#6116)
* Use full DNS suffix for object endpoint with OBCs (#6170)
* Remove the CSI driver lifecycle preStop hook (#6141)
* External cluster optional settings for provisioners (#6048)
* Operator watches nodes that match OSD placement rules (#6156)
* Allow user to add labels to the cluster daemon pods (#6084 #6082);
see the sketch after this list
* Fix vulnerability in package golang.org/x/text (#6136)
* Add expansion support for encrypted osd on pvc (#6126)
* Do not use realPath for OSDs on PVCs (#6120, @leseb)
* Example object store manifests updated for consistency (#6123)
* Separate topology spread constraints for OSD prepare jobs and
OSD daemons (#6103)
* Pass CSI resources as strings in the helm chart (#6104)
* Improve callCephVolume() for list and prepare (#6059)
* Improved multus support for the CSI driver configuration (#5740)
* Add object store healthcheck yaml examples (#6090)
* Add support for wal encrypted device on pvc (#6062)
* Updated helm usage in documentation (#6086)
* More details for RBD Mirroring documentation (#6083)
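For the daemon pod labels item (#6084 #6082), a minimal sketch of the CephCluster
'labels' section. The 'all' and 'osd' keys are assumptions mirroring the daemon keys of
the placement section; check the v1.4 CephCluster documentation for the supported set:

  spec:
    labels:
      all:            # added to every Rook-managed daemon pod
        team: storage
      osd:            # added only to OSD pods
        tier: capacity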
- Build process changes:
- Set CSI sidecar versions through _service, and set all versions in
code through a single patch file
+ csi-images-SUSE.patch
- csi-dummy-images.patch
- Use github.com/SUSE/rook and suse-release-1.4 tag in update.sh
- Create module dependencies through _service, and store these dependencies
in vendor.tar.gz (replacing rook-[version]-vendor.tar.xz)
- Modify build commands to include "-mod=vendor" to use new vendor tarball
- Add CSI sidecars as BuildRequires, in order to determine versions through
the _service process
- Replace %setup of vendor tarball with a simple tar extraction
- Move registry detection to %prep, and set correct registry through a
search and replace on the SUSE_REGISTRY string
- Use variables to track rook, ceph and cephcsi versions
- Add '#!BuildTag' and 'appVersion' to Chart.yaml
- Add required versioning to helm chart
- Leave ceph-csi templates in /etc, and include them in the main rook package.
- csi-template-paths.patch
- Include only designated yaml examples in rook-k8s-yaml package
OBS-URL: https://build.opensuse.org/request/show/836111
OBS-URL: https://build.opensuse.org/package/show/filesystems:ceph/rook?expand=0&rev=84

The Rook Operator has been installed. Check its status by running:

  kubectl --namespace {{ .Release.Namespace }} get pods -l "app=rook-ceph-operator"

*************************************************************************************************
****                             CREATING A CEPH CLUSTER                                    ****
*************************************************************************************************

After confirming the Rook Operator is running properly, the Ceph cluster can be deployed through
YAML files which describe the desired state of the cluster. Sample YAML configuration files are
included in the 'rook-k8s-yaml' package provided with SUSE Enterprise Storage 7. If installed
(on an admin workstation or CaaSP master node), the sample YAML files can be found in
the /usr/share/k8s-yaml/rook/ceph directory structure.

Further documentation can be found in the 'Deploying and Administering Ceph on SUSE CaaS Platform'
guide. This is under active development, but can be seen at the following URL:

  https://susedoc.github.io/doc-ses/master/single-html/ses-rook/

NOTE: The sample configuration files can serve as templates for production deployments. Whether
creating entirely new configuration files or modifying the provided samples, ensure the following
requirements are met:

  - The CephCluster resource must be customized to meet deployment needs (a trimmed example
    follows below).
  - The CephCluster must be deployed in its own namespace (samples default to 'rook-ceph').
    - At this time, only one Ceph cluster per Kubernetes cluster is supported.
  - Role Based Access Control (RBAC) roles and role bindings must be configured.
    - This helm chart includes the required RBAC configuration to create a CephCluster CRD
      in the same namespace.
  - Any disk devices added to the cluster must be empty (no filesystem or partitions).
  - Disk devices must be referenced using their devnode name (e.g. '/dev/sdb' or '/dev/xvde').
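
For orientation, a heavily trimmed CephCluster manifest might look like the sketch below. The
values shown (Ceph image tag, namespace, device name) are illustrative assumptions; take the
authoritative settings from the samples in /usr/share/k8s-yaml/rook/ceph.

  apiVersion: ceph.rook.io/v1
  kind: CephCluster
  metadata:
    name: rook-ceph
    namespace: rook-ceph          # deploy into its own namespace (see above)
  spec:
    cephVersion:
      image: ceph/ceph:v15.2.4    # illustrative; SES 7 pulls Ceph from the SUSE registry
    dataDirHostPath: /var/lib/rook
    mon:
      count: 3
    storage:
      useAllNodes: true
      useAllDevices: false
      devices:
        - name: "sdb"             # empty device, referenced by devnode name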

Additional upstream documentation may be found at:

  - https://rook.github.io/docs/rook/master/ceph-quickstart.html
  - https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/cluster.yaml