diff --git a/docs/rook/v1.15/CRDs/Block-Storage/ceph-block-pool-crd/index.html b/docs/rook/v1.15/CRDs/Block-Storage/ceph-block-pool-crd/index.html
index 0dc27a4e7..eafbabb29 100644
--- a/docs/rook/v1.15/CRDs/Block-Storage/ceph-block-pool-crd/index.html
+++ b/docs/rook/v1.15/CRDs/Block-Storage/ceph-block-pool-crd/index.html
@@ -147,7 +147,7 @@
    size: 4
    replicasPerFailureDomain: 2
    subFailureDomain: rack
name: The name of the pool to create.
namespace: The namespace of the Rook cluster where the pool is created.
replicated: Settings for a replicated pool. If specified, erasureCoded settings must not be specified. (See the sketch after the caution below.)
    size: The desired number of copies to make of the data in the pool.
    requireSafeReplicaSize: Set to false if you want to create a pool with size 1. Setting the pool size to 1 could lead to data loss without recovery; make sure you are ABSOLUTELY CERTAIN that is what you want.
    replicasPerFailureDomain: The number of replicas to place in a given failure domain. For instance, if the failure domain is a datacenter (the cluster is stretched), you will have 2 replicas per datacenter, with each replica ending up on a different host. This gives you a total of 4 replicas, so size must be set to 4. The default is 1.
    subFailureDomain: Name of the CRUSH bucket representing a sub-failure domain. In a stretched configuration this option represents the "last" bucket where replicas end up being written. If the cluster is stretched across two datacenters, you can then have 2 copies per datacenter, with each copy on a different CRUSH bucket. The default is "host".
erasureCoded: Settings for an erasure-coded pool. If specified, replicated settings must not be specified. See below for more details on erasure coding, and the sketch after the caution below.
    dataChunks: Number of chunks to divide the original object into.
    codingChunks: Number of coding chunks to generate.
failureDomain: The failure domain across which the data will be spread. This can be set to either osd or host, with host being the default. A failure domain can also be set to a different type (e.g. rack) if the OSDs are created on nodes with the supported topology labels. If the failureDomain is changed on the pool, the operator will create a new CRUSH rule and update the pool. If a replicated pool of size 3 is configured and the failureDomain is set to host, all three copies of the replicated data will be placed on OSDs located on 3 different Ceph hosts. This case is guaranteed to tolerate a failure of two hosts without loss of data. Similarly, a failure domain set to osd can tolerate a loss of two OSD devices.
If erasure coding is used, the data and coding chunks are spread across the configured failure domain.
Caution
Neither Rook nor Ceph prevent the creation of a cluster where the replicated data (or erasure-coded chunks) cannot be written safely. By design, Ceph will delay checking for suitable OSDs until a write request is made, and this write can hang if there are not sufficient OSDs to satisfy the request.
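As a sketch of the two layouts described above, the following CRs show a replicated pool that places two replicas per failure domain (matching the size: 4 / replicasPerFailureDomain: 2 / subFailureDomain: rack values in the hunk header) and a small erasure-coded pool. The metadata names and the datacenter/rack topology are illustrative assumptions and require matching topology labels on the nodes:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: stretched-replicapool   # illustrative name
  namespace: rook-ceph          # the Rook cluster namespace
spec:
  failureDomain: datacenter     # assumes nodes carry datacenter topology labels
  replicated:
    size: 4
    requireSafeReplicaSize: true
    replicasPerFailureDomain: 2 # two replicas per datacenter
    subFailureDomain: rack      # each copy lands in a different rack
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ecpool                  # illustrative name
  namespace: rook-ceph
spec:
  failureDomain: osd
  erasureCoded:
    dataChunks: 2
    codingChunks: 1
```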
deviceClass: Sets up the CRUSH rule for the pool to distribute data only on the specified device class. If left empty or unspecified, the pool will use the cluster's default CRUSH root, which usually distributes data over all OSDs, regardless of their class. If deviceClass is specified on any pool, ensure that it is added to every pool in the cluster, otherwise Ceph will warn about pools with overlapping roots.
crushRoot: The root in the CRUSH map to be used by the pool. If left empty or unspecified, the default root will be used. Creating a CRUSH hierarchy for the OSDs currently requires the Rook toolbox to run the Ceph tools described here.
enableCrushUpdates: Enables Rook to update the pool's CRUSH rule using the pool spec. Can cause data remapping if the CRUSH rule changes. Defaults to false.
enableRBDStats: Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false. For more info see the ceph documentation.
name: The name of Ceph pools is based on the metadata.name of the CephBlockPool CR. Some built-in Ceph pools require names that are incompatible with K8s resource names. These special pools can be configured by setting this name to override the name of the Ceph pool that is created, instead of using the metadata.name for the pool. Only the following pool names are supported: .nfs, .mgr, and .rgw.root. See the example builtin mgr pool.
application: The type of application set on the pool. By default, Ceph pools for CephBlockPools will be rbd, CephObjectStore pools will be rgw, and CephFilesystem pools will be cephfs.
parameters: Sets any parameters listed to the given pool (see the sketch after the quota note below).
    target_size_ratio: Gives Ceph a hint (%) about the expected consumption of the total cluster capacity for the given pool. For more info see the ceph documentation.
    compression_mode: Sets up the pool for inline compression when using a BlueStore OSD. If left unspecified, no compression mode is set for the pool. Supported values are the same as the BlueStore inline compression modes: none, passive, aggressive, and force.
mirroring: Sets up mirroring of the pool (see the sketch after the quota note below).
    enabled: Whether mirroring is enabled on that pool (default: false).
    mode: Mirroring mode to run; possible values are "pool" or "image" (required). Refer to the mirroring modes Ceph documentation for more details.
    snapshotSchedules: Schedules snapshots at the pool level. One or more schedules are supported.
        interval: Frequency of the snapshots. The interval can be specified in days, hours, or minutes using the d, h, or m suffix respectively.
        startTime: Optional; determines at what time the snapshot process starts, specified using the ISO 8601 time format.
    peers: Configures mirroring peers. See the prerequisite RBD Mirror documentation first.
        secretNames: A list of peers to connect to. Currently only a single peer is supported, where a peer represents a Ceph cluster.
statusCheck: Sets up the pool mirroring status check.
    mirror: Displays the mirroring status.
        disabled: Whether to enable or disable the pool mirroring status check.
        interval: Time interval to refresh the mirroring status (default 60s).
quotas: Set byte and object quotas. See the ceph documentation for more info.
    maxSize: Quota in bytes as a string with quantity suffixes (e.g. "10Gi").
    maxObjects: Quota in objects as an integer.
Note
A value of 0 disables the quota.
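Pulling the parameters, mirroring, statusCheck, and quotas settings together, a minimal sketch of a pool spec could look like the following. The pool name, the peer secret name, and the specific values are illustrative, not values required by the CRD:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: mirrored-pool              # illustrative name
  namespace: rook-ceph
spec:
  replicated:
    size: 3
  parameters:
    compression_mode: aggressive   # BlueStore inline compression mode
    target_size_ratio: ".5"        # hint: expected share of total cluster capacity
  mirroring:
    enabled: true
    mode: image
    snapshotSchedules:
      - interval: 24h
        startTime: "14:00:00-05:00"
    peers:
      secretNames:
        - cluster-peer-token       # illustrative secret name; see the RBD Mirror docs
  statusCheck:
    mirror:
      disabled: false
      interval: 60s
  quotas:
    maxSize: "10Gi"
    maxObjects: 1000000            # a value of 0 disables a quota
```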
With poolProperties you can set any pool property:
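As a sketch, any such property can be supplied as a key/value pair under the pool spec's parameters (the property name and value below are illustrative):

```yaml
spec:
  parameters:
    # any Ceph pool property as a key/value string pair; illustrative example:
    pg_num_min: "8"
```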
This guide assumes you have created a Rook cluster as explained in the main Quickstart guide.
If any setting is unspecified, a suitable default will be used automatically.
The following settings configure the Ceph RBD Mirror daemon (CephRBDMirror):
name: The name that will be used for the Ceph RBD Mirror daemon.
namespace: The Kubernetes namespace that will be created for the Rook cluster. The services, pods, and other resources created by the operator will be added to this namespace.
count: The number of rbd mirror instances to run.
placement: The rbd mirror pods can be given standard Kubernetes placement restrictions with nodeAffinity, tolerations, podAffinity, and podAntiAffinity, similar to the placement defined for daemons configured by the cluster CRD.
annotations: Key value pair list of annotations to add.
labels: Key value pair list of labels to add.
resources: The resource requirements for the rbd mirror pods.
priorityClassName: The priority class to set on the rbd mirror pods.
Configure mirroring peers individually for each CephBlockPool. Refer to the CephBlockPool documentation for more detail. A minimal CR sketch follows below.
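A minimal CephRBDMirror CR using the settings above might look like this sketch (the metadata name and count are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror      # illustrative name
  namespace: rook-ceph     # the Rook cluster namespace
spec:
  count: 1                 # number of rbd mirror daemon instances
  # placement, annotations, labels, resources, and priorityClassName are
  # optional and follow the same conventions as daemons in the cluster CRD
```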