
Commit

Deployed 522b49bbd to v1.12 in docs/rook with MkDocs 1.5.2 and mike 1.1.2
Rook committed Sep 7, 2023
1 parent 4fcd4ef commit 3cce0fd
Showing 5 changed files with 317 additions and 310 deletions.
619 changes: 313 additions & 306 deletions docs/rook/v1.12/CRDs/Cluster/ceph-cluster-crd/index.html

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/rook/v1.12/CRDs/specification/index.html

Large diffs are not rendered by default.

@@ -184,7 +184,7 @@
</code></pre></div></td></tr></table></div> <p>Restart the Rook operator pod and wait for CSI pods to be recreated.</p> <h2 id=osd-crush-settings>OSD CRUSH Settings<a class=headerlink href=#osd-crush-settings title="Permanent link">&para;</a></h2> <p>A useful view of the <a href=http://docs.ceph.com/docs/master/rados/operations/crush-map/ >CRUSH Map</a> is generated with the following command:</p> <div class=highlight><table class=highlighttable><tr><td class=linenos><div class=linenodiv><pre><span></span><span class=normal><a href=#__codelineno-11-1>1</a></span></pre></div></td><td class=code><div><pre><span></span><code><a id=__codelineno-11-1 name=__codelineno-11-1></a><span class=go>ceph osd tree</span>
</code></pre></div></td></tr></table></div> <p>In this section we will be tweaking some of the values seen in the output.</p> <h3 id=osd-weight>OSD Weight<a class=headerlink href=#osd-weight title="Permanent link">&para;</a></h3> <p>The CRUSH weight controls the ratio of data that should be distributed to each OSD. This also means a higher or lower amount of disk I/O operations for an OSD with higher/lower weight, respectively.</p> <p>By default, OSDs get a weight relative to their storage capacity, which maximizes overall cluster capacity by filling all drives at the same rate, even if drive sizes vary. This should work for most use cases, but the following situations could warrant weight changes:</p> <ul> <li>Your cluster has some relatively slow OSDs or nodes. Lowering their weight can reduce the impact of this bottleneck.</li> <li>You're using bluestore drives provisioned with Rook v0.3.1 or older. In this case you may notice OSD weights did not get set relative to their storage capacity. Changing the weight can fix this and maximize cluster capacity.</li> </ul> <p>This example sets the weight of <code>osd.0</code>, which is 600 GiB:</p> <div class=highlight><table class=highlighttable><tr><td class=linenos><div class=linenodiv><pre><span></span><span class=normal><a href=#__codelineno-12-1>1</a></span></pre></div></td><td class=code><div><pre><span></span><code><a id=__codelineno-12-1 name=__codelineno-12-1></a><span class=go>ceph osd crush reweight osd.0 .600</span>
</code></pre></div></td></tr></table></div> <h3 id=osd-primary-affinity>OSD Primary Affinity<a class=headerlink href=#osd-primary-affinity title="Permanent link">&para;</a></h3> <p>When pools are set with a size setting greater than one, data is replicated between nodes and OSDs. For every chunk of data a Primary OSD is selected to be used for reading that data to be sent to clients. You can control how likely it is for an OSD to become a Primary using the Primary Affinity setting. This is similar to the OSD weight setting, except it only affects reads on the storage device, not capacity or writes.</p> <p>In this example we will ensure that <code>osd.0</code> is only selected as Primary if all other OSDs holding data replicas are unavailable:</p> <div class=highlight><table class=highlighttable><tr><td class=linenos><div class=linenodiv><pre><span></span><span class=normal><a href=#__codelineno-13-1>1</a></span></pre></div></td><td class=code><div><pre><span></span><code><a id=__codelineno-13-1 name=__codelineno-13-1></a><span class=go>ceph osd primary-affinity osd.0 0</span>
</code></pre></div></td></tr></table></div> <h2 id=osd-dedicated-network>OSD Dedicated Network<a class=headerlink href=#osd-dedicated-network title="Permanent link">&para;</a></h2> <div class="admonition tip"> <p class=admonition-title>Tip</p> <p>This documentation is left for historical purposes. It is still valid, but Rook offers native support for this feature via the <a href=../../../CRDs/Cluster/ceph-cluster-crd/#ceph-public-and-cluster-networks>CephCluster network configuration</a>.</p> </div> <p>It is possible to configure ceph to leverage a dedicated network for the OSDs to communicate across. A useful overview is the <a href=http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks>Ceph Networks</a> section of the Ceph documentation. If you declare a cluster network, OSDs will route heartbeat, object replication, and recovery traffic over the cluster network. This may improve performance compared to using a single network, especially when slower network technologies are used. The tradeoff is additional expense and subtle failure modes.</p> <p>Two changes are necessary to the configuration to enable this capability:</p> <h3 id=use-hostnetwork-in-the-cluster-configuration>Use hostNetwork in the cluster configuration<a class=headerlink href=#use-hostnetwork-in-the-cluster-configuration title="Permanent link">&para;</a></h3> <p>Enable the <code>hostNetwork</code> setting in the <a href=../../../CRDs/Cluster/ceph-cluster-crd/#samples>Ceph Cluster CRD configuration</a>. For example,</p> <div class=highlight><table class=highlighttable><tr><td class=linenos><div class=linenodiv><pre><span></span><span class=normal><a href=#__codelineno-14-1>1</a></span>
<span class=normal><a href=#__codelineno-14-2>2</a></span></pre></div></td><td class=code><div><pre><span></span><code><a id=__codelineno-14-1 name=__codelineno-14-1></a><span class=w> </span><span class=nt>network</span><span class=p>:</span>
<a id=__codelineno-14-2 name=__codelineno-14-2></a><span class=w> </span><span class=nt>provider</span><span class=p>:</span><span class=w> </span><span class="l l-Scalar l-Scalar-Plain">host</span>
</code></pre></div></td></tr></table></div> <div class="admonition important"> <p class=admonition-title>Important</p> <p>Changing this setting is not supported in a running Rook cluster. Host networking should be configured when the cluster is first created.</p> </div> <h3 id=define-the-subnets-to-use-for-public-and-private-osd-networks>Define the subnets to use for public and private OSD networks<a class=headerlink href=#define-the-subnets-to-use-for-public-and-private-osd-networks title="Permanent link">&para;</a></h3> <p>Edit the <code>rook-config-override</code> configmap to define the custom network configuration:</p> <div class=highlight><table class=highlighttable><tr><td class=linenos><div class=linenodiv><pre><span></span><span class=normal><a href=#__codelineno-15-1>1</a></span></pre></div></td><td class=code><div><pre><span></span><code><a id=__codelineno-15-1 name=__codelineno-15-1></a><span class=go>kubectl -n rook-ceph edit configmap rook-config-override</span>
@@ -199,7 +199,7 @@
<a id=__codelineno-16-2 name=__codelineno-16-2></a><span class=nt>data</span><span class=p>:</span>
<a id=__codelineno-16-3 name=__codelineno-16-3></a><span class=w> </span><span class=nt>config</span><span class=p>:</span><span class=w> </span><span class="p p-Indicator">|</span>
<a id=__codelineno-16-4 name=__codelineno-16-4></a><span class=w> </span><span class=no>[global]</span>
<a id=__codelineno-16-5 name=__codelineno-16-5></a><span class=w> </span><span class=no>public network = 10.0.7.0/24</span>
<a id=__codelineno-16-6 name=__codelineno-16-6></a><span class=w> </span><span class=no>cluster network = 10.0.10.0/24</span>
<a id=__codelineno-16-7 name=__codelineno-16-7></a><span class=w> </span><span class=no>public addr = &quot;&quot;</span>
<a id=__codelineno-16-8 name=__codelineno-16-8></a><span class=w> </span><span class=no>cluster addr = &quot;&quot;</span>
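As a local sanity check (an editor's sketch, not part of the commit), the `[global]` override shown in the hunk above can be staged in a scratch file and inspected before pasting it into the `rook-config-override` ConfigMap. The subnets below are the documentation's example values; substitute your own public and cluster networks:

```shell
# Stage the Ceph [global] override locally before editing the ConfigMap.
# 10.0.7.0/24 and 10.0.10.0/24 are the example subnets from the docs.
cat > /tmp/ceph-override.conf <<'EOF'
[global]
public network = 10.0.7.0/24
cluster network = 10.0.10.0/24
public addr = ""
cluster addr = ""
EOF

# Confirm both the public and cluster network lines are present.
grep -c ' network = ' /tmp/ceph-override.conf
```

Once the snippet looks right, paste it into the `config` key via `kubectl -n rook-ceph edit configmap rook-config-override`, as the documentation describes.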
2 changes: 1 addition & 1 deletion docs/rook/v1.12/search/search_index.json

Large diffs are not rendered by default.

Binary file modified docs/rook/v1.12/sitemap.xml.gz
Binary file not shown.
