Admin Guide: integrate proofing corrections
taroth21 committed Apr 29, 2019
1 parent dd82381 commit 3a4a190
Showing 7 changed files with 44 additions and 42 deletions.
8 changes: 4 additions & 4 deletions xml/ha_cluster_lvm.xml
@@ -830,7 +830,7 @@ vdc 253:32 0 20G 0 disk
logical volume for a cmirrord setup on &productname; 11 or 12 as
described in <link
xlink:href="https://www.suse.com/documentation/sle-ha-12/singlehtml/book_sleha/book_sleha.html#sec.ha.clvm.config.cmirrord"
/>). </para>
/>.)</para>
</formalpara>
<para>
By default, <command>mdadm</command> reserves a certain amount of space
@@ -843,13 +843,13 @@ vdc 253:32 0 20G 0 disk
The <option>data-offset</option> must leave enough space on the device
for cluster MD to write its metadata to it. On the other hand, the offset
must be small enough for the remaining capacity of the device to accommodate
all physical volume extents of the migrated volume. Because the volume can
all physical volume extents of the migrated volume. Because the volume may
have spanned the complete device minus the mirror log, the offset must be
smaller than the size of the mirror log.
</para>
<para>
We recommend to set the <option>data-offset</option> to 128&nbsp;KB.
If no value is specified for the offset, its default value is 1&nbsp;KB
We recommend to set the <option>data-offset</option> to 128&nbsp;kB.
If no value is specified for the offset, its default value is 1&nbsp;kB
(1024&nbsp;bytes).
</para>
</listitem>
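
The data-offset discussion in the hunk above corresponds to the --data-offset option of mdadm --create. A minimal sketch of such a command, assuming a two-disk cluster MD mirror built from /dev/vdb and /dev/vdc (only vdc appears in the hunk context; the second device and the md name are assumptions):

    # Create a clustered MD mirror; --data-offset=128 reserves 128 kB for
    # metadata before the data area, as recommended in the paragraph above.
    mdadm --create /dev/md0 --bitmap=clustered --metadata=1.2 \
          --raid-devices=2 --level=mirror --data-offset=128 \
          /dev/vdb /dev/vdc
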
8 changes: 4 additions & 4 deletions xml/ha_concepts.xml
@@ -622,7 +622,7 @@
<title>Cluster Resource Manager (Pacemaker)</title>
<para>
Pacemaker as cluster resource manager is the <quote>brain</quote>
which reacts to events occurring in the cluster. Its is implemented as
which reacts to events occurring in the cluster. It is implemented as
<systemitem class="daemon">pacemaker-controld</systemitem>, the cluster
controller, which coordinates all actions. Events can be nodes that join
or leave the cluster, failure of resources, or scheduled activities such
@@ -637,7 +637,7 @@
The local resource manager is located between the Pacemaker layer and the
resources layer on each node. It is implemented as <systemitem
class="daemon">pacemaker-execd</systemitem> daemon. Through this daemon,
Pacemaker can start, stop and monitor resources.
Pacemaker can start, stop, and monitor resources.
</para>
</listitem>
</varlistentry>
@@ -688,8 +688,8 @@
<sect3 xml:id="sec.ha.architecture.layers.rsc">
<title>Resources and Resource Agents</title>
<para>
In an &ha; cluster, the services that need to be highly available are
called resources. Resource agents (RAs) are scripts that start, stop and
In a &ha; cluster, the services that need to be highly available are
called resources. Resource agents (RAs) are scripts that start, stop, and
monitor cluster resources.
</para>
</sect3>
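
As a rough sketch of the resource-agent contract described in the hunks above: OCF resource agents are ordinary scripts that pacemaker-execd calls with an action argument (start, stop, monitor), passing resource parameters as OCF_RESKEY_* environment variables. The IPaddr2 agent and the address below are illustrative assumptions:

    # Invoke an agent by hand the same way pacemaker-execd would.
    export OCF_ROOT=/usr/lib/ocf
    OCF_RESKEY_ip=192.168.1.50 \
        /usr/lib/ocf/resource.d/heartbeat/IPaddr2 monitor
    echo $?    # OCF return code: 0 = running, 7 = not running
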
6 changes: 3 additions & 3 deletions xml/ha_config_basics.xml
@@ -224,7 +224,7 @@
Whenever communication fails between one or more nodes and the rest of the
cluster, a cluster partition occurs. The nodes can only communicate with
other nodes in the same partition and are unaware of the separated nodes.
A cluster partition is defined as having quorum (can <quote>quorate</quote>)
A cluster partition is defined as having quorum (being <quote>quorate</quote>)
if it has the majority of nodes (or votes).
How this is achieved is done by <emphasis>quorum calculation</emphasis>.
Quorum is a requirement for fencing.
@@ -256,8 +256,8 @@ C = number of cluster nodes</screen>
We strongly recommend to use either a two-node cluster or an odd number
of cluster nodes.
Two-node clusters make sense for stretched setups across two sites.
Clusters with an odd number of nodes can be built on either one single
site or might being spread across three sites.
Clusters with an odd number of nodes can either be built on one single
site or might be spread across three sites.
</para>
</listitem>
</varlistentry>
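
A quick worked example of the quorum rule touched on above: with C = 5 cluster nodes, a partition needs a majority of the votes, that is at least 3, to be quorate, so after a 3/2 split only the three-node partition keeps quorum. Assuming corosync provides the membership and voting layer (not shown in this hunk), the current vote count and quorate state can be checked with:

    # Print membership and quorum status; "Quorate: Yes" means this partition
    # holds the majority of the configured votes.
    corosync-quorumtool -s
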
18 changes: 10 additions & 8 deletions xml/ha_fencing.xml
@@ -184,15 +184,16 @@
<term>pacemaker-fenced</term>
<listitem>
<para>
pacemaker-fenced is a daemon which can be accessed by local processes or over
<systemitem class="daemon">pacemaker-fenced</systemitem> is a daemon which can be accessed by local processes or over
the network. It accepts the commands which correspond to fencing
operations: reset, power-off, and power-on. It can also check the
status of the fencing device.
</para>
<para>
The pacemaker-fenced daemon runs on every node in the &ha; cluster. The
pacemaker-fenced instance running on the DC node receives a fencing request
from the pacemaker-controld. It is up to this and other pacemaker-fenced programs to carry
The <systemitem class="daemon">pacemaker-fenced</systemitem> daemon runs on every node in the &ha; cluster. The
<systemitem class="resource">pacemaker-fenced</systemitem> instance running on the DC node receives a fencing request
from the <systemitem class="daemon">pacemaker-controld</systemitem>. It
is up to this and other <systemitem class="daemon">pacemaker-fenced</systemitem> programs to carry
out the desired fencing operation.
</para>
</listitem>
@@ -210,8 +211,9 @@
<package>fence-agents</package> package, too,
the plug-ins contained there are installed in
<filename>/usr/sbin/fence_*</filename>.) All &stonith; plug-ins look
the same to pacemaker-fenced, but are quite different on the other side
reflecting the nature of the fencing device.
the same to <systemitem class="daemon">pacemaker-fenced</systemitem>,
but are quite different on the other side, reflecting the nature of the
fencing device.
</para>
<para>
Some plug-ins support more than one device. A typical example is
@@ -229,7 +231,7 @@

<para>
To set up fencing, you need to configure one or more &stonith;
resources&mdash;the pacemaker-fenced daemon requires no configuration. All
resources&mdash;the <systemitem class="daemon">pacemaker-fenced</systemitem> daemon requires no configuration. All
configuration is stored in the CIB. A &stonith; resource is a resource of
class <literal>stonith</literal> (see
<xref linkend="sec.ha.config.basics.raclasses"/>). &stonith; resources
@@ -328,7 +330,7 @@ commit</screen>
outcome. The only way to do that is to assume that the operation is
going to succeed and send the notification beforehand. But if the
operation fails, problems could arise. Therefore, by convention,
pacemaker-fenced refuses to terminate its host.
<systemitem class="daemon">pacemaker-fenced</systemitem> refuses to terminate its host.
</para>
</example>
<example>
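
As a hedged sketch of the STONITH resource configuration described in the hunks above (the external/ipmi plug-in, node name, and BMC credentials are illustrative assumptions):

    crm configure
    primitive fence-alice stonith:external/ipmi \
        params hostname=alice ipaddr=192.168.1.101 userid=admin \
               passwd=secret interface=lanplus \
        op monitor interval=60m timeout=120s
    commit

Once such a resource is running, a fencing operation can be triggered manually for testing, for example with stonith_admin --reboot alice from another node.
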
10 changes: 5 additions & 5 deletions xml/ha_glossary.xml
@@ -171,7 +171,7 @@
<glossdef>
<para>
The management entity responsible for coordinating all non-local
interactions in an &ha; cluster. The &hasi; uses Pacemaker as CRM.
interactions in a &ha; cluster. The &hasi; uses Pacemaker as CRM.
The CRM is implemented as <systemitem
class="daemon">pacemaker-controld</systemitem>. It interacts with several
components: local resource managers, both on its own node and on the other nodes,
@@ -282,8 +282,8 @@
isolated or failing cluster members. There are two classes of fencing:
resource level fencing and node level fencing. Resource level fencing ensures
exclusive access to a given resource. Node level fencing prevents a failed
node from accessing shared resources entirely and prevents that resources run
a node whose status is uncertain. This is usually done in a simple and
node from accessing shared resources entirely and prevents resources from running
on a node whose status is uncertain. This is usually done in a simple and
abrupt way: reset or power off the node.
</para>
</glossdef>
@@ -329,7 +329,7 @@ performance will be met during a contractual measurement period.</para>
The local resource manager is located between the Pacemaker layer and the
resources layer on each node. It is implemented as <systemitem
class="daemon">pacemaker-execd</systemitem> daemon. Through this daemon,
Pacemaker can start, stop and monitor resources.
Pacemaker can start, stop, and monitor resources.
</para>
</glossdef>
</glossentry>
@@ -419,7 +419,7 @@ performance will be met during a contractual measurement period.</para>
<glossentry xml:id="gloss.quorum"><glossterm>quorum</glossterm>
<glossdef>
<para>
In a cluster, a cluster partition is defined to have quorum (can
In a cluster, a cluster partition is defined to have quorum (be
<quote>quorate</quote>) if it has the majority of nodes (or votes).
Quorum distinguishes exactly one partition. It is part of the algorithm
to prevent several disconnected partitions or nodes from proceeding and
4 changes: 2 additions & 2 deletions xml/ha_hawk2_history_i.xml
@@ -317,7 +317,7 @@
<title>Viewing Transition Details in the History Explorer</title>
<para>
For each transition, the cluster saves a copy of the state which it provides
as input to <systemitem class="daemon">pacemaker-schedulerd</systemitem>.
as input to <systemitem class="daemon">pacemaker-schedulerd</systemitem>.
The path to this archive is logged. All
<filename>pe-*</filename> files are generated on the Designated
Coordinator (DC). As the DC can change in a cluster, there may be
@@ -376,7 +376,7 @@
<screen>crm history transition log <replaceable>peinput</replaceable></screen>
<para>
This includes details from the following daemons:
<systemitem class="daemon">pacemaker-schedulerd </systemitem>,
<systemitem class="daemon">pacemaker-schedulerd</systemitem>,
<systemitem class="daemon">pacemaker-controld</systemitem>, and
<systemitem class="daemon">pacemaker-execd</systemitem>.
</para>
32 changes: 16 additions & 16 deletions xml/ha_maintenance.xml
@@ -147,7 +147,7 @@ Node <replaceable>&node2;</replaceable>: standby

<variablelist>
<varlistentry xml:id="vle.ha.maint.mode.cluster">
<!--<term>Putting the Cluster into Maintenance Mode</term>-->
<!--<term>Putting the Cluster in Maintenance Mode</term>-->
<term><xref linkend="sec.ha.maint.mode.cluster" xrefstyle="select:title"/></term>
<listitem>
<para>
@@ -158,7 +158,7 @@ Node <replaceable>&node2;</replaceable>: standby
</listitem>
</varlistentry>
<varlistentry xml:id="vle.ha.maint.mode.node">
<!--<term>Putting a Node into Maintenance Mode</term>-->
<!--<term>Putting a Node in Maintenance Mode</term>-->
<term><xref linkend="sec.ha.maint.mode.node" xrefstyle="select:title"/></term>
<listitem>
<para>
@@ -169,7 +169,7 @@ Node <replaceable>&node2;</replaceable>: standby
</listitem>
</varlistentry>
<varlistentry xml:id="vle.ha.maint.node.standby">
<!--<term>Putting a Node into Standby Mode</term>-->
<!--<term>Putting a Node in Standby Mode</term>-->
<term><xref linkend="sec.ha.maint.node.standby" xrefstyle="select:title"/></term>
<listitem>
<para>
@@ -186,7 +186,7 @@ Node <replaceable>&node2;</replaceable>: standby
</listitem>
</varlistentry>
<varlistentry xml:id="vle.ha.maint.mode.rsc">
<!--<term>Putting a Resource into Maintenance Mode</term>-->
<!--<term>Putting a Resource in Maintenance Mode</term>-->
<term><xref linkend="sec.ha.maint.mode.rsc" xrefstyle="select:title"/></term>
<listitem>
<para>
@@ -266,16 +266,16 @@ Node <replaceable>&node2;</replaceable>: standby
</sect1>

<sect1 xml:id="sec.ha.maint.mode.cluster">
<title>Putting the Cluster into Maintenance Mode</title>
<title>Putting the Cluster in Maintenance Mode</title>
<para>
To put the cluster into maintenance mode on the &crmshell;, use the following command:</para>
To put the cluster in maintenance mode on the &crmshell;, use the following command:</para>
<screen>&prompt.root;<command>crm</command> configure property maintenance-mode=true</screen>
<para>
To put the cluster back into normal mode after your maintenance work is done, use the following command:</para>
To put the cluster back to normal mode after your maintenance work is done, use the following command:</para>
<screen>&prompt.root;<command>crm</command> configure property maintenance-mode=false</screen>

<procedure xml:id="pro.ha.maint.mode.cluster.hawk2">
<title>Putting the Cluster into Maintenance Mode with &hawk2;</title>
<title>Putting the Cluster in Maintenance Mode with &hawk2;</title>
<step>
<para>
Start a Web browser and log in to the cluster as described in
@@ -315,16 +315,16 @@ Node <replaceable>&node2;</replaceable>: standby
</sect1>

<sect1 xml:id="sec.ha.maint.mode.node">
<title>Putting a Node into Maintenance Mode</title>
<title>Putting a Node in Maintenance Mode</title>
<para>
To put a node into maintenance mode on the &crmshell;, use the following command:</para>
To put a node in maintenance mode on the &crmshell;, use the following command:</para>
<screen>&prompt.root;<command>crm</command> node maintenance <replaceable>NODENAME</replaceable></screen>
<para>
To put the node back into normal mode after your maintenance work is done, use the following command:</para>
To put the node back to normal mode after your maintenance work is done, use the following command:</para>
<screen>&prompt.root;<command>crm</command> node ready <replaceable>NODENAME</replaceable></screen>

<procedure xml:id="pro.ha.maint.mode.nodes.hawk2">
<title>Putting a Node into Maintenance Mode with &hawk2;</title>
<title>Putting a Node in Maintenance Mode with &hawk2;</title>
<step>
<para>
Start a Web browser and log in to the cluster as described in
@@ -352,16 +352,16 @@ Node <replaceable>&node2;</replaceable>: standby
</sect1>

<sect1 xml:id="sec.ha.maint.node.standby">
<title>Putting a Node into Standby Mode</title>
<title>Putting a Node in Standby Mode</title>
<para>
To put a node into standby mode on the &crmshell;, use the following command:</para>
To put a node in standby mode on the &crmshell;, use the following command:</para>
<screen>&prompt.root;crm node standby <replaceable>NODENAME</replaceable></screen>
<para>
To bring the node back online after your maintenance work is done, use the following command:</para>
<screen>&prompt.root;crm node online <replaceable>NODENAME</replaceable></screen>

<procedure xml:id="pro.ha.maint.node.standby.hawk2">
<title>Putting a Node into Standby Mode with &hawk2;</title>
<title>Putting a Node in Standby Mode with &hawk2;</title>
<step>
<para>
Start a Web browser and log in to the cluster as described in
@@ -518,7 +518,7 @@ Node <replaceable>&node2;</replaceable>: standby
</sect1>

<sect1 xml:id="sec.ha.maint.shutdown.node.maint.mode">
<title>Rebooting a Cluster Node While In Maintenance Mode</title>
<title>Rebooting a Cluster Node While in Maintenance Mode</title>
<note>
<title>Implications</title>
<para>
