From f73583f4a1d666ea97f1cba2de9afb54c4604d97 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 14 Sep 2023 11:02:13 +1000 Subject: [PATCH 01/39] Remove installation content from admin guide jsc#PED-2842 --- xml/book_administration.xml | 12 ------------ 1 file changed, 12 deletions(-) diff --git a/xml/book_administration.xml b/xml/book_administration.xml index 9641bdd9..819f3fd7 100644 --- a/xml/book_administration.xml +++ b/xml/book_administration.xml @@ -55,18 +55,6 @@ - - - - - Installation and setup - - - - - - - From 9eb85ac8dba15dcc29af41bf21d0303ee2604452 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 14 Sep 2023 13:28:35 +1000 Subject: [PATCH 02/39] Add initial skeleton of full install guide jsc#PED-2842 --- DC-SLE-HA-full-install | 25 +++++++++++ xml/MAIN.SLEHA.xml | 3 ++ xml/book_full_install.xml | 88 +++++++++++++++++++++++++++++++++++++++ xml/ha_install_intro.xml | 30 +++++++++++++ 4 files changed, 146 insertions(+) create mode 100644 DC-SLE-HA-full-install create mode 100644 xml/book_full_install.xml create mode 100644 xml/ha_install_intro.xml diff --git a/DC-SLE-HA-full-install b/DC-SLE-HA-full-install new file mode 100644 index 00000000..a8aa8ad6 --- /dev/null +++ b/DC-SLE-HA-full-install @@ -0,0 +1,25 @@ +## ---------------------------- +## Doc Config File for SUSE Linux Enterprise High Availability Extension +## Full installation guide +## ---------------------------- +## +## Basics +MAIN="MAIN.SLEHA.xml" +ROOTID=book-full-install + +## Profiling +PROFOS="sles" +PROFCONDITION="suse-product" + +## stylesheet location +STYLEROOT="/usr/share/xml/docbook/stylesheet/suse2022-ns" +FALLBACK_STYLEROOT="/usr/share/xml/docbook/stylesheet/suse-ns" + +## enable sourcing +export DOCCONF=$BASH_SOURCE + +##do not show remarks directly in the (PDF) text +#XSLTPARAM="--param use.xep.annotate.pdf=0" + +### Sort the glossary +XSLTPARAM="--param glossary.sort=1" diff --git a/xml/MAIN.SLEHA.xml b/xml/MAIN.SLEHA.xml index b452e831..b9b7b334 100644 --- a/xml/MAIN.SLEHA.xml +++ b/xml/MAIN.SLEHA.xml @@ -42,6 +42,9 @@ + + + diff --git a/xml/book_full_install.xml b/xml/book_full_install.xml new file mode 100644 index 00000000..93fa1734 --- /dev/null +++ b/xml/book_full_install.xml @@ -0,0 +1,88 @@ + + + + %entities; +]> + + + + + + + + Installing High Availability clusters for critical workloads + &productname; + &productnameshort; + &productnumber; + + + + + + + TBD + + + + + yes + + + + + + + + + Planning for deployment + + + + + + + + + + Installing HA nodes + + + + + + + + + + Additional configuration + + + + + + + + + Testing the setup + + + + + + + + + + diff --git a/xml/ha_install_intro.xml b/xml/ha_install_intro.xml new file mode 100644 index 00000000..dd655a3d --- /dev/null +++ b/xml/ha_install_intro.xml @@ -0,0 +1,30 @@ + + + %entities; +]> + + + Preface + + + + editing + + + yes + + + + + + + + + + + From 6b583b6fb788353a6173b90f8a1dbb11ba7a05a9 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 9 Nov 2023 17:19:20 +1000 Subject: [PATCH 03/39] Move Architecture section above Benefits --- xml/ha_concepts.xml | 348 ++++++++++++++++++++++---------------------- 1 file changed, 174 insertions(+), 174 deletions(-) diff --git a/xml/ha_concepts.xml b/xml/ha_concepts.xml index 7f97ae5b..6e400150 100644 --- a/xml/ha_concepts.xml +++ b/xml/ha_concepts.xml @@ -334,6 +334,180 @@ + + Architecture + + This section provides a brief overview of 
&productname; architecture. It
   identifies the architectural components and describes how those
   components interoperate.



   Architecture layers

    &productname; has a layered architecture. The figure below illustrates
    the different layers and their associated components.

+ Architecture + + + + + + + + +
+ + + Membership and messaging layer (Corosync) + + This component provides reliable messaging, membership, and quorum information + about the cluster. This is handled by the Corosync cluster engine, a group + communication system. + + + + Cluster resource manager (Pacemaker) + + Pacemaker as cluster resource manager is the brain + which reacts to events occurring in the cluster. It is implemented as + pacemaker-controld, the cluster + controller, which coordinates all actions. Events can be nodes that join + or leave the cluster, failure of resources, or scheduled activities such + as maintenance, for example. + + + + Local resource manager + + + + The local resource manager is located between the Pacemaker layer and the + resources layer on each node. It is implemented as pacemaker-execd daemon. Through this daemon, + Pacemaker can start, stop, and monitor resources. + + + + + Cluster Information Database (CIB) + + + On every node, Pacemaker maintains the cluster information database + (CIB). It is an XML representation of the cluster configuration + (including cluster options, nodes, resources, constraints and the + relationship to each other). The CIB also reflects the current cluster + status. Each cluster node contains a CIB replica, which is synchronized + across the whole cluster. The pacemaker-based + daemon takes care of reading and writing cluster configuration and + status. + + + + Designated Coordinator (DC) + + + The DC is elected from all nodes in the cluster. This happens if there + is no DC yet or if the current DC leaves the cluster for any reason. + The DC is the only entity in the cluster that can decide that a + cluster-wide change needs to be performed, such as fencing a node or + moving resources around. All other nodes get their configuration and + resource allocation information from the current DC. + + + + + Policy Engine + + + + The policy engine runs on every node, but the one on the DC is the active + one. The engine is implemented as + pacemaker-schedulerd daemon. + When a cluster transition is needed, based on the current state and + configuration, pacemaker-schedulerd + calculates the expected next state of the cluster. It determines what + actions need to be scheduled to achieve the next state. + + + + + + + Resources and resource agents + + In a &ha; cluster, the services that need to be highly available are + called resources. Resource agents (RAs) are scripts that start, stop, and + monitor cluster resources. + + +
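   To see which resource agents are available on a node, you can query them with the
   &crmshell;. The following commands are only an illustrative sketch; they assume the
   cluster packages are already installed on the node:

&prompt.root;crm ra classes
&prompt.root;crm ra list ocf heartbeat

   The second command lists the OCF resource agents shipped by the heartbeat
   provider, which include agents such as IPaddr2 and Filesystem.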
+ + + Process flow + + The pacemakerd daemon launches and + monitors all other related daemons. The daemon that coordinates all actions, + pacemaker-controld, has an instance on + each cluster node. Pacemaker centralizes all cluster decision-making by + electing one of those instances as a primary. Should the elected pacemaker-controld daemon fail, a new primary is + established. + + + Many actions performed in the cluster will cause a cluster-wide change. + These actions can include things like adding or removing a cluster + resource or changing resource constraints. It is important to understand + what happens in the cluster when you perform such an action. + + + For example, suppose you want to add a cluster IP address resource. To + do this, you can use the &crmshell; or the Web interface to modify the CIB. + It is not required to perform the actions on the DC. + You can use either tool on any node in the cluster and they will be + relayed to the DC. The DC will then replicate the CIB change to all + cluster nodes. + + + Based on the information in the CIB, the pacemaker-schedulerd then computes the ideal + state of the cluster and how it should be achieved. It feeds a list of + instructions to the DC. The DC sends commands via the messaging/infrastructure + layer which are received by the pacemaker-controld peers on + other nodes. Each of them uses its local resource agent executor (implemented + as pacemaker-execd) to perform + resource modifications. The pacemaker-execd is not cluster-aware and interacts + directly with resource agents. + + + All peer nodes report the results of their operations back to the DC. + After the DC concludes that all necessary operations are successfully + performed in the cluster, the cluster will go back to the idle state and + wait for further events. If any operation was not carried out as + planned, the pacemaker-schedulerd + is invoked again with the new information recorded in + the CIB. + + + In some cases, it might be necessary to power off nodes to protect shared + data or complete resource recovery. In a Pacemaker cluster, the implementation + of node level fencing is &stonith;. For this, Pacemaker comes with a + fencing subsystem, pacemaker-fenced. + &stonith; devices must be configured as cluster resources (that use + specific fencing agents), because this allows monitoring of the fencing devices. + When clients detect a failure, they send a request to pacemaker-fenced, + which then executes the fencing agent to bring down the node. + + +
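   As a concrete illustration of the process flow described above, a cluster IP address
   resource can be added from any node with the &crmshell;. The following command is only
   a sketch; the resource name, IP address and monitoring interval are assumptions and
   must be adapted to your environment:

&prompt.root;crm configure primitive admin-ip ocf:heartbeat:IPaddr2 params ip=192.168.1.10 op monitor interval=10s

   As soon as the change is committed to the CIB on the node where the command was run,
   the DC replicates it to all other cluster nodes and schedules the actions needed to
   start the resource.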
Benefits @@ -624,179 +798,5 @@ - - Architecture - - - This section provides a brief overview of &productname; architecture. It - identifies and provides information on the architectural components, and - describes how those components interoperate. - - - - Architecture layers - - &productname; has a layered architecture. - illustrates - the different layers and their associated components. - -
- Architecture - - - - - - - - -
- - - Membership and messaging layer (Corosync) - - This component provides reliable messaging, membership, and quorum information - about the cluster. This is handled by the Corosync cluster engine, a group - communication system. - - - - Cluster resource manager (Pacemaker) - - Pacemaker as cluster resource manager is the brain - which reacts to events occurring in the cluster. It is implemented as - pacemaker-controld, the cluster - controller, which coordinates all actions. Events can be nodes that join - or leave the cluster, failure of resources, or scheduled activities such - as maintenance, for example. - - - - Local resource manager - - - - The local resource manager is located between the Pacemaker layer and the - resources layer on each node. It is implemented as pacemaker-execd daemon. Through this daemon, - Pacemaker can start, stop, and monitor resources. - - - - - Cluster Information Database (CIB) - - - On every node, Pacemaker maintains the cluster information database - (CIB). It is an XML representation of the cluster configuration - (including cluster options, nodes, resources, constraints and the - relationship to each other). The CIB also reflects the current cluster - status. Each cluster node contains a CIB replica, which is synchronized - across the whole cluster. The pacemaker-based - daemon takes care of reading and writing cluster configuration and - status. - - - - Designated Coordinator (DC) - - - The DC is elected from all nodes in the cluster. This happens if there - is no DC yet or if the current DC leaves the cluster for any reason. - The DC is the only entity in the cluster that can decide that a - cluster-wide change needs to be performed, such as fencing a node or - moving resources around. All other nodes get their configuration and - resource allocation information from the current DC. - - - - - Policy Engine - - - - The policy engine runs on every node, but the one on the DC is the active - one. The engine is implemented as - pacemaker-schedulerd daemon. - When a cluster transition is needed, based on the current state and - configuration, pacemaker-schedulerd - calculates the expected next state of the cluster. It determines what - actions need to be scheduled to achieve the next state. - - - - - - - Resources and resource agents - - In a &ha; cluster, the services that need to be highly available are - called resources. Resource agents (RAs) are scripts that start, stop, and - monitor cluster resources. - - -
- - Process flow - - The pacemakerd daemon launches and - monitors all other related daemons. The daemon that coordinates all actions, - pacemaker-controld, has an instance on - each cluster node. Pacemaker centralizes all cluster decision-making by - electing one of those instances as a primary. Should the elected pacemaker-controld daemon fail, a new primary is - established. - - - Many actions performed in the cluster will cause a cluster-wide change. - These actions can include things like adding or removing a cluster - resource or changing resource constraints. It is important to understand - what happens in the cluster when you perform such an action. - - - For example, suppose you want to add a cluster IP address resource. To - do this, you can use the &crmshell; or the Web interface to modify the CIB. - It is not required to perform the actions on the DC. - You can use either tool on any node in the cluster and they will be - relayed to the DC. The DC will then replicate the CIB change to all - cluster nodes. - - - Based on the information in the CIB, the pacemaker-schedulerd then computes the ideal - state of the cluster and how it should be achieved. It feeds a list of - instructions to the DC. The DC sends commands via the messaging/infrastructure - layer which are received by the pacemaker-controld peers on - other nodes. Each of them uses its local resource agent executor (implemented - as pacemaker-execd) to perform - resource modifications. The pacemaker-execd is not cluster-aware and interacts - directly with resource agents. - - - All peer nodes report the results of their operations back to the DC. - After the DC concludes that all necessary operations are successfully - performed in the cluster, the cluster will go back to the idle state and - wait for further events. If any operation was not carried out as - planned, the pacemaker-schedulerd - is invoked again with the new information recorded in - the CIB. - - - In some cases, it might be necessary to power off nodes to protect shared - data or complete resource recovery. In a Pacemaker cluster, the implementation - of node level fencing is &stonith;. For this, Pacemaker comes with a - fencing subsystem, pacemaker-fenced. - &stonith; devices must be configured as cluster resources (that use - specific fencing agents), because this allows monitoring of the fencing devices. - When clients detect a failure, they send a request to pacemaker-fenced, - which then executes the fencing agent to bring down the node. - - -
From f74a2c52251b8a605b50eeeae0f6d66cc89c491a Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 9 Nov 2023 17:25:53 +1000 Subject: [PATCH 04/39] Move storage config examples into the architecture section --- xml/ha_concepts.xml | 165 ++++++++++++++++++++++---------------------- 1 file changed, 82 insertions(+), 83 deletions(-) diff --git a/xml/ha_concepts.xml b/xml/ha_concepts.xml index 6e400150..347b2b6b 100644 --- a/xml/ha_concepts.xml +++ b/xml/ha_concepts.xml @@ -507,6 +507,88 @@ which then executes the fencing agent to bring down the node. + + Cluster configurations: storage + + + Cluster configurations with &productname; might or might not include a + shared disk subsystem. The shared disk subsystem can be connected via + high-speed Fibre Channel cards, cables, and switches, or it can be + configured to use iSCSI. If a node fails, another designated node in + the cluster automatically mounts the shared disk directories that were + previously mounted on the failed node. This gives network users + continuous access to the directories on the shared disk subsystem. + + + + Shared disk subsystem with LVM + + When using a shared disk subsystem with LVM, that subsystem must be + connected to all servers in the cluster from which it needs to be + accessed. + + + + + Typical resources might include data, applications, and services. The + following figures show how a typical Fibre Channel cluster configuration + might look. + The green lines depict connections to an Ethernet power switch. Such + a device can be controlled over a network and can reboot + a node when a ping request fails. + + +
+ Typical Fibre Channel cluster configuration + + + + + + + + +
+ + + Although Fibre Channel provides the best performance, you can also + configure your cluster to use iSCSI. iSCSI is an alternative to Fibre + Channel that can be used to create a low-cost Storage Area Network (SAN). + The following figure shows how a typical iSCSI cluster configuration + might look. + + +
+ Typical iSCSI cluster configuration + + + + + + + + +
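   If you use iSCSI, each node must be able to reach the iSCSI target before the shared
   storage can be used by cluster resources. The following commands are only a sketch of
   a manual discovery and login with the open-iscsi tools; the portal address
   192.168.1.100 is an example value, not part of this configuration:

&prompt.root;iscsiadm -m discovery -t sendtargets -p 192.168.1.100
&prompt.root;iscsiadm -m node --login

   After a successful login, the target appears as a local block device on the node and
   can be referenced by cluster resources such as file systems or SBD.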


    Although most clusters include a shared disk subsystem, it is also
    possible to create a cluster without one. The following figure shows
    how such a cluster might look.


+ Typical cluster configuration without shared storage + + + + + + + + +
+
Benefits @@ -716,87 +798,4 @@ or increasing performance or accessibility of the Web sites. - - Cluster configurations: storage - - - Cluster configurations with &productname; might or might not include a - shared disk subsystem. The shared disk subsystem can be connected via - high-speed Fibre Channel cards, cables, and switches, or it can be - configured to use iSCSI. If a node fails, another designated node in - the cluster automatically mounts the shared disk directories that were - previously mounted on the failed node. This gives network users - continuous access to the directories on the shared disk subsystem. - - - - Shared disk subsystem with LVM - - When using a shared disk subsystem with LVM, that subsystem must be - connected to all servers in the cluster from which it needs to be - accessed. - - - - - Typical resources might include data, applications, and services. The - following figures show how a typical Fibre Channel cluster configuration - might look. - The green lines depict connections to an Ethernet power switch. Such - a device can be controlled over a network and can reboot - a node when a ping request fails. - - -
- Typical Fibre Channel cluster configuration - - - - - - - - -
- - - Although Fibre Channel provides the best performance, you can also - configure your cluster to use iSCSI. iSCSI is an alternative to Fibre - Channel that can be used to create a low-cost Storage Area Network (SAN). - The following figure shows how a typical iSCSI cluster configuration - might look. - - -
- Typical iSCSI cluster configuration - - - - - - - - -
- - - Although most clusters include a shared disk subsystem, it is also - possible to create a cluster without a shared disk subsystem. The - following figure shows how a cluster without a shared disk subsystem - might look. - - -
- Typical cluster configuration without shared storage - - - - - - - - -
-
- From 9f3e56f39a1efb81955a7d50d2032b3fac045baa Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 9 Nov 2023 17:33:50 +1000 Subject: [PATCH 05/39] Remove xref to the glossary It's no longer in the same guide, and the weirdly the xref resulted in a blank space instead of the Admin Guide's title --- xml/ha_concepts.xml | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/xml/ha_concepts.xml b/xml/ha_concepts.xml index 347b2b6b..4475e617 100644 --- a/xml/ha_concepts.xml +++ b/xml/ha_concepts.xml @@ -30,11 +30,7 @@ overview of the architecture, describing the individual architecture layers and processes within the cluster. - - For explanations of some common terms used in the context of &ha; - clusters, refer to . - - + editing From de27d8ba698dc28b768a2c051947a2f2a4a8338e Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 9 Nov 2023 17:49:32 +1000 Subject: [PATCH 06/39] Move Architecture back to the end of the chapter Oops I changed my mind --- xml/ha_concepts.xml | 418 ++++++++++++++++++++++---------------------- 1 file changed, 209 insertions(+), 209 deletions(-) diff --git a/xml/ha_concepts.xml b/xml/ha_concepts.xml index 4475e617..69a7e2d4 100644 --- a/xml/ha_concepts.xml +++ b/xml/ha_concepts.xml @@ -330,7 +330,215 @@ - + + Benefits + + + &productname; allows you to configure up to 32 Linux servers into a + high-availability cluster (HA cluster). Resources can be + dynamically switched or moved to any node in the cluster. Resources can + be configured to automatically migrate if a node fails, or they can be + moved manually to troubleshoot hardware or balance the workload. + + + + &productname; provides high availability from commodity components. Lower + costs are obtained through the consolidation of applications and + operations onto a cluster. &productname; also allows you to centrally + manage the complete cluster. You can adjust resources to meet changing + workload requirements (thus, manually load balance the + cluster). Allowing clusters of more than two nodes also provides savings + by allowing several nodes to share a hot spare. + + + + An equally important benefit is the potential reduction of unplanned + service outages and planned outages for software and hardware + maintenance and upgrades. + + + + Reasons that you would want to implement a cluster include: + + + + + + Increased availability + + + + + Improved performance + + + + + Low cost of operation + + + + + Scalability + + + + + Disaster recovery + + + + + Data protection + + + + + Server consolidation + + + + + Storage consolidation + + + + + + Shared disk fault tolerance can be obtained by implementing RAID on the + shared disk subsystem. + + + + The following scenario illustrates some benefits &productname; can + provide. + + + Example cluster scenario + + + Suppose you have configured a three-node cluster, with a Web server + installed on each of the three nodes in the cluster. Each of the + nodes in the cluster hosts two Web sites. All the data, graphics, and + Web page content for each Web site are stored on a shared disk subsystem + connected to each of the nodes in the cluster. The following figure + depicts how this setup might look. + + +
+ Three-server cluster + + + + + + + + +
+ + + During normal cluster operation, each node is in constant communication + with the other nodes in the cluster and performs periodic polling of + all registered resources to detect failure. + + + + Suppose Web Server 1 experiences hardware or software problems and the + users depending on Web Server 1 for Internet access, e-mail, and + information lose their connections. The following figure shows how + resources are moved when Web Server 1 fails. + + +
+ Three-server cluster after one server fails + + + + + + + + +
+ + + Web Site A moves to Web Server 2 and Web Site B moves to Web Server 3. IP + addresses and certificates also move to Web Server 2 and Web Server 3. + + + + When you configured the cluster, you decided where the Web sites hosted + on each Web server would go should a failure occur. In the previous + example, you configured Web Site A to move to Web Server 2 and Web Site B + to move to Web Server 3. This way, the workload formerly handled by Web + Server 1 continues to be available and is evenly distributed between any + surviving cluster members. + + + + When Web Server 1 failed, the &ha; software did the following: + + + + + + Detected a failure and verified with &stonith; that Web Server 1 was + really dead. &stonith; is an acronym for Shoot The Other Node + In The Head. It is a means of bringing down misbehaving nodes + to prevent them from causing trouble in the cluster. + + + + + Remounted the shared data directories that were formerly mounted on Web + server 1 on Web Server 2 and Web Server 3. + + + + + Restarted applications that were running on Web Server 1 on Web Server + 2 and Web Server 3. + + + + + Transferred IP addresses to Web Server 2 and Web Server 3. + + + + + + In this example, the failover process happened quickly and users regained + access to Web site information within seconds, usually without needing to + log in again. + + + + Now suppose the problems with Web Server 1 are resolved, and Web Server 1 + is returned to a normal operating state. Web Site A and Web Site B can + either automatically fail back (move back) to Web Server 1, or they can + stay where they are. This depends on how you configured the resources for + them. Migrating the services back to Web Server 1 will incur some + down-time. Therefore &productname; also allows you to defer the migration until + a period when it will cause little or no service interruption. There are + advantages and disadvantages to both alternatives. + + + + &productname; also provides resource migration capabilities. You can move + applications, Web sites, etc. to other servers in your cluster as + required for system management. + + + + For example, you could have manually moved Web Site A or Web Site B from + Web Server 1 to either of the other servers in the cluster. Use cases for + this are upgrading or performing scheduled maintenance on Web Server 1, + or increasing performance or accessibility of the Web sites. + +
+ Architecture This section provides a brief overview of &productname; architecture. It @@ -586,212 +794,4 @@ - - Benefits - - - &productname; allows you to configure up to 32 Linux servers into a - high-availability cluster (HA cluster). Resources can be - dynamically switched or moved to any node in the cluster. Resources can - be configured to automatically migrate if a node fails, or they can be - moved manually to troubleshoot hardware or balance the workload. - - - - &productname; provides high availability from commodity components. Lower - costs are obtained through the consolidation of applications and - operations onto a cluster. &productname; also allows you to centrally - manage the complete cluster. You can adjust resources to meet changing - workload requirements (thus, manually load balance the - cluster). Allowing clusters of more than two nodes also provides savings - by allowing several nodes to share a hot spare. - - - - An equally important benefit is the potential reduction of unplanned - service outages and planned outages for software and hardware - maintenance and upgrades. - - - - Reasons that you would want to implement a cluster include: - - - - - - Increased availability - - - - - Improved performance - - - - - Low cost of operation - - - - - Scalability - - - - - Disaster recovery - - - - - Data protection - - - - - Server consolidation - - - - - Storage consolidation - - - - - - Shared disk fault tolerance can be obtained by implementing RAID on the - shared disk subsystem. - - - - The following scenario illustrates some benefits &productname; can - provide. - - - Example cluster scenario - - - Suppose you have configured a three-node cluster, with a Web server - installed on each of the three nodes in the cluster. Each of the - nodes in the cluster hosts two Web sites. All the data, graphics, and - Web page content for each Web site are stored on a shared disk subsystem - connected to each of the nodes in the cluster. The following figure - depicts how this setup might look. - - -
- Three-server cluster - - - - - - - - -
- - - During normal cluster operation, each node is in constant communication - with the other nodes in the cluster and performs periodic polling of - all registered resources to detect failure. - - - - Suppose Web Server 1 experiences hardware or software problems and the - users depending on Web Server 1 for Internet access, e-mail, and - information lose their connections. The following figure shows how - resources are moved when Web Server 1 fails. - - -
- Three-server cluster after one server fails - - - - - - - - -
- - - Web Site A moves to Web Server 2 and Web Site B moves to Web Server 3. IP - addresses and certificates also move to Web Server 2 and Web Server 3. - - - - When you configured the cluster, you decided where the Web sites hosted - on each Web server would go should a failure occur. In the previous - example, you configured Web Site A to move to Web Server 2 and Web Site B - to move to Web Server 3. This way, the workload formerly handled by Web - Server 1 continues to be available and is evenly distributed between any - surviving cluster members. - - - - When Web Server 1 failed, the &ha; software did the following: - - - - - - Detected a failure and verified with &stonith; that Web Server 1 was - really dead. &stonith; is an acronym for Shoot The Other Node - In The Head. It is a means of bringing down misbehaving nodes - to prevent them from causing trouble in the cluster. - - - - - Remounted the shared data directories that were formerly mounted on Web - server 1 on Web Server 2 and Web Server 3. - - - - - Restarted applications that were running on Web Server 1 on Web Server - 2 and Web Server 3. - - - - - Transferred IP addresses to Web Server 2 and Web Server 3. - - - - - - In this example, the failover process happened quickly and users regained - access to Web site information within seconds, usually without needing to - log in again. - - - - Now suppose the problems with Web Server 1 are resolved, and Web Server 1 - is returned to a normal operating state. Web Site A and Web Site B can - either automatically fail back (move back) to Web Server 1, or they can - stay where they are. This depends on how you configured the resources for - them. Migrating the services back to Web Server 1 will incur some - down-time. Therefore &productname; also allows you to defer the migration until - a period when it will cause little or no service interruption. There are - advantages and disadvantages to both alternatives. - - - - &productname; also provides resource migration capabilities. You can move - applications, Web sites, etc. to other servers in your cluster as - required for system management. - - - - For example, you could have manually moved Web Site A or Web Site B from - Web Server 1 to either of the other servers in the cluster. Use cases for - this are upgrading or performing scheduled maintenance on Web Server 1, - or increasing performance or accessibility of the Web sites. - -
From 3830285fd4521f51dcdfb2ebe0cb8fb53b014773 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Fri, 10 Nov 2023 16:29:59 +1000 Subject: [PATCH 07/39] Add 'Starting the' to 'YaST Cluster module' --- xml/ha_yast_cluster.xml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/xml/ha_yast_cluster.xml b/xml/ha_yast_cluster.xml index dd73e56a..56af2b22 100644 --- a/xml/ha_yast_cluster.xml +++ b/xml/ha_yast_cluster.xml @@ -204,13 +204,13 @@
- &yast; <guimenu>Cluster</guimenu> module + Starting the &yast; <guimenu>Cluster</guimenu> module Start &yast; and select &ha; Cluster . Alternatively, start the - module from command line: + module from the command line: - sudo yast2 cluster + &prompt.user;sudo yast2 cluster The following list shows an overview of the available screens in the From a21c4d07f5ab1623d666d2253946abab34011d9f Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 24 Jan 2024 14:10:40 +1000 Subject: [PATCH 08/39] Add (empty) new chapter files --- xml/book_full_install.xml | 3 +++ xml/ha_add_nodes.xml | 28 ++++++++++++++++++++++++++++ xml/ha_bootstrap_install.xml | 28 ++++++++++++++++++++++++++++ xml/ha_installation_overview.xml | 28 ++++++++++++++++++++++++++++ 4 files changed, 87 insertions(+) create mode 100644 xml/ha_add_nodes.xml create mode 100644 xml/ha_bootstrap_install.xml create mode 100644 xml/ha_installation_overview.xml diff --git a/xml/book_full_install.xml b/xml/book_full_install.xml index 93fa1734..ae670e77 100644 --- a/xml/book_full_install.xml +++ b/xml/book_full_install.xml @@ -46,6 +46,7 @@ + @@ -55,7 +56,9 @@ Installing HA nodes + + diff --git a/xml/ha_add_nodes.xml b/xml/ha_add_nodes.xml new file mode 100644 index 00000000..74b1d7ef --- /dev/null +++ b/xml/ha_add_nodes.xml @@ -0,0 +1,28 @@ + + + + %entities; +]> + + + Adding more nodes + + + + + + + + + yes + + + + diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml new file mode 100644 index 00000000..cf362e14 --- /dev/null +++ b/xml/ha_bootstrap_install.xml @@ -0,0 +1,28 @@ + + + + %entities; +]> + + + Using the bootstrap script + + + + + + + + + yes + + + + diff --git a/xml/ha_installation_overview.xml b/xml/ha_installation_overview.xml new file mode 100644 index 00000000..123d12cc --- /dev/null +++ b/xml/ha_installation_overview.xml @@ -0,0 +1,28 @@ + + + + %entities; +]> + + + Installation overview + + + +You can also use a combination of both setup methods, for example: set up one node with YaST cluster and then use one of the bootstrap scripts to integrate more nodes (or vice versa). + + + + + yes + + + + From 4805b2aa36182dc3284990456f2d84a191ba6afe Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 24 Jan 2024 16:33:51 +1000 Subject: [PATCH 09/39] Move autoyast to new Add Nodes chapter --- xml/ha_add_nodes.xml | 129 +++++++++++++++++++++++++++++++++++++++ xml/ha_install.xml | 141 ------------------------------------------- 2 files changed, 129 insertions(+), 141 deletions(-) diff --git a/xml/ha_add_nodes.xml b/xml/ha_add_nodes.xml index 74b1d7ef..23194009 100644 --- a/xml/ha_add_nodes.xml +++ b/xml/ha_add_nodes.xml @@ -25,4 +25,133 @@
+ + + Adding nodes with &ay; + + + After you have installed and set up a two-node cluster, you can extend the + cluster by cloning existing nodes with &ay; and adding the clones to the cluster. + + + &ay; uses profiles that contains installation and configuration data. + A profile tells &ay; what to install and how to configure the installed system to + get a ready-to-use system in the end. This profile can then be used + for mass deployment in different ways (for example, to clone existing + cluster nodes). + + + For detailed instructions on how to use &ay; in various scenarios, + see the + &ayguide; for &sls; &productnumber;. + + + + Identical hardware + + assumes you are rolling + out &productname; &productnumber; to a set of machines with identical hardware + configurations. + + + If you need to deploy cluster nodes on non-identical hardware, refer to the + &deploy; for &sls; &productnumber;, + chapter Automated Installation, section + Rule-Based Autoinstallation. + + + + + Cloning a cluster node with &ay; + + + Make sure the node you want to clone is correctly installed and + configured. For details, see the &haquick; or + . + + + + + Follow the description outlined in the &sle; + &productnumber; &deploy; for simple mass + installation. This includes the following basic steps: + + + + + Creating an &ay; profile. Use the &ay; GUI to create and modify + a profile based on the existing system configuration. In &ay;, + choose the &ha; module and click the + Clone button. If needed, adjust the configuration + in the other modules and save the resulting control file as XML. + + + If you have configured DRBD, you can select and clone this module in + the &ay; GUI, too. + + + + + Determining the source of the &ay; profile and the parameter to + pass to the installation routines for the other nodes. + + + + + Determining the source of the &sls; and &productname; + installation data. + + + + + Determining and setting up the boot scenario for autoinstallation. + + + + + Passing the command line to the installation routines, either by + adding the parameters manually or by creating an + info file. + + + + + Starting and monitoring the autoinstallation process. + + + + + + + + After the clone has been successfully installed, execute the following + steps to make the cloned node join the cluster: + + + + Bringing the cloned node online + + + Transfer the key configuration files from the already configured nodes + to the cloned node with &csync; as described in + . + + + + + To bring the node online, start the cluster services on the cloned + node as described in . + + + + + + The cloned node now joins the cluster because the + /etc/corosync/corosync.conf file has been applied to + the cloned node via &csync;. The CIB is automatically synchronized + among the cluster nodes. + + + diff --git a/xml/ha_install.xml b/xml/ha_install.xml index 806464d1..1cd874da 100644 --- a/xml/ha_install.xml +++ b/xml/ha_install.xml @@ -20,10 +20,6 @@ have the same packages installed and the same system configuration as the original ones. - - If you want to upgrade an existing cluster that runs an older version of - &productname;, refer to . - @@ -45,142 +41,5 @@ basic two-node cluster. - - Mass installation and deployment with &ay; - - - After you have installed and set up a two-node cluster, you can extend the - cluster by cloning existing nodes with &ay; and adding the clones to the cluster. - - - &ay; uses profiles that contains installation and configuration data. 
- A profile tells &ay; what to install and how to configure the installed system to - get a ready-to-use system in the end. This profile can then be used - for mass deployment in different ways (for example, to clone existing - cluster nodes). - - - For detailed instructions on how to use &ay; in various scenarios, - see the - &ayguide; for &sls; &productnumber;. - - - - Identical hardware - - assumes you are rolling - out &productname; &productnumber; to a set of machines with identical hardware - configurations. - - - If you need to deploy cluster nodes on non-identical hardware, refer to the - &deploy; for &sls; &productnumber;, - chapter Automated Installation, section - Rule-Based Autoinstallation. - - - - - - - Cloning a cluster node with &ay; - - - Make sure the node you want to clone is correctly installed and - configured. For details, see the &haquick; or - . - - - - - Follow the description outlined in the &sle; - &productnumber; &deploy; for simple mass - installation. This includes the following basic steps: - - - - - Creating an &ay; profile. Use the &ay; GUI to create and modify - a profile based on the existing system configuration. In &ay;, - choose the &ha; module and click the - Clone button. If needed, adjust the configuration - in the other modules and save the resulting control file as XML. - - - - If you have configured DRBD, you can select and clone this module in - the &ay; GUI, too. - - - - - Determining the source of the &ay; profile and the parameter to - pass to the installation routines for the other nodes. - - - - - Determining the source of the &sls; and &productname; - installation data. - - - - - Determining and setting up the boot scenario for autoinstallation. - - - - - Passing the command line to the installation routines, either by - adding the parameters manually or by creating an - info file. - - - - - Starting and monitoring the autoinstallation process. - - - - - - - - After the clone has been successfully installed, execute the following - steps to make the cloned node join the cluster: - - - - Bringing the cloned node online - - - Transfer the key configuration files from the already configured nodes - to the cloned node with &csync; as described in - . - - - - - To bring the node online, start the cluster services on the cloned - node as described in . - - - - - - The cloned node now joins the cluster because the - /etc/corosync/corosync.conf file has been applied to - the cloned node via &csync;. The CIB is automatically synchronized - among the cluster nodes. 
- - From 843df987201b3835152686fef1dfc7993291bee6 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 31 Jan 2024 16:22:01 +1000 Subject: [PATCH 10/39] Moving some things around --- xml/book_full_install.xml | 2 +- xml/ha_add_nodes.xml | 13 ++++++++ xml/ha_install.xml | 57 ++++++++++++++++++++++++-------- xml/ha_installation_overview.xml | 22 ++++++++++++ xml/ha_yast_cluster.xml | 33 +++++------------- 5 files changed, 88 insertions(+), 39 deletions(-) diff --git a/xml/book_full_install.xml b/xml/book_full_install.xml index ae670e77..ef60c553 100644 --- a/xml/book_full_install.xml +++ b/xml/book_full_install.xml @@ -53,7 +53,7 @@ - Installing HA nodes + Installing cluster nodes diff --git a/xml/ha_add_nodes.xml b/xml/ha_add_nodes.xml index 23194009..c3fa6035 100644 --- a/xml/ha_add_nodes.xml +++ b/xml/ha_add_nodes.xml @@ -25,6 +25,19 @@ + + Adding nodes with <command>crm cluster join</command> + + + + + + + Adding nodes manually + + + + Adding nodes with &ay; diff --git a/xml/ha_install.xml b/xml/ha_install.xml index 1cd874da..5d489538 100644 --- a/xml/ha_install.xml +++ b/xml/ha_install.xml @@ -4,21 +4,19 @@ %entities; ]> - + - + Installing &productname; - If you are setting up a &ha; cluster with &productnamereg; for the first time, the - easiest way is to start with a basic two-node cluster. You can also use the - two-node cluster to run some tests. Afterward, you can add more - nodes by cloning existing cluster nodes with &ay;. The cloned nodes will - have the same packages installed and the same system configuration as the - original ones. + + The packages for configuring and managing a cluster are included in the &ha; installation pattern. + This pattern is only available after the &productname; extension (&slehaa;) is installed. + &slehaa; can be installed along with &sles; (&slsa;), or after &slsa; is already installed. @@ -33,13 +31,44 @@ - - Manual installation - For the manual installation of the packages for &ha; refer to - . It leads you through the setup of a - basic two-node cluster. + To install &slehaa; along with &slsa;, see the + + &deploy; for &sles;. + To install &slehaa; after &slsa; is already installed, use this procedure: - + + Requirements + + + &sles; is installed and registered with the &scc;. + + + + + You have an additional registration code for &productname;. + + + + + Installing the &ha; packages + + + Enable the &ha; extension: + +&prompt.user;sudo SUSEConnect -p sle-ha/&product-ga;.&product-sp;/x86_64 -r ADDITIONAL_REGCODE + + + + Install the &ha; pattern: +&prompt.user;sudo zypper install -t pattern ha_sles + + + + Install the &ha; pattern on all machines that + will be part of your cluster. + + + diff --git a/xml/ha_installation_overview.xml b/xml/ha_installation_overview.xml index 123d12cc..f921b6dd 100644 --- a/xml/ha_installation_overview.xml +++ b/xml/ha_installation_overview.xml @@ -24,5 +24,27 @@ You can also use a combination of both setup methods, for example: set up one no yes + + If you are setting up a &ha; cluster with &productnamereg; for the first time, the + easiest way is to start with a basic two-node cluster. You can also use the + two-node cluster to run some tests. Afterward, you can add more + nodes by cloning existing cluster nodes with &ay;. The cloned nodes will + have the same packages installed and the same system configuration as the + original ones. 
+ + + + Workflow options + + + + + + + Preconfiguration options + + + + diff --git a/xml/ha_yast_cluster.xml b/xml/ha_yast_cluster.xml index 56af2b22..2e2b3688 100644 --- a/xml/ha_yast_cluster.xml +++ b/xml/ha_yast_cluster.xml @@ -16,17 +16,6 @@ The &yast; cluster module allows you to set up a cluster manually (from scratch) or to modify options for an existing cluster. - - However, if you prefer an automated approach for setting up a cluster, - refer to . It describes how to install the - needed packages and leads you to a basic two-node cluster, which is - set up with the bootstrap scripts provided by the &crmshell;. - - - You can also use a combination of both setup methods, for example: set up - one node with &yast; cluster and then use one of the bootstrap scripts - to integrate more nodes (or vice versa). - @@ -211,7 +200,12 @@ module from the command line: &prompt.user;sudo yast2 cluster - + + If you start the cluster module for the first time, it appears as a + wizard, guiding you through all the steps necessary for basic setup. + Otherwise, click the categories on the left panel to access the + configuration options for each step. + The following list shows an overview of the available screens in the &yast; cluster module. It also mentions whether the screen contains parameters that @@ -229,9 +223,8 @@ Redundant communication paths For a supported cluster setup, two or more redundant communication - paths are required. The preferred way is to use network device bonding as - described in . - If this is impossible, you need to define a second communication + paths are required. The preferred way is to use network device bonding. + If this is impossible, you must define a second communication channel in &corosync;. @@ -279,14 +272,6 @@ - - - If you start the cluster module for the first time, it appears as a - wizard, guiding you through all the steps necessary for basic setup. - Otherwise, click the categories on the left panel to access the - configuration options for each step. - - Settings in the &yast; <guimenu>Cluster</guimenu> module Certain settings in the &yast; cluster module apply only to the @@ -1036,7 +1021,7 @@ Finished with 1 errors. crm status command. If all nodes are online, the output should be similar to the following: -&prompt.root;crm status +&prompt.root;crm status Cluster Summary: * Stack: corosync * Current DC: &node1; (version ...) - partition with quorum From 0693f98e0758d30a1e2db703c629a30cda95c9b3 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 31 Jan 2024 17:42:54 +1000 Subject: [PATCH 11/39] Add watchdog procedures --- xml/book_full_install.xml | 1 + xml/ha_sbd_watchdog.xml | 216 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 217 insertions(+) create mode 100644 xml/ha_sbd_watchdog.xml diff --git a/xml/book_full_install.xml b/xml/book_full_install.xml index ef60c553..63acf620 100644 --- a/xml/book_full_install.xml +++ b/xml/book_full_install.xml @@ -56,6 +56,7 @@ Installing cluster nodes + diff --git a/xml/ha_sbd_watchdog.xml b/xml/ha_sbd_watchdog.xml new file mode 100644 index 00000000..df3848d7 --- /dev/null +++ b/xml/ha_sbd_watchdog.xml @@ -0,0 +1,216 @@ + + + + %entities; +]> + + + Setting up a watchdog for SBD + + + + If you are using SBD as your &stonith; device, you must enable a watchdog on each + cluster node. If you are using a different &stonith; device, you can skip this chapter. 
+ + + + + yes + + + + + + + &productname; ships with several kernel modules that provide hardware-specific watchdog drivers. + For clusters in production environments, we recommend using a hardware watchdog. + However, if no watchdog matches your hardware, the software watchdog + (softdog) can be used instead. + + + &productname; uses the SBD daemon as the software component that feeds the watchdog. + + + + Using a hardware watchdog + + Finding the right watchdog kernel module for a given system is not + trivial. Automatic probing fails often. As a result, many modules + are already loaded before the right one gets a chance. + + The following table lists some commonly used watchdog drivers. However, this is + not a complete list of supported drivers. If your hardware is not listed here, + you can also find a list of choices in the following directories: + + + + + /lib/modules/KERNEL_VERSION/kernel/drivers/watchdog + + + + + /lib/modules/KERNEL_VERSION/kernel/drivers/ipmi + + + + + Alternatively, ask your hardware or + system vendor for details on system-specific watchdog configuration. + + + Commonly used watchdog drivers + + + + Hardware + Driver + + + + + HP + hpwdt + + + Dell, Lenovo (Intel TCO) + iTCO_wdt + + + Fujitsu + ipmi_watchdog + + + LPAR on IBM Power + pseries-wdt + + + VM on IBM z/VM + vmwatchdog + + + Xen VM (DomU) + xen_xdt + + + VM on VMware vSphere + wdat_wdt + + + Generic + softdog + + + +
+ + Accessing the watchdog timer + + Some hardware vendors ship systems management software that uses the + watchdog for system resets (for example, HP ASR daemon). If the watchdog is + used by SBD, disable such software. No other software must access the + watchdog timer. + + + + Loading the correct kernel module + + + List the drivers that are installed with your kernel version: + +&prompt.root;rpm -ql kernel-VERSION | grep watchdog + + + + List any watchdog modules that are currently loaded in the kernel: + +&prompt.root;lsmod | egrep "(wd|dog)" + + + + If you get a result, unload the wrong module: + +&prompt.root;rmmod WRONG_MODULE + + + + Enable the watchdog module that matches your hardware: + +&prompt.root;echo WATCHDOG_MODULE > /etc/modules-load.d/watchdog.conf +&prompt.root;systemctl restart systemd-modules-load + + + + Test whether the watchdog module is loaded correctly: + +&prompt.root;lsmod | grep dog + + + + Verify if the watchdog device is available: + +&prompt.root;ls -l /dev/watchdog* +&prompt.root;sbd query-watchdog + + If the watchdog device is not available, check the module name and options. + Maybe use another driver. + + + + + Verify if the watchdog device works: + +&prompt.root;sbd -w WATCHDOG_DEVICE test-watchdog + + + + Reboot your machine to make sure there are no conflicting kernel modules. For example, + if you find the message cannot register ... in your log, this would indicate + such conflicting modules. To ignore such modules, refer to + . + + + +
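   As a concrete example of the procedure above, on a system with an Intel TCO watchdog
   (an assumption; check the table above for the driver that matches your hardware), the
   commands would look similar to the following sketch:

&prompt.root;echo iTCO_wdt > /etc/modules-load.d/watchdog.conf
&prompt.root;systemctl restart systemd-modules-load
&prompt.root;sbd query-watchdog
&prompt.root;sbd -w /dev/watchdog test-watchdog

   Be aware that the test-watchdog command resets the machine if the watchdog
   works, so only run it on a node that can safely be rebooted.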
+ + + Using the software watchdog (softdog) + + For clusters in production environments, we recommend using a hardware-specific watchdog + driver. However, if no watchdog matches your hardware, + softdog can be used instead. + + + Softdog limitations + + The softdog driver assumes that at least one CPU is still running. If all CPUs are stuck, + the code in the softdog driver that should reboot the system is never executed. + In contrast, hardware watchdogs keep working even if all CPUs are stuck. + + + + Loading the softdog kernel module + + + Enable the softdog watchdog: + +&prompt.root;echo softdog > /etc/modules-load.d/watchdog.conf +&prompt.root;systemctl restart systemd-modules-load + + + + Check whether the softdog watchdog module is loaded correctly: + +&prompt.root;lsmod | grep softdog + + + + +
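   After loading softdog, you can verify that a watchdog device is available in the same
   way as for a hardware watchdog. This is only a suggested check, not an additional
   required step:

&prompt.root;ls -l /dev/watchdog*
&prompt.root;sbd query-watchdog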
From ab84656431109d06f247a66c1983315327e13714 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 7 Feb 2024 14:35:22 +1000 Subject: [PATCH 12/39] Add crm cluster join procedure --- xml/ha_add_nodes.xml | 74 ++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 72 insertions(+), 2 deletions(-) diff --git a/xml/ha_add_nodes.xml b/xml/ha_add_nodes.xml index c3fa6035..ffed805a 100644 --- a/xml/ha_add_nodes.xml +++ b/xml/ha_add_nodes.xml @@ -25,10 +25,80 @@
- + + Adding nodes with <command>crm cluster join</command> - + You can add more nodes to the cluster with the crm cluster join bootstrap script. + The script only needs access to an existing cluster node, and completes the basic setup + on the current machine automatically. + + + For more information, run the crm cluster join --help command. + + + Adding nodes with <command>crm cluster join</command> + + + Log in to a node as &rootuser;, or as a user with sudo privileges. + + + + + Start the bootstrap script: + + + + + If you set up the first node as &rootuser;, you can run this command with + no additional parameters: + +&prompt.root;crm cluster join + + + + If you set up the first node as a sudo user, you must + specify the user and node with the option: + +&prompt.user;sudo crm cluster join -c USER@&node1; + + + + If you set up the first node as a sudo user with SSH agent forwarding, + use the following command: + +&prompt.user;sudo --preserve-env=SSH_AUTH_SOCK crm cluster join --use-ssh-agent -c USER@&node1; + + + + If NTP is not configured to start at boot time, a message + appears. The script also checks for a hardware watchdog device. + You are warned if none is present. + + + + + If you did not already specify &node1; + with , you will be prompted for the IP address of the first node. + + + + + If you did not already configure passwordless SSH access between + both machines, you will be prompted for the password of the first node. + + + After logging in to the specified node, the script copies the + &corosync; configuration, configures SSH and &csync;, + brings the current machine online as a new cluster node, and + starts the service needed for &hawk2;. + + + + + Repeat this procedure for each node. You can check the status of the cluster at any time + with the crm status command, or by logging in to &hawk2; and navigating to + StatusNodes. From 2ebbeb9a12a57c14a71dc0e5bc1412c4879c09f3 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 8 Feb 2024 17:01:14 +1000 Subject: [PATCH 13/39] Add inital crm cluster init section Will expand to be more detailed --- xml/ha_bootstrap_install.xml | 268 +++++++++++++++++++++++++++++++++++ 1 file changed, 268 insertions(+) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index cf362e14..85f8b25c 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -25,4 +25,272 @@ + + + Overview of the <command>crm cluster init</command> script + + The crm cluster init command executes a bootstrap script that defines the + basic parameters needed for cluster communication, resulting in a running one-node cluster. + The script checks and configures the following components: + + + + NTP + + + Checks if NTP is configured to start at boot time. If not, a message appears. + + + + + SSH + + Creates SSH keys for passwordless login between cluster nodes. + + + + + &csync; + + + Configures &csync; to replicate configuration files across all nodes + in a cluster. + + + + + &corosync; + + Configures the cluster communication system. + + + + SBD/watchdog + + Checks if a watchdog exists and asks you whether to configure SBD + as node fencing mechanism. + + + + Virtual floating IP + + Asks you whether to configure a virtual IP address for cluster + administration with &hawk2;. + + + + Firewall + + Opens the ports in the firewall that are needed for cluster communication. + + + + Cluster name + + Defines a name for the cluster, by default + hacluster. 
This + is optional and mostly useful for &geo; clusters. Usually, the cluster + name reflects the geographical location and makes it easier to distinguish a site + inside a &geo; cluster. + + + + &qdevice;/&qnet; + + + Asks you whether to configure &qdevice;/&qnet; to participate in + quorum decisions. We recommend using &qdevice; and &qnet; for clusters + with an even number of nodes, and especially for two-node clusters. + + + + + + &pace; default settings + + The options set by the bootstrap script might not be the same as the &pace; + default settings. You can check which settings the bootstrap script changed in + /var/log/crmsh/crmsh.log. Any options set during the bootstrap + process can be modified later with the &yast; cluster module. + + + + Cluster configuration for different platforms + + The crm cluster init script detects the system environment (for example, + &ms; Azure) and adjusts certain cluster settings based on the profile for that environment. + For more information, see the file /etc/crm/profiles.yml. + + + + + + + Setting up the first node with <command>crm cluster init</command> + + Set up the first node with the crm cluster init script. + This requires only a minimum of time and manual intervention. + + + Setting up the first node (<systemitem class="server">&node1;</systemitem>) with + <command>crm cluster init</command> + + + Log in to the first cluster node as &rootuser;, or as a user with + sudo privileges. + + + <command>sudo</command> user SSH key access + + The cluster uses passwordless SSH access for communication between the nodes. + The crm cluster init script checks for SSH keys and generates + them if they do not already exist. + + + If you intend to set up the first node as a user with sudo privileges, + you must ensure the user's SSH keys exist (or will be generated) locally on the node, + not on a remote system. + + + + + + Start the bootstrap script: + + &prompt.root;crm cluster init --name CLUSTERNAME + Replace the CLUSTERNAME + placeholder with a meaningful name, like the geographical location of your + cluster (for example, &cluster1;). + This is especially helpful to create a &geo; cluster later on, + as it simplifies the identification of a site. + + + If you need to use multicast instead of unicast (the default) for your cluster + communication, use the option (or ). + + + The script checks for NTP configuration and a hardware watchdog service. + If required, it generates the public and private SSH keys used for SSH access and + &csync; synchronization and starts the respective services. + + + + + Configure the cluster communication layer (&corosync;): + + + + + Enter a network address to bind to. By default, the script + proposes the network address of eth0. + Alternatively, enter a different network address, for example the + address of bond0. + + + + + Accept the proposed port (5405) or enter a different one. + + + + + + + Set up SBD as the node fencing mechanism: + + + Confirm with y that you want to use SBD. + + + Enter a persistent path to the partition of your block device that + you want to use for SBD. + The path must be consistent across all nodes in the cluster. + The script creates a small partition on the device to be used for SBD. + + + + + Configure a virtual IP address for cluster administration with &hawk2;: + + + Confirm with y that you want to configure a + virtual IP address. 
+ + Enter an unused IP address that you want to use as administration IP + for &hawk2;: &subnetI;.10 + + Instead of logging in to an individual cluster node with &hawk2;, + you can connect to the virtual IP address. + + + + + + Choose whether to configure &qdevice; and &qnet;. For the minimal setup + described in this document, decline with n for now. + + + + + Finally, the script will start the cluster services to bring the + cluster online and enable &hawk2;. The URL to use for &hawk2; is + displayed on the screen. + + + + + + Logging in to the &hawk2; web interface + + You now have a running one-node cluster. To view its status, proceed as follows: + + + Logging in to the &hawk2; Web interface + + On any machine, start a Web browser and make sure that JavaScript and + cookies are enabled. + + + As URL, enter the virtual IP address that you configured with the bootstrap script: + https://&subnetI;.10:7630/ + + Certificate warning + If a certificate warning appears when you try to access the URL for + the first time, a self-signed certificate is in use. Self-signed + certificates are not considered trustworthy by default. + Ask your cluster operator for the certificate details to verify the + certificate. + To proceed anyway, you can add an exception in the browser to bypass + the warning. + + + + On the &hawk2; login screen, enter the + Username and Password of the + user that was created by the bootstrap script (user hacluster, password + linux). + + Secure password + Replace the default password with a secure one as soon as possible: + + &prompt.root;passwd hacluster + + + + + Click Log In. The &hawk2; Web interface + shows the Status screen by default: + +
+ Status of the one-node cluster in &hawk2; + + + + + +
+
+
+
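
 For orientation, a minimal first-node setup that only sets a cluster name, followed by a
 status check, might look like the following sketch. CLUSTERNAME is a
 placeholder; all other settings are taken from the interactive prompts described above:

&prompt.root;crm cluster init --name CLUSTERNAME
&prompt.root;crm status

 Until more nodes are added with crm cluster join, the
 crm status output lists the current machine as the only online node.
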
From d23a9386fb0fe92cd2827eb7c68d0a1bc098fdaa Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 15 Feb 2024 14:27:05 +1000 Subject: [PATCH 14/39] Add autoyast note to pattern installation procedure --- xml/ha_install.xml | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/xml/ha_install.xml b/xml/ha_install.xml index 5d489538..9321f158 100644 --- a/xml/ha_install.xml +++ b/xml/ha_install.xml @@ -65,9 +65,15 @@ - Install the &ha; pattern on all machines that - will be part of your cluster. + Repeat these steps on all machines that will be part of the cluster. + + Cloning nodes with &ay; + + You do not need to repeat these steps if you intend to use &ay; to install the rest of + the cluster nodes. The clones will have the same installed packages as the original node. + + From d1ffb86b324002e366c10b0b116eef41c3fdad2d Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Fri, 16 Feb 2024 17:04:38 +1000 Subject: [PATCH 15/39] Started expanding the crm cluster init section --- xml/ha_bootstrap_install.xml | 217 ++++++++++++++++++++--------------- 1 file changed, 124 insertions(+), 93 deletions(-) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index 85f8b25c..98499ede 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -37,71 +37,77 @@ NTP - - Checks if NTP is configured to start at boot time. If not, a message appears. - + + Checks if NTP is configured to start at boot time. If not, a message appears. + SSH - Creates SSH keys for passwordless login between cluster nodes. - + + Detects or generates SSH keys for passwordless login between cluster nodes. + &csync; - - Configures &csync; to replicate configuration files across all nodes - in a cluster. - + + Configures &csync; to replicate configuration files across all nodes in a cluster. + &corosync; - Configures the cluster communication system. + + Configures the cluster communication system. + SBD/watchdog - Checks if a watchdog exists and asks you whether to configure SBD - as node fencing mechanism. + + Checks if a watchdog exists and asks you whether to configure SBD as the node fencing mechanism. + Virtual floating IP - Asks you whether to configure a virtual IP address for cluster - administration with &hawk2;. + + Asks you whether to configure a virtual IP address for cluster administration with &hawk2;. + Firewall - Opens the ports in the firewall that are needed for cluster communication. + + Opens the ports in the firewall that are needed for cluster communication. + Cluster name - Defines a name for the cluster, by default - hacluster. This - is optional and mostly useful for &geo; clusters. Usually, the cluster - name reflects the geographical location and makes it easier to distinguish a site - inside a &geo; cluster. + + Defines a name for the cluster, by default hacluster. This is + optional and mostly useful for &geo; clusters. Usually, the cluster name reflects the + geographical location and makes it easier to distinguish a site inside a &geo; cluster. + &qdevice;/&qnet; - - Asks you whether to configure &qdevice;/&qnet; to participate in - quorum decisions. We recommend using &qdevice; and &qnet; for clusters - with an even number of nodes, and especially for two-node clusters. - + + Asks you whether to configure &qdevice;/&qnet; to participate in quorum decisions. 
+ We recommend using &qdevice; and &qnet; for clusters with an even number of nodes, + and especially for two-node clusters. + @@ -128,30 +134,47 @@ Setting up the first node with <command>crm cluster init</command> - Set up the first node with the crm cluster init script. - This requires only a minimum of time and manual intervention. + Setting up the first node with the crm cluster init script + requires only a minimum of time and manual intervention. + + + This steps in this procedure show the default option followed by alternative or additional + options. For a minimal setup with only the default options, see . - Setting up the first node (<systemitem class="server">&node1;</systemitem>) with - <command>crm cluster init</command> + Setting up the first node with <command>crm cluster init</command> - Log in to the first cluster node as &rootuser;, or as a user with - sudo privileges. - - - <command>sudo</command> user SSH key access - - The cluster uses passwordless SSH access for communication between the nodes. - The crm cluster init script checks for SSH keys and generates - them if they do not already exist. + Log in to the first cluster node: - - If you intend to set up the first node as a user with sudo privileges, - you must ensure the user's SSH keys exist (or will be generated) locally on the node, - not on a remote system. - - + + + Default + + + Log into the node as the &rootuser; user. + + + + + sudo user (no SSH agent forwarding) + + + Log into the node as a user with sudo privileges. The user's SSH keys + must exist (or be generated) locally on the node, not on a remote system. + + + + + SSH agent forwarding + + + Log into the node as a user with sudo privileges, using + SSH agent forwarding. ++WIP, add more details here.++ + + + + @@ -169,8 +192,8 @@ communication, use the option (or ). - The script checks for NTP configuration and a hardware watchdog service. - If required, it generates the public and private SSH keys used for SSH access and + The script checks for NTP configuration and a hardware watchdog service. If required, + it generates the public and private SSH keys used for passwordless SSH access and &csync; synchronization and starts the respective services. @@ -183,7 +206,7 @@ Enter a network address to bind to. By default, the script proposes the network address of eth0. - Alternatively, enter a different network address, for example the + Alternatively, enter a different network address, for example, the address of bond0. @@ -245,52 +268,60 @@ You now have a running one-node cluster. To view its status, proceed as follows: - Logging in to the &hawk2; Web interface - - On any machine, start a Web browser and make sure that JavaScript and - cookies are enabled. - - - As URL, enter the virtual IP address that you configured with the bootstrap script: - https://&subnetI;.10:7630/ - - Certificate warning - If a certificate warning appears when you try to access the URL for - the first time, a self-signed certificate is in use. Self-signed - certificates are not considered trustworthy by default. - Ask your cluster operator for the certificate details to verify the - certificate. - To proceed anyway, you can add an exception in the browser to bypass - the warning. - - - - On the &hawk2; login screen, enter the - Username and Password of the - user that was created by the bootstrap script (user hacluster, password - linux). 
- - Secure password - Replace the default password with a secure one as soon as possible: - - &prompt.root;passwd hacluster - - - - - Click Log In. The &hawk2; Web interface - shows the Status screen by default: - -
- Status of the one-node cluster in &hawk2; - - - - - -
-
+ Logging in to the &hawk2; Web interface + + + On any machine, start a Web browser and make sure that JavaScript and cookies are enabled. + + + + + As URL, enter the virtual IP address that you configured with the bootstrap script: + +https://VIRTUAL_IP:7630/ + + Certificate warning + + If a certificate warning appears when you try to access the URL for the first time, + a self-signed certificate is in use. Self-signed certificates are not considered + trustworthy by default. + + + Ask your cluster operator for the certificate details to verify the certificate. + + + To proceed anyway, you can add an exception in the browser to bypass the warning. + + + + + + On the &hawk2; login screen, enter the Username and + Password of the user that was created by the bootstrap script + (user hacluster, password linux). + + + Secure password + + Replace the default password with a secure one as soon as possible: + +&prompt.root;passwd hacluster + + + + + Click Log In. The &hawk2; Web interface shows the + Status screen by default: + +
+ Status of the one-node cluster in &hawk2; + + + + + +
+
From bee8575b4f395d4ded47b461aa88f98600c0fc21 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 24 Apr 2024 16:19:35 +1000 Subject: [PATCH 16/39] Add new metadata from PR#371 --- xml/book_full_install.xml | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/xml/book_full_install.xml b/xml/book_full_install.xml index 63acf620..0a18d281 100644 --- a/xml/book_full_install.xml +++ b/xml/book_full_install.xml @@ -12,8 +12,9 @@ - @@ -35,6 +36,12 @@ yes + + Installation + Administration + Clustering + + Product Documentation From 6cfa3e756b03addac067de44017ccab21d5ff482 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 29 May 2024 14:20:13 +1000 Subject: [PATCH 17/39] Fix command prompts in Logging In --- xml/ha_config_cli.xml | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/xml/ha_config_cli.xml b/xml/ha_config_cli.xml index ffe0550d..86caad77 100644 --- a/xml/ha_config_cli.xml +++ b/xml/ha_config_cli.xml @@ -125,13 +125,13 @@ Log in to the first cluster node as a user with sudo privileges, using the option to enable SSH agent forwarding: -user@local > ssh -A USER@NODE1 +user@local> ssh -A USER@NODE1 Initialize the cluster with the crm cluster init script: -user@node1 > sudo --preserve-env=SSH_AUTH_SOCK \ +user@node1> sudo --preserve-env=SSH_AUTH_SOCK \ crm cluster init --use-ssh-agent @@ -159,7 +159,7 @@ Use the -c option to specify the user and node that initialized the cluster: -user@node2 > sudo --preserve-env=SSH_AUTH_SOCK \ +user@node2> sudo --preserve-env=SSH_AUTH_SOCK \ crm cluster join --use-ssh-agent -c USER@NODE1 @@ -172,12 +172,12 @@ crm cluster join --use-ssh-agent -c USER@NODE1 Run the following command on the first node: -user@node1 > sudo --preserve-env=SSH_AUTH_SOCK \ +user@node1> sudo --preserve-env=SSH_AUTH_SOCK \ crm cluster init ssh --use-ssh-agent Run the following command on all other nodes: -user@node2 > sudo --preserve-env=SSH_AUTH_SOCK \ +user@node2> sudo --preserve-env=SSH_AUTH_SOCK \ crm cluster join ssh --use-ssh-agent -c USER@NODE1 From c85cb1ad6c675f326b4d59ecfdde20c1444ac777 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 29 May 2024 15:22:27 +1000 Subject: [PATCH 18/39] Change admin guide authentication section title --- xml/ha_config_cli.xml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/xml/ha_config_cli.xml b/xml/ha_config_cli.xml index 86caad77..f396b815 100644 --- a/xml/ha_config_cli.xml +++ b/xml/ha_config_cli.xml @@ -61,7 +61,7 @@ - Logging in + User privileges and authentication Managing a cluster requires sufficient privileges. 
The following users can run the crm command and its subcommands: From a8dab519614d0eb9da56c6a4505d2b89f4c434e8 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 30 May 2024 15:47:31 +1000 Subject: [PATCH 19/39] Move log in steps to a new section --- xml/book_full_install.xml | 1 + xml/ha_bootstrap_install.xml | 33 ----------- xml/ha_install.xml | 4 +- xml/ha_log_in.xml | 109 +++++++++++++++++++++++++++++++++++ 4 files changed, 112 insertions(+), 35 deletions(-) create mode 100644 xml/ha_log_in.xml diff --git a/xml/book_full_install.xml b/xml/book_full_install.xml index 0a18d281..f7c2fb46 100644 --- a/xml/book_full_install.xml +++ b/xml/book_full_install.xml @@ -62,6 +62,7 @@ Installing cluster nodes + diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index 98499ede..770b3faf 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -143,39 +143,6 @@ Setting up the first node with <command>crm cluster init</command> - - - Log in to the first cluster node: - - - - Default - - - Log into the node as the &rootuser; user. - - - - - sudo user (no SSH agent forwarding) - - - Log into the node as a user with sudo privileges. The user's SSH keys - must exist (or be generated) locally on the node, not on a remote system. - - - - - SSH agent forwarding - - - Log into the node as a user with sudo privileges, using - SSH agent forwarding. ++WIP, add more details here.++ - - - - - Start the bootstrap script: diff --git a/xml/ha_install.xml b/xml/ha_install.xml index 9321f158..6429c9db 100644 --- a/xml/ha_install.xml +++ b/xml/ha_install.xml @@ -56,12 +56,12 @@ Enable the &ha; extension: -&prompt.user;sudo SUSEConnect -p sle-ha/&product-ga;.&product-sp;/x86_64 -r ADDITIONAL_REGCODE +&prompt.root;SUSEConnect -p sle-ha/&product-ga;.&product-sp;/x86_64 -r ADDITIONAL_REGCODE Install the &ha; pattern: -&prompt.user;sudo zypper install -t pattern ha_sles +&prompt.root;zypper install -t pattern ha_sles diff --git a/xml/ha_log_in.xml b/xml/ha_log_in.xml new file mode 100644 index 00000000..914ef49f --- /dev/null +++ b/xml/ha_log_in.xml @@ -0,0 +1,109 @@ + + + + %entities; +]> + + + Logging in to the cluster nodes + + + + &sleha; clusters use passwordless SSH access for communication between the nodes. + If you set up the cluster with crm cluster init, the script checks + for SSH keys and generates them if they do not exist. If you set up the cluster + with the YaST cluster module, you must configure the SSH keys yourself. + + + By default, the cluster performs operations as the &rootuser; user. However, if you cannot + allow passwordless root SSH access, you can set up the cluster as a user with + sudo privileges instead. + + + + + yes + + + + + The following users can set up the cluster on the first node, and add more nodes to the cluster: + + + + The &rootuser; user + + + Setting up and running the cluster as &rootuser; is &pace;'s default and does not + require any additional configuration. The &rootuser; user's SSH keys must exist + (or be generated) locally on the node, not on a remote system. + + + To log into to the first cluster node as the &rootuser; user, run the following command: + +user@local> ssh root@NODE1 + + + + A user with sudo privileges (without SSH agent forwarding) + + + You will need to specify this user when you add more nodes to the cluster with + crm cluster join. The user's SSH keys must exist (or be generated) + locally on the node, not on a remote system. 
+ + + To log into to the first cluster node as a sudo user, run the + following command: + +user@local> ssh USER@NODE1 + + + + A user with sudo privileges (with SSH agent forwarding) + + + You can use SSH forwarding to pass your local SSH keys to the cluster nodes. + This can be useful if you need to avoid storing SSH keys on the nodes, but requires + additional configuration on your local machine and on the cluster nodes. + + + To log in to the first cluster node with SSH agent forwarding enabled, + perform the following steps: + + + + + On your local machine, start the SSH agent and add your keys to it. For more information, + see + Automated public key logins with ssh-agent in + &secguide; for &sles;. + + + + + Log in to the first node with the option to enable + SSH agent forwarding: + +user@local> ssh -A USER@NODE1 + + + + + + + When you add nodes to the cluster, you must log in to each node as the same user you set up the first node with. + + + + For simplicity, the commands in this guide assume you are logged in as the &rootuser; user. If you logged in as a sudo user, adjust the commands accordingly. + + + From dc14179e816a8879371a1df40fd4448a5bdf1a7b Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Mon, 3 Jun 2024 14:51:08 +1000 Subject: [PATCH 20/39] Move crm cluster join to bootstrap chapter --- xml/book_full_install.xml | 2 +- xml/ha_add_nodes.xml | 240 ----------------------------------- xml/ha_autoyast_deploy.xml | 148 +++++++++++++++++++++ xml/ha_bootstrap_install.xml | 72 +++++++++++ xml/ha_log_in.xml | 4 +- 5 files changed, 223 insertions(+), 243 deletions(-) delete mode 100644 xml/ha_add_nodes.xml create mode 100644 xml/ha_autoyast_deploy.xml diff --git a/xml/book_full_install.xml b/xml/book_full_install.xml index f7c2fb46..3b0e9278 100644 --- a/xml/book_full_install.xml +++ b/xml/book_full_install.xml @@ -67,7 +67,7 @@ - + diff --git a/xml/ha_add_nodes.xml b/xml/ha_add_nodes.xml deleted file mode 100644 index ffed805a..00000000 --- a/xml/ha_add_nodes.xml +++ /dev/null @@ -1,240 +0,0 @@ - - - - %entities; -]> - - - Adding more nodes - - - - - - - - - yes - - - - - - Adding nodes with <command>crm cluster join</command> - - You can add more nodes to the cluster with the crm cluster join bootstrap script. - The script only needs access to an existing cluster node, and completes the basic setup - on the current machine automatically. - - - For more information, run the crm cluster join --help command. - - - Adding nodes with <command>crm cluster join</command> - - - Log in to a node as &rootuser;, or as a user with sudo privileges. - - - - - Start the bootstrap script: - - - - - If you set up the first node as &rootuser;, you can run this command with - no additional parameters: - -&prompt.root;crm cluster join - - - - If you set up the first node as a sudo user, you must - specify the user and node with the option: - -&prompt.user;sudo crm cluster join -c USER@&node1; - - - - If you set up the first node as a sudo user with SSH agent forwarding, - use the following command: - -&prompt.user;sudo --preserve-env=SSH_AUTH_SOCK crm cluster join --use-ssh-agent -c USER@&node1; - - - - If NTP is not configured to start at boot time, a message - appears. The script also checks for a hardware watchdog device. - You are warned if none is present. - - - - - If you did not already specify &node1; - with , you will be prompted for the IP address of the first node. 
- - - - - If you did not already configure passwordless SSH access between - both machines, you will be prompted for the password of the first node. - - - After logging in to the specified node, the script copies the - &corosync; configuration, configures SSH and &csync;, - brings the current machine online as a new cluster node, and - starts the service needed for &hawk2;. - - - - - Repeat this procedure for each node. You can check the status of the cluster at any time - with the crm status command, or by logging in to &hawk2; and navigating to - StatusNodes. - - - - - Adding nodes manually - - - - - - - Adding nodes with &ay; - - - After you have installed and set up a two-node cluster, you can extend the - cluster by cloning existing nodes with &ay; and adding the clones to the cluster. - - - &ay; uses profiles that contains installation and configuration data. - A profile tells &ay; what to install and how to configure the installed system to - get a ready-to-use system in the end. This profile can then be used - for mass deployment in different ways (for example, to clone existing - cluster nodes). - - - For detailed instructions on how to use &ay; in various scenarios, - see the - &ayguide; for &sls; &productnumber;. - - - - Identical hardware - - assumes you are rolling - out &productname; &productnumber; to a set of machines with identical hardware - configurations. - - - If you need to deploy cluster nodes on non-identical hardware, refer to the - &deploy; for &sls; &productnumber;, - chapter Automated Installation, section - Rule-Based Autoinstallation. - - - - - Cloning a cluster node with &ay; - - - Make sure the node you want to clone is correctly installed and - configured. For details, see the &haquick; or - . - - - - - Follow the description outlined in the &sle; - &productnumber; &deploy; for simple mass - installation. This includes the following basic steps: - - - - - Creating an &ay; profile. Use the &ay; GUI to create and modify - a profile based on the existing system configuration. In &ay;, - choose the &ha; module and click the - Clone button. If needed, adjust the configuration - in the other modules and save the resulting control file as XML. - - - If you have configured DRBD, you can select and clone this module in - the &ay; GUI, too. - - - - - Determining the source of the &ay; profile and the parameter to - pass to the installation routines for the other nodes. - - - - - Determining the source of the &sls; and &productname; - installation data. - - - - - Determining and setting up the boot scenario for autoinstallation. - - - - - Passing the command line to the installation routines, either by - adding the parameters manually or by creating an - info file. - - - - - Starting and monitoring the autoinstallation process. - - - - - - - - After the clone has been successfully installed, execute the following - steps to make the cloned node join the cluster: - - - - Bringing the cloned node online - - - Transfer the key configuration files from the already configured nodes - to the cloned node with &csync; as described in - . - - - - - To bring the node online, start the cluster services on the cloned - node as described in . - - - - - - The cloned node now joins the cluster because the - /etc/corosync/corosync.conf file has been applied to - the cloned node via &csync;. The CIB is automatically synchronized - among the cluster nodes. 
- - - - diff --git a/xml/ha_autoyast_deploy.xml b/xml/ha_autoyast_deploy.xml new file mode 100644 index 00000000..4cb8732e --- /dev/null +++ b/xml/ha_autoyast_deploy.xml @@ -0,0 +1,148 @@ + + + + %entities; +]> + + + + Deploying nodes with &ay; + + + + After you have installed and set up a two-node cluster, you can extend the + cluster by cloning existing nodes with &ay; and adding the clones to the cluster. + + &ay; uses profiles that contains installation and configuration data. + A profile tells &ay; what to install and how to configure the installed system to + get a ready-to-use system in the end. This profile can then be used + for mass deployment in different ways (for example, to clone existing cluster nodes). + + + For detailed instructions on how to use &ay; in various scenarios, see the + + &ayguide; for &sls; &productnumber;. + + + + + yes + + + + + + + Identical hardware + + assumes you are rolling + out &productname; &productnumber; to a set of machines with identical hardware + configurations. + + + If you need to deploy cluster nodes on non-identical hardware, refer to the + &deploy; for &sls; &productnumber;, + chapter Automated Installation, section + Rule-Based Autoinstallation. + + + + + Cloning a cluster node with &ay; + + + Make sure the node you want to clone is correctly installed and + configured. For details, see the &haquick; or + . + + + + + Follow the description outlined in the &sle; + &productnumber; &deploy; for simple mass + installation. This includes the following basic steps: + + + + + Creating an &ay; profile. Use the &ay; GUI to create and modify + a profile based on the existing system configuration. In &ay;, + choose the &ha; module and click the + Clone button. If needed, adjust the configuration + in the other modules and save the resulting control file as XML. + + + If you have configured DRBD, you can select and clone this module in + the &ay; GUI, too. + + + + + Determining the source of the &ay; profile and the parameter to + pass to the installation routines for the other nodes. + + + + + Determining the source of the &sls; and &productname; + installation data. + + + + + Determining and setting up the boot scenario for autoinstallation. + + + + + Passing the command line to the installation routines, either by + adding the parameters manually or by creating an + info file. + + + + + Starting and monitoring the autoinstallation process. + + + + + + + + After the clone has been successfully installed, execute the following + steps to make the cloned node join the cluster: + + + + Bringing the cloned node online + + + Transfer the key configuration files from the already configured nodes + to the cloned node with &csync; as described in + . + + + + + To bring the node online, start the cluster services on the cloned + node as described in . + + + + + + The cloned node now joins the cluster because the + /etc/corosync/corosync.conf file has been applied to + the cloned node via &csync;. The CIB is automatically synchronized + among the cluster nodes. + + + diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index 770b3faf..9247ca5b 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -291,4 +291,76 @@
+ + + + Adding nodes with <command>crm cluster join</command> + + You can add more nodes to the cluster with the crm cluster join bootstrap script. + The script only needs access to an existing cluster node, and completes the basic setup + on the current machine automatically. + + + For more information, run the crm cluster join --help command. + + + Adding nodes with <command>crm cluster join</command> + + + Start the bootstrap script: + + + + + If you set up the first node as &rootuser;, you can run this command with + no additional parameters: + +&prompt.root;crm cluster join + + + + If you set up the first node as a sudo user, you must + specify the user and node with the option: + +&prompt.user;sudo crm cluster join -c USER@&node1; + + + + If you set up the first node as a sudo user with SSH agent forwarding, + use the following command: + +&prompt.user;sudo --preserve-env=SSH_AUTH_SOCK crm cluster join --use-ssh-agent -c USER@&node1; + + + + If NTP is not configured to start at boot time, a message + appears. The script also checks for a hardware watchdog device. + You are warned if none is present. + + + + + If you did not already specify the first cluster node + with , you will be prompted for its IP address. + + + + + If you did not already configure passwordless SSH access between the cluster nodes, + you will be prompted for the password of the first node. + + + After logging in to the specified node, the script copies the + &corosync; configuration, configures SSH and &csync;, + brings the current machine online as a new cluster node, and + starts the service needed for &hawk2;. + + + + + Repeat this procedure for each node. You can check the status of the cluster at any time + with the crm status command, or by logging in to &hawk2; and navigating to + StatusNodes. + + diff --git a/xml/ha_log_in.xml b/xml/ha_log_in.xml index 914ef49f..29e89fc9 100644 --- a/xml/ha_log_in.xml +++ b/xml/ha_log_in.xml @@ -46,7 +46,7 @@ (or be generated) locally on the node, not on a remote system. - To log into to the first cluster node as the &rootuser; user, run the following command: + To log in to the first cluster node as the &rootuser; user, run the following command: user@local> ssh root@NODE1 @@ -60,7 +60,7 @@ locally on the node, not on a remote system. - To log into to the first cluster node as a sudo user, run the + To log in to the first cluster node as a sudo user, run the following command: user@local> ssh USER@NODE1 From 8af6a494afe77a13ed6ac16f10e352d72b20cb65 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Tue, 4 Jun 2024 15:14:22 +1000 Subject: [PATCH 21/39] Split csync section to flow better in yast chapter --- xml/ha_yast_cluster.xml | 267 +++++++++++++++++++--------------------- 1 file changed, 127 insertions(+), 140 deletions(-) diff --git a/xml/ha_yast_cluster.xml b/xml/ha_yast_cluster.xml index 2e2b3688..765be4f7 100644 --- a/xml/ha_yast_cluster.xml +++ b/xml/ha_yast_cluster.xml @@ -641,6 +641,115 @@ + + Configuring &csync; to synchronize files + + Instead of copying the configuration files to all nodes + manually, use the csync2 tool for replication across + all nodes in the cluster. &csync; helps you to keep track of configuration changes + and to keep files synchronized across the cluster nodes: + + + + + You can define a list of files that are important for operation. + + + + + You can show changes to these files (against the other cluster nodes). + + + + + You can synchronize the configured files with a single command. 
+ + + + + With a simple shell script in ~/.bash_logout, you + can be reminded about unsynchronized changes before logging out of the + system. + + + + + Find detailed information about &csync; at + and + . + + + Pushing synchronization after any changes + + &csync; only pushes changes. It does not continuously + synchronize files between the machines. Each time you update files that need + to be synchronized, you need to push the changes to the other machines. + Using csync2 to push changes is described later, after + the cluster configuration with &yast; is complete. + + + + Configuring &csync; with &yast; + + Start the &yast; cluster module and switch to the + &csync; category. + + + To specify the synchronization group, click Add + in the Sync Host group and enter the local host names + of all nodes in your cluster. For each node, you must use exactly the + strings that are returned by the hostname command. + + Host name resolution + If host name resolution does not work properly in your + network, you can also specify a combination of host name and IP address + for each cluster node. To do so, use the string + HOSTNAME@IP such as + &node1;@&wsIip;, for example. &csync; + then uses the IP addresses when connecting. + + + + Click Generate Pre-Shared-Keys to create a key + file for the synchronization group. The key file is written to + /etc/csync2/key_hagroup. After it has been created, + it must be copied manually to all members of the cluster. + + + To populate the Sync File list with the files + that usually need to be synchronized among all nodes, click Add + Suggested Files. + + + To Edit, Add or + Remove files from the list of files to be synchronized + use the respective buttons. You must enter the absolute path for each + file. + + + Activate &csync; by clicking Turn &csync; + ON. This executes the following command to start + &csync; automatically at boot time: + &prompt.root;systemctl enable csync2.socket + + + Click Finish. &yast; writes the &csync; + configuration to /etc/csync2/csync2.cfg. + + +
+ &yast; <guimenu>Cluster</guimenu>—&csync; + + + + + + + + +
+
+ Synchronizing connection status between cluster nodes @@ -782,138 +891,22 @@ - + Transferring the configuration to all nodes - Instead of copying the resulting configuration files to all nodes - manually, use the csync2 tool for replication across - all nodes in the cluster. - - - This requires the following basic steps: - - - - - . - - - - - . - - - - - &csync; helps you to keep track of configuration changes and to keep - files synchronized across the cluster nodes: + After the cluster configuration with &yast; is complete, use csync2 + to copy the configuration files to the rest of the cluster nodes. To receive the files, + nodes must be included in the Sync Host group you configured in + . - - - - You can define a list of files that are important for operation. - - - - - You can show changes to these files (against the other cluster nodes). - - - - - You can synchronize the configured files with a single command. - - - - - With a simple shell script in ~/.bash_logout, you - can be reminded about unsynchronized changes before logging out of the - system. - - - - - Find detailed information about &csync; at - and - . - - - - Configuring &csync; with &yast; - - Configuring &csync; with &yast; - - Start the &yast; cluster module and switch to the - &csync; category. - - - To specify the synchronization group, click Add - in the Sync Host group and enter the local host names - of all nodes in your cluster. For each node, you must use exactly the - strings that are returned by the hostname command. - - - Host name resolution - If host name resolution does not work properly in your - network, you can also specify a combination of host name and IP address - for each cluster node. To do so, use the string - HOSTNAME@IP such as - &node1;@&wsIip;, for example. &csync; - then uses the IP addresses when connecting. - - - - Click Generate Pre-Shared-Keys to create a key - file for the synchronization group. The key file is written to - /etc/csync2/key_hagroup. After it has been created, - it must be copied manually to all members of the cluster. - - - To populate the Sync File list with the files - that usually need to be synchronized among all nodes, click Add - Suggested Files. - - - To Edit, Add or - Remove files from the list of files to be synchronized - use the respective buttons. You must enter the absolute path for each - file. - - - Activate &csync; by clicking Turn &csync; - ON. This executes the following command to start - &csync; automatically at boot time: - &prompt.root;systemctl enable csync2.socket - - - Click Finish. &yast; writes the &csync; - configuration to /etc/csync2/csync2.cfg. - - -
- &yast; <guimenu>Cluster</guimenu>—&csync; - - - - - - - - -
-
- - - Synchronizing changes with &csync; Before running &csync; for the first time, you need to make the following preparations: - Preparing for initial synchronization with &csync; - Copy the file /etc/csync2/csync2.cfg - manually to all nodes after you have configured it as described in . + + Copy the file /etc/csync2/csync2.cfg manually to all nodes. + Copy the file /etc/csync2/key_hagroup that you @@ -925,24 +918,19 @@ regenerate the file on the other nodes—it needs to be the same file on all nodes. - - - Execute the following command on all nodes to start the service now: + Run the following command on all nodes to start the service now: &prompt.root;systemctl start csync2.socket + + Use the following procedure to transfer the configuration files to all cluster nodes: + - Synchronizing the configuration files with &csync; - + Synchronizing changes with &csync; - To initially synchronize all files once, execute the following + To synchronize all files once, run the following command on the machine that you want to copy the configuration from: &prompt.root;csync2 -xv @@ -966,17 +954,16 @@ Finished with 1 errors. For more information on the &csync; options, run &prompt.root;csync2 -help - + Pushing synchronization after any changes &csync; only pushes changes. It does not continuously synchronize files between the machines. Each time you update files that need to be synchronized, you need to - push the changes to the other machines by running csync2  - on the machine where you did the changes. If you run + push the changes to the other machines by running csync2 -xv + on the machine where you did the changes. If you run the command on any of the other machines with unchanged files, nothing happens. - - +
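
 As an illustration of this push model, after editing a synchronized file such as
 /etc/corosync/corosync.conf on one node, the change is distributed from that same node.
 This is only a sketch; the file name is an example of a file that is typically in the
 &csync; file list:

&prompt.root;vi /etc/corosync/corosync.conf    # change a synchronized file on this node
&prompt.root;csync2 -xv                        # push the change from this node to the other nodes

 Running csync2 -xv on a node that has no local changes does not
 distribute anything, as described above.
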
From c4b3d15f03ce948fda1dc146d42072e4f885a74e Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Fri, 12 Jul 2024 13:55:54 +1000 Subject: [PATCH 22/39] Update to latest metadata --- xml/book_full_install.xml | 100 --------------------------- xml/html/rh-book-administration.html | 85 +++++++++++++++++++++++ 2 files changed, 85 insertions(+), 100 deletions(-) delete mode 100644 xml/book_full_install.xml create mode 100644 xml/html/rh-book-administration.html diff --git a/xml/book_full_install.xml b/xml/book_full_install.xml deleted file mode 100644 index 3b0e9278..00000000 --- a/xml/book_full_install.xml +++ /dev/null @@ -1,100 +0,0 @@ - - - - %entities; -]> - - - - - - - - Installing High Availability clusters for critical workloads - &productname; - &productnameshort; - &productnumber; - - - - - - - TBD - - - - - yes - - - Installation - Administration - Clustering - - Product Documentation - - - - - - - - Planning for deployment - - - - - - - - - - - Installing cluster nodes - - - - - - - - - - - - - - Additional configuration - - - - - - - - - Testing the setup - - - - - - - - - - diff --git a/xml/html/rh-book-administration.html b/xml/html/rh-book-administration.html new file mode 100644 index 00000000..47bbfc19 --- /dev/null +++ b/xml/html/rh-book-administration.html @@ -0,0 +1,85 @@ + +Revision History: Administration Guide + + + + + + + + + + + + + + +

Revision History: Administration Guide

2024-06-26

+

+ Updated for the initial release of SUSE Linux Enterprise High Availability 15 SP6. +

+
From 25ecc0971e093b45f67950973db17c44babc3982 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Fri, 12 Jul 2024 14:00:37 +1000 Subject: [PATCH 23/39] Change title --- DC-SLE-HA-full-install => DC-SLE-HA-deployment | 2 +- xml/MAIN.SLEHA.xml | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) rename DC-SLE-HA-full-install => DC-SLE-HA-deployment (96%) diff --git a/DC-SLE-HA-full-install b/DC-SLE-HA-deployment similarity index 96% rename from DC-SLE-HA-full-install rename to DC-SLE-HA-deployment index a8aa8ad6..0f0b4c94 100644 --- a/DC-SLE-HA-full-install +++ b/DC-SLE-HA-deployment @@ -5,7 +5,7 @@ ## ## Basics MAIN="MAIN.SLEHA.xml" -ROOTID=book-full-install +ROOTID=book-deployment ## Profiling PROFOS="sles" diff --git a/xml/MAIN.SLEHA.xml b/xml/MAIN.SLEHA.xml index b9b7b334..93b07e89 100644 --- a/xml/MAIN.SLEHA.xml +++ b/xml/MAIN.SLEHA.xml @@ -42,8 +42,8 @@ - - + + From cd20db32c54aefa279269fb45e9302eb9b5ff6c1 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 28 Aug 2024 14:57:03 +1000 Subject: [PATCH 24/39] Typo --- xml/ha_bootstrap_install.xml | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index 9247ca5b..414391e8 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -138,8 +138,9 @@ requires only a minimum of time and manual intervention. - This steps in this procedure show the default option followed by alternative or additional - options. For a minimal setup with only the default options, see . + The steps in this procedure show the default option followed by alternative or additional + options. For a minimal setup with only the default options, + see . 
Setting up the first node with <command>crm cluster init</command> From a795331bfde98e08d7bd44f21d593b5def729b61 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 28 Aug 2024 15:07:04 +1000 Subject: [PATCH 25/39] alice -> node1 --- xml/ha_bootstrap_install.xml | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index 414391e8..4af11651 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -323,14 +323,15 @@ If you set up the first node as a sudo user, you must specify the user and node with the option: -&prompt.user;sudo crm cluster join -c USER@&node1; +&prompt.user;sudo crm cluster join -c USER@NODE1 If you set up the first node as a sudo user with SSH agent forwarding, use the following command: -&prompt.user;sudo --preserve-env=SSH_AUTH_SOCK crm cluster join --use-ssh-agent -c USER@&node1; +&prompt.user;sudo --preserve-env=SSH_AUTH_SOCK \ +crm cluster join --use-ssh-agent -c USER@NODE1 From b232248e4f3178e2f49a15f42e0bcb6a63419068 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 29 Aug 2024 10:20:30 +1000 Subject: [PATCH 26/39] Clarify node login --- xml/ha_bootstrap_install.xml | 13 ++++++++++++- xml/ha_log_in.xml | 23 +++++++++++------------ 2 files changed, 23 insertions(+), 13 deletions(-) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index 4af11651..10e6db8c 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -144,11 +144,17 @@ Setting up the first node with <command>crm cluster init</command> + + + Log in to the first cluster node as &rootuser;, or as a user with sudo + privileges. + + Start the bootstrap script: - &prompt.root;crm cluster init --name CLUSTERNAME + &prompt.root;crm cluster init Replace the CLUSTERNAME placeholder with a meaningful name, like the geographical location of your cluster (for example, &cluster1;). @@ -306,6 +312,11 @@ Adding nodes with <command>crm cluster join</command> + + + Log in to this node as the same user you set up the first node with. + + Start the bootstrap script: diff --git a/xml/ha_log_in.xml b/xml/ha_log_in.xml index 29e89fc9..fdc04895 100644 --- a/xml/ha_log_in.xml +++ b/xml/ha_log_in.xml @@ -46,9 +46,9 @@ (or be generated) locally on the node, not on a remote system. - To log in to the first cluster node as the &rootuser; user, run the following command: + To log in to a node as the &rootuser; user, run the following command: -user@local> ssh root@NODE1 +user@local> ssh root@NODE @@ -60,10 +60,9 @@ locally on the node, not on a remote system. - To log in to the first cluster node as a sudo user, run the - following command: + To log in to a node as a sudo user, run the following command: -user@local> ssh USER@NODE1 +user@local> ssh USER@NODE @@ -75,8 +74,7 @@ additional configuration on your local machine and on the cluster nodes. - To log in to the first cluster node with SSH agent forwarding enabled, - perform the following steps: + To log in to a node with SSH agent forwarding enabled, perform the following steps: @@ -89,21 +87,22 @@ - Log in to the first node with the option to enable - SSH agent forwarding: + Log in to the node with the option to enable SSH agent forwarding: -user@local> ssh -A USER@NODE1 +user@local> ssh -A USER@NODE - When you add nodes to the cluster, you must log in to each node as the same user you set up the first node with. 
+ When you add nodes to the cluster, you must log in to each node as the same user you set + up the first node with. - For simplicity, the commands in this guide assume you are logged in as the &rootuser; user. If you logged in as a sudo user, adjust the commands accordingly. + For simplicity, the commands in this guide assume you are logged in as the &rootuser; user. + If you logged in as a sudo user, adjust the commands accordingly. From 6a703bcd061f6a4c7deddf7bfe625ea75672946c Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Fri, 30 Aug 2024 17:24:15 +1000 Subject: [PATCH 27/39] Add more info about starting the init script --- xml/ha_bootstrap_install.xml | 91 ++++++++++++++++++++++++++++++------ 1 file changed, 76 insertions(+), 15 deletions(-) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index 10e6db8c..cc356dff 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -154,22 +154,74 @@ Start the bootstrap script: - &prompt.root;crm cluster init - Replace the CLUSTERNAME - placeholder with a meaningful name, like the geographical location of your - cluster (for example, &cluster1;). - This is especially helpful to create a &geo; cluster later on, - as it simplifies the identification of a site. - - If you need to use multicast instead of unicast (the default) for your cluster - communication, use the option (or ). + You can start the script without specifying any options. This prompts you for input for + some settings, as described in the next steps, and uses &crmsh;'s default values for + other settings. + + + + If you logged in as &rootuser;, you can run this command with no additional parameters: + +&prompt.root;crm cluster init + + + + If you logged in as a sudo user without SSH agent forwarding, + run this command with sudo: + +&prompt.user;sudo crm cluster init + + + + If you logged in as a sudo user with SSH agent forwarding enabled, + you must preserve the environment variable SSH_AUTH_SOCK + and tell the script to use your local SSH keys instead of generating keys on the node: + +&prompt.user;sudo --preserve-env=SSH_AUTH_SOCK crm cluster init --use-ssh-agent + + - The script checks for NTP configuration and a hardware watchdog service. If required, - it generates the public and private SSH keys used for passwordless SSH access and - &csync; synchronization and starts the respective services. + Alternatively, you can specify additional options as part of the initialization command. + You can include multiple options in the same command. Some examples are shown below. + For more options, run crm cluster help init. + + + Multicast + + + Unicast is the default transport type for cluster communication. To use multicast + instead, use the option (or ). + For example: + +&prompt.root;crm cluster init --multicast + + + + SBD disks + + + In a later step, the script asks if you want to set up SBD and prompts you for a disk + to use. To configure the cluster with multiple SBD disks, use the option + (or ) multiple times. For example: + +&prompt.root;crm cluster init --sbd-device /dev/disk/by-id/ID1 --sbd-device /dev/disk/by-id/ID2 + + + + Network interfaces + + + In a later step, the script prompts you for a network interface for &corosync; to use. + To configure the cluster with two network interfaces, use the option + (or ) twice. For example: + +&prompt.root;crm cluster init --interface eth0 --interface eth1 + + + @@ -206,6 +258,13 @@ + + + Enter a name for the cluster. 
Choose a meaningful name, like the geographical location + of the cluster (for example, &cluster1;). This is especially helpful + if you create a &geo; cluster later, as it simplifies the identification of a site. + + Configure a virtual IP address for cluster administration with &hawk2;: @@ -229,9 +288,11 @@ - Finally, the script will start the cluster services to bring the - cluster online and enable &hawk2;. The URL to use for &hawk2; is - displayed on the screen. + The script checks for NTP configuration and a hardware watchdog service. If required, + it generates the public and private SSH keys used for passwordless SSH access and + &csync; synchronization and starts the respective services. Finally, the script + starts the cluster services to bring the cluster online and enables &hawk2;. + The URL to use for &hawk2; is displayed on the screen.
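
 For example, the options shown above can be combined into a single call. The following
 sketch uses placeholder values for the cluster name, the SBD device and the network
 interfaces; adjust them to your environment:

&prompt.root;crm cluster init --name CLUSTERNAME \
  --sbd-device /dev/disk/by-id/ID \
  --interface eth0 --interface eth1

 Settings that are not covered by the options you pass are still handled by the
 interactive prompts described in the following steps.
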
From 6c885c4e9b9e0e79eda0d02168da73867c11723a Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Fri, 6 Sep 2024 17:16:36 +1000 Subject: [PATCH 28/39] Further expand crm cluster init procedure --- xml/ha_bootstrap_install.xml | 136 ++++++++++++++++++++++++++--------- 1 file changed, 104 insertions(+), 32 deletions(-) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index cc356dff..6a655b96 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -188,6 +188,21 @@ For more options, run crm cluster help init. + + Cluster name + + + The default cluster name is hacluster. To choose a different name, + use the option (or ). For example: + +&prompt.root;crm cluster init --name &cluster1; + + Choose a meaningful name, like the geographical location of the cluster. This is + especially helpful if you create a &geo; cluster later, as it simplifies the + identification of a site. + + + Multicast @@ -214,11 +229,26 @@ Network interfaces - In a later step, the script prompts you for a network interface for &corosync; to use. + In a later step, the script prompts you for a network address for &corosync; to use. To configure the cluster with two network interfaces, use the option (or ) twice. For example: &prompt.root;crm cluster init --interface eth0 --interface eth1 + TODO: This and -M seem to do the same thing. What's the difference? + + + + Redundant communication channel + + + Supported clusters must have two communication channels. The preferred method is to + use network device bonding. If you cannot use bonding, the alternative is to set up + a redundant communication channel in &corosync;. By default, the script prompts you + for a network address for a single communication channel. To configure the cluster + with two communication channels, use the option + (or ). For example: + +&prompt.root;crm cluster init --multi-heartbeats @@ -241,50 +271,92 @@ Accept the proposed port (5405) or enter a different one. - - - - - Set up SBD as the node fencing mechanism: - - - Confirm with y that you want to use SBD. - - Enter a persistent path to the partition of your block device that - you want to use for SBD. - The path must be consistent across all nodes in the cluster. - The script creates a small partition on the device to be used for SBD. + + TODO: Better words. If you used -M or -i twice, enter a second network address and port. + - Enter a name for the cluster. Choose a meaningful name, like the geographical location - of the cluster (for example, &cluster1;). This is especially helpful - if you create a &geo; cluster later, as it simplifies the identification of a site. + Choose whether to set up SBD as the node fencing mechanism. If you are using a different + fencing mechanism or want to set up SBD later, enter n to skip this step. + To continue with this step, enter y. + + + Select the type of SBD to use: + + + + To use diskless SBD, enter none. + + + + + To use disk-based SBD, enter a persistent path to the partition of the block device you + want to use. The path must be consistent across all nodes in the cluster, for example, + /dev/disk/by-id/ID. + + + The script creates a small partition on the device to be used for SBD. + + + - Configure a virtual IP address for cluster administration with &hawk2;: - - - Confirm with y that you want to configure a - virtual IP address. 
- - Enter an unused IP address that you want to use as administration IP - for &hawk2;: &subnetI;.10 - - Instead of logging in to an individual cluster node with &hawk2;, - you can connect to the virtual IP address. - - + + Choose whether to configure a virtual IP address for cluster administration with &hawk2;. + Instead of logging in to an individual cluster node with &hawk2;, you can connect + to the virtual IP address. + + + If you choose y, enter an unused IP address to use for &hawk2;. + - Choose whether to configure &qdevice; and &qnet;. For the minimal setup - described in this document, decline with n for now. + Choose whether to configure &qdevice; and &qnet;. If you have not set up the &qnet; server + yet, enter n to skip this step and set up &qdevice; and &qnet; later. + If you choose y, provide the following information: + + + + Enter the host name or IP address of the &qnet; server. + + + For the remaining fields, you can accept the default values or change them as required: + + + + + Accept the proposed port (5403) or enter a different one. + + + + + Choose the algorithm that determines how votes are assigned. + + + + + Choose the method to use when a tie-breaker is required. + + + + + Choose whether to enable TLS. TODO: More info on the options, see that other bug. + + + + + Enter heuristics commands to affect how votes are determined. To skip this step, leave + the field blank. + + + From 5dc8088440308702abaee86ce81ace820b670c11 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Tue, 10 Sep 2024 15:10:01 +1000 Subject: [PATCH 29/39] Fill out crm cluster init more --- xml/ha_bootstrap_install.xml | 56 ++++++++++++++++++------------------ 1 file changed, 28 insertions(+), 28 deletions(-) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index 6a655b96..9d1a7bd6 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -195,7 +195,7 @@ The default cluster name is hacluster. To choose a different name, use the option (or ). For example: -&prompt.root;crm cluster init --name &cluster1; +&prompt.root;crm cluster init --name CLUSTERNAME Choose a meaningful name, like the geographical location of the cluster. This is especially helpful if you create a &geo; cluster later, as it simplifies the @@ -225,30 +225,25 @@ &prompt.root;crm cluster init --sbd-device /dev/disk/by-id/ID1 --sbd-device /dev/disk/by-id/ID2 - - Network interfaces - - - In a later step, the script prompts you for a network address for &corosync; to use. - To configure the cluster with two network interfaces, use the option - (or ) twice. For example: - -&prompt.root;crm cluster init --interface eth0 --interface eth1 - TODO: This and -M seem to do the same thing. What's the difference? - - Redundant communication channel Supported clusters must have two communication channels. The preferred method is to - use network device bonding. If you cannot use bonding, the alternative is to set up - a redundant communication channel in &corosync;. By default, the script prompts you - for a network address for a single communication channel. To configure the cluster - with two communication channels, use the option - (or ). For example: + use network device bonding. If you cannot use bonding, you can set up a redundant + communication channel in &corosync; (also known as a second ring or heartbeat line). + By default, the script prompts you for a network address for a single ring. 
+ To configure the cluster with two rings, use the option + (or ) twice. For example: -&prompt.root;crm cluster init --multi-heartbeats +&prompt.root;crm cluster init --interface eth0 --interface eth1 + + + You can also use (or ) to set + up a second &corosync; ring . This option uses the first two network interfaces by + default, whereas allows you to specify any two network interfaces. + + @@ -273,7 +268,9 @@ - TODO: Better words. If you used -M or -i twice, enter a second network address and port. + If you started the script with an option that configures a redundant communication channel, + enter y to accept a second heartbeat line, then either accept the + proposed network address and port or enter different ones. @@ -282,10 +279,9 @@ Choose whether to set up SBD as the node fencing mechanism. If you are using a different fencing mechanism or want to set up SBD later, enter n to skip this step. - To continue with this step, enter y. - Select the type of SBD to use: + If you chose y, select the type of SBD to use: @@ -312,19 +308,23 @@ to the virtual IP address. - If you choose y, enter an unused IP address to use for &hawk2;. + If you chose y, enter an unused IP address to use for &hawk2;. - Choose whether to configure &qdevice; and &qnet;. If you have not set up the &qnet; server - yet, enter n to skip this step and set up &qdevice; and &qnet; later. - If you choose y, provide the following information: + Choose whether to configure &qdevice; and &qnet;. If you do not need to use &qdevice; or + have not set up the &qnet; server yet, enter n to skip this step. + You can set up &qdevice; and &qnet; later if required. + + + If you chose y, provide the following information: - Enter the host name or IP address of the &qnet; server. + Enter the host name or IP address of the &qnet; server. The cluster node must have + SSH access to this server to complete the configuration. For the remaining fields, you can accept the default values or change them as required: @@ -347,7 +347,7 @@ - Choose whether to enable TLS. TODO: More info on the options, see that other bug. + Choose whether to enable TLS. From 1a31940355faccd486682bc18763086aa48090fb Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Tue, 10 Sep 2024 15:34:24 +1000 Subject: [PATCH 30/39] Move hacluster password warning earlier --- xml/ha_bootstrap_install.xml | 23 ++++++++++++++++------- 1 file changed, 16 insertions(+), 7 deletions(-) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index 9d1a7bd6..c1a5cc8b 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -366,6 +366,16 @@ starts the cluster services to bring the cluster online and enables &hawk2;. The URL to use for &hawk2; is displayed on the screen. + + Secure password for <systemitem class="username">hacluster</systemitem> + + The crm cluster init script creates a default user + (hacluster) and password + (linux). Replace the default password with a secure one + as soon as possible: + +&prompt.root;passwd hacluster +
@@ -403,16 +413,15 @@ - On the &hawk2; login screen, enter the Username and - Password of the user that was created by the bootstrap script - (user hacluster, password linux). + On the &hawk2; login screen, enter the Username of the user that was + created by the bootstrap script (hacluster) + and the secure Password that you changed from the bootstrap script's + default password. - - Secure password + - Replace the default password with a secure one as soon as possible: + If you have not already changed the default password to a secure one, do so now. -&prompt.root;passwd hacluster From c5bc6b8244758a2a1bc50e7e8134f3070f7d0a6e Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Fri, 13 Sep 2024 16:54:58 +1000 Subject: [PATCH 31/39] Move back to Admin Guide For SLE 15, it just isn't viable to have a new guide. A lot of the content is already in the Admin Guide and would need to be either duplicated or linked to. Reusing content is currently not simple. For SLE 16, we can revisit having a separate guide. --- DC-SLE-HA-deployment | 25 ---- xml/MAIN.SLEHA.xml | 3 - xml/book_administration.xml | 16 +++ xml/ha_bootstrap_install.xml | 4 +- xml/ha_install_intro.xml | 30 ----- xml/ha_sbd_watchdog.xml | 216 ----------------------------------- 6 files changed, 19 insertions(+), 275 deletions(-) delete mode 100644 DC-SLE-HA-deployment delete mode 100644 xml/ha_install_intro.xml delete mode 100644 xml/ha_sbd_watchdog.xml diff --git a/DC-SLE-HA-deployment b/DC-SLE-HA-deployment deleted file mode 100644 index 0f0b4c94..00000000 --- a/DC-SLE-HA-deployment +++ /dev/null @@ -1,25 +0,0 @@ -## ---------------------------- -## Doc Config File for SUSE Linux Enterprise High Availability Extension -## Full installation guide -## ---------------------------- -## -## Basics -MAIN="MAIN.SLEHA.xml" -ROOTID=book-deployment - -## Profiling -PROFOS="sles" -PROFCONDITION="suse-product" - -## stylesheet location -STYLEROOT="/usr/share/xml/docbook/stylesheet/suse2022-ns" -FALLBACK_STYLEROOT="/usr/share/xml/docbook/stylesheet/suse-ns" - -## enable sourcing -export DOCCONF=$BASH_SOURCE - -##do not show remarks directly in the (PDF) text -#XSLTPARAM="--param use.xep.annotate.pdf=0" - -### Sort the glossary -XSLTPARAM="--param glossary.sort=1" diff --git a/xml/MAIN.SLEHA.xml b/xml/MAIN.SLEHA.xml index 93b07e89..b452e831 100644 --- a/xml/MAIN.SLEHA.xml +++ b/xml/MAIN.SLEHA.xml @@ -42,9 +42,6 @@ - - - diff --git a/xml/book_administration.xml b/xml/book_administration.xml index 819f3fd7..6eb0abd6 100644 --- a/xml/book_administration.xml +++ b/xml/book_administration.xml @@ -55,6 +55,22 @@ + + + + + Installation and setup + + + + + + + + + + + diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index c1a5cc8b..f409e7da 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -16,7 +16,9 @@ - + &productname; includes bootstrap scripts to simplify the installation of a cluster. + You can use these scripts to set up the cluster on the first node, add more nodes to the + cluster, remove nodes from the cluster, and adjust certain settings in an existing cluster. 
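+     For example, a minimal sketch of the typical bootstrap workflow (NODE1 and NODE are
+     placeholders for your own node names, not values defined by this guide) might look
+     like this:
+
+&prompt.root;crm cluster init             # set up the cluster on the first node
+&prompt.root;crm cluster join -c NODE1    # run on each additional node to join it to NODE1's cluster
+&prompt.root;crm cluster remove NODE      # remove a node from the cluster again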
diff --git a/xml/ha_install_intro.xml b/xml/ha_install_intro.xml deleted file mode 100644 index dd655a3d..00000000 --- a/xml/ha_install_intro.xml +++ /dev/null @@ -1,30 +0,0 @@ - - - %entities; -]> - - - Preface - - - - editing - - - yes - - - - - - - - - - - diff --git a/xml/ha_sbd_watchdog.xml b/xml/ha_sbd_watchdog.xml deleted file mode 100644 index df3848d7..00000000 --- a/xml/ha_sbd_watchdog.xml +++ /dev/null @@ -1,216 +0,0 @@ - - - - %entities; -]> - - - Setting up a watchdog for SBD - - - - If you are using SBD as your &stonith; device, you must enable a watchdog on each - cluster node. If you are using a different &stonith; device, you can skip this chapter. - - - - - yes - - - - - - - &productname; ships with several kernel modules that provide hardware-specific watchdog drivers. - For clusters in production environments, we recommend using a hardware watchdog. - However, if no watchdog matches your hardware, the software watchdog - (softdog) can be used instead. - - - &productname; uses the SBD daemon as the software component that feeds the watchdog. - - - - Using a hardware watchdog - - Finding the right watchdog kernel module for a given system is not - trivial. Automatic probing fails often. As a result, many modules - are already loaded before the right one gets a chance. - - The following table lists some commonly used watchdog drivers. However, this is - not a complete list of supported drivers. If your hardware is not listed here, - you can also find a list of choices in the following directories: - - - - - /lib/modules/KERNEL_VERSION/kernel/drivers/watchdog - - - - - /lib/modules/KERNEL_VERSION/kernel/drivers/ipmi - - - - - Alternatively, ask your hardware or - system vendor for details on system-specific watchdog configuration. - - - Commonly used watchdog drivers - - - - Hardware - Driver - - - - - HP - hpwdt - - - Dell, Lenovo (Intel TCO) - iTCO_wdt - - - Fujitsu - ipmi_watchdog - - - LPAR on IBM Power - pseries-wdt - - - VM on IBM z/VM - vmwatchdog - - - Xen VM (DomU) - xen_xdt - - - VM on VMware vSphere - wdat_wdt - - - Generic - softdog - - - -
- - Accessing the watchdog timer - - Some hardware vendors ship systems management software that uses the - watchdog for system resets (for example, HP ASR daemon). If the watchdog is - used by SBD, disable such software. No other software must access the - watchdog timer. - - - - Loading the correct kernel module - - - List the drivers that are installed with your kernel version: - -&prompt.root;rpm -ql kernel-VERSION | grep watchdog - - - - List any watchdog modules that are currently loaded in the kernel: - -&prompt.root;lsmod | egrep "(wd|dog)" - - - - If you get a result, unload the wrong module: - -&prompt.root;rmmod WRONG_MODULE - - - - Enable the watchdog module that matches your hardware: - -&prompt.root;echo WATCHDOG_MODULE > /etc/modules-load.d/watchdog.conf -&prompt.root;systemctl restart systemd-modules-load - - - - Test whether the watchdog module is loaded correctly: - -&prompt.root;lsmod | grep dog - - - - Verify if the watchdog device is available: - -&prompt.root;ls -l /dev/watchdog* -&prompt.root;sbd query-watchdog - - If the watchdog device is not available, check the module name and options. - Maybe use another driver. - - - - - Verify if the watchdog device works: - -&prompt.root;sbd -w WATCHDOG_DEVICE test-watchdog - - - - Reboot your machine to make sure there are no conflicting kernel modules. For example, - if you find the message cannot register ... in your log, this would indicate - such conflicting modules. To ignore such modules, refer to - . - - - -
- - - Using the software watchdog (softdog) - - For clusters in production environments, we recommend using a hardware-specific watchdog - driver. However, if no watchdog matches your hardware, - softdog can be used instead. - - - Softdog limitations - - The softdog driver assumes that at least one CPU is still running. If all CPUs are stuck, - the code in the softdog driver that should reboot the system is never executed. - In contrast, hardware watchdogs keep working even if all CPUs are stuck. - - - - Loading the softdog kernel module - - - Enable the softdog watchdog: - -&prompt.root;echo softdog > /etc/modules-load.d/watchdog.conf -&prompt.root;systemctl restart systemd-modules-load - - - - Check whether the softdog watchdog module is loaded correctly: - -&prompt.root;lsmod | grep softdog - - - - -
From 438571d010653863efc5901ce5d506e39e5c5619 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Fri, 20 Sep 2024 17:12:03 +1000 Subject: [PATCH 32/39] Remove duplicate Hawk2 procedure --- xml/ha_bootstrap_install.xml | 73 +++++------------------------------- 1 file changed, 10 insertions(+), 63 deletions(-) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index f409e7da..bb646ca9 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -368,6 +368,9 @@ starts the cluster services to bring the cluster online and enables &hawk2;. The URL to use for &hawk2; is displayed on the screen. + + To log in to &hawk2;, see . + Secure password for <systemitem class="username">hacluster</systemitem> @@ -380,69 +383,6 @@
- - - Logging in to the &hawk2; web interface - - You now have a running one-node cluster. To view its status, proceed as follows: - - - Logging in to the &hawk2; Web interface - - - On any machine, start a Web browser and make sure that JavaScript and cookies are enabled. - - - - - As URL, enter the virtual IP address that you configured with the bootstrap script: - -https://VIRTUAL_IP:7630/ - - Certificate warning - - If a certificate warning appears when you try to access the URL for the first time, - a self-signed certificate is in use. Self-signed certificates are not considered - trustworthy by default. - - - Ask your cluster operator for the certificate details to verify the certificate. - - - To proceed anyway, you can add an exception in the browser to bypass the warning. - - - - - - On the &hawk2; login screen, enter the Username of the user that was - created by the bootstrap script (hacluster) - and the secure Password that you changed from the bootstrap script's - default password. - - - - If you have not already changed the default password to a secure one, do so now. - - - - - - Click Log In. The &hawk2; Web interface shows the - Status screen by default: - -
- Status of the one-node cluster in &hawk2; - - - - - -
-
-
-
- Adding nodes with <command>crm cluster join</command> @@ -520,4 +460,11 @@ crm cluster join --use-ssh-agent -c USER@StatusNodes. + + + Modifying the cluster with <command>crm cluster init</command> stages + + TO DO + + From 8bd8a5ebb887fbfdcd17d125fec9e9bf5dcfb7e6 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 2 Oct 2024 17:13:41 +1000 Subject: [PATCH 33/39] Add section for crm cluster remove --- xml/ha_bootstrap_install.xml | 57 ++++++++++++++++++++++++++++-------- 1 file changed, 44 insertions(+), 13 deletions(-) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index bb646ca9..ef18926b 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -12,7 +12,7 @@ xmlns="http://docbook.org/ns/docbook" version="5.1" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink"> - Using the bootstrap script + Using the bootstrap scripts @@ -158,7 +158,7 @@ You can start the script without specifying any options. This prompts you for input for - some settings, as described in the next steps, and uses &crmsh;'s default values for + certain settings as described in later steps, and uses &crmsh;'s default values for other settings. @@ -225,6 +225,11 @@ (or ) multiple times. For example: &prompt.root;crm cluster init --sbd-device /dev/disk/by-id/ID1 --sbd-device /dev/disk/by-id/ID2 + + This option is also useful because you can use tab completion for the device ID, + which is not available later when the script prompts you for the path. + + @@ -352,7 +357,7 @@ Choose whether to enable TLS. - + Enter heuristics commands to affect how votes are determined. To skip this step, leave the field blank. @@ -430,27 +435,25 @@ crm cluster join --use-ssh-agent -c USER@ - If NTP is not configured to start at boot time, a message - appears. The script also checks for a hardware watchdog device. - You are warned if none is present. + If NTP is not configured to start at boot time, a message appears. The script also checks + for a hardware watchdog device. You are warned if none is present. - If you did not already specify the first cluster node - with , you will be prompted for its IP address. + If you did not already specify the first cluster node with , + you are prompted for its IP address. If you did not already configure passwordless SSH access between the cluster nodes, - you will be prompted for the password of the first node. + you are prompted for the password of the first node. - After logging in to the specified node, the script copies the - &corosync; configuration, configures SSH and &csync;, - brings the current machine online as a new cluster node, and - starts the service needed for &hawk2;. + After logging in to the specified node, the script copies the &corosync; configuration, + configures SSH and &csync;, brings the current machine online as a new cluster node, + and starts the service needed for &hawk2;. @@ -467,4 +470,32 @@ crm cluster join --use-ssh-agent -c USER@ + + + Removing nodes with <command>crm cluster remove</command> + + You can remove nodes from the cluster with the crm cluster remove + bootstrap script. + + + If you run crm cluster remove with no additional parameters, you are + prompted for the IP address or host name of the node to remove. 
Alternatively, you can + specify the node when you run the command: + +&prompt.root;crm cluster remove NODE + + On the specified node, this stops all cluster services and removes the local cluster + configuration files. On the rest of the cluster nodes, the specified node is removed + from the cluster configuration. + + + In most cases, you must run crm cluster remove from a different node, + not from the node you want to remove. However, to remove the last node + and delete the cluster, you can use (or ): + +&prompt.root;crm cluster remove --force LASTNODE + + For more information, run crm cluster help remove. + + From 49bddec237cbb7ee818815863e428732a1312009 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 3 Oct 2024 15:57:34 +1000 Subject: [PATCH 34/39] Fix help command --- xml/ha_bootstrap_install.xml | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-) diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index ef18926b..d4e2df2e 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -187,7 +187,7 @@ Alternatively, you can specify additional options as part of the initialization command. You can include multiple options in the same command. Some examples are shown below. - For more options, run crm cluster help init. + For more options, run crm cluster init --help. @@ -464,13 +464,6 @@ crm cluster join --use-ssh-agent -c USER@ - - Modifying the cluster with <command>crm cluster init</command> stages - - TO DO - - - Removing nodes with <command>crm cluster remove</command> @@ -495,7 +488,7 @@ crm cluster join --use-ssh-agent -c USER@ &prompt.root;crm cluster remove --force LASTNODE - For more information, run crm cluster help remove. + For more information, run crm cluster remove --help. From ed313d3856e3813fcd42003df4b9d8e9b8270501 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 3 Oct 2024 16:48:15 +1000 Subject: [PATCH 35/39] Fix pattern name --- xml/ha_requirements.xml | 4 +- xml/html/rh-book-administration.html | 85 ---------------------------- 2 files changed, 2 insertions(+), 87 deletions(-) delete mode 100644 xml/html/rh-book-administration.html diff --git a/xml/ha_requirements.xml b/xml/ha_requirements.xml index c7e3d238..0a2ed5ea 100644 --- a/xml/ha_requirements.xml +++ b/xml/ha_requirements.xml @@ -96,7 +96,7 @@ HA Node system role - &ha; (sles_ha) + &ha; (ha_sles) Enhanced Base System (enhanced_base) @@ -123,7 +123,7 @@ You might need to add more packages manually, if required. For machines that originally had another system role assigned, you need to - manually install the sles_ha or + manually install the ha_sles or ha_geo patterns and any further packages that you need. diff --git a/xml/html/rh-book-administration.html b/xml/html/rh-book-administration.html deleted file mode 100644 index 47bbfc19..00000000 --- a/xml/html/rh-book-administration.html +++ /dev/null @@ -1,85 +0,0 @@ - -Revision History: Administration Guide - - - - - - - - - - - - - - -

Revision History: Administration Guide

2024-06-26

-

- Updated for the initial release of SUSE Linux Enterprise High Availability 15 SP6. -

-
From 41cfc3761b7c86c5e84c6ab1f64f46b50626987e Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Wed, 9 Oct 2024 13:35:31 +1000 Subject: [PATCH 36/39] Move installation overview ahead of system requirements --- xml/book_administration.xml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/xml/book_administration.xml b/xml/book_administration.xml index 6eb0abd6..18c37004 100644 --- a/xml/book_administration.xml +++ b/xml/book_administration.xml @@ -62,8 +62,8 @@ Installation and setup - + From 79f6ac153861833d05b5780c45b5aced34c8a054 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Thu, 10 Oct 2024 16:28:01 +1000 Subject: [PATCH 37/39] Add yast cluster qdevice section --- images/src/png/yast_cluster_qdevice.png | Bin 0 -> 60416 bytes xml/ha_qdevice-qnetd.xml | 6 +- xml/ha_yast_cluster.xml | 155 ++++++++++++++++++++++++ 3 files changed, 158 insertions(+), 3 deletions(-) create mode 100644 images/src/png/yast_cluster_qdevice.png diff --git a/images/src/png/yast_cluster_qdevice.png b/images/src/png/yast_cluster_qdevice.png new file mode 100644 index 0000000000000000000000000000000000000000..05202033e7fef201b9be29869a13f1224730aa5e GIT binary patch literal 60416 zcmbTdby!qw_cly-H%LhisY5q|l=RRb0y0Q=l8zf@xFh2-#!k8nH_7-?CZSNI@em~6$jN;B_?1XKtn?#R(q_ZhlYl!gNBAd zjE4d2ap;)J2mWDtJyJ8k!^2zJ&{+pQ(t9hLcvSokx{%R6T-lHWKY9-UuD$( z5&NUl1X~fEFcqxVUu3!wzLT!p7J1%w2j0ug-#A?Tj9R9Wkn>|-kiHZHbxZT1H{FFt zFXqZGEmn+waimuqS2r0~U5<75t)1pIba!@3cWH8^`gr}jU4G9d4bS;^tCSW0&t@Fx z`k$>Dwfvv$DYEUq+v0yrczHE#Z~ePDj*tFlo5}x=*_{8e+mihrRieEj5V%|HSM_5| zTBqViimF^|w)v{h++fHx$(s9j}QO+aMPaX&JL*E9Bmrdqf7IV@2TDoPiGrKQ4PH)ypx8!clH=amE5#O_T+HzRM za`lYOq;@>mb>mI9jMEpm(5IN^w~KSO4?Su=oVX4KF4Ws#GTV+W?&f;@c@&i^j&2)0 zE{{8rOLGm&?-$w%RgFV7`5F$Ej4fBZ(jDEte-8FxRs3r~r<|gK=IxtLth#P*NOyZ#u0AcF zKiRIe!Zm?wc<6ZVo+gSnH!PP;M2R|KfZz-GJ^q{91#6Wkg$R;9WS(@v7kd4S95LA% z9}|lvM`0Gz^-AeOiQ;tRd4>~8i$^O!0-O&fgs#~jKnZ`lY5 zX`F>y4pK673sH4f-&5Ph;0Z3+jh+WZ+IhoL6LjJf568fN@Y%vpg`&uS2I7)N3#)wV z?;Je|y-Ieg<=<=xxi-5!x=l2%g9$e~Oht$OF2%PO6=7u2$u;@p7)zSr6G*RjRB4&_ zVt10F>bCo4t`UF>dyz$G@IZ2?On&e+03!GELY73!HqTQZy+EO&)q$sGpj3(C&1&PI^kV5&LbpTXB`)2x0Cv9HCHzyD5SkG}tWPWmo z3T>}lcSJds{y5p>^jqm{q;WR_s^4C)SB&w ztCNn$o$^<`Z3VYi7?M^?I-vnSWY2QTDi*CHZ_kr0ZzCY)F_|0{M_;T{^)V;q7Mk$ixs42Frw0XCTW()!LJ={(zIgGyBlryOpkx<`u<6}eoJpOHg~fN=M1wRY0BBr?e$BG&ZK^0PxP^q zi&G6zb(UZe6m2(MYBlQQXlaTm*MD|^Uf6fCH{h4xM6BSO?eQR6a+Zm5ZQ;Lo(#c+I z5S?Jega5ol6`_)t&FSS01NN)7wUBjT3Gg4AZ>C(O6*KynYiR_=%*~o&JV&1kexv_P zUEVp8K49E^9CZN_SeL(YKfKti)D0|NX~89<@qR02iR`@C;<{JHqwi~3yqKzn1UvoU zCt~#f?C{rJ8hOpEy0ql2HaVH&JCZ7zY38;LXrB~UcD|gu)I^$_4I2mc#<0LK)Aa+} zwDhK#QR@+^;p2i|g7}S-7x$Mh5N2*J?~4|v7W5n1eIATv)}5-#Gu)z( zRgUP<5;xC(xPfH3^}o72s!=cN+|FMbkP6Cy2F7YEO8d(ATu)nk>!+m=T#h~+%@IHL zI9`viyj&z|_t2VN-I+M5-FWpNnp=!a(Z1wChzAOjl(B=Msl{{CHb`Bxq^vWB&ozDJg2TGtc z>rYTIBi0Md;v;=_^ObgEn)cU?^jg+rhu`6WHsE8fXDqdhCFeRc4%@b|Hm|yG&vxI- z`0TmTbp0$|036-n(pfMSbD~t+uj?b7X2NU%v;37>t8aHR379+#?~Z|}#Fi8CO`K-K zbpLirUJK#{uMD}KqrE+_vtC#U{jKBn)N5WVJGVey9#0PSxmx}-OW8YO>u)S?eZ~fFZ`Nw={OqjRc<7$K zDxGGu0dm)oRM#H}Pi`Sz6-L0wW&YgC$;lSw3$ki=UD2@a|8QD!E5&R+w#852Jp=HigrK&)}CW~a(ZctQ< zw*$BSE}#F2kh+Lbx;c4cH#n%ulE@@+49$~zqkAfUy(=Mlb5Ss^BbX z0QsBIcUG%t`F-d35WcyU^Pn%$-gGn3oHYAOAE(|zkU!sM6R&K&6QWt}Y_ERFdelH)iY1DAUL z;?%3^NuKB_`rqLHzmlM>|04_f9}*<~|C3z(FLwWrX5mWvie~1wX@N}Jvi*Q% zlS8Ug&~EYMwQ#UMo&4&;<{$rNR5`s`m~4At*2H_c9WlmMU=ygD(&2ZLE4<3N;3v9c&IScx|h*lw6!P&Ai`Jm3a$$yC4 
zsxIHwgf7=VO3=L82njxtu;R^o_FvxBdHl%t?9jb^_(_H!u|VRzH)SZ>kEgkiFyiH+|1qC94urF62^6ZmL|cC(>&x&8fzC9Dn!k=%r4tR<+=~@caH46^`qv zcXV~(zPDeJqplms67-L(SII$tN3%-DIq!{g znstt!S@jyXnX4J^x(z{9ckmhn`m0dAYdKDky$zZtA8LB-I3AgjR-BtaM=qG$b>#_d zx1S}ICz4Z5B*;!5`x08T8byenH`^T~`I$YEJR%}gRPNf_UC8yR@AuEOs{51glsBx; z4y{Uw@N(c+CM?iERHQ7kOs1?m>-O`{{~YX2DXi+o9dw!G=y1^gV#XvzWfa(6No2+A{aNUS zORFM#QBmlm#mDKekZQ8g!HT}EpM{O$ygldm<}yO?`*^=+Gbnu6(DvbN)9yN{Q4MG=ZYjEEp1tH&#d^NrdD=|x!`%m%&tf6aUmR10s$x6+j=Ah)t7w>_(e|=p>Z6p_uIdb>;y?5w}1Vzg&F(sxATtPF1 zs)u9EuaB01tSHxH#5hnkfbsH8Xx4FH_nAtA?Y8a6yeSk}$1mWyo#)`_Hk`pIWjZ8^ zn4Dhy=e`xak_4`SLR0B}k9h}9Ibz`A)N9Qmagh(%)oG%$UD?T_Ka@zl;UVfcuIZRu z8<)3F*7G-|6j&=#JAT$m@RYvk&8;A{ z67PJLhiu-c=EMvAJwv^~v*mBx^M7LluT33MYm%cI>fpF)PwIBfpFQnrwX=6%G)Hl; zB~|5pG@d{O%KfFz4RffBbG?K9LRM)|(;2|lLQxHMciH9MOvlf?%0Ri__6OmZm2O=Z zughg4v9)JwNZ+U;v!G9kP$plbEzdKUevsdjco>x9S$MBGG&HP4KaBsOP?>D5^HPS_ zWOt~$kd6933Y1)t`9v%hhnn0J8N>_f`wc+{CS;)!3V0PMVh$f5Wp?(u)HbhI&lXz2EhUZF zvR;M#O-{Ym;{1<=NbVVpy;$qodeqeuzIu_jT z3=AIm`(S`-iD9Aa#q-ujjx^ofi+@o(I+OoDOu#tse~K$lk)|3;&2|<6yXFn9D{={Q zN0Xc~S+7JL4Wz;@4+)toZUT0k-14Ni*Qt=2-}>7`uZunYEzvZYAe$V#Z0D+d0E(B^ zVE4w#^5*0{kmF1M#Jk~FL7?E(Ax$VMzJo$jUWn$_b1||0tzxg+`bp{9tyY~c($me; zqVcohm%rbzYPJH1m?WWBKQe*DvrbOXv_3t#(#R!Ae4%#3afX?}X=`LT2j$wj8`ZBo z!#vv=Xw?+s%p}JDL3=@Q>R%ham7Tx@c@T_Tv(!e)o@&~s5i>JB$rdFXPL|O-e90ql zW!Gk0vpn>pV=bEWf!9kK1M62bHi;(ftoKAfEl;+Fl6^U(tZBN}uzsn$3fURSZAh;o zY8)z)y@ou#+H?4K2wE0PQOqF{2Xs|lJJ77fz{9qKi_?WV>jB!BCZN=d-T`PLSMnAVUzBYg`6ky+zmy^hn00;i|M(vhHsOJcCUwy31{pPYiyPT{^ z)rg7fDNiC6;B8{rt$jJfw_w`P)AU5rw2tkAZ?)Zct=w&k3!%W#2mKP%Piu-)uFY9o z2DDcdvMSN((%#2ky#J=tl}(N{It9m-_Jp+2OLIVY8Ayt<;b1^5I$33n0h!*_C2C$e z6!71Gt8RGkflT|zXD{t?!(7r5@Ux8K)v=3;d*)q?AvuOGbwb@b)6KTaOM6rixoWad zgt6yZusFX?PR0^XD5}Gb=fAY`t|GQO)$6Dulk&k$$MI;lbojkxccosStzKZ@S%w$$ zEr+W@(_EP@YO?sC&1-{vk*Z1DRzNKzF>-()o;lx`7$s3IA<(+ z?50W%t*|F?R9f#Jx)yLqM(-yC2w#A(1&w(87X3yB}-Dxc$Z3aL!@Ee(W2v?dN z4O`1y*Sc8W?tF?M#c@z(Z}xQ|3oY3=H)`qnVR(hr`#EKSUG>prhQNj(gr{w)w4u^C zMs)7|AhSHPH+8$qblE`+%U15u0;Ow1$2F>=T-H62j+}95@Ni{8e2plt4cUA!S!~7V z5QJJdNMhO`{|AMP*uunsG|Gjbfo>17ea->OTdVeAU;1byWKVx#YgAU`-#o+d!(#7~ zW*KSwFZT~t{N~^&F_%TSuM!0tk>X>YLq-~JF_IUhzV9{Oc#iF|B|fffow#~Nd~b`P z!&kuR!|hFdYSe6Oad*K{*JY2C_zKkx`{KvGp3bvqbAhF%)eADoti*8j&_=P8`ZUCbUml;oo zWz9p?KOYl%c^0!_<9tt)fHZceZ~AQYUOvX*inO$~3Yw^|^C9}g`*pp`d z596CvE%jAqF$0nJMDXBybEIx8&NI9KgF|ZydVwZtr|l07=!rZI0>f33$4D46^W2$JoQ+i_i0p5B-=VaFf9@~5u^@n( zpiw2deIPe_CJL!jYp}IC&2c5DcEyLmA`a2|s&R?%R{Rg!fT{xyNa|ue;%%?r z3bhm|76g5Jz78$X<_X$x3!0r4PI%RPhSeuh#Yi2QZv~wX!wynDJwN&xzELbsBpPfz zzsuce!?q)tia}5thWAvHo-Rx77$Y9v1}pQaN<^IPbF{u`VWUX&Z#W=~h!l>;bcPtw z^o1-1Ce$ zN5~HsDKNTWT4vY~Hy1pff|C0J@!U^2POaq?1ts(Xfn}fz!>AAe{y<>#bRA6yH76Hb zo&C?L{~oRnaX=KsD97QK1^qqEkl}x`2>ss=T>zFAl;yy?ntv_V54=MZZU0~Hl#mIO z-+>7mogis@PDk=pv95=aQhWpqMlhZo(8FFC^e_r@4kX8HNO*FLfbr-_*j`Io)N_20 zjCyR76!&B+6MHLDWh;}KYSVeL*=0T#@+tjs5>QR+O~SL>$}|@q2*4yXr>NQC*wBbM z>Xw)B9!aQxC8@i{Lpc`MO3=_4#{iq^2-$Gy4d?C;T)Pq3*5 zX|*YSM?Qe#S-y)o=6nqVR~Z#z5C-ibxDVLNI`b*wKFtP)huS%YYVe~-XOJV(g z<5J&7RN)SYB?K?<{{U3|B}XiN**iqq4o`Sy7_YkKU=!9K2vWpylIf>E$ont4p~d9- zy!Z1mc}MX7V71t7@(*s+b&DzW*-EfrCx*JP`d2|P_29ev17}kR=VktsDah2@+Xe3B zfJkH10Vc``aMe2mR$6S9|EN0>8YqAB)t?o+|o@gzRfj5HK#uDE@Qf(`tNk3BZ((J8HKjD`KZFzrD z3!`Y2l9CJ*iw)>`qdTj)ZyeBv4w1cd_U}&679Y!JN|N+G*D&y)d;5)JD|4gM{k~t( zuoW2`kIVlZrYVGH6R4e>w~57ar1zr}>}vO>z)aZ8uO%SA>}6O_vq`=2Lv|j&_L9f#dMH5eCOw0Nd`ZY?m>94bEs_Rq`dWg z;nrdAQYl1==nSbK=rQu}y7z^ed25rVYMUO>*zzhIR}Hjro;do{QGf@m;3Ja0E^3jo z{!|q9D5iNGHst-R^h3F#mzpeBb_zO`{y?`b`9*S`mcD5K-O>(^Cd8m4w9tPpdM>MJtx6W=U@*z+H*e 
zIU%nj0DYx_K1_K$JE)QOK#jXZTk&o{|GJs490i$Dgqhl!?dE?RU&)wz(cjVDj{l2? zy~uz&xA?y~veaK1>|C<^>Y9+vKhSi*#Ww_;A>cFr>nA~A%#W3HDCS#X){ zlC9zE7D!mBJ*DvSJ-1QiRe#}I1S|a{AvH1K`>D4k%_~sqVWJJzM25r_U{j^&{Be|u z#IsVeuT#%_&!fETZ2`mEsGWz1+3@nD=Dz;(W4`O<{G{f@dEZ7RmXKYlm+|4exp|MC z4SlrVMtL{7EX1df-=nVt&f8qJ2x_cyR$d3|dm!zYk;}{qP74a9WN8D2xv`1oSKdJu zh!ZX_P8ske;!dZEo=HEz0OirD$+sA%We(-w!xniWsbF`b#EY|w(9Ff*d#ouPMMkOr zLI#-1yos5<4E#}<*gE&;9cromgr^G45=vnYR~oT%hcjyI5&_q?aRR=i2!zC<2Gp^dp0K z7rZ7L=+e?GjEY;ntMl1!;MYLbeW{o+y)ani_~8ATQCG+J%OEq;^VoR?s=Pks2&W|K zRz{1R=}MTklAiBBA1A4;q#bJ5CDj6-Zv6AHPx2EYDHF3VHx?l>#kuR@0YTU@c>Ceq zuawN7m;!H54HL|4h#H%iwY<&OuKiD$`esgR4aQJ}|ohUenlH3Jm^DgEwsNkWNyThxT?yTPtESk&Rp`u^!R1R+*Jh&JL4N6xzD3xat4>}JAx|G4Y8Vwh*iLw^QvrFO7WaKua6Mzzt)O8JGDiId%7Zd8P_P||2UJ~5jaK-KdNHj>(xBmv(3 z6~n03Nr9*B|GdqSq4Hv*m~sXV#m;MDZVYQu)@NHYY3&_9!hIPpNgkJ0>K=~S&i4pc zmJxktT9yiy3%(S?y z0Bl@k?+5whRtXPE9U8A1fXPzQ2Kvy+b7s?wNSNVTWB^AL-eT%A>F@1WB8-~#2#m~i zmBC`XZWVN8J@{1jIc7kAz7nPXNE&baP>~*ySH&HU?hqDV>0G>2>g|gt%0Jwl&|*ep zjwzD;WzdGX1|Gol$aum#Ta@n@Dh1XeC?(TxlK(?SxU^5=9Ab2M5Yr2>-mq--S8?Tq zw*b;abnp=d=4yX9LwP*lUGhdi;h7bnT}0YPbu#@$&>a{|Fu;af0A#7fxk}<~q`(48 z2I-j;BD;ubhaLbGgvL@}9~{2C%*h&jn0f$!K2;b%Y`6<1;~YxSV6e%ocZg}jU5eOp zGc>>ouK-txfwdI4*iwVF1k3X7T3!Xs-AmxxSm?xB%TKif5)Po)RDH)d2=V?Jzns?(L`|0u1y?v)cWV@c1|$mo#n7N_Rufk~^psGf*N7>NL4+gZ_bX0`StfU-BH`!}dS! zhAA!`^5-^yy8*5dT+e`L=3Q!X-g-Gvz6Iu5I^zr@w9L& z0WEnBHC2~0EH#pESpVSBfaZwO(~X_i9*_lf;VPwG4sy`D#x0X@LJ*rGxRjkb>J@~? zj3ba6b{}7}cRY~`7 zB?BNQ2XN&+y}0NsisJMjLoRIfKFGo+2>I%y3U7vUS+7msL*1h9m8tYYITh}_ONfUP z_VCA_cr!9&IbS;HDgz|brjiO)@s1<5Tpw>2Ap!-14S_&Z-tL(f>oC|-ftcng00>79 zXKcB!Q=A00smD$ZQ;%ykQJw49c42{oXKj2w<%tAMh3;55hAi z&p64Z>a_^KT3E*EnR<1AgGf<%naLHUAL{_GP}gK~ON0C|nFflHa9Nr)XcBi_UP^zV zUTD?p$rEil@wvY*P9d(>IQ4m&SUI{oxo}XL|7(nfeDypswS&w2F8|tW)KHgD1i}y!{ z#{3hY-`E7Tu)I2B-ucm*0jwI8aGCi_B1i<#5t4M(jjVI&iD@E9C`J-wi!JJOOrwAcnGPW@zL~IHk zY2v@f0^IUbQkeQ4kSYqXDFDxTE+W&HfHf+UA_yPWUs1*e>(^^lN2L7(X5Cde z%3N+H#V1ZOVWT?WozznxJ#*VoS}>HLjuq3yQicU^&G z#E)^1b&tt9PxQiqsR( zGKfy8i^R($KB%Swrk!k z;lAws#k2#WwE4$Hyj$KqdKD88o{3B}Y^qyo5I%G$#~(7CCcL z4?p9Al=JiGfXQLj3Q)iy+jBrD;c2B5;HPA16k3eE7jXVQ6C%Fk?FhDWN^mYIF!^lE z!0$27f}j;H@m8DV-Y{P5QXmJ|dqS>n1F)$fU{)9)b;yLr#Hh)Eml?$oi6sN2yBAWh zH5Ay#GX0CoRt_jj8DZ`H1_&JcWE{^7koKun?s1ePA%IVYX{@%nF1&2OOGQkkM{{{> z=7rFD0HrXFAvu`Xpj6wr{&wH8ASt$|3)Mug7{mPGljBj-oNw@Hj{e!9xv=es^!#11 zozOCKtrA_cdyfORLBIVURj_Bwod60~XQq#pyL#%@0G1 zXPU|m(TdK~KS5jkPx-NP-diaq)+9RQmI9Aqp~QN~$lEY3T1_vB)jn2ic>QibH??46 zJ)Ahj2M%%LCogpu=QBy-!ycj&fHtw8C^^iMgPPUD>YZ)k`aL5zu`8Q+fV9{|Aud`g zjolsg;uDA|;@!QB6MnNS3=l!Q0vRY;nTNC>qe>3~KEUWxicK^k0>YmOSz5^wL=B7Y zVr#%%0qQ}-x3D=-8?T@MjkijVjEA+dg8FLp+$ZRa^40H`=(z*rW{595(*xPf0ISdh zVvfx*KtsNo1BAxdl@)B5tIk>yUcn1IOT)MX$=R0hc4V#Y3Cb!W!?u9)R(1Nz?5FFY>}7Nkv><*v|%*;NkgUi|)P z&Y$D6<6@&^~=pJ3+@loR5(07_K1&>W8t2<4EA9td&=eKXU>l_D(CkbpCC}8lR-Iu#=Jy*f{TD01O~lRf>R0NsK@m~HV}{v8JxaqVk+}^g^3Ni zG?o4$O``Z;gfI2Z`@P?hVkp5bh^VTG(q89YJdTK~%9tCXC}V3b>;RNkoEzxfiS-UZFHXH!aF-OH`qb zXms|Hjirxh-vSOA#%1E{1pwz8a{l2Fe@^&w4cBwXW}L&qRsh))#0QzJ4;`29Hhxr@ zaqan}!L|)-sBC5A{9JVEUFw9Hejiz^%n)?a&M4!%5_@G<^JmIVPFIm31}#EU-N!G~ z!FC((U~0yFSQ_H$p)_aef?$xOT3CEu8X7Ni7G0W9R8*{FIS1$-eRw_a`}?%j(bDBt zGEU-OswZd?@q$-MD!%QOaR2bWSBCgZS9Z1A>}Ur+DV64ou4=b`t_Pt`uo@;;R@l^O zFc zX>Vz8UZ(cQqVkAkle4}Hc)3(Zz&LO;CuWIashN1ZZy9K_l>E*Q?9m8j_Ggs|Kl!4# z(K!id36!jWF3{PpXcFQF&zZq&R~SEp$0IFU@5xAf+@E`Jb&lUV@m3MI^su8>Be>m{ zm@udaSS`>)@P4+@VLp<0A(v#GAf~lXvg4U*vK+(etI_IRw3VGnPVcxRp94B6HRE0P z2HTCV0fj0X-vxS4mc2G^k2f}G4{?Y-hda&*%GBbN9j^4=3&z?Tla6>K4%LyR4zWz` zoU6L0y+mNp8@<7W29f=t)WKd(Vk@!8#afWxfgcl05D1|>_^qu^3UlRJ> 
z8h+T$_hA|r>c77rj2qusTSGUkC5{dK={R4@2o&RcC^f_0-NGM2`RPD^dOG;`2&n$_ za~cRS`>J=tAQON3IBLe&@ne|itRK;-zF-m52Ju0OFwE)%dS9|&MulSbJ*)To{qcB` zB^>r|-=-}bgS-_hei!orJQJz{0j4cRp9wwS1tH`Q6CT2;dU1we5;(GfZjkb8vCxJ@_%l;q>7%##f<}P5jUWS zNc@T-mE=2RYBkOOwezB8EeBrs00Qr-I3S+qy+b^=yg(Eg-?d%d{b8Z#KG(5>#(=Fc zD$Wv{oC=Cm*$fp;IT}vDHH!iDvl9e=MTukk|ExCYVCd*_B#o1Dv&gz1qj3nTr>)9c zOY(f<_1y)TheKi-iD4f4YgOWW-M%+sX|xKim8F-FW;@=+Yb7X&YIhbTKDxDZgH5Zw;PY< z<6#VX{YKD&=z%C`lv@2qPt-qW6iBV?$a_xbIr7~!q-iG?KfZ0mX3N*NK<q1hz>?z*ciut+_j2)J9dgUpdV2NxEGBt4V^hxa z`@kCEgEarWSKTiiNzEEcW-DbAoP7W}HX`nmjzUw5yA>TDX+VpmjbhXBT%&_YwLUjL zi?q=@K2N653r8J_@04(qb^%F?ipTQUnweXag5q)Tu>#bGF|XN;#c2P_{j1M2-;!fC z6YuqA`z@1a&>P-Vye&|Sj%1=X2-Gi)PMG`97Soxtln4z3wEou-@|n{9$Df#_XY0aN zg3s`$T)tB+eC=y9=&2{6Pi7__u44|^E&k&9Rmr1C6+>t|Gymh4sP$I!7X*d~&j_6R zFY0O?4egy*r1-m0PqM~UnehcMaiazMw1JrxG)(?*xMi6_?>?cjO-6a=8%vV?H7}j& z;7>1w#`ovmbY^m`vgdqx++%Zbys>V#)>S(~h3I&Oc2U^#V!5Mj@zo3<9|(T{;6a2< zd%?7-6jbUU>%4nhvL9+Mw*8d>8;*5832w^m-w8 z9xe_(=1fupoUnBMW8+(q+NPkE zy$oLss@;#2L^T1=4^heOn3k?eV0CnK#-gdBGl@y(y5Eie_m+66c3yi2um}ne+$R7Fg#uzoNyu6BUc5 z;WH*})&uJ=&JF?9?S}e~zE}4r5Ij23xHn0R_`Hr^5xJ>=(hctb*1*cgo3Ex6RQ4=w=sZZft2Z4^FQ6*hKYqL*qyJ|3-gk_wsGVQ@G>T{ ze+V9aXqYWP_wnjrcC2w-ISyLR65e=Bn)KluCfEY6W<5{ik*uZ%58?!+!Q0MOm(lZ+ z@Xrp3&oV%gMU{Ydr2BfQ6<8gVf>URC1se&y8L#OzeSPvNqD|IQf}rEAFjKymz#Er; zTp3^1lSpPneq^}@w_K&Te3s#$l-!^L^kneKf5K9^aY@gu!oLUYK($3nOxS+O{xZ-* zW&x`+dZhiL8+qzXzA%WB!08|X%A}Gd#VxVA%RtvvP^)bFJsKG=A=|x}1fohHE@bjx zu~-)ejj{aB*;L|GjGi=cX(4TIYg^Q?7?ceO>H`*?(Mx)=4Br(mG61zkq#k!0#nc9a zY2iD*HV)Vc?J9#9e+VaS^VvQv1>)!K2D{|TD;wN$4mo5R^|>O&g2O{9v(5lXa-)>R zDVF^KZEuV)%^{X_xDahZT7yb5$v{03Lnskzd}?=w!0?GgnOi&0a(V&*hw|iSye}=b zqw$u5z&gLm9rZ&dzet;(jKe$lH&El}bXBar1#)@pwx9Ugigc;yD&Jo4dU3V@YwK=L zL%UFOp|Vv#Kfst`E5^K?vDEtNlPglU!~a|2)5&i~GscQIIpPoaf&9V*xqH+Tl}nj- zr25&m`h&)!FJH_W>|!?Go#`1HB4w}K=dm=p6Cg=nlY@%4ZO?lfin|basXs-?U6!K# zERI`zU4HPj0RlErW8_=s5zSBED`hlHbVmK2XuK8@!QuCSX|=sLPhmw8GX9wt2q zn4JIcg1qVMWEea5F>Kv~z~REg?O5*TomHco>E z3OtESc5gm8LpC}2ZJDiQ|L#1}x&cKPU7>V#ts2x4Yfo90^F4vn{Y3sJlmX)1;@ z`R%H{?4qo%zq}|x5p>OS-qMI?1pdS`ru0^PTg|0ekgsp7BX9mu?LDgEXPSy`6Za?F z+yQ1n)#C<9L(3$Kl<=9-y=? 
z(q#8W6!vN+-xMG@9Aa7hD(f7QFRt@- z`RxBgE78@p8dY7mxW6VJ%kloiF_{Ets@9u4Lf((I58W*W6i)S|W}O$d+Kt34%u>Pu z-g?M7j7lCu>L7S{vTR`V^C8DclkafA2bOnI^YygkNn5Hm4XFCoxtLXJ7;v>0EoE3Y zsTNx`otH3%s`rkLPOmOBgtG!V+E+ADS_i%&Q*0uSsw*lwfrdew{Ka5v$D!09;OJ>X zf4l=YJKW$iC+P}kDqZ7djp{m4 zKWsD3I4u~g3TYDebqXfTXkxITJWb002;*$r|wz1@wrZ=hbS-thl z!^%VU!TcW27F*n^%@@!W*UF+g|0y($ePG9-Notmv@^pIa(ejw16t1nnMKz#Ox=0di zr;?uNP&qqL^LiaRT`fBJH0WtP2S49t%GoICufcrjxHAbi3rV-3ZaUrt#pO`}tM4?@ zH!@yabMtl9_NFa@{DtNw(W?BZO|WWCX4v&{Zx7;Wr&R7C%xY_-F+b>6rSM>gEviiQ zgDMACA|vh-jZbGHOfqy`^~3d}+38;OK`*qQnZ634s3`MrcF5TYTR0r4^Tp>U@vbTs zDxE4qfrvf!rgZ(5z0CWXlR377jGs-kVAs?EGC7+Har8R5E*WCH*amm7j<|(PL*6Mhp zREon-i;~e@%?Qja)V7e%nT^!b&&xlxw~~glM)EZWG_rEmMwdh6^xZ;BRu(jSBeKLw zg<2TL8fEewek+tL)IH;Glp$xhh^IEGTJ5SG)H;eD^?1q0xEgX_!Yx$nAa-Tt#A4nj zmGqJL4v{$loFqnDjGwfk5Mn+SUcm>&9g6t<(7$yCl1BtzWGL%IH(&}6=#VKBemh=8 zJe?9&))pS{Aj}cUJk5mR!-HFq4`Jdb-7`U(X_P7R#qg2C5lW?4@ z;3QGhdv9=kN+IS%a93ey3`SW$&Yt~_Ug`;ak#IWd1imUjx@Tb0ZSw*ufW2oxLKXP@ z`Fx-xl=czn(i_~FBAK#!x-N`8vGCuU-rJCCp=d9-cC`7MzGG@=71o~j3ut%324mUj z+fM%InlRp18+`B~fXDx~AsPVKKOz&7{%S)8Hd{pmCZ%QF5=g(l)wTb4WMpD*ywqnu zTKu<|{XZ^qf*+`BeD;r5Q-MUq+?&I+6DO))C9#J9NO8;(7mBP0j&~GGwKr~oD59h; zy;e`+)a;NNXh^_(9MMvu6+wB`;&Knw$GrKyB*(Lca8rIbz#Nc5Z+xeB`dVaIS?hAT z^~q$NJ@MQQ+sN8R%<9PVQKYc73wDw8sHR-c5<_ff6-=V$ntrRAmIK%t_}XinY6})pl(w@7 z1f4#8GlIy^njn2{ahTyC_q%N*mOP7ISSSM5ZfkT{=7*B)Lj6doT)Jt8*y%^iECI8g z&6V~L>C(eAPye@Dq-#@}EP@Z&>)e1h5}F?|6@?-{vUd+x*CS6kfX{|L$QWhDPDvjhZ>_#i1F zkdzy|-5dYbv;@00h|qoZIoW)?3q8%%L#11}2v4lf|L zxzi&8e;u#l796!jHDoEC707MShUEtd$)HJl+qY^;dOmLzMW!3~Sl0p?SLV1pF#8AG zXMniw`%%5khHizZQ%}Qbj(E)N{%lmn=5TK++v#>p&XVmIkY%o#C(6o3Y(mjZVxf^oV)`4GMn)vlXx=?Y-Trny;R>CsRM@*aZB^L2 zUK~f{BAVTB#=o9CtbuF4fqLmHx;zoN+>eA%R1aFcDr$Ij7gP4FJdlosziauKl|K~< z4Z~Ji54gDqOHnzl$Cx*LF;gJ~{aq)~Pq3YE5B{?v#$l?b3?rU=R+@#*ORD_My@#|T zkB?TZe4R@&MIqo7ocJtkss<5*lsoe+6Xv}^ztH?WupN-9gm?7q7s z{z0HXZaka%xBLrFG-W%Y>EjLbX?-1RC{QJW$eus*$7|PkbIby5M2mMr@5{zN12oV+9f0d5F0$?kX<6bf_l!!K> z{|oq!822%~N%Wh~R%N?pnwsJ2++2Lyl=^U>{0{|suE764%2jp5wD5_z3UWoy?ew1x zVc`HfmE|{CKc8*&RcC=opZEKeeqCmsZ0D_ROo=akPE7yQCXU9CY8I@Z7?nzs@=?#@ z9bM{9xfO3lnuJKjWB_@y^t)Y{Hv9d0b`%<6CpqWR2SKEN! zfqjNL{+;^Eq1RYg#gAa(JHke^itv+S} zP~|q2LS;TGEVdO+Dc)fYPM%wLD7g(VIWALt-g~M046L=Q;j^&7p01P5nRd|aKV(j1 zP+46!Y34M)6`&rh4q9vvo4~E7ML!;pMt%ab$LJHBdCVuOR zdgfa}Q^LicV@lpSG^YXzD4?XmxdM!v zNZOHV{?wrN*?e4Dh81o+Oi8JlhKcA!${RH0KF079);&X_b4^H`et1`Aij)X`iD1&g z$(Ca+W@X}FC`Wv>F>1^4Q$dF%>elBd*|Sk*R+wo=Jyy+`3%;B54snb}Av{bGYL()f z$&)lh&e27OU@cRl14o}A-V+tr#%cOyjul1hoO-6)zE%T9R;;8sIQV?YgIJ8-Mvp3=KMBpTEI!;=aBne z742&Z9G8Ghke^j=Y)2fw^ottC3cL9nR!51-=Jxy2z}Z4AyQBjXQHTvpN^%`HiL^v9 zPL=KnYAQ7yldIGHC>e&mlRm!Vbqi`h05N1xZQwN$DQn(Y;4Z2EwBjQ2Ws&ddbR}3{?b;w{OfNK^WIBj5bO9I4f_$u&CS!EENnPoIoRAIVcT>1c`s9 zI|dsiHw-OQwx1bR)RK(RR>RLBNe~AwLRvvroJq3f)W!xK$%O`(*#ogr*>2RlorKS? 
z+p2-xW%_Hs!8dd7Z9W2(5HExBOq}A34PD-?!j+=RUqm1Ckhjl^sfBSa-+Dy8^xOE#p9Zbb+Ny#3zp#4NP$ViHi z!!mRMo@qQllUb#tQ%S27FjJFpuZ8>WvSSB^!Bt3_R^F&*6 z-W1BE=a8x8q<{}?Gh*ukb`}GV#)_cCtVfC^%~r-hIWy(8aNuQ}s_aZ~cBCTL1bZ1P zi++SrPoaKt5gek3lH=#JqcIvfdBv2l0WGEk<0|TZ}jgD4~kV z_ltiK_NC^aeeO`DVsFl7NK*G-1u?W;uFx=Y~#?_`Z*~xR}zKQuxx*eKuFanCZ)HWKKM5^9v*mq%DOzi?BFO**RG#nbD((Ks^*&wr%h+3A~9$!>221 zjsiy2!io43wExERb7@TC3Yfh(5hdw~X?l;8H$ax^RY#-~N@WOz2gM$qM#u(#-HkWU zb`Q19Ib0p>i$^!N$HR?R))fP{8m8h`@u`5oAO+)__b}PGZkcWZoP7rY=8?=8CWXw; zg)z#qnUeX?0>XT7uldbLF=WYh{A%iINdbWd{5@P1KEZ{da0KVslSMM!;`rKgBal(o zm{N84lON(wO2JqB8Y^1`#Gf*X&{lbTj*$;TI%B;z+8A`533}r|Ab`w#1G8Zs)1!bZ z45p#6X~idQ7WfT(SCW8Z;mechukiCou}15i(A>R6fyqW`qy@UwtS&KF+s?TK zhKmmkg(lBUNA1y%mR`xOHoRV%WEgS!hWW|VGOU#%1wTICYFrnmXrw^Rw1Arnk4vkZ zTp6ERshnJwLbon}d@V}%zIfsUCOi9;OplLi+?PBFgUz}H>bPGvmA zA~+oJOcq9a0s>tVY;a5#_F9`Hj+qMt!`VK~f1jd0)%F@cf1l6VWb@5oYz-woaDaU1=k2abfMa$p&=gIa_iJBt|Nx~>#C#lpc zrOP`soT)Hi*0ebI>6%rlzxq~Vc2_B1hz=*Whd(_(Cp>|Y0JVf`i-!~6Oh#row2UMSLk zg}D)}NH?V=_izFFqq+6x^~shPXBo#LZ%v)r3RD#|;4)mRn#~sVhRw>EGNYLAW{V|R z-PdLXSOtsGpXLw+uAgP15d+WbP`JGA4hEpOXVr%ylIyt#76Em%{95nhVC9~a3{WqK zUEg0>5<>N3@Xa4Uw+A`eaK`_|u5>ouc<70jTpu6T$%;|O&Y?`3vIo_jsnE2eutGU6 zt{$tJvKdZj~yy#6?x>#dZ;re<%uD#Ar({gb}OWHzXtTO4OkD0;) z@o|=Z>5%5R2cvUSN)y4fR*CzDLyWhh+dR?Hu`kXS^(W486NAxi@Aw(v;E>lWo2wbRQgeZo??)^|;lag>qq}Vo03Ne49F(o}dQ*Lz)uF}CFeyhp zzOxe-gcm{+$G{XK%f?W~ar{TQX<3hq|F%CDcsTzt)sNpsv;VhDIBeTDs8xqFd6Nd- zv}{SHX*r~5I*$2?`S%nzum$xLk=yrcDwJdQ;`Q85`W8z}uBbGUMP)v?sI)K?b~m<2 zIiz@8_KX4<6{+@@M-J5WwlWS|-t$OYsT~XAwWWW)pMFV<^Vmm0LDQ0=KI^s^9xT*? zbln%Huo~MgRumR`g9m?l?#Z@@>y)(#34Ffh0^B=5JC_^_ zxj&c7gxCD^)%imyLU@w}DZcbi*ftW^{i~Q4Ra2|{d0dck(y>;QJmE6x%>A)`eWk1b z4G{05k~RB4oT}xzIlBPRDW6uUTh{;lCeI}fl{aR1qi@^Hw_Q~_e$y+N%Z0+m%eC;n z3=_Ax=7EfJvq@FH2J0E=?Qi8XXAPn-JMW9p!CbYmo(`1SfZkVu@c8mA4~wqDjWwXu zwAU?o)_8T%nQ^hsTg>@%<%o_ET)R3L~0pz6< zx(+J!A;0iz3cgNbeKA!KhZQ)6iVhYuCwW!-@p5L>-dbz>zO(lBz|jJH7daoZNWW$2 zOB{-;1MAy|pC4ZY8J&0iIaX*5bv|6B?AamKYb~9KFu9fHLh3M|)^N&3=pJl853Ykd z3w*ZMd#lQ;uK7u=^%jXplLdl|k9ovBkH9a?WV2XLu>+w-IrgI}$@qdy5d(hydaIQ? 
zYvbnJ&X1}Envqh6U-ttjTMil4&jCa^Mx(XNE28DeW-(38(bKgLgKDdQIM^^b0k^UI zi>8|XipV?{Hv8TLik_V$-l2>GWu&7?U=>67>?ao&S5+ zdRXi9&7q?k?RL59^&w(Ay-zN}d>S5z{a0IyqN{|pLJit@kD2F>4{AU}%8H{xDe86I zMu*duJ@E6-k4HjHZWWtoR8o^drpK%s13pr-q%b|&Nx&!rO~c20_jua!qTL_d_4FMZ zu8}9GCN2-vaNnxBuva*LZKqSdqgj^7RdJ*Nfi5+I(|7$j7Y#^}9^_8O?wY=n>Ox%8 zARysITFz-n|NM__d*}yu$h7}sUDNR&4c6@Zw{Bl83H{gqstH(A(bnL)h<6cauV;9G4H1qr6HgWWK&?(-x4+6C#uOs*%LA(J|^oTNJ=(lE>8n zUZT{1EW+7I?-o>3*v#5Yr($8-cj8h)(k5b6bT&S4RTxfWJv_EOD|y_qHJO6(ezr2q z8j(E!KVv)K2#Zt9?rx7jr?^6A!v+CMoyt>m6^I_|OSM&W73MmlTC@W9qtv4QTMsQ+ zbdA&2LuIwz8ORodsQ-G8s$O)`a-t0?t{Y;g4(M>HSDO?lVaQ!)K68cv$$LOCm)7Z3 zADs4bEF2!aq7d}q^4`$CdW9nJ{GoSzvG3+~nPhqebwA3ln0Y@b0W*k;T zb~>E2=>(e^AlKG{NaT8V9|lQX^LodRve@fkwz}sB)q5_LyuqJgawOY)%L8 zqPf^nxP83*0l;e=29LB!qPaT$tUDRUXi{rP2Od&DPoqlL>KZ`{it@(n zF09$4UoEWVM*OPAN~j8fCu{5_a%mf2^a8bj#;W8nwr@_;N(EZxMqKKCK@H%g;vC36 zP$Uz8?I%d9D*9mX0QdhToSls+n;JW8r|F|v1gO8~hjIJnRH%2*I1YP3`Kig8x+8ny z%p%}LP-T3<9}=~YctyfQuXvxQEeukAu_cc&!M&1u#E@!7oJE5A1B2R@WJ@5FYC#0RfzV18H|4g=9`v8f4^CP_Ysdl?17WY0}RzRpNc#IC8S9^Opy01O4aoG?~Dbc|GhJrVLnb<%TBq@pEhz4A|` zxJtgmmu}JNKN*9PzbcRXG_qt-epaTWshX4I70(s(#4PjHEo0>orQ}MM6?dc}$lIg3 z0-epAkF@)ZppFkS1!YTj7DzVo3=T>Ry!X-t8UI;`ArnRk8e1_A5GoivdauNuaoJn2`_ORfb)Rj#`&s{#7?!f-u9|W9%itFS15A+G7G2*(G>$(ROVsBu7Sgi zF^#l}`)?Of&EXU{{=7}uP`>S(Rij3sesROu<~q<6`KNmpN;@u7aipdkmCePUFyV&4qIqr1rL0h)#|E-a^tbPY(JhR>?|DVoQ4Dnpg{ z2FOnxXFSD=6=hb6d+Ct#ehR2zRn0i0tuPSoV`^q^UJ>N*{9z1fF0F1&4(J+mHuO6U zxVK+~otTcW_T-qdv8X*o+7uq9c*Wx?65CECH>{Pc_oku8ncZA6_R|!Q!%)p@${xtS zh;d_QLVMjOH74a=k#;}W9CeS({=#g5PB^M=v@kNE7`NQ@=tuhjmq5a5&`P_UL%b6g zrWCH^wyTwCtQ{3k*XAErmPBPC zcr(&0IjvVnBEiy4VDQ-G9@ViDiwVF`{8Or9-CrJvzcl`SkarpV`ohxYLfkV)#^OP+ ze=Ik`exL>sOGmIe8-F>wsN!vCjNW#7v!^pOMlp9Bn)YQNg8&_a4kPce$ns*Ywe*x8 z-l~0xOi$_l@(A}u6DEtAw_ogYy;NiLz}QE_WkH40M!2vcj}8m-8spUj8sk-*jzDs+M%Le^@w4 zyvg4{`mn)6JT;@$unPCbDDpc10|^0rORV!8)iEOau4+*$ip1C3vrc1{SC>sEahyxE zKs8fpWx65t55R>Ktqd+c%3612_LkS&(mA2_%a@qle^9q`i0!PJQw(Y%ie|?Jkm8>4bnf}xL+5qAu;aM= z%0d`EzB1^`LKC={?k?vl_J-qo$BUlkt5`^j@R(vmSBal;%h;!eQ1X_V<-TCWPZi`5 zhLsB8vsf|nSXZY;UnNNnPnKJD1MtjqaE*b~d|TeHjn76!hutA+AtV_SEU-I%Vsflo z@TO?3hTy!Snkhd>Dp}z89;76Z{o}ku(ec zlu8@P{q4E!d!uTiG;G{o*!>7+5LpeddvEYR8dT1Dk5vo|r@tx>83yTO^IH8s`yknY z+B^Ve$((F+d9H$#N@%=1SmCL)a(ZM%tXSs5-qJbWCSL8l3Kw6#-7G7?iumPuXrsNc z(#MrM#Bf8SJoe6m7)r;^n4eGfm#hHnP-_E_1U3!-3~|;!(&Bvtw~1|!XsKnGf7RHf z_qtSZLwA=m!aC;=OnU*gzkcTZD#k%ISHWye{AcmJ3ysKg6MuY=A5NAA?$bzwCX~XJ*}oC;R4_Fby2WP-!ciyx!-Ru z2&IIqAB9Ne4}EqrxPRxfv)xNZv3p0YYpcVuPeWP!Zv+#R#8{5!kd$~UNy$R0ryO#b}h zpCo$&^n>q4++g@V?QBw~u+V$Ih4^W$HTAU3P`#^5P?lDfea5l6Xkbd~+Q`kVTAN8R z6leu=1*j0Fkpm9TX7uVX_e8Zd%G+76tqr8qxBXRChNtn%WB5pt%WMxrLqk1VJMTG=CMH@)DyZSyP$X%= ziZP}63RP=X<0nj^F-ts{biBhv*|O5QSmmq8=5 z>4(GW`k!}>}8q&avyG=3p# zx}2N!40}#lAgfWS)(>%17|0N0fTm{-@|`i{zA9o!)_gzkSOCK2ucB6a8k@Ljjm>5L zIoyv+Z0ZCM;x@-ioNxv*`He9{k(Xlp*3|sP0>gK+xaa^4uCB^Prv)7LFby$CCajxl zWfik48&46j9c}WrGfk67dp9v$=XvJlt{Kqt`DTPdV+8vHAFm7nw%fPWQmBS5tV^-Q zC7rC^e*+YP=w69(@^JA8Wa?kXx#DDh1%%NK;f~_vWUrzvGRksnAo~HB7?(fr zHazJ3EA0Pr4go8V5FdiW^;TOMCL(}=y0>eJqW5RnI2llLV^d^Dk@!a#zv{?Xk`z5gRf!# zRixxhRb{zvj^B*W|)0z zb+ob0cEauKSA{tdypumQ-4?(79-SfKLKzNO>|>9ZkkAQQCACu!V!212f%@jkpfieY zjB>w58>KJ6E)Kswws2NAUU|6${wgxKf3-2w4)(PXA%8OZfjdb=`CT9$Eu~HP*<4FY z_y-(fqOEr{V)rEP$h>*^7KqAn2ns1ea1>Yy#dEbP3$=UL0UGcczB^B~Oyx~$BJosS zpg~EGl3DOY0EUeFTaEl&nE!d7>aJR|ZRk zr3HX9`C$n?d$OG4gcR<1e`&IccD@yfmGw^%zd(hJn-IgiRpi17A`*Q9#?xk`16Xn9{LG^7 z?%9{d6f<6Wi8x;YUvmuR$T`0;#X))ty4dEoPj7XUnWt!S0my=Ia3X(x@&oX7vZk*& zUb75O14szd^7rU7973`E2s^d4>5Ig1@BOTuDzN)cPLiPWWzUS-@Rtyq7#kRcWAcKF zTDjUV)8Z~r>5k@6M 
z<;q`<-NnI+9~zAI5#p1n2QcbT1CdhPxXc-jR(6~3zs0~9!{*A3g@$6b17YAFiD=3| zdqaP$=@$*;%Ntl%dP4C#rG&fzK#Ul?@ZjWgKnaC1$j44OJlvcOR;1}Kn`=AStG#G5 z>E`()HeBe8M2vsg>x~7wYYY~S_Sfyqx531oSC8gt`qJRQCHRw&hAL_!K&%T)d9h!I zwXYSuP@3i#qqF$LmqM&6alt-enH(7S z_?aOgOiavv%<|JA9Ix|Y)jr>kRd#;iCeYya*?pi=z}3lZz*UVstvh=!S=pIv@NLWq z<;>2Pw&~?{&)&J|A8aEAS5BkSXPY`-z7|{^m6lrR4U6RWK2Wp!J^s~dYtm8F^K`J% zRwuWpH(x5EciD21gx9d%z z&gGaFqKK8$+*JJthGg8hp^Kg*Wxr%ZJgm zAVN%ZIQf$m-(b;()z699C7mackoA26&8Z1{t^r$D_EYZCUATyD=C|e8^Q;Z1L45K9-?7vK&ay2+q}5B=$rbU@wLLVDQw=1P~}d0+#_SeYtEZq8l5m{ zRt`6UxEb__i5b`Bf�p$8)wX=-xY*k5AO*?W}z;`6>(H`~2rQe_qvd!1wyU(Dy2Xlu&??^r< zkyTqo&53M0G;y+A+;q2E+%(4bH%Kh{tEyd&XsX#(Mm20VKkuNgt+cGJb=isKSC14k z6@^^=iDEcjgfPAnp&>rI#cQy;X#Tlpq&FO#L!FaNX{)AYRUmHKi~uM0w~9-|83uX> z1wM(!sgPV*N|M-h;fHEa)DKwccT4Dpr*B!y!drb}TmSkW$)eAns(%fRmER{csK7w zICU_kXkH=3Z84x!hp0k_I*uBE%cl}oJZJkVhFLj%;+jM@NMk4 z+e<~2T!=+iBB8ocv8T@Hao7>i&;ggLF$+CtG`DLyR2+cKGW|U??gSA~*mv_dAHOuHmNXdxU&tp@dvQSUMUj5BRkV~xlV1+ly(`w?jL&Ol)1_w8 zLgs#W?7jH~#*Bs7ZS0Zr;4Aw^v1->n7NFuzztCPh7wo+Y5p3Lzw{)JWR#{m3F5EC! znE)7ESHpQ3Ud&!GstX-<%BiuA%hEj#Thpam?%GuKM^%Gd2ns36m-1?&8vOioN}T*i zq_(L_oHHvwy)Ax-C5&DXtOdNLwD4e3pd#8TluR-r@5ygHAS=39>g7Zm5qWpI{Q0zKqxeBP z-Dp_i`LHW(>D8{ure~k~lPsCsQQfOWNUWImu^EWXe?1Wrwxlkuoz(QA18(4V?rd^bSMU4%1H$utW(c82HoVER5cKWdO};tOA-R_H*_tZN&R8Axg)oWBFoiTP78(VkC&~l!$G~+vA(urreR3 zVVF15FuDqq2D%VRBrw*ymUn^rm|VJSXas)_I(8~43x<%j@MYm!w%=$gVy{$I9r!l$ zUxZ4&C*OTgA`ii)<&x`H(bn(3L&WcsDaVP9egAq^BSR1gc~RRcZro)2y5N|R5;~!o z@l|`ZYhgY$fYc&w_%xgH>o=C9A5~!B|3HQp9WH zHl(n%oHlF7SxjA4d_=DJtT*$qx z=cL9EDf89*V0G5@t%;RHLh}Ig&u=M~=qFJ(#P84V@FdvHld3LcXCj90lH^(PFQGs#?w?N5Sdwl*^?7mQY z^j<*Ed3pOwIlM~S`TY!+M2J#k%4!VPamJiSnNKaGPpy{;>zz#Lzk&>^EzLp22fgKRvZFwvcl`!&~x;ND~ zFi6RzX;%)F0xQTpoEn66kP%1wd<)*W&57orJRTDEp5js8!=4)5WVas)ZpX!;%EKSX z$f?1papK8z3d(^%CghQt+ChRKxDZZ$&K~OYGlLp>U-Vh662l2O#1|y~t<%RGJkpHv2x<0f8>38#V6bl_|j>c8^uTufQ@h zUP9_Mb$pS0Lx?;(+sk~_PS;hJ?b(EbHSScndLaZtr1Z3{UsgJ=ZskRcgx6f)J!$#C zPq_*#cVdO&^qfpe$~(jP@AJx~NIM;LGE=*9gwPa#Q$*&m>WFDILzKk# z^Y!_0M=aP|vmxGpWl&G#gz#hUmtgi+XSpS;%zb+3v$ICZ0ZALzLu;H|#%}McIHEQ85b4)?U z9Yx--`Jw2CH8#`6T4?Y@>tqQ8`}*}uHgOAF8n$C(z{PQD&bM@OlehU4F5lo0sb&Qw zx}i(2!%u;`^yS%ud^_QU0Y&B-F`qAqN?|FcERlt?;aRsGtVn#dva^Cg7NTE|naHFevRNdUbU9&&{U3ZcVE96>O>UN>;hVOg z2TI=bpL1NrHEb+U?dt1?`e-Af&m;^Tti=$U)CStm5hPhJ$Y4;WIy8&55wwWs!U^Yk z{av>a`tqzup+Fj`_z|YNu1t`{hxk6XSMLx|-Z3rAB4oC=yxn{!2tNla-}s-ogzXqj zfqCjn@Nl)&<447YLxrPj5d{rD_-3}V7}cliPoh0_43Bt>YK}9mT<1NHifxFikH-3X z=bs2Ud7j~HNu6)cc0MY{Z(`1+%_F>aWQNJP+1anU$YbSILX}^jX7B&ZGuO`Q|F$YH z%PgK2v}T{r?4^%_ZCSS{)q5o!1qTXl4o};8)3LAa92D2K4;pk8Hk|Bzm_Kx${E;=iUU6UjV2BeJ)RgaeAhdnMiD!od4;T0` z(HP%86h*KGvXyN=lZ0{I%z)b&WVyNUcv+ijALO(SJyHQ<0f7o2#Z6y_Ynv9-rrtr_ z0Ph+A7V;X+FWZ~t5H*%r2G$4;97=H{*tm24;(SQKywG$rttJGV{t1S%j z`K_n_NMLMP;HyWI&HWMnVaal&eZgqFn{rMCM^Rrmz@+am$%^>yP-{obC*A~tsLA;- zOxmVQvmEa!ntZpR!ygVnq(C_ukx6N4lE&~UT%rrCTuNrze(kn0IkL|`S?jfV^0NG4 ze2GQ{!*`|S$B6ULa3G(FN$qfOUdJV#)g6SVgz^sOi$#_~F+=40pwbVIF!=Nsv0+#y zP}>QrNj=Y3(9mD6sn02a?;Xk-pIgp-pk+TP?TNu5NqnSRr5$cf^wWt&2u;OPOc=-b zo4F4e-N^|68c^V2s7&doKEvY)lq3aEGfRgM--vG?&zPG!ZYgm4IOe!%oImQ9m1j0T zNTLuO?qerj3^*6VO8KQ7vC83(YYe|dGfavVC;ur$87yqkBHBk3u*YA*SnPfZNL`)F zsiQTE*t)g839%>|sIn(h^f;SvmQTxQsGY1}|2-7Lrpd*Llp>pxEFL$os1}J$Ci;Qe zvD2v5!8gV3tGMsp)h^!xHLoF=9o^a1<&@39$*4EIGs?paF^tnQv5j6I#Ld`m2w4$h zASaKEBO@<~CyVu73RPwXhQJL?W&$P+Fbk*-jwc(8kj-j^-{u2kQIya0zotOtBd;0% zeyM%^UTJq7DQ#kN9MWyG%ihFtIqCBbZ%HF#68a$GnVk~njU4Zs!$3^3<>S7qbLmKk zx}mLxY;Yh!hMx}WZm6at4Wz=3G=8_z`J2yU`@Ij1soP6@nk=<(=?tSvPm!@`90z#M zW&ML^Aua^c@0Mp}+|rpi)fAM33zipN7=cZU)XvEnA5Acl#}@FhvEPlmcGB4qU>MeN 
zu(2ETTrcmeX}|wYn4!Cb*Yo)QV_YUC2GhU&QxEyTHKd=T`Y7{>e2mH4}(i;V%^O^O}7_SV?=#SD)l>%z&M@6yMjK)rBo^8HBo#hN>CnAl*0& zRuf4S*|Y{%fhO;&8Jfxz-=xN%1FO`Mxt3UU8?#IjbV=xn%Q&bU4jzr!NO@@9GCct(DXlUHthh%wavTAHs!`|25xuNUrX4Ueio$;+xeEm!vLT`hTTQ z25LWW_yU8?Cq27&0H2(QbmyXFjS7CUu&D>0bktf$Y@FcPOkTt+7!FzIMt2-a?v|!^ zT^!wjt$Kw58Qvp;jtEB5g4$vOQU^_J=g^MB^D^x_G_V(0ZWCVHaBG`BZLv)+R~j`P zP05Ota+0)9y%-ibNMgZnX8uPhRZC;&$}-l}ib0ta@2yKfVK0 zN9~NByTh@>M;8dAHdYKip}u`!ijUa;gkSVy-i2W@gVf<4y!D6D6*a0K8f;VQZt(1w z_2g;UO|%vX*XXIJ)jd*C6V|wO-8X^{a1Y1@U(HzoSMslm6SkmEQn~u;Nf4^71Rq&e zgM`oTkfd7^g(_;m&oPj!asGZQu&BJj1;eNPa2s7p`5QND;03E}%TqV`KRcEl-IcUL zHPi$=NJXilccHh4{fC)zI}U;S@y~kVMpNWs!>8HZeNSK4)Xk<;%e1|c_V4-KJU;s* zGFVu~B@PU>wGO?M3rgIS1|Jt69s_W-)ufOcdh;u?@cI!61I7H@=))Zf(LRAg%(HGD zETp1(G2-q5l9F{PNhsZ%CnJ1$Bf8v$%J1|>y5XG;-HFn=m3F9nSl*2!%wy*lR)O?- z{cRzXNx+it;XpvPe z)9F}`ifF5ux+?<&btXYy{1b!-%Gi}^n7ZL2d=kxsHM?EHjCf7mrk`ZB4xv5c=NGqfrPtBAaqh_zB;1Lo&&nWWZ<@At-XWB$y;l&3xis;o{+x!h6v#)9YJ zBI9efI)t{GVB}Y*rD=}`b74TdKZN1-wM?KX9QlPP<1I)GiB6(XM9O`@eb{7)7C5-T z@l)CTM2;9LOKFU5*TRdY;E9R9mFMkqv>@cuY`N{l5qvag+2V|jE&h>QzHJXJaJj$X zo9suD)b(*;pVcuz7%ltkg*|2m289NW#Z%HZ7!)@^vh(;hwf!wX@~xU%ZpDUz+^^E@ z;NOa3qKG=-5>1g10<+L}=wWK}gJv`X82GsRNj@xkzY_&hmmATRRq%aHc8tOT-*7Mw zu$m7n-=$!3&c65QKFX;L&SbY+eD4us#O0T&!-+s62>GTE?;E`e6c+PgDn9=TjNnq{ zfpoA}J`(a7n$M1Tr&Uv%W5wGG(J9Hv`}+RZlFusm-(e-ljr?C-*Vsl^)7PH5rVN?4 z(J7o%dbAsd1Q7WSP_46LH0gITvy{g;41O82yvO&_{jX*fj>#ATO|(dlY0`rl=bVo3 zF@!MAM;x=JhkDPq9CI({=3=jq9c!ZEJhyDLK*`9wT(|J@pdZBzmu{kyh-U zaggiP+TSk$-=cboJ#(+#KdH`knR<3CD!q=f%`v~#4~o^aAWF?jtks-(qT!D`S&qg*Zp~W zfPBzRYxnAo<@xXy9&}HYVFnRvpbJ%MH*?Q>zLh+d5u%!KeOnSqAm22h=od)F0or7RV#VtlMrLiHkPa!naGt?;wW+5%N5yja+~f zR6fryj#(Qgh~G}=VbXnhzDm&unhZ^yy93NUu~jm$gLabHdyblYM3ue|b>93b zSp(FB{!7;fCiHL1Fa*2x#490C{s}w)v0dF8V?#d5CkWVBTvf<;MjXQ97ZggZR}D5}D@n1^b_EgpAxG+^u*Np0<~rBEG_T3F zczQZC#ZGqDORezoj-ye8zN4d7yd)_)S6vAFBiU|3|9rb_HGP9kf;2`M(qPy4-c);! 
z-Ojmr!Cnr-9AA-fAf6d=e?;7_6uQy&c1Y{lXol08+`Jc^GID%y)U^7cpGVwWJpct1 zS0i41Vso|q*ARfg+;P*o`_{DFT?;?6>uo*p*r+b+cuH3wwf-lo<##Z#G|k%>$*p=K zZTI}{f?A-$vYtyL6R+{t++x_)-dJq~(KQ40smP7^0;z_lUsOMoZFK@{4X5+-ju*qz z-=pV}U7}Vnsvl|_t5w92-biM3UN4`6D3xP0jWSq85vrvoKC)!}m;Kg`FPsB`m>fpu z8G4%^3HQcIEHtybD?x@nAj;FGDd=WYZ%Xa@vP`4X&OGEm3;9#?+BPO z;$Ym*D4D5*)YrAwip_^?Vpx6C+l0(m+biE-C7| zyBbQKvDW{b7&Ri-CTb5V>Z<@;Xyoc}MMpeTZnv{LoLVbDg3&IW%%XjMJu}=;GX|m2 zp41d3`uYA=y}kNqNqi~bbd)YgW9Z z3GU)TH1G3*g&9;=Gk~zv@kxR19ZoXI`H=U=4jqAX4qrHRnu~$7quBx`BQ98;0m4RY z+DY&Gc9FrerVhaIGH}zBp{K&-GopGyWwp`PofO}b^e~DDarX<)daB2i)3X;0h+c7s z53P8GdHcHrS^s7X3MkOLV6W(X5WQ%ib)!6Ks}y42`%&{^aiKgW+d0 zNqc?7DSPNvr~3&DQFg9>K9(tA!%#>p;1!Iz+;%n7e}I|qzIBsZfnAfx@upope~~Bp zP_Q(`G?TzHIeoz01}W_y?}6q}p|ab7mBCj@AH&Z=-R2&P(Pc^}muD@_)X+4~x4$U1 zcYhd3A2*?e`KQ#ccl?s@L9N(4!yCC~8%R-}%{fUF<)KjRS4r}3jL}PWoli#xm z;BW|tTfx6?|NMwfK|;kxWS{OS0$uq3v&@&y;$Z6oELH*GJ2;~Yx+X2$W2phr{_N&yoFUBB@&0!VsxrG<`5crzf4Rem2_qeoqaN49kNucZ`dt({zIV$S69oek`Lx ziA6QE(=+8dL8d(aHwgI#zTkf(82;;nsEkz1@ZXgE{|M|*8T;$BL_F9iNUi&U(FP}z zA7kFfw@sRJ+Cp%`1N9!QpJ-pVn_;;_%y;EDOAW}W0!dzS@NnT#H(`db$MHpD=QWvg z(!Rsw3W{~11Ag0Ohh{#GSL7N%31};`M)JL8c@^w!Z*81lGwT5yw`i$`*a*UfF|jUW z9?llzTk&H$ADD80xFj)kQ_?m(HOOQR`Me_tzug@TI}k0i@&oSBFLM zy>HVgT}lcPOS^P;ODwUVNC+YdNGu_Zgrsx{h_Hgx(gM;YOQT4M0wO6PARQ9EXYup> zyx-sNkN0}7OXr%MIdf*1bLKhcJkNdKkE)lkad6(}V%eEwKRjA55=eHsc~3~m-DrS*%d4(sKGfM-%(?4Cf-|F8Rb0gJs#awvc{!9{bK;QLSd>m`5 zv_fSHpyGeiPf;;kwDX20y&0voMQfbJE+Ai2*{BkUp&S@uq>#zomb$=+?s9rU96vZL z=>lTDNJsOLnCWQTXFP9}#h-&XOypH&hKB{VRsm4YKr0{i24V5IQ$F)~p@m?)^e8;Y z2{UJB>5T3Igy;k$?#)@rGObN^^kim8nz>rM`C<_#MhG9dD%gn^c~6~RT%X*RBN?8h z63fh>12Xw4b!yL;vEK&Hbyy#6YSbGLH^xh4evv!2jDzuiPHBzkFn6r=6X3a#7WH&I z#>07xM{Y$ZMywEiFW7@CyrYt{%9bNZ!TU8R=Sla74r&BvP%xVj_qlr= z1oRmQTOEG`_A&uv02W76fYQf7y0K&a>Jln#dFF&H9NIdKZ;fLMJ<)%=iMHRu>~Yvj z?QAyJ5F#IP&_x3JM|9#+@aUf<=RCO|{In?(?Yd*-*W~k71xU!3moe`BH4tDSaRH1( zbSibqkeDXGAK+iEpx2`Thy>&0tnJ_fkkP+R&Tog3VA%n>0_Yk#;`B7eEQ8#@Eds2N z&)?|mKV4(ukl}xY<+uv*P4m%9^acMF)MW;oyywvl=ZrMQ^3`A|4}9#X6|lg^SU3n3 zjf9s*U477bKA;R7?<`IRTnPx| zZsiF}E_k;sjT70#;0U6AK!R__$UXFrq0M-0URqO_JA)2zKu;`gg)_g9`LPoC``H^U z*f|~OJ|wwwpGkb<$&a(~;#980-(m}wQCNsd-<<9WyCR<6P??uzZlT1x)ObH;nAc?i zzMhXoqWSeL*Emhi!y|Z6FX#-$z2gXD=l0)_V84@PEXT3J4(+4Hi8U(1GWGGtvMw=k z>kk*5J#S!f=??_n2O%9NDs6)U7dl(4g1NzAkd_`Uqrr)}ySN`u?l?~?`9b4) zhw9w^77G$FbS?Z|oJKKXp+1J3J9(k-OYz(-);{7BC=Xp|36Z@%cALw1TM0cN z#{x&0aohuN@}F?ve+^X_D9C?}afSH6?r6&y=pN0IMiFO&z)DKuWncy4WBWIbg7J+0 zjqPFlmzVua-viCLf(c0HWNby#QT~tEFEtbhtfxA;^2xO~OVF4cuikOo*9Rvbd1&dtL_j$tiow)zvs93D!AE6W$LaU>K(fZTbE^a@hL$ z&uekf!2E=2uoFn&tQ<_(= zAHTZlK=gO`nUe<3{h9K4QyBGapH~O9=2EMLIT0#Xf}S6}cL~3D&^WPRF8;T*P~9ML zAY<5-%Mh|%A^5xzfQLSH@qdF1#AW|RRQ}VHCI$kkx)os1IC759V0;4PIHOnoPO6Ij zbz7%8KK>W5V$%B-$OGQiqr}93|I<%UDWCA5{k~wjg_-~SMU?#8tpJSgH?pL)i+ozv zey9m!LO^Ve)&FPtj)CI2+6y6nAcrLRWFsRI)x6VFXR(;;{qrya(2*3hmkr+{Jdt+d zyKI_-Iuh~ZsRH3D{XHWg$?)9KMUIS!PG*({-9<`-C(QfPNXWEw{ zumz7fqa6h*-}=E|w4eOIiywJnW>MX#_y=H`&Nf|p5f`W5+duGNQ=Vq4w>DX`o;*F< zIoG(@k&ydUoPWr4V#cPgHoABvHq2)!cTh}Q8YFIDlHV21UHrV$*N&GCUraH0cbE-g zBq#hNP;fFyZdcfQ#DpN>taL?gfo8}CAnP+2*tAB()UFU z)3}^QP=7*!Z&NZi<`NQLN6b@yHDf(F!dz)Bm>og6K#b*>y|1NTQz@pusOcU&>d9pg z+7Il%F%&9~4IJezi2Ik3l|@8SMARBF7Cw3(FBp>)nX8ZoOH4=C01I>m@JQ%XBro^h zLzuWK4M;!Q6wZ%11quxh0oOjJ(Ed<1aR1{dVS~!Aa~D4R0h*i!3-qAgQP6{kK|T!884pZ$v#pFec{t2zHdJ6xo+w&;e+DoR6L9zeCo9)At zna`pPNehJ!4|-}S9d*18b9ruITNMp zN&B9jiOq@Hm8&}@6vOKWA{9QSmF7~z>uJqVUvegX_K1K@L6j9~;REBHGAk;k$$gaA z3%SGW6%9Z8BGWNqE8G*B#VytbQ8Sx1z=wGcj1$f`L;^)|zgVCM?j&@8}GlU0gS> zuGCM+De>)YBd~1E(~Jv=UV31>pahEE2w38@s;=A$H{0;XdE8?AeVXzSC`FA$SNX+k 
zn(Z|EkTBY819xC#z{4MV8SVhe_H?MlEH;4hOrbvAZC_UDP93kPy)oxMqzgTed(VHk zC|+e%EzALK##96f)U!vKZKO+z#0(Tie_x2FgPPK&VQZ9UZBd6YD_f^3SFfCx7@E{xRP%D zIcVN|Q_!FYgUbzQJsLjDJwOxuTYaJCF=)NzQqWnLj=+}-o)NttZ;p?S#Wz(&@ArV# zi)B!FnX$w81xT$rJ_eT>mi~RRLho@L{q~?{&G?yjF5m8#nkjzl^q<|EdJPgOJYYAx zuT=i%2>HblB1{+a?Zuo5;tQfg&Q+N+x9e7|f8vqvdlr^*ld%O(?UMKp8o-c73;5>p z$Z0oUGaNIoH`p}NUeEU_B?`alN?))V+>B<1)#epW++J{&id({0YTiBA2 zTgS-2U9tJL;DyRi;rP;b-%t7GJqQY<$O)Otw*?Ox=U<0bK7qSMjoi-q?e>|&F z_`OD9L}%vM>Y$dvTeTiE_RrI+slNeQJ9=+QV~{CFS_Y=(l3n(TSJ%Vn{BaF5 z{v{IT36>JS*jjcZ?x#jbIl!MWNui6OQObtjjN($;-(53q&g(G}e*hzaj+ZdvN>0zN z^a~2L&A(z~$H~bjBWl^vt3 z7kv-ZWmK7wVH5{$$YeuLX&m_D3Up!&%jC~RYJV2c!keZv{BqRheHtfOv3b4D`$y(W z;1_rJv!?h#>Ci^F?f8p#yZ`A@f2n5dmkWTQc{wwb+fK{7o@?zh-wPmC$U~tb@BX^h`(1sPBdz)mBgl1#3!Fa#j19H@twRf+5mhV#F&Br49wr3h8W%FWqVTdMrzsK2r&k@84U-m z0k@IkoL5ufRP{6vaRNIH4Z8I2 z6e@$(71w9c*42E8W2?giG6-X)=}q2KHt5<|FlcM2iJ(ntEOm9~fWM&Ri!ER!#_02I@WTNlxy^bQRkEccOLb{f2;gZMSjr-L*~c?~&T_p2 zR%E>c_Vb6mSIGrda{ufCj>X`U)6HRb_a8%a3N)_eE#AK@UY&I*26XTotkcC9OsJM? zjSpHD`_RSAm}Q%A%|jfWc*&(JRAK|;ZM~Qo2pg2PN|m~XwbfEhvyE6hWL@}jJ!%BK zsj4R`AfJkpiJt5RO+w?HU>wPLVaoG(D2JIzq#LcFInvF-OPE#?pClovxdS^a`IIr0 z_k(Yk(acaV#{;VAxB?t9X(+@UQo;jsxuRqT<9>&itF+@m4TTU|hhGh$PU=fuXaGo0 zrqnDvb+<4aDKkdEd12%UF(N9wN%sg($GKejPHHKX5`xlIWtNrF>b{+@X+)qMT*IR%z3Zx8YFFdfteCU87u;lnD z$|`j>#`DfjrXWoXlp;{KEPnM|+0ZFw4(9<|yz&m5{Y!|30yAhU+2eu8`!G8L_S>K_ zppesTWlrwsS~loJ9xcfAizy6j>=hKdZ4c6s>heIoo*>2)f+I!^^nl{YZC^G@P6EA; zofLHv&jIqTL!ku|BIwjwKJXp==R#rJhVXo{OvT#oDY9<&O8N32%O2y2GvO+RddB;^ zavg&XOU^>Zlhycfetg{T9J2^x&-RR>|N`=U$)+L5Vjyv>L_klpkSd0NsFDH!sW!! z(Z=E);pZX zOCp=-R#P-cWvhy08HgZ#e8^{bMj{z3UpxmrV%k0r(FuqTp9~fNBoT=teGe z;Umd8pttfrgizL89q;6@7laYM$nb@|3duOh8aQ=xsFjAG%Xp?dscj_~on<02;zHN2 z+A`9bZ+|lN*JNV}Cc8$xzoE?&Ui}G#B0ZEp$_hii4hoTyzS)Tl!Ns+|2ipRWZCja* zD%e)5U%0v4ZXe7ej39hLYW~?xK3PYT4CY{jTR+!4CtnU}!$#DN_Xd84R}{Y=5y^3C zz2*EUcuByV`JHO4A^}p=fp{fP&Q%d@(|&*_+CSYVW>u=5W43ILSZoXII=MIxJ7V^> zSQW!s4<&tX@=0{~iYP9Y*U2(_AYYpQJXs_g>(*@f-Qp1;%TPApyUZ@*E3O$P7sb_= z7mwX1XJb`j@qfG>eFZX8tC=tdKmUyn>t&`NQ zEX}2AWFq-M-OJb|IPVItcJ{L?b2p)olbgt5QpacOg-_h%!t%DT)a7wvWbkSAZwW4T zj7uSl0|T%ClQaMqCZ#xwLxmS(hfQ~S6F5>@$W^+LeSqZH6^uhPd+SqY3NQ%&ffzZ! zswq+cEejK;SsF01b?^Nf;!~<)gJLK1*q1+|0p9gZO$-b%6B`VrGxg_Dn24dh5}eDP z+1ZgQg%$%=0?e&Tp2j5uFE!)MSSOnED+bFRhVif0nCN*usrxUukWv}T1Zx(pdaT-n z*^;RBNV_v>@V<7s|AYrrh7p>hIIK;rTvcz=Q|k%2?`gbmz13~jB+GK=tEcJeP53M3 zJ5TLb4cw}&`y)rrsb!ircG5iOJae}6xv9v)%s%K6zMQ=uF~TO;Monv>5&5=}Y2*(&kCC_*#BteCykG~N3%F+IDk z9ZA&FUAwWPEnoGKGN!mV#mwqXct$px#Kzp1GWb+QTm9g(M&EeHXL*aBRZ=LfUmlVn6Tjc;I~_piAp0rY}XzXqsoyjSs4n6yaITk2-$ z@;y61);y1}8B9;S*8Ze0os5N%#IZYu8=oysEcf(RWdMt$P(bV9lte^6P9@J{1R2BW zL`=0Zje;{je$=;SZ$d-&^MluMGj;q??eR%|H+Pb6q0UHUT6Q!dby^?hmUFaJ5!0xo zLM)AVUW#yy>lb$CjWq}Qx1z^c00(d}4pQxA(>Fw@028WG_&u_{0M;dG(JN}-BoZX> zMx}BXn9}sWyiwk$DzbqLUK3SWFp!5YxA``2zDt+)moZ-!wq|wr*=mG~7$+$9McoS! 
zYPnfjZX=m{yfhyfdB-IHr0U`p@p6>1_TnrK({p`3Z!B{hO33lap~a=Tex>7~uO z*-f$RB{^1=4#4EA)>zJauELS3{qX8{tNIF|x}s{EGvDQb$G!J9zEs_ddF1|SVs>@p z^}W+W!S>|iz9Q5VE=x3Rm%}&_>1U4G<^b3k3#`%m8{HNhzR8}kp1Cn<$~_Mhrn7m} z#=pLEh|QepSB&cO?gFdc%z;-S!TWgYkHM>_5bS&B%yqwu1$h`{NsEkJe@^Dx$y3=x zE-S!kG$Bnfqm@o^VLjZvkCwT;X@l3*GNtO1L=#i+QB7Y#+G3{t>7iU}XzWxCcYmq8 z7=82=<}bXf*R{=u^Cg-*YHz%hBt)dG-5zCSkCO?X^8a(%Rh6zc9J|$6B=T9EEj~>4 zm&v2)wui&2dn>}NfC-NX*2}7$@_tioLZTW`Yri<_f7jUbegwapK!1yk>H}*%>je6d z?%C{5nMp~=T%Vq=LWg}d!xdWm6zr5y%U^q0?`n6p`nK{i8$Zh{lln|4>GezYW7Y&8 z{bII-x23v)o34?Njr^9fJEFzOEEwqeoRUF|Tmm4Ow}l{j58zyK0Y1y^_1~>dTs4Zt z-yA-gO^ss9F@iyv6{nD53w!jvlHXgt`^x9P_E}7sx}ANy0~+U7_)M@-YHxtQ+%e&p zi6{AKmeSh*k*=EnOruXv_#D8nB(JxX*wK5H=MbmLDCsjd)0n2DEXhbH zdvVNi6DVmH&9dU$8J;Aj$ysdYBw$s!myaTD3zDS{)mAS>K-2H z!s?|AQIbFcx}&F_1P;`st5OImE(i#u@W5}^*@ziuJZ!D`#o|q#=FV{`IOzHum8_f9|pG;-e}iLxJCA zRxATCDvPTALr`Zby5QEA{p^dTIm(|qco*)%I=J@hX8`C$ zXd^{;(AN!NdKEcw_T2M$uv&lDdbdAKll9s-vADXv6iBk7m6Ce!=%weqjX~jhR4Y(h z3E`*m96sDsP6wF7G`F20#S-}eV^7M0=wMED2RDXqsiY7M9fHZU>HVK(h%4bt!HG68 zp=BH*pDW5&{o{U@nb&EkZ4ON`nan1VkgC7OU3nWG^N$f zLvtcOf!YR(l06u~{OvQyvkM9noaS#AL!y#@w!9bT4yIfDBPs`9D2Cj1zjhVjAdeNe zUh+u_Ofou$$QsGmY$bbl5_x38-stM4HYqfa)d&5^9Pp5tp!KHG%_Gw|g!ITk5fTbZ zrhBf(TKF8W{U_K8J~w)IYNBGN@?#DJaO5klEPL4d*RsmNyp~dCL!;C+%QNcb7)E~=2ED*AR%=ndVzb#jMnc__`NggRHcMnyNH&kUZY)V%;-{OJPS z;g`;!?)~l2&tc>K{^*3vCxwM+#V0 z^TM{e;jfu%F_xJ;&qnnM))82+RnAm)a^s9=;xg_H*&oNmexA29(EE0L@XVp^Ip4N( z*I&hI79-vic>^;D^F$KgEI$-MFGbqbnQZdzFRw(uYn~dD{c%&?x_9y5F}~~F7OAV< z1CiNqxc}*o*!E{X&%Wk1nX@=Ko~Ztp-~qgX~BBI;`3n80DAcSCLa>eIwPKK?pv*mzuwYR~?f z_=FXJ0h5q7^v=;X6?FD=mV`1+4^{9?9S?DPyk;fHz$+84OP7=C-6>J)$@|AhIauY3w%WI+l)3q*V_o zd>uNT{+d%Nw`A2XXK6ttATfcNAJzQwg>N!{z4F82GFU}`7%iWzyRKmdcy?B4Gm_D3 zI}m$SM_~iq%8p_g*UAx|#!O%T)SKeZ6G{w`wFon+lw$$4PZln!S5_94v9nAnitc9$ z&j-lWb9WLGi)hKKJYxQhqOLL7^vd(~k7;+q{a!ahOIO<3>c6s;|GVP3(C;sa?6Rb$ z4luYH#80buLuO@@uDErmMT=O0S~GFt&h3Lz`Ix@8Qv17Xj6Mvst1?QP^UW?Mw7XLV zo7yi@;Kxdx=1+?aL$IoIJMzq(h!~Ch-I=<-17P_~aL`rh*I_x8A8<|lPk0(he8(-X zq~P$H`ujy`6U{9ge}x655e&q(@w%;QjbDPoAJB(Wg;IUstk9^qXndD?Gsx?@|jh1IKsM@J)w|XNM6(#zX_L3(|FL_cf;xTYb75xRl*Nd7E z&c%mn?tZ!VIx`oDA)Bbfnc}{nKjS&sk)dEp_sO?uCC9YRY^u1naVl(j>6rvpP*1s6 z$dqcb${SsAFv&W04Vv{VnJV?r6LJG{N}{r3-j0w`d9tU;QV;TqEb+J3{0!By`j+90 zkug()s;B41{?ta}+!G(ZC5hLW#=fhcGWJTO2?}lNH#x#a$v=WC4dK+84QI{FwyM5! 
zZ57u_KWc9moXCmztrc*x+krzC04^D9)+-msyP@fFI+DQsbkoVZ_4J#o=dZRE*FT39 zY-e-tN-L%cM~pv6%GZwP5R~(}<)O30PpL%rrVma1eQncG^2KUZA1e2xjvW*7B1O?3 zBnvrBxO@=b+333j;kS4?KoI0_hLOKBp!3%}Tzd->%=vq+eAsb+8*}JZB$d#BDH2Jp!x)~G2_=uZ=WAF0V+nFiuIO+*rr%!bl9s~&0e@inn> zDM99WAy$>udv%d1DO}~YYu^=hBj%mBa~3JG%W^Y5l*qrR(y<>a+=fj%y?<03tUXWr zD;Jh9W?dUZ&18^JsU864DYo>_{ri57&ET6cJ zFsYGB(7wl6Ye3MxoG!nYt$h(t4KPYBzc*dK@6_b#G0J%IL`%LtbW;>r43&pCCh%Ry z<(tRG4W0gEYd_9v)hS+eJ^xUu%lBfM+fi6mms4}569{=cOXII$Dr2pMGy2v=-oj1@ zpdr6$emxa0DSi|zGqXcxJMtgp_>u6r{9ZHQ86h}ZH>3GnVVOa(T-j=xX@jhD=Ts5r ze_*@%<3qV|P5&(1EL?>r9V}5ZHL%@`cFbSL#!Y<7C@qpWCe6-gh*E<$rSYlyr5}rV?@Y;u zafl)U!+wU6sz6c8LZ<$YMAo@J!xzF1e{=XF_MUW+rQLZ%>ocU0N1Qo{zI$M@s-9Ff z_vkGVRmzj+$0{jhd(4T~o#j{_z3qG;GfYv32e0+ykfJdAY~=rD*fNzUO+R17ew)&q zYM?jzI#*iM=cm}uXm^3f{~@h|eVxfG$As)p1YaC@Gv~WhjW~<5DXsh1SJ?ao#tY?n z+L`#Gjq&rs>{Ay64{5L#Zn3_9!`axp>Nf3Mj;y6tiH4YDX6QWoL2sQ`wj?2Az>!!T zOb(Ae+462jbdd?4{hnNW3K&RV{+vFor!Q?qp#`{+=?(9x-s(h@Gl`(fCW>c#9~L7% zt;KVy3~F-F-iJbM*aC55Uc9YS2)5+}xutlmCbp=B;QqFFY?R#}mawU5czP#3@W!aS zqx7Qgz&8={?M578`e=C(+|i5-9rR%J$qPHgn!R@2c#d>95D&1U(N3aS&(Z;|#<+%} z14)zCsNk0m2nI_Yktrw1%U^vhQDV#$nuPQEOLm@J!xb2y$1NxUZ!m{c4)tq1Tm9&a zI`g3gJwI{w9O}cKbbLnxuKTvs>Ct8`8#A`?EYs^jC+BxgkDgu%Kp1%-)73pBv}_JH zKE2((I~<%BFtcHA|2_GdtZwzxm{87ufxtk?ojE&=!btM+Pa)LtUMrf4QzVZ2t2%O# z!9G!I$ZdVQCsJI~dn<%Rd7*s~H#kq&Bc;G<8|o*1I7lKrSL8(DWml77@7$Fj~lYimotR8;#l zZVe`$g!@`mKDxLENY;_pQJ3VBv%EgU5EaYxUrOyXVsLm(UI{EuzRX4hN%^=vD1Xqi zE9!8`WMpx=w)UDG zhdm)pjXo@r1o|^t;#K6MgVQP{E?aKfCs&~q&UJ+2CB}e!lquRZ^!a43t7V+))_Z}u ziPA3gED+>Attzv)M*kehiEooC7ascZm(?Hc+?Dtv{;?EC5LO-s+AjGK!=XUA3aS74JCG)k5Mh}#OjKQe9V zu#v?@_Y=rYw;mmh7#olwL9}7R2&iM*kO(3HV~EDRH=fMHzp_B_U^Jx z+Zo7hKb|FS)7?K>U?Z^=if?lvjGR4cyJ_>Mi;eu}1y*?O1FL~JN$(_FKuMY`fO1JCyI(*j(u=M<85jlTw zviW)Og1TLK=|^8jfd#jAZZj-+Ch0!{lQCH>NvgZrE&xldUjWh;UB_X76KJ3)S}ujz z_+E@+nXHv%k=NuWRBTLQ{cl!Kv5Oof%;yG@)#v&#&wnTwfIyW~RzEp6ea4`;q7%S6 z-22A{d#!_r>VJxh06+dEKjSaq=l>0i)4MnWrj?;o|AzbNa!GG7=dCwHNB z;+J^GO0&vIfVI!6&QA#P`fgNdwvb{Kgc#s0=P{6Rn4TuhI=uC8uEWy&Wg#GBj4z;I zCSxYC>sN(2(J)^W>k0G4>%_j**zGA1e*QfN>LlYe6anMVL#I1i6FC97yj- zcBHK?A((lQ)$;(}YkvC~p_I#lN9HOJgCw8(o=k;D3E)f3QQRPS~Q! 
zOj(V9)$<2)91kE^Y?nMZlJjx5LM5N$Qr;cWVZZGku=vzAUnSP6NDGL(0ExbC1R{74 ztJ`3sjo^Xd$4xM2Aq){A?2|wSBcI^w<9zr4vU!_iMJ=r=VW>SE@AA^{b3=uFjiU@_ zI9E8RFNtQl2<8@!cdAtq36QEA2{Um_PG6_m93ECKIqZVLx7u}O|Fh@~jmJ;@oe!LL zPR!Rlf3Zr1pDN?YX$r=uHTp3gU&RD{^hcE04dPPFEm}`-0Y`@}U?LBKhPW*3`6#r9 z4+gS-r+vnQmiz|FllLA(U8=)Pj5vp8u;KvJM}gZVk?8V~E@?txG10u| zKxQGK1^_id7dGaT0Qy^iHR3N)pFh>S@iwmg^jibSz&Q%CReyWZ%QF1F%OG$)#5K?f z_3>;^(STVoh%9bwb>!t!ppPU2%=+v(&|!T7qyrQE#)>M>z7F$62OKrmx9<+fp%)A_ z!;|4Ws)7}c=3BSiUI!G!EH#irO%kHsXR>LffnNSjgBwKbsapUAxFgTTh=v zyE}wU_3|-ODo)%&sfQ&!ppNgTdnsUS2;`#+>llJzEjASZ|63Qu+F`_0TV?7$Q zKYalPHC6%e9xVNo#?DWH(na29ee{wm;9e-uQ5uI<0*Kx>?xWC(e{?P# zY%g|S&?xM`_9ANY_*kN%%VQFQy!7pfD40Y1SMjuF@Fa2kr`8YAYnEStAXnhoqKCVGuj0T7aN>_m zf+U%10R@e}=P8QZwh#3Uo24S+7%a9$2%qk2Qna*^K3bM<9B-0VdaZ@e|7p&b_8E(M zcZhXN9E!E&89*7WaE9A<7Af73NlI<`?j<16qoCQ3N|~oD3&C>|&nD0`d|`sp0+``B z&=<#z1#T?0#pg5#960Ng#M&$rI|{6N@F=whOh z`;$oH!9rI1*`Pm|8dQxP=g3p6o!w86?>5{2UZ|Jg=C&B;97Zi$l@}(8pPXo9b9D0d|F=lTdw&hy_II<9c zZ4qz+00%^EiMnps`prx3E5XDMG_y>t%m}@p`R>D~PZZC&0?$9!^Zhyqg?$#b0heN*2|zWM$10Iy^QAuH)( zalKB`$IXplKr5W%D2=ao4?;9xGF!QTjPKTfQ&{CCr!{No+}f7@)VH^B&MDNp`~<~H zrOoxcIpfQZH~iZ1c~nHQ6~A>sORx-Kd--j&1t+h-@$q*yJp4XF00s@A(^!cUCpXM- z8`;23FZKp7cBq)1Qu82;fkZYUn`3xgbRLXqDmha3oKp$aL*oG&G5o5wL1c^hC6Xa} zG}{a)ZQlp#ZcqpbEPH8deJz=-zK)=v;aXVUz{~>5-_arGd+|4d8w%%ZUqQZ@+K6$h zqb>if77w`9rgA38^0z!O%k|>1$zH^iF@-FFx>u=sXYb5ayMWsxBtUl1k-FFg)0&F1 zzq+yGwWl_5X z62|u4VdMbVC;&JfmE*<$iHuVMfZET2XFfVs-4|~A`#@r|;_zaom zb_XSkmj{bmd5ukf+Lav&9K1Q)#;R*TIXH=TJSRq7P2-JMK@`W+*=|#(mCb~wxLZpS zpTTq!>q~SS@gteF4scb?kyUeu)^Zm_?2!a9H=! zYxe%qRyzKin|gjnLEjuJft>EXLNy4TfCQ3(Z6A9be+}?U7N2s~X*%>pvM_7_n9=%< zP4vAAlc-Mk_+$R~l($o9VOqN3EnpHsUyzYO08Ap^5{$EuRtdD-;y50ZY+We|Xhd&Z z{M!3=UFAIYkLa>-t{)Oi{R%+VriFVNo3DHp4lSEmKcC#*vPAJYw^xI3LsR?bpy1La;pcB?yaUM9R@_suyr}~zW>CO;bt6jBiZ{e7r!fu z{!BkTodc`5q$$;=z@6vbz*nL8lkE?}z3gzSVexDIETe%xeW`rKOx65|Yrvwu4_BR} zt(ucw7*9Nv6yd??2{?huADR( zTm{}gt;WvW4J)-p_2lK+UqYJMYa)DZH%KLybW#rFTEqk)$3<3%qw?WvVu-X9N2=)$z8jwa~c`4JD za60wtyATyaC~fvY>H;Mb{4L7CpY?_I;f`wBTb(f>eiB~&uPfO^YsTlcpQ?1GQTVC6^!`=$-4!dpQNA^`PL-cT2_EPCk?X$ZWisL-rGM2yI%WVM!|7hEEe7 zP4fN$Y5~@dO15Z2@l7=G=^m_)nsD$L!xO%#M;Es(o-Unr#ABpUnm4~Yoo!NlH z4Pq;V4ctdS-d{aM;LgLEi1FDt$%uKpUqaVPTXC&ZO7)nNxlUVaZwZ&{S0@DR`${cu z?#tZM+X}`mkxBM4ViGGppO^)4CrQ_!rKe{JvSyV*%%=%|b}kf8QZgB=j;oJaUFAtA zu#5H-k2{^uFlzQ;_kj4PGLPhApM4(1Ofj!@)cu6O2a7%Um!|B`%!+;T$8LtF8#5lI z-o|f`MY_a`xjRu#8aOM>dZzoPpAa0GK^il0rCdGbt7%b@&hj|fBL^p}8UXa!=FV!; z_6tsD(pHoFi}XkH6knVXtgcB9(@8(#q8ge+zZIe;ET2m}%xS#Fb}^>-N}F_t;VFWx z_l3pVoiMh5`CHaarxX|W;^oeI6?0l1e_gO@wR*bUS@iUD8X-fR(Dvtinp@9YkHYcv z@C5V8%KXXwvlk1}wtx0rX*~;}XgQXA>M_itc!K;mhVc85sSbRP73J8|cqUB0-P#IC zqOK`2J49u$T;ZdtS&a6*IGKnB9&X`)gVHf`^B*zfil<-SaNWw;m<=3Or4Yn4uS}!! z^x2<-c9-0n^U}00?Kjrs&~Ka><^4QV(Q-dB!swl;p|Y_VJ7Y`c6D*3u&QTeggpP9^ zb)<8e=?UXQal(5D+Vs|N}bf|tW%vsQ7 zdqaO%;=tlGl$UDfjXampL1tk2od@t0_@kId_8cnZ{ro!{A;%8#(mS+qwlI9rd$zFd znrm11a?Gdc+d?oLGmn-(YcaHFsnBNIU^&N=`!|;5hzR&d6Y;ve7;;pv}7d+i8?32Ctj7 zJwLRR_T3d=D!uG&?SJUk2|`Jgys)6(|6F8QG++E?u$4iM?q&nUwtw&O@};S=Y8$az;&UI z!7(qam(BRnDhlWzf+7RL?J<*XaLen=m6<4f)cX5cpZ=WBx(;w(0+SXaUMm8oFjP~M z9&%yX?1bx?)z+JXO^~*w$tfh}g{*Xg9et=*j%3}>f;f0!17EREzm>hbkHxnW{K0E% zAID8z($vrwX8^FW7d6;1zjesSQ7B=cjebmHoAbo{A^TJ!d)kx9DM*dGb$O8eSBKe1 zC-s*`wvM9lwzfZy`Gq16dD-hRRH4bR;8NE0B=wXa9w%YW)9;8~4 z&sYiSR1}U+=USfnovh+s0u;J! 
zmBPUIPpaN64p55L9NA4%*I=0tQl`+#08gB$^RefsddRM{74wezu)SphB+$$uc(S9uO&Ho$HVk~AcZmYFoa^v`c$iXkLGj+Ty_`~j% ze`POH3-&A1<G9DKSy15=pww-lbkm5x7U^qN@}aPzc> z7SKb##jmC!>4f04uMB;D?PVfOZoL>_{mGL% zGg{~5F7ub z3g8@{NxjSFYuBgo=Yl=ye!K(7N*KFxv_{jz;(vJMm)F*3!4{6+RbB6$qE7~)52Xw^ zEXL|x8*;37rt=fXO|R(ulK2?B@PsoNTfN5?W;cyA5jk7_f^5)09>cH1h>^wk6Vn2H zU>x-dyqEw++7rTXWyE#Z*>h67nA|1hB&qtF5Ent1U4#HJZ6+X^XtC0+;Ok5X!f0;^ z!ct?ElA^#rWkZoVr6er^+Jequ*@UUV;5;zhQ@BC`uU9Iu!{b}*%GcOj={l2K?5ZeH z8wvyKWyk-d`~g}0!24tYNbj0GN>(uvR5`Vf|jNGhxgH0G_t0lH&Ge3jsEV#)T`4I9r7~U%>Q^@phc1unOI_re(Pm z{Cqyj3Ql1q!_0mY7JG3J!F%51ZyVwQ3yM08=kXG8acFUyJ3;b zykPgXF93vCV%HMRTX(qwujE}7J#zaCO_4ge_3b$=w0qq9rU3fQckE?&4z_|%t3PIu zde^I~_2Ky3R2RPi0fmT|MfcPW_Hjb0PN8X3o#lGYo*2>^|fQ82?1T8Sc{eH}_Vt>~z5^&Q*S_nFCD?A^pr| zb)Q6=S^nrY?KEqJ3(PdhDG`C^AV_`-0- zRWnkd02WfKGb;Kh-%}3e+D3eOYROO?_ONrXGNW&`fb;=FmQ%xIEv=m(a1 z;<-YFyWyq*Uoa0K(bL=|Gdu(JDxnk3~N|Th%O2Qf5u4g4_n4aJ^8Od0cKgl-o zZs4ZQoU34Xp-4^T46YW6Je&HOg1#LL_&P@p$@Y;>)UgjXEM28Ve^iAR*~%N)LH|*fv%Z85bMe9QEeXE^w5+0k9x< zE2MovnXJ5lihd8cu1A@AA=F9c=oIh^O{uFF62g)+POnD-!bS0)4DK5ns)&xd2$lU{ zI*}i5l`)TI@yn(YM!f6sh+mA7pC-D)Jf=gWm_VWM?LkRF{>9 zM+evWN>I(#pnuY&C8x@SRSAOB;WR#NA(us`g-jb71o_IsKO3|l>J;v?y?t`DZ`OnO zR>W7EKUsW_ zy@e}0EKW%ba$RmGAb*}Oesid7Xg!G13j&?H55F!l?TCPhrf<>joXnc2%h#f_>d7l(!Cd1s)sDU3=YD%iP zH;NaF-55S$l`HoEbj^}g<{a>H`&oZf-VYQk%aJ>D!=#m&44Wwfv3Q@;0mCe+^NT9h zkz4j3aWBNUjUD+%_B=h;S%AefjC9j+OJ^RIS~$kj%@xd?BZO>q93rE{aql9Rrv#K` z3uYdb5eYOnFUj|KL&mA+N25GL`m(eShn*(o`hwdc`X+*o`pSzBY76|cs2~Zlc#P%C zQdm@6Nmz=$iEz~C#Hz`>&=u1S$4rvG_Ot+-*o6~x#;ZEmyO3Hr6+x;1rtxgi1=wwi zM1!%H^()Qe$q_Q$T3+`0L<4Q<{d4ZXJGjA~^MWfD(P7)IaeM+nex;UQi^bW;;0)69 zYkNE2&VYcZ`GC8Xs*m-4_4GJc|Kp?ayVuaZ$m}#k@En6ZHaJM&t#JcMEMM+*E)tIj z&V=*8LR)|j(zmOm-MJ8wW+}$HAhpx-h06QEJD)2ZrYwnN)MNHB8j|#&|%Qy>!IBiZDCMj>f0Jd8&?5xy3tvHNAks zjJ(m1D)u}(!>frDNfB&F{?&5T`V%I9U6Dg!dTjHUuk=Z-bs(JPRutsIrC>X@5I5#I zZYO>m_piJCXyQ!H+L{4?@~j9A>bug4pk4e5aZJC{h1RCPP|!sm^B!7To6ONUC8Jlq zogXoL1R+^@0upsN$8KjD;{3{LQ8A^*SGR=LI4CRM-1C#ExxL!WyA1_HTdL)mtx7dl zT{n7)Ig0h8(2Pna{NwP~#R}u~(#6 zv8KcaEso{PsrI?Kr>QQJXv`p!d~4il+YgXdL{@%X35|B?k#kGhtVF$aCx^ak_VNB_-tfx zK897sib67sDI$s&Gm97MLO0oOvNBn^wAQ@EAECB@4O6*zzIf+AwLk{pVjUf>5 z-z(tB0p1Uw&i|zNTd_7c2n1yXn#=$afFh7(ap~U)`7HM2a)o8(m$mv2#sBEjCn2B3 zE_LwBV0@$1r-u+Q4vOD6e``CcymVz%E|&GLNz?@#D|+X4S!X8+=09KtD_mscm*0|JQ% zZ*L#Vrly(~5`jz9yjhrzh5^^qKzwh0f4S=4edZ5Re&DY~mqQL(G(Cb3wMRY+@$$mB zx#0qZW7qoMnaBUE@IW(Jr^MC5`B`eJPGI$xRAYbzF7-e!PiK>N* z-LpqJ3^BdCHMY4bk+m)L^#^f{-62?y`Rk%Ell%9O97#SkD~nL~{HdMhq(6)m237y$_>(^2?T9AldZ|);DsjEOq z+KLwQb3-2R!-qYL0wkj|734i0l~v}AQS)tNth zxLz!ey5Y5UioiD}_yron=Nx4B@wNo%U=V_K2M?YKWo2e{)8e8hyQo*u4K_cI4H0xU@1g))_|6H2TX%QzNnF0PO?7gvnkrQPl?+Zr z>Q+~RrW|fK|7X14Kz#1;T21MFI%L5c) zYuBojWT2PtOL$j|bqtXB!nvUwOvUR{2;4L$fjW&?`&7srP-^F}G5m$W^mZ?{AkW@k zNjQi0i>KC55_?nh-%Q?mXRJ0tnOHX}xi8=&^-{{1Zs&2v}mM zotoeYunUD?-jNmlmv#s7ni7M0v-1KH4GL3fi==*AU%9Zd^1>X47Pkw@zVgKt0-Del zHvH!;^~LeQbb{DGvo6V)tyM;0wZ$2an`MHHSvf&P)r=(N{ti)zN$b zyG0ScE05Y)h)edBtqj9sFjF7M2W?t1LNJ&>UXCN~*<-GmH~kLrROKB9cpTuEa2^d6 zVLB~}o(BQsoT;sEZNA_<>%zj`NDac}BIj#B?^e!M76Ogt>dD?i>>~@FkZH4NUMeVl zyF;P4iCHCmL7O63+tL=NhRR+V+crRCya(%D+FwDz(&eFgzXW^~PL%O0d-OF%d!u$F zxiN}{*{okXT{>f}Too--?!j72aZ`e{1WtdRvCqy(@pTa=anO*vXZnk=04!hU@M1!^ zO*#{JpHYfsl0jjAEmZHN<%?3kCS6^{iDRzKjZreHjfEjNk8yZ5ldMVqog#_Z-Com@ zl|`10UR>1KRGZj?!PMVvO^Qj-Eb(>uz|z;An!0`rz?fhvg#~IT{tf6aHEZZ1idL4` z$Z)-_wJon>yescTt0RdD;h7-lXb_)dKk4w@sHVB8dUHdy*f>0vDF$1WX2lnU!x!S> zptXlCvrBu|ZP_xoqO{kjnkC_;^#0VZ8byp-srZKb=_O;WnIW^5eji=$oEJvuTP>#5 zN2lPU - A different physical network than the one that &corosync; uses. 
- It is recommended for &qdevice; to reach the &qnet; server. + It is recommended for &qdevice; to reach the &qnet; server via a + different physical network than the one that &corosync; uses. Ideally, the &qnet; server should be in a separate rack from the main cluster, or at least on a separate PSU and not in the same network segment as the Corosync ring or rings. @@ -231,7 +231,7 @@ - Enable the &productname; using the command listed in + Enable &productname; using the command listed in SUSEConnect --list-extensions. diff --git a/xml/ha_yast_cluster.xml b/xml/ha_yast_cluster.xml index 765be4f7..dc47c4f1 100644 --- a/xml/ha_yast_cluster.xml +++ b/xml/ha_yast_cluster.xml @@ -229,6 +229,17 @@
+ + &corosync; &qdevice; (optional but recommended for clusters with an even number of nodes) + + + Allows you to configure &qdevice; as a client of a &qnet; server to + participate in quorum decisions. This is recommended for clusters with + an even number of nodes, and especially for two-node clusters. + For details, see . + + + Security (optional but recommended) @@ -596,6 +607,150 @@ + + Configuring an arbitrator for quorum decisions + + &qdevice; and &qnet; participate in quorum decisions. With assistance from + the arbitrator corosync-qnetd, + corosync-qdevice provides a + configurable number of votes, allowing a cluster to sustain more node + failures than the standard quorum rules allow. We recommend deploying + corosync-qnetd and + corosync-qdevice for clusters + with an even number of nodes, and especially for two-node clusters. + For more information, see . + + + Requirements + + + Before you configure &qdevice;, you must set up a &qnet; server. + See . + + + + + Configuring &qdevice; and &qnet; + + + Start the &yast; cluster module and switch to the + &corosync; &qdevice; category. + + + + + Activate Enable &corosync; &qdevice;. + + + + + In the &qnet; server host field, enter the IP address + or host name of the &qnet; server. + + + + + Select the mode for TLS: + + + + + Use off if TLS is not required and should not be tried. + + + + + Use on to attempt to connect with TLS, but connect without TLS + if it is not available. + + + + + Use required to make TLS mandatory. &qdevice; will exit with + an error if TLS is not available. + + + + + + + Select the Heuristics Mode: + + + + + Use off to disable heuristics. + + + + + Use on to run heuristics on a regular basis, as set by the + Heuristics Interval. + + + + + Use sync to only run heuristics during startup, when cluster + membership changes, and on connection to &qnet;. + + + + + + + If you set the Heuristics Mode to on + or sync, add your heuristics commands to the + Heuristics Executables list: + + + + + Select Add. A new window opens. + + + + + Enter an Execute Name for the command. + + + + + Enter the command in the Execute Script field. This can be a + single command or the path to a script, and can be written in any language + such as Shell, Python, or Ruby. + + + + + Select OK to close the window. + + + + + + + Confirm your changes. + + + +
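A heuristics executable such as the ones added in this procedure does not need to be complex. As an illustration only, and not a script shipped with &productname;, the following minimal shell sketch checks whether a gateway address still answers; the address, the script path and the idea of pinging a gateway are placeholders for whatever condition is meaningful in your network. &qdevice; treats a zero exit status as a passing heuristic.

#!/bin/sh
# Hypothetical heuristics check for corosync-qdevice (illustration only).
# Replace 192.168.1.1 with a gateway or service address that matters for this cluster.
# The script exits with the exit status of ping:
# 0 if the address answered (heuristic passes), non-zero otherwise.
ping -c 1 -W 1 192.168.1.1 >/dev/null 2>&1

Keep such checks fast: if the heuristics timeout expires before all configured executables have returned successfully, the heuristics are treated as failed.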
+ &yast; <guimenu>Cluster</guimenu>—&corosync; &qdevice; + + + + + + + + + The &corosync; &qdevice; screen shows the settings for configuring &qdevice;. + The Enable &corosync; &qdevice; check box is activated, + and the cursor is in the &qnet; server host field. + + +
+
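For orientation, the options in this &yast; dialog correspond to the quorum.device section of /etc/corosync/corosync.conf. The following is only a sketch of what such a section can look like, not the exact file that &yast; writes; the host address, the heuristics interval and the exec_gateway entry are illustrative placeholders:

quorum {
  provider: corosync_votequorum
  device {
    votes: 1
    model: net
    net {
      # Placeholder: IP address or host name of the corosync-qnetd server
      host: 192.168.1.150
      port: 5403
      tls: on
      algorithm: ffsplit
    }
    heuristics {
      mode: on
      # Interval between regular heuristics runs, in milliseconds
      interval: 30000
      # Placeholder: any executable that exits 0 on success
      exec_gateway: /etc/corosync/qdevice/check_gateway.sh
    }
  }
}

The option names follow the corosync-qdevice man page; refer to it for the full list of parameters and their defaults.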
+ Defining authentication settings From 904883e9d11ceba7acfe0f91069824f5fc70e740 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Fri, 11 Oct 2024 16:11:49 +1000 Subject: [PATCH 38/39] Make crmsh SSH procedure more findable --- xml/ha_yast_cluster.xml | 23 +++++++++++++---------- 1 file changed, 13 insertions(+), 10 deletions(-) diff --git a/xml/ha_yast_cluster.xml b/xml/ha_yast_cluster.xml index dc47c4f1..780bbb3e 100644 --- a/xml/ha_yast_cluster.xml +++ b/xml/ha_yast_cluster.xml @@ -1126,25 +1126,28 @@ Finished with 1 errors. Before starting the cluster, make sure passwordless SSH is configured between the nodes. If you did not already configure passwordless SSH before setting up the cluster, you can - do so now by using the ssh stage of the bootstrap scripts: + do so now by using the ssh stage of the bootstrap script: - - + + Configuring passwordless SSH with &crmsh; + - On the first node, run crm cluster init ssh. + On the first node, run the following command: - - +&prompt.root;crm cluster init ssh + + - On the rest of the nodes, run crm cluster join ssh -c NODE1. + On the rest of the nodes, run the following command: - - +&prompt.root;crm cluster join ssh -c NODE1 + + After the initial cluster configuration is done, start the cluster services on all cluster nodes to bring the stack online: - + Starting cluster services and checking the status From 91baf0cb2954878403306b8c05f1c261e1c4cf12 Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Mon, 14 Oct 2024 17:04:26 +1000 Subject: [PATCH 39/39] Add procedure for corosync stage of crm cluster init --- xml/ha_config_cli.xml | 72 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 72 insertions(+) diff --git a/xml/ha_config_cli.xml b/xml/ha_config_cli.xml index f396b815..37c9a94f 100644 --- a/xml/ha_config_cli.xml +++ b/xml/ha_config_cli.xml @@ -954,6 +954,78 @@ Membership information For more details, see . + + + You can use the crm cluster init + script to change the &corosync; communication channels (or rings). For example, you might need + to switch from using two &corosync; rings to using a single ring with network device bonding, + or vice versa. + + + Changing &corosync; communication channels with &crmsh; + + + Stop the cluster services on all nodes: + +&prompt.root;crm cluster stop --all + + + + On the first node, change &corosync;'s ring configuration with one of the + following commands: + + + + + To configure one ring, run the script's stage + with no additional parameters: + +&prompt.root;crm cluster init corosync + + Accept the proposed network address or enter a different one, + for example, the address of bond0. + + + + + To configure two rings, specify two network addresses by using the + option --interface (or -i) twice: + +&prompt.root;crm cluster init corosync -i eth0 -i eth1 + + Accept the first network address and port. Enter y + to accept a second heartbeat line, then accept the second network address and port. + + + + + + + On each of the other cluster nodes, update the &corosync; configuration to match the + first node: + + + + + For one ring, run the following command: + +&prompt.root;crm cluster join corosync -c NODE1 + + + + For two rings, run the following command: + +&prompt.root;crm cluster join corosync -i eth0 -i eth1 -c NODE1 + + + + + + Start the cluster services on all nodes: + +&prompt.root;crm cluster start --all + +
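After the cluster services are back online, it is worth confirming that &corosync; is using the expected ring configuration and that the cluster has quorum. For example, the following standard commands can be run on any cluster node (output not shown here):

&prompt.root;corosync-cfgtool -s
&prompt.root;corosync-quorumtool -s
&prompt.root;crm status

corosync-cfgtool -s shows the status of the &corosync; links (rings) on the local node, corosync-quorumtool -s shows the quorum state and vote counts, and crm status gives an overview of the cluster nodes and resources.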