diff --git a/docs/administrator/migration/migrate-in-batch.md b/docs/administrator/migration/migrate-in-batch.md
index cf428396..ed383726 100644
--- a/docs/administrator/migration/migrate-in-batch.md
+++ b/docs/administrator/migration/migrate-in-batch.md
@@ -1,5 +1,5 @@
 ---
-title: Migrate In Batch
+title: Migrate In Batch and Rollback
 ---

 ## Scenario
@@ -17,10 +17,12 @@ So, how to migrate the existing resource?
 If you only want to migrate individual resources, you can just refer to [promote-legacy-workload](./promote-legacy-workload) to do it one by one.

-If you want to migrate a batch of resources, you are advised to take over all resources based on resource granularity through few `PropagationPolicy` at first,
-then if you have more propagate demands based on application granularity, you can apply higher priority `PropagationPolicy` to preempt them.
+If you want to migrate resources in batch, for example in the following two scenarios:

-Thus, how to take over all resources based on resource granularity? You can do as follows.
+* Migrate all resources of a certain type at the resource level.
+* Migrate all types of resources related to a specific application at the application level.
+
+Then you need to configure a `PropagationPolicy` to take over the corresponding resources, which can be done as follows:

 ![](../../resources/administrator/migrate-in-batch-2.jpg)
@@ -40,9 +42,9 @@ Edit a [PropagationPolicy](https://karmada.io/docs/core-concepts/concepts#propagationpolicy)
 * `spec.conflictResolution: Overwrite`:**the value must be [Overwrite](https://github.com/karmada-io/karmada/blob/master/docs/proposals/migration/design-of-seamless-cluster-migration-scheme.md#proposal).**
 * `spec.resourceSelectors`:defining which resources are selected to migrate

-here we provide two examples:
+here we provide three examples:

-#### Eg1. migrate all deployments
+#### Eg1. Migrate all resources of the Deployment type

 If you want to migrate all deployments from `member1` cluster to Karmada, you shall apply:
@@ -64,7 +66,7 @@ spec:
   schedulerName: default-scheduler
 ```

-#### Eg2. migrate all services
+#### Eg2. Migrate all resources of the Service type

 If you want to migrate all services from `member1` cluster to Karmada, you shall apply:
@@ -86,12 +88,84 @@ spec:
   schedulerName: default-scheduler
 ```

+#### Eg3. Migrate all resources related to a specific application
+
+Assuming a specific application consists of `deployment/nginx` and `service/nginx-svc`,
+and you want to migrate the resources related to this application from the `member1` cluster to Karmada,
+you need to apply the following configuration:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-pp
+spec:
+  conflictResolution: Overwrite
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+  priority: 0
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+    - apiVersion: v1
+      kind: Service
+      name: nginx-svc
+  schedulerName: default-scheduler
+```
+
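+Alternatively, if the Deployment and Service that make up the application share a common label, the application's resources could also be selected by label rather than by name. The policy below is only a sketch: the policy name `nginx-pp-by-label` and the label `app: nginx` are illustrative and assume both resources already carry that label.
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-pp-by-label   # illustrative name
+spec:
+  conflictResolution: Overwrite
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+  priority: 0
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      labelSelector:
+        matchLabels:
+          app: nginx         # assumed label; adjust to your application's labels
+    - apiVersion: v1
+      kind: Service
+      labelSelector:
+        matchLabels:
+          app: nginx
+  schedulerName: default-scheduler
+```
+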
 ### Step four

 The rest migration operations will be finished by Karmada automatically.

-## PropagationPolicy Preemption and Demo
+## How to Roll Back Migration Operations
+
+After resources are migrated to Karmada, if the user deletes the resource template, by default,
+the resources in the member clusters will also be deleted. However, in certain scenarios,
+users may wish to preserve the resources in the member clusters even after the resource template is deleted.
-Besides, if you have more propagate demands based on application granularity, you can apply higher priority `PropagationPolicy`
-to preempt those you applied in the migration mentioned above. Detail demo you can refer to the tutorial [Resource Migration](../../tutorials/resource-migration.md)
+For example, as an administrator, you may encounter unexpected situations during workload migration
+(such as the cloud platform failing to deploy the application or Pod anomalies), and need a rollback mechanism to
+quickly restore to the state prior to migration in order to minimize losses.
+
+To support the above scenarios, Karmada provides the `spec.preserveResourcesOnDeletion` field in the PropagationPolicy
+to control whether resources should be preserved on the member clusters when the resource template is deleted.
+If set to true, resources will be preserved on the member clusters.
+The default is false, which means resources will be deleted along with the resource template.
+
+> When using this field, please note:
+>
+> * This setting applies uniformly across all member clusters and will not selectively control preservation on only some clusters.
+> * This setting does not apply to the deletion of the policy itself. When the policy is deleted,
+>   the resource templates and their corresponding propagated resources in member clusters will remain unchanged unless explicitly deleted.
+
+Taking the `PropagationPolicy` from Eg3 as an example,
+the user should modify the `PropagationPolicy` as follows before deleting the resource template:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-pp
+spec:
+  conflictResolution: Overwrite
+  preserveResourcesOnDeletion: true # preserve member clusters' resources when resource template is deleted.
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+  priority: 0
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+    - apiVersion: v1
+      kind: Service
+      name: nginx-svc
+  schedulerName: default-scheduler
+```
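+
+If you would rather not re-apply the whole manifest just to flip this switch, the same change can also be expressed as a small patch. The snippet below is only a sketch: the file name `rollback-patch.yaml` and the `karmada-apiserver` kubectl context are illustrative.
+
+```yaml
+# rollback-patch.yaml: sets only spec.preserveResourcesOnDeletion and leaves the rest of the policy untouched.
+# It could be applied with, for example:
+#   kubectl --context karmada-apiserver patch propagationpolicy nginx-pp --type merge --patch-file rollback-patch.yaml
+spec:
+  preserveResourcesOnDeletion: true
+```
+
+Once the field is set, deleting `deployment/nginx` and `service/nginx-svc` from the Karmada control plane removes the resource templates while the copies in `member1` are preserved.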
+
+This concludes the introduction to migration in batch and rollback.
+For a detailed demo, you can refer to the tutorial: [Seamless Migration and Rollback](../../tutorials/resource-migration.md).
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/administrator/migration/migrate-in-batch.md b/i18n/zh/docusaurus-plugin-content-docs/current/administrator/migration/migrate-in-batch.md
index 1fcdab5c..6c7b600c 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/administrator/migration/migrate-in-batch.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/administrator/migration/migrate-in-batch.md
@@ -1,5 +1,5 @@
 ---
-title: 批量迁移
+title: 批量迁移及回滚
 ---

 ## 场景
@@ -17,10 +17,12 @@ title: 批量迁移
 如果用户只需迁移个别资源,参考 [promote-legacy-workload](./promote-legacy-workload) 逐个资源迁移即可。

-如果用户想批量迁移资源,建议您首先通过几个 `PropagationPolicy` 以资源为粒度接管所有资源,
-然后,如果您还有其他以应用为粒度的分发需求,则可以应用更高优先级的 `PropagationPolicy` 来抢占它们。
+如果用户想批量迁移资源,例如以下两种场景:

-那么,如何以资源为粒度接管所有资源?你可以按如下操作:
+* 以资源为粒度,迁移某种类型的全部资源
+* 以应用为粒度,迁移某个应用涉及的所有类型的资源
+
+那么,您需要通过配置 `PropagationPolicy` 来接管相应资源,可以按如下操作:

 ![](../../resources/administrator/migrate-in-batch-2.jpg)
@@ -39,9 +41,9 @@ title: 批量迁移
 * `spec.conflictResolution: Overwrite`:**该字段的值必须是 [Overwrite](https://github.com/karmada-io/karmada/blob/master/docs/proposals/migration/design-of-seamless-cluster-migration-scheme.md#proposal)。**
 * `spec.resourceSelectors`:指定哪些资源需要被迁移。

-这里提供两个例子:
+这里提供三个例子:

-#### 示例 1. 迁移所有的 Deployment
+#### 示例 1. 迁移 Deployment 类型的全部资源

 如果您希望把所有的 Deployment 从 `member1` 集群迁移到 Karmada,你需要应用以下配置:
@@ -63,7 +65,7 @@ spec:
   schedulerName: default-scheduler
 ```

-#### 示例 2. 迁移所有的 Service
+#### 示例 2. 迁移 Service 类型的全部资源

 如果您希望把所有的 Service 从 `member1` 集群迁移到 Karmada,你需要应用以下配置:
@@ -85,11 +87,76 @@ spec:
   schedulerName: default-scheduler
 ```

+#### 示例 3. 迁移某个应用涉及的所有资源
+
+假设某个应用由 `deployment/nginx` 和 `service/nginx-svc` 组成,
+您希望把该应用涉及的资源从 `member1` 集群一起迁移到 Karmada,你需要应用以下配置:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-pp
+spec:
+  conflictResolution: Overwrite
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+  priority: 0
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+    - apiVersion: v1
+      kind: Service
+      name: nginx-svc
+  schedulerName: default-scheduler
+```
+
 ### 步骤四

 余下的迁移操作将由Karmada自动完成。

-## PropagationPolicy 抢占及 Demo
+## 迁移操作如何回滚
+
+资源被迁移到 Karmada 后,若用户删除资源模板,默认情况下成员集群的资源也会随之删除。然而在某些场景下,用户希望资源模板删除后,
+成员集群资源依然保留。
+
+例如,作为管理员,在工作负载迁移过程中可能遇到意外情况(如云平台无法发布应用程序或 Pod 异常),
+需要回滚机制立刻恢复到迁移之前的状态,以便快速止损。
+
+为了满足以上场景,Karmada 在 PropagationPolicy 中提供了 `spec.preserveResourcesOnDeletion` 字段来控制删除资源模板时,
+成员集群上的资源是否应被保留。如果设置为 true,资源将在成员集群上被保留。默认值为 false,即资源将与资源模板一起被删除。
+
+> 使用该字段请注意以下两点:
+>
+> * 该配置在所有成员集群中统一生效,不会仅针对某些集群进行选择性控制。
+> * 该配置不适用于 Policy 本身的删除,当 Policy 被删除时,资源模板及已分发的成员集群资源将保持不变,除非被显式删除。
+
+以 `示例 3` 的 PropagationPolicy 为例,用户删除资源模板前需将 PropagationPolicy 修改成如下所示:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-pp
+spec:
+  conflictResolution: Overwrite
+  preserveResourcesOnDeletion: true # 资源模板删除后,成员集群资源依然保留
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+  priority: 0
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+    - apiVersion: v1
+      kind: Service
+      name: nginx-svc
+  schedulerName: default-scheduler
+```

-在此基础上,如果您还有以应用为粒度的分发需求,则可以应用更高优先级的 `PropagationPolicy` 来抢占上文提及的用于迁移的 Policy。
-详细的 Demo 可以参考教程:[平滑迁移](../../tutorials/resource-migration.md)
+以上就是批量迁移及回滚的介绍,详细的 Demo 可以参考教程:[平滑迁移及回滚](../../tutorials/resource-migration.md)。