Merge pull request #605 from B1F030/doc-translate
doc: format command line prompts
karmada-bot authored May 21, 2024
2 parents e4d2352 + 646a498 commit 78c800c
Showing 48 changed files with 166 additions and 172 deletions.
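
The convention this commit applies can be illustrated with a short, self-contained shell sketch (illustrative only, not part of the diff): a leading `$ ` marks a command typed at a regular-user prompt, while a leading `#` doubles as both the traditional root prompt and the shell comment character, so a `# command` line pasted verbatim executes nothing.

```shell
#!/bin/sh
# A reader who copy-pastes a line beginning with `#` runs a comment, not a
# command -- the next line is silently ignored by the shell:
# kubectl get clusters
# With a `$ ` prompt instead, the reader knows to drop the prompt and type
# the command itself:
echo "command-after-prompt-ran"
```

This is why the updated docs reserve `#` for comments and use `$` (or no prompt at all) for commands meant to be copied.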
12 changes: 6 additions & 6 deletions docs/administrator/backup/working-with-velero.md
@@ -113,7 +113,7 @@ Velero consists of two components:

And then you will find nginx is deployed successfully.
```shell
-# kubectl get deployment.apps
+$ kubectl get deployment.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 2/2 2 2 17s
```
@@ -134,27 +134,27 @@ kubectl config use-context member2

In `member2`, we can also get the backup that we created in `member1`:
```shell
-# velero backup get
+$ velero backup get
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
nginx-backup Completed 0 0 2021-12-10 15:16:46 +0800 CST 29d default app=nginx
```

Restore `member1` resources to `member2`:
```shell
-# velero restore create --from-backup nginx-backup
+$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20211210151807" submitted successfully.
```

Watch the restore result; you'll find that the status is Completed.
```shell
-# velero restore get
+$ velero restore get
NAME BACKUP STATUS STARTED COMPLETED ERRORS WARNINGS CREATED SELECTOR
nginx-backup-20211210151807 nginx-backup Completed 2021-12-10 15:18:07 +0800 CST 2021-12-10 15:18:07 +0800 CST 0 0 2021-12-10 15:18:07 +0800 CST <none>
```
Then you will find that the nginx deployment has been restored successfully.
```shell
-# kubectl get deployment.apps/nginx
+$ kubectl get deployment.apps/nginx
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 2/2 2 2 21s
```
@@ -247,7 +247,7 @@ EOF
Then you will find that the nginx deployment has been restored on member2 successfully.
```shell
-# kubectl get deployment.apps/nginx
+$ kubectl get deployment.apps/nginx
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 2/2 2 2 10s
```
8 changes: 4 additions & 4 deletions docs/administrator/migration/migration-from-kubefed.md
@@ -50,8 +50,8 @@ object to describe the joined cluster.

Assume you use the `kubefedctl` tool to check the status of the joined clusters as follows:

-```
-kubectl -n kube-federation-system get kubefedclusters
+```bash
+$ kubectl -n kube-federation-system get kubefedclusters

NAME AGE READY KUBERNETES-VERSION
cluster1 1m True v1.21.2
@@ -60,8 +60,8 @@ cluster2 1m True v1.22.0

Now with Karmada, you can use `karmadactl` tool to do the same thing:

-```
-kubectl get clusters
+```bash
+$ kubectl get clusters

NAME VERSION MODE READY AGE
member1 v1.20.7 Push True 66s
4 changes: 2 additions & 2 deletions docs/developers/customize-karmada-scheduler.md
@@ -140,7 +140,7 @@ make image-karmada-scheduler
```

```shell
-kubectl --kubeconfig ~/.kube/karmada.config --context karmada-host edit deploy/karmada-scheduler -nkarmada-system
+$ kubectl --kubeconfig ~/.kube/karmada.config --context karmada-host edit deploy/karmada-scheduler -nkarmada-system
...
spec:
automountServiceAccountToken: false
@@ -175,7 +175,7 @@ You can config the plugin enablement by setting the flag `--plugins`.
For example, the following config will disable `TestFilter` plugin.

```shell
-kubectl --kubeconfig ~/.kube/karmada.config --context karmada-host edit deploy/karmada-scheduler -nkarmada-system
+$ kubectl --kubeconfig ~/.kube/karmada.config --context karmada-host edit deploy/karmada-scheduler -nkarmada-system
...
spec:
automountServiceAccountToken: false
4 changes: 2 additions & 2 deletions docs/developers/document-releasing.md
@@ -54,8 +54,8 @@ go build ./hack/tools/gencomponentdocs/.
1. Update versions.json

```shell
-cd website/
-vim versions.json
+$ cd website/
+$ vim versions.json

[
v1.5 # add a new version tag
6 changes: 3 additions & 3 deletions docs/developers/performance-test-setup-for-karmada.md
@@ -56,7 +56,7 @@ kubectl apply -f fakekubelet.yml
Run `kubectl get node` and you will find the fake nodes.

```shell
-> kubectl get node -o wide
+$ kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
fake-0 Ready agent 10s fake 10.88.0.136 <none> <unknown> <unknown> <unknown>
fake-1 Ready agent 10s fake 10.88.0.136 <none> <unknown> <unknown> <unknown>
@@ -68,7 +68,7 @@ fake-4 Ready agent 10s fake 10.88.0.136 <none> <unkno
Deploy a sample deployment to test:

```shell
-> kubectl apply -f - <<EOF
+kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -107,7 +107,7 @@ EOF
Run `kubectl get pod` and you will find that the pods have started, although the image does not exist.

```shell
-> kubectl get pod -o wide
+$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
fake-pod-78884479b7-52qcx 1/1 Running 0 6s 10.0.0.23 fake-4 <none> <none>
fake-pod-78884479b7-bd6nk 1/1 Running 0 6s 10.0.0.13 fake-2 <none> <none>
2 changes: 1 addition & 1 deletion docs/developers/profiling-karmada.md
@@ -40,7 +40,7 @@ The HTTP endpoint will now be available as a local port.
You can then generate the file for the memory profile with curl and pipe the data to a file:

```shell
-$ curl http://localhost:6060/debug/pprof/heap > heap.pprof
+curl http://localhost:6060/debug/pprof/heap > heap.pprof
```

Generate the file for the CPU profile with curl and pipe the data to a file (7200 seconds is two hours):
2 changes: 1 addition & 1 deletion docs/get-started/nginx-example.md
@@ -29,7 +29,7 @@ cd karmada
run the following script:

```
-# hack/local-up-karmada.sh
+hack/local-up-karmada.sh
```
This script will do the following tasks for you:
- Start a Kubernetes cluster to run the Karmada control plane, aka. the `host cluster`.
14 changes: 7 additions & 7 deletions docs/installation/install-binary.md
@@ -29,7 +29,7 @@ Step-by-step installation of binary high-availability `karmada` cluster.
Execute operations at `karmada-01` `karmada-02` `karmada-03`.

```bash
-vi /etc/hosts
+$ vi /etc/hosts
172.31.209.245 karmada-01
172.31.209.246 karmada-02
172.31.209.247 karmada-03
@@ -126,9 +126,9 @@ You normally don't need to change `*.sh` files.
### Step 3: Run Shell Scripts

```bash
-$ ./generate_ca.sh
-$ ./generate_leaf.sh ca_cert/
-$ ./generate_etcd.sh
+./generate_ca.sh
+./generate_leaf.sh ca_cert/
+./generate_etcd.sh
```


@@ -312,7 +312,7 @@ systemctl status etcd.service
### Verify

```bash
-etcdctl --cacert /etc/karmada/pki/etcd/ca.crt \
+$ etcdctl --cacert /etc/karmada/pki/etcd/ca.crt \
--cert /etc/karmada/pki/etcd/healthcheck-client.crt \
--key /etc/karmada/pki/etcd/healthcheck-client.key \
--endpoints "172.31.209.245:2379,172.31.209.246:2379,172.31.209.247:2379" \
@@ -515,7 +515,7 @@ Then, like `karmada-webhook`, use `nginx` for high availability.
Modify the `nginx` configuration and add the following configuration. Execute operations at `karmada-01`.

```bash
-cat /usr/local/karmada-nginx/conf/nginx.conf
+$ cat /usr/local/karmada-nginx/conf/nginx.conf
worker_processes 2;

events {
@@ -858,7 +858,7 @@ ok
Modify the `nginx` configuration and add the following configuration. Execute operations at `karmada-01`.

```bash
-cat /usr/local/karmada-nginx/conf/nginx.conf
+$ cat /usr/local/karmada-nginx/conf/nginx.conf
worker_processes 2;
events {
6 changes: 3 additions & 3 deletions docs/installation/installation.md
@@ -82,7 +82,7 @@ Step 2: Show members of karmada

The components of Karmada are installed in `karmada-system` namespace by default, you can get them by:
```bash
-kubectl get deployments -n karmada-system
+$ kubectl get deployments -n karmada-system
NAME READY UP-TO-DATE AVAILABLE AGE
karmada-aggregated-apiserver 1/1 1 1 102s
karmada-apiserver 1/1 1 1 2m34s
@@ -93,7 +93,7 @@ kube-controller-manager 1/1 1 1 2m3s
```
And the `karmada-etcd` is installed as the `StatefulSet`, get it by:
```bash
-kubectl get statefulsets -n karmada-system
+$ kubectl get statefulsets -n karmada-system
NAME READY AGE
etcd 1/1 28m
```
@@ -140,7 +140,7 @@ kubectl karmada init --crds https://github.com/karmada-io/karmada/releases/downl

Check installed components:
```bash
-kubectl get pods -n karmada-system --kubeconfig=$HOME/.kube/host.config
+$ kubectl get pods -n karmada-system --kubeconfig=$HOME/.kube/host.config
NAME READY STATUS RESTARTS AGE
etcd-0 1/1 Running 0 2m55s
karmada-aggregated-apiserver-84b45bf9b-n5gnk 1/1 Running 0 109s
2 changes: 1 addition & 1 deletion docs/tutorials/access-service-across-clusters.md
@@ -26,7 +26,7 @@ Note: In order to prevent routing conflicts, Pod and Service CIDRs of clusters n
To enable the MultiClusterService feature in the karmada-controller-manager, run the following command:

```shell
-$ kubectl --context karmada-host get deploy karmada-controller-manager -n karmada-system -o yaml | sed '/- --v=4/i \ - --feature-gates=MultiClusterService=true' | kubectl --context karmada-host replace -f -
+kubectl --context karmada-host get deploy karmada-controller-manager -n karmada-system -o yaml | sed '/- --v=4/i \ - --feature-gates=MultiClusterService=true' | kubectl --context karmada-host replace -f -
```

Please note that the MultiClusterService feature is disabled by default and can be enabled using the `--feature-gates=MultiClusterService=true` flag.
18 changes: 9 additions & 9 deletions docs/tutorials/autoscaling-with-custom-metrics.md
@@ -89,7 +89,7 @@ kubectl apply -f manifests/

You can verify the installation by the following command:
```sh
-kubectl --kubeconfig=/root/.kube/members.config --context=member1 get po -nmonitoring
+$ kubectl --kubeconfig=/root/.kube/members.config --context=member1 get po -nmonitoring
NAME READY STATUS RESTARTS AGE
alertmanager-main-0 2/2 Running 0 30h
alertmanager-main-1 2/2 Running 0 30h
@@ -254,9 +254,9 @@ data:
```

```sh
-$ kubectl apply -f prom-adapter.config.yaml
+kubectl apply -f prom-adapter.config.yaml
# Restart prom-adapter pods
-$ kubectl rollout restart deployment prometheus-adapter -n monitoring
+kubectl rollout restart deployment prometheus-adapter -n monitoring
```

## Register metrics API in `member1` and `member2` cluster
@@ -280,13 +280,13 @@ spec:
```

```sh
-$ kubectl create -f api-service.yaml
+kubectl create -f api-service.yaml
```

The API is registered as `custom.metrics.k8s.io/v1beta2`, and you can use the following command to verify:

```sh
-$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
+kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
```

The output is similar to:
@@ -419,9 +419,9 @@ derived-sample-app member1 ClusterIP 10.11.59.213 <none> 80/T
To send HTTP requests, you can use `hey`.
* Download `hey` and copy it to kind cluster container.
```sh
-$ wget https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64
-$ chmod +x hey_linux_amd64
-$ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey
+wget https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64
+chmod +x hey_linux_amd64
+docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey
```

## Test scaling up
@@ -442,7 +442,7 @@ $ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey

* Request the multi-cluster service with hey to increase the nginx pods' custom metrics (http_requests_total).
```sh
-$ docker exec member1-control-plane hey -c 1000 -z 1m http://10.11.59.213/metrics
+docker exec member1-control-plane hey -c 1000 -z 1m http://10.11.59.213/metrics
```

* Wait 15s, the replicas will be scaled up, then you can check the pod distribution again.
10 changes: 5 additions & 5 deletions docs/tutorials/autoscaling-with-resource-metrics.md
@@ -279,10 +279,10 @@ derived-nginx-service member1 ClusterIP 10.11.59.213 <none> 80/T

To send HTTP requests, here we use `hey`.
* Download `hey` and copy it to kind cluster container.
-```
-$ wget https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64
-$ chmod +x hey_linux_amd64
-$ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey
+```sh
+wget https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64
+chmod +x hey_linux_amd64
+docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey
```

## Test scaling up
@@ -303,7 +303,7 @@ $ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey

* Request multi-cluster service with hey to increase the nginx pods' CPU usage.
```sh
-$ docker exec member1-control-plane hey -c 1000 -z 1m http://10.11.59.213
+docker exec member1-control-plane hey -c 1000 -z 1m http://10.11.59.213
```

* Wait 15s, the replicas will be scaled up, then you can check the pod distribution again.
10 changes: 5 additions & 5 deletions docs/tutorials/resource-migration.md
@@ -19,10 +19,10 @@ So, this section will guide you to cover:
#### Step 1: Run the command

```shell
-$ git clone https://github.com/karmada-io/karmada
-$ cd karmada
-$ hack/local-up-karmada.sh
-$ export KUBECONFIG=~/.kube/karmada.config:~/.kube/members.config
+git clone https://github.com/karmada-io/karmada
+cd karmada
+hack/local-up-karmada.sh
+export KUBECONFIG=~/.kube/karmada.config:~/.kube/members.config
```

> **Note:**
@@ -37,7 +37,7 @@ $ export KUBECONFIG=~/.kube/karmada.config:~/.kube/members.config
#### Step 2: Run the command

```shell
-$ kubectl --context karmada-host get deploy karmada-controller-manager -n karmada-system -o yaml | sed '/- --failover-eviction-timeout=30s/{n;s/- --v=4/- --feature-gates=PropagationPolicyPreemption=true\n &/g}' | kubectl --context karmada-host replace -f -
+kubectl --context karmada-host get deploy karmada-controller-manager -n karmada-system -o yaml | sed '/- --failover-eviction-timeout=30s/{n;s/- --v=4/- --feature-gates=PropagationPolicyPreemption=true\n &/g}' | kubectl --context karmada-host replace -f -
```

> **Note:**
2 changes: 1 addition & 1 deletion docs/userguide/cicd/working-with-argocd.md
@@ -15,7 +15,7 @@ In this example, we are using a Karmada environment with at least `3` member clu
You can set up the environment by `hack/local-up-karmada.sh`, which is also used to run our E2E cases.

```bash
-# kubectl get clusters
+$ kubectl get clusters
NAME VERSION MODE READY AGE
member1 v1.19.1 Push True 18h
member2 v1.19.1 Push True 18h
8 changes: 4 additions & 4 deletions docs/userguide/clustermanager/cluster-registration.md
@@ -51,7 +51,7 @@ kubectl karmada join member1 --kubeconfig=<karmada kubeconfig> --karmada-context

Check the status of the joined clusters by using the following command.
```
-kubectl get clusters
+$ kubectl get clusters
NAME VERSION MODE READY AGE
member1 v1.20.7 Push True 66s
@@ -81,7 +81,7 @@ Be different from the `karmadactl join` which registers a cluster with `Push` mo
In the Karmada control plane, we can use the `karmadactl token create` command to create bootstrap tokens whose default TTL is 24h.

```
-$ karmadactl token create --print-register-command --kubeconfig /etc/karmada/karmada-apiserver.config
+karmadactl token create --print-register-command --kubeconfig /etc/karmada/karmada-apiserver.config
```

```
@@ -97,7 +97,7 @@ For more details about `bootstrap token` please refer to:
In the Kubernetes control plane of the member cluster, we also need the member cluster's `kubeconfig` file. Then we execute the output of the `karmadactl register` command provided above.

```
-$ karmadactl register 10.10.x.x:32443 --token t2jgtm.9nybj0526mjw1jbf --discovery-token-ca-cert-hash sha256:f5a5a43869bb44577dba582e794c3e3750f2050d62f1b1dc80fd3d6a371b6ed4
+karmadactl register 10.10.x.x:32443 --token t2jgtm.9nybj0526mjw1jbf --discovery-token-ca-cert-hash sha256:f5a5a43869bb44577dba582e794c3e3750f2050d62f1b1dc80fd3d6a371b6ed4
```

```
Expand All @@ -124,7 +124,7 @@ Once deployed, `the karmada-agent` will automatically register the cluster durin

Check the status of the registered clusters by using the same command above.
```
-kubectl get clusters
+$ kubectl get clusters
NAME VERSION MODE READY AGE
member3 v1.20.7 Pull True 66s
```
4 changes: 2 additions & 2 deletions docs/userguide/failover/application-failover.md
@@ -152,9 +152,9 @@ Now the application is scheduled into member2 and these two replicas run normally
```shell
# mark node "member2-control-plane" as unschedulable in cluster member2
-$ kubectl --context member2 cordon member2-control-plane
+kubectl --context member2 cordon member2-control-plane
# delete the pod in cluster member2
-$ kubectl --context member2 delete pod -l app=nginx
+kubectl --context member2 delete pod -l app=nginx
```

You can immediately find that the deployment is unhealthy now from the ResourceBinding.
