CLI:
- log in:
oc login --server=<OPENSHIFT_API_URL> --token=<YOUR_TOKEN>
- set active project:
oc project <PROJECT_NAME>
or create project:
oc new-project <PROJECT_NAME>
oc apply -k cluster-scope/base/
wait for the ODF and Loki operators to finish installing before continuing
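One way to check operator status from the CLI (the namespaces are assumptions: ODF usually installs into openshift-storage and the Loki Operator into openshift-operators-redhat):
oc get csv -n openshift-storage
oc get csv -n openshift-operators-redhat
# each ClusterServiceVersion should show PHASE Succeeded before you continue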
CLI:
oc apply -k odf/
GUI: may need to create the StorageCluster via the GUI. I have not found how to do this step in the CLI; a rough sketch of an equivalent object is included after this list.
- Admin > Installed Operators > 'Create StorageSystem'
- Backing storage: Deployment type = Full deployment > Backing storage type = Use an existing StorageClass > StorageClass = gp2 > Click Next
- Capacity and nodes: select at least 3 nodes > Click Next
- Security and network: Click Next
- Review and create: Click Create StorageSystem. If you get an Error 404: Page Not Found, ignore it.
Wait for ocs-storagecluster to finish initializing. ~5-10 minutes
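For reference, a minimal, untested sketch of what the equivalent CLI object might look like (the device-set sizing and 512Gi capacity are assumptions; check the ODF docs for your version before using):
cat <<EOF | oc apply -f -
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
  - name: ocs-deviceset-gp2
    count: 1
    replica: 3               # mirrors the "at least 3 nodes" GUI step
    portable: true
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 512Gi          # assumed size; adjust to your capacity
        storageClassName: gp2       # the existing StorageClass chosen in the GUI
        volumeMode: Block
EOF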
oc apply -k loki/base/backingstores/
oc apply -k loki/base/bucketclasses/
oc apply -k loki/base/storageclasses/
oc apply -k loki/base/objectbucketclaims/
oc apply -k loki/base/lokistacks/
ACCESS_KEY_ID=$(oc -n openshift-logging get secrets/logging-objectbucketclaim -o jsonpath={.data.AWS_ACCESS_KEY_ID} | base64 -d)
AWS_SECRET_ACCESS_KEY=$(oc -n openshift-logging get secrets/logging-objectbucketclaim -o jsonpath={.data.AWS_SECRET_ACCESS_KEY} | base64 -d)
BUCKET_NAME=$(oc -n openshift-logging get configmap/logging-objectbucketclaim -o jsonpath={.data.BUCKET_NAME})
BUCKET_HOST=$(oc -n openshift-logging get configmap/logging-objectbucketclaim -o jsonpath={.data.BUCKET_HOST})
oc -n openshift-logging create secret generic thanos-object-storage \
--from-literal="access_key_id=${ACCESS_KEY_ID}" \
--from-literal="access_key_secret=${AWS_SECRET_ACCESS_KEY}" \
--from-literal="bucketnames=${BUCKET_NAME}" \
--from-literal="endpoint=https://${BUCKET_HOST}"
# wait for all to finish init. ~10 minutes
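A quick way to check progress before applying the logging config (assuming everything lives in openshift-logging):
oc -n openshift-logging get lokistack
oc -n openshift-logging get objectbucketclaim
oc -n openshift-logging get pods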
oc apply -k logging/
oc -n openshift-logging create secret generic lokistack-gateway-bearer-token --from-literal=token="$(oc -n openshift-logging get secret logcollector-token --template='{{.data.token | base64decode}}')" --from-literal=ca-bundle.crt="$(oc -n openshift-logging get configmap openshift-service-ca.crt --template='{{index .data "service-ca.crt"}}')"
GUI: OpenShift console > Installed Operators > Red Hat OpenShift Logging > Console plugin > Enable console plugin. You will need to wait for the console to signal a reload; this takes a while.
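A possible CLI alternative (untested; assumes the plugin is named logging-view-plugin and that consoles.operator.openshift.io/cluster already has a spec.plugins list to append to):
oc patch consoles.operator.openshift.io cluster --type=json \
  -p '[{"op": "add", "path": "/spec/plugins/-", "value": "logging-view-plugin"}]'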
I've not used the following commands and they may not be entirely correct; they were found on Stack Overflow.
CLI:
oc exec -it <CEPH_POD_NAME> -- radosgw-admin user create --uid="sourceuser" --display-name="Source User"
CLI:
oc exec -it <CEPH_POD_NAME> -- radosgw-admin user create --uid="destinationuser" --display-name="Destination User"
CLI: on the source cluster, run s3cmd --configure and answer the prompts:
- S3 Endpoint: (Endpoint of Cluster A's Ceph RGW)
- Access Key: (Access key of "sourceuser" created above)
- Secret Key: (Secret key of "sourceuser" created above)
Make note of the .s3cfg file which is generated. This will be used to configure s3cmd on other containers or pods.
exit
# To exit from the pod
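For reference, the generated .s3cfg is an INI file along these lines (all values below are placeholders):
[default]
access_key = <SOURCEUSER_ACCESS_KEY>
secret_key = <SOURCEUSER_SECRET_KEY>
host_base = <CEPH_RGW_ENDPOINT>
host_bucket = <CEPH_RGW_ENDPOINT>
use_https = False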
oc create configmap s3cmd-config --from-file=$HOME/.s3cfg
oc run s3cmd-runner --image=alpine --restart=OnFailure -- sh -c "apk add --no-cache s3cmd && s3cmd sync s3://source-bucket/ s3://destination-bucket/ --config /config/.s3cfg"
oc set volume pod/s3cmd-runner --add -m /config --configmap-name=s3cmd-config
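Note: Kubernetes rejects adding volumes to a pod that already exists (pod volumes are immutable), so the oc set volume step above may fail and the pod may start before the config is mounted. A single pod manifest that mounts the ConfigMap from the start should be more reliable; a rough, untested sketch using the same image and bucket names as above:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: s3cmd-runner
spec:
  restartPolicy: OnFailure
  containers:
  - name: s3cmd
    image: alpine
    # alpine does not ship s3cmd, so install it before running the sync
    command: ["/bin/sh", "-c", "apk add --no-cache s3cmd && s3cmd sync s3://source-bucket/ s3://destination-bucket/ --config /config/.s3cfg"]
    volumeMounts:
    - name: config
      mountPath: /config
  volumes:
  - name: config
    configMap:
      name: s3cmd-config
EOF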
printf '#!/bin/sh\ns3cmd sync s3://source-bucket/ s3://destination-bucket/ --config /config/.s3cfg\n' > s3cmd_sync.sh
oc create configmap s3cmd-script --from-file=s3cmd_sync.sh
cat <<EOF | oc apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: s3cmd-sync-cron
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: s3cmd
            image: alpine
            # alpine does not ship s3cmd, so install it before running the sync script
            command: ["/bin/sh", "-c", "apk add --no-cache s3cmd && sh /script/s3cmd_sync.sh"]
            volumeMounts:
            - name: config
              mountPath: /config
            - name: script
              mountPath: /script
          volumes:
          - name: config
            configMap:
              name: s3cmd-config
          - name: script
            configMap:
              name: s3cmd-script
          restartPolicy: OnFailure
EOF
oc logs job/<NAME_OF_THE_JOB_FROM_CRONJOB>
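The job name is the CronJob name plus a numeric suffix derived from the schedule time; to find it and its pods:
oc get jobs
oc get pods -l job-name=<NAME_OF_THE_JOB_FROM_CRONJOB>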
s3cmd ls s3://source-bucket/
s3cmd ls s3://destination-bucket/
Define policies or automation scripts to delete older logs from the source bucket after they've been confirmed to exist in the destination bucket. This can also be done with s3cmd, e.g. by filtering its listing output by date before deleting; a rough sketch follows.
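A hypothetical cleanup sketch (untested; the bucket names, 30-day cutoff, and GNU date syntax are all assumptions):
#!/bin/sh
# Delete objects older than 30 days from the source bucket, but only if the
# same key already exists in the destination bucket.
CUTOFF=$(date -d '30 days ago' +%Y%m%d)
s3cmd ls --recursive --config /config/.s3cfg s3://source-bucket/ | while read -r day time size path; do
  d=$(echo "$day" | tr -d '-')          # 2024-01-31 -> 20240131
  [ "$d" -lt "$CUTOFF" ] || continue    # skip anything newer than the cutoff
  key=${path#s3://source-bucket/}
  # only delete once the object is confirmed to exist in the destination bucket
  if s3cmd info --config /config/.s3cfg "s3://destination-bucket/$key" >/dev/null 2>&1; then
    s3cmd del --config /config/.s3cfg "$path"
  fi
done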