[BUG]: CSI loses track of PV/PVC after Kubernetes upgrade #653
Comments
@rajbaratht: Thank you for submitting this issue! The issue is currently awaiting triage. Please make sure you have given us as much context as possible. If the maintainers determine this is a relevant issue, they will remove the needs-triage label and assign an appropriate priority label. We want your feedback! If you have any questions or suggestions regarding our contributing process/workflow, please reach out to us at [email protected].
Hi @rajbaratht, Thanks for reaching out to us.
It would help to know the RKE version used during the installation of 1.22, and whether a newer RKE version was used for the upgrade to 1.24.
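For anyone gathering the same details, the versions can be reported with the standard CLIs; `kube_config_cluster.yml` below is the kubeconfig RKE generates for a `cluster.yml` config, so adjust the path if yours differs:

```shell
# Report the RKE CLI version used for the install/upgrade
rke --version

# Report the Kubernetes server version the cluster is actually running
kubectl --kubeconfig kube_config_cluster.yml version
```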
@bandak2 We are using RKE v1, and we used the same RKE version before and after the upgrade.
@bandak2 I have created an SR with Dell, SR#161977353. I have uploaded logs from before and after the upgrade, along with a video of the issue.
Hi @bandak2, this issue is currently being investigated.
After an initial investigation, this looks like an RKE issue. We've filed request 3203 with Rancher and will follow up with them for a further look.
@bandak2 Request 3203 is still waiting for a response from Rancher. Not sure if you've seen it.
@bandak2 Is this still an issue, or did you find a resolution on your side? |
Bug Description
We recently upgraded Kubernetes from 1.22.x to 1.24.x and noticed warning events on the pod complaining that the PV could not be mounted because it was already mounted elsewhere. However, the pod was up and running, and the PV was visibly mounted inside the pod.
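For readers triaging similar symptoms, these conflicting-mount warnings can be surfaced with standard kubectl commands; `my-pod` and `my-ns` below are hypothetical placeholders:

```shell
# Show recent events for the affected pod, including FailedMount-style warnings
kubectl describe pod my-pod -n my-ns

# List only warning events in the namespace, newest last
kubectl get events -n my-ns --field-selector type=Warning --sort-by=.lastTimestamp

# Confirm the volume is actually mounted inside the running pod
kubectl exec my-pod -n my-ns -- df -h
```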
Logs
Screenshots
No response
Additional Environment Information
Steps to Reproduce
To fix the issue, we had to restart the Unity controller and node DaemonSet, and then restart all the nodes in the cluster. A sketch of that workaround is shown below.
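A minimal sketch of that workaround, assuming the driver was installed via Helm into a `unity` namespace with `unity-controller` and `unity-node` as the workload names (the namespace and names are assumptions, so adjust them to your release):

```shell
# Restart the CSI Unity controller pods (deployment name/namespace are assumptions)
kubectl rollout restart deployment unity-controller -n unity

# Restart the node plugin pods on every node
kubectl rollout restart daemonset unity-node -n unity

# Then drain and reboot each node in turn; <node> is a placeholder
kubectl drain <node> --ignore-daemonsets --delete-emptydir-data
# ...reboot the node, then bring it back into scheduling...
kubectl uncordon <node>
```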
Expected Behavior
After the Kubernetes upgrade to v1.24, there should be no volume-related warnings or errors in the pod events.
CSM Driver(s)
CSI driver v2.4
Installation Type
Helm
Container Storage Modules Enabled
No response
Container Orchestrator
Rancher v2.7.1 with Kubernetes v1.22.10
Operating System
RockyLinux 8.6