
OADP-1663 Per container security context discovery for MoverSecurityContext #204

Open
kaovilai opened this issue Apr 12, 2023 · 1 comment
kaovilai commented Apr 12, 2023

We should discover the securityContext to pass to VolSync per container that mounts the volumes in question, if possible, rather than using the per-pod securityContext.
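To make the idea concrete, here is an illustrative sketch (not the OADP implementation) of what per-container discovery could look like, operating on a pod spec represented as a plain dict. The helper name and the merge semantics (container-level fields overriding pod-level ones) are assumptions for illustration:

```python
def security_contexts_for_volume(pod_spec, volume_name):
    """Return the effective securityContext for each container that mounts volume_name.

    Container-level securityContext fields override pod-level ones, which
    mirrors how Kubernetes resolves the two levels for overlapping fields.
    """
    pod_ctx = pod_spec.get("securityContext", {})
    contexts = {}
    for container in pod_spec.get("containers", []):
        mounts = container.get("volumeMounts", [])
        if any(m.get("name") == volume_name for m in mounts):
            contexts[container["name"]] = {**pod_ctx, **container.get("securityContext", {})}
    return contexts


pod = {
    "securityContext": {"fsGroup": 27},
    "containers": [
        {
            "name": "mysql",
            "securityContext": {"runAsUser": 27, "privileged": True},
            "volumeMounts": [{"name": "mysql-data", "mountPath": "/var/lib/mysql"}],
        },
        {"name": "sidecar", "volumeMounts": []},
    ],
}
print(security_contexts_for_volume(pod, "mysql-data"))
# → {'mysql': {'fsGroup': 27, 'runAsUser': 27, 'privileged': True}}
```

Only the container that actually mounts the volume contributes a context, which is exactly the granularity the pod-level securityContext cannot express.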

e2e debugging notes
For the mysql app with this securityContext:

            securityContext:
              privileged: true

Immediately after app install

sh-4.4$  ls -la /var/lib/mysql/
total 24
drwxr-xr-x. 4 root  root   4096 Apr 12 03:08 .
drwxr-xr-x. 1 root  root     53 Mar 28 11:48 ..
drwx------. 6 mysql mysql  4096 Apr 12 03:08 data
drwx------. 2 root  root  16384 Apr 12 03:08 lost+found
srwxrwxrwx. 1 mysql mysql     0 Apr 12 03:08 mysql.sock

After a 1.2 data mover restore with VolSync 0.7.1, mysql fails to come up due to data ownership issues. `whoami` for this container is `mysql`:

sh-4.4$ ls -la /var/lib/mysql/
total 28
drwxrwsr-x. 4 root       1000690000  4096 Apr 12 02:51 .
drwxr-xr-x. 1 root       root          53 Mar 28 11:48 ..
-rw-------. 1 mysql      1000690000    91 Apr 12 02:59 .bash_history
drwxrws---. 6 1000690000 1000690000  4096 Apr 12 02:47 data
drwxrws---. 2 root       1000690000 16384 Apr 12 01:59 lost+found

Ownership changes like this are unexpected.
Prior ownership issues were caused by the user's UID changing, because the UID was not fixed.

In this case the user's UID is the same (and we have an SCC to enforce it), but the UID/GID of the restored files appears to have been changed during the backup/restore process.

This looks like a new feature introduced in 0.6.0 called moverSecurityContext. The last version where the mysql data mover restore succeeded was VolSync 0.5.2.
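For reference, a sketch of where moverSecurityContext would sit on a VolSync resource, assuming the field lives under the mover spec of a ReplicationSource (resource names, the repository secret, and the UID/GID values here are placeholders, not our e2e config):

```yaml
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: mysql-backup          # placeholder name
spec:
  sourcePVC: mysql-data       # placeholder PVC
  restic:
    repository: restic-secret # placeholder secret
    moverSecurityContext:     # applied to the mover pod, per VolSync 0.6.0+
      runAsUser: 27
      runAsGroup: 27
      fsGroup: 27
```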

This was merged as an option into DPA 3 weeks ago. We need to enable this in e2e.

e2e periodics for 1.1 have been failing consistently for the mysql data mover ever since VolSync release v0.6.0, because we have not been pinning the VolSync version in e2e.
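One way to pin the operator version in e2e would be an OLM Subscription with manual install-plan approval and a startingCSV; a sketch, with channel and CSV names as assumptions:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: volsync-product
  namespace: openshift-operators
spec:
  name: volsync-product
  channel: stable               # assumed channel name
  installPlanApproval: Manual   # prevents OLM from auto-upgrading past the pin
  startingCSV: volsync-product.v0.5.2
```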

After enabling MoverSecurityContext, this is the result after restore.

sh-4.4$ ls -la /var/lib/mysql/
total 24
drwxrwsr-x. 4 root       1000690000  4096 Apr 12 04:07 .
drwxr-xr-x. 1 root       root          53 Mar 28 11:48 ..
drwxrws---. 6 1000690000 1000690000  4096 Apr 12 04:09 data
drwxrws---. 2 root       1000690000 16384 Apr 12 04:06 lost+found

Still not as expected. The mysql app wants to use its own UID; it does not use the UID from the pod securityContext to write data.

Looks like we might need to specify the UID of the pod to match the mariadb container image's Dockerfile `USER 27`, i.e.:

            securityContext:
              runAsGroup: 27
              runAsUser: 27
              fsGroup: 27
              privileged: true

Specifying the securityContext at the pod.spec.containers[].securityContext level is not sufficient; pod.spec.securityContext needs to be specified too. This would be problematic for multi-container pods where each container uses a different UID/GID.
e2e is working for the mysql data mover locally with these changes.
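A sketch of the e2e workaround pod spec, assuming UID/GID 27 to match the mariadb image's `USER 27` (field placement is standard Kubernetes; the values are specific to this app). Note that a second container needing a different UID could not be satisfied by this single pod-level context, which is the core of the issue:

```yaml
spec:
  securityContext:        # pod-level: required for restored-file ownership
    runAsUser: 27
    runAsGroup: 27
    fsGroup: 27
  containers:
    - name: mysql
      securityContext:    # container-level alone was not sufficient
        privileged: true
      volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
```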

TL;DR: VolSync release 0.6.0 changed the UID/GID handling of restored files and broke the mysql data mover e2e. We might be able to work around it for e2e, but there could be customer workloads where multiple UIDs/GIDs are used within a single pod and are not specified via the pod's securityContext; enabling moverSecurityContext alone may not be sufficient for those workloads.

@kaovilai (Member Author) commented:

/jira help

@kaovilai kaovilai changed the title Per container security context discovery for MoverSecurityContext OADP-1663 Per container security context discovery for MoverSecurityContext Apr 18, 2023