Healthcheck is not reporting error for buckets without access #8428

Open
anandhu-karattu opened this issue Oct 3, 2024 · 5 comments

@anandhu-karattu

Environment info

  • NooBaa Version: noobaa-core-5.17.0-20240926.el9.x86_64
  • Platform: Standalone Noobaa

This test includes the parameter "NC_DISABLE_ACCESS_CHECK=true".

Expected behavior

  1. If the account does not have read (r) access to the bucket's directory path, the health check should report this as an error.

Actual behavior

  1. Buckets are displayed as healthy even though the account does not have read (r) access to the bucket's directory path.

Steps to reproduce

  1. set "NC_DISABLE_ACCESS_CHECK=true"

  2. Create an account with proper access to its new_buckets_path. (account-3)

  3. Create a new directory owned by root with mode bits 770
    [root@522new-41 ~]# ls -ld /mnt/gpfs0/buck3/
    drwxrwx---. 2 root root 4096 Oct 3 11:20 /mnt/gpfs0/buck3/

  4. Create a bucket with account-3. This succeeds as the RW access check is turned off.

  5. Change "NC_DISABLE_ACCESS_CHECK=false"

  6. All other accounts with no access to their --new_buckets_path are reported as errors. However, buck3 shows no error in the health check output.

[root@522new-41 ~]# noobaa-cli diagnose health --all_bucket_details 2>/dev/null
{
  "response": {
    "code": "HealthStatus",
    "reply": {
      "service_name": "noobaa",
      "status": "OK",
      "memory": "316.7M",
      "checks": {
        "services": [
          {
            "name": "noobaa",
            "service_status": "active",
            "pid": "1367807"
          }
        ],
        "endpoint": {
          "endpoint_state": {
            "response": {
              "response_code": "RUNNING",
              "response_message": "Endpoint running successfuly."
            },
            "total_fork_count": 2,
            "running_workers": [
              2,
              1
            ]
          },
          "error_type": "TEMPORARY"
        },
        "buckets_status": {
          "invalid_buckets": [],
          "valid_buckets": [
            {
              "name": "buck3",
              "storage_path": "/mnt/gpfs0/buck3/"
            },
            {
              "name": "buck1",
              "storage_path": "/mnt/gpfs0/test/account-75000/buck1"
            },
            {
              "name": "buck2",
              "storage_path": "/mnt/gpfs0/test/account-75000/buck2"
            }
          ],
          "error_type": "PERSISTENT"
        }
      }
    }
  }
}

More information - Screenshots / Logs / Other output

@romayalon
Contributor

@anandhu-karattu I didn't add it yet, this is a duplicate of #8240.
Currently, we check only if the bucket storage path exists, and not if the owner of the bucket has access to the bucket's path.
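
For illustration, the current check versus the missing one (a minimal sketch; owner_fs_context and check_access are hypothetical names, the current code has no owner context):

// Today: does the path exist? fs_context carries the root uid/gid,
// so this cannot fail on permissions.
await nb_native().fs.stat(fs_context, storage_path);

// Missing: can the bucket owner read the path? Something like an
// access(R_OK)-style check under the owner's uid/gid would be needed.
await check_access(owner_fs_context, storage_path); // hypothetical helper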

@naveenpaul1
Contributor

naveenpaul1 commented Oct 18, 2024

@anandhu-karattu @romayalon When config.NC_DISABLE_ACCESS_CHECK = false, all buckets whose owners don't have access are listed in the invalid_buckets array. When NC_DISABLE_ACCESS_CHECK = true, the access check is skipped and the buckets are listed in the valid_buckets array. The NooBaa code explicitly checks config.NC_DISABLE_ACCESS_CHECK before checking access to the bucket path. We need to discuss whether the access check should honor NC_DISABLE_ACCESS_CHECK or not.
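
Paraphrased, the gating described above looks like this (a sketch of the flow, not the verbatim NooBaa source):

// Sketch of the described flow, not verbatim NooBaa code:
if (config.NC_DISABLE_ACCESS_CHECK) {
    // access check skipped -> bucket ends up in valid_buckets
} else {
    // access check performed -> buckets that fail it go to invalid_buckets
}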

@romayalon
Contributor

@naveenpaul1 in step 5 of the bug they set NC_DISABLE_ACCESS_CHECK back to false.
We don't use the bucket owner's uid/gid; we always use the root user for this check. This is the gap.
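
The gap is easy to demonstrate, since an access check performed as root succeeds on a root-owned mode-770 directory that the bucket owner cannot read. A minimal Node sketch (uid/gid 7777 is just an example non-root identity; must be started as root, POSIX only):

const fs = require('fs');

// As root this succeeds even though the directory is root:root with mode 770.
fs.accessSync('/mnt/gpfs0/buck3/', fs.constants.R_OK | fs.constants.X_OK);

// After dropping to a non-root identity, the same check throws EACCES.
process.setgid(7777);
process.setuid(7777);
fs.accessSync('/mnt/gpfs0/buck3/', fs.constants.R_OK | fs.constants.X_OK); // throws EACCES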

@naveenpaul1
Contributor

@romayalon, @anandhu-karattu and I had a call, and we could see that access-restricted buckets are listed in invalid_buckets when NC_DISABLE_ACCESS_CHECK is true; otherwise they are listed in valid_buckets. There was some confusion while he tested it, and we clarified it on that call.

@romayalon
Contributor

@naveenpaul1 I don't know what was mentioned in the meeting, but -

  1. There is no code in the health CLI that will fail a bucket owner on the access check, because we always use the root uid/gid for this check -
async function is_bucket_storage_path_exists(fs_context, config_data, storage_path) {
    let res_obj;
    try {
        if (!_should_skip_health_access_check()) {
            await nb_native().fs.stat(fs_context, storage_path); // NOTE - fs_context is the root uid/gid and not the bucket owner
        }
....
  2. There is another case of this issue here - Access Denied for S3 buckets is not reporting #8240.

3. Here is a reproduction of Anandhu's bug -

  1. Set NC_DISABLE_ACCESS_CHECK=true in config.json
  2. Create a directory for new_buckets_path using root and set 770 permissions -
sudo mkdir /private/tmp/root_dir/
sudo chmod 770 /private/tmp/root_dir/
  3. Create an account with uid/gid 7777 (no permissions to the directory) -
sudo node src/cmd/manage_nsfs.js account add --name account4 --uid 7777 --gid 7777 --new_buckets_path=/private/tmp/root_dir/
  4. Create a directory for the bucket path using root and set 770 permissions -
sudo mkdir /private/tmp/root_dir/buck4_storage_path/
sudo chmod 770 /private/tmp/root_dir/buck4_storage_path/
  5. Create a bucket with account4 (uid/gid 7777) -
sudo node --trace-warnings src/cmd/manage_nsfs.js bucket add --name bucket4 --owner account4 --path /private/tmp/root_dir/buck4_storage_path
  6. Set NC_DISABLE_ACCESS_CHECK=false in config.json
  7. Run the health script -

Output - 
...
{
  "response": {
    "code": "HealthStatus",
    "reply": {
      "service_name": "noobaa",
      "status": "NOTOK",
      "memory": "missing memory info",
      "error": {
        "error_code": "NOOBAA_SERVICE_FAILED",
        "error_message": "NooBaa service is not started properly, Please verify the service with status command."
      },
      "checks": {
        "services": [
          {
            "name": "noobaa",
            "service_status": "missing service status info",
            "pid": "missing pid info",
            "error_type": "PERSISTENT",
            "error_code": {
              "error_code": "NOOBAA_SERVICE_FAILED",
              "error_message": "NooBaa service is not started properly, Please verify the service with status command."
            }
          }
        ],
        "endpoint": {
          "endpoint_state": {
            "response": {
              "response_code": "NOT_RUNNING",
              "response_message": "Endpoint proccess not running."
            }
          },
          "error_type": "TEMPORARY"
        },
        "accounts_status": {
          "invalid_accounts": [
            {
              "name": "account4",
              "storage_path": "/private/tmp/root/dir/",
              "code": "STORAGE_NOT_EXIST"
            }
          ],
          "valid_accounts": [],
          "error_type": "PERSISTENT"
        },
        "buckets_status": {
          "invalid_buckets": [],
          "valid_buckets": [
            {
              "name": "bucket4",
              "storage_path": "/private/tmp/root_dir/buck4_storage_path"
            }
          ],
          "error_type": "PERSISTENT"
        }
      }
    }
  }
}

You can see here that the account is in invalid_accounts but the bucket is in valid_buckets.
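
An owner-aware variant of the check quoted above would close this gap. A minimal sketch, assuming an fs_context can be built from the bucket owner's uid/gid (is_bucket_storage_path_accessible and make_fs_context are illustrative names, not existing NooBaa functions):

// Sketch only - the helper names are hypothetical:
async function is_bucket_storage_path_accessible(bucket_owner, storage_path) {
    const owner_fs_context = make_fs_context(bucket_owner.uid, bucket_owner.gid);
    try {
        await nb_native().fs.stat(owner_fs_context, storage_path);
        return true;  // reachable by the owner -> valid_buckets
    } catch (err) {
        return false; // ENOENT or EACCES under the owner's uid/gid -> invalid_buckets
    }
}

Run with account4's uid/gid (7777) instead of root, the stat of /private/tmp/root_dir/buck4_storage_path would fail with EACCES (no search permission on root_dir), and bucket4 would land in invalid_buckets next to its account. Note that for the original buck3 case, where only the leaf directory is mode 770, a stat of the leaf still succeeds; an access(R_OK)-style check under the owner's identity would be needed to catch that variant.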
