
refactor to use the e2e test script #580

Open
michelle-yooh wants to merge 2 commits into master from yooh/gpu-moe-dag-refactor
Conversation

michelle-yooh (Collaborator) commented on Jan 25, 2025

Description

This PR refactors the maxtext_moe_gpu_e2e tests to use the MaxText e2e test script instead of the direct command. It requires AI-Hypercomputer/maxtext#1191 to be merged first.
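
For context, a minimal sketch of the shape of this change; the script path, flags, and model name below are hypothetical placeholders, not the actual values from AI-Hypercomputer/maxtext#1191:

# Before (illustrative): the DAG assembled the MaxText training command inline.
run_command = (
    "python3 MaxText/train.py MaxText/configs/base.yml"  # hypothetical flags
    " model_name=my-moe-model steps=30",
)

# After (illustrative): the DAG delegates to an e2e test script maintained in
# the MaxText repo, so the command details live alongside MaxText itself.
run_command = ("bash end_to_end/gpu/my_moe_test.sh",)  # hypothetical script path

The benefit is that command changes in MaxText no longer require a matching edit in this repo's DAGs.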

Tests

Please describe the tests that you ran on Cloud VM to verify changes.

Instruction and/or command lines to reproduce your tests: ...

List links for your tests (use go/shortn-gen for any internal link): ...

Screenshot of the test result:
http://shortn/_dNMK6pqBn9

Bugs for the failed tests:
  • Pinned test on 2 A3 nodes: http://shortn/_WN4xgkG3Hj
  • Pinned test on 2 A3+ nodes: http://shortn/_zKqVme3CFO

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run one-shot tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed.

Comment on lines 40 to 43
  region = gke.zone_to_region(zone)

  # Commands that point gcloud and kubectl at the target project and cluster.
  gcloud_command = (
      f"gcloud config set project {project}",
      "sudo chown -R airflow:airflow /home/airflow/composer_kube_config",
      f"gcloud container clusters get-credentials {cluster_name}"
      f" --region {region}",
  )
  return gcloud_command


def resize_a3_cluster(cluster_name: str, zone: str, num_nodes: int):
  # Build the gcloud command that resizes the cluster's node pool.
  region = gke.zone_to_region(zone)
  node_pool = f"{cluster_name}-np-0"

  gcloud_command = (
      f"gcloud container clusters resize {cluster_name}"
      f" --quiet --region {region}"
      f" --node-pool {node_pool}"
      f" --num-nodes {num_nodes}",
  )
  return gcloud_command


def wait_for_cluster_ready():
  # Block until every node in the cluster reports Ready.
  kubectl_command = (
      "kubectl wait --for=condition=Ready nodes --all --timeout=5m",
  )
  return kubectl_command


@task
def scale_up_a3_cluster():
  # Scale the shared A3 cluster up to A3_NUM_NODES and wait for the nodes
  # to become Ready before any test runs.
  with tempfile.TemporaryDirectory() as tmpdir:
    hook = SubprocessHook()

    result = hook.run_command(
        [
            "bash",
            "-c",
            ";".join(
                configure_project_and_cluster(
                    Project.SUPERCOMPUTER_TESTING.value,
                    XpkClusters.GPU_A3_CLUSTER.name,
                    XpkClusters.GPU_A3_CLUSTER.zone,
                )
                + resize_a3_cluster(
                    XpkClusters.GPU_A3_CLUSTER.name,
                    XpkClusters.GPU_A3_CLUSTER.zone,
                    A3_NUM_NODES,
                )
                + wait_for_cluster_ready()
            ),
        ],
        cwd=tmpdir,
    )
    assert result.exit_code == 0, f"Command failed with code {result.exit_code}"


@task
def scale_down_a3_cluster():
  # Scale the shared A3 cluster back down to zero nodes after the tests.
  with tempfile.TemporaryDirectory() as tmpdir:
    hook = SubprocessHook()

    result = hook.run_command(
        [
            "bash",
            "-c",
            ";".join(
                configure_project_and_cluster(
                    Project.SUPERCOMPUTER_TESTING.value,
                    XpkClusters.GPU_A3_CLUSTER.name,
                    XpkClusters.GPU_A3_CLUSTER.zone,
                )
                + resize_a3_cluster(
                    XpkClusters.GPU_A3_CLUSTER.name,
                    XpkClusters.GPU_A3_CLUSTER.zone,
                    0,
                )
            ),
        ],
        cwd=tmpdir,
    )
    assert result.exit_code == 0, f"Command failed with code {result.exit_code}"


def run_maxtext_tests(dag: models.DAG):
  test_name_prefix = "maxtext"
Collaborator commented:
@michelle-yooh Is it possible to move this code to a utils file, since it is already used in multiple places (https://github.com/GoogleCloudPlatform/ml-auto-solutions/blob/master/dags/multipod/maxtext_gpu_end_to_end.py) and could be reused wherever we need A3 tests? Additionally, in terms of the tests, do we need to run them for each A3 generation?

  return gcloud_command


def wait_for_cluster_ready():
Collaborator commented:

I feel these could be refactored a bit and added as helper functions in https://github.com/GoogleCloudPlatform/ml-auto-solutions/blob/master/xlml/utils/gpu.py, including wait_for_cluster_ready, scale_up_cluster and scale_down_cluster, resize_a3_cluster, etc.?
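
For illustration, the extracted helpers might look roughly like this; it is a sketch based on the functions in the diff above, and their placement in xlml/utils/gpu.py is the reviewer's suggestion rather than the merged layout:

# xlml/utils/gpu.py (sketch; assumes gke is imported in this module as it is
# in the DAG file)

def resize_a3_cluster(cluster_name: str, zone: str, num_nodes: int):
  # Build the gcloud command that resizes the cluster's node pool.
  region = gke.zone_to_region(zone)
  node_pool = f"{cluster_name}-np-0"
  return (
      f"gcloud container clusters resize {cluster_name}"
      f" --quiet --region {region}"
      f" --node-pool {node_pool}"
      f" --num-nodes {num_nodes}",
  )


def wait_for_cluster_ready():
  # Block until every node in the cluster reports Ready.
  return ("kubectl wait --for=condition=Ready nodes --all --timeout=5m",)

The DAG would then import these (e.g. from xlml.utils import gpu) instead of defining them inline.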

michelle-yooh force-pushed the yooh/gpu-moe-dag-refactor branch from 3d0974a to a751d02 on February 3, 2025 at 20:59
michelle-yooh (Collaborator, Author) commented:

@RissyRan @parambole I've moved the functions to xlml/utils/gpu.py and verified that the resizing works as expected: http://shortn/_ttKViB2Hv7 (Please disregard the test failures; the run used the latest MaxText image, which doesn't have the test script yet.)

shralex requested a review from yangyuwei on February 3, 2025 at 21:23
        gke.zone_to_region(XpkClusters.GPU_A3_CLUSTER.zone),
    )
    + resize_a3_cluster(
        XpkClusters.GPU_A3_CLUSTER.name,
parambole (Collaborator) commented on Feb 3, 2025:

@michelle-yooh I believe there is an additional problem: running A3 tests affects the way clusters are set up. Given that this is the same approach used in the MaxText code, there could be a race condition where the cluster is being brought down while the tests are still running.

@yangyuwei correct me if I'm wrong, but the solution for this would be to bring up and tear down a dedicated cluster, or to consolidate the tests into an existing DAG, which avoids rescaling the shared cluster.

Hence, I asked about the importance of running the tests on A3.
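
For illustration, one way to avoid the race within a single DAG is to chain the lifecycle tasks explicitly; this is a minimal sketch in which run_maxtext_tests_group is a hypothetical stand-in for the test tasks, and it does not protect against a different DAG resizing the shared cluster:

# Sketch: serialize scale-up -> tests -> scale-down so this DAG never resizes
# the node pool while its own tests are running. trigger_rule="all_done"
# makes scale-down run even if a test task fails.
up = scale_up_a3_cluster()
down = scale_down_a3_cluster.override(trigger_rule="all_done")()
up >> run_maxtext_tests_group >> down  # run_maxtext_tests_group is hypothetical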

michelle-yooh (Collaborator, Author) replied:

@parambole That's a valid concern. I had set it aside because the scheduled times for the DAGs are quite far apart, but I agree there is still a chance that other jobs are brought down midway.

@yangyuwei What were the reasons behind the resizing approach, instead of keeping fixed capacity for the A3 cluster?
