[tracking] Alt e2eshark TODO's #300

Open
2 of 14 tasks
zjgarvey opened this issue Jul 25, 2024 · 1 comment

Comments

zjgarvey (Contributor) commented Jul 25, 2024

Please feel free to contribute to this project by picking up one of the TODO items for alt_e2eshark:

  • (High priority) Config-specific x-fail/no-run lists.
  • Look into using pytest or another testing framework.
  • Generate command-line reproducers to ease debugging of individual stages.
  • Add a testing config identical to the default config, but executed via command-line scripts.
  • Come up with a better name for the directory than "alt_e2eshark".
  • Identify functionality present in e2eshark that is currently lacking and should be migrated over, e.g., Chi's comment about the --report flag.
  • Add more type hints to make it easier to track down definitions.
  • Add (much) more to the README about adding tests and running them with various command-line options.
  • Look into removing torch-mlir as a Python dependency entirely (perhaps optionally, so torch-mlir developers can still use the test suite for debugging op conversions).
  • Consider alternative options for storing cache directory paths, log directory paths, ONNX model paths, etc. OnnxModelInfo is currently a bit clunky in sometimes needing to infer the log directory from the model path.
  • Add a PyTorch model info class and paths for testing PyTorch with iree-turbine or through torch.onnx.export.
  • As part of adding PyTorch testing configs, refactor some of the config backends.
  • Look into multiprocessing.
  • Look into running individual tests as subprocesses, so that a hard crash doesn't tank the entire test run.
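The last item above can be sketched as follows. This is a minimal illustration only: the function name and return values are assumptions, not the project's actual interface.

```python
import subprocess
import sys

def run_isolated(cmd: list[str], timeout: float = 600.0) -> str:
    """Run one test command in a child process so a hard crash
    (segfault, abort) fails only that test, not the whole run.
    The result strings here are hypothetical status labels."""
    try:
        proc = subprocess.run(cmd, capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "timeout"
    if proc.returncode < 0:
        # A negative return code means the child was killed by a
        # signal (e.g. SIGSEGV), i.e. a hard crash we can report
        # without the parent test runner going down with it.
        return f"crashed (signal {-proc.returncode})"
    return "passed" if proc.returncode == 0 else "failed"
```

The parent process only observes the child's exit status, so even an abort inside a native extension is contained to that one test.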
zjgarvey (Contributor, Author) commented Aug 16, 2024

CI setup:

  • Flesh out the report feature (maybe ignore the time report).
  • Add a report merge feature.
  • Test file intake / registry.
  • Register all of the tests we want to run.
  • Maybe set up JSON-configurable tests (and also automate the registry of such tests).
  • Have an option to report with smaller stages (5 from e2eshark).
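The JSON-configurable test idea above could look roughly like this. The registry shape and field names are hypothetical, not the project's actual schema:

```python
import json

# Hypothetical in-memory registry mapping test names to their config
# dictionaries; a real implementation would live alongside the test
# discovery code.
TEST_REGISTRY: dict[str, dict] = {}

def register_test(name: str, config: dict) -> None:
    """Register a single test, rejecting duplicate names."""
    if name in TEST_REGISTRY:
        raise ValueError(f"duplicate test registration: {name}")
    TEST_REGISTRY[name] = config

def register_from_json(json_text: str) -> None:
    """Load test definitions from a JSON array and register each one,
    so new tests can be added without writing Python."""
    for entry in json.loads(json_text):
        # Everything except "name" is treated as the test's config.
        register_test(entry["name"],
                      {k: v for k, v in entry.items() if k != "name"})
```

This would also make it straightforward to automate the registry: a startup step can glob for JSON files and feed each through register_from_json.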

zjgarvey added a commit that referenced this issue Aug 26, 2024