
Show list of errors from testing farm #436

Open
1 of 2 tasks
abitrolly opened this issue Jul 6, 2024 · 3 comments
Description

When a build fails, there are no errors shown, only logs. GitHub and GitLab are able to parse error logs and provide info with links that directly show what happened. With Packit, even reaching the logs is some 4 clicks away, and then it's scrolling, scrolling, scrolling.

For example, this test failure in tmt:

  • Improve initialization message teemtee/tmt#3039 (comment)

See logs in various failed checks.

😭

  1. Parse statuses, click the red one, go to https://github.com/teemtee/tmt/pull/3039/checks?check_run_id=26991071043
  2. Parse the table, go to https://dashboard.packit.dev/results/testing-farm/515717
  3. Parse the page, go to the build results

[screenshot: build results on the Packit dashboard]

  4. See that everything is green: https://dashboard.packit.dev/results/copr-builds/1696226

WHAT???

  5. Go back and click the red oval
  6. Finally land on https://artifacts.dev.testing-farm.io/42e7b5c6-c720-4fcf-b0d9-043b66ee6325/#artifacts-/plans/features/core

[screenshot: Testing Farm artifacts view]

  7. See the plan marked as failed with (1 passed, 0 failed, 0 error)

WHAT???

Now the only way to find out what went wrong is to read through all the logs in those tiny scroll panes.

Benefit

  1. Save precious developer time.
  2. Reduce cognitive load from visual log parsing, and thus leave more active brain cells available for the rest of the day.

Importance

Very important.

Workaround

  • There is an existing workaround that can be used until this feature is implemented.

Participation

  • I am willing to submit a pull request for this issue. (Packit team is happy to help!)
@lachmanfrantisek lachmanfrantisek self-assigned this Jul 8, 2024

lachmanfrantisek commented Jul 15, 2024

Hi @abitrolly !

Sorry for not getting to this sooner. You are touching on multiple coupled (and relevant) points here, so I'll try to split this into multiple (hopefully doable) tasks:

  1. Clicks on GitHub

Sadly, we can't avoid the extra click to get from the pull request to the Check Run UI (this is GitHub's design). We can, however, use the Check Run markdown view to provide more information. This is a GitHub-specific feature, though, and we try to share as much as possible across forges, so while we can put more info there, we don't want to duplicate everything from the dashboard in this markdown.

  2. Quickly get the summary of the error

@mfocko mentioned we might be able to provide GitHub with a standardised form of test results. (We need to check this.)
Also, we can try to get the error summary from TF and use it either in a status name (a little space), in the GitHub Check Run markdown (more space, but GitHub-specific as mentioned above), and/or on Packit's dashboard.
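To make the "summary in the Check Run markdown" idea concrete, here is a minimal sketch of condensing per-test results into the `output` object of a Check Run (the `title`/`summary`/`text` fields are real Check Run API fields; the input result shape and the helper name are invented for illustration, not Packit's actual model):

```python
# Hypothetical sketch: condense test results into the `output` object
# of a GitHub Check Run (POST /repos/{owner}/{repo}/check-runs).
# `title`, `summary`, `text` are real Check Run API fields; the input
# list-of-dicts shape is an assumption made up for this example.

def build_check_output(results):
    """results: list of dicts like {"name": ..., "outcome": "pass"/"fail", "log_url": ...}"""
    failed = [r for r in results if r["outcome"] == "fail"]
    lines = [f"- ❌ [{r['name']}]({r['log_url']})" for r in failed]
    return {
        "title": f"{len(failed)} of {len(results)} tests failed",
        "summary": "Failed tests with direct log links:" if failed else "All tests passed.",
        "text": "\n".join(lines),
    }

payload = build_check_output([
    {"name": "/plans/features/core", "outcome": "fail",
     "log_url": "https://artifacts.dev.testing-farm.io/42e7b5c6-c720-4fcf-b0d9-043b66ee6325/"},
    {"name": "/plans/install/minimal", "outcome": "pass", "log_url": "..."},
])
print(payload["title"])  # -> 1 of 2 tests failed
```

Even this much in the Check Run would save the "parse statuses, parse table, parse page" round trip, since the failing test and its log link would be one click from the pull request.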

  3. Navigation through the dashboard

Definitely can be improved -- I can see a few small changes that might help here:

  • Providing more of a pipeline-centric view of the tasks (WIP).
  • When showing Copr builds on the test result page of the Packit dashboard, include the build's overall status so it's clear the build succeeded. It should always be green, since tests run only after the build has finished (unless we skip the builds entirely).
  • Make it clear that the red oval is the way to get to the logs. (At least a tooltip. Maybe a log-like icon? Not sure yet, but definitely improvable.)
  4. TF result view

Can definitely be improved, but that is more in the hands of the TF developers. This specific occurrence of "failure" is very confusing. For me as well...;)
We can ask on their issue tracker to improve this.

One thing @Venefilyn is trying to achieve is to have one shared dashboard for all the related tools (basically one dashboard for both Packit and TF). It is still not clear how to approach this (and sustainably manage it), but it is something we are thinking about.


I hope I haven't forgotten any crucial issue -- please let me know what you think about these, and we can create a separate task for each item.
Thanks for providing us with the whole story of going through this! It is really helpful since we are a bit biased.

@abitrolly (Author)

Thanks for the reply. The "first experience" UI/UX issues still stand. Now that I've got some answers, I'm also a bit biased, so let me concentrate on the problem I am trying to solve right now.

> 4. TF result view
>
> Can definitely be improved, but that is more in the hands of the TF developers. This specific occurrence of "failure" is very confusing. For me as well...;)
> We can ask on their issue tracker to improve this.

I asked here: https://gitlab.com/testing-farm/oculus/-/issues/24. But the Web UI only parses results.xml (https://gitlab.com/testing-farm/oculus/-/merge_requests/65/diffs), which is provided by tmt (I guess), and the results.xml from tmt just doesn't provide sufficient detail.

In teemtee/tmt#3039 (comment) we've traced the incomplete results.xml to the beakerlib test result processor. There are still some missing pieces connecting results.xml to the JUnit format. The Web UI lists both, but it is not clear whether it uses both.
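For context, this is the kind of detail a JUnit-style results file can carry and that a web UI could surface directly. A minimal sketch with the standard library; the sample XML is invented for illustration, and real tmt/beakerlib output may differ:

```python
# Minimal sketch: pull failure details out of a JUnit-style results XML.
# The sample document below is made up; real results.xml may differ.
import xml.etree.ElementTree as ET

SAMPLE = """\
<testsuite name="/plans/features/core" tests="2" failures="1">
  <testcase name="/tests/core/smoke"/>
  <testcase name="/tests/core/docs">
    <failure message="assert rc == 0">stderr: docs target missing</failure>
  </testcase>
</testsuite>
"""

def failures(xml_text):
    """Return (test name, failure message, failure detail) for each failed case."""
    root = ET.fromstring(xml_text)
    return [
        (case.get("name"), f.get("message"), (f.text or "").strip())
        for case in root.iter("testcase")
        for f in case.findall("failure")
    ]

for name, message, detail in failures(SAMPLE):
    print(f"{name}: {message} ({detail})")
```

If the `<failure>` elements carry only a bare pass/fail (as traced above), there is simply nothing for the UI to show, no matter how it parses the file.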

> One thing @Venefilyn is trying to achieve is to have one shared dashboard for all the related tools (basically one dashboard for both Packit and TF). It is still not clear how to approach this (and sustainably manage it), but it is something we are thinking about.

I like the static web app approach that the TF Web UI (https://gitlab.com/testing-farm/oculus/-/merge_requests/64/diffs) is using.

If the results.xml format is documented and well engineered, people could also use it to render results on GitLab etc. Maybe it is impossible to have a perfect dashboard for everything, but it is definitely possible to make a dashboard that can be customized for a specific workflow.
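With a documented results format, a customized dashboard could be little more than a renderer over it. A tiny sketch of that idea; the tuple shape and helper are assumptions for illustration, not part of any actual tool:

```python
# Hypothetical sketch: render parsed test results as a GitLab/GitHub
# flavoured markdown table -- the kind of view a customized dashboard
# (or a static web app) could generate straight from results.xml data.

def render_markdown(results):
    """results: list of (test name, outcome, log URL) tuples -- an assumed shape."""
    rows = ["| Test | Result | Logs |", "| --- | --- | --- |"]
    for name, outcome, url in results:
        icon = "✅" if outcome == "pass" else "❌"
        rows.append(f"| {name} | {icon} {outcome} | [log]({url}) |")
    return "\n".join(rows)

print(render_markdown([
    ("/tests/core/smoke", "pass", "https://example.invalid/smoke.log"),
    ("/tests/core/docs", "fail", "https://example.invalid/docs.log"),
]))
```

The point is the separation: one documented results format, many small renderers for whichever forge or workflow needs it.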

In conclusion, I must absolutely add that

```mermaid
graph LR
  this --> needs --> diagrams
```

@lachmanfrantisek (Member)

Thanks for all the info, @abitrolly! And thanks for your work on the related tools -- it shows quite well how the current state can mislead a user trying to get to the responsible service...;)

Just a small update: we are starting a small research effort around the shared dashboard (how people want to use it and what information they need). I am linking this issue there since it has a couple of interesting points. If you are interested, we can even include you in our interview round. (But issues work fine as well, so no pressure...;)

Labels
None yet
Projects
Status: new

2 participants