
[RHELC-1736] Fix different exit codes #1392

Open
wants to merge 3 commits into main from fix-exit-codes

Conversation

hosekadam
Member

@hosekadam hosekadam commented Sep 30, 2024

convert2rhel finished with different exit codes when an inhibitor was found, depending on which subcommand was used.

The code was missing the part that handles inhibitors during the analysis exit.
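
The fix aligns the subcommands on one mapping. A minimal sketch of the intended behavior (SUCCESSFUL and FAILURE mirror the ConversionExitCodes class touched by this PR; the INHIBITORS_FOUND name and its value 2 are assumptions based on the review comments about exit code 2):

```python
# Hypothetical sketch, not the project's actual code. SUCCESSFUL and FAILURE
# mirror ConversionExitCodes from this PR; INHIBITORS_FOUND = 2 is an assumed
# name for the exit code 2 discussed in review.

class ConversionExitCodes:
    SUCCESSFUL = 0        # no errors detected
    FAILURE = 1           # e.g. the rollback failed
    INHIBITORS_FOUND = 2  # assumed: inhibitors detected during analysis


def exit_code_for(subcommand, inhibitors_found):
    """Map a finished run of `subcommand` to a process exit code."""
    if inhibitors_found:
        # Before the fix, the `analyze` path missed this branch and could
        # fall through to SUCCESSFUL, diverging from `convert`.
        return ConversionExitCodes.INHIBITORS_FOUND
    return ConversionExitCodes.SUCCESSFUL
```

The point is that both subcommands now report a found inhibitor the same way instead of diverging per subcommand.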

Jira Issues:

Checklist

  • PR has been tested manually in a VM (either author or reviewer)
  • Jira issue has been made public if possible
  • [RHELC-] or [HMS-] is part of the PR title
  • GitHub label has been added to help with Release notes
  • PR title explains the change from the user's point of view
  • Code and tests are documented properly
  • The commits are squashed to as few commits as possible (without losing data)
  • When merged: Jira issue has been updated to Release Pending if relevant

@hosekadam hosekadam requested a review from a team as a code owner September 30, 2024 10:02
@hosekadam hosekadam added the kind/bug-fix A bug has been fixed label Sep 30, 2024

codecov bot commented Sep 30, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 96.11%. Comparing base (261d533) to head (efd4c76).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1392      +/-   ##
==========================================
+ Coverage   96.09%   96.11%   +0.02%     
==========================================
  Files          72       72              
  Lines        5174     5178       +4     
  Branches      896      897       +1     
==========================================
+ Hits         4972     4977       +5     
  Misses        119      119              
+ Partials       83       82       -1     
Flag Coverage Δ
centos-linux-7 91.63% <100.00%> (+0.02%) ⬆️
centos-linux-8 92.49% <100.00%> (+0.02%) ⬆️
centos-linux-9 92.61% <100.00%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@hosekadam hosekadam added the tests/sanity PR ready to run the sanity test suite. Equivalent to `/packit test --labels sanity`. label Sep 30, 2024
@has-bot
Member

has-bot commented Sep 30, 2024

/packit test --labels sanity


Comment generated by an automation.

@hosekadam
Member Author

/packit test --labels sanity

Member

@Venefilyn Venefilyn left a comment

I feel like there isn't a need for a new unit test; rather, the existing ones could be modified to fail on the old style.

Especially given how many mocks the new test creates, a whole new test function for it seems unnecessary.

@hosekadam
Member Author

@Venefilyn I remember trying that, but modifying an existing test would have resulted in a complex one. Finding the right test was also a bit problematic: many of the tests aren't commented, so you need to figure out which one fits best from the code.

So I went with creating a single test that is easy to read and tests the issue directly.

@Venefilyn
Member

> @Venefilyn I remember trying that, but modifying an existing test would have resulted in a complex one. Finding the right test was also a bit problematic: many of the tests aren't commented, so you need to figure out which one fits best from the code.
>
> So I went with creating a single test that is easy to read and tests the issue directly.

@hosekadam Ack, sounds good. Though with the implementation you suggested, I feel we won't get everything we wanted. What if we do something like this instead? That way we get the right return codes but the same output. I feel your current implementation doesn't handle the analyze stage correctly, as it doesn't do anything in that stage:

diff --git a/convert2rhel/main.py b/convert2rhel/main.py
index c62a2669..4053fb47 100644
--- a/convert2rhel/main.py
+++ b/convert2rhel/main.py
@@ -65,6 +65,7 @@ _REPORT_MAPPING = {
     ),
 }
 
+
 # Track the exit codes for different scenarios during the conversion.
 class ConversionExitCodes:
     # No errors detected during the conversion
@@ -196,28 +197,36 @@ def main_locked():
         rollback_changes()
         provide_status_after_rollback(pre_conversion_results, include_all_reports=True)
 
-        if backup.backup_control.rollback_failed:
-            return ConversionExitCodes.FAILURE
+        if _get_failed_actions(pre_conversion_results):
+            return ConversionExitCodes.SUCCESSFUL
 
         return ConversionExitCodes.SUCCESSFUL
     except _InhibitorsFound as err:
         loggerinst.critical_no_exit(str(err))
-        results = _pick_conversion_results(process_phase, pre_conversion_results, post_conversion_results)
+        results = _pick_conversion_results(
+            process_phase, pre_conversion_results, post_conversion_results
+        )
         _handle_main_exceptions(process_phase, results)
 
         return _handle_inhibitors_found_exception()
     except exceptions.CriticalError as err:
         loggerinst.critical_no_exit(err.diagnosis)
-        results = _pick_conversion_results(process_phase, pre_conversion_results, post_conversion_results)
+        results = _pick_conversion_results(
+            process_phase, pre_conversion_results, post_conversion_results
+        )
         return _handle_main_exceptions(process_phase, results)
     except (Exception, SystemExit, KeyboardInterrupt) as err:
-        results = _pick_conversion_results(process_phase, pre_conversion_results, post_conversion_results)
+        results = _pick_conversion_results(
+            process_phase, pre_conversion_results, post_conversion_results
+        )
         return _handle_main_exceptions(process_phase, results)
     finally:
         if not backup.backup_control.rollback_failed:
             # Write the assessment to a file as json data so that other tools can
             # parse and act upon it.
-            results = _pick_conversion_results(process_phase, pre_conversion_results, post_conversion_results)
+            results = _pick_conversion_results(
+                process_phase, pre_conversion_results, post_conversion_results
+            )
 
             if results and process_phase in _REPORT_MAPPING:
                 json_report, txt_report = _REPORT_MAPPING[process_phase]
@@ -228,6 +237,10 @@ def main_locked():
     return ConversionExitCodes.SUCCESSFUL
 
 
+def _get_failed_actions(results):
+    return actions.find_actions_of_severity(results, "SKIP", level_for_raw_action_data)
+
+
 def _raise_for_skipped_failures(results):
     """Analyze the action results for failures
 
@@ -236,7 +249,7 @@ def _raise_for_skipped_failures(results):
     :raises SystemExit: In case we detect any actions that has level of `SKIP`
         or above.
     """
-    failures = actions.find_actions_of_severity(results, "SKIP", level_for_raw_action_data)
+    failures = _get_failed_actions(results)
     if failures:
         # The report will be handled in the error handler, after rollback.
         message = (
@@ -400,7 +413,9 @@ def provide_status_after_rollback(pre_conversion_results, include_all_reports):
         return
 
     if not pre_conversion_results:
-        loggerinst.info("\nConversion interrupted before analysis of system completed. Report not generated.\n")
+        loggerinst.info(
+            "\nConversion interrupted before analysis of system completed. Report not generated.\n"
+        )
 
         return

@hosekadam
Member Author

@Venefilyn Thanks for the tip! You are right, this is a better solution than the one I prepared.

@r0x0d
Member

r0x0d commented Oct 14, 2024

/packit test --labels sanity

@Venefilyn
Member

@danmyway should we update the integration tests to match?

@danmyway danmyway requested a review from a team as a code owner October 17, 2024 11:59
@danmyway danmyway requested review from danmyway and kokesak and removed request for danmyway October 17, 2024 11:59
@danmyway
Member

@Venefilyn @hosekadam I've addressed the changes in the failing tests and some more. I'll trigger the whole suite to see where we need to make other adjustments (if any).

@danmyway
Member

/packit test

@Venefilyn
Member

@hosekadam can you change the other tests that are failing as well? They should also look for exit code 2 instead of 0

@hosekadam
Member Author

/packit test

@hosekadam
Member Author

/packit retest-failed

Member

@kokesak kokesak left a comment

@kokesak
Member

kokesak commented Oct 23, 2024

Also, this test is causing issues - https://artifacts.osci.redhat.com/testing-farm/4d3cdcfd-3e04-4b04-aa7c-c394c9b4b07c/work-non-destructiveeqcz6bcj/plans/tier0/core/non-destructive/execute/data/guest/default-0/tests/integration/tier0/non-destructive/custom-repository/custom_invalid_repo_provided-14/output.txt.

Did something change? The test expects the CUSTOM_REPOSITORIES_ARE_VALID::UNABLE_TO_ACCESS_REPOSITORIES message to appear, but it doesn't appear in the logs.

@hosekadam
Member Author

> Also, this test is causing issues - https://artifacts.osci.redhat.com/testing-farm/4d3cdcfd-3e04-4b04-aa7c-c394c9b4b07c/work-non-destructiveeqcz6bcj/plans/tier0/core/non-destructive/execute/data/guest/default-0/tests/integration/tier0/non-destructive/custom-repository/custom_invalid_repo_provided-14/output.txt.

I think this changed in #1288; since it was merged, CUSTOM_REPOSITORIES_ARE_VALID has a dependency on SUBSCRIBE_SYSTEM. Is the test still relevant? @kokesak

@hosekadam hosekadam force-pushed the fix-exit-codes branch 2 times, most recently from 80065ea to 0e62884 on October 23, 2024 11:44
@kokesak
Member

kokesak commented Oct 23, 2024

> Also, this test is causing issues - https://artifacts.osci.redhat.com/testing-farm/4d3cdcfd-3e04-4b04-aa7c-c394c9b4b07c/work-non-destructiveeqcz6bcj/plans/tier0/core/non-destructive/execute/data/guest/default-0/tests/integration/tier0/non-destructive/custom-repository/custom_invalid_repo_provided-14/output.txt.
>
> I think this changed in #1288; since it was merged, CUSTOM_REPOSITORIES_ARE_VALID has a dependency on SUBSCRIBE_SYSTEM. Is the test still relevant? @kokesak

I see. We can fix it here if that is ok with you. I suggest setting the CONVERT2RHEL_INCOMPLETE_ROLLBACK env var; let's see if that helps. You only need to add these two lines to https://github.com/oamg/convert2rhel/blob/main/tests/integration/tier0/non-destructive/custom-repository/test_custom_repository.py#L40C1-L40C64:

@pytest.mark.parametrize("environment_variables", ["CONVERT2RHEL_INCOMPLETE_ROLLBACK"], indirect=True)
def test_custom_invalid_repo_without_rhsm(shell, convert2rhel, environment_variables):
...
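
The `environment_variables` fixture referenced in the snippet above presumably lives in the project's conftest; as a hypothetical stand-in, the idea behind `indirect=True` is that pytest routes each parametrize value to the fixture via `request.param`, and the fixture exports it as an environment variable for the duration of the test. Sketched here as a plain context manager (names and the exported value "1" are assumptions):

```python
# Hypothetical stand-in for the project's environment_variables fixture,
# written as a plain context manager so it is runnable outside pytest.
import os
from contextlib import contextmanager


@contextmanager
def environment_variables(names, value="1"):
    """Export each name (assumed value "1") for the enclosed block, then clean up."""
    for name in names:
        os.environ[name] = value
    try:
        yield
    finally:
        for name in names:
            os.environ.pop(name, None)


# Usage mirroring what the fixture would do around the test body:
with environment_variables(["CONVERT2RHEL_INCOMPLETE_ROLLBACK"]):
    pass  # the test runs here with the variable set
```

In the real suite the fixture handles the setup/teardown automatically, so the test body only needs the `@pytest.mark.parametrize(..., indirect=True)` decorator shown above.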

@hosekadam
Member Author

/packit test

Comment on lines 40 to 41
@pytest.mark.parametrize("environment_variables", ["CONVERT2RHEL_INCOMPLETE_ROLLBACK"], indirect=True)
def test_custom_invalid_repo_without_rhsm(shell, convert2rhel, environment_variables):
Member

It seems that we have fixed the test, but since we introduced the INCOMPLETE_ROLLBACK env var and the rollback actually fails, the exit code for this test case is actually 1 and not 2.

Member

Please revert this.
It actually discovered a regression introduced in #1288 as Adam mentioned.
It's reported here https://issues.redhat.com/browse/RHELC-1753.
I don't really like that we're trying to make the test pass just for the sake of making it green :(

Member Author

Done. Should I trigger the whole suite again when the rpms are built? @danmyway

Member

Sorry @hosekadam, I missed the message and triggered the suite instead.

hosekadam and others added 2 commits October 24, 2024 15:40
convert2rhel finished with different exit codes when an inhibitor was
found, depending on which subcommand was used.

The code was missing the part that handles inhibitors during the
analysis exit.
Signed-off-by: Daniel Diblik <[email protected]>
Fix other exit codes that, based on the test run, should be 2.
@danmyway
Member

/packit test

Member

@Venefilyn Venefilyn left a comment

Just leaving the dev ack approval

Labels
kind/bug-fix A bug has been fixed merge-after-tests-ok tests/sanity PR ready to run the sanity test suite. Equivalent to `/packit test --labels sanity`.