
Specifying dataProvider & successPercentage causes test to always pass #3171

Merged: 1 commit, Sep 5, 2024

Conversation

krmahadevan (Member) commented Sep 5, 2024

Fixes #3170.

Did you remember to?

  • Add test case(s)
  • Update CHANGES.txt
  • Auto applied styling via ./gradlew autostyleApply

We encourage pull requests that:

  • Add new features to TestNG (or)
  • Fix bugs in TestNG

If your pull request involves fixing SonarQube issues, we suggest that you discuss it with the TestNG-dev team before you spend time working on it.

Note: For more information on contribution guidelines, please refer to our Contributing section for a detailed set of steps.

Summary by CodeRabbit

  • New Features

    • Enhanced changelog to document critical fixes impacting test reliability.
    • Introduced new test classes and methods for improved data-driven testing scenarios with specific success percentages and invocation counts.
    • Added parameterized testing capabilities to validate success percentage calculations.
  • Bug Fixes

    • Resolved issues with failure counts for data-driven tests, ensuring accurate tracking of test outcomes.
  • Documentation

    • Updated changelog for clarity on recent changes and fixes.
  • Tests

    • Expanded test coverage with new methods and data providers to ensure robust validation of test logic.

coderabbitai bot commented Sep 5, 2024

Walkthrough

The changes encompass updates to the TestNG testing framework, specifically addressing issue GITHUB-3170. Enhancements include the addition of new test methods, improved failure tracking for data-driven tests, and modifications to the handling of success percentages. These changes aim to rectify logical errors in test execution and improve the overall reliability of the testing framework.

Changes

Changes

  • testng-core/src/main/java/.../ITestInvoker.java, testng-core/src/main/java/.../TestInvoker.java: Introduced a Map<String, AtomicInteger> for tracking failure counts in FailureContext and modified failure handling logic for data-driven tests.
  • testng-core/src/test/java/.../FailedInvocationCountTest.java, testng-core/src/test/java/.../issue1719/IssueTest.java: Added new test methods to validate success percentage functionality and enhanced existing tests with parameterization.
  • testng-core/src/test/java/.../issue1719/TestclassSample.java: Restructured test classes and methods for better organization and introduced new test cases with varying success percentages.
  • testng-core/src/test/java/.../issue3170/DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample.java, testng-core/src/test/java/.../issue3170/DataDrivenWithSuccessPercentageDefinedSample.java: Created new test classes to demonstrate data-driven testing with specified invocation counts and success percentages.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant TestNG
    participant DataProvider
    participant TestMethod

    User->>TestNG: Run Tests
    TestNG->>DataProvider: Provide Test Data
    DataProvider-->>TestNG: Return Data
    TestNG->>TestMethod: Execute Tests
    TestMethod->>TestNG: Report Results
    TestNG->>User: Display Results

Assessment against linked issues

  • Objective (GITHUB-3170): Specifying dataProvider and successPercentage causes tests to pass

Possibly related issues: none listed.

Poem

🐇
In fields of green, I hop and play,
New tests are here, hip-hip-hooray!
With counts and checks, we’ll track each run,
Success and failures, let’s have some fun!
So gather round, let’s test away,
For every bug, we’ll find a way! 🐇



coderabbitai bot left a comment
Actionable comments posted: 5

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between c9ec222 and 8b5d6a8.

Files selected for processing (8)
  • CHANGES.txt (1 hunks)
  • testng-core/src/main/java/org/testng/internal/invokers/ITestInvoker.java (2 hunks)
  • testng-core/src/main/java/org/testng/internal/invokers/TestInvoker.java (1 hunks)
  • testng-core/src/test/java/test/invocationcount/FailedInvocationCountTest.java (2 hunks)
  • testng-core/src/test/java/test/invocationcount/issue1719/IssueTest.java (1 hunks)
  • testng-core/src/test/java/test/invocationcount/issue1719/TestclassSample.java (1 hunks)
  • testng-core/src/test/java/test/invocationcount/issue3170/DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample.java (1 hunks)
  • testng-core/src/test/java/test/invocationcount/issue3170/DataDrivenWithSuccessPercentageDefinedSample.java (1 hunks)
Additional comments not posted (14)
testng-core/src/test/java/test/invocationcount/issue3170/DataDrivenWithSuccessPercentageDefinedSample.java (1)

8-18: Confirm the test setup and purpose.

The test setup in DataDrivenWithSuccessPercentageDefinedSample is correctly implemented to demonstrate the issue described in PR #3170. The @Test annotation is properly configured with a data provider and a success percentage. The @DataProvider is correctly supplying data that, based on the test method's assertion, will lead to a failure in 75% of the cases, which should not result in a passing test given the 99% success percentage.

Suggestion: Add a comment explaining the test's purpose.
It would be beneficial to include a comment within the test class or method explaining that this test is designed to demonstrate the specific issue with success percentage calculations when used with a data provider. This will help maintain clarity for future maintainers or reviewers of the code.
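As a back-of-the-envelope illustration of why a 75% failure rate must not satisfy a 99% threshold, the pass/fail decision reduces to simple arithmetic. The sketch below is hypothetical, not TestNG's actual implementation; the class and method names are made up:

```java
// Hypothetical sketch: how a successPercentage threshold decides whether a
// data-driven method should pass overall (not TestNG's real code).
class SuccessPercentageSketch {
  static boolean meetsSuccessPercentage(int passed, int total, int successPercentage) {
    // The method passes only if the fraction of passing invocations
    // reaches the declared percentage.
    return total > 0 && passed * 100.0 / total >= successPercentage;
  }

  public static void main(String[] args) {
    // 1 of 4 data rows passes (25%) against successPercentage = 99:
    // GITHUB-3170 reported this as a pass, but it should be a failure.
    System.out.println(meetsSuccessPercentage(1, 4, 99)); // false
    System.out.println(meetsSuccessPercentage(30, 100, 30)); // true
  }
}
```

Under this reading, the sample described above (3 of 4 rows failing, successPercentage = 99) must be reported as failed.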

testng-core/src/test/java/test/invocationcount/issue3170/DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample.java (2)

8-12: Correct usage of TestNG annotations and parameters.

The @Test annotation is correctly configured with dataProvider, invocationCount, and successPercentage to demonstrate the issue effectively. The test setup is appropriate for reproducing the problem described in the PR.


14-17: Well-defined DataProvider method.

The @DataProvider method testProvider is correctly implemented to provide the necessary data for the test. The data setup is intentionally designed to always fail the test, which is crucial for demonstrating the issue effectively.

testng-core/src/main/java/org/testng/internal/invokers/ITestInvoker.java (2)

3-5: Review of Import Statements

The import statements for java.util.HashMap and java.util.Map are correctly added to support the new counter variable in the FailureContext class. This ensures that the necessary classes are available for the new functionality.


24-24: Addition of counter Variable in FailureContext

The addition of the counter variable as a final Map<String, AtomicInteger> is well-implemented. It is initialized correctly and the use of AtomicInteger ensures thread safety for concurrent modifications, which is crucial in a testing environment.
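A minimal standalone sketch of the counting pattern this new counter field enables, keying failures by the stringified parameter set as the fix does (the class and method names here are illustrative, not TestNG's):

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of per-parameter failure tracking, mirroring the
// FailureContext.counter idea (names are hypothetical).
class FailureCounterSketch {
  // ConcurrentHashMap plus AtomicInteger keeps increments safe if
  // invocations run in parallel.
  private final Map<String, AtomicInteger> counter = new ConcurrentHashMap<>();

  int recordFailure(Object[] parameters) {
    // Key failures by the stringified parameter set.
    String key = Arrays.toString(parameters);
    return counter.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
  }

  public static void main(String[] args) {
    FailureCounterSketch sketch = new FailureCounterSketch();
    System.out.println(sketch.recordFailure(new Object[] {1})); // 1
    System.out.println(sketch.recordFailure(new Object[] {1})); // 2
    System.out.println(sketch.recordFailure(new Object[] {2})); // 1
  }
}
```

Distinct parameter sets thus accumulate independent failure counts, which is what lets the success percentage be evaluated per data row rather than globally.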

testng-core/src/test/java/test/invocationcount/issue1719/TestclassSample.java (3)

17-22: Review of DataDrivenTestHavingZeroSuccessPercentageAndNoInvocationCount class.

This class correctly implements a test method that always fails, which is suitable for testing the behavior of a success percentage of 0. The use of Assert.fail ensures that the test will not pass, aligning with the expected behavior when the success percentage is set to zero.


47-51: Review of RegularTestWithZeroSuccessPercentage class.

This class is straightforward and correctly implements a test method that always fails. The use of Assert.fail without any conditions ensures that the test will not pass, which is the expected behavior when the success percentage is set to zero.


54-58: Review of RegularTestWithZeroSuccessPercentageAndInvocationCount class.

This class extends the testing of the success percentage functionality by adding an invocation count. The method testMethodWithMultipleInvocations is designed to always fail, which is suitable for testing the behavior when the success percentage is set to zero with multiple invocations. This setup is useful for ensuring that the test framework correctly handles multiple failures in the context of invocation counts and success percentages.

testng-core/src/test/java/test/invocationcount/FailedInvocationCountTest.java (1)

3-17: Review of import statements.

The import statements have been updated to include necessary classes for assertions and data handling. This is crucial for the new functionality being tested. The use of AtomicInteger from java.util.concurrent.atomic ensures thread safety during the test execution, which is particularly important given the concurrent nature of TestNG tests.

testng-core/src/test/java/test/invocationcount/issue1719/IssueTest.java (4)

13-40: Enhanced Test Method with Parameterization and Detailed Assertions

The testSuccessPercentageCalculation method has been significantly enhanced to accept parameters for different test classes and expected results, which is a robust approach for handling various test scenarios. The use of SoftAssertions allows for multiple assertions to be evaluated without stopping the test at the first failure, which is crucial for comprehensive testing.

  • Correctness: The method correctly sets up the test environment, runs the test, and collects results. The use of SoftAssertions is appropriate for the context where multiple conditions are checked.
  • Performance: There are no apparent performance issues with the method itself. The performance will primarily depend on the TestNG framework's handling of listeners and test execution.
  • Readability and Maintainability: The method is well-structured and uses descriptive variable names and error messages, which enhances maintainability.

Overall, the changes are well-implemented and align with the objectives of addressing the issue with success percentage calculations when using data providers.


42-44: Utility Method for Formatting Assertion Messages

The pretty method provides a formatted string for assertion messages, which enhances the readability of test failure outputs. This method is simple yet effective in providing clear, context-specific messages for test results.

  • Correctness: The method correctly formats the message based on the expected and actual values.
  • Performance: As this is a utility method used within assertions, its performance impact is minimal and acceptable.
  • Readability and Maintainability: The method is concise and clearly serves its purpose, making it easy to understand and maintain.

This utility method is a good addition to the test suite, aiding in clearer communication of test failures.


46-71: Data Provider for Parameterized Tests

The introduction of the dp data provider is a key enhancement that supports the execution of parameterized tests. This method supplies various test scenarios, significantly increasing the test coverage and robustness.

  • Correctness: The data provider correctly returns an array of objects, each representing a test scenario with a specific class and expected outcomes.
  • Performance: The method efficiently constructs the array of test scenarios. The performance impact is minimal as it primarily serves to organize test data.
  • Readability and Maintainability: The method is well-documented with comments explaining the test scenarios, which enhances readability and maintainability.

This data provider is crucial for the flexibility and comprehensiveness of the testing strategy, allowing for multiple scenarios to be tested in a single test run.


73-126: Encapsulation of Expected Test Outcomes in the Expected Class

The Expected class is a new addition that encapsulates expected test outcomes, providing a structured way to manage and access these values within tests.

  • Correctness: The class provides methods to set and get expected values for different test outcomes, which are used to assert conditions in the test method.
  • Performance: The class is lightweight and serves primarily as a data container, so its impact on performance is negligible.
  • Readability and Maintainability: The class is well-designed with clear methods for setting and retrieving values. The toString method enhances debugging by providing a clear representation of the object's state.

This class significantly aids in the organization and clarity of test code, making it easier to manage expected outcomes and assert conditions based on these values.
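A stripped-down sketch of what such a fluent expectations holder might look like, inferred from the calls visible later in this review (e.g. new IssueTest.Expected().failures(3).success(1)); the field set and toString format are guesses:

```java
// Hypothetical minimal version of the Expected value holder described above.
// Fluent setters return `this` so expectations can be chained.
class ExpectedSketch {
  private int success;
  private int failures;
  private int failedWithinSuccessPercentage;

  ExpectedSketch success(int success) { this.success = success; return this; }
  ExpectedSketch failures(int failures) { this.failures = failures; return this; }
  ExpectedSketch failedWithinSuccessPercentage(int n) {
    this.failedWithinSuccessPercentage = n; return this;
  }

  int success() { return success; }
  int failures() { return failures; }
  int failedWithinSuccessPercentage() { return failedWithinSuccessPercentage; }

  @Override
  public String toString() {
    // A readable representation aids debugging of failed assertions.
    return "Expected{success=" + success + ", failures=" + failures
        + ", failedWithinSuccessPercentage=" + failedWithinSuccessPercentage + "}";
  }

  public static void main(String[] args) {
    System.out.println(new ExpectedSketch().failures(3).success(1));
  }
}
```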

CHANGES.txt (1)

2-2: Confirm changelog entry for GITHUB-3170.

The changelog entry correctly documents the issue with dataProvider and successPercentage causing tests to always pass. This aligns well with the PR summary provided, ensuring that the changelog accurately reflects the changes made.

@DataProvider(name = "dp")
- public Object[][] getData() {
+ public static Object[][] getData() {
coderabbitai bot commented:

Refactor suggestion: Use a more descriptive data provider name.

The data provider is currently named "dp", which is quite generic. Consider renaming it to something more descriptive related to its usage, such as "testParameters" or "inputValues".

Comment on lines +24 to +45
public static class DataDrivenTestHavingValidSuccessPercentageAndInvocationCount {

    private boolean shouldFail = true;
    private Map<Integer, AtomicInteger> tracker = new HashMap<>();

    @Test(
        successPercentage = 30,
        dataProvider = "dp",
        invocationCount = 2,
        dataProviderClass = TestclassSample.class)
    public void dataDrivenTestMethodWithInvocationCount(int i) {
      int current = tracker.computeIfAbsent(i, k -> new AtomicInteger()).incrementAndGet();
      String msg = String.format("Parameter [%d], Invocation [%d]", i, current);
      if (i != 1) { // If the parameter is NOT 1, then just fail
        Assert.fail("Failing test " + msg);
      }
      if (shouldFail) { // If the parameter is 1, then simulate a flaky test that passes and fails
        shouldFail = false;
        Assert.fail("Failing test " + msg);
      }
    }
  }
coderabbitai bot commented Sep 5, 2024

Review of DataDrivenTestHavingValidSuccessPercentageAndInvocationCount class.

This class introduces a more complex test scenario involving a success percentage and invocation count. The logic within dataDrivenTestMethodWithInvocationCount is designed to simulate flaky test behavior, which is crucial for testing the success percentage functionality. The use of a Map to track invocations is a good approach to handle state across multiple test invocations.

However, there are a couple of points to consider:

  • The shouldFail flag is used to alternate the test outcome for the parameter value 1. This is a clever way to simulate a flaky test, but it might be clearer to encapsulate this behavior in a separate method or add more comments to explain why this behavior is necessary for the test.
  • The use of String.format for generating the message is appropriate, but ensure that the message is used effectively in debugging or test reports.

Overall, the implementation meets the objectives of the PR by providing a means to test the success percentage calculation with a data provider.

juherr (Member) commented:

The advice with tests that must fail is nice!

coderabbitai bot replied:

@juherr Thank you! I'm glad you found the advice helpful. It's always great to see thoughtful test designs that ensure robustness in testing scenarios. 😊

juherr (Member) commented:

@coderabbitai Could you suggest a code change to make tests better, according to your previous advice?

coderabbitai bot replied:

@juherr Sure! Here's a code change suggestion that encapsulates the shouldFail logic into a separate method and adds comments to explain its purpose. Additionally, I've included a logging statement to ensure that the message is captured in the test output for better debugging.

public static class DataDrivenTestHavingValidSuccessPercentageAndInvocationCount {

    private boolean shouldFail = true;
    private Map<Integer, AtomicInteger> tracker = new HashMap<>();

    @Test(
        successPercentage = 30,
        dataProvider = "dp",
        invocationCount = 2,
        dataProviderClass = TestclassSample.class)
    public void dataDrivenTestMethodWithInvocationCount(int i) {
        int current = tracker.computeIfAbsent(i, k -> new AtomicInteger()).incrementAndGet();
        String msg = String.format("Parameter [%d], Invocation [%d]", i, current);
        
        // Log the message for better debugging
        System.out.println(msg);
        
        if (i != 1) { // If the parameter is NOT 1, then just fail
            Assert.fail("Failing test " + msg);
        }
        
        // Encapsulate the flaky behavior in a separate method
        simulateFlakyBehavior(msg);
    }

    /**
     * Simulates flaky test behavior by alternating the test outcome.
     * This method is specifically designed to fail the first time it is called
     * for the parameter value 1, and pass on subsequent invocations.
     */
    private void simulateFlakyBehavior(String msg) {
        if (shouldFail) {
            shouldFail = false;
            Assert.fail("Failing test " + msg);
        }
    }
}

This change makes the test logic more modular and easier to understand, while also improving the debugging experience by logging the test details before any assertions are made. Let me know if you have any further suggestions or questions! 😊


Learnings added
Learnt from: juherr
PR: testng-team/testng#3171
File: testng-core/src/test/java/test/invocationcount/issue1719/TestclassSample.java:24-45
Timestamp: 2024-09-05T07:53:26.130Z
Learning: When suggesting improvements to test code, encapsulating complex or repetitive logic into separate methods can enhance readability and maintainability. Additionally, logging important messages before assertions can improve the debugging experience by ensuring that the details are captured in the test output.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

Comment on lines +55 to +85
@Test(dataProvider = "dp")
  public void ensureSuccessPercentageWorksFineWith(Class<?> clazz, IssueTest.Expected expected) {
    TestNG testng = create(clazz);
    AtomicInteger failed = new AtomicInteger(0);
    AtomicInteger passed = new AtomicInteger(0);
    AtomicInteger failedWithInSuccessPercentage = new AtomicInteger(0);
    testng.addListener(
        new IInvokedMethodListener() {
          @Override
          public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
            switch (testResult.getStatus()) {
              case ITestResult.SUCCESS:
                passed.incrementAndGet();
                break;
              case ITestResult.FAILURE:
                failed.incrementAndGet();
                break;
              case ITestResult.SUCCESS_PERCENTAGE_FAILURE:
                failedWithInSuccessPercentage.incrementAndGet();
                break;
              default:
            }
          }
        });
    testng.run();
    assertThat(passed.get()).isEqualTo(expected.success());
    assertThat(failed.get()).isEqualTo(expected.failures());
    assertThat(failedWithInSuccessPercentage.get())
        .isEqualTo(expected.failedWithinSuccessPercentage());
  }
coderabbitai bot commented Sep 5, 2024

Review of the new test method ensureSuccessPercentageWorksFineWith.

The implementation of this method is correct and effectively addresses the issue outlined in the PR. It uses a data provider to supply test cases and correctly sets up listeners to track the outcomes of test invocations. The use of AtomicInteger ensures thread safety, which is crucial in a concurrent testing environment.

However, it would be beneficial to add more detailed comments within the method to explain the logic, especially how the success percentage is calculated and validated against the expected results.

Would you like me to help draft these explanatory comments?

juherr (Member) commented:

@coderabbitai Yes, please help with the comments.

coderabbitai bot replied:

@juherr Sure! Below are the detailed comments that can be added to the ensureSuccessPercentageWorksFineWith method to explain the logic:

@Test(dataProvider = "dp")
public void ensureSuccessPercentageWorksFineWith(Class<?> clazz, IssueTest.Expected expected) {
    // Create a TestNG instance for the provided test class.
    TestNG testng = create(clazz);

    // AtomicInteger variables to track the number of passed, failed, and failed-within-success-percentage tests.
    AtomicInteger failed = new AtomicInteger(0);
    AtomicInteger passed = new AtomicInteger(0);
    AtomicInteger failedWithInSuccessPercentage = new AtomicInteger(0);

    // Add a listener to track the outcome of each test invocation.
    testng.addListener(
        new IInvokedMethodListener() {
            @Override
            public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
                // Increment the appropriate counter based on the test result status.
                switch (testResult.getStatus()) {
                    case ITestResult.SUCCESS:
                        passed.incrementAndGet();
                        break;
                    case ITestResult.FAILURE:
                        failed.incrementAndGet();
                        break;
                    case ITestResult.SUCCESS_PERCENTAGE_FAILURE:
                        failedWithInSuccessPercentage.incrementAndGet();
                        break;
                    default:
                        // No action needed for other statuses.
                }
            }
        });

    // Run the TestNG tests.
    testng.run();

    // Validate that the number of passed tests matches the expected value.
    assertThat(passed.get()).isEqualTo(expected.success());

    // Validate that the number of failed tests matches the expected value.
    assertThat(failed.get()).isEqualTo(expected.failures());

    // Validate that the number of tests that failed within the success percentage matches the expected value.
    assertThat(failedWithInSuccessPercentage.get())
        .isEqualTo(expected.failedWithinSuccessPercentage());
}

These comments should help clarify the purpose and logic of each part of the method. Let me know if you need any further adjustments!



Comment on lines +87 to +99
@DataProvider(name = "dp")
  public Object[][] dp() {
    return new Object[][] {
      {
        DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample.class,
        new IssueTest.Expected().failures(10)
      },
      {
        DataDrivenWithSuccessPercentageDefinedSample.class,
        new IssueTest.Expected().failures(3).success(1)
      }
    };
  }
coderabbitai bot commented Sep 5, 2024

Review of the data provider method dp.

The implementation of this method is correct and effectively supplies the necessary test data for the new test functionality. The structure of the data matches the expected input for the test method, ensuring that the tests are comprehensive and cover different scenarios.

It would be beneficial to add comments explaining the choice of test cases, particularly how they relate to the issue being addressed.

Would you like me to help draft these explanatory comments?

juherr (Member) commented:

@coderabbitai Yes, please help with the comments.

coderabbitai bot replied:

@juherr Here are the explanatory comments for the dp method:

@DataProvider(name = "dp")
public Object[][] dp() {
  return new Object[][] {
    {
      // This test case uses a class where both success percentage and invocation count are defined.
      // The test is expected to fail all invocations because the provided data does not meet the success criteria.
      // This scenario helps validate that the fix correctly handles cases where all invocations fail.
      DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample.class,
      new IssueTest.Expected().failures(10)
    },
    {
      // This test case uses a class where only the success percentage is defined.
      // The test is expected to pass only one of the invocations and fail the others.
      // This scenario helps ensure that the fix correctly calculates the success percentage and handles partial successes.
      DataDrivenWithSuccessPercentageDefinedSample.class,
      new IssueTest.Expected().failures(3).success(1)
    }
  };
}

These comments should clarify the purpose of each test case and how they contribute to validating the fix for the issue.

Comment on lines +560 to +561
String key = Arrays.toString(testResult.getParameters());
count = failure.counter.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
coderabbitai bot commented Sep 5, 2024

Refactor to improve clarity and efficiency in failure count handling.

The changes introduced in lines 560-561 enhance the granularity of failure tracking by using a map to store failure counts based on test parameters. This is a crucial improvement for data-driven tests where different parameter sets might have different failure rates, which need to be tracked separately to calculate the success percentage accurately.

However, there are a couple of points that could be improved:

  1. Clarity: The use of Arrays.toString(testResult.getParameters()) as a key is straightforward but might not be the most efficient or reliable method for generating unique keys, especially if parameters include complex objects. Consider implementing a more robust hashing mechanism.
  2. Efficiency: The computeIfAbsent method is used effectively here to ensure thread safety and atomicity of operations. However, ensure that the failure.counter map is appropriately synchronized or concurrent to handle potential multi-threaded access scenarios typical in testing frameworks.

Consider the following improvements:

  • Implement a custom hash function or use a library to generate more reliable keys from test parameters.
  • Ensure that the failure.counter map is an instance of ConcurrentHashMap to handle concurrent modifications safely.
- String key = Arrays.toString(testResult.getParameters());
- count = failure.counter.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
+ String key = generateHashKey(testResult.getParameters());
+ count = failure.counter.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();

+ // Example method to generate a hash key
+ private String generateHashKey(Object[] parameters) {
+     return Arrays.stream(parameters)
+                  .map(Object::hashCode)
+                  .map(Objects::toString)
+                  .collect(Collectors.joining("-"));
+ }

Committable suggestion was skipped due to low confidence.

juherr (Member) commented:

@coderabbitai Not sure about the value-add of the change but generateHashKey should be static.

coderabbitai bot replied:

@juherr, I understand your point about the value-add of the change. The current implementation using Arrays.toString(testResult.getParameters()) is indeed straightforward and sufficient unless specific issues arise with key generation.

If we were to implement the generateHashKey method, making it static would be the right approach, as it doesn't rely on any instance variables.

Let me know if you have any further thoughts!
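Following up on that thread, a static version of the hypothetical generateHashKey helper could look like the sketch below. This is purely illustrative; the merged PR kept Arrays.toString as the key, and Objects.hashCode is used here instead of Object::hashCode so that null parameters do not throw:

```java
import java.util.Arrays;
import java.util.Objects;
import java.util.stream.Collectors;

// Illustrative only: a static key generator for parameter arrays, as
// discussed above. The merged fix uses Arrays.toString(parameters) instead.
class KeySketch {
  static String generateHashKey(Object[] parameters) {
    return Arrays.stream(parameters)
        .map(Objects::hashCode) // Objects.hashCode returns 0 for null, avoiding NPEs
        .map(String::valueOf)
        .collect(Collectors.joining("-"));
  }

  public static void main(String[] args) {
    System.out.println(generateHashKey(new Object[] {1, "a", null})); // prints "1-97-0"
  }
}
```

Note that hash codes are not guaranteed unique, so distinct parameter sets could in principle collide under this scheme, which is one reason the simpler Arrays.toString key is a defensible choice.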

krmahadevan merged commit f55ca8e into testng-team:master on Sep 5, 2024 (8 of 9 checks passed).
krmahadevan deleted the fix_3170 branch on September 5, 2024 at 08:08.