We are seeing intermittent build/test failures with the gpclient tests:
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running org.epics.gpclient.PVEventRecorderTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.603 sec
Running org.epics.gpclient.datasource.DataSourceImplementationTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.032 sec
Running org.epics.gpclient.PassiveRateDecouplerTest
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.795 sec
Running org.epics.gpclient.ActiveRateDecouplerTest
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.03 sec <<< FAILURE!
activeScanningRate(org.epics.gpclient.ActiveRateDecouplerTest) Time elapsed: 1.022 sec <<< FAILURE!
java.lang.AssertionError:
Expected: a value less than or equal to <5>
but: <6> was greater than <5>
From a distance, it looks like the test makes assumptions about parallelism, timing, and resource usage on the host that do not always hold.
Is there a way to improve the robustness of this test?
Well, the test sets up monitors that send updates every 100 ms, creates a subscription, then sleeps for 500 ms and expects to have received 4 or 5 updates.
If the sleep takes longer than 500 ms (and sleep functions typically only guarantee a *minimum* sleep time), the subscription may well receive 6 updates, which is what is happening in our case.
I would suggest either simply loosening the criteria to allow 4-6 updates after a 500 ms sleep,
or turning things around: measure the time between updates and apply criteria to the statistics of those measurements (min/max/avg after removing outliers, or similar).
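The first suggestion can be sketched as follows. This is a hypothetical standalone example, not the actual `ActiveRateDecouplerTest` code: it simulates a 100 ms monitor with a `ScheduledExecutorService`, and instead of hard-coding an expected count of 4-5 updates, it derives the acceptable range from the *measured* elapsed time, so an overrunning sleep no longer fails the check.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a timing-tolerant version of the check.
// Names (RobustRateCheck, periodMs) are illustrative, not from the project.
public class RobustRateCheck {
    public static void main(String[] args) throws Exception {
        final long periodMs = 100;               // monitor period, as in the test
        final AtomicInteger updates = new AtomicInteger();

        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        long start = System.nanoTime();
        // Simulated monitor: one update immediately, then every 100 ms.
        exec.scheduleAtFixedRate(updates::incrementAndGet, 0, periodMs, TimeUnit.MILLISECONDS);

        Thread.sleep(500);                       // nominal wait; may overrun on a loaded host
        exec.shutdownNow();

        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        int count = updates.get();

        // Derive the expected count from the measured elapsed time and allow
        // a little slack on both sides, instead of asserting exactly 4..5.
        long expected = elapsedMs / periodMs;
        long lo = expected - 1;
        long hi = expected + 2;
        if (count < lo || count > hi) {
            throw new AssertionError("received " + count + " updates, expected ["
                    + lo + ", " + hi + "] for " + elapsedMs + " ms elapsed");
        }
        System.out.println("OK: " + count + " updates in " + elapsedMs + " ms");
    }
}
```

Because the bound is computed from the clock rather than assumed from the nominal sleep duration, a `Thread.sleep(500)` that actually takes 620 ms simply widens the acceptable range instead of producing a spurious failure.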
shroffk added a commit to shroffk/epicsCoreJava that referenced this issue on Sep 10, 2019.