In the generated report, Devel::Cover completely ignores files which have no coverage (because no part of the test suite ever ran them). For our use case this makes the report deceptive, and a bit dangerous, because it gives the impression that we have reasonable coverage when actually we have scary big holes where we shouldn't be refactoring (yet).
So I wondered if it's possible to force uncovered files to be treated as 0%. I asked about this on the london.pm list but it seems no one had attempted this. So here goes...
I hacked together a script for work - cover-touch.pl. Without it, the coverage report simply omits the untested files; with it, they show up at 0%, so the UNKNOWN UNKNOWNS are at least improved to KNOWN UNKNOWNS. (Output stolen from the talk http://act.yapc.eu/gpw2024/talk/7872 - video not online yet.)
I have no idea if this is generating "correct" (let alone "optimal") cover_db runs, but it does seem to do the job. However, the report generation code then spews bazillions of "use of uninitialized value" warnings, because part of a data structure gets (partially) created that it is not expecting. It appears that the "outer" level reference which now exists for that file causes code to be entered that wasn't previously; that code then assumes a multi-level structure exists with values for statement, branch and condition coverage, and so reads undef where it expects a number.
This patch "shuts it up" but I don't think that it's really correct, as we end up with the "average number" for the file being 0.0 (seen above) whereas I'd like it to be n/a
diff --git a/lib/perl5/site_perl/5.32.0/x86_64-linux/Devel/Cover/Report/Html_minimal.pm b/lib/perl5/site_perl/5.32.0/x86_64-linux/Devel/Cover/Report/Html_minimal.pm
index ede35908d..5ffa77f0f 100644
--- a/lib/perl5/site_perl/5.32.0/x86_64-linux/Devel/Cover/Report/Html_minimal.pm
+++ b/lib/perl5/site_perl/5.32.0/x86_64-linux/Devel/Cover/Report/Html_minimal.pm
@@ -482,10 +482,14 @@ sub print_file_report {
     my ($show, $th) = get_showing_headers($db, $opt);
     my $file_data = $db->cover->file($fin);
+    # If we have injected files into the DB with no coverage, we don't want to
+    # accidentally autovivify a "total" structure for them, as creating the
+    # hashref here "confuses" get_summary_for_file().
+    my $total = $db->{summary}{$fin}{total};
     print_html_header($out, "File Coverage: $fin");
     print_summary($out, 'File Coverage', $fin,
-        $db->{summary}{$fin}{total}{percentage},
-        $db->{summary}{$fin}{total}{error},
+        $total->{percentage},
+        $total->{error},
         $db);
     print_th($out, ['line', @$th, 'code']);
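For anyone not steeped in Perl's autovivification rules, the reason the one-line change matters is that merely reading a deep hash path creates every intermediate hashref along the way. A minimal standalone demonstration (plain Perl, invented file names, nothing to do with Devel::Cover's real data structures):

#!/usr/bin/perl
# Demonstrates the autovivification that the patch above avoids.
use strict;
use warnings;
use Data::Dumper;

my %summary;    # stand-in for $db->{summary}

# Reading the full path autovivifies every intermediate level, including
# {total}, even though nothing is ever assigned:
my $pct = $summary{'lib/Foo.pm'}{total}{percentage};
print Dumper \%summary;
# $VAR1 = { 'lib/Foo.pm' => { 'total' => {} } }
# Code that later checks for a {total} entry now finds an empty hashref and
# reads undef out of it, hence the warnings.

# Stopping one level higher, as the patch does, never creates {total} itself:
my $total = $summary{'lib/Bar.pm'}{total};
print Dumper \%summary;
# 'lib/Bar.pm' => {} -- no {total} key, so later checks still see "no summary"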
(I believe there's at least one talk on YouTube that says work is on 5.32.0 ("this week"), so the version in the paths above isn't a leak.)
I'm not sure where to take this from here. I guess:
- add (yet another) option to cover to create these placeholder runs
- fix the report generation not to warn (roughly that patch)
- improve the report generation to output "n/a n/a n/a n/a n/a n/a n/a" (instead of "n/a n/a n/a n/a n/a n/a 0.0")
- ideally fix the average calculation so that a file whose last "total" column is n/a still counts as part of the denominator
I don't know how to submit a patch to do any of this. Or even if it's the right plan.
Right now (I think) if I have 5 files with totals of 0.6, 0.7, 0.8, 0.9 and 1.0, and 5 files completely uncovered, my overall "average" is 0.8, which is highlighted as orange.
With my hack, those 5 files will now show up, but the averaging ignores them. I'd rather the averaging treated the overall total as 0.4, which is very red.
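To make the denominator point concrete, a quick back-of-the-envelope calculation (plain Perl, numbers taken from the example above, nothing Devel::Cover-specific):

#!/usr/bin/perl
# Illustrates how the reported average changes once uncovered files are
# included in the denominator.
use strict;
use warnings;
use List::Util qw(sum);

my @covered   = (0.6, 0.7, 0.8, 0.9, 1.0);   # files the test suite touches
my @uncovered = (0) x 5;                     # files it never loads at all

printf "covered files only:       %.1f\n", sum(@covered) / @covered;
printf "including uncovered ones: %.1f\n",
    sum(@covered, @uncovered) / (@covered + @uncovered);
# prints 0.8 (orange-ish) and 0.4 (very red) respectively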
It's actually a bit of a "bug" - right now if I delete the test for a module with low coverage, my reported average coverage goes up. Game the system!
Sorry, I was not clear - I think the "feature" that cover would need is "treat this list of files on the CLI as having 0 coverage".
Scanning recursively, filtering and so on is the job of the script or human invoking it, as I don't think it's possible to create a sufficiently generalised filter-rule language for the command line. My example happens to have filtering because that's what I needed. I would assume I'd rewrite it so that my code does the filtering to get a list of files, and then shells out to cover with that list (roughly as sketched after the next paragraph).
(OK, possibly whichever globbing syntax it is that includes ** to mean "any number of levels of directory" might just make it work self-contained - by implication that would be in combination with all the usual glob constructions, particularly comma-separated lists in {}. But I doubt that is easy to implement without dragging in more dependencies.)
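For illustration only, the division of labour I have in mind would look roughly like this: the calling code does the recursive scan and filtering with core modules, then hands cover a flat list of files. Everything here is a sketch - the lib/ directory and the ThirdParty filter are made-up examples, and -treat_as_zero is an invented placeholder name for the proposed option, which does not exist in cover today.

#!/usr/bin/perl
# Sketch: the caller owns scanning and filtering; cover would just receive a
# flat list of files to record as having 0 coverage.
use strict;
use warnings;
use File::Find;

my @files;
find(
    sub { push @files, $File::Find::name if /\.pm\z/ },
    'lib',                                   # assumes modules live under lib/
);

# Project-specific filtering stays in the calling code, not in cover itself.
@files = grep { !m{/ThirdParty/} } @files;

# -treat_as_zero is a made-up placeholder for the option proposed above;
# cover has no such option at the time of writing.
exec 'cover', '-treat_as_zero', @files
    or die "Cannot exec cover: $!";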