Wiki: linting and testing overhaul (#1140)
EagleoutIce authored Nov 14, 2024
2 parents 9f270a3 + 905c78a commit c50b1d3
Showing 5 changed files with 203 additions and 1 deletion.
1 change: 1 addition & 0 deletions .github/workflows/broken-links-and-wiki.yaml
@@ -86,6 +86,7 @@ jobs:
update_wiki_page "Query API" wiki:query-api
update_wiki_page "Interface" wiki:interface
update_wiki_page "Normalized AST" wiki:normalized-ast
update_wiki_page "Linting and Testing" wiki:linting-and-testing
if [ $CHANGED_ANY == "true" ]; then
git config --local user.email "[email protected]"
1 change: 1 addition & 0 deletions package.json
@@ -28,6 +28,7 @@
"wiki:df-graph": "ts-node src/documentation/print-dataflow-graph-wiki.ts",
"wiki:normalized-ast": "ts-node src/documentation/print-normalized-ast-wiki.ts",
"wiki:query-api": "ts-node src/documentation/print-query-wiki.ts",
"wiki:linting-and-testing": "ts-node src/documentation/print-linting-and-testing-wiki.ts",
"wiki:interface": "ts-node src/documentation/print-interface-wiki.ts",
"build": "tsc --project .",
"build:bundle-flowr": "npm run build && esbuild --bundle dist/src/cli/flowr.js --platform=node --bundle --minify --target=node22 --outfile=dist/src/cli/flowr.min.js",
7 changes: 6 additions & 1 deletion src/documentation/doc-util/doc-files.ts
@@ -1,14 +1,19 @@
import fs from 'fs';

export const FlowrGithubBaseRef = 'https://github.com/flowr-analysis';
export const FlowrSiteBaseRef = 'https://flowr-analysis.github.io/flowr';
export const RemoteFlowrFilePathBaseRef = `${FlowrGithubBaseRef}/flowr/tree/main/`;
export const FlowrWikiBaseRef = `${FlowrGithubBaseRef}/flowr/wiki/`;
export const FlowrNpmRef = 'https://www.npmjs.com/package/@eagleoutice/flowr';
export const FlowrDockerRef = 'https://hub.docker.com/r/eagleoutice/flowr';
export const FlowrCodecovRef = 'https://app.codecov.io/gh/flowr-analysis/flowr';

export function getFilePathMd(path: string): string {
// we go one up as we are in doc-util now :D #convenience
const fullpath = require.resolve('../' + path);
const relative = fullpath.replace(process.cwd(), '.');
// normalize path separators so that this is consistent when testing on windows
const cwd = process.cwd().replaceAll('\\', '/');
const relative = fullpath.replaceAll('\\', '/').replace(cwd, '.');
/* remove project prefix */
return `[\`${relative}\`](${RemoteFlowrFilePathBaseRef}${relative})`;
}
195 changes: 195 additions & 0 deletions src/documentation/print-linting-and-testing-wiki.ts
@@ -0,0 +1,195 @@
import { setMinLevelOfAllLogs } from '../../test/functionality/_helper/log';
import { LogLevel } from '../util/log';
import { codeBlock } from './doc-util/doc-code';
import { FlowrCodecovRef, FlowrDockerRef, FlowrGithubBaseRef, FlowrSiteBaseRef, FlowrWikiBaseRef, getFilePathMd, RemoteFlowrFilePathBaseRef } from './doc-util/doc-files';

function getText() {
return `
For the latest code-coverage information, see [codecov.io](${FlowrCodecovRef}),
for the latest benchmark results, see the [benchmark results](${FlowrSiteBaseRef}/wiki/stats/benchmark) wiki page.
- [Testing Suites](#testing-suites)
- [Functionality Tests](#functionality-tests)
- [Test Structure](#test-structure)
- [Writing a Test](#writing-a-test)
- [Running Only Some Tests](#running-only-some-tests)
- [Performance Tests](#performance-tests)
- [Oh no, the tests are slow](#oh-no-the-tests-are-slow)
- [Testing Within Your IDE](#testing-within-your-ide)
- [Using Visual Studio Code](#vs-code)
- [Using WebStorm](#webstorm)
- [CI Pipeline](#ci-pipeline)
- [Linting](#linting)
- [Oh no, the linter fails](#oh-no-the-linter-fails)
- [License Checker](#license-checker)
## Testing Suites
Currently, flowR contains two testing suites: one for [functionality](#functionality-tests) and one for [performance](#performance-tests). We explain each of them below.
In addition to running those tests, you can use the more general \`npm run checkup\`, which also includes building the docker image, generating the wiki pages, and running the linter:
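${codeBlock('shell', 'npm run checkup')}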
### Functionality Tests
The functionality tests represent conventional unit tests (and, depending on your terminology, component or API tests).
We use [vitest](https://vitest.dev/) as our testing framework.
You can run the tests by issuing:
${codeBlock('shell', 'npm run test')}
On the command line, this should automatically drop you into a watch mode, which re-runs the tests whenever you change the code.
If, at any time, there are too many errors, you can use \`--bail=<value>\` to stop the tests after a certain number of failures.
For example:
${codeBlock('shell', 'npm run test -- --bail=1')}
If you want to run the tests without the watch mode, you can use:
${codeBlock('shell', 'npm run test -- --no-watch')}
To run all tests, including a coverage report and label summary, run:
${codeBlock('shell', 'npm run test-full')}
However, depending on your local R version, your network connection and potentially other factors, some tests may be skipped automatically as they don't apply to your current system setup
(or can't be tested with the current prerequisites).
Each test can specify such requirements as part of the \`TestConfiguration\`, which is then used in the \`test.skipIf\` function of _vitest_.
It is up to the [CI](#ci-pipeline) to run the tests on different systems so that tests which are skipped locally are still executed somewhere.
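For illustration, this is roughly what such a guard looks like at the _vitest_ level (a minimal sketch; the \`hasRequiredRVersion\` flag is hypothetical and merely stands in for the checks derived from the \`TestConfiguration\`):
${codeBlock('typescript', `
import { test } from 'vitest';

// hypothetical requirement flag, standing in for the real TestConfiguration checks
const hasRequiredRVersion = process.env.R_VERSION !== undefined;

// the test is skipped (not failed) whenever the requirement is not satisfied
test.skipIf(!hasRequiredRVersion)('analysis that requires a local R installation', () => {
	// ... test body relying on R ...
});
`)}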
#### Test Structure
All functionality tests are to be located under [test/functionality](${RemoteFlowrFilePathBaseRef}test/functionality).
This folder contains three special and important elements:
- \`test-setup\` which is the entry point if *all* tests are run. It should automatically disable logging statements and configure global variables (e.g., if installation tests should run).
- \`_helper\` which contains helper functions to be used by other tests.
- \`test-summary\` which may produce a summary of the covered capabilities.
We name all tests using the \`.test.ts\` suffix and try to run them in parallel.
Whenever this is not possible (e.g., when using \`withShell\`), please use \`describe.sequential\` to disable parallel execution for the respective test.
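As a sketch, disabling parallel execution for a suite looks like this (the names are illustrative):
${codeBlock('typescript', `
import { describe, test } from 'vitest';

// run the tests of this suite one after another, e.g., because they share an R shell
describe.sequential('tests sharing a single R shell', () => {
	test('first test using the shell', () => { /* ... */ });
	test('second test using the shell', () => { /* ... */ });
});
`)}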
#### Writing a Test
Currently, this heavily depends on what you want to test (normalization, dataflow, quad-export, ...),
and it is probably best to have a look at existing tests in that area to get an idea of what convenience functionality is available.
Generally, tests should be [labeled](${RemoteFlowrFilePathBaseRef}test/functionality/_helper/label.ts) according to the *flowR* capabilities they test. The set of currently supported capabilities and their IDs can be found in ${getFilePathMd('../r-bridge/data/data.ts')}. The resulting labels are used in the test report that is generated as part of the test output. They group tests by the capabilities they test and allow the report to display how many tests ensure that any given capability is properly supported.
Various helper functions are available to ease writing tests with common behaviors, such as testing for dataflow, slicing, or query results. These can be found in [the \`_helper\` subdirectory](${RemoteFlowrFilePathBaseRef}test/functionality/_helper).
For example, an [existing test](${RemoteFlowrFilePathBaseRef}test/functionality/dataflow/processing-of-elements/atomic/dataflow-atomic.test.ts) that tests the dataflow graph of a simple variable looks like this:
${codeBlock('typescript', `
assertDataflow(label('simple variable', ['name-normal']), shell,
'x', emptyGraph().use('0', 'x')
);
`)}
When writing dataflow tests, additional settings can be used to reduce the amount of graph data that needs to be pre-written. Notably:
- \`expectIsSubgraph\` indicates that the expected graph is a subgraph, rather than the full graph that the test should generate. The test will then only check if the supplied graph is contained in the result graph, rather than an exact match.
- \`resolveIdsAsCriterion\` indicates that the ids given in the expected (sub)graph should be resolved as [slicing criteria](${FlowrWikiBaseRef}/Terminology#slicing-criterion) rather than actual ids. For example, passing \`12@a\` as an id in the expected (sub)graph will cause it to be resolved as the corresponding id.
The following example shows both in use.
${codeBlock('typescript', `
assertDataflow(label('without distractors', [...OperatorDatabase['<-'].capabilities, 'numbers', 'name-normal', 'newlines', 'name-escaped']),
shell, '\`a\` <- 2\\na',
emptyGraph()
.use('2@a')
.reads('2@a', '1@\`a\`'),
{
expectIsSubgraph: true,
resolveIdsAsCriterion: true
}
);
`)}
#### Running Only Some Tests
To run only some tests, vitest allows you to [filter](https://vitest.dev/guide/filtering.html) tests.
Additionally, you can use the watch mode (with \`npm run test\`) to only run tests affected by your changes.
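For example, to run only the test files whose names contain a given string (here, the illustrative filter \`dataflow\`), you can pass the filter directly to _vitest_:
${codeBlock('shell', 'npm run test -- dataflow')}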
### Performance Tests
The performance test suite of *flowR* uses several suites to check for variations in the required times for certain steps.
Although we measure wall time in the CI (which is subject to rather large variations), it should give a rough idea of the performance of *flowR*.
Furthermore, the respective scripts can be used locally as well.
To run them, issue:
${codeBlock('shell', 'npm run performance-test')}
See [test/performance](${RemoteFlowrFilePathBaseRef}test/performance) for more information on the suites, how to run them, and their results. If you are interested in the results of the benchmarks, see [here](${FlowrSiteBaseRef}/wiki/stats/benchmark).
### Testing Within Your IDE
#### VS Code
Using the vitest Extension for Visual Studio Code, you can start tests directly from the definition and explore your suite in the Testing tab.
To get started, install the [vitest Extension](https://marketplace.visualstudio.com/items?itemName=vitest.explorer).
![vscode market place](img/vs-code-vitest.png)
| Testing Tab | In Code |
|:---------------------------------------:|:-------------------------------------:|
| ![testing tab](img/testing-vs-code.png) | ![in code](img/testing-vs-code-2.png) |
- Left-clicking the <img style="vertical-align: middle" src='img/circle-check-regular.svg' height='16pt'> or <img style="vertical-align: middle" src='img/circle-xmark-regular.svg' height='16pt'> Icon next to the code will rerun the test. Right-clicking will open a context menu, allowing you to debug the test.
- In the Testing tab, you can run (and debug) all tests, individual suites or individual tests.
#### WebStorm
Please follow the official guide [here](https://www.jetbrains.com/help/webstorm/vitest.html).
## CI Pipeline
We have several workflows defined in [.github/workflows](../.github/workflows/).
We explain the most important ones below:
- [qa.yaml](../.github/workflows/qa.yaml) is the main workflow that will run different steps depending on several factors. It is responsible for:
- running the [functionality](#functionality-tests) and [performance tests](#performance-tests)
- uploading the results to the [benchmark page](${FlowrSiteBaseRef}/wiki/stats/benchmark) for releases
- running the [functionality tests](#functionality-tests) on different operating systems (Windows, macOS, Linux) and with different versions of R
- reporting code coverage
- running the [linter](#linting) and reporting its results
- deploying the documentation to [GitHub Pages](${FlowrSiteBaseRef}/doc/)
- [release.yaml](../.github/workflows/release.yaml) is responsible for creating a new release, only to be run by repository owners. Furthermore, it adds the new docker image to [docker hub](${FlowrDockerRef}).
- [broken-links-and-wiki.yaml](../.github/workflows/broken-links-and-wiki.yaml) regularly checks that no links are dead and updates the generated wiki pages.
## Linting
There are two linting scripts.
The main one:
${codeBlock('shell', 'npm run lint')}
And a weaker version of it (which allows *todo* comments) that is run automatically in the [pre-push githook](../.githooks/pre-push), as explained in the [CONTRIBUTING.md](../.github/CONTRIBUTING.md):
${codeBlock('shell', 'npm run lint-local')}
Besides checking coding style (as defined in the [package.json](../package.json)), the *full* linter runs the [license checker](#license-checker).
In case you are unaware, ESLint can [automatically fix](https://eslint.org/docs/latest/use/command-line-interface#fix-problems) several linting problems.
So you may be fine by just running:
${codeBlock('shell', 'npm run lint-local -- --fix')}
### Oh no, the linter fails
By now, the rules should be rather stable, so if the linter fails,
it is usually best to read the description of the failing rule and fix the respective problem.
Rules in this project cover general JavaScript issues [using regular ESLint](https://eslint.org/docs/latest/rules), TypeScript-specific issues [using typescript-eslint](https://typescript-eslint.io/rules/), and code formatting [with ESLint Stylistic](https://eslint.style/packages/default#rules).
However, in case you think that the linter is wrong, please do not hesitate to open a [new issue](${FlowrGithubBaseRef}/flowr/issues/new/choose).
### License Checker
*flowR* is licensed under the [GPLv3 License](${FlowrGithubBaseRef}/flowr/blob/main/LICENSE), requiring us to only rely on [compatible licenses](https://www.gnu.org/licenses/license-list.en.html). For now, this list is hardcoded as part of the npm [\`license-compat\`](../package.json) script, so it can very well be that a new dependency you add causes the checker to fail &mdash; *even though it is compatible*. In that case, please either open a [new issue](${FlowrGithubBaseRef}/flowr/issues/new/choose) or directly add the license to the list (including a reference to why it is compatible).
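To illustrate the general shape of such a check (this is a sketch and not necessarily the exact configuration used by *flowR*), a script like this can be built on the [license-checker](https://www.npmjs.com/package/license-checker) package, which fails whenever a dependency uses a license outside of the given allowlist:
${codeBlock('json', `
{
	"scripts": {
		"license-compat": "license-checker --onlyAllow 'MIT;Apache-2.0;GPL-3.0'"
	}
}
`)}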
`;
}

if(require.main === module) {
setMinLevelOfAllLogs(LogLevel.Fatal);
console.log(getText());
}
Binary file modified wiki/Linting and Testing.md
Binary file not shown.

2 comments on commit c50b1d3

@github-actions
"artificial" Benchmark Suite

| Benchmark suite | Current: c50b1d3 | Previous: 0d1268d | Ratio |
|---|---|---|---|
| Retrieve AST from R code | 245.4433267727273 ms (100.97557255775912) | 237.85305927272728 ms (97.36861369002281) | 1.03 |
| Normalize R AST | 17.92380840909091 ms (31.526905348222503) | 16.982624772727274 ms (30.42886266900597) | 1.06 |
| Produce dataflow information | 60.91719522727273 ms (126.41920544061759) | 60.41169277272727 ms (128.7371176899317) | 1.01 |
| Total per-file | 855.0886790454545 ms (1545.3067215845101) | 833.961438 ms (1514.7315556086162) | 1.03 |
| Static slicing | 2.1224975199953495 ms (1.1586635761483512) | 2.0461436166648226 ms (1.2405957027340997) | 1.04 |
| Reconstruct code | 0.24969024095374226 ms (0.1957585127565638) | 0.23572579664556767 ms (0.19160803373208626) | 1.06 |
| Total per-slice | 2.3865562701014373 ms (1.2375596561897722) | 2.2952539344461735 ms (1.3064191460121453) | 1.04 |
| failed to reconstruct/re-parse | 0 # | 0 # | 1 |
| times hit threshold | 0 # | 0 # | 1 |
| reduction (characters) | 0.7869360165281424 # | 0.7869360165281424 # | 1 |
| reduction (normalized tokens) | 0.7639690077689504 # | 0.7639690077689504 # | 1 |
| memory (df-graph) | 95.46617542613636 KiB (244.77619956879823) | 95.46617542613636 KiB (244.77619956879823) | 1 |

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions
"social-science" Benchmark Suite

| Benchmark suite | Current: c50b1d3 | Previous: 0d1268d | Ratio |
|---|---|---|---|
| Retrieve AST from R code | 245.26885322 ms (46.042311577377724) | 254.70445752 ms (48.56635699718653) | 0.96 |
| Normalize R AST | 19.275219 ms (15.519069244596878) | 19.45440952 ms (14.953138748943163) | 0.99 |
| Produce dataflow information | 74.63655406000001 ms (72.32703486511011) | 75.30514048 ms (71.35653069164984) | 0.99 |
| Total per-file | 7768.1200948000005 ms (29057.697911117084) | 7850.45238692 ms (28841.253371136383) | 0.99 |
| Static slicing | 16.04143741063529 ms (44.39267620069801) | 16.172304981916042 ms (44.135225929438114) | 0.99 |
| Reconstruct code | 0.2803405855208846 ms (0.1573590448964055) | 0.34529443588845116 ms (0.1709415154465775) | 0.81 |
| Total per-slice | 16.330065101053936 ms (44.430783054404905) | 16.52688720676753 ms (44.1588873153231) | 0.99 |
| failed to reconstruct/re-parse | 0 # | 0 # | 1 |
| times hit threshold | 0 # | 0 # | 1 |
| reduction (characters) | 0.8712997340230448 # | 0.8712997340230448 # | 1 |
| reduction (normalized tokens) | 0.8102441553774778 # | 0.8102441553774778 # | 1 |
| memory (df-graph) | 99.4425 KiB (113.62933451202426) | 99.4425 KiB (113.62933451202426) | 1 |

This comment was automatically generated by workflow using github-action-benchmark.
