Fix CI and test related to deprecated warning #194
Conversation
Fine by me, so approving now so that this can be merged when it is taken out of Draft.
python -m unittest -v |& tee log.txt
grep '^FAIL:' log.txt > fails.txt
# if not all failures are raised in `test.test_wrapper`:
if [ "$(grep 'test.test_wrapper' fails.txt | wc -l)" -ne "$(wc -l < fails.txt)" ]; then
If we don't expect some tests to pass, would it not be better to just remove or skip them?
If the endpoints are unreliable, it may be better to investigate something like testcontainers (running containers for the tests) or pytest-recording.
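For illustration, a minimal sketch of the "skip them" option under the current unittest setup (the environment variable, class name, and test name below are made up for the example, not existing code):

import os
import unittest

# Skip the endpoint-dependent test case unless explicitly requested,
# e.g. by exporting RUN_UNRELIABLE_TESTS=1 in CI.
@unittest.skipUnless(os.environ.get("RUN_UNRELIABLE_TESTS"), "requires a live SPARQL endpoint")
class LiveEndpointTest(unittest.TestCase):
    def test_select_query(self):
        ...  # assertions against the real endpoint would go here

This keeps the tests in the tree and reports them as "skipped" instead of silently tolerating their failures.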
Another option is to use pytest markers (e.g. `@pytest.mark.unreliable`): mark the tests that are unreliable, and then run:
pytest -m 'not unreliable'
## the command below ignores the exit code
pytest -m 'unreliable' || :
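For completeness, a rough sketch of how such a marker could be applied and registered so pytest does not warn about an unknown marker (the conftest.py placement and test name are assumptions; the marker name is taken from the example above):

# conftest.py
def pytest_configure(config):
    # register the custom marker so e.g. `pytest --strict-markers` accepts it
    config.addinivalue_line("markers", "unreliable: tests that depend on flaky external endpoints")

# in a test module
import pytest

@pytest.mark.unreliable
def test_remote_endpoint():
    ...  # endpoint-dependent assertions would go here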
But regardless, I think we should have a decent answer as to why we run tests that we don't expect to pass, and see whether there is a pytest pattern that is appropriate for dealing with them.
OK, I understood.
It's fine for now, we can consider it as part of a move to pytest later.
@@ -870,31 +871,31 @@ def _mime_vs_type(mime, requested_type):
self.assertEqual(0, _mime_vs_type("text/n3", N3))
self.assertEqual(0, _mime_vs_type("text/turtle", TURTLE))
self.assertEqual(0, _mime_vs_type("application/turtle", TURTLE))
self.assertEqual(0, _mime_vs_type("application/json", JSON))  # Warning
self.assertEqual(0, _mime_vs_type("application/json", JSON))
I think we should reconsider these asserts; the count seems very arbitrary, and I'm not sure that is what matters. Maybe we should rather check that the result is actually JSON, or something along those lines.
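To illustrate the "check that the result is actually JSON" idea, a hedged sketch of an alternative assertion (the endpoint URL and helper name are placeholders, not existing test code; only SPARQLWrapper's public API is used):

from SPARQLWrapper import SPARQLWrapper, JSON

def _assert_json_select_result(self, endpoint="http://dbpedia.org/sparql"):
    # hypothetical helper: assert on the shape of the converted result
    # instead of counting emitted warnings
    wrapper = SPARQLWrapper(endpoint)
    wrapper.setQuery("SELECT * WHERE { ?s ?p ?o } LIMIT 1")
    wrapper.setReturnFormat(JSON)
    result = wrapper.query().convert()
    self.assertIsInstance(result, dict)  # JSON results are converted to a Python dict
    self.assertIn("results", result)     # SELECT results carry head/results sections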
I'm okay with this, but we should re-evaluate the tests we run if we don't expect them to pass.
Please squash
Merged, thanks again for all your contributions @eggplants
After #191, I will convert this into ready to merge.
Related issues:
- Replace graph.load() with graph.parse() (#187)
- python -m unittest -v (test discovery): test_wrapper.py passes, but it fails in the whole test of Wrapper.py (#192)