
make PyErrState thread-safe #4671

Open · wants to merge 4 commits into main from davidhewitt:threadsafe-err
Conversation

davidhewitt
Member

This PR resolves the thread-safety challenges of #4584 so that we can at least ship 0.23.

I don't love the complexity that this lazy state creates inside error-handling pathways, so in the future I will work to proceed with #4669 and further steps to remove the lazy state. But 0.23 is already breaking enough; users don't need more changes, and this should be an in-place drop-in.

@ngoldbaum
Contributor

I noticed clippy was failing so I just pushed a fix. I'll try to get the CI green on this if there are any more issues.

```rust
match self_state {
    Some(PyErrStateInner::Normalized(n)) => n,
    _ => unreachable!(),
}
let normalized_state = PyErrStateInner::Normalized(state.normalize(py));
```
Contributor

I think the only spot where there might be a deadlock is here, if normalize somehow leads to arbitrary Python code execution.

Is that possible? If not, I think it deserves a comment explaining why.

Contributor

If it can deadlock, I'm not sure what we can do, since at this point we haven't actually constructed any Python objects yet and we only have a handle to an FnOnce that knows how to construct them.

Member Author

Great observation; I've added a wrapping call to `py.allow_threads` before potentially blocking on the `Once`, which I think avoids the deadlock (I pushed a test which did deadlock before that change).
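The fix above can be sketched in plain Rust (this is an illustrative analogue, not PyO3 code): a global `Mutex` stands in for the GIL and an `OnceLock` stands in for the once-normalized error state. Releasing the "GIL" before blocking on the once-cell means another thread's initializer can acquire it without deadlocking.

```rust
use std::sync::{Mutex, OnceLock};

// Hypothetical stand-ins: `GIL_STAND_IN` plays the role of the GIL,
// `NORMALIZED` plays the role of the lazily-normalized error state.
static GIL_STAND_IN: Mutex<()> = Mutex::new(());
static NORMALIZED: OnceLock<String> = OnceLock::new();

fn normalize() -> &'static str {
    let guard = GIL_STAND_IN.lock().unwrap();
    // Analogue of `py.allow_threads`: release the lock before a
    // potentially blocking wait on the once-cell.
    drop(guard);
    NORMALIZED
        .get_or_init(|| {
            // The initializer itself may need the "GIL"; this is safe
            // because any waiting thread released it above.
            let _guard = GIL_STAND_IN.lock().unwrap();
            "normalized".to_string()
        })
        .as_str()
}
```

Every caller gets the same value, and repeated calls never re-run the initializer.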

@ngoldbaum
Contributor

The algorithm makes sense to me, I agree that this ensures that normalizing an error state can't be done simultaneously in two threads.
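The single-normalization property being agreed on here can be demonstrated with `std::sync::Once`, the primitive the review comments refer to via `.call_once` (the counter and function names below are illustrative): however many threads race to "normalize", the closure runs exactly once and the losing threads block until it has finished.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Once;
use std::thread;

// Count how many times the "normalization" closure actually runs when
// `threads` threads race on the same `Once`.
fn racing_normalizations(threads: usize) -> usize {
    static RUNS: AtomicUsize = AtomicUsize::new(0);
    static ONCE: Once = Once::new();
    thread::scope(|s| {
        for _ in 0..threads {
            s.spawn(|| {
                ONCE.call_once(|| {
                    RUNS.fetch_add(1, Ordering::SeqCst);
                });
            });
        }
    });
    RUNS.load(Ordering::SeqCst)
}
```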


codspeed-hq bot commented Oct 31, 2024

CodSpeed Performance Report

Merging #4671 will degrade performance by 25.38%

Comparing davidhewitt:threadsafe-err (4a30dde) with main (5464f16)

Summary

❌ 2 regressions
✅ 81 untouched benchmarks

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Benchmarks breakdown

| Benchmark | main | davidhewitt:threadsafe-err | Change |
|---|---|---|---|
| enum_from_pyobject | 19 µs | 24.7 µs | -23.22% |
| not_a_list_via_extract_enum | 13.5 µs | 18 µs | -25.38% |

@ngoldbaum
Contributor

Huh, I can reproduce the test failure happening on CI. It's flaky, but you can trigger it by running `cargo test --no-default-features --features "multiple-pymethods abi3-py37 full" --test "test_declarative_module"` in a while loop.

```rust
        .expect("Cannot normalize a PyErr while already normalizing it.")
};
// avoid deadlock of `.call_once` with the GIL
py.allow_threads(|| {
```
Contributor

I guess dropping the GIL somehow allows a race condition where multiple threads try to simultaneously create a module...


```rust
#[test]
fn test_no_deadlock_thread_switch() {
    static ERR: OnceLock<PyErr> = OnceLock::new();
```
Contributor

More of a style thing I guess, but I think you can avoid the static if you use `std::thread::scope`.
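The suggestion can be sketched like so, with a plain `String` standing in for the `PyErr` (names are illustrative): `std::thread::scope` guarantees the spawned threads finish before the locals they borrow go out of scope, so the `OnceLock` can be a local instead of a `static`.

```rust
use std::sync::OnceLock;
use std::thread;

// Two threads race to initialize a *local* OnceLock; thread::scope
// lets them borrow it without requiring a 'static lifetime.
fn init_without_static() -> String {
    let err: OnceLock<String> = OnceLock::new(); // local, not static
    thread::scope(|s| {
        for _ in 0..2 {
            s.spawn(|| {
                // Only one closure runs; the other thread observes the value.
                err.get_or_init(|| "normalized".to_string());
            });
        }
    });
    err.get().cloned().unwrap()
}
```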
