test_normalized_conditional_states in test_fock_measurement is exceptionally slow with tensorflow backend #165
Comments
@co9olguy Looks like the issue is related to RAM usage, as my computer starts to swap during those tests... looking into this, but it appears that if you reduce the
The RAM usage, however, is probably a bug in
That's weird - I don't remember this happening in the past. Is this something we can trace to a specific PR or change in either the codebase or the tests?
Oh, this could be an issue with TF not correctly clearing the graph between repeated runs? @co9olguy
This is definitely not a new issue. I've observed it for months now, but just got fed up enough to report it as an issue now.
After some testing, the issue with the slow test appears to be a memory leak specific to
Note, however, that the test is still pretty slow even outside of memory-leak issues; it just gets extremely slow if it starts swapping.
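One stdlib-only way to confirm a leak like this is to compare `tracemalloc` counters across repeated calls. This is a sketch under assumptions: `leaky_measurement` is a hypothetical stand-in for the repeated conditional-state computation, not the actual test code.

```python
import tracemalloc

def leaky_measurement(retained):
    """Hypothetical stand-in for the repeated measurement call;
    it retains a reference to each result, mimicking a leak."""
    retained.append([0.0] * 10_000)

retained = []
tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(50):
    leaky_measurement(retained)
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

# A genuine leak shows up as monotone growth in retained allocations
# across iterations; a well-behaved loop would plateau instead.
assert after - before > 0
```

Running the real test loop under the same instrumentation would distinguish "slow but bounded" from "leaking until swap".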
Thanks @zzzeid. It's enough to know which test was causing it and flag it for refactoring. What likely happened is someone designed a test which would suit a particular backend, but forgot to account for the implementation/performance needs of other backends.
I agree @zzzeid, my suspicion is that there is a backend initialization fixture which isn't being reset after evaluation, or is using the wrong scope. It might be something that is easily fixed by modifying the fixture to include teardown, i.e.,

```python
@pytest.fixture(params=backend_params)
def setup_backend(
    request, cutoff, pure, batch_size
):  # pylint: disable=redefined-outer-name
    """Parameterized fixture, used to automatically create a backend of a certain number of modes.

    This fixture should only be used in backend tests, as it bypasses Engine and
    initializes the backend directly.

    Every test that uses this fixture, or a fixture that depends on it,
    will be run three times, once for each backend.

    To explicitly mark a test to only work on certain backends, you may
    use the ``@pytest.mark.backends()`` fixture. For example, for a test that
    only works on the TF and Fock backends, ``@pytest.mark.backends('tf', 'fock')``.
    """

    def _setup_backend(num_subsystems):
        """Factory function"""
        backend = request.param()
        backend.begin_circuit(
            num_subsystems=num_subsystems,
            cutoff_dim=cutoff,
            pure=pure,
            batch_size=batch_size,
        )
        yield backend
        backend.reset()

    return _setup_backend
```

(question: does
There might be a simpler solution: just call
There is a test in tests/backend/test_fock_measurement.py which takes a long time to run, and crashes my computer when run locally. We should pinpoint which test this is and mark it for refactoring.