
Time Cutoff #2454

Closed

Conversation

cfichtlscherer (Contributor):

Simulations that tally events in specific time intervals can be made more efficient using time cutoffs. Particles older than this cutoff value are killed. Keep in mind that a particle's time variable is inherited by its secondaries.

@gridley (Contributor) commented Apr 5, 2023

Does it make sense to have time treated differently among the different particles? What is a scenario where we would want all neutrons past a certain time to die, but continue propagating protons, for example?

One thing on my wishlist is a time boundary. For example, we run particles in a fixed source calculation until some boundary in time is hit, then stop them at that point, potentially putting them into a bank of SourceSites and then handing control back to the user with the python API. It would be useful for changing the geometry or densities in a time-dependent fashion. Would functionality like this accomplish what you're looking for? I think @stchaker was going to start looking at implementing something like this pretty soon.

One last point: particles will actually survive slightly longer than the time cutoff here. You'd have to be checking for the max time while advancing along a track segment to kill them at the exact cutoff time.

cfichtlscherer (Author):

Thanks for the feedback!

  1. That's a good question, and I am not sure. I wanted to make it consistent with the energy cutoffs and thought it was the most flexible option. That could make things easier in the future, but we can also make it a single fixed value for all particles.

  2. That sounds interesting! So you plan functionality to modify the model depending on the particle time, with the goal of simulating dynamical systems iteratively? Maybe it's easier to structure it more like the depletion code, where you can specify times and densities beforehand, which are then changed?

  3. Yes, that's right - I was a bit lazy. I think we could introduce the if condition before the particle advances?

@gridley commented Apr 5, 2023

As for why this shouldn't look like depletion, it's because the interesting dynamic reactor problems have multiphysics feedback. You can't know density or temperature over time until you have a history of neutron (and potentially photon)-driven heating. So it would be handy to do the feedbacks through the python API.

Maybe a similar functionality would be useful in depletion, e.g. adjusting for critical boron concentration.

Anyway, IMO a single time cutoff makes the most sense, because the problem is nonphysical if you're artificially killing some specific type of particle. Maybe others will have a differing opinion though.

cfichtlscherer (Author):

Sure, I suggest we wait a bit again to see if there are any other opinions. Otherwise, we go with the single time cutoff.

@gridley commented Apr 5, 2023

Interestingly, it seems that MCNP does have separate time cutoffs for the different particle types, p. 337 describes it.

https://permalink.lanl.gov/object/tr?what=info:lanl-repo/lareport/LA-UR-22-30006

So I guess it does make sense to do this, although I'm not sure what the application would be.

cfichtlscherer (Author):

Maybe we want to be able to vary the time variable in the simulation at some point? (kinda: #2355, even if this example won't be part of the code)
And, you can already give the particles different time variables in the source.

cfichtlscherer (Author):

I ran clang-format over it again.

@gridley left a review:

Looks great to me! I just am suggesting a slight addition to the documentation that clarifies exactly when the particles get killed.

openmc/settings.py (resolved)
docs/source/io_formats/settings.rst (resolved)
src/particle.cpp (resolved)
cfichtlscherer (Author):

@gridley Thanks a lot for your feedback! What do you think about it now?

@gridley left a review:

Looks good to me!

@gridley commented Apr 24, 2023

I'm just going to wait for one more person to give it the OK before merging, Chris!

@paulromano (Contributor):

I'll give this a look today 😄

:time_neutrons
The time above which neutrons will be killed.

*Default*: 1.7976931348623157E+308
Review comment:

Might be better to just say "Infinity"?

Author reply:

Yes, that's better.

src/particle.cpp, comment on lines 207 to 209:
if (time() > settings::time_cutoff[static_cast<int>(type())]) {
wgt() = 0.0;
}
@paulromano:

In both MCNP and Serpent, I believe it calculates the distance to the time cutoff and won't let the particle move past that time -- here we're not really stopping the particle at the time cutoff and in principle it could go much past the time specified, which makes me worried and seems like it could create surprising behavior for the user.

@gridley:

The documentation does clarify the expected behavior here. Despite this, I agree with you Paul. It costs nearly the same arithmetic to check along the pathlength rather than at collision sites, so why not take this more precise approach?

@paulromano left a review:

Given that @gridley agrees with me on the time boundary (i.e., that particles should be stopped exactly at the time cutoff), I'm going to make that a formal request for this to be merged. FWIW, I could see two possible approaches:

  1. Always calculate a "distance to time cutoff" and use that when comparing the distance to collision and distance to surface.
  2. Use the distance to collision/surface as is, but have a separate check if the time exceeded the time cutoff and if it does, move the particle a distance back such that it would exactly match the time cutoff.

After you implement this, there should be a test with a TimeFilter having a bin corresponding to times past the time cutoff, for which there should be zero flux.

@cfichtlscherer commented Apr 26, 2023

Dear @paulromano, thanks for your feedback! Personally, I like option 2 better because I think option 1 spends a lot of unnecessary computing time that we can save (@gridley, or do you think option 1 is better?). As I understand it, I would implement it so that if the particle has exceeded the time cutoff (enters the if condition), I would just push the particle back a bit?

Maybe something like (not tested or tried, just for understanding):

  if (time() > settings::time_cutoff[static_cast<int>(type())]) {
    double time_dif = time() - settings::time_cutoff[static_cast<int>(type())];
    double push_back_distance = this->speed() * time_dif;

    for (int j = 0; j < n_coord(); ++j) {
      coord(j).r -= push_back_distance * coord(j).u;
    }
    wgt() = 0.0;
  }

@gridley commented Apr 26, 2023

Maybe this lump of code should go into an inlined function for Particle:

void Particle::move_distance(double length)
{
  for (int j = 0; j < n_coord(); ++j) {
    coord(j).r += length * coord(j).u;
  }
}

Then we don't have to write out two loops. But aside from that, yeah, looks good to me!

@gridley commented Apr 26, 2023

Also, I think that code is going to be prematurely killing the particle. The wgt() would have to be zero'd after doing the tracklength tally on this last little segment, if we are going to be strictly adhering to what Paul suggested. So you'd have to set some bool that's like: hit_time_boundary and then kill the particle after doing tracklength tallying.
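The ordering gridley describes can be sketched in plain Python (all names here are illustrative, not OpenMC's actual API): the step is clipped at the cutoff time, the track-length tally is scored on the shortened segment, and only then is the particle killed.

```python
def advance_with_time_cutoff(r, u, speed, t, distance, t_cutoff, tally_track):
    """Advance a particle along direction u, clipping the step at the time cutoff.

    r, u: position and direction (lists of 3 floats); t: current time [s].
    tally_track is called with the track length actually flown, BEFORE the
    particle is killed, so the last segment still contributes to tallies.
    Returns (new position, new time, alive flag).
    """
    t_new = t + distance / speed
    hit_time_boundary = t_new > t_cutoff
    if hit_time_boundary:
        # Shorten the step so the particle stops exactly at the cutoff.
        distance = speed * (t_cutoff - t)
        t_new = t_cutoff
    r_new = [ri + distance * ui for ri, ui in zip(r, u)]
    tally_track(distance)  # score the (possibly shortened) segment first
    return r_new, t_new, not hit_time_boundary
```

For example, a particle with speed 1 starting at t = 0 with a sampled flight distance of 5 and a cutoff at t = 3 only flies a distance of 3, scores that track length, and comes back dead.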

cfichtlscherer (Author):

Yes, that's right! I was still wondering whether it could be done differently, because this tears the functionality apart a bit. But I think this is the best method.

cfichtlscherer (Author):

To be honest, I think the problem is the test of the surface source. As described in the previous step, I believe that the routine

        if self._model.settings.surf_source_write:
            with h5py.File("surface_source_true.h5", 'r') as f:
                source_true = f['source_bank'][()]
                # Convert dtype from mixed to a float for comparison assertion
                source_true.dtype = 'float64'

does not work as it should, and I propose changing the test slightly so that the data is read out more explicitly.
@gridley, @paulromano What do you think about this change?

@gridley commented May 11, 2023

What do you think is wrong about how it's being read? It seems like your change would be just sorting the values before comparing, right?

If anything, we should fix the underlying file reader. That would be quite bad to have wrong!

@cfichtlscherer commented May 11, 2023

Thanks, @gridley!

I am not entirely sure what is going on (probably not a good sign in a pull request ;-) ).

I noticed that the array with all entries of the 1000 particles has too many entries. It should have 12_000 entries but has 13_000. The additional 1000 entries cause the time cutoff test to fail (I am not sure why, or where they could be triggered by the changes coming with the time cutoff).

The error (creation of 1000 additional entries) must occur somewhere in these lines:

        if self._model.settings.surf_source_write:
            with h5py.File("surface_source_true.h5", 'r') as f:
                source_true = f['source_bank'][()]
                # Convert dtype from mixed to a float for comparison assertion
                source_true.dtype = 'float64'

When you read out the individual rows of source_true you get 12 values in a 1-d object, so I think the extra values are created by the

source_true.dtype = 'float64'

statement. I thought maybe it is not working correctly because we are looking at an array that contains different <class 'numpy.void'> entries, where the first two carry 3 elements each and the others only 1.
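For what it's worth, a 13 000-entry count is consistent with a raw byte reinterpretation: viewing (or reassigning) a compound dtype as float64 flattens each record into itemsize // 8 values, with vector fields contributing one slot per component and any integer id fields coming along as meaningless floats. A small NumPy sketch with a made-up record layout (not the actual surf_source_write schema) illustrates this:

```python
import numpy as np

# Hypothetical record layout, NOT the real surface-source schema:
# two 3-vectors and two scalars -> 8 eight-byte slots per record.
record = np.dtype([('r', '3f8'), ('u', '3f8'), ('E', 'f8'), ('wgt', 'f8')])
bank = np.zeros(1000, dtype=record)

# Reinterpret the raw bytes (no copy), then regroup per record.
flat = bank.view('float64').reshape(len(bank), -1)
print(record.itemsize // 8)  # 8 slots per record
print(flat.shape)            # (1000, 8)
```

Under this reading, a record occupying 13 eight-byte slots would produce 13 000 float64 values for 1000 particles, matching the observed count.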

@gridley commented May 12, 2023

Hm, unfortunately I'm not too familiar with the HDF5 interface code. Paul will probably have some useful insight here.

cfichtlscherer (Author):

Thanks and sorry it's kind of a mess - I tried to explain it as I understood it...

@paulromano (Contributor):

@cfichtlscherer What is the status of this PR? I just read through the comments but I'm confused because it looks like tests are passing fine?

cfichtlscherer (Author):

Hey @paulromano. Yes, it seems like everything is working fine. However, this is only because I made the small modification in tests/regression_tests/surface_source/test.py. As described above, it now compares only 12 entries per particle instead of 13 (as described, I don't know where this mysterious 13th value comes from). Maybe you can have a quick look to check whether you understand the issue here:

#2454 (comment)

Otherwise, I am happy to merge it into the code.

An additional comment on that: If we get this merged, we can try to add a score for the killed particles:
#2531

@gridley left a review:

Looks good to me!

include/openmc/particle.h (resolved)
@gridley commented Jun 30, 2023

Sorry to take so long to review this.

@gridley commented Jun 30, 2023

@paulromano what do you think about this now?

@paulromano (Contributor):

Thanks for the ping @gridley, I'll take another look after the holiday. Appreciate your patience on this @cfichtlscherer!

@paulromano left a review:

Sorry for the long delay @cfichtlscherer! While digging into the test failure you were seeing, I think I did find an issue with how source sites are written to HDF5 but will need to resolve that as part of a separate PR.

Comment on lines 741 to 742
elif key in ['time_neutron', 'time_photon', 'time_electron',
'time_positron']:
Review comment:

Slightly more efficient

Suggested change:

- elif key in ['time_neutron', 'time_photon', 'time_electron', 'time_positron']:
+ elif key in ('time_neutron', 'time_photon', 'time_electron', 'time_positron'):

Author reply:

Yes, I also changed it for the energy_cutoff.

src/particle.cpp, comment on lines 212 to 217:
bool hit_time_boundary = false;
if (time() > settings::time_cutoff[static_cast<int>(type())]) {
double time_dif = time() - settings::time_cutoff[static_cast<int>(type())];
time() = settings::time_cutoff[static_cast<int>(type())];

double push_back_distance = speed() * time_dif;
Review comment:

Suggested change (original, then suggestion; note the hoisted time_cutoff local):

bool hit_time_boundary = false;
if (time() > settings::time_cutoff[static_cast<int>(type())]) {
  double time_dif = time() - settings::time_cutoff[static_cast<int>(type())];
  time() = settings::time_cutoff[static_cast<int>(type())];
  double push_back_distance = speed() * time_dif;

bool hit_time_boundary = false;
double time_cutoff = settings::time_cutoff[static_cast<int>(type())];
if (time() > time_cutoff) {
  double dt = time() - time_cutoff;
  time() = time_cutoff;
  double push_back_distance = speed() * dt;

Comment on lines 13 to 19
s1 = openmc.Sphere(r=100, surface_id=1)
s2 = openmc.Sphere(r=200, surface_id=2, boundary_type='vacuum')
inner_sphere = openmc.Cell(cell_id=1, name='inner sphere')
inner_sphere.region = -s1
outer_sphere = openmc.Cell(cell_id=2, name='outer sphere')
outer_sphere.region = +s1 & -s2
root = openmc.Universe(universe_id=0, name='root universe')
Review comment:

Please remove manual assignment of surface, cell, and universe IDs

Author reply:

Yes, this is a bad habit of mine.

Comment on lines 6 to 8
class TimeCutoffTestHarness(PyAPITestHarness):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
Review comment:

There's no need to have a custom class. Simply have a function that returns a Model instance, put a @pytest.fixture decorator above it, pass it as an argument to the test function, and then pass it to PyAPITestHarness. So something like:

@pytest.fixture
def time_model():
    model = openmc.Model()
    ...

    return model

def test_time_cutoff(time_model):
    harness = PyAPITestHarness('statepoint.10.h5', time_model)
    harness.main()

Author reply:

much easier!

Comment on lines 38 to 39
cell_filter = openmc.CellFilter(outer_sphere)
tally.filters = [cell_filter]
Review comment:

Maybe instead have a time filter with two bins, one from 0 to cutoff, and one from cutoff to 2*cutoff. First bin should have non-zero result and second bin should have zero result.
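The suggested two-bin check could look roughly like the following untested configuration sketch. The cutoff value is a placeholder; the 'time_neutron' key is the cutoff setting discussed in this PR, and openmc.TimeFilter takes the bin edges:

```python
import openmc

cutoff = 1e-7  # [s], placeholder value

settings = openmc.Settings()
settings.cutoff = {'time_neutron': cutoff}

# Two time bins: [0, cutoff] and [cutoff, 2*cutoff].
time_filter = openmc.TimeFilter([0.0, cutoff, 2 * cutoff])
tally = openmc.Tally()
tally.scores = ['flux']
tally.filters = [time_filter]
```

After the run, the first bin should score a non-zero flux and the second bin exactly zero.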

Author reply:

Yes, I like the idea. Then we can also test everything in a single cell.


# Tally flux under time cutoff
tallies = openmc.Tallies()
tally = openmc.Tally(1)
Review comment:

Again, please avoid assigning IDs manually

Comment on lines 436 to 441
std::sort(simulation::surf_source_bank.begin(),
simulation::surf_source_bank.end(),
[](const SourceSite& l, const SourceSite& r) {
return std::tie(l.parent_id, l.progeny_id) <
std::tie(r.parent_id, r.progeny_id);
});
Review comment:

The surface source test already includes a sort, so I don't see why we should be doing this sort by default and making the user always pay the cost.

Author reply:

This was added with the help of @gridley (to be more precise, he wrote this part of the code! 😃) - see #2500. We thought this would solve the surface source bug. Since you solved this one, I will remove this block again.

src/particle.cpp, comment on lines 201 to 208:
- for (int j = 0; j < n_coord(); ++j) {
-   coord(j).r += distance * coord(j).u;
- }
+ this->move_distance(distance);
@paulromano:

After spending some time with this, I figured out that this change (even though it doesn't change the logic) was the cause of the failure in the surface_source test. Can you revert this change for now (along with the associated changes you made for that test)?

cfichtlscherer (Author) commented Aug 2, 2023:

Okay. We use move_distance also in the time_cutoff to move the particle back to the place where it was killed. For now, I have only changed it at the point you specified. To be honest, I still don't understand exactly what this changes 😅.
I added a comment that we can set it back to this->move_distance(distance) in the future after fixing the surface source.

cfichtlscherer mentioned this pull request Aug 6, 2023