Time Cutoff #2454
Conversation
Does it make sense to have time treated differently among the different particles? What is a scenario where we would want all neutrons past a certain time to die, but continue propagating protons, for example?

One thing on my wishlist is a time boundary. For example, we run particles in a fixed source calculation until some boundary in time is hit, then stop them at that point, potentially putting them into a bank of SourceSites and then handing control back to the user with the Python API. It would be useful for changing the geometry or densities in a time-dependent fashion. Would functionality like this accomplish what you're looking for? I think @stchaker was going to start looking at implementing something like this pretty soon.

One last point: particles will actually survive slightly longer than the time cutoff here. You'd have to be checking for the max time while advancing along a track segment to kill them at the exact cutoff time.
Thanks for the feedback!
As for why this shouldn't look like depletion: it's because the interesting dynamic reactor problems have multiphysics feedback. You can't know density or temperature over time until you have a history of neutron- (and potentially photon-) driven heating. So it would be handy to do the feedbacks through the Python API. Maybe a similar functionality would be useful in depletion, e.g. adjusting for critical boron concentration. Anyway, IMO a single time cutoff makes the most sense, because the problem is nonphysical if you're artificially killing some specific type of particle. Maybe others will have a differing opinion though.
Sure, I suggest we wait a bit again to see if there are any other opinions. Otherwise, we go with the single time cutoff.
Interestingly, it seems that MCNP does have separate time cutoffs for the different particle types; p. 337 of https://permalink.lanl.gov/object/tr?what=info:lanl-repo/lareport/LA-UR-22-30006 describes it. So I guess it does make sense to do this, although I'm not sure what the application would be.
Maybe we want to be able to vary the time variable in the simulation at some point? (Kinda: #2355, even if this example won't be part of the code.)
I ran clang-format over it again. |
Looks great to me! I'm just suggesting a slight addition to the documentation that clarifies exactly when the particles get killed.
@gridley Thanks a lot for your feedback! What do you think about it now?
Looks good to me!
I'm just going to wait for one more person to give it the OK before merging, Chris!
I'll give this a look today 😄
docs/source/io_formats/settings.rst (Outdated)

```
:time_neutrons
  The time above which neutrons will be killed.

  *Default*: 1.7976931348623157E+308
```
Might be better to just say "Infinity"?
Yes, that's better.
src/particle.cpp (Outdated)

```cpp
if (time() > settings::time_cutoff[static_cast<int>(type())]) {
  wgt() = 0.0;
}
```
In both MCNP and Serpent, I believe it calculates the distance to the time cutoff and won't let the particle move past that time -- here we're not really stopping the particle at the time cutoff and in principle it could go much past the time specified, which makes me worried and seems like it could create surprising behavior for the user.
The documentation does clarify the expected behavior here. Despite this, I agree with you, Paul. It costs nearly the same arithmetic to check along the pathlength rather than at collision sites, so why not take this more precise approach?
Given that @gridley agrees with me on the time boundary (i.e., that particles should be stopped exactly at the time cutoff), I'm going to make that a formal request for this to be merged. FWIW, I could see two possible approaches:
- Always calculate a "distance to time cutoff" and use that when comparing the distance to collision and distance to surface.
- Use the distance to collision/surface as is, but have a separate check if the time exceeded the time cutoff and if it does, move the particle a distance back such that it would exactly match the time cutoff.
After you implement this, there should be a test with a TimeFilter having a bin corresponding to times past the time cutoff, for which there should be zero flux.
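As a rough illustration of option 1 (a toy sketch with made-up names, not OpenMC's actual API), the time cutoff simply contributes one more candidate distance to the usual minimum over collision and surface distances:

```python
def advance_distance(speed, time, time_cutoff, d_collision, d_surface):
    """Toy version of option 1: the distance to the time cutoff competes
    with the sampled collision distance and the surface distance."""
    d_time = (time_cutoff - time) * speed  # flight distance until the cutoff
    return min(d_collision, d_surface, d_time)

# A particle at t=0 moving at 2200 length units per second reaches the
# 1e-6 s cutoff after 2.2e-3 length units, before colliding or crossing.
d = advance_distance(2200.0, 0.0, 1.0e-6, 0.01, 0.05)
print(d)
```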
Dear @paulromano, thanks for your feedback! Personally, I like option 2 better because I think option 1 requires a lot of unnecessary computing time that we can save (@gridley, or do you think option 1 is better?). As I understand it, if the particle has exceeded the cutoff (enters the if condition), I would just push it back a bit (not tested or tried, just for understanding).
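The snippet originally attached to this comment is not preserved above; purely for illustration, a standalone toy of the push-back idea (hypothetical names, not OpenMC's `Particle` API) might look like:

```python
def push_back_to_cutoff(x, u, speed, time, time_cutoff):
    """Toy version of option 2: if the particle overshot the cutoff,
    retract it along its flight direction to the exact cutoff time."""
    if time > time_cutoff:
        dt = time - time_cutoff      # how far past the cutoff we flew
        x -= u * speed * dt          # move back along the direction u
        time = time_cutoff           # clamp the particle's clock
    return x, time

# Overshot the cutoff by 0.5e-3 s at speed 2000 -> retract by 1.0 length unit.
x, t = push_back_to_cutoff(x=10.0, u=1.0, speed=2000.0,
                           time=1.5e-3, time_cutoff=1.0e-3)
```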
Maybe this lump of code should go into an inlined function for `Particle`? Then we don't have to write out two loops. But aside from that, yeah, looks good to me!
Also, I think that code is going to be prematurely killing the particle. The wgt() would have to be zeroed after doing the tracklength tally on this last little segment, if we are going to be strictly adhering to what Paul suggested. So you'd have to set some bool flag for this.
Yes, that's right! I was still thinking about whether you could do it differently, because it tears the functionality apart a bit. But I think this is the best method.
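To make the ordering concrete, here is a toy sketch (illustrative names only, not OpenMC code) of scoring the final truncated segment before zeroing the weight:

```python
def finish_segment(weight, segment_length, hit_time_boundary, tallies):
    """Toy sketch: score the last (truncated) track segment first, and
    only afterwards zero the weight if the time boundary was hit."""
    tallies['flux'] += weight * segment_length  # segment still contributes
    if hit_time_boundary:
        weight = 0.0                            # kill the particle afterwards
    return weight

tallies = {'flux': 0.0}
w = finish_segment(1.0, 0.25, hit_time_boundary=True, tallies=tallies)
print(w, tallies['flux'])  # 0.0 0.25
```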
To be honest, I think the problem is the test of the surface source. As described in the previous step, I believe that the routine

```python
if self._model.settings.surf_source_write:
    with h5py.File("surface_source_true.h5", 'r') as f:
        source_true = f['source_bank'][()]
        # Convert dtype from mixed to a float for comparison assertion
        source_true.dtype = 'float64'
```

does not work as it should, and I propose changing the test slightly so that the data is read out more explicitly.
What do you think is wrong about how it's being read? It seems like your change would just be sorting the values before comparing, right? If anything, we should fix the underlying file reader. That would be quite bad to have wrong!
Thanks, @gridley! I am not entirely sure what is going on (probably not a good sign in a pull request ;-) ). I noticed that the array with all entries of the 1000 particles has too many entries at the end: it should have 12_000 entries, but has 13_000. These 1000 additional entries are what cause the test to fail. The error (the creation of 1000 additional entries) must occur somewhere in these lines:

```python
if self._model.settings.surf_source_write:
    with h5py.File("surface_source_true.h5", 'r') as f:
        source_true = f['source_bank'][()]
        # Convert dtype from mixed to a float for comparison assertion
        source_true.dtype = 'float64'
```

When you read out the individual entries of `source_bank`, the problem seems to be the dtype conversion statement. I thought maybe it is not working correctly because we are looking at an array that contains different data types.
Hm, unfortunately I'm not too familiar with the HDF5 interface code. Paul will probably have some useful insight here.
Thanks, and sorry it's kind of a mess - I tried to explain it as I understood it...
@cfichtlscherer What is the status of this PR? I just read through the comments but I'm confused because it looks like tests are passing fine?
Hey @paulromano. Yes, it seems like everything is working fine. Anyway, this has only been the case since I made the small modification described above. Otherwise, I am happy to merge it into the code. An additional comment: if we get this merged, we can try to add a score for the killed particles.
Looks good to me!
Sorry to take so long to review this.
@paulromano what do you think about this now?

Thanks for the ping @gridley, I'll take another look after the holiday. Appreciate your patience on this @cfichtlscherer!
Sorry for the long delay @cfichtlscherer! While digging into the test failure you were seeing, I think I did find an issue with how source sites are written to HDF5 but will need to resolve that as part of a separate PR.
openmc/settings.py (Outdated)

```python
elif key in ['time_neutron', 'time_photon', 'time_electron',
             'time_positron']:
```

Slightly more efficient:

```python
elif key in ('time_neutron', 'time_photon', 'time_electron',
             'time_positron'):
```
Yes, I also changed it for the energy_cutoff.
src/particle.cpp (Outdated)

```cpp
bool hit_time_boundary = false;
if (time() > settings::time_cutoff[static_cast<int>(type())]) {
  double time_dif = time() - settings::time_cutoff[static_cast<int>(type())];
  time() = settings::time_cutoff[static_cast<int>(type())];

  double push_back_distance = speed() * time_dif;
```

Suggested change:

```cpp
bool hit_time_boundary = false;
double time_cutoff = settings::time_cutoff[static_cast<int>(type())];
if (time() > time_cutoff) {
  double dt = time() - time_cutoff;
  time() = time_cutoff;
  double push_back_distance = speed() * dt;
```
```python
s1 = openmc.Sphere(r=100, surface_id=1)
s2 = openmc.Sphere(r=200, surface_id=2, boundary_type='vacuum')
inner_sphere = openmc.Cell(cell_id=1, name='inner sphere')
inner_sphere.region = -s1
outer_sphere = openmc.Cell(cell_id=2, name='outer sphere')
outer_sphere.region = +s1 & -s2
root = openmc.Universe(universe_id=0, name='root universe')
```
Please remove manual assignment of surface, cell, and universe IDs
Yes, this is a bad habit of mine.
```python
class TimeCutoffTestHarness(PyAPITestHarness):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
```
There's no need to have a custom class. Simply have a function that returns a `Model` instance, put a `@pytest.fixture` decorator above it, pass it as an argument to the test function, and then assign it to `PyAPITestHarness`. So something like:

```python
@pytest.fixture
def time_model():
    model = openmc.Model()
    ...
    return model


def test_time_cutoff(time_model):
    harness = PyAPITestHarness('statepoint.10.h5', time_model)
    harness.main()
```
Much easier!
```python
cell_filter = openmc.CellFilter(outer_sphere)
tally.filters = [cell_filter]
```
Maybe instead have a time filter with two bins, one from 0 to cutoff, and one from cutoff to 2*cutoff. First bin should have non-zero result and second bin should have zero result.
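A toy numerical check of this test layout (plain Python, not an actual OpenMC run; all names are illustrative): once every history is truncated at the cutoff, only the below-cutoff bin can ever receive a score:

```python
CUTOFF = 1.0e-6
BINS = [(0.0, CUTOFF), (CUTOFF, 2 * CUTOFF)]

def score(segments):
    """Clamp each (t_start, t_end, weight) segment at the cutoff, then
    accumulate a track-length-like score into the bin of its midpoint."""
    result = [0.0, 0.0]
    for t0, t1, wgt in segments:
        t1 = min(t1, CUTOFF)          # the particle dies at the cutoff
        if t1 <= t0:
            continue                  # segment lies entirely past the cutoff
        mid = 0.5 * (t0 + t1)
        for i, (lo, hi) in enumerate(BINS):
            if lo <= mid < hi:
                result[i] += wgt * (t1 - t0)
    return result

flux = score([(0.0, 0.4e-6, 1.0), (0.4e-6, 1.5e-6, 1.0)])
print(flux[1])  # 0.0 -- the above-cutoff bin stays exactly zero
```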
Yes, I like the idea. Then we can also test everything in a single cell.
```python
# Tally flux under time cutoff
tallies = openmc.Tallies()
tally = openmc.Tally(1)
```
Again, please avoid assigning IDs manually
src/simulation.cpp (Outdated)

```cpp
std::sort(simulation::surf_source_bank.begin(),
  simulation::surf_source_bank.end(),
  [](const SourceSite& l, const SourceSite& r) {
    return std::tie(l.parent_id, l.progeny_id) <
           std::tie(r.parent_id, r.progeny_id);
  });
```
The surface source test already includes a sort, so I don't see why we should be doing this sort by default and making the user always pay the cost.
src/particle.cpp (Outdated)

```cpp
for (int j = 0; j < n_coord(); ++j) {
  coord(j).r += distance * coord(j).u;
}
```

changed to:

```cpp
this->move_distance(distance);
```
After spending some time with this, I figured out that this change (even though it doesn't change the logic) was the cause of the failure in the `surface_source` test. Can you revert this change for now (along with the associated changes you made for that test)?
Okay. We also use `move_distance` in the `time_cutoff` code to move the particle back to the place where it was killed. For now, I have only changed it at the point you specified. To be honest, I still don't understand exactly what this changes 😅. I added a comment that we can set it back to `this->move_distance(distance)` in the future after fixing the surface source.
Simulations that tally events in specific time intervals can be made more efficient using time cutoffs: particles older than the cutoff value are killed. Keep in mind that the time variable of a particle is inherited by its secondary particles.
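To illustrate the inheritance point (a toy model, not OpenMC code; names are made up): a secondary particle starts with its parent's clock rather than at zero, so a single time cutoff also bounds all progeny:

```python
TIME_CUTOFF = 1.0e-6  # illustrative cutoff in seconds

def make_secondary(parent_time):
    """A secondary inherits its parent's time instead of starting at zero,
    so it is killed immediately if it is born past the cutoff."""
    return {'time': parent_time, 'alive': parent_time <= TIME_CUTOFF}

print(make_secondary(0.9e-6)['alive'])  # True: still below the cutoff
print(make_secondary(1.2e-6)['alive'])  # False: born past the cutoff
```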