Replies: 2 comments
-
If we construct this the right way, we might also be able to process real data, with just a few extra steps for pedestals, calibration data, ...
-
Moving forward with this, the idea is first to separate the internal logic (stage order and dependencies, job handling, slurm config, ...) from the handling of paths and stage options. The goal is to be much more modular regarding paths, which are currently quite hard-coded and tied to the current MC productions (in the config we provide things like ...). A new lstmcpipe config file would look like:
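The example config did not survive the export, so here is a minimal sketch of what such a decoupled config could look like. It assumes YAML, and every key and path below is illustrative, not the actual lstmcpipe schema:

```yaml
# Hypothetical sketch only: keys and paths are illustrative,
# not the actual lstmcpipe configuration schema.
prod_id: my_prod
slurm:
  account: my_account
  partition: long
stages:
  - name: r0_to_dl1
    input: /path/to/R0
    output: /path/to/DL1
  - name: train_from_dl1
    input: /path/to/DL1/train
    output: /path/to/models
  - name: dl1_to_dl2
    input: /path/to/DL1/test
    path_model: /path/to/models
    output: /path/to/DL2
  - name: dl2_to_irfs
    input: /path/to/DL2
    output: /path/to/IRF
```

The point of the sketch is that each stage declares its own input/output paths explicitly, instead of deriving them from a hard-coded production layout.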
This is being implemented in https://github.com/cta-observatory/lstmcpipe/tree/refactor. When this is done, and in a second step, we can more easily move to a workflow management system, since the stages logic will have been decoupled from the core of lstmcpipe.
-
Hi,
Linked to discussion #113.
To support different pipelines, we probably need a more generic and consistent approach.
In the end, the steps are always the same (R0 to DL1, train, apply to the test dataset, produce IRFs, ...).
E.g. I would see a very generic method, sketched below.
Input and output could be directories or mapped file lists (TBD).
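The original snippet is missing here, so this is a minimal sketch of what such a generic method could look like; the name `run_stage`, its signature, and the `PathsSpec` alias are all illustrative, not lstmcpipe's actual API:

```python
from pathlib import Path
from typing import List, Union

# Hypothetical sketch only: one generic entry point shared by all stages.
PathsSpec = Union[Path, List[Path]]  # a directory, or a mapped file list (TBD)

def run_stage(stage: str, inputs: PathsSpec, outputs: PathsSpec, **options):
    """Run one pipeline stage (e.g. "r0_to_dl1", "dl2_to_irfs"), reading
    from `inputs` and writing to `outputs`.

    `options` carries stage-specific settings (config, slurm resources, ...)
    so that the call signature stays identical across stages.
    """
    raise NotImplementedError  # stage dispatch would go here
```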
From there, one could make e.g.:
- `lstmcpipe_split_train_test`
- `lstmcpipe_r0_to_dl1`
- `lstmcpipe_dl1_to_dl2`
- `lstmcpipe_train_from_dl1`
- `lstmcpipe_dl2_to_irfs`
- ...
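Building on the hypothetical `run_stage()` sketch above, each of these command-line tools could then be a thin wrapper around the generic method (the import path and function names below are made up for the example):

```python
import argparse
from pathlib import Path

# Hypothetical import: assumes the run_stage() sketch above lives here.
from lstmcpipe.stages import run_stage

def _main(stage: str):
    """Shared CLI skeleton: parse input/output paths and dispatch."""
    parser = argparse.ArgumentParser(prog=f"lstmcpipe_{stage}")
    parser.add_argument("--input", type=Path, required=True)
    parser.add_argument("--output", type=Path, required=True)
    args = parser.parse_args()
    run_stage(stage, args.input, args.output)

# One thin entry point per stage; these could be registered as
# console_scripts in setup.py / pyproject.toml.
def lstmcpipe_r0_to_dl1():
    _main("r0_to_dl1")

def lstmcpipe_train_from_dl1():
    _main("train_from_dl1")

def lstmcpipe_dl2_to_irfs():
    _main("dl2_to_irfs")
```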