DAMD-89: Document and implement DetectorMap #131
base: tickets/PIPE2D-641
Conversation
Corrected calculations from AB magnitude to flux [nJy].
…Flux The original code assumed AB magnitude as input. It is now AB flux. The reference flux calculation was updated accordingly.
…des. Also renamed local variables from *mag to *flux where appropriate.
and photometry errors fiberFluxErr, psfFluxErr, totalFluxErr
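The commit messages above describe switching the inputs from AB magnitudes to AB fluxes in nJy. For reference, the AB system defines mag = −2.5 log10(f / 3631 Jy), so the conversion the updated code relies on might be sketched as below (function and constant names are illustrative, not taken from the PR):

```python
import math

AB_ZEROPOINT_JY = 3631.0  # flux of a zeroth-magnitude AB source, in janskys
NJY_PER_JY = 1.0e9


def abMagnitudeToFluxNanojansky(mag):
    """Convert an AB magnitude to a flux in nanojanskys."""
    return AB_ZEROPOINT_JY * NJY_PER_JY * 10.0 ** (-0.4 * mag)


def fluxNanojanskyToAbMagnitude(flux):
    """Convert a flux in nanojanskys back to an AB magnitude."""
    return -2.5 * math.log10(flux / (AB_ZEROPOINT_JY * NJY_PER_JY))
```

A zeroth-magnitude source comes out at 3.631e12 nJy, and the two functions invert each other, which makes the rename from `*mag` to `*flux` variables easy to spot-check.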
Only minor comments.
def headerToMetadata(header):
    """Convert FITS header to LSST metadata"""
Isn't this something we should move to pfs_utils?
pfs_utils should know nothing about LSST.
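The `headerToMetadata` helper under discussion converts FITS header cards into LSST metadata (a `PropertyList`). A simplified stand-in for what such a conversion might look like is sketched below; it uses an `OrderedDict` in place of `PropertyList` so it runs without the LSST stack, and the skip list is an assumption, not the PR's actual logic:

```python
from collections import OrderedDict


def headerToMetadata(header):
    """Convert a FITS header (mapping of keyword -> value) to metadata.

    The PR's implementation returns an LSST ``PropertyList``; this sketch
    substitutes an ``OrderedDict``. COMMENT and HISTORY cards carry no
    structured data, so they are skipped here.
    """
    metadata = OrderedDict()
    for key, value in header.items():
        if key in ("COMMENT", "HISTORY", ""):
            continue
        metadata[key] = value
    return metadata
```

Keeping this conversion in one place is what the reviewer's pfs_utils question is about; the reply notes it cannot move there because pfs_utils must stay LSST-free.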
The detectorMap shouldn't be necessary for building a fiber flat: the fibers are dithered.
It's useful to have a temporary directory when running tests. This implementation is taken from LSST's lsst.utils.tests, but it's not in LSST 18.1.0, so reproduced here.
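Since the helper was reproduced from `lsst.utils.tests` rather than imported, its shape is roughly the following (a sketch; the actual LSST implementation may differ in details such as prefix handling):

```python
import contextlib
import shutil
import tempfile


@contextlib.contextmanager
def temporaryDirectory():
    """Context manager providing a temporary directory for tests.

    The directory and everything in it are removed on exit, even if the
    body raises. Modeled on the helper in lsst.utils.tests that is not
    available in LSST 18.1.0.
    """
    tmpdir = tempfile.mkdtemp()
    try:
        yield tmpdir
    finally:
        shutil.rmtree(tmpdir, ignore_errors=True)
```

Tests can then write scratch files inside a `with temporaryDirectory() as tmpdir:` block without worrying about cleanup.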
We don't use this anywhere. These days, we'd use ArcLineSet instead.
Getting an afw-format FITS header (PropertyList) seems to be a common enough operation that we should have a separate function for it.
By using the I/O in datamodel, we ensure that we're always writing the correct format, but it means a bit of a refactor. We no longer have to determine the subclass of data on disk (that's done in the datamodel I/O code); now we just have to convert to/from the datamodel representation.
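The shape of that refactor, delegating subclass resolution and format details to the datamodel's I/O, might look like the sketch below. All class and method names here are illustrative stand-ins, not the PR's actual API:

```python
class PfsDetectorMap:
    """Stand-in for the datamodel representation (name is hypothetical)."""

    def __init__(self, data):
        self.data = data

    @classmethod
    def readFits(cls, path):
        # The real datamodel I/O inspects the file and instantiates the
        # correct subclass; this stub only demonstrates the delegation.
        return cls({"path": path})

    def writeFits(self, path):
        pass  # the real code writes the canonical on-disk format


class DetectorMap:
    """Pipeline-side class that defers all FITS I/O to the datamodel."""

    def __init__(self, data):
        self.data = data

    @classmethod
    def fromDatamodel(cls, detMap):
        # Convert from the datamodel representation.
        return cls(detMap.data)

    def toDatamodel(self):
        # Convert to the datamodel representation.
        return PfsDetectorMap(self.data)

    @classmethod
    def readFits(cls, path):
        # No subclass sniffing here: the datamodel I/O already did it.
        return cls.fromDatamodel(PfsDetectorMap.readFits(path))

    def writeFits(self, path):
        # Writing through the datamodel guarantees the correct format.
        self.toDatamodel().writeFits(path)
```

The point of the pattern is that `DetectorMap` only ever converts to and from the datamodel representation; everything about the on-disk format lives in one place.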
Based on tickets/PIPE2D-641.