Kernel dies running everest.standalone.DetrendFITS on short cadence data #18
Comments
Hi @michaelcarini, sorry for the delay. The error you're getting is expected: Everest is trying to invert a 100,000 x 100,000 matrix when computing the GP and is choking. What I do when running short cadence is to pass the breakpoints=[...] keyword argument to DetrendFITS (or any Everest model). This should be a list of indices (ranging from 0 to the number of cadences in the campaign) at which to divide the light curve. The GP (and PLD) models will be computed separately in each chunk. You can see that for the first 8 campaigns, I default to something like 30 breakpoints (https://github.com/rodluger/everest/blob/master/everest/missions/k2/k2.py#L121) for the short cadence light curves.
This isn't ideal since it can introduce artifacts at those indices, but in practice it typically works OK. I've been meaning to switch from george to celerite for computing the GP, as the latter can handle covariance matrices this big with no problem. But I haven't yet had time...
Let me know if this works. |
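A minimal sketch of the suggestion above, with placeholder values (the file name and cadence count below are illustrative, not taken from this thread):

```python
# Sketch only: split a short-cadence campaign into roughly 30 chunks so the
# GP covariance matrix is inverted chunk by chunk instead of all at once.
import numpy as np
import everest

n_cadences = 100000   # assumed number of SC cadences in the campaign
n_chunks = 30         # roughly the default used for the early campaigns

# Interior indices that divide the light curve into n_chunks pieces
breakpoints = list(np.linspace(0, n_cadences, n_chunks + 1)[1:-1].astype(int))

everest.standalone.DetrendFITS("ktwo229151988-c102_spd-targ.fits",  # placeholder SC file
                               breakpoints=breakpoints)
```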
Rodrigo,
I still have the problem, but it may be a machine issue. Typically, how much physical memory do the machines have on which you have run Everest on SC data?
Another question: when I run Everest, it brings up the window to select the aperture, but there is no button on the window to continue once the aperture is selected. Typically I just close the window; is that the proper procedure?
Mike
|
Hmm, with 30 breakpoints I can just run everything on my MacBook. If you have 30 evenly spaced breakpoints, the memory load should be the same as for LC data (since each chunk will have about the same number of cadences as a single LC light curve). Which campaign are you running? As for the interactive aperture selector, yes, closing the window selects the aperture you chose. |
Campaign 10. I am running on an iMac with 8 GB of memory. Let me look more closely at the documentation to be sure I am specifying the breakpoints correctly; I may not have done that right. Thanks!
Mike
|
Hi Rodrigo,
OK, so I got EVEREST to run further on a machine with 16 GB of memory. I am no longer getting the memory error, but I have run into another sticking point.
I get a message that says ‘Function get_outliers going in circles, skipping’, followed by the message ‘optimizing the gp’. It then sits at optimizing the GP for days; I restarted it, and so far it has been on this step for 18 hours. How long should it take to optimize the Gaussian process (I assume that is what GP stands for)? It is back to using most of the memory when it gets to the optimization step, but I am not getting the out-of-memory errors.
Thanks for your help,
Mike
|
Rodrigo,
We could really use some advice and assistance, as we are unable to get everest.standalone.DetrendFITS to run on our short cadence data, and this has begun to hold up submission of a paper.
I have moved to our high-performance computing cluster and am on a node with 96 GB of memory and 500 GB of allocated swap space, running Ubuntu. The last run on the short cadence data generated a segmentation fault. I asked the systems administrators to look into it, and here is what they told me:
“That was a segmentation fault. It didn’t run out of memory. It looks like the segmentation fault was within _flapack.cpython-36m-x86_64-linux-gnu.so and it was error 6. I think you are going to need to contact the developer. A segmentation fault means that the program tried to access memory that hadn’t been initialized and several other C related memory operations that are not allowed.”
So where do we go from here?
I have been running tests on the LC data for the same source (3C 273) on the same machine. The first test ran out of memory, which neither I nor my sysadmins understand, though that is when they allocated the 500 GB of swap space. I currently have the LC data running again, but it now seems hung; I'll find out Monday what they see on the system side. I am calling the routine as follows:
everest.standalone.DetrendFITS('ktwo229151988-c102_lpd-targ.fits', raw=False, season=None, clobber=False, aperture=aperture). When I call it on the SC data, besides changing the file name, I also define 30 breakpoints with breakpoints=[…].
My collaborators and I very much appreciate any and all help you can provide.
Regards,
Michael
|
Hi @michaelcarini, I think I know what's going on. Typically when I run the short cadence light curves, I've already processed the long cadence ones, so EVEREST knows to use the GP hyperparameters from the LC run. What's happening is EVEREST is trying to optimize the GP kernel by inverting the full covariance matrix for the SC data, which is guaranteed to give you errors.
What you can do is pass optimize_gp=False to DetrendFITS and manually specify the kernel parameters via the kernel_params keyword, which accepts a tuple of the white noise amplitude in flux units, the red noise amplitude in flux units, and the timescale in days for the Matern-3/2 kernel. The simplest way to set those parameters is to look at what the GP hyperparameters are in the long cadence run. Is this a target that was processed in the LC EVEREST catalog? If so, just grab the FITS file (you can search for it at https://rodluger.github.io/everest/catalog.html) and check out the GPWHITE, GPRED, and GPTAU parameters in the header (https://rodluger.github.io/everest/fitsfiles.html#lightcurve-hdu). If the target is not in the EVEREST catalog, you can de-trend it yourself in LC using DetrendFITS and look at the resulting FITS file header.
Sorry this has caused you so much trouble. Let me know how this sounds to you -- I'd be happy to give a shot at this if you send me the target number. |
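A rough sketch of that workflow, assuming astropy is available, that the LC EVEREST file has already been downloaded, and that the GPWHITE, GPRED, and GPTAU keywords sit in the light curve extension header (per the FITS documentation linked above); both file names are placeholders:

```python
# Sketch only: reuse the long-cadence GP hyperparameters for the
# short-cadence run so the full SC GP never has to be optimized.
from astropy.io import fits
import everest

# Placeholder LC file name (e.g. downloaded from the EVEREST catalog)
with fits.open("hlsp_everest_k2_llc_229151988-c102_kepler_v2.0_lc.fits") as hdul:
    hdr = hdul[1].header  # light curve HDU header
    kernel_params = (hdr["GPWHITE"], hdr["GPRED"], hdr["GPTAU"])

# Placeholder SC target pixel file; add breakpoints as discussed earlier
everest.standalone.DetrendFITS("ktwo229151988-c102_spd-targ.fits",
                               optimize_gp=False,
                               kernel_params=kernel_params)
```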
Rodrigo,
Thanks for the quick reply. Let me go through these steps and see what I find. Thanks again.
Michael
|
Rodrigo,
Your suggestion resulted in the same segmentation error. At this point, we would very much appreciate it if you could run the short cadence data for 3C 273 (229151988) through Everest for us.
I still would like to figure out what is going on and why I cannot seem to get EVEREST to run.
Regards,
Michael
|
@michaelcarini I sent you an email with the de-trended FITS file. Let me know if that worked and I'll close the issue. |
Rodrigo
It worked, thank you.
A question: how do we read the FITS file back into Everest from the local disk?
I have not been able to figure that out; everything I try in Everest seems to only want to download from MAST.
Thanks
Mike
|
You should have a local folder named
~/.everest2/k2/c102/229100000/51988/
If not, create it, and place the file hlsp_everest_k2_llc_229151988-c102_kepler_v2.0_sc.fits in there. Then, when you call
star_sc = everest.Everest(229151988, cadence="sc", season=102)
from Python, it should be able to access the file. Let me know if this doesn't work. |
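A small sketch of those steps, assuming the FITS file sits in the current working directory; the plot() call at the end is an assumed quick-look step, not something specified in this thread:

```python
# Sketch only: copy the de-trended SC FITS file into the folder Everest
# searches, then load it locally instead of downloading from MAST.
import os
import shutil
import everest

fits_name = "hlsp_everest_k2_llc_229151988-c102_kepler_v2.0_sc.fits"
local_dir = os.path.expanduser("~/.everest2/k2/c102/229100000/51988/")

os.makedirs(local_dir, exist_ok=True)  # create the folder if it is missing
shutil.copy(fits_name, local_dir)      # copy (rather than move) the file into place

star_sc = everest.Everest(229151988, cadence="sc", season=102)
star_sc.plot()  # assumed quick-look plot of the de-trended light curve
```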
Yes!!! That worked. I had moved the file (instead of copying it) from that directory to work on it earlier.
Thanks!
Mike
|
I am trying to run everest.standalone.DetrendFITS on short cadence data in a Jupyter notebook. When the procedure gets to the "computing the model" step, I eventually get a message that the kernel has hung and has to restart. I am running Python 2.7.3 on a MacBook Pro (macOS 10.13.2, 2.2 GHz Intel Core i7, 16 GB memory).
It runs fine with long cadence data.
Update: I tried running it from the command line and received an error that I was out of application memory. I stopped all other applications but still got the error. Has anyone else tried running it on SC data, and do they have a tip or trick?
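For context, a minimal sketch of the kind of call being run (the file name is a placeholder, and the additional arguments used later in the thread are omitted):

```python
# Sketch only: the basic short-cadence call; without breakpoints, Everest
# ends up working with a ~100,000 x 100,000 GP covariance matrix.
import everest

everest.standalone.DetrendFITS("ktwo229151988-c102_spd-targ.fits")  # placeholder SC file
```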