Continuous integration and docker auto-build #70
I got a side issue when comparing the VMEC outputs. Because of errors in using the dynamic library on Docker (see the log), I am using a Python script to compare the wout file with a reference one (see compare_VMEC.py). The reference wout file was produced on the PPPL cluster with CentOS 6 and GCC. Below is the comparison between the file produced on the Eddy cluster and the reference one.
This actually brings up difficulties in doing the regression test.
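The field-by-field comparison could be sketched roughly as follows. This is a hypothetical helper, not the actual compare_VMEC.py; it assumes the variables from the two wout files have already been read into Python dicts of arrays (e.g. via a netCDF reader):

```python
import numpy as np

def compare_fields(new, ref, rtol=1e-8, atol=0.0):
    """Report variables whose values differ beyond tolerance.

    `new` and `ref` are dicts mapping variable names (e.g. "iotaf",
    "jdotb") to arrays, as one would obtain after reading two wout files.
    Returns a dict {name: max absolute difference} for failing variables.
    """
    diffs = {}
    for key in ref:
        a = np.asarray(new[key], dtype=float)
        b = np.asarray(ref[key], dtype=float)
        if not np.allclose(a, b, rtol=rtol, atol=atol):
            diffs[key] = float(np.max(np.abs(a - b)))
    return diffs
```

A regression check would then just assert that the returned dict is empty for an agreed-upon tolerance.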
@zhucaoxiang this is very interesting. From my (also not large) experience with CI testing, your approach looks fine at first sight. GitHub Actions are the new standard way of doing things; we also use them for pyccel and other projects now, and plan to convert older Travis CI builds to them as well. About the results: one idea would be to switch off all optimization and see if there are still differences. Compiler-optimized SIMD floating-point code will introduce differences in the last digits, which may be amplified if some computation is numerically unstable. Still, the differences look a bit large; in the end this may point to a bug in the code.
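The optimization point can be seen in a toy example (not from the thread): floating-point addition is not associative, so when an optimizer reassociates or vectorizes a sum, the last digits of the result can change.

```python
import numpy as np

x = np.float64(0.1)
big = np.float64(1e16)
a = (x + big) - big   # the small term is absorbed when added to the big one first
b = x + (big - big)   # mathematically identical, different association
print(a, b)           # 0.0 0.1
```

Aggressive optimization modes (e.g. fast-math) permit exactly this kind of reordering, which is why disabling optimization is a useful diagnostic.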
Regarding the VMEC problem, I bet the quantities related to J have larger errors because of mu0. For a 1 Tesla, 1 meter configuration, errors in B on the order of machine precision will produce errors in J on the order of machine precision / mu0.
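The scale of this amplification is easy to estimate. Since J = curl(B) / mu0 in SI units, an eps-level absolute error in a ~1 T field maps to an error scale of eps / mu0 in J:

```python
import numpy as np

eps = np.finfo(np.float64).eps   # ~2.2e-16, double-precision machine epsilon
mu0 = 4e-7 * np.pi               # vacuum permeability [T m / A]
dB = eps                         # absolute error in B [T] for a ~1 T field
dJ = dB / mu0                    # corresponding error scale in J [A/m^2]
print(dJ)
```

So the 1/mu0 factor alone buys roughly six orders of magnitude of amplification over the raw rounding error in B.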
It is a great idea to set up CI for stellopt, thanks for working on this! About your remark that "GitHub Actions cannot cache numerical packages": one thing that may work for you here is to use
Back on the VMEC issue: another reason the J-related quantities may have larger errors is that J is a derivative of B, and numerical derivatives magnify noise. For some combination of this reason and the 1/mu0 factor, when VMEC is run for vacuum fields it often gives jdotb ~ O(1) in SI units. This value is small compared to typical tokamak currents (MA), so it is not necessarily a bug. Since this jdotb ~ 1 is 100% numerical error, I am not too surprised or worried by a 1e-3 difference in jdotb between different computers.
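The noise-magnification effect of differentiation can be illustrated with a toy sketch (not from the thread): central differencing of eps-level noise on a grid with spacing h amplifies the noise by roughly 1/(sqrt(2) h).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = theta[1] - theta[0]

noise = 1e-16 * rng.standard_normal(n)   # eps-level noise, as in a computed B field
# Central difference on a periodic grid:
dnoise = (np.roll(noise, -1) - np.roll(noise, 1)) / (2.0 * h)

# Ratio of noise levels before and after differentiation, ~ 1/(sqrt(2)*h):
amplification = np.std(dnoise) / np.std(noise)
print(amplification)
```

On finer grids h shrinks, so the amplification grows; combined with the 1/mu0 factor this makes a jdotb of order 1 plausible as pure numerical error.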
@landreman Thanks for the replies. I think you are right about why the J terms have such a significant difference. I am trying to compare the debug version with
As for using GitHub Actions to install the numerical packages, I did that for FOCUS. It works, but my concern is that we need so many libraries in STELLOPT; it would probably take over 10 minutes to install them all each time, and as far as I know there is no easy way to use cached packages. Building from the container, by contrast, takes less than 1 minute. We could implement both at the testing stage.
In the branch `ci`, I have implemented some initial tests for continuous integration (CI). Basically, each push and pull request will trigger an automatic build and regression test. This could be done with various tools; here, I am using `GitHub Actions` and `Docker`. It seems that GitHub Actions cannot cache numerical packages, so to avoid installing all the required packages each time, I created a docker image for compiling (`docker pull zhucaoxiang/stellopt:compile`). When triggered, it will compile the latest code and run tests using the docker image. An example log can be found here. It takes 5 minutes to compile the code.
Here are the things that I would like to discuss: `compere QAS VMEC case`. We should determine a list of regression tests that are representative and fast. Also, we have to update the check script.