
Floating point imprecision #87

Open
edwardwliu opened this issue Apr 6, 2023 · 3 comments
Labels: bug (Something isn't working), C++ (C++ changes)

edwardwliu (Collaborator) commented Apr 6, 2023

When passing a data frame containing 0 values during training, the resulting processed_x occasionally replaces 0 with 1.387e-17. This is likely due to floating-point behavior in either the C++ logic or the wrapper APIs, and it has also led to flakiness in some exact tests.
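For illustration, a minimal sketch (hypothetical data, not the package's actual test code) of how such a ~1e-17 residue makes an exact comparison flaky while a tolerance-based comparison still passes:

```python
import numpy as np

# Hypothetical: a column that should contain exact zeros after preprocessing,
# but carries tiny floating-point residue instead.
processed_x = np.array([0.0, 1.387e-17, 0.25, 0.5])
expected = np.array([0.0, 0.0, 0.25, 0.5])

# An exact comparison fails on the residue ...
print(np.array_equal(processed_x, expected))           # False

# ... while a tolerance-based comparison treats 1.387e-17 as zero.
print(np.allclose(processed_x, expected, atol=1e-12))  # True
```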

edwardwliu (Collaborator, Author) commented

It's possible this is related to scaling, per Theo.
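A minimal sketch (hypothetical data, not this package's scaling code) of how a center/scale round trip alone can turn an exact 0.0 into a tiny nonzero residue:

```python
import numpy as np

x = np.array([0.0, 0.1, 0.2, 0.3])
mean, std = x.mean(), x.std()

x_scaled = (x - mean) / std          # forward transform
x_restored = x_scaled * std + mean   # inverse transform

# Typically prints a tiny nonzero value (on the order of 1e-17) rather than 0.0;
# the exact residue depends on the data and the order of operations.
print(x_restored[0])
```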

petrovicboban (Contributor) commented
@edwardwliu what's the status of this?

@petrovicboban petrovicboban added the bug Something isn't working label May 2, 2023
@petrovicboban petrovicboban added the Python Python changes label May 11, 2023
@edwardwliu edwardwliu added C++ C++ changes and removed Python Python changes labels Jun 14, 2023
edwardwliu (Collaborator, Author) commented
Another issue that may be related to precision: when predicting over the same dataset used in training, sometimes not all leaves are used.

There has been one case with honesty=TRUE and scale=TRUE where the number of unique leaves != the number of unique predictions when predicting over the data used for averaging.
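A small diagnostic sketch (hypothetical values, not this package's API) of that mismatch, and of how rounding away sub-precision residue collapses the counts back together:

```python
import numpy as np

# Leaf index reached by each observation in the averaging set, and the
# corresponding predictions. One prediction differs only by ~1e-16 residue.
leaf_ids = np.array([3, 3, 7, 7, 12])
preds = np.array([0.5, 0.5, 0.8, 0.8 + 1e-16, 1.1])

n_unique_leaves = len(np.unique(leaf_ids))
n_unique_preds = len(np.unique(preds))
print(n_unique_leaves, n_unique_preds)  # 3 vs 4: a precision artifact splits one leaf

# Rounding before comparing merges values that differ only by residue.
print(len(np.unique(np.round(preds, 12))))  # 3
```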
