Strange behaviour when using custom loss function #342
-
Hi! The custom loss is included below. The idea is to apply a penalty to the loss if the expression blows up when evaluated near zero.
The data used should be here: x_data__.txt and y_data__.txt. The target I was aiming for (first column of y_data) is the function where
I'm also including a Hall of Fame that was seen in such a case: bizarre_hof.csv. I did not open this as a bug report because I'm not sure whether it is my faulty custom loss that is causing the issue or a PySR problem. Thanks!
-
Two quick comments:

1. `fill(5f-4, 3, 1)` creates an array with shape `(3, 1)` whose entries are all $5 \times 10^{-4}$, which is what the `concat` array in your code should be:

   ```julia
   concat = fill(5f-4, 3, 1)
   ```
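To make the shape comment concrete, here is what `fill` produces (plain Julia, no PySR needed):

```julia
concat = fill(5f-4, 3, 1)  # 3×1 Matrix{Float32}, every entry 5.0f-4

size(concat)     # (3, 1)
eltype(concat)   # Float32
concat[1, 1]     # 5.0f-4
```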
2. The `completed` flag returned by `eval_tree_array` is important, as it tells you whether the evaluation encountered a NaN or not. So I would do:

   ```julia
   p, completed = eval_tree_array(tree, concat, options)
   penalty = (!completed || p[1] >= 50) ? 1.5 : 1.0
   ```

You should also check the first `flag` and return a large penalty if it is false:
```julia
if !flag || !completed
    return 1000f0
end
```

and then just compute the multiplicative penalty from …
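Putting the two checks together, the penalty logic can be factored into a small pure function. This is only a sketch: `blowup_penalty` is a hypothetical name, and its arguments `completed` and `p1` stand in for the flag and `p[1]` returned by `eval_tree_array` in the snippet above.

```julia
# Hypothetical helper: decides the multiplicative penalty from the
# near-zero evaluation. `completed` is false when eval_tree_array hit a
# NaN; `p1` is the first evaluated value near zero.
function blowup_penalty(completed::Bool, p1::Float32)::Float32
    # 1.5x penalty if evaluation failed or the value blew past 50
    (!completed || p1 >= 50) ? 1.5f0 : 1.0f0
end

blowup_penalty(true, 3f0)    # 1.0f0: well-behaved near zero
blowup_penalty(false, 3f0)   # 1.5f0: evaluation encountered a NaN
blowup_penalty(true, 80f0)   # 1.5f0: expression blows up near zero
```

The returned factor would then multiply the base loss, so a well-behaved expression is left untouched while a blowing-up one is handicapped rather than discarded outright.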