I noticed some strange behaviour in my model. The results of:
preds = model.predict([test_data_x1, test_data_x2, leaks_test])
are different from those of:
preds = model.predict([test_data_x2, test_data_x1, leaks_test])
even though the only thing that changes is the order of the arguments.
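For example, a minimal check like this (assuming `model` is the compiled Keras model and the three arrays are the test inputs above) shows the discrepancy:

```python
import numpy as np

# Minimal sketch: quantify how much the predictions change when the two
# text inputs are swapped (assumes `model` and the test arrays from above).
preds_ab = model.predict([test_data_x1, test_data_x2, leaks_test])
preds_ba = model.predict([test_data_x2, test_data_x1, leaks_test])

diff = np.abs(preds_ab - preds_ba)
print("max  |preds_ab - preds_ba|:", diff.max())
print("mean |preds_ab - preds_ba|:", diff.mean())
```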
Now, I understand that the two branches of the network can end up with different weights, but is there a way to tell when the model is working properly and when it is not?
I see the same behaviour when using another embedding (Sentence-BERT), for which I slightly modified the network like this:
Do you have any suggestions on possible strategies to solve this?
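For example, would something along these lines make sense: share a single encoder between both text inputs and combine them with an order-invariant merge (absolute difference plus elementwise product), so that swapping the inputs cannot change the prediction? This is only a rough sketch; all layer sizes and names below (text_a, text_b, leak_dim, etc.) are placeholders, not my actual network:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Placeholder sizes -- not taken from the actual network.
max_len, vocab_size, embed_dim, leak_dim = 30, 20000, 300, 3

text_a = layers.Input(shape=(max_len,), name="text_a")
text_b = layers.Input(shape=(max_len,), name="text_b")
leaks = layers.Input(shape=(leak_dim,), name="leaks")

# A single shared encoder applied to both inputs (weight sharing).
embedding = layers.Embedding(vocab_size, embed_dim)
encoder = layers.LSTM(128)
encoded_a = encoder(embedding(text_a))
encoded_b = encoder(embedding(text_b))

# Order-invariant merge: |a - b| and a * b do not change when a and b swap.
abs_diff = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([encoded_a, encoded_b])
product = layers.Multiply()([encoded_a, encoded_b])
merged = layers.Concatenate()([abs_diff, product, leaks])

hidden = layers.Dense(64, activation="relu")(merged)
output = layers.Dense(1, activation="sigmoid")(hidden)

symmetric_model = Model(inputs=[text_a, text_b, leaks], outputs=output)
```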
Thanks a lot