Hi @GjjvdBurg
Thank you for the benchmark work; it's very thorough. However, I'm having trouble reproducing the results. Could you spare a short moment to advise me on what might be going wrong? I'd appreciate any help. The problems I've encountered are as follows.
Python
I ran into a lot of Python errors; representative tracebacks are:
```
Traceback (most recent call last):
  File "/home/---/miniconda3/envs/cpd/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/---/miniconda3/envs/cpd/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/---/TCPDBench/execs/python/cpdbench_rbocpdms.py", line 64, in wrapper
    detector = run_rbocpdms(*args, **kwargs)
  File "/home/---/TCPDBench/execs/python/cpdbench_rbocpdms.py", line 144, in run_rbocpdms
    detector.run()
  File "/home/---/TCPDBench/execs/python/rbocpdms/detector.py", line 320, in run
    self.next_run(self.data[t,:], t+1)
  File "/home/---/TCPDBench/execs/python/rbocpdms/detector.py", line 360, in next_run
    if self.CPs[t-2][-1][0] != self.CPs[t-3][-1][0]:
IndexError: list index out of range
```
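For what it's worth, this kind of IndexError occurs whenever one of the nested lists is empty, since `[-1]` cannot index a list with no elements. A minimal sketch (the values here are purely hypothetical, just mimicking the shape of `self.CPs`):

```python
# Hypothetical stand-in for self.CPs: the second entry has no change
# points recorded, so CPs[t-2][-1] fails with "list index out of range".
CPs = [[(0, 1)], []]
t = 3

try:
    CPs[t - 2][-1][0]
except IndexError as e:
    print(e)  # list index out of range
```

So the traceback is consistent with `self.CPs[t-2]` (or `self.CPs[t-3]`) being empty at that point in the run, rather than with the indices themselves being malformed.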
AND
```
Traceback (most recent call last):
  File "/home/---/miniconda3/envs/cpd/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/---/miniconda3/envs/cpd/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/---/TCPDBench/execs/python/cpdbench_rbocpdms.py", line 64, in wrapper
    detector = run_rbocpdms(*args, **kwargs)
  File "/home/---/TCPDBench/execs/python/cpdbench_rbocpdms.py", line 144, in run_rbocpdms
    detector.run()
  File "/home/---/TCPDBench/execs/python/rbocpdms/detector.py", line 320, in run
    self.next_run(self.data[t,:], t+1)
  File "/home/---/TCPDBench/execs/python/rbocpdms/detector.py", line 373, in next_run
    self.update_all_joint_log_probabilities(y, t)
  File "/home/---/TCPDBench/execs/python/rbocpdms/detector.py", line 1102, in update_all_joint_log_probabilities
    model.update_joint_log_probabilities(
  File "/home/---/TCPDBench/execs/python/rbocpdms/probability_model.py", line 260, in update_joint_log_probabilities
    self.update_alpha_derivatives(y, t,
  File "/home/---/TCPDBench/execs/python/rbocpdms/probability_model.py", line 634, in update_alpha_derivatives
    full = (self.one_step_ahead_predictive_log_loss +
ValueError: operands could not be broadcast together with shapes (5,) (0,)
```
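The `(0,)` shape in the error is telling: NumPy produces exactly this ValueError when one operand of an elementwise operation is an empty array. A hypothetical reproduction (the variable names below are just stand-ins, not the actual attributes):

```python
import numpy as np

# Stand-in for self.one_step_ahead_predictive_log_loss (5 models/lags)
a = np.ones(5)
# Stand-in for the other operand, which appears never to have been
# populated before update_alpha_derivatives runs
b = np.empty(0)

try:
    full = a + b
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes (5,) (0,)
```

So this error, too, looks like an internal array that was left empty rather than a problem with the input data itself.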
I'm also seeing some RuntimeWarnings; I'm not sure whether they're related:
```
/home/---/TCPDBench/execs/python/bocpdms/probability_model.py:526: RuntimeWarning: overflow encountered in exp
/home/---/TCPDBench/execs/python/rbocpdms/BVAR_NIG_DPD.py:1553: RuntimeWarning: invalid value encountered in scalar add
```
AND
```
/home/---/TCPDBench/execs/python/rbocpdms/BVAR_NIG.py:2876: RuntimeWarning: divide by zero encountered in log
```
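These warnings can chain together: `log(0)` raises "divide by zero encountered in log" and yields `-inf`, and arithmetic that mixes infinities then produces NaN, which is what the "invalid value encountered" warnings report. A small sketch (with the warnings silenced so only the values are shown):

```python
import numpy as np

# log(0) would emit "RuntimeWarning: divide by zero encountered in log"
# and returns -inf; subtracting -inf from -inf (i.e. -inf + inf) then
# gives NaN, the source of "invalid value encountered" warnings.
with np.errstate(divide="ignore", invalid="ignore"):
    log_val = np.log(np.array(0.0))  # -inf
    combined = log_val - log_val     # nan

print(log_val, combined)  # -inf nan
```

If that's what's happening here, the warnings would be symptoms of the same underlying issue (some quantity collapsing to zero or never being filled in) rather than an independent problem.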
My guess is that, for some reason, certain variables are not being assigned or updated correctly; I have verified that the dataset itself is complete.
R
I also found that some R tasks failed. The resulting JSON files are: