Error in inference - not enough values to unpack (expected 2, got 0) #115
Comments
Were you able to solve this? I too am using a 3080 Ti and am facing this same issue when running with CUDA 11.3.
Hello, I wasn't able to solve this on Windows, but I managed to make it work on Linux Ubuntu 22.04 (I don't think DROID-SLAM can work on Windows). I installed CUDA 12.2 from the NVIDIA website and used PyTorch with CUDA 12.1, installed with pip3 following the official PyTorch website. For the environment I used the virtualenv package from pip instead of conda, with Python 3.8. I installed the rest of the packages in the environment with pip commands. Additionally, I installed the ninja package.
@FlorinM25 I tried with pytorch=2.1.1, cuda=12.1 and python=3.8, but I got a libcudart error.
I'm getting the same unpack error because the distance comparison at https://github.com/princeton-vl/DROID-SLAM/blob/main/droid_slam/factor_graph.py#L322 comes back as close to zero and gets set to …
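The unpack error itself is easy to reproduce outside of DROID-SLAM: `torch.Tensor.unbind` returns a tuple with one element per slice along the given dim, so when the edge list `es` is empty the tuple is empty and unpacking it into `(ii, jj)` fails. A minimal pure-Python sketch of that behaviour (the helper below mimics `unbind(dim=-1)` on a list of `(i, j)` pairs and is not DROID-SLAM code):

```python
def unbind_last_dim(rows):
    """Mimic torch.as_tensor(rows).unbind(dim=-1) for a list of (i, j) pairs."""
    if not rows:
        return ()               # empty tensor -> empty tuple of columns
    return tuple(zip(*rows))    # columns of the (N, 2) array

try:
    # es is empty -> there are no columns to unpack into ii and jj
    ii, jj = unbind_last_dim([])
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 0)

# With a non-empty edge list, unpacking works:
ii, jj = unbind_last_dim([(0, 1), (1, 2)])
print(ii, jj)  # (0, 1) (1, 2)
```

So the error is a symptom, not the cause: something upstream (datapath, motion filter, etc.) is leaving the factor graph with no edges.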
Were you able to solve this? |
Would also like an update on this! |
I initially encountered this issue as well, but later discovered that it was indeed due to a problem with the datapath. For example, the script provided by the author uses the path 'TUM-RGBD', but in my case the folder was actually named 'TUM_RGBD'. I wonder if anyone else is facing a similar issue? My running environment is: Ubuntu 20.04, RTX 3090, Python 3.9, PyTorch 1.10, CUDA 11.3. I installed the environment using the yaml file content provided by Yaxun-Yang in #28.
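A hyphen/underscore mismatch like the one above can fail silently, so it is worth validating the path before launching the demo. A small sketch of such a check (`check_datapath` and the example path are illustrative, not part of DROID-SLAM's scripts):

```python
import os

def check_datapath(datapath):
    """Return True if the dataset directory exists; print a hint otherwise."""
    if os.path.isdir(datapath):
        return True
    print(f"datapath does not exist: {datapath!r} "
          f"(check hyphen vs underscore in the folder name)")
    return False

# Prints a hint if the folder on disk is actually named TUM_RGBD:
check_datapath("datasets/TUM-RGBD")
```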
Hello! |
In my case it was caused by video.counter.value == 1 when the DROID backend was invoked. The reason is that the camera pose shifts in my dataset are so minor that they fall below the motion filter threshold (args.filter_thresh), so no frames were added during the tracking process.
Hello Xichong, |
Setting args.filter_thresh to a smaller number can get the program running. If you know your sequence is monocular, you can skip this step and manually set the extrinsic motion sequence to identity matrices (I assume you are estimating the camera motions for an outer project).
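The motion-filter behaviour described above can be sketched as follows (a hedged toy model, not DROID-SLAM's actual implementation: a frame is only kept when its mean flow magnitude against the last kept frame exceeds `filter_thresh`, so a near-static sequence never grows the buffer past the first frame and the backend later finds no edges):

```python
def track(flow_magnitudes, filter_thresh=2.5):
    """Return the number of frames kept for a sequence of mean flow values."""
    kept = 1  # the first frame is always stored
    for mag in flow_magnitudes:
        if mag > filter_thresh:
            kept += 1
    return kept

# Tiny camera motion: nothing passes the threshold, buffer stays at 1 frame.
print(track([0.3, 0.5, 0.4]))                       # 1
# Lowering filter_thresh admits the frames again.
print(track([0.3, 0.5, 0.4], filter_thresh=0.25))   # 4
```

This is why lowering args.filter_thresh unblocks near-static sequences: more frames clear the threshold and the factor graph gets edges to work with.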
Hello,
Firstly, thank you very much for this amazing project!
When I try to run some demos with the commands presented in the README file, I always get this error:

```
ii, jj = torch.as_tensor(es, device=self.device).unbind(dim=-1)
ValueError: not enough values to unpack (expected 2, got 0)
```

(The full terminal output was attached as a screenshot.)

While the demo is running and the images are iterated, the Open3D window opens but nothing appears in it. After some debugging in the factor_graph.py file, I noticed that the tensors `ii` and `jj` are `[0]` for the whole run, and the `es` array is always empty.

I tried using the `--reconstruction_path` flag to save the reconstruction files. I get disps.npy, images.npy, intrinsics.npy, poses.npy and tstamps.npy. The .npy files have some values in them, but I doubt they are correct, judging by what the disps.npy file looks like.

I also tried to disable visualization as suggested in issue #76 with the `--disable_vis` flag, but the process just stops after some iterations. In issue #13 a datapath is mentioned, but I am not sure what it refers to.

I am working on Windows in a virtualenv in which I installed PyTorch 2.1.1 with CUDA 11.8 (I tried torch 1.10 and CUDA 11.3, but the same error occurred). The GPU I tested on was a 3080 Ti with 12 GB of VRAM. I assume this is a CUDA-related issue, but I am not sure in what way.
I hope someone can help me fix my errors. Thank you!