Segmentation fault (core dumped) when running on my own images and point cloud? #8

Open
dangmanhtruong1995 opened this issue Jun 25, 2024 · 1 comment


@dangmanhtruong1995

Hi. I have been able to run the demo; however, when I run it on my own images and point cloud, it fails:

/home/researcher/anaconda3/envs/freereg/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/__init__.py:36: UserWarning: The environment variable OMP_NUM_THREADS not set. MinkowskiEngine will automatically set OMP_NUM_THREADS=16. If you want to set OMP_NUM_THREADS manually, please export it on the command line before running a python script. e.g. export OMP_NUM_THREADS=12; python your_program.py. It is recommended to set it below 24.
warnings.warn(
logging improved.
Overwriting config with config_version None
img_size [384, 512]
/home/researcher/anaconda3/envs/freereg/lib/python3.8/site-packages/torch/functional.py:512: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3587.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Params passed to Resize transform:
width: 512
height: 384
resize_target: True
keep_aspect_ratio: True
ensure_multiple_of: 32
resize_method: minimal
/home/researcher/anaconda3/envs/freereg/lib/python3.8/site-packages/torch/nn/modules/transformer.py:306: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance)
warnings.warn(f"enable_nested_tensor is True, but self.use_nested_tensor is False because {why_not_sparsity_fast_path}")
Using pretrained resource local::./tools/zoe/models/ZoeD_M12_NK.pt
Loaded successfully
No module 'xformers'. Proceeding without it.
ControlLDM: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
/home/researcher/anaconda3/envs/freereg/lib/python3.8/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
warnings.warn(
/home/researcher/anaconda3/envs/freereg/lib/python3.8/site-packages/transformers/modeling_utils.py:433: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
with safe_open(checkpoint_file, framework="pt") as f:
Loaded model config from [./tools/controlnet/models/control_v11f1p_sd15_depth.yaml]
Loaded state_dict from [./tools/controlnet/models/v1-5-pruned.ckpt]
Loaded state_dict from [./tools/controlnet/models/control_v11f1p_sd15_depth_ft.pth]
Global seed set to 12345
We force to use step-150 (~150 rather than 150) for our control process use 20 steps!
source-feat:['rgb_df', 'rgb_gf']
target-feat:['dpt_df', 'dpt_gf']
weight: [0.5 0.5]
we use zoe-ransac solver for source-rgb and target-dpt!
[Open3D WARNING] Read PTS: only points and colors attributes are supported.
Estimating zoe-depth for rgb on demo:
100%|██████████████████████████████████████████| 2/2 [00:00<00:00, 66576.25it/s]
50%|██████████████████████▌ | 1/2 [00:08<00:08, 8.99s/it]Segmentation fault (core dumped)

How can I make it work? Thank you very much.

@adrianJW421 commented Aug 30, 2024

I encountered the same problem, and I found that the cause is that the spconv feature extraction (spconv_feature_extract) on the dpt data preprocessed from custom data produces an all-zero matrix, unlike the demo data. This indicates that the Zoe depth estimation on the custom point cloud data failed.
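A quick way to confirm this failure mode is to count the non-zero pixels in the depth image before it reaches the feature extractor. This is a minimal sketch, not FreeReg's code; the array name and the 384 x 512 size (taken from the log above) are assumptions, so inspect whatever depth map your preprocessing actually produces:

```python
import numpy as np

def depth_map_is_empty(depth_map: np.ndarray) -> bool:
    """Report how many pixels carry depth; an all-zero map reproduces the crash above."""
    nonzero = int(np.count_nonzero(depth_map))
    print(f"non-zero depth pixels: {nonzero} / {depth_map.size}")
    return nonzero == 0

# Dummy all-zero map at the 384 x 512 resolution from the log, mimicking the failure case.
print(depth_map_is_empty(np.zeros((384, 512), dtype=np.float32)))  # -> True
```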

Upon further debugging, I found that the issue occurs during the projection of the point cloud onto the 2D depth map, which yields a depth map of all zeros. The root cause became clear after examining the relationship between my point cloud model and the camera's view frustum: with the default camera extrinsics, the view frustum deviates significantly from the point cloud model, as shown in the figure. Adjusting the input camera extrinsics so that the point cloud model stays within the camera's view frustum should therefore resolve the issue; a sketch of that check follows the figure.

[Figure: with the default camera extrinsics, the camera's view frustum lies far away from the point cloud model.]
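As a minimal sketch of that check (the file path, intrinsics, and image size below are assumptions, not FreeReg's actual values), projecting the cloud with the current extrinsics and counting how many points land inside the image shows immediately whether the default pose is the culprit:

```python
import numpy as np
import open3d as o3d

# Hypothetical input; replace with your own point cloud and camera parameters.
pcd = o3d.io.read_point_cloud("my_cloud.pts")
pts = np.asarray(pcd.points)                       # (N, 3) world coordinates

fx, fy, cx, cy = 525.0, 525.0, 256.0, 192.0        # assumed pinhole intrinsics
width, height = 512, 384                           # image size from the log above

# World -> camera extrinsics. The identity is a typical default, and is exactly
# the kind of pose that can leave the whole cloud outside the view frustum.
T_world_to_cam = np.eye(4)

pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
cam = (T_world_to_cam @ pts_h.T).T[:, :3]
z = cam[:, 2]
with np.errstate(divide="ignore", invalid="ignore"):
    u = fx * cam[:, 0] / z + cx
    v = fy * cam[:, 1] / z + cy
valid = (z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
print(f"{int(valid.sum())} / {len(pts)} points project inside the image")
# 0 here means the rendered depth map will be all zeros; adjust T_world_to_cam
# (e.g. move/rotate the camera to face the cloud's centroid) until a substantial
# fraction of points projects inside the frustum.
```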
