training with preprocessed data failed #3

@Alen-Wong

Description

Thank you for this excellent work. I encountered the following error when training with preprocessed data; how can I resolve it?

Optimizing 
Output folder: ./output/3765832a-c
Reading camera 8/8
Loading Training Cameras
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K.
 If this is not desired, please explicitly specify '--resolution/-r' as 1
Loading Test Cameras
Number of points at initialisation :  2108
Training progress:   6%|▊            | 600/10000 [01:10<17:07,  9.15it/s, Loss=0.2673802]Traceback (most recent call last):
  File "train.py", line 322, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "train.py", line 223, in training
    gaussians.densify_and_prune(opt.densify_grad_threshold, 0.005, scene.cameras_extent, size_threshold)
  File "/media/data_nix/wangzihao/code/depth-aware-3DGS/scene/gaussian_model.py", line 393, in densify_and_prune
    self.densify_and_clone(grads, max_grad, extent)
  File "/media/data_nix/wangzihao/code/depth-aware-3DGS/scene/gaussian_model.py", line 384, in densify_and_clone
    new_scaling = self._scaling[selected_pts_mask]
IndexError: The shape of the mask [2108] at index 0 does not match the shape of the indexed tensor [1, 3] at index 0
Training progress:   6%|▊            | 600/10000 [01:10<18:18,  8.56it/s, Loss=0.2673802]
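The IndexError says a boolean mask of length 2108 (one entry per Gaussian point) is being used to index a tensor of shape [1, 3], so `_scaling` apparently holds a single row instead of one row per point. A minimal sketch of that mismatch, using NumPy in place of PyTorch (the names `selected_pts_mask` and `_scaling` follow the traceback; the [1, 3] shape as a never-expanded default is an assumption about the failure state, not the repository's actual code):

```python
import numpy as np

N_POINTS = 2108  # "Number of points at initialisation" from the log

# densify_and_clone builds a boolean mask with one entry per point.
selected_pts_mask = np.zeros(N_POINTS, dtype=bool)
selected_pts_mask[:10] = True

# If _scaling were sized per point ([N, 3]), the indexing would work:
scaling_ok = np.ones((N_POINTS, 3))
assert scaling_ok[selected_pts_mask].shape == (10, 3)

# But a [1, 3] tensor (e.g. a default that was never expanded to one
# row per point) no longer matches the mask along dimension 0:
scaling_bad = np.ones((1, 3))
try:
    scaling_bad[selected_pts_mask]
except IndexError as e:
    print("IndexError:", e)
```

So the question is likely why `_scaling` (and possibly its siblings like `_rotation`) ends up with a leading dimension of 1 rather than 2108 after the preprocessed data is loaded.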
