The paper states: "we use a Bayesian update to update the label probabilities at each voxel". In the experimental part, however, it looks like the point cloud is generated directly from the segmented RGB image and the depth image and then passed to voxblox for semantic reconstruction, and I did not see the Bayesian update being applied anywhere. My understanding is that the experiment simply replaces the ordinary RGB images with semantic images and does not perform any semantic fusion. Is there an experiment covering this part?
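For reference, this is the kind of per-voxel fusion I was expecting from the paper's description. It is only a minimal sketch of my own understanding, not the actual voxblox code; the label count (`kNumLabels`), the match probability, and the struct/function names are all made up for illustration:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Hypothetical sketch of a per-voxel Bayesian label update (not the real
// voxblox implementation). Each voxel stores log-probabilities over N labels;
// every new semantic observation multiplies in a measurement likelihood
// (addition in log space), and the result is renormalized.
constexpr int kNumLabels = 21;  // assumed label count, e.g. a Pascal-style set

struct SemanticVoxel {
  // Zero-initialized log-probabilities = uniform (unnormalized) prior.
  std::array<float, kNumLabels> log_prob{};
};

// match_prob: assumed probability that the observed label is correct;
// the remaining mass is spread uniformly over the other labels.
void bayesianLabelUpdate(SemanticVoxel* voxel, int observed_label,
                         float match_prob = 0.8f) {
  const float miss_prob = (1.0f - match_prob) / (kNumLabels - 1);
  for (int l = 0; l < kNumLabels; ++l) {
    // Bayes rule: posterior ∝ prior * likelihood, done additively in log space.
    voxel->log_prob[l] += std::log(l == observed_label ? match_prob : miss_prob);
  }
  // Renormalize via log-sum-exp to avoid numerical underflow over many updates.
  const float max_lp =
      *std::max_element(voxel->log_prob.begin(), voxel->log_prob.end());
  float sum = 0.0f;
  for (float lp : voxel->log_prob) sum += std::exp(lp - max_lp);
  const float log_norm = max_lp + std::log(sum);
  for (float& lp : voxel->log_prob) lp -= log_norm;
}
```

If the pipeline only colors voxels with the latest segmented image instead of accumulating evidence like this, conflicting predictions from different frames would never be reconciled, which is why I am asking whether such a fusion experiment exists.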