May I ask how I can control the size of the reconstructed model when using this code for mesh reconstruction? I tried the camera-parameter loading approach from the NeuS code, adjusting `scale_mat` to control the size of the generated mesh, but the generated mesh always comes out smaller than the ground truth. Do you have any suggestions? Thanks a lot!
# scale_mat normalizes the scene into the unit sphere; world_mat is the world-to-image projection
scale_mats = [camera_dict['scale_mat_%d' % idx].astype(np.float32) for idx in range(self.n_images)]
world_mats = [camera_dict['world_mat_%d' % idx].astype(np.float32) for idx in range(self.n_images)]
for scale_mat, world_mat in zip(scale_mats, world_mats):
    # scale_mat[:3, :4] *= 1.2
    P = world_mat @ scale_mat
    P = P[:3, :4]
    # decompose the projection matrix into intrinsics and camera pose
    intrinsics, pose = load_K_Rt_from_P(None, P)
    self.intrinsics_all.append(torch.from_numpy(intrinsics).float())
    self.pose_all.append(torch.from_numpy(pose).float())
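For what it's worth, one common cause of an undersized mesh is that the extracted mesh lives in the normalized unit-sphere space, so it needs to be mapped back to world coordinates with `scale_mat` rather than by tweaking `scale_mat` before training. Below is a minimal sketch of that post-processing step, assuming the usual NeuS convention that `scale_mat` is a uniform scale plus translation (`world = scale * normalized + offset`); the function name `mesh_to_world` is just an illustration, not part of the codebase:

```python
import numpy as np

def mesh_to_world(vertices, scale_mat):
    """Map mesh vertices from normalized (unit-sphere) space to world space.

    Assumes scale_mat has the NeuS-style form:
        [[s, 0, 0, tx],
         [0, s, 0, ty],
         [0, 0, s, tz],
         [0, 0, 0,  1]]
    """
    scale = scale_mat[0, 0]       # uniform scale factor
    offset = scale_mat[:3, 3]     # translation of the scene center
    return vertices * scale + offset

# Example: a vertex at (1, 0, 0) in normalized space, scene scale 2,
# scene center at (1, 2, 3) in world coordinates.
scale_mat = np.diag([2.0, 2.0, 2.0, 1.0]).astype(np.float32)
scale_mat[:3, 3] = [1.0, 2.0, 3.0]
verts = np.array([[1.0, 0.0, 0.0]], dtype=np.float32)
world_verts = mesh_to_world(verts, scale_mat)  # → [[3., 2., 3.]]
```

If the mesh is still smaller than the ground truth after this transform, the mismatch is more likely in how `scale_mat` was computed for the dataset than in the reconstruction itself.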