
What is the difference between $\hat{f}$ and $\bar{f}$? #9

Open
mkocabas opened this issue Dec 5, 2023 · 2 comments

Comments

@mkocabas

mkocabas commented Dec 5, 2023

Hi @Anttwo,

Congratulations on your preprint! Results look great. I have a couple of questions:

  1. I'm a bit confused about $\hat{f}$ and $\bar{f}$. What is the difference between them? Is $\hat{f}$ an approximation/efficient version of $\bar{f}$?
  2. How many points do you sample per Gaussian? Specifically, I am asking about the number of points sampled to obtain the set $\mathcal{P}$.
  3. How do you compute the depth given multiple Gaussians with different $\alpha_g$?
@Anttwo
Owner

Anttwo commented Dec 8, 2023

Hi mkocabas,

Thank you so much for your kind words!

Here are some answers to your questions:

  1. The SDF $\bar{f}$ is an ideal SDF that holds only in the ideal case where the Gaussians are flat, opaque, and well spread over the surface. The estimator $\hat{f}$, on the contrary, aims to approximate the real SDF of the current scene, i.e., the distance to the surface represented by the Gaussian Splatting currently being optimized, which can be observed in the depth maps (so in theory this SDF always holds). Our strategy is to minimize the difference between the two, which pushes the Gaussians toward the ideal scenario (flat, opaque, and well-spread Gaussians). Please note that we recently updated the paper on arXiv, as we noticed a minor typo in the definition of $\bar{f}$; we now formally introduce a more general SDF $f$ to better clarify this point.
  2. We tried (and will propose) different numbers of samples in the code, as this parameter influences both the training speed and the quality of the regularization. In the paper, we simply use 1,000,000 samples per training iteration.
  3. I understand your concern, as Gaussians with $\alpha_g < 1$ may produce meaningless depth maps. As we explain in the paper, we also add an entropy term on the opacities to the regularization, which pushes opacities toward binary values (either 0 or 1). This assumption is natural, since we want to extract an opaque mesh from the Gaussian Splatting. Consequently, we avoid semi-transparent Gaussians, and computing depth maps makes more sense. Please refer to issue About calculation of depth map #8 for more details about depth computation.
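To illustrate point 3, here is a minimal sketch of a binary-entropy term on opacities (the function name and exact formulation are illustrative assumptions, not the repository's actual code): minimizing this quantity drives each $\alpha_g$ toward 0 or 1, which is why semi-transparent Gaussians disappear during optimization.

```python
import numpy as np

def opacity_entropy(alphas, eps=1e-8):
    """Mean binary entropy of Gaussian opacities.

    Minimizing this term as a regularizer pushes each opacity toward
    0 (Gaussian effectively removed) or 1 (fully opaque), so the
    rendered depth maps correspond to an opaque surface.
    """
    a = np.clip(alphas, eps, 1.0 - eps)  # avoid log(0)
    return float(np.mean(-a * np.log(a) - (1.0 - a) * np.log(1.0 - a)))

# Near-binary opacities yield low entropy; alpha = 0.5 is the maximum.
binary_like = np.array([0.01, 0.99, 0.999])
ambiguous = np.array([0.5, 0.5, 0.5])
```

For example, `opacity_entropy(ambiguous)` equals $\ln 2 \approx 0.693$, while `opacity_entropy(binary_like)` is close to zero, so gradient descent on this term favors the binary configuration.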

I hope my message provides the answers you need!
If not, of course, feel free to ask additional questions.

Best!

@mkocabas
Author

mkocabas commented Dec 8, 2023

Thanks @Anttwo for the detailed answer. 1 and 3 are pretty clear, thanks!

Regarding 2, could you give more details on how you sample these 1M points? Do you sample a fixed number of points per Gaussian, e.g. $10^6/n_\text{g}$?
