Can we use this type of regularization to flatten the Gaussian point ball? #3
Comments
Hello yuedajiong, I'm not sure I get your question right; could you give me more details about what you want to know? Best |
Hi Anttwo: the pseudo code above is meant to show that, with 1+9=10, a 1:9 split is better than 5:5; which component ends up as the 1 and which as the 9 is optimized by the main GS network. |
Another related solution, FYI (with normals): Differentiable-Surface-Splatting-for-Point-based-Geometry-Processing |
Hello yuedajiong, So you mean that you want to flatten the Gaussians by enforcing one of the three scaling factors to be close to zero. We actually tried some loss terms that explicitly enforce the smallest scaling factor to be close to 0 and flatten the Gaussians. Actually, we found that our regularization terms (either using density or SDF, as explained in the paper) naturally enforce Gaussians to flatten, in a non-destructive way. But still, there may be a better regularization term to craft! |
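A minimal sketch of such an explicit flattening term (assuming a hypothetical (N, 3) tensor named scales holding the per-axis scaling factors of N Gaussians; the name and layout are illustrative, not the repository's actual API):

import torch

def flatten_loss_l2(scales: torch.Tensor) -> torch.Tensor:
    # Hypothetical explicit flattening term: push the smallest scaling
    # factor of each Gaussian toward 0 with a squared penalty, so each
    # ellipsoid collapses toward a disc.
    s_min = scales.min(dim=1).values  # smallest axis per Gaussian, shape (N,)
    return s_min.square().mean()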
Since each Gaussian splat has a full 3D rotation, isn't it enough for successful training to always use only two scales and hard-code the third to 0 or a very small value? Did you try this approach, and do you know how it compares to your implementation? |
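A minimal sketch of that idea, under the same hypothetical layout: learn only two scaling factors per Gaussian and freeze the third to a tiny constant, relying on the full 3D rotation to orient the resulting disc.

import torch

class FlatGaussianScales(torch.nn.Module):
    # Hypothetical parameterization: two learnable axes per Gaussian,
    # third axis hard-coded to eps so every Gaussian is (almost) a 2D disc.
    def __init__(self, num_gaussians: int, eps: float = 1e-6):
        super().__init__()
        self.log_scales_2d = torch.nn.Parameter(torch.zeros(num_gaussians, 2))
        self.eps = eps

    def forward(self) -> torch.Tensor:
        scales_2d = torch.exp(self.log_scales_2d)                # two free axes, (N, 2)
        flat_axis = torch.full_like(scales_2d[:, :1], self.eps)  # frozen third axis, (N, 1)
        return torch.cat([scales_2d, flat_axis], dim=1)          # (N, 3)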
Thanks @Anttwo |
NeuSG: Neural Implicit Surface Reconstruction with 3D Gaussian Splatting Guidance. And my understanding: the Gaussian points cannot be very large; they should be like fish scales. |
NeuSG used ||.||_1 (I am not sure this is proper, because I tried something similar; of course, these guys jointly used an SDF). Me: I tried another flattening regularization, similar to the one I mentioned above (still falling short of expectations). Maybe you are right: most simple/direct regularizations are too destructive. |
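As I read it, the NeuSG-style term is the L1 flavor of the same idea: penalize |min(s1, s2, s3)| averaged over all Gaussians. This is a sketch of my reading, not NeuSG's code, and NeuSG additionally relies on SDF guidance, which is not shown here:

import torch

def flatten_loss_l1(scales: torch.Tensor) -> torch.Tensor:
    # L1 penalty on the smallest scaling factor of each Gaussian,
    # averaged over the N Gaussians; scales is a hypothetical (N, 3)
    # tensor of positive per-axis scaling factors.
    return scales.min(dim=1).values.abs().mean()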
import torch
x = torch.tensor([1.0], requires_grad=True)  # internal auxiliary parameter for X
y = torch.tensor([1.1], requires_grad=True)  # internal auxiliary parameter for Y
optimizer = torch.optim.SGD([x, y], lr=1.0)
for i in range(100):
    X = torch.sigmoid(x)  # component_x @ scale
    Y = torch.sigmoid(y)  # component_y @ scale
    reg1 = torch.nn.functional.mse_loss(X + Y, torch.tensor([1.0]))  # enforce X + Y = 1
    reg2 = 1. / (X*X + Y*Y)  # reward an uneven split: one component >> the other(s)
    mse0 = 1.0  # stand-in for the main GS optimization loss (dummy)
    loss = reg1 * 10. + reg2 + mse0
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print("i=%02d X=%.2f Y=%.2f reg1=%.4f reg2=%.4f loss=%.4f" % (i, X.item(), Y.item(), reg1.item(), reg2.item(), loss.item()))