real-case: resin toy wizard reconstruction. All better-quality cases are in the end sections! (In Chinese: currently the most reliable way to extract a surface/mesh along the Gaussian-splatting route. The good results are in the last part.) #34
Hello @yuedajiong, I'm happy to see SuGaR applied to more datasets! I suppose this is the coarse mesh, so its colors are just the vertex colors output by the Poisson algorithm, which are quite bad. Also, for rendering the mesh with a traditional texture, a good practice to get something close to your training images is to apply an Emission shader to the texture, or to remove all light sources and just use ambient lighting with intensity 1. For centered objects such as figures, SuGaR should work really well, even if the object has many details. I showcased two robots reconstructed like this in the presentation video, but I also tried other similar figures and the results are generally very nice. Here are some examples of meshes with a traditional UV texture:
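For reference, here is a minimal Blender (bpy) sketch of the Emission-shader setup described above; the active object, material name, and texture path are hypothetical placeholders, not part of SuGaR:

```python
import bpy

# Assumes the SuGaR mesh is the active object and a UV texture exists
# at the (hypothetical) path below.
obj = bpy.context.active_object
mat = bpy.data.materials.new("EmissiveTexture")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("/path/to/uv_texture.png")
emit = nodes.new("ShaderNodeEmission")
out = nodes.new("ShaderNodeOutputMaterial")

# The texture color drives the Emission shader, so the mesh renders with
# its baked colors regardless of scene lighting.
links.new(tex.outputs["Color"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])
obj.data.materials.append(mat)
```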
@Anttwo great master: coarse, and density-mode, not sdf-mode. I am training and adjusting now; I will update once better results are generated, in minutes or hours.
You're right, density-mode actually works better for this kind of scene. I can't wait to see your final result! 😃
@Anttwo please refer to graphdeco-inria/gaussian-splatting#541. I will merge that logic into your code tomorrow and show the result to you.
For the surface reconstruction of everyday objects, I make no adjustments specific to particular objects or data. I have tried almost all well-known non-GS approaches, but the quality of their reconstructions is quite unsatisfactory. I have also attempted some GS approaches, but they cannot even reconstruct a very basic mesh.
(still adjusting ...)
Thank you for your feedback! Indeed, I'm quite surprised by your results, as I have experimented with many custom datasets on my side, and the output mesh is generally good, even with very detailed objects like a telecom tower (I will put some qualitative results on the webpage). May I ask for more details about your dataset? Concerning normals, it is generally just a matter of convention during reconstruction; multiplying the normals by -1 should do the job (though for many programs like Blender, it doesn't change anything). Looking forward to your reply! Edit: Here is a test I just made on a new scene which is, just like yours, a figure centered in the scene (actually a little challenging because of the strong black color and strong specularity on the figure). Looking at your scene, you should be able to get a similar quality:
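As a side note, flipping the normals is essentially a one-liner; here is a hedged sketch using trimesh (the file names are hypothetical):

```python
import trimesh

# Load the coarse mesh (hypothetical file name).
mesh = trimesh.load("sugarmesh_coarse.ply")
# Reverse the face winding; most renderers derive normals from winding
# order, so this is the robust way to "multiply the normals by -1".
mesh.invert()
mesh.export("sugarmesh_coarse_flipped.ply")
```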
I am checking my steps, including data capturing, data processing, and training, especially data processing. I also believe and hope to reconstruct my data into a beautiful result similar to yours. My data (video), my images (240 frames, all frames). I will first try your data, such as the lego scene. Doing ...
@yuedajiong the model vibrates during acquisition, so it becomes a dynamic scene |
@cdcseacave Thanks. |
I used a lion resin toy with almost perfect COLMAP poses, and the mesh output is very good.
Native mask logic in the algorithm (both the GS and SuGaR parts) is necessary.
@ygtxr1997
@yuedajiong Hi, I am facing the same problem as you. May I ask where the specific modifications are needed to incorporate mask or transparency information into the training?
@yuedajiong Actually, I did 1, but the reconstructed mesh has a lot of background noise, which leads to poor quality for the foreground. Also, I found that GS was originally able to handle black backgrounds, but the mesh-generation process in SuGaR has no concept of a background. It seems that we can only consider adding masks during the training process. Have you tried it? Also, can we communicate through QQ? I have sent an email to you.
@miaowu99 If you want to do this thoroughly, targeting object reconstruction with commercial-grade results: use a mask or RGBA. If you just want things slightly better, you can use various rules to clean away the points around the core object (e.g., points far from the point-cloud center, or very sparse points); if you are not confident, you can alternate between cleaning and refine-training. Even the SuGaR author, this handsome PhD, plans to add mask support. (What I implemented myself is RGBA, by modifying the CUDA code.)
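For illustration, a hedged sketch of the rule-based cleanup described above, using Open3D on exported Gaussian centers; the thresholds and file names are assumptions, not SuGaR code:

```python
import numpy as np
import open3d as o3d

# Load the exported Gaussian centers (hypothetical file name).
pcd = o3d.io.read_point_cloud("gaussian_centers.ply")
pts = np.asarray(pcd.points)

# Rule 1: keep points within a radius of a robust (median) center.
center = np.median(pts, axis=0)
dist = np.linalg.norm(pts - center, axis=1)
radius = 2.5 * np.median(dist)  # illustrative threshold
pcd = pcd.select_by_index(np.where(dist < radius)[0].tolist())

# Rule 2: drop sparse/isolated points via statistical outlier removal.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

o3d.io.write_point_cloud("gaussian_centers_clean.ply", pcd)
```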
RGBA as mask, roughly:
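(The original snippet did not survive in this thread; what follows is a hedged sketch, not the author's actual CUDA patch. The idea is to use the alpha channel of RGBA training images to mask the photometric loss so the background contributes nothing.)

```python
import torch

def masked_l1(rendered_rgb: torch.Tensor, gt_rgba: torch.Tensor) -> torch.Tensor:
    """rendered_rgb: (3, H, W); gt_rgba: (4, H, W) with alpha in [0, 1]."""
    gt_rgb = gt_rgba[:3]                            # RGB channels of ground truth
    alpha = gt_rgba[3:4]                            # (1, H, W) foreground mask
    diff = (rendered_rgb - gt_rgb).abs() * alpha    # zero out background pixels
    return diff.sum() / alpha.sum().clamp(min=1.0)  # average over foreground only
```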
To achieve accurate mesh and texture estimation, you should try implicit-SDF baking approaches such as BakedSDF; for accurate meshes, 3D Gaussians currently still cannot beat the implicit-SDF (NeuS/VolSDF-style) approaches.
Could you please share your code for the mask-aware refine?
@yangqing-yq
https://github.com/fudan-zvg/4d-gaussian-splatting (arXiv 2310.10642): Real-time Photorealistic Dynamic Scene Representation and Rendering with 4D Gaussian Splatting
https://github.com/VITA-Group/4DGen (arXiv 2312.17225): 4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency
coarse density:
(the best way to reconstruct a surface based on GS, so far)
one of the inputs:

8000 iterations: (from 7000)

sugarmesh_3Dgs7000_densityestim02_sdfnorm02_level03_decim1000000.zip
15000 iterations: (still no refine step)

Some better-quality cases are in the end sections!