As you might know, most 3D Gaussian Splatting (3DGS) pipelines are initialized from a point cloud, typically produced by COLMAP/SfM. When that cloud is sparse, it's not obvious how much the initialization will limit the final result. In this post, we measure how densification and reconstruction quality (PSNR/SSIM) vary with the number of initial SfM points.
We run identical 3DGS training on a single scene, varying only the number of Gaussians at initialization (roughly 20k, 41k, 51k, 103k, and 206k). We track (1) how the number of Gaussians grows over training iterations, and (2) final reconstruction quality, measured by PSNR and SSIM.
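To run such a sweep from a single dense COLMAP reconstruction, one simple approach is to randomly subsample the full cloud down to each target size. A minimal sketch (the `subsample_init` helper and the random stand-in arrays are illustrative, not our actual pipeline, which loads COLMAP's `points3D` output):

```python
import numpy as np

def subsample_init(points, colors, n_init, seed=0):
    """Randomly subsample an SfM point cloud to n_init points,
    emulating a sparser initialization for the sweep."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=min(n_init, len(points)), replace=False)
    return points[idx], colors[idx]

# Stand-in for a dense COLMAP reconstruction (~206k points).
full_pts = np.random.rand(206_000, 3)
full_rgb = np.random.rand(206_000, 3)
for n in (20_000, 41_000, 51_000, 103_000, 206_000):
    pts, rgb = subsample_init(full_pts, full_rgb, n)
```

Subsampling the same reconstruction (rather than re-running SfM with fewer images) keeps the camera poses and point distribution comparable across runs.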
The first result is surprisingly positive: even when starting from very few points, the built-in densification in 3DGS is robust.
All runs eventually expand to roughly the same final count: about one million Gaussians.
This is exactly what you want to see if you’re bootstrapping from sparse initialization.
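For intuition, the densification rule from the original 3DGS paper clones small Gaussians and splits large ones wherever the accumulated view-space positional gradient is high. Here is a simplified sketch of that clone/split step (thresholds and the split offsets are illustrative; the reference implementation also resamples scales, carries over the other attributes, and prunes by opacity):

```python
import torch

def densify_step(means, scales, grad_accum, denom,
                 grad_thresh=2e-4, size_thresh=0.01, scene_extent=1.0):
    """Sketch of 3DGS clone/split densification on Gaussian centers."""
    # Average view-space positional gradient accumulated since the last step.
    avg_grad = grad_accum / denom.clamp(min=1)
    hot = avg_grad > grad_thresh                      # under-reconstructed regions
    big = scales.max(dim=1).values > size_thresh * scene_extent
    clone_mask = hot & ~big                           # small Gaussian: duplicate it
    split_mask = hot & big                            # large Gaussian: split in two
    kept = means[~split_mask]                         # split parents are removed
    clones = means[clone_mask]
    # Two children per split parent, offset along the Gaussian's extent
    # (the reference code samples child positions from the Gaussian itself).
    children = torch.cat([means[split_mask] + 0.5 * scales[split_mask],
                          means[split_mask] - 0.5 * scales[split_mask]])
    return torch.cat([kept, clones, children])
```

Because this rule fires wherever the photometric loss keeps pushing on a Gaussian, it keeps adding capacity until the scene is covered, regardless of how sparse the starting cloud was.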
I’ll soon publish a deep-dive on 3DGS’s built-in densification (with clean PyTorch code) — subscribe to my newsletter to get notified when it’s released.
Practical implication: if your device captures sparse points, 3DGS will still densify and produce a rich representation. You are not stuck with the initial sparsity.
Here is the catch: densification can equalize the count, but it does not fully equalize the quality. The chart below plots PSNR and SSIM at convergence as a function of initialization size.
The trend is clear: starting with more Gaussians yields better reconstruction fidelity. In our test scene, initializing with ~200k Gaussians instead of only ~25k raised final PSNR from ~26.5 dB to ~28.2 dB and SSIM from ~0.88 to ~0.908. So while training can expand the cloud, final quality doesn't fully recover if the initialization is too sparse.
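For context on those numbers: PSNR is just log-scaled MSE, so the ~1.7 dB gap corresponds to roughly an 18% reduction in RMSE (10^(-1.7/20) ≈ 0.82), which is very visible in fine texture. A minimal PSNR helper:

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a render and ground truth."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```

(SSIM is structural rather than pixel-wise; in practice we compute it with an off-the-shelf implementation rather than by hand.)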
If your capture device produces a very sparse point cloud (few viewpoints, weak SfM, sparse LiDAR, noisy depth fusion), you will likely need to densify before or during training. Below are practical options, ordered from “closest to classic 3DGS” to “initialization-free”.
Point Cloud Densification for 3D Gaussian Splatting from Sparse Input Views proposes a densification method that generates high-quality point clouds for improved initialization. Their key point: depth-map regularization methods can be sensitive to depth accuracy, so instead they construct a better point cloud to start training. (Chan et al., 2024)
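Before reaching for a dedicated method, a crude baseline for pre-training densification is to replicate each SfM point with small spatial jitter. This naive sketch is not the paper's method, and `sigma` must be tuned to the scene scale:

```python
import numpy as np

def jitter_densify(points, colors, factor=4, sigma=0.01, seed=0):
    """Replicate every point `factor` times with Gaussian jitter of scale
    `sigma` (world units), yielding a denser starting cloud."""
    rng = np.random.default_rng(seed)
    reps = np.repeat(points, factor, axis=0)
    jittered = reps + rng.normal(0.0, sigma, size=reps.shape)
    return jittered, np.repeat(colors, factor, axis=0)
```

This only thickens existing geometry; it cannot fill regions SfM missed entirely, which is where the learned methods below come in.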
MVSplat predicts a clean Gaussian scene in a single forward pass from sparse multi-view images by building a cost volume via plane sweeping. This is a strong option when SfM is unreliable or too sparse. (Chen et al., 2024)
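To see what plane sweeping does at its core: each depth hypothesis warps the source view toward the reference and scores per-pixel photometric cost, and the winning hypothesis gives depth. The toy version below does this for rectified stereo, where the warp reduces to a horizontal shift; MVSplat builds the equivalent volume over learned features across multiple views, so this is only illustrative:

```python
import numpy as np

def sweep_cost_volume(ref, src, max_disp=8):
    """Toy plane sweep for a rectified stereo pair: one cost slice per
    disparity hypothesis, then winner-take-all over hypotheses."""
    H, W = ref.shape
    vol = np.empty((max_disp, H, W))
    for d in range(max_disp):
        shifted = np.roll(src, d, axis=1)     # warp for hypothesis d
        vol[d] = (ref - shifted) ** 2         # per-pixel photometric cost
    return vol.argmin(axis=0)                 # winner-take-all disparity map
```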
AttentionGS targets initialization-free 3DGS by using structural attention: geometric attention helps recover global structure early, then texture attention refines details. This is particularly relevant when your point cloud is missing large regions or SfM is unstable. (Liu et al., 2025)
ConeGS proposes error-guided densification: it inserts Gaussians along pixel viewing cones at depth estimates from a fast proxy, rather than only cloning along existing geometry. This can improve reconstruction quality under tight Gaussian budgets (useful when memory is constrained). (Baranowski et al., 2025)
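The core operation, inserting points where rendering error is high, can be sketched as back-projecting the worst pixels at their estimated depth. This is a loose illustration in the spirit of ConeGS, not its actual algorithm; `K_inv`, `R`, and `cam_pos` are assumed camera parameters:

```python
import numpy as np

def insert_along_rays(err_map, depth, K_inv, R, cam_pos, topk=100):
    """Back-project the topk highest-error pixels at their estimated depth,
    producing 3D positions for new Gaussians."""
    H, W = err_map.shape
    flat = np.argsort(err_map.ravel())[-topk:]           # worst pixels
    ys, xs = np.unravel_index(flat, (H, W))
    pix = np.stack([xs + 0.5, ys + 0.5, np.ones_like(xs, float)], axis=1)
    rays = (K_inv @ pix.T).T                             # camera-space rays
    pts_cam = rays * depth[ys, xs][:, None]              # scale by depth
    return (R @ pts_cam.T).T + cam_pos                   # to world space
```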
If your issue is not only sparsity but also redundancy (too many unhelpful points), SDI-GS uses segmentation-driven initialization to keep structurally significant regions. They report up to ~50% Gaussian reduction while maintaining comparable PSNR/SSIM. (Li et al., 2025)
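A rough sketch of the idea (not SDI-GS itself): given per-point segment labels and a per-segment importance weight, sample a fixed point budget biased toward structurally significant segments, so flat or redundant regions give up points first:

```python
import numpy as np

def segment_guided_subsample(points, seg_ids, weights, budget, seed=0):
    """Sample `budget` points with probability proportional to each point's
    segment importance weight (weights is indexed by segment id)."""
    rng = np.random.default_rng(seed)
    p = weights[seg_ids].astype(float)
    p /= p.sum()
    idx = rng.choice(len(points), size=budget, replace=False, p=p)
    return points[idx]
```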
Dense, reliable initialization is key to 3D Gaussian Splatting. Our experiment shows a nuanced picture: built-in densification recovers the Gaussian count, but not all of the quality lost to a sparse start. If your pipeline starts from sparse SfM, the safest path is to densify the point cloud (or predict Gaussians feed-forward) before training.
Want to truly understand 3D Gaussian Splatting—not just run a repo? My 3D Gaussian Splatting Course teaches the full pipeline from first principles in PyTorch only (no C++, no CUDA). You’ll learn initialization, densification, rendering, and how to experiment with recent papers.
Explore the Course →