Memory-quality Trade-off

We additionally explore quantization methods and the resolution-memory trade-off to search for an optimal configuration. We first randomly sampled 5 scenes from Co3D. All of these experiments will be added to our paper. We provide pre-trained checkpoints for the configurations in the first column of each table.

Experiment 1) Resolution-quality trade-off

We compare reconstruction quality while varying the grid resolution: 64, 128, 256, and 384. We follow the setup from our paper and report the required memory and the reconstruction metrics (PSNR, SSIM, LPIPS).

| Resolution | Memory (MB/scene) | PSNR | SSIM | LPIPS |
|---|---|---|---|---|
| 64 | 38.0 | 26.56 | 0.7464 | 0.4827 |
| 128 | 47.2 | 29.39 | 0.8081 | 0.4017 |
| 256 | 63.2 | 30.81 | 0.8551 | 0.3353 |
| 384 | 161.2 | 31.03 | 0.8619 | 0.3202 |

The setup with resolution 256 shows the best trade-off between memory and quality.
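For intuition on why the memory footprint grows with resolution, the sketch below gives a rough back-of-the-envelope estimate for a sparse voxel grid that, as in Plenoxels, stores 27 spherical harmonics coefficients plus one density per occupied voxel. The occupancy fraction and bytes per value are hypothetical illustration parameters, not values measured from our checkpoints, so the printed numbers will not match the table above.

```python
# Rough, illustrative estimate of sparse voxel-grid memory vs. resolution.
# Assumes 27 SH coefficients + 1 density per occupied voxel (as in Plenoxels);
# `occupancy` and `bytes_per_value` are hypothetical knobs, not measured values.
def grid_memory_mb(resolution: int, occupancy: float = 0.03, bytes_per_value: int = 1) -> float:
    values_per_voxel = 27 + 1                # SH coefficients + density
    occupied = occupancy * resolution ** 3   # voxels surviving pruning
    return occupied * values_per_voxel * bytes_per_value / 2 ** 20

for res in (64, 128, 256, 384):
    print(res, round(grid_memory_mb(res), 1))
```

With a fixed occupancy the estimate grows cubically with resolution; in practice the occupancy fraction drops as the grid gets finer, which is why the measured growth in the table is much slower.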

Experiment 2) Quantization Methods

We further explore several quantization methods and the appropriate number of compression bits for our method. ours(SH) applies our quantization to the spherical harmonics coefficients only, while ours(SH+D) applies it to both the spherical harmonics coefficients and the densities. Clipping clips feature values to a heuristically searched interval; for instance, densities should be non-negative. We will provide thorough details of the compression methods in the supplementary material. We additionally search for the optimal bit width for each compression method.
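As a minimal sketch of the kind of bit-width quantization compared below (not our exact implementation), the snippet uniformly quantizes a tensor of spherical harmonics coefficients with per-tensor min-max scaling, and includes a clipping variant that first restricts densities to a heuristic non-negative interval. The interval bounds and array shapes are hypothetical.

```python
import numpy as np

def quantize_minmax(x: np.ndarray, bits: int):
    """Uniform per-tensor quantization to `bits` bits via min-max scaling."""
    levels = 2 ** bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((x - lo) / scale).astype(np.uint8 if bits <= 8 else np.uint16)
    return codes, lo, scale                       # keep (lo, scale) for dequantization

def dequantize(codes: np.ndarray, lo: float, scale: float) -> np.ndarray:
    return codes.astype(np.float32) * scale + lo

def quantize_clipped(density: np.ndarray, bits: int, lo: float = 0.0, hi: float = 50.0):
    """Clipping baseline: clip to a heuristic interval (hypothetical bounds) first."""
    return quantize_minmax(np.clip(density, lo, hi), bits)

# Toy example: 8-bit quantization of SH coefficients for 1000 occupied voxels.
sh = np.random.randn(1000, 27).astype(np.float32)
codes, lo, scale = quantize_minmax(sh, bits=8)
print("max abs error:", np.abs(dequantize(codes, lo, scale) - sh).max())
```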

| Quantization Method | Bits | Memory (MB/scene) | PSNR | SSIM | LPIPS |
|---|---|---|---|---|---|
| ours(SH) | 8 | 63.2 | 30.81 | 0.8551 | 0.3353 |
| ours(SH) | 4 | 37.4 | 27.48 | 0.7392 | 0.4512 |
| ours(SH) | 2 | 24.8 | 14.69 | 0.5325 | 0.6797 |
| ours(SH+D) | 8 | 60.8 | 30.53 | 0.8546 | 0.3369 |
| ours(SH+D) | 4 | 34.8 | 23.07 | 0.7238 | 0.5166 |
| ours(SH+D) | 2 | 21.8 | 14.65 | 0.5280 | 0.9981 |
| clipping | 8 | 62.6 | 17.78 | 0.7227 | 0.4879 |
| clipping | 4 | 37.4 | 17.72 | 0.6575 | 0.5502 |
| clipping | 2 | 24.4 | 16.66 | 0.6476 | 0.5881 |

We observe that our method achieves the best performance while keeping a reasonable memory requirement.

Experiment 3) Progressive Scaling

We found an interesting observation: the progressive scaling of Plenoxels strongly affects the memory footprint. We thus explore the two progressive scaling methods, “weight” and “sigma”. According to Plenoxels, “weight” filters out voxels by casting rays from the training views, while “sigma” directly prunes voxels whose densities are below the desired threshold. For more details, please refer to the supplementary material of Plenoxels. For each method, we vary the pruning threshold to find a good balance between memory usage and rendering quality.
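For concreteness, a minimal sketch of the two pruning rules is shown below. The “sigma” rule is a direct density threshold; for the “weight” rule we assume the per-voxel maximum ray weights over the training views have already been accumulated (the ray casting itself is omitted). Names and array shapes are illustrative, not our actual code.

```python
import numpy as np

def prune_by_sigma(density: np.ndarray, sigma_thresh: float) -> np.ndarray:
    """'sigma' rule: keep voxels whose raw density exceeds the threshold."""
    return density > sigma_thresh

def prune_by_weight(max_ray_weight: np.ndarray, weight_thresh: float) -> np.ndarray:
    """'weight' rule (simplified): keep voxels whose maximum accumulated ray
    weight over all training views exceeds the threshold; computing those
    weights requires casting rays from the training cameras (not shown)."""
    return max_ray_weight > weight_thresh

# Toy example: fraction of voxels kept at the sigma thresholds used below.
density = np.random.exponential(scale=10.0, size=(128, 128, 128)).astype(np.float32)
for t in (5, 10, 20, 100):
    keep = prune_by_sigma(density, t)
    print(f"sigma > {t}: keep {keep.mean() * 100:.1f}% of voxels")
```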

| Scaling Method | Threshold | Memory (MB/scene) | PSNR | SSIM | LPIPS |
|---|---|---|---|---|---|
| sigma | 5 | 69.0 | 30.66 | 0.8554 | 0.3352 |
| sigma | 10 | 66.4 | 30.79 | 0.8550 | 0.3354 |
| sigma | 20 | 63.2 | 30.81 | 0.8551 | 0.3363 |
| sigma | 100 | 49.8 | 30.47 | 0.8547 | 0.3537 |
| weight | 1.28 | 75.8 | 30.81 | 0.8557 | 0.3333 |
| weight | 2.56 | 73.0 | 30.71 | 0.8560 | 0.3335 |

Models progressively scaled with the “sigma” method generally require less memory than models scaled with the “weight” method, although the gaps between the two approaches are small. We select “sigma” as our progressive scaling method with a threshold of 20, which shows the best trade-off between memory footprint and rendering quality.