In the rapidly evolving field of computer graphics, innovative techniques are transforming how we digitally visualize and understand physical objects. Today, we review photogrammetry, an established technology, versus recent advancements in Neural Radiance Fields (NeRFs) and Gaussian Splats.
Photogrammetry: Bridging the Gap Between 2D and 3D
Photogrammetry, first developed in the late 1800s (yes, really!), was used in WWI and WWII for military reconnaissance. It gained prominence in the digital revolution, particularly with the rise of drones and high-resolution digital cameras. It has been the de facto method for reconstructing real-world environments for the past few decades.
Photogrammetry creates 3D coordinates from 2D images. It stitches numerous images together to make cohesive 3D shapes and then projects the images onto these shapes. This method is beneficial in fields where accurate, scalable 3D representations of a physical environment are needed.
What is Photogrammetry?
Photogrammetry matches identical features across multiple 2D photographs taken from different viewpoints to reconstruct a 3D scene, including calculating the geometry of the objects within it.
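The geometric core of this process can be illustrated with a toy example. The sketch below (a simplification; real pipelines match thousands of features and refine with bundle adjustment) triangulates a single 3D point from its projections in two hypothetical cameras using the Direct Linear Transform. The camera poses and the point are made-up values for illustration.

```python
import numpy as np

# Two hypothetical pinhole cameras observing the same 3D point.
# P1, P2 are 3x4 projection matrices (intrinsics taken as identity for simplicity).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
t = np.array([[-1.0], [0.0], [0.0]])
P2 = np.hstack([np.eye(3), t])                  # second camera shifted 1 unit sideways

X_true = np.array([0.5, 0.2, 4.0, 1.0])         # homogeneous 3D point (ground truth)

def project(P, X):
    """Project a homogeneous 3D point to 2D pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

x1, x2 = project(P1, X_true), project(P2, X_true)

def triangulate(P1, x1, P2, x2):
    """Direct Linear Transform: solve A @ X = 0 for the 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                                   # null vector = point up to scale
    return X / X[3]

X_est = triangulate(P1, x1, P2, x2)
print(np.allclose(X_est[:3], X_true[:3]))       # True: the 3D point is recovered
```

Repeating this for every matched feature yields the point cloud that photogrammetry then meshes and textures.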
Applications of Photogrammetry:
Cultural Heritage: Preserving historical sites and artifacts by creating detailed digital replicas.
Surveying and Mapping: Creating accurate topographic maps and 3D models for construction and urban planning.
Forensics: Reconstructing crime scenes, providing valuable visual evidence and measurement in courtrooms.
Photogrammetry Pros
Accuracy: Known for its high accuracy of spatial measurements.
Accessibility: It can produce accurate models from images taken with inexpensive cameras or phones.
Standardized: It results in geometric primitives commonly used in 3D graphics.
Photogrammetry Cons
Manual Input: It requires many input images, a potentially time-consuming process to combine them, and expertise to ensure accuracy.
Materials: Because photogrammetry relies on corresponding points across images, it can be challenging to capture reflective, transparent, or high-detail surfaces.
Neural Radiance Fields (NeRFs): Crafting Realism with Neural Networks
NeRFs, developed in 2020, use neural networks to generate highly detailed images of physical objects from various viewpoints based on a limited set of input images.
What are NeRFs?
NeRFs learn a scene's light field, allowing them to generate new views from any angle by understanding how light interacts with the scene. This results in photorealistic quality, capturing intricate lighting and reflections.
NeRFs cast rays through the scene, but instead of modeling light transport with complex analytical equations, they use machine learning to learn the color and density along each ray.
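The rendering step can be sketched in a few lines. The example below composites samples along a single camera ray using the standard volume-rendering weights; in a real NeRF, an MLP predicts the densities and colors from position and view direction, whereas here they are hard-coded toy values.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite density/color samples along one ray into a single RGB value."""
    alphas = 1.0 - np.exp(-sigmas * deltas)       # opacity of each ray segment
    trans = np.cumprod(1.0 - alphas)              # light surviving past each sample
    trans = np.concatenate([[1.0], trans[:-1]])   # transmittance *before* each sample
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

sigmas = np.array([0.0, 0.5, 3.0, 0.1])           # density peaks at the third sample
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
deltas = np.full(4, 0.25)                         # spacing between samples
print(render_ray(sigmas, colors, deltas))         # mostly blue: the dense sample wins
```

Rendering a full image means repeating this for every pixel's ray, which is why NeRF rendering is expensive.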
Applications of NeRFs:
Product Visualization: Creating realistic renderings of products to view from any angle. This could enhance the online shopping experience and increase sales conversions.
Real Estate: Creating virtual property tours, allowing potential buyers to explore homes remotely.
Film and Media: Transforming existing footage into dynamic, interactive environments. This could allow directors to change perspectives, adjust lighting, and add or remove elements, providing flexibility in storytelling and scene composition.
Medical Imaging: NeRFs don’t require traditional photographs and can, for example, use CT scans as input. In healthcare, these could help visualize complex anatomical structures from imaging data, aiding in diagnostics and treatment planning.
NeRFs Pros
Realism: NeRFs offer photorealistic quality, capturing intricate lighting and reflections. Additional processes can be run to create geometric surfaces for measurement.
View Synthesis: NeRFs are particularly good at creating novel views from a sparse number of input images.
NeRFs Cons
Computational Cost: High computational demands make them less practical for large-scale or real-time projects.
Flexibility: NeRFs are challenging for applications needing dynamic and interactive visualizations, particularly if they require re-lighting; however, there is ongoing research on this.
Gaussian Splats: Efficiency in 3D Representation
Originating from Lee Alan Westover’s work in 1991, Gaussian Splats remained relatively obscure from their introduction until 2023, due to a combination of technological limitations and the dominance of other 3D modeling and rendering techniques. Thanks to advances in machine learning and GPU technology, Splats have recently seen a major resurgence.
What are Gaussian Splats?
Splats visualize a 3D scene as a collection of blobs of different sizes. The color of these blobs depends on the direction they are viewed. They are an order of magnitude less computationally complex (and thus faster) than NeRFs, but require significantly more memory to process.
Unlike NeRFs, Splats use rasterization to create an image of a new viewpoint. The underlying results are similar to a point cloud, but instead of points, they are 3D ellipsoids whose color changes based on viewing direction.
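This rasterization pass can be illustrated with a toy 2D version. The sketch below (a simplification with made-up splats; real 3D Gaussian Splatting projects 3D ellipsoids to screen space, tiles the image, and evaluates view-dependent color) sorts Gaussians by depth and alpha-composites them front to back.

```python
import numpy as np

def rasterize(splats, h=32, w=32):
    """Alpha-composite depth-sorted 2D Gaussian splats into an image."""
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys], axis=-1).astype(float)   # (x, y) coords per pixel
    image = np.zeros((h, w, 3))
    trans = np.ones((h, w))                            # remaining transmittance
    for s in sorted(splats, key=lambda s: s["depth"]): # nearest splats first
        d = pix - s["mean"]
        inv = np.linalg.inv(s["cov"])
        expo = np.einsum("hwi,ij,hwj->hw", d, inv, d)  # Mahalanobis distance
        alpha = s["opacity"] * np.exp(-0.5 * expo)     # Gaussian falloff
        image += (trans * alpha)[..., None] * s["color"]
        trans *= 1.0 - alpha
    return image

splats = [
    {"mean": np.array([10.0, 16.0]), "cov": np.diag([9.0, 4.0]),
     "color": np.array([1.0, 0.0, 0.0]), "opacity": 0.9, "depth": 1.0},
    {"mean": np.array([20.0, 16.0]), "cov": np.diag([16.0, 16.0]),
     "color": np.array([0.0, 0.0, 1.0]), "opacity": 0.8, "depth": 2.0},
]
img = rasterize(splats)   # a red ellipse partly occluding a blue one
```

Because this is plain sorting and blending rather than per-pixel neural-network queries, it maps naturally onto GPU rasterization hardware, which is where the speed advantage over NeRFs comes from.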
Applications of Gaussian Splats:
Applications for Splats are very similar to NeRFs. However, Splats are better equipped for real-time and large-scale applications (at the time of writing). Here is an example:
Mapping: Splats could provide smoother transitions and more accurate representations of vast terrains and large structures, such as in digital mapping.
Splat Pros
Efficiency: More computationally efficient than NeRFs, making them suitable for large-scale projects.
Accuracy: Balances detail and computational cost, providing a good middle ground for various applications.
Splat Cons
Flexibility: Splats are not currently suited for projects requiring re-lighting or editing.
Understanding the Differences
While Photogrammetry, NeRFs, and Gaussian Splats all aim to create detailed 3D representations, they do so using different methods and serve distinct purposes:
Photogrammetry is best for applications requiring accurate spatial measurements.
NeRFs provide incredible realism, capturing intricate lighting and reflections; they can be trained quickly, but remain computationally expensive to render.
Gaussian Splats, like NeRFs, provide incredible realism. However, they are computationally far cheaper, opening the door for large-scale projects and real-time applications.
One last item …
4D Gaussian Splats: Capturing Motion in 3D Space
4D Gaussian Splats build on the idea of 3D Gaussian Splatting by adding a time element. This allows for the creation of dynamic, moving scenes in 3D.
What are 4D Gaussian Splats?
4D Gaussian Splats represent how objects move and change shape over time. They use predictive models to show motion and deformation, making them ideal for capturing dynamic scenes.
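At its simplest, the time element means each splat's parameters become functions of time. The hypothetical sketch below gives a splat's center a linear motion model; real 4D systems learn richer deformation fields that also change each Gaussian's shape and opacity over time.

```python
import numpy as np

class DynamicSplat:
    """Toy 4D splat: a Gaussian whose center moves over time."""

    def __init__(self, mean, velocity):
        self.mean = np.asarray(mean, dtype=float)       # position at t = 0
        self.velocity = np.asarray(velocity, dtype=float)

    def mean_at(self, t):
        """Center of the Gaussian at time t (simple linear motion)."""
        return self.mean + self.velocity * t

s = DynamicSplat(mean=[0.0, 0.0, 0.0], velocity=[1.0, 0.0, 0.5])
print(s.mean_at(2.0))   # the splat has drifted to [2.0, 0.0, 1.0]
```

Rendering a frame at time t then reduces to evaluating every splat at t and running the same rasterization pass used for static scenes.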
Achievements and Applications
Film and Animation: This could be used to create lifelike animations and special effects by capturing complex motions in high detail.
Virtual Reality: This technique could improve VR experiences by allowing real-time rendering of dynamic environments, making interactions more realistic.
Sports Analysis: This could help analyze and visualize athletes' movements, providing detailed insights for training and performance improvement.
Final Thoughts: Embracing the Digital Renaissance
As we stand on the brink of a digital renaissance, techniques like Photogrammetry, NeRFs, and Gaussian Splats promise to enhance our visual experiences, opening new avenues for how we visualize the world around us. With ongoing research and development, we can expect even more innovative techniques to emerge, further revolutionizing our ability to capture and interact with the digital and physical worlds.