How to get started with Radiance Field Platforms
The words Neural Radiance Fields and 3D Gaussian Splatting can be intimidating, but they don't have to be! Here's an overview of the current companies to help you decide what's going to work best for you.
It’s been a bit since our last newsletter, though the main website has continued to receive daily updates and articles.
Radiance fields can create hyper-realistic three-dimensional outputs from a series of 2D images. There has been so much momentum from just about every conceivable angle that keeping up feels like drinking from a fire hose.
In this edition:
In person events: I will be at SXSW and GTC over the next couple of weeks. Say hi if you’re attending!
Radiance Fields and Pop Culture: Radiance Fields have been popping up in pop culture.
Getting Started with Radiance Field Platforms: A look at getting started with both cloud-based and local public radiance field platforms.
New Paper Releases: More and more papers continue to be unveiled.
Radiance Field Code Releases: There’s been some big code releases this week!
Before we dive in, here’s some general news from around the world of radiance fields. I will be presenting a radiance field exhibition and speaking on a panel at GTC in just under two weeks. If you’re planning on going to GTC or GDC, please send me a note so that we can meet!
Similarly, I will also be at SXSW from March 9th to 13th for anyone else who is attending.
Radiance Fields and Pop Culture
Radiance fields have popped up in popular culture over the first few months of the year, appearing in music videos for artists ranging from Super Bowl performer Usher to Polo G.
We’ve also seen examples of how radiance fields can be used to show off events, such as Yulei using Gaussian Splatting and React Three Fiber to immortalize an art gallery.
There were also some NeRFs in J Cole and Drake’s newest music video, Might Delete Later, Vol. 1.
Unlock Your Brand’s Potential: Exclusive Sponsorship Opportunities Awaiting!
🚀 Elevate Your Reach with Limited-Edition Sponsorship Slots for Our Dynamic Newsletter!
Our community is rapidly growing, comprised of engaged and enthusiastic readers passionate about radiance fields and machine learning. This is your chance to showcase your brand, product, or services to a dedicated audience eager to discover what you have to offer.
Deep Dive: Getting Started with Radiance Fields
Stop me if you’ve had this happen to you. You saw some cool tweet showing some amazing new piece of tech and you felt you HAD to learn how to use it. You clicked on a couple links and tried to figure it out for yourself, only to feel completely overwhelmed.
Over the last year, so many radiance field methods have emerged, each with their own benefits and drawbacks. It’s tough to know where to invest your time. I am going to break down some of the existing platforms, starting from the easiest to get started with (cloud-based) and moving on to some local options.
Luma AI:
For a lot of people, myself included, Luma is the OG cloud-based platform. Luma burst onto the scene in roughly October of 2022 and at the time was the only cloud-based NeRF platform. Since then, Luma has greatly expanded, raising nearly $70 million in total fundraising and adding Gaussian Splatting and Generative AI to the mix.
Luma is compatible with Unreal Engine, via their .luma file type for NeRFs and .ply export for Gaussian Splatting. They also unveiled a rendering fellowship to develop plugins for other industry standard platforms. Hopefully we will be seeing the results of that soon.
Luma also has a complete editing suite built into the platform, which allows you to create keyframes and export videos with varying aspect ratios. It’s really intuitive to use and has a lot of options to play with.
If you’re looking for Generative AI, Luma also has research initiative Genie, which is a text to 3D model. It is currently free during the research phase and models can be exported into a variety of files including: fbx, gltf, blend, obj, usdz, and stl.
If you’re just getting started, there’s no easier platform to currently create with than Luma. You can sign in either with Google or an Apple account on their site to start creating. Luma is currently 100% free and downloadable on iOS or Web. It seems like an Android application is coming, but not yet.
Polycam
Polycam has been a fixture in the 3D scanning world for the past 5 years. In the last few months of 2023, Polycam introduced their implementation of Gaussian Splatting, which is actually built on top of nerfstudio.
Similar to Luma, Polycam is a cloud-based platform that takes either photos or videos as inputs. Polycam is available on iOS and Web for Gaussian Splatting. From their results, you can export .ply files that you can bring into Unreal Engine.
They have a Gaussian Splatting featured gallery to give you some inspiration. Polycam is another great platform to dip your toes into when you’re looking to get started. Make an account directly on their site.
Volinga AI
Volinga was the first company to unveil an Unreal Engine extension for NeRFs, early last year. Similar to Luma, they have also begun offering Gaussian Splatting as a service, which is also compatible with Unreal Engine.
They now have a cloud interface where you can drag and drop videos to be turned into .NVOL files, their proprietary file type. Volinga is geared towards Virtual Production, with compatibility with companies such as disguise and Pixotope. Volinga allows you to train one capture at a time for non-commercial work, and when you have a specific capture to be used commercially, they offer a per-capture commercial rate of 50 Euros.
You can register for an account on their website.
Kiri Engine
Kiri Engine is a cloud-based iOS and Android native app. For almost all of last year, I was asked when radiance fields would be natively available on Android. Kiri Engine was the first, and currently the only, Android-native application to offer Gaussian Splatting.
Similar to Luma and Polycam, you can drag and drop captures on the web or upload captures from your phone. Kiri Engine comes with an editing suite that allows you to clean up, crop, and export your captures. You can also export your captures into Unreal Engine. Where it differs from those two is that Kiri Engine is not free for Gaussian Splatting. You will need a Kiri Engine Pro account, which costs roughly $60 for a year.
Kiri also has an app within the Vision Pro app store to view your captures.
For Android users, this will be the app to get started with.
Local Platforms
If cloud-based applications are a non-starter for you, there are quite a few platforms that train locally on your computer. Please note that, currently, in order to train you will need an NVIDIA GPU, preferably with 12GB+ of VRAM. With that said, here are some recommendations:
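If you're not sure whether your machine clears that bar, a quick way to check (assuming NVIDIA's standard `nvidia-smi` utility, which ships with the driver) is:

```shell
# Report the GPU model and total VRAM; look for roughly 12288 MiB or more
nvidia-smi --query-gpu=name,memory.total --format=csv
```

If the command isn't found, you likely don't have an NVIDIA driver installed, which also means local training isn't an option yet.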
Postshot
To this day, there is not a platform that feels as intuitive as Postshot. If you have used Microsoft Word before, you will feel comfortable using Postshot. Postshot does away with the command line.
They have their own implementations of NeRFs and Gaussian Splatting, with presets based on your desired output. For instance, if you want to train with more robust settings, they have models that range from Small to XXLarge.
Several scenes from this music video were made using an iPhone 15 Pro and Postshot. Additionally, the one click installer for Windows makes it easy to get it up and running.
Postshot is directly compatible with Adobe After Effects. With its easy installation and local training, Postshot is a great choice to get up and running.
nerfstudio
nerfstudio came out around the same time as Luma AI and was founded by Ethan Weber, Evonne Ng, and Matt Tancik. The last of the three is one of the original authors of NeRF, and if you go to read the original paper, you’ll find it’s hosted on matthewtancik.com.
Getting back to the point, nerfstudio is the most flexible of all of the applications that we’ve looked at thus far. If you have a developer background, you will probably gravitate towards nerfstudio due to its permissive Apache 2.0 license, strong modular foundations, and wide community.
Using nerfstudio does require either some familiarity with, or an openness to learning, the command line. It’s not as bad as it sounds, I promise!
nerfstudio can only be installed through GitHub, but there are plenty of videos on how to get started installing it. Check out Jonathan Stephens’ tutorials. They’re great and provide a lot of copy-and-paste material to help you out!
nerfstudio has a range of different supported radiance field methods, including nerfacto (nerfstudio’s native NeRF method), Zip-NeRF (current Google SoTA), and most recently Splatfacto (nerfstudio’s native Gaussian Splatting method).
From nerfstudio, you are able to export into Blender through their add-on or if you’re using their Splatfacto method, you can export out the .ply and use it in Unreal Engine.
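As a rough sketch of what that command-line workflow looks like (command names follow nerfstudio's documented CLI; the video filename, scene directory, and run path here are placeholders, so check the docs for your version), a capture-to-Unreal pipeline runs something like:

```shell
# Install nerfstudio into a Python environment with CUDA available
pip install nerfstudio

# Turn a phone video into posed images (runs COLMAP under the hood)
ns-process-data video --data capture.mp4 --output-dir data/my-scene

# Train with Splatfacto, nerfstudio's native Gaussian Splatting method
ns-train splatfacto --data data/my-scene

# Export the trained splat as a .ply you can bring into Unreal Engine
ns-export gaussian-splat \
  --load-config outputs/my-scene/splatfacto/<run-timestamp>/config.yml \
  --output-dir exports/
```

Swapping `splatfacto` for `nerfacto` in the train step gives you the NeRF pipeline instead; the Blender add-on route then applies in place of the .ply export.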
Instant NGP
Instant NGP is NVIDIA’s current publicly released method, although there will (hopefully) be a release of Adaptive Shells soon. Instant NGP was the platform that started it all, allowing for the rapid training of NeRFs.
NVIDIA has created a set of one-click installable binaries, based on your GPU, which can be found on their GitHub page. Instant NGP does require a basic amount of command line use but, similar to nerfstudio, has a lot of video documentation. While Instant NGP is not compatible with any of the game engines, you can use a Quest headset to look at and edit your captures in VR.
New Papers:
A large number of papers continue to be released. Here are some of the ones that stood out to me last week.
3D Gaussian Model for Animation and Texturing: A proxy-based representation analogous to the 3D model in a typical rasterization-based pipeline, with an injective mapping that enables 3D Gaussians to be optimized in texture space without the need for a shell, and a training strategy that constrains the 3D Gaussians with respect to each triangle, thereby enabling animation and texture mapping.
VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction: High-fidelity reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting: A novel ASG appearance field to model the view dependent appearance of each 3D Gaussian, which enables 3D-GS to effectively represent scenes with specular and anisotropic components without sacrificing rendering speed.
Sora Generates Videos with Stunning Geometrical Consistency: We introduce a new benchmark that assesses the quality of the generated videos based on their adherence to real-world physics principles. We employ a method that transforms the generated videos into 3D models, leveraging the premise that the accuracy of 3D reconstruction is heavily contingent on the video quality.
Code Releases:
There have been quite a few exciting code releases within the past few days. Check them out!
Gaussian Pro (Beta): A novel method that applies a progressive propagation strategy to guide the densification of the 3D Gaussians.
DUSt3R: DUSt3R generates a complete 3D reconstruction without knowing camera parameters.
PhysGaussian: Bringing real world physics to Gaussian Splatting!
Gaussian Splatting SLAM (MonoGS): They demonstrate the first monocular SLAM solely based on 3D Gaussian Splatting, which also supports Stereo/RGB-D inputs.
SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes.
For a more robust and up-to-date look at the state of radiance fields, please follow my website radiancefields.com, which publishes as news rolls out. Additionally, if you found this helpful, please consider sharing it with someone.
Sponsorships! → We're also accepting sponsors for this quarter and Q2. If you'd like to secure your sponsorship, please send me an email!
Comments, questions, tips?
Send a letter to the editor — Michael Rubloff
LinkedIn / Twitter