Albedo to EARS, Pt. 1 - What is EARS?
Happy New Year!
In the previous post I shared a proposal for a Directed Research project surveying techniques in Russian Roulette and Splitting, beginning with Arvo & Kirk's albedo-based methods and continuing through recent developments by Rath et al. (EARS).
I'm happy to share that the project was approved - over the next semester I will develop a path tracer demonstrating these techniques under the supervision of Prof. Ulrich Neumann. Professor Neumann was very encouraging in previous projects and coursework, and I am looking forward to the opportunity to work with him.
This post will be the first in a series documenting the path to EARS.
Project proposal: project-proposal.pdf
First post in the series: Directed Research at USC
GitHub repository: roblesch/roulette
What is EARS?
EARS, or Efficiency-Aware Russian Roulette and Splitting, is a technique presented by Alexander Rath and others at Saarland University, published at SIGGRAPH 2022. It builds on earlier Russian roulette and splitting techniques by Vorba and Křivánek, extending the scene pre-computation with additional data to improve efficiency.
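The "efficiency" in the name is the standard Monte Carlo notion - an estimator's efficiency is the inverse of its expected cost times its variance, efficiency = 1 / (cost × variance) - and EARS chooses its Russian roulette and splitting factors with that quantity in mind.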
If you are curious, you are best served hearing it in the words of the author -
Project Goals
This project will produce a path tracer demonstrating techniques across the history of Russian Roulette and Splitting - beginning with Arvo & Kirk's albedo-based Russian Roulette, following with Vorba & Křivánek's ADRRS, and finally demonstrating recent developments with Rath et al.'s EARS.
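To ground the starting point, here's a minimal sketch of classic throughput-based Russian roulette in the spirit of Arvo & Kirk - the survival probability is tied to the (albedo-driven) path throughput, and survivors are reweighted so the estimator stays unbiased. All names here are illustrative, not from the final renderer.

```cpp
#include <algorithm>

struct Rgb { float r, g, b; };

inline float luminance(const Rgb& c) {
    return 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
}

// Classic Russian roulette: survive with a probability tied to the path
// throughput (a proxy for the albedo accumulated along the path).
// Returns false if the path is terminated; otherwise divides the
// throughput by the survival probability to keep the estimator unbiased.
bool russianRoulette(Rgb& throughput, float u /* uniform in [0,1) */) {
    float q = std::clamp(luminance(throughput), 0.05f, 1.0f);
    if (u >= q)
        return false;      // terminate the path
    throughput.r /= q;     // reweight the survivor
    throughput.g /= q;
    throughput.b /= q;
    return true;
}
```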
If you’d like more detail, check out the proposal - project-proposal
Implementation
I spent the Winter Break laying the groundwork for the path tracer that will implement EARS. Based on observations in previous projects, the core of this path tracer honors the following values:
- Public asset compatible. Making scenes from scratch is a pain.
- Referenceable implementation. Debugging numerical issues in the dark is also quite painful.
- Straightforward. There’s no need to get lost in generality on a one-person research project.
- Extensible. One day I may like to add complicated effects or scene accelerators - ideally this is the last time I start a path tracer from scratch!
With these in mind, I evaluated a few well-known projects as starting points.
Reference Renderers
Mitsuba3
Ubiquitous in the research community, Mitsuba3 is a widely popular and highly cited renderer supporting a broad variety of techniques, materials and integrators. Its XML-based scene format is easy to parse, and various test scenes are available.
However, Dr.Jit abstracts away many of the core mathematical operations, and Dr.Jit's documentation is difficult to navigate. This makes Mitsuba3 a difficult project to reference.
Pbrt-v3
Pbrt-v3 is the accompanying source code for the popular text "Physically Based Rendering: From Theory to Implementation". PBRT offers a rigorous discussion of state-of-the-art path tracing techniques, and like Mitsuba, has many test scenes publicly available. Ingo Wald's PBRT parser can help with scene ingestion. The pbrt source is written more in the style of a production renderer, so although well documented, it is not always the easiest to parse and debug.
Tungsten
The top choice for this project, Benedikt Bitterli's Tungsten checks all the boxes. Its scenes are described in an easy-to-parse JSON format, and its implementations are straightforward and easy to understand. The only hiccup here is installing VS2013 to debug. There is also a set of test scenes available on Benedikt's personal site.
Honorable Mention - ChameleonRT
Will Usher’s ChameleonRT supports OBJ and glTF, and most interestingly implements ray tracing backends for NVIDIA’s OptiX, DirectX, Vulkan, Metal and OSPRay. I’ll certainly be revisiting this project when I am looking to speed things up.
A Note on Scene Formats
These projects all communicate scenes in well-supported formats. Choosing a scene format also inherits some of a framework's choices regarding transforms, bounds and intersections - or requires translating between them. Because of this, the primary focus for this work will be on scenes provided natively in Tungsten's JSON format.
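As a quick illustration of why the format is attractive, here's a hedged sketch of pulling the camera parameters out of a Tungsten-style scene with nlohmann/json. The field names ("camera", "resolution", "fov") match what I've seen in Benedikt's sample scenes, but treat them as assumptions to verify per asset.

```cpp
#include <fstream>
#include <iostream>
#include <nlohmann/json.hpp>

// Hedged sketch: read the camera block of a Tungsten-style scene.
// Field names are based on Tungsten's sample scenes and should be
// verified against each asset.
int main() {
    std::ifstream file("scene.json");
    nlohmann::json scene = nlohmann::json::parse(file);

    const auto& camera = scene.at("camera");
    int width  = camera["resolution"][0].get<int>();
    int height = camera["resolution"][1].get<int>();
    float fov  = camera.value("fov", 60.0f);  // fall back if absent

    std::cout << width << "x" << height << " @ " << fov << " deg fov\n";
}
```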
External Dependencies
- `pugixml`
- `nlohmann/json`
- `glm`
- `stb_image`

(I took a slightly painful detour of wrestling with `libpng` and `zlib` in CMake before eventually settling on the very convenient `stb_image`. I may come back around to `libpng`.)
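The single-header stb approach sidesteps the CMake dance entirely. As a sketch of the output path, here's how a framebuffer could be dumped to PNG with the companion stb_image_write header - the tightly packed RGB layout is an assumption about this project's buffer, not a requirement of stb:

```cpp
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"

#include <cstdint>
#include <vector>

// Minimal sketch: dump an 8-bit RGB framebuffer to a PNG with
// stb_image_write. Assumes row-major, tightly packed pixels.
bool writePng(const char* path, int width, int height,
              const std::vector<std::uint8_t>& rgb) {
    const int channels = 3;                    // RGB
    const int strideBytes = width * channels;  // tightly packed rows
    return stbi_write_png(path, width, height, channels,
                          rgb.data(), strideBytes) != 0;
}
```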
Core Architecture
┌────────┐
│Renderer│
└───┬────┘
│
│ ┌───────────┐
├─┤FrameBuffer│
│ └───────────┘
│
│ ┌─────┐
└─┤Scene│
└──┬──┘
│
│ ┌──────┐
├─┤Camera│
│ └──────┘ ┌─────┐
│ ┌─┤Shape│
│ ┌──────────┐ │ └─────┘
├─┤Primitives├─┤
│ └──────────┘ │ ┌────────┐
│ └─┤Material│
│ ┌─────────┐ └────────┘
├─┤Materials│
│ └─────────┘
│
│ ┌──────┐
├─┤Lights│
│ └──────┘
│
│ ┌──────────┐
└─┤Integrator│
└──────────┘
Early stages of development aren't much to write home about. This design is largely guided by the scene transport format and common path tracer design. Of particular note to the goal of EARS is the `Integrator` - implementations of this interface will execute the various methods of traversing the scene and determining what values go into the `FrameBuffer`. For now, the `DebugIntegrator` generates an image by mapping sampled camera ray directions to RGB.
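To make that seam concrete, here's a hedged sketch of the interface boundary - the class and method names are my guesses at the shape of the code, not the repository's actual signatures:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct FrameBuffer {
    int width, height;
    std::vector<Vec3> pixels;
    FrameBuffer(int w, int h) : width(w), height(h), pixels(w * h) {}
    void set(int x, int y, Vec3 c) { pixels[y * width + x] = c; }
};

// The seam described above: integrators decide what goes in the buffer.
struct Integrator {
    virtual void render(FrameBuffer& fb) = 0;
    virtual ~Integrator() = default;
};

// Debug integrator: color each pixel by its (toy pinhole) camera ray
// direction, remapped from [-1, 1] to [0, 1].
struct DebugIntegrator : Integrator {
    void render(FrameBuffer& fb) override {
        for (int y = 0; y < fb.height; ++y)
            for (int x = 0; x < fb.width; ++x) {
                float u = (x + 0.5f) / fb.width  * 2.0f - 1.0f;
                float v = (y + 0.5f) / fb.height * 2.0f - 1.0f;
                float len = std::sqrt(u * u + v * v + 1.0f);
                Vec3 d = { u / len, v / len, 1.0f / len };
                fb.set(x, y, { 0.5f * (d.x + 1.0f),
                               0.5f * (d.y + 1.0f),
                               0.5f * (d.z + 1.0f) });
            }
    }
};

int main() {
    FrameBuffer fb(320, 180);
    DebugIntegrator().render(fb);  // fb.pixels now holds the gradient
}
```

The idea is that each milestone - albedo-based RR, ADRRS, EARS - slots in as another `Integrator` implementation over the same scene and framebuffer.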
Beautiful. 🤌