Beta version: boosted rendering for large models

Hi all,

If you want to try it out, here is the first beta version of VR Sketch with a boosted rendering engine. This version only works with an Oculus headset tethered to a PC (typically a Quest): no SteamVR support, and no standalone Quest support for now. That will be fixed soon, of course.

This version sacrifices (some) quality for (a lot of) performance. It is several times faster than the existing modes. It is ideal if you have been struggling to get your huge models to render at a reasonable speed. In that case, try again! The exact speedup factor is unknown so far: no matter how large the models we try, they all render at the maximum frames-per-second on the headset… but according to our estimates, it might be 5 to 10 times faster.

It doesn’t display the textures or shadows, though both might possibly be re-added later. There are a number of bugs remaining to fix before it’s ready. And this doesn’t mean that the current, higher-quality images will go away: this new mode will likely replace the “performance” mode from the Settings dialog, but the default “quality” mode won’t change. (Performance improvements for the “quality” mode are still possible in the future.)

Our next step is to port this work to the standalone Quest, where a similar performance improvement is expected. Part of this improvement comes from a lower GPU memory usage—which is an interesting benefit particularly on Quest: the models can be more complex before running out of GPU memory. We still don’t expect to be able to load our most complicated models on the Quest, but we certainly expect a jump in the limit.

I will report our progress in this thread in the days or weeks ahead. For now, enjoy the beta for the tethered Quest!

Armin Rigo

I imagine textures are probably a higher priority than shadows.
I’m really interested in what kind of tricks could have led to a 10x speedup.

It’s not entirely clear to me. I was trying a new method that would use the modern ray-tracing capabilities of desktop GPUs, and this forced me to implement various other changes. The result was great, but then I realized that the speedup came mostly from these other changes, not from the ray-tracing part. So this new beta version builds upon these other changes only, and drops the ray-tracing parts, with the advantage that it works everywhere, not just on modern desktop GPUs.

I think by now that the big speedup comes from two factors. First, we send the GPU a single big mesh, as a single “draw call”. Compare with how it is done traditionally: for every different material (e.g. every different texture), a separate “draw call” is issued. I think this makes a big difference because the graphics card and/or the CPU driver can be clever about which triangles it needs to draw, if it gets all the triangles at once. Or maybe it’s just that every separate “draw call” is very slow, perhaps because it requires some global GPU synchronization. Anyway, we now draw everything as a single draw call that draws triangles of multiple colors.
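To make the idea concrete, here is a minimal sketch in Python of what “merge everything into one mesh” means. This is not VR Sketch’s actual code; the function and data layout are invented for illustration. Each vertex gets tagged with its material index, so a shader could still pick the right color even though everything is uploaded and drawn in one call:

```python
# Hypothetical sketch: merging per-material meshes into one buffer,
# so everything can be submitted to the GPU in a single draw call.
# (Illustration only; not VR Sketch's actual code.)

def merge_meshes(meshes):
    """meshes: list of (material_id, [vertex, ...]) pairs.

    Returns one flat vertex list where each vertex carries its
    material id, ready to be uploaded and drawn in one call."""
    merged = []
    for material_id, vertices in meshes:
        for v in vertices:
            # tag each vertex so the shader can pick the right color
            merged.append((*v, material_id))
    return merged

# Traditional way: one draw call per material in `meshes`.
# New way: upload `merged` once, then issue a single draw call.
meshes = [
    (0, [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]),  # material 0
    (1, [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)]),  # material 1
]
merged = merge_meshes(meshes)
assert len(merged) == 6
assert merged[0][-1] == 0 and merged[-1][-1] == 1
```

The trade-off is that the shader must branch on the material index per triangle, but that avoids the per-draw-call overhead entirely.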

The other optimization is related to the speed at which the GPU memory works. All of these triangles take hundreds of MBs just to store the 3D coordinates, and even more for the related vertex data (normals, texture coordinates, etc.). The new version optimizes that as far as possible. When you consider that the GPU needs to read these hundreds of MBs 72 times per second, you realize that this is likely to be the bottleneck. It’s not how fast the GPU can compute the color of each pixel of the triangles (with lighting, shadows, etc.) and put it on screen; it’s simply that it cannot do that without first reading three vertices from memory, and it can’t do that faster than its RAM allows.

So, here is how we compact this data now. First, the triangles sent to the GPU are now double-sided, like in SketchUp, instead of the more-traditional-for-3D-games solution of drawing independent triangles for the two sides (maybe in different meshes because of different materials). That divides the number of triangles by two. Then, we store every vertex in 5 words only: 3 words for the 3D position, and 2 words that encode all the rest: the very approximate normal direction, the (more precise) amount of light the face should get, which edges of the triangle should be drawn as black, a “Z shift” value added to avoid z-fighting, and which material is on the front and on the back side of that triangle. (If we later add textures again, we will likely need 2 more words for the texture position; we will see how much performance suffers from that.)
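As a rough illustration of the “2 extra words” part, here is one way such fields could be packed into a single 64-bit value (i.e. two 32-bit words). The field widths below are invented for the example, not VR Sketch’s actual layout:

```python
# Sketch of packing the extra per-vertex data into 64 bits.
# Field widths are made up for illustration: 8+8+3+5+20+20 = 64.

NORMAL_BITS, LIGHT_BITS, EDGE_BITS, ZSHIFT_BITS = 8, 8, 3, 5
MATERIAL_BITS = 20   # front and back material index, 20 bits each

def pack_extra(normal_idx, light, edge_mask, z_shift, mat_front, mat_back):
    word = normal_idx                               # approximate normal
    word = (word << LIGHT_BITS)    | light          # light amount
    word = (word << EDGE_BITS)     | edge_mask      # which edges drawn black
    word = (word << ZSHIFT_BITS)   | z_shift        # anti-z-fighting shift
    word = (word << MATERIAL_BITS) | mat_front      # front-side material
    word = (word << MATERIAL_BITS) | mat_back       # back-side material
    return word

def unpack_extra(word):
    mat_back   = word & ((1 << MATERIAL_BITS) - 1); word >>= MATERIAL_BITS
    mat_front  = word & ((1 << MATERIAL_BITS) - 1); word >>= MATERIAL_BITS
    z_shift    = word & ((1 << ZSHIFT_BITS) - 1);   word >>= ZSHIFT_BITS
    edge_mask  = word & ((1 << EDGE_BITS) - 1);     word >>= EDGE_BITS
    light      = word & ((1 << LIGHT_BITS) - 1);    word >>= LIGHT_BITS
    normal_idx = word
    return normal_idx, light, edge_mask, z_shift, mat_front, mat_back

fields = (37, 200, 0b101, 7, 12345, 678)
assert unpack_extra(pack_extra(*fields)) == fields
assert pack_extra(*fields) < 2 ** 64     # fits in two 32-bit words
```

In a real shader this unpacking would be done with the same kind of shifts and masks on the GPU side.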

As I said, I’m not 100% sure these are all the reasons why the new version is so much faster, given that the speedup is so huge. That’s my best effort at coming up with a plausible explanation :slight_smile:

Armin Rigo


Here is the version for Quest (beta version, like above): (installation instructions here, section “Installation without using the App Lab”)

It turns out that this version can indeed load much bigger models than previously possible. We successfully loaded a model from our “large” category, which we previously thought would never work on Quest 2. This model loads fine, but renders at 24 frames-per-second, which is not enough for a smooth experience. But it is possible to load it, and 24 frames-per-second is very impressive for the Quest 2! This is a model which SketchUp reports (in “Model info, Statistics”) as having 875’000 faces and 34’000 component instances. Compare this with our previous recommendation: at most 100’000-200’000 faces for models sent to the Quest 2. So maybe 875’000 is still a bit too much for a smooth experience, but we expect that slightly smaller models would now be completely fine.

Give it a try! As above, this beta version has some known bugs and limitations, most importantly no support for textures, but it should work fine for viewing and walking around; there might be some issues with editing or trying to change some settings (like fog, light direction, etc.). If you install this beta version, you can always re-install the official version later (you might need to remove the beta version explicitly first).


For “large” models, have you considered some sort of optional preprocessing to allow them to be viewed at an acceptable FPS?
If you can measure the number of polys at which rendering gets unacceptably slow, then you could limit the model to that many, discarding triangles in order of smallest area first.
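The suggested preprocessing could be sketched like this (hypothetical, not an existing VR Sketch feature): rank triangles by area and keep only the largest ones, up to a measured polygon budget:

```python
# Hypothetical preprocessing sketch: keep only the `budget` largest
# triangles, discarding the smallest ones first.

def triangle_area(a, b, c):
    # half the length of the cross product of two edge vectors
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def limit_triangles(triangles, budget):
    """triangles: list of (a, b, c) vertex triples.
    budget: max triangle count the device renders smoothly."""
    ranked = sorted(triangles, key=lambda t: triangle_area(*t), reverse=True)
    return ranked[:budget]

tris = [
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),        # area 0.5
    ((0, 0, 0), (10, 0, 0), (0, 10, 0)),      # area 50
    ((0, 0, 0), (0.1, 0, 0), (0, 0.1, 0)),    # area 0.005 -> dropped
]
kept = limit_triangles(tris, 2)
assert len(kept) == 2
assert ((0, 0, 0), (0.1, 0, 0), (0, 0.1, 0)) not in kept
```

As the reply below notes, this works well for some models but would punch visible holes in others, such as photogrammetric meshes made entirely of tiny triangles.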

Yes, that’s an option that we might consider too. But note that while it would help a lot with some kinds of models, there are others where it wouldn’t, so it’s always delicate. For example, a photogrammetric model is essentially one sheet of tiny triangles, so discarding some of them would make obvious holes in the model.

For some kinds of models, it might help if we could drop parts of the model before it even reaches the standalone Quest, given that the limit there appears to be memory (not the FPS any more). But that is a bit beyond what we want to do in the near future. It would require the Quest to load more details about some groups when we teleport close to them, and then forget about them again when we teleport away.

There has also been talk here about using an impostor system to render far-away groups, but it is still very unclear to me whether that would work or whether we’d run into issues. For example: “this wall is in its own group, and from far away it is rendered as an impostor, which introduces approximations, and then it no longer seems to be exactly attached to the wall next to it”.