GPU-driven for dynamic scenes? (Chap. 26)

Hi there,

I’ve been experimenting with different approaches to rendering, currently something similar to Chapter 26’s GPU-driven rendering. Unlike the more traditional approach, it doesn’t appear to lend itself well to models changing over time (adding/removing), since there is one shared buffer for all models to make them available to the GPU for executing draw calls.

Is this an inherent limitation of indirect/GPU driven rendering approaches, something that can be overcome with a clever workaround, or am I missing something obvious?

There are a few buffers whose length is tied to the model count (the models buffer, the model params buffer, and the draw arguments buffer). Recreating them from scratch each frame (or whenever something changes) seems inefficient, and like a good way to lose at least some of the benefit of offloading work to the GPU.

Among the more outlandish ideas I had was something like this: a fixed number of fixed-size model pool buffers that track the models and the empty slots; blit the memory over from individual models when a model in the pool changes; tie the number of draw arguments to the fixed number of model pool buffers; and, inside the encodeCommands shader, loop over the models in each buffer to issue the draw calls. Even if that makes any sense at all, I think it also reduces the possible parallelism on the GPU.

Feel free to ignore that ^ but I would appreciate anyone’s thoughts on the general problem.
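
In case it helps clarify what I mean, here’s a very rough sketch of the pool idea, assuming a ModelParams struct like the one backing the model params buffer; the names are made up and the actual blit/copy is elided:

import Metal

// Hypothetical sketch: a fixed-capacity pool that hands out stable slots,
// so the GPU-facing buffer never has to be resized or recreated.
struct ModelPool {
  let capacity: Int
  private var freeSlots: [Int]
  // One buffer sized for `capacity` models; slots are reused, never moved.
  let modelsBuffer: MTLBuffer

  init(device: MTLDevice, capacity: Int) {
    self.capacity = capacity
    self.freeSlots = Array((0..<capacity).reversed())
    self.modelsBuffer = device.makeBuffer(
      length: capacity * MemoryLayout<ModelParams>.stride,
      options: .storageModeShared)!
  }

  // Claim a slot for a new model; the caller blits/copies its data into it.
  mutating func add() -> Int? { freeSlots.popLast() }

  // Release a slot when a model is removed; encodeCommands would skip
  // inactive slots (e.g. via a flag in ModelParams) when issuing draws.
  mutating func remove(slot: Int) { freeSlots.append(slot) }
}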

For what it’s worth, it looks like “bindless” rendering, as described in Explore bindless rendering in Metal - WWDC21 - Videos - Apple Developer and in the sample projects, can achieve the same result of making all models available to shader functions, but WITHOUT requiring a shared buffer (e.g. a model buffer) to do so!

I may have spoken too soon. These examples also have an argument buffer in the renderer that represents all the models/meshes of a scene; they populate that buffer ahead of time, and nothing changes in the draw call.

How is one supposed to handle this? Keep track of when any change happens to the array of models/instances in your scene, and recreate the argument buffer at that point?

Or maybe keep track of the index of the model that has changed (or been added, or removed) in the scene and set the argument buffer and buffer for that index? For example:

encoder.setArgumentBuffer(modelsBuffer, startOffset: 0, arrayElement: index)
encoder.setBuffer(
  mesh.vertexBuffers[Position.index], // or nil?
  offset: 0,
  index: 0)
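
For context, the encoder above would be the MTLArgumentEncoder made from the shader function, so the whole update might look something like this (untested since I’m away from my machine; fragmentFunction, device, modelCount and the buffer indices are placeholders for whatever the renderer actually uses):

import Metal

// Hypothetical sketch: create the encoder once from the shader function,
// then re-encode only the array element for the model that changed.
let encoder = fragmentFunction.makeArgumentEncoder(bufferIndex: 0)
let modelsBuffer = device.makeBuffer(
  length: encoder.encodedLength * modelCount,
  options: [])!

func updateModel(at index: Int, mesh: Mesh) {
  encoder.setArgumentBuffer(modelsBuffer, startOffset: 0, arrayElement: index)
  encoder.setBuffer(mesh.vertexBuffers[Position.index], offset: 0, index: 0)
}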

I’m away from my computer this week so can’t experiment.

You could investigate Metal heaps and transient resources.
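
Very roughly, something like this (just an API sketch, not tailored to your project):

import Metal

// Sub-allocate per-model resources from one heap so that adding and
// removing models reuses heap memory instead of recreating everything.
let device = MTLCreateSystemDefaultDevice()!

let heapDescriptor = MTLHeapDescriptor()
heapDescriptor.storageMode = .private
heapDescriptor.size = 64 * 1024 * 1024  // budget for all model data

let heap = device.makeHeap(descriptor: heapDescriptor)!

// Buffers made from the heap can be freed and their memory reused
// (or aliased, for transient resources that only live within a frame).
let modelBuffer = heap.makeBuffer(
  length: 4096,
  options: .storageModePrivate)!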

Thank you for the recommendation! Will look into it

You can also implement a semaphore, but instead of using it with Uniforms like the book does, you can use it with your model buffers, passing in a new buffer.

I have been writing a 3D modeling app for iPad/iPhone whose geometry topology is always changing, and I have been passing in updated buffers as the topology changes. It works well, but it will depend on how many models you are updating at the same time.
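
Roughly, the pattern looks like this (a simplified sketch, not my actual app code; the names are placeholders):

import Foundation
import Metal

let device = MTLCreateSystemDefaultDevice()!
let commandQueue = device.makeCommandQueue()!

let maxFramesInFlight = 3
let frameSemaphore = DispatchSemaphore(value: maxFramesInFlight)

// Called once per frame with whatever the current topology is;
// a fresh buffer is passed in instead of mutating one the GPU may still read.
func draw(vertices: [Float]) {
  frameSemaphore.wait()  // don't get more than maxFramesInFlight ahead of the GPU

  let vertexBuffer = device.makeBuffer(
    bytes: vertices,
    length: vertices.count * MemoryLayout<Float>.stride,
    options: .storageModeShared)!

  let commandBuffer = commandQueue.makeCommandBuffer()!
  // ... encode render passes that read vertexBuffer ...
  commandBuffer.addCompletedHandler { _ in frameSemaphore.signal() }
  commandBuffer.commit()
}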

It all comes down to what you want to achieve and how you will need to structure your buffers.

Thanks for responding. Congratulations on your app, which looks great.

Took me a bit to wrap my head around what you are saying, but that is very creative. I’ll have fun experimenting!

Your app looks super. I’d love to have a fun modelling app on my iPad.

Apple to the rescue with some sample code :crossed_fingers:

Heaps and Fences:
Implementing a Multistage Image Filter Using Heaps and Fences | Apple Developer Documentation

Heaps and Events:
Implementing a Multistage Image Filter Using Heaps and Events | Apple Developer Documentation

Thanks Caroline, I could not have done it without your book. When I started, I had never coded anything for a GPU. I also still need to finish reading it; I cherry-picked chapters I knew would be helpful.
