Hey all, I am writing a deferred rendering demo. In the Metal book, this is done by writing the position to color attachment 0, the normal to color attachment 1, and so on.
Coming from WebGL, I know I can reconstruct the world-space position from the depth buffer texture and the inverse of my camera's projection and view matrices. This saves drawing to one extra texture and reduces VRAM usage. However, it means I have to keep the depth texture in VRAM by setting its store action to .store, since I need it later to reconstruct the world-space position.
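For context, this is roughly what I mean — a sketch of the G-buffer pass setup, with illustrative names (it assumes depthTexture was created with usage [.renderTarget, .shaderRead] so the lighting pass can sample it):

```swift
// Hypothetical G-buffer pass descriptor; depthTexture is assumed to be
// a .depth32Float texture with usage [.renderTarget, .shaderRead].
let gBufferPass = MTLRenderPassDescriptor()
gBufferPass.depthAttachment.texture = depthTexture
gBufferPass.depthAttachment.loadAction = .clear
gBufferPass.depthAttachment.clearDepth = 1.0
gBufferPass.depthAttachment.storeAction = .store  // keep depth in VRAM for the lighting pass
```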
I implemented the technique in my Metal app and it runs well, but I am left wondering whether it is actually a good idea. From the book I know about the .memoryless storage mode, which is extra fast and uses no VRAM at all: the texture lives in on-chip tile memory on Apple GPUs.
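If I understand the docs correctly, a memoryless texture only ever exists in tile memory, so it can only be consumed within the same render pass (e.g. as an attachment read by a later fragment or tile shader); it cannot be stored and sampled in a later pass. A sketch of allocating one, with illustrative names:

```swift
// Hypothetical memoryless G-buffer attachment; width/height/device
// come from the surrounding app code.
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                    width: drawableWidth,
                                                    height: drawableHeight,
                                                    mipmapped: false)
desc.usage = .renderTarget            // memoryless textures cannot be sampled later
desc.storageMode = .memoryless        // on-chip tile memory only, no VRAM allocation
let positionTexture = device.makeTexture(descriptor: desc)
// Its load action must be .dontCare or .clear, and its store action
// .dontCare — asking to .store a memoryless attachment is invalid.
```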
The reconstruction technique also has the downside of doing matrix math for every pixel on screen in order to recover the world-space position.
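To make the trade-off concrete, here is a CPU sketch of that per-pixel math: project a known world-space point the way rasterization would, then rebuild it from only its NDC position and depth using the inverse view-projection matrix. The matrix helpers are hand-rolled so the sketch is self-contained; in a real app you would use simd, and this math would run in the lighting fragment shader instead.

```swift
import Foundation

typealias Vec4 = [Double]
typealias Mat4 = [[Double]]   // row-major 4x4

func mul(_ m: Mat4, _ v: Vec4) -> Vec4 {
    var out = [0.0, 0.0, 0.0, 0.0]
    for r in 0..<4 { for c in 0..<4 { out[r] += m[r][c] * v[c] } }
    return out
}

func mul(_ a: Mat4, _ b: Mat4) -> Mat4 {
    var out: Mat4 = Array(repeating: [0.0, 0.0, 0.0, 0.0], count: 4)
    for r in 0..<4 { for c in 0..<4 { for k in 0..<4 { out[r][c] += a[r][k] * b[k][c] } } }
    return out
}

// General 4x4 inverse via Gauss-Jordan elimination with partial pivoting.
func inverse(_ m: Mat4) -> Mat4 {
    var a = m
    var inv: Mat4 = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]
    for col in 0..<4 {
        var pivot = col
        for r in (col + 1)..<4 where abs(a[r][col]) > abs(a[pivot][col]) { pivot = r }
        a.swapAt(col, pivot); inv.swapAt(col, pivot)
        let d = a[col][col]
        for c in 0..<4 { a[col][c] /= d; inv[col][c] /= d }
        for r in 0..<4 where r != col {
            let f = a[r][col]
            for c in 0..<4 { a[r][c] -= f * a[col][c]; inv[r][c] -= f * inv[col][c] }
        }
    }
    return inv
}

// Right-handed perspective projection with NDC depth in 0...1 (Metal's convention).
func perspective(fovY: Double, aspect: Double, near: Double, far: Double) -> Mat4 {
    let f = 1.0 / tan(fovY / 2)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, far / (near - far), near * far / (near - far)],
            [0, 0, -1, 0]]
}

let proj = perspective(fovY: Double.pi / 3, aspect: 16.0 / 9.0, near: 0.1, far: 100)
let view: Mat4 = [[1,0,0,0],[0,1,0,0],[0,0,1,-5],[0,0,0,1]]   // camera at z = +5
let viewProj = mul(proj, view)

// G-buffer pass: what rasterization produces for this point.
let world: Vec4 = [1, 2, -3, 1]
let clip = mul(viewProj, world)
let ndc = clip.map { $0 / clip[3] }   // ndc[2] is what the depth texture stores

// Lighting pass: screen position + sampled depth -> world space.
let invViewProj = inverse(viewProj)
let unprojected = mul(invViewProj, [ndc[0], ndc[1], ndc[2], 1])
let rebuilt = unprojected.map { $0 / unprojected[3] }   // perspective divide

// rebuilt is (1, 2, -3, 1) again, up to floating-point error
```

The 4x4 inverse only has to be computed once per frame on the CPU; the per-pixel cost is one matrix-vector multiply plus a divide, which is why the technique is usually considered cheap relative to the bandwidth it saves.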
What would the Metal gurus recommend here? Should I keep reconstructing from the depth buffer, should I add another .memoryless texture and write my world-space position to it, or should I not worry about it for a simple demo?