You didn’t explain using .read in the fragment function to render the textures on the quad. Specifically, in.position is a float4, so why is in.position.xy cast to a uint2, apparently to read the texture pixels?
albedoTexture holds the resulting color. To work out shading, you need not only the object colors but also the normals and positions.
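As a sketch of the idea (texture and function names here are assumed, not taken from the book), a lighting fragment function might combine the three G-buffer textures for a single point light like this:

```metal
#include <metal_stdlib>
using namespace metal;

struct FragmentIn {
  float4 position [[position]];  // window-space position from the rasterizer
};

// Hypothetical sketch: read the G-buffer at this fragment's pixel and
// apply simple Lambertian shading for one point light.
fragment float4 fragment_deferredLighting(
  FragmentIn in [[stage_in]],
  texture2d<float> albedoTexture   [[texture(0)]],
  texture2d<float> normalTexture   [[texture(1)]],
  texture2d<float> positionTexture [[texture(2)]])
{
  uint2 coord = uint2(in.position.xy);            // pixel coordinates for read
  float4 albedo   = albedoTexture.read(coord);    // base color
  float3 normal   = normalize(normalTexture.read(coord).xyz);
  float3 position = positionTexture.read(coord).xyz; // world-space position

  float3 lightPosition = float3(0, 2, 0);         // assumed light position
  float3 toLight = lightPosition - position;
  float attenuation = 1.0 / (1.0 + dot(toLight, toLight));
  float diffuse = saturate(dot(normal, normalize(toLight)));
  return float4(albedo.rgb * diffuse * attenuation, 1);
}
```

The point is that all three reads use the same pixel coordinate, so each fragment shades exactly the surface that was rendered into the G-buffer at that pixel.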
After creating the G-buffer textures in GBufferRenderPass, you pass them to LightingRenderPass, where you set them as fragment textures on the render encoder in draw(commandBuffer:scene:uniforms:params:).
The GPU now has albedo/normal/position textures that are all the same size as the render view. You then draw a full-screen quad so that every fragment in the render view gets processed.
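One common way to cover the screen (a sketch; the book’s quad setup may differ) is to generate the four corners in the vertex function from the vertex ID, with no vertex buffer at all:

```metal
#include <metal_stdlib>
using namespace metal;

struct QuadOut {
  float4 position [[position]];
};

// Clip-space corners of a full-screen quad, drawn as a triangle strip.
constant float2 quadVertices[4] = {
  float2(-1,  1), float2(-1, -1),
  float2( 1,  1), float2( 1, -1)
};

vertex QuadOut vertex_quad(uint vertexID [[vertex_id]]) {
  QuadOut out;
  // The corners are already in clip space, so no matrix transforms are
  // needed; the rasterizer stretches the quad over the whole render area.
  out.position = float4(quadVertices[vertexID], 0, 1);
  return out;
}
```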
Each fragment receives a position. The rasterizer has already converted out.position from the vertex function’s clip-space output into screen space for the fragment input, so in.position.xy holds pixel coordinates, sampled at pixel centers (see Chapter 7: The Fragment Function - Screen Space).
This is debug output from the Metal debugger showing the value of the fragment at the marked position.
You used .read in Chapter 12: Render Passes to read the render texture at touch coordinates.
Notice that read differs from sample: read takes integer pixel coordinates rather than normalized coordinates in [0, 1], and it performs no filtering.
in.position is a float4 in screen space, and you’re reading a pixel from a texture that’s the same size as the screen, so only x and y are useful here. Since read takes a uint2, you cast in.position.xy to uint2.
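Put together, the fragment function can read the matching texel directly (a minimal sketch; the texture and struct names are assumed):

```metal
#include <metal_stdlib>
using namespace metal;

struct FragmentIn {
  float4 position [[position]];  // already converted to screen space
};

fragment float4 fragment_main(
  FragmentIn in [[stage_in]],
  texture2d<float> albedoTexture [[texture(0)]])
{
  // in.position.xy are pixel coordinates (0 to width/height across the
  // screen), and read expects integer pixel coordinates, so truncate
  // the float values to a uint2.
  uint2 coord = uint2(in.position.xy);
  return albedoTexture.read(coord);   // no sampler, no filtering
}
```

Because the texture is the same size as the render target, truncating the pixel-center position lands exactly on the texel that corresponds to this fragment.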
It seems odd to transform the first two components of a float4 (x and y) from clip to screen coordinates, and it doesn’t seem to be explained in the MSLS. In any case, those components aren’t being transformed correctly here: the scaling seems to be off by a factor of two, and the image is shifted by (width/2, height/2), so I get a quarter of an enlarged image in one corner.