I need help understanding a fundamental concept in Apple’s Metal pipeline.
Let’s assume I have a single triangle I am going to render, made of three points R, G and B. Let’s also assume that each point has a colour associated with it - corresponding to its name. We set up the vertex shader to simply pass the vertex position and colour without modification.
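For reference, this is roughly the pass-through vertex shader I have in mind (the struct and function names are just placeholders of mine, and it assumes a matching vertex descriptor with position at attribute 0 and colour at attribute 1):

#include <metal_stdlib>
using namespace metal;

struct VertexIn {
    float4 position [[attribute(0)]];
    float4 color    [[attribute(1)]];
};

struct VertexOut {
    float4 position [[position]];
    float4 color;
};

vertex VertexOut vertex_main(VertexIn in [[stage_in]]) {
    VertexOut out;
    out.position = in.position;  // pass the position straight through
    out.color    = in.color;     // pass the colour straight through
    return out;
}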
The pipeline now rasterises the triangle - let’s say that it covers 10 pixels on screen. Now here’s where my confusion lies - the fragment shader…
If my fragment shader just returns the vertex’s colour then Metal will interpolate between the colours of each corner. My naive assumption was that Metal would call the fragment shader once for each vertex, and then use the resulting colours to interpolate the values for each pixel.
However, if I sample a texture in my fragment shader then it would need to be passed the position of each pixel, not the original vertices, so that I can look up / sample the correct colour from the texture in the shader.
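In other words, for texture sampling I would expect the fragment shader to receive a per-pixel texture coordinate, something like this rough sketch (the names and the texture/sampler bindings at index 0 are made up by me):

struct TexturedOut {
    float4 position [[position]];
    float2 uv;   // texture coordinate for this fragment
};

fragment float4 fragment_textured(TexturedOut in [[stage_in]],
                                  texture2d<float> tex [[texture(0)]],
                                  sampler smp [[sampler(0)]]) {
    // sample the texture at this fragment's coordinate
    return tex.sample(smp, in.uv);
}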
I suppose a daft way to ask the same question is: how often is the fragment shader called - 3, 10, 30, or some other number of times? Or, even more daft: what is a fragment - an individual pixel, or a structure that comprises the 3 vertices and the pixels that need to be coloured?
Hi @peggers123 and welcome to the forum
After vertex processing of the three vertices, as you say, the pipeline rasterises the triangle to, say, ten fragments.
Before the fragment function processes each of these ten fragments, when a fragment function has a structure passed in with a [[stage_in]] attribute, each of the structure members is interpolated for the fragment. (To prevent this interpolation, you can use the [[flat]] attribute on the structure member.)
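As a rough illustration (the member names here are just examples), the attribute on each struct member controls this per member:

struct FragmentIn {
    float4 position   [[position]]; // the fragment's window-space position
    float4 color;                   // interpolated across the triangle for each fragment
    uint   materialID [[flat]];     // not interpolated; taken from the provoking vertex
};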
In your colour example, each fragment’s colour would be interpolated from the three vertex colours.
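A minimal fragment function for that case might look like this (assuming a VertexOut structure like the one you describe, with a position and a colour):

fragment float4 fragment_main(VertexOut in [[stage_in]]) {
    // in.color already holds the value interpolated for this
    // particular fragment by the rasteriser.
    return in.color;
}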
The fragment function is performed for every fragment that a triangle covers.
The fragment function doesn’t have access to any other fragment.
As to what a fragment is, Apple rather confusingly says: “A fragment is a possible change to the render targets.”
A pixel is what lights up on your screen to show the render. Depending on settings, such as antialiasing, a pixel can be affected by multiple fragments. I tend to think of a fragment being what software produces, and a pixel being what the hardware shows, but this isn’t a precise definition.
@Caroline thank you so much for the swift reply. The element I was missing was that the rasteriser interpolates any value in the structure output from the vertex shader that isn’t attributed otherwise (e.g. with [[position]] or [[flat]]) - i.e. the rasteriser has no idea which of these values, if any, represent colour.
Now it all makes perfect sense
btw - I bought the Metal by Tutorials book in print form and am very much enjoying it.