In the command encoder, each type of graphics function has its own argument table. You can bind up to 31 buffers, at indices 0 through 30, into each of the vertex, fragment and kernel argument tables, and the tables are entirely separate from each other.
For example, on a render command encoder you can set:
setVertexBuffer(buffer, offset: 0, index: 0-30)
setFragmentBuffer(buffer, offset: 0, index: 0-30)
And on a compute command encoder:
setBuffer(buffer, offset: 0, index: 0-30)
Each of those calls indexes into a separate argument table, which the GPU reads from.
So if edge_factors is in buffer 0 in the kernel function, it does not have to be in buffer 0 in the vertex function.
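As an illustrative sketch of this independence (the buffer name and indices here are made up, not from the sample), the same buffer can live at a different index in each table:
// Hypothetical: one MTLBuffer bound into three separate argument tables.
renderEncoder.setVertexBuffer(sharedBuffer, offset: 0, index: 3)
renderEncoder.setFragmentBuffer(sharedBuffer, offset: 0, index: 7)
computeEncoder.setBuffer(sharedBuffer, offset: 0, index: 0)
Each call affects only its own table; the three bindings are completely independent of each other.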
On the CPU side, when encoding the compute pass, you set:
computeEncoder.setBuffer(tessellationFactorsBuffer, offset: 0, index: 2)
In the GPU kernel function, this buffer is called factors, and the kernel function uses edge_factors, which you set up earlier, to calculate the tessellation factors and write them into factors, bound at buffer index 2.
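The factors buffer itself is just untyped GPU memory sized to hold one set of tessellation factors per patch. A minimal sketch of allocating it, assuming quad patches, a patchCount value and GPU-only access (all assumptions, not shown in this section):
// One MTLQuadTessellationFactorsHalf entry per patch; only the GPU
// reads and writes this buffer, so private storage is sufficient.
let length = patchCount * MemoryLayout<MTLQuadTessellationFactorsHalf>.stride
let tessellationFactorsBuffer = device.makeBuffer(length: length,
                                                  options: .storageModePrivate)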
On the CPU side, you then set that factors buffer, named tessellationFactorsBuffer in Swift, as the render command encoder's tessellation factor buffer:
renderEncoder.setTessellationFactorBuffer(tessellationFactorsBuffer,
                                          offset: 0,
                                          instanceStride: 0)
You set the control points buffer at index 0 on the render command encoder.
renderEncoder.setVertexBuffer(controlPointsBuffer, offset: 0, index: 0)
This buffer holds the small number of control points that define the mountain tiles.
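With the tessellation factors and control points bound, the draw call submits patches rather than vertices. A sketch of what that might look like, assuming quad patches with four control points each and a hypothetical patchCount (this call isn't shown in this section):
renderEncoder.drawPatches(numberOfPatchControlPoints: 4,
                          patchStart: 0,
                          patchCount: patchCount,
                          patchIndexBuffer: nil,
                          patchIndexBufferOffset: 0,
                          instanceCount: 1,
                          baseInstance: 0)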
Rendering patches rather than vertices changes the vertex function slightly. In this sample there is no model, so you're not using Model I/O and therefore no MDLVertexDescriptor. However, in buildRenderPipelineState(), you do set up an MTLVertexDescriptor in the pipeline state to describe how the control points buffer is laid out, including the stride, so that the vertex function can use stage_in to read the control points.
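A minimal sketch of that descriptor, assuming each control point is a single float3 position in buffer 0 and a hypothetical pipelineDescriptor (the sample's actual attribute layout may differ):
let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[0].bufferIndex = 0
// Step per patch control point so stage_in delivers control points
// to the vertex function instead of ordinary per-vertex data.
vertexDescriptor.layouts[0].stepFunction = .perPatchControlPoint
vertexDescriptor.layouts[0].stride = MemoryLayout<SIMD3<Float>>.stride
pipelineDescriptor.vertexDescriptor = vertexDescriptor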