Errata for Metal by Tutorials 3rd Edition

FYI: It’s Aug 1 and this isn’t fixed.


In chapter 5:
[At the bottom-left of the preview window, you’ll see an option to pin the preview.]

FYI: It’s now at the top-left of the preview window. Xcode Version 14.3.1 (14E300c)


Aug 1st; this still isn’t fixed.


Error in Chapter 6

You force a call to mtkView(metalView, drawableSizeWillChange: metalView.bounds.size) to ensure that you set up the projection matrix. This contains a subtle bug with no consequences in this instance, but it fooled me later in chapter 7 when sending width and height to a fragment shader, where you say: “size contains the drawable texture size of the view. In other words, the view’s bounds scaled by the device’s scale factor”. Print it and you’ll see that it measures pixels; it isn’t rescaled to suit the device’s points.

The correct call is:

mtkView(metalView, drawableSizeWillChange: metalView.drawableSize)

Curiously, when we want to go in the other direction and collect the mouse location (as you do in chapter 11), it is provided in pixels (i.e. not converted to points).

regards


This book was certainly value for money. I’m still reading it and back in chapter 2.
I think there’s an error in the shader code but maybe I misunderstand something.
In the train example, we make a vertex descriptor with one float3 attribute. Then we load a float4 in the vertex shader! I think it works because they’re both of length 32, but it’s more correct to do as follows:

struct VertexIn {
  float3 position [[attribute(0)]];
};

vertex float4 vertex_main(const VertexIn vertex_in [[stage_in]]) {
  return float4(vertex_in.position, 1);
}

It’s because you’re using a vertex descriptor. [[attribute(0)]] maps the property in the shader structure to the first attribute in the vertex descriptor.

You can test that it’s not because of the length of the property in Chapter 8, Textures.

In Shaders.metal, change float2 uv [[attribute(UV)]] to float4 uv....

In the vertex function, change in.uv to in.uv.xy.

The app still works.

VertexIn.uv is padded with z = 0, w = 1.

It’s true that we’re using a vertex descriptor, which describes attribute(0) as a .float3. Your shader code in chapter 2 promises a float4:

struct VertexIn {
  float4 position [[attribute(0)]];
};

vertex float4 vertex_main(const VertexIn vertex_in [[stage_in]]) {
  return vertex_in.position;
}

… and nevertheless it works! I’m not saying that it doesn’t. I’m saying that there is an inconsistency that bothers me. Both your code and my code work, but yours works because .float3 and .float4 produce the same buffer (of stride 16, not 32 as I said above) … we get a kind of side effect. And yet … are they really the same?

The buffer has already been set up (in setVertexBuffer()) with some fixed number of bytes per vertex. I think each data point is 16 bytes, regardless of whether it’s float3 or float4. But surely a float3 sent to a buffer by padding it (with a zero?) and then read as a float4 is not the same as reading it as a float3 and converting with float4(float3, 1): (x, y, z, 0) vs. (x, y, z, 1).

Anyhow, the point of my report was that this is confusing. This is a tutorial and if we want to understand what’s going on, something remains to be explained. I think I understand a lot more about Metal as a result of really working your book but I’m still rather confused about shader signatures.

All the best

I take your point that if you haven’t understood, then I have failed to teach properly. One of the difficulties of teaching a graphics API is that to get any result at all, you’re thrown everything up front, so that everything merges together. As you go through the book, with repetition, I hope it unmerges. I’m glad you are focussing on the one topic to understand it.

I think that vertex descriptors and stage_in are the hardest thing to understand in the whole of Metal, and it took me ages. I actually use .float3 to emphasize the heavy lifting that vertex descriptors do. If I used float4, I’d get fewer questions. :slight_smile:

Let me try and see how I can make it better.

This is from Chapter 3, The Rendering Pipeline:

To recap, the CPU sent the GPU a vertex buffer that you created from the model’s mesh. You configured the vertex buffer using a vertex descriptor that tells the GPU how the vertex data is structured. On the GPU, you created a structure to encapsulate the vertex attributes. The vertex shader takes in this structure as a function argument, and through the [[stage_in]] qualifier, acknowledges that position comes from the CPU via the [[attribute(0)]] position in the vertex buffer. The vertex shader then processes all of the vertices and returns their positions as a float4.

Chapter 4, The Vertex Function describes first not using vertex descriptors and then using vertex descriptors.

This is the relevant paragraph in the Vertex Descriptors section:

[[attribute(0)]] is the attribute in the vertex descriptor that describes the position. Even though you defined your original vertex data as type Vertex containing three Floats, you can define the position as float4 here. The GPU can make the conversion.

Would it help if I added this to the end of that Vertex Descriptor section:


In summary, if this is your vertex descriptor:

These are the resulting MTLBuffers when Model I/O reads in the asset:

On the Swift side, you send the vertex MTLBuffers to the GPU, using the setVertexBuffer command on the render command encoder, with the correct buffer index number.

You can then add this structure to your vertex shader:

struct VertexIn {
  float4 position [[attribute(0)]];
  float2 uv [[attribute(2)]];
};

Notice that aside from changing the size of the position attribute, the normal attribute(1) is completely ignored in this structure.

To be able to access your MTLBuffers, you include them in the vertex function header. Because you’re using a vertex descriptor, you can use the stage_in qualifier.

vertex VertexOut vertex_main(VertexIn in [[stage_in]]) { ... }

The buffers on the GPU remain the same, but when the GPU reads them in, it maps the properties from the pipeline state’s vertex descriptor layout to the layout in VertexIn.

In this vertex function, you won’t have access to the normal property because you haven’t defined it in VertexIn.

Because the GPU’s pipeline state holds the vertex descriptor, the GPU mapping of vertex descriptor attributes when you use the stage_in qualifier is just that magical.


Does that clarify at all?

Unfortunately, it’s too early to look at the buffers on the GPU. That’s another level of :exploding_head:. But later, when you know how to capture the GPU workload, you can experiment with changing the vertex descriptor and the vertex shader structure and examining the buffers on the GPU.

Just another thing that blows my mind too. When you use Model I/O, the vertex descriptor describes how to create the MTLBuffer. You can even rearrange the buffers by changing the vertex descriptor.

For example:

let mdlMesh = MDLMesh(
  planeWithExtent: [1, 1, 1],
  segments: [4, 4],
  geometryType: .triangles,
  allocator: allocator)
mdlMesh.vertexDescriptor = MDLVertexDescriptor.defaultLayout

mdlMesh loads with one vertex layout, but when you assign a new layout, it shuffles all its buffers around.

Actually, it seems it’s okay to call it with metalView.bounds.size, but the bounds should not be “scaled by the device’s scale factor”, since they have already been scaled; this contradicts the documentation, which claims that the bounds’ “size is the same as the size of the rectangle in the frame property”. When I set my frame to (600, 600), both bounds.size and drawableSize are set to (1200, 1200).

Nevertheless, the mouse position reported (using hover) is in a range up to 600 only.

Hi Caroline :grinning:

Indeed. I think I understand vertex descriptors, but it seems to me that since the pipeline knows I’m using one, it shouldn’t need a struct at all to use [[stage_in]]. Since it does need one, the struct should have the same organisation, but it doesn’t! Where on earth does w = 1 come from? I think you’re telling me it comes from .float3, but I don’t know what documentation tells you that. Here’s the sum total of what Apple’s documentation has to say about .float3:

I’ve trawled the (very difficult) MSL spec and the (not much easier) Metal Programming Guide and found nothing. Indeed, where it gets the right to assume w = 1 is not clear to me, as I don’t think points and vectors are both treated homogeneously. Maybe naming the attribute “Position” in the vertex descriptor is helping? I thought I could give it any name I liked. [But as I wrote this reply, I discovered in the documentation for MDLVertexAttribute that name is not merely a string but should (or could?) be one of several constants, including MDLVertexAttributePosition, although I still don’t see how you’d know that it turns the attribute into a float4 automatically. Very sloppy, in my view.]

I think I already understand that point clearly. There is no w=1 in that buffer.

I can understand it maps and I have examined buffers and I believe you, but as I say, how does it know to add w=1? And what documentation told you that?

:grinning: :grinning: Now I’m with you. :grinning: :grinning:

I think in your next edition it would be clearer either to use my VertexIn struct, which explicitly adds the w = 1, or to explain what is responsible (is it .float3, [[stage_in]], or even the string MDLVertexAttributePosition?) and reference the documentation.

Anyhow, you are so kind to put so much work into answering my questions.

My best regards.

I don’t remember whether I read the automatic padding with w=1 or whether I observed it. I can’t find it in the documentation now, anyway.

I can observe it by deliberately placing values in the vertex function. For example, if I change the uv value in VertexIn to float4 instead of float2, and in the vertex function I assign out.uv = in.uv.w, then I can observe the outgoing vertex attributes in the GPU frame capture. In this case, out.uv would be [1, 1].

In this way, I can observe that the GPU pads the incoming float2 with 0 in the z and 1 in the w. So my assumption is that w, if not provided, is always assigned 1.

The mouse position is in points. The drawable texture is in pixels.

So you would have to multiply the mouse position by the device’s scale to get the corresponding position in the drawable texture.

let scaleFactor: Float
#if os(macOS)
scaleFactor = Float(NSScreen.main?.backingScaleFactor ?? 1)
#elseif os(iOS)
scaleFactor = Float(UIScreen.main.scale)
#endif

Yeah, that’s what I’ve been doing. One of the fewer and fewer cases where we need to handle the platforms differently.

In chapter 6, Metal by Tutorials, Chapter 6: Coordinate Spaces | Kodeco

The steps have me put Shared/Common.h in the Objective-C bridging header setting for the project. I get an error “Cannot find ‘Uniforms’ in scope” unless I also add Shared/Common.h to the Objective-C bridging header setting of the iOS and macOS targets.

I suspect that the instructions should tell readers to add the Common.h bridging headers to the targets rather than to the project (or perhaps both places.)

Hi @duncanchampney and welcome to the forum :blush:!

The steps should work as is. I’ve been through them several times. However, I’m working with completely new projects, and something may have changed in Xcode over the couple of years the book and sample code have been out. And something may have changed in your project. If you’ve somehow overridden the project values with the target values, then you’d need to change them in the target.

I’m glad you managed to overcome this.

Unfortunately, after adding quaternions in Chapter 23, objects will no longer render using Xcode 15.

This is because the initialisation of simd_quatf has changed: simd_quatf() now produces a zero quaternion, so the model matrix is initially zeroed rather than set to the identity.

In Transform.swift, change

var quaternion = simd_quatf()

To:

var quaternion = simd_quatf(.identity)

Everything should work again.


This topic was automatically closed after 13 hours. New replies are no longer allowed.