Can't get past Part I

I’ve been reading your book since December and finding it very difficult. I’ve read the first 7 chapters repeatedly and gone no further. The problem, I think, is that I don’t use Blender or other modelling software. I have my own code which produces a “mesh” that I wouldn’t know how to produce any other way. My framework can produce a data model such as:

public struct Vertex {
  public let position: SIMD4<Float>
  public let colour: SIMD4<Float>
}

var vertices: [Vertex] = HexTiledTorus(width: 20, height: 10, radius: 0.5)

I have been able to render this to the screen using the pipeline, but I’ve not used any of the Model I/O objects such as meshes, vertex descriptors, or allocators that appear in these tutorials, nor can I see how to obtain the objects you rely on so heavily. In particular, my structure has no concept of a mesh or sub-mesh, and I know I’m going to want to treat all the tiles that comprise my model as separate objects (sub-meshes?).

I’ve skimmed forward but cannot see that you address this. Any suggestions?

The code in the book relies upon you using the book resources. If you use your own resources, produced in a non-standard way, then you won’t be able to use the code in the book as-is; you’ll need to adapt it.

As you’ve found, you don’t need meshes, vertexDescriptors, or submeshes to render an object. You can send arrays of vertices (and colors) down the pipeline instead, and let the shader functions deal with the result.

In Chapter 4, Coordinate Spaces, you send vertices down the pipeline. (Just a few, but quantity is irrelevant.) Instead of sending a vertex array containing only position, you could send a struct array made up of position and color.
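For example, something along these lines would work. This is just a sketch: it assumes a device, a renderEncoder and a pipeline state you’ve already set up in your own renderer, and a vertex function that indexes the buffer with its vertex_id rather than using a vertex descriptor:

import MetalKit

struct Vertex {
  var position: SIMD4<Float>
  var colour: SIMD4<Float>
}

// With your `vertices` array built, copy it into a buffer and draw it.
// No vertex descriptor or Model I/O object is involved.
let vertexBuffer = device.makeBuffer(
  bytes: vertices,
  length: MemoryLayout<Vertex>.stride * vertices.count,
  options: [])
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
renderEncoder.drawPrimitives(
  type: .triangle,
  vertexStart: 0,
  vertexCount: vertices.count)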

The other day I was experimenting with a custom Node. I created a Primitive subclass of Node, which creates a quad (rect). It could create other primitives too, but I haven’t got to that yet. I attach it as an example of a custom Node that you can add to your scene.

This example does not use a vertex descriptor, but just relies on arrays of vertex positions being sent to the vertex shader.

Each Primitive holds its own color and sends it to the fragment shader directly, without passing it through the vertex shader, as it assumes that the Primitive has only one associated color. If yours is colored per vertex, then you’d send the color to the vertex shader in the struct, as I just described.
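Sending that single colour could look something like this (a sketch; the buffer index is arbitrary and up to your fragment shader):

// One colour for the whole Primitive, passed straight to the
// fragment shader, so no per-vertex colour is needed.
var colour = SIMD4<Float>(1, 0, 0, 1)
renderEncoder.setFragmentBytes(
  &colour,
  length: MemoryLayout<SIMD4<Float>>.stride,
  index: 0)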

Mine has a Submesh because it uses indices for rendering, so that vertices are not duplicated. A quad (rect) will require four vertices (one at each corner), but six indices to produce the two triangles.
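In code, the quad data is roughly this (the positions are only illustrative):

// Four corner positions, shared by both triangles...
let quadVertices: [SIMD4<Float>] = [
  [-1,  1, 0, 1],   // top left
  [ 1,  1, 0, 1],   // top right
  [ 1, -1, 0, 1],   // bottom right
  [-1, -1, 0, 1]    // bottom left
]
// ...and six indices describing the two triangles.
let quadIndices: [UInt16] = [
  0, 1, 2,
  2, 3, 0
]
let indexBuffer = device.makeBuffer(
  bytes: quadIndices,
  length: MemoryLayout<UInt16>.stride * quadIndices.count,
  options: [])!
renderEncoder.drawIndexedPrimitives(
  type: .triangle,
  indexCount: quadIndices.count,
  indexType: .uint16,
  indexBuffer: indexBuffer,
  indexBufferOffset: 0)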

I would suggest that you get to understand how the engine works by using the book’s resources, and then create a custom Node that uses your mesh. You control how the node renders through the delegate method render(renderEncoder:uniforms:fragmentUniforms:). Hopefully the example I’ve attached will make this clear.

Archive.zip (2.9 KB)

Thanks for your prompt reply. However:

If you use your own resources, produced in a non-standard way, then you won’t be able to use the code in the book

Hardly a fair comment. My ‘own resources’ are no more than a class which produces a [Vertex]. Not rocket science, and I can’t imagine what you mean by ‘produced in a non-standard way’, especially as my Vertex is identical to those used in your book. The point is that your book does not equip me to use Metal, but requires me to learn Blender to get anywhere.

Just a few, but quantity is irrelevant.

That’s what I thought too but then: why sub-meshes? I didn’t even learn that much.

Mine has a Submesh because it uses indices for rendering, so that vertices are not duplicated.

I’m sure that if I understood this cited paragraph, I would make progress. Mine too uses indices for rendering. Indeed, I’ve rendered a torus (not made of the usual concentric rings, but tiled instead with hexagons, each composed of six triangles). I have two large arrays, one of vertices and the other of UInt16 indices. So maybe I already have a sub-mesh? It is not, however, an MTLSubMesh, and I’ve no idea how to acquire one, or what an Allocator is, or countless other components of the Metal framework.

I will try to understand your Node/Primitive construct and plow on. You guys really need to test your tutorial on people who have never rendered before. It is jam-packed with terminology which I do not understand, made all the more confusing because much of the terminology has an everyday meaning which does not correspond with its technical usage. I remain confused, for example, by what you mean by Texture. I won’t make a list … it would go on and on.

regards and thanks for your reply

Perhaps I phrased “non-standard” badly. I meant that most people who want to render a 3D model will use a model made in an external app, such as Blender. You don’t need to understand how Blender works, but if you want to examine a model to get a feel for how a 3D model is put together, then it’s useful to open it in Blender and take a look.

You generally have a submesh for each Material you use. If you are simply assigning a color to each vertex, that is not a Material. A Material is, for example, the color red with a roughness of 0.2. Indices are arranged in groups, or submeshes, and each submesh has its own material.

An MTKSubmesh is MetalKit’s class corresponding to the concept of submesh.
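Conceptually, though, a submesh is nothing more than a group of indices with a material attached. A rough sketch of the idea (these types are only illustrative, not the book’s engine types):

struct Material {
  var baseColour: SIMD4<Float>
  var roughness: Float
}

struct Submesh {
  var indices: [UInt16]    // the triangles that use this material
  var material: Material
}

struct Mesh {
  var vertices: [Vertex]   // shared by every submesh
  var submeshes: [Submesh]
}

When rendering, you loop over the submeshes, set each one’s material, and issue one indexed draw call per submesh.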

You can think of a texture as an image. However, an image is generally the contents of a bitmap file. A texture is an area of memory that contains image data; it can be used for texturing a model, or it can be written to by a render pass. The view’s drawable is a texture, for example.
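For example, this creates a texture that never came from an image file at all; it’s just a block of GPU memory that one render pass could draw into and a later pass could sample from (the size and pixel format are arbitrary):

let descriptor = MTLTextureDescriptor.texture2DDescriptor(
  pixelFormat: .bgra8Unorm,
  width: 512,
  height: 512,
  mipmapped: false)
// Allow it to be both a render target and readable from a shader.
descriptor.usage = [.shaderRead, .renderTarget]
let texture = device.makeTexture(descriptor: descriptor)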


Thank you. I’m one of those who like to do things as much as I can from first principles 🙂