Chapter 8: No Question, Just a Remark

I hardly understood a blessed thing!
This chapter needs a glossary and some good old-fashioned explanation.

I’m sorry to hear that - I’ve always been rather fond of that chapter!

How far did you get before you came unstuck, and where can I help?

One suggestion - it helps if you understand what goes into making and rendering a 3D model in a CAD or modelling program such as Blender (free).

This is an easy fast tutorial: Blender Tutorial for Beginners: How To Make A Mushroom | Kodeco

Hi Caroline

[Actually, you’ll laugh. It looks like we’ve already had this conversation back in March 2020, when you explained Texture to me. It looks like I’ve made no progress. Still in Part I of the book.]

I did go through that Blender tutorial some time ago and I found it distracted me entirely from learning about Metal :slightly_frowning_face:

The problem with the chapter, for me, is that it is full of jargon. What is a texture? A png? I don’t think so; Warren Moore seems to talk about an object that is read from or written to. The thing is, I have previously succeeded in decorating one of my own purpose-built meshes with the contents of a png. (See below, where I mapped parts of a png onto a torus laid out in hexagons.) No Blender was involved, just some geometry.

So I can follow the mechanics, but what is a sampler (or sampling)? A semantic? A material? Base colour/diffuse? And more and more.

Even the starter project takes a leap. Why are you zipping together an MDLMesh and an MTKMesh? And, actually, are there any submeshes? I don’t think so. Or perhaps one?

All I can see is that I supply an image (like wallpaper) and some coordinates (UV) to help me paper it over my triangles. All the rest is a blur.

I do remember that torus :smiley:

There’s a ton of jargon in 3D rendering, unfortunately. Yes, a glossary would be nice. I’ll give that some thought.

I’ve found, when learning hard things, that I have to take some things as “given”. I might not understand everything the first time around, but the second time I meet them I’ll understand slightly more. It’s very difficult to teach or learn graphics because you have to know everything before you can even render a triangle, and that’s just not possible. So you have to go round in circles for a while until the jargon becomes familiar and things start to gel.

If you’ve taken away from the chapter that you paper over your triangles using UV coordinates and an image, that’s a good start.

Textures

A png is an image format. An image is a matrix of pixels. A texture, in graphics, is a bit more complicated.

This is a good description (ignoring Direct3D specifics - which actually mostly translate into something in Metal): Introduction To Textures in Direct3D 11 - Win32 apps | Microsoft Learn

So an MTLTexture is somewhat similar to an MTLBuffer, except that it can be two- or three-dimensional, and you can “sample” textures in different ways.
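To make the comparison concrete, here’s a minimal sketch of creating a texture. The size and pixel format are arbitrary choices of mine, and actually creating the texture needs a Metal-capable device:

```swift
import Metal

// Describe a 2D texture, much as you'd describe a buffer's length.
// Unlike a buffer, a texture has dimensions and a pixel format.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
  pixelFormat: .rgba8Unorm,   // four 8-bit channels per pixel
  width: 256,
  height: 256,
  mipmapped: false)
descriptor.usage = [.shaderRead]  // the GPU will only read/sample it

// Creating the actual texture requires a device.
if let device = MTLCreateSystemDefaultDevice(),
   let texture = device.makeTexture(descriptor: descriptor) {
  print(texture.width, texture.height)  // 256 256
}
```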

Sampling

Sampling is a special kind of read. Just as you can read data at a particular point in a buffer, you can read data at a particular point in a texture.

But textures are special in that they can be stretched over a surface. If you just read a texture, you’re reading the texture at a specific coordinate and getting that specific value. If you try and read off the edge of the texture, that’s a problem.

If you sample a texture, you’re interpolating values. And how you interpolate them is described by a sampler (those pretty frog pictures in the chapter). If you try and sample off the edge of the texture, the sampler tells the GPU what to do.
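In Swift, those choices live in an MTLSamplerDescriptor. A minimal sketch, with the filter and address modes picked purely for illustration:

```swift
import Metal

// A sampler descriptor tells the GPU *how* to sample a texture.
let samplerDescriptor = MTLSamplerDescriptor()

// Interpolation: blend neighbouring texels rather than snapping
// to the nearest one.
samplerDescriptor.minFilter = .linear
samplerDescriptor.magFilter = .linear

// What to do off the edge of the texture: here, tile it in the
// s direction and clamp to the edge colour in the t direction.
samplerDescriptor.sAddressMode = .repeat
samplerDescriptor.tAddressMode = .clampToEdge

// Creating the sampler state itself requires a device:
// let sampler = device.makeSamplerState(descriptor: samplerDescriptor)
```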

Semantics and Materials

Chapter 11 goes over materials in a bit more detail, where you’ll use various other textures for various purposes.

A base color (diffuse / albedo) texture just describes color without any lighting effects.

Because a texture is made up of float4s, you can describe things other than color. For example, a normal map texture holds the direction that a particular fragment points, so that you can light that fragment correctly.

There are multiple values that you take into account when shading a fragment, such as how shiny the surface is, whether it’s transparent, and whether it emits a color. These are the semantics. Taken together, these semantic values make up the surface material. You don’t always have textures for these values; sometimes you have scalar values.

A semantic might be baseColor or roughness or specular. Each semantic describes a surface property that you can use for shading a fragment. If the material has a roughness of 0, then it’s going to be shiny. If it has a roughness of 1, it won’t be. Instead of that scalar value, you could have a texture map that describes variable roughness over the surface.
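On the Model I/O side, each semantic is something you can look up on an MDLMaterial. A sketch (the helper function, the material name and the 0.3 value are mine, not the book’s):

```swift
import ModelIO

// Sketch: reading a semantic's scalar value out of a Model I/O material.
func roughnessValue(of mdlMaterial: MDLMaterial) -> Float {
  // Look up whichever property is filed under the .roughness semantic.
  guard let property = mdlMaterial.property(with: .roughness) else {
    return 1  // no roughness semantic present: assume fully matte
  }
  return property.floatValue
}

// Hypothetical material with a scalar roughness of 0.3.
let material = MDLMaterial(name: "demo",
                           scatteringFunction: MDLScatteringFunction())
material.setProperty(
  MDLMaterialProperty(name: "roughness", semantic: .roughness, float: 0.3))
print(roughnessValue(of: material))  // 0.3
```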

Model I/O vs Metal

I didn’t want to get too bogged down in material semantics. Accessing things in Model I/O is quite complicated, and not really relevant to learning how Metal works with the GPU.

Model I/O is a framework that is only really for file input and output. It deals with various complicated file formats, such as .dae and .usd. If you generate your own mesh, then you wouldn’t use Model I/O. It adds an extra layer of complication, because we need to extract things like meshes and textures from files. Initialising Model, Mesh and Submesh is only hard because of this extraction. In a real game engine, you would probably have your own faster file format rather than one that relies on extracting from Model I/O.

When you read in an asset, you read into Model I/O’s format. To use the asset in a Metal render, you need it in Metal’s format.
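That conversion is a single MetalKit call, MTKMesh.newMeshes(asset:device:). A sketch, using a generated box in place of a file-loaded asset so it’s self-contained:

```swift
import MetalKit
import ModelIO

// A generated box stands in for a mesh loaded from a file.
guard let device = MTLCreateSystemDefaultDevice() else {
  fatalError("Metal is not supported on this machine")
}
let allocator = MTKMeshBufferAllocator(device: device)

// Model I/O's format: an MDLMesh inside an MDLAsset.
let mdlMesh = MDLMesh(
  boxWithExtent: [1, 1, 1],
  segments: [1, 1, 1],
  inwardNormals: false,
  geometryType: .triangles,
  allocator: allocator)
let asset = MDLAsset()
asset.add(mdlMesh)

// Convert to Metal's format: MTKMeshes you can render.
let (mdlMeshes, mtkMeshes) = try MTKMesh.newMeshes(asset: asset, device: device)
print(mdlMeshes.count, mtkMeshes.count)  // 1 1
```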

zip

An asset might have multiple meshes. Each mesh might have multiple submeshes. An asset will always have at least one mesh and each mesh will have at least one submesh (* actually that’s not quite true. It’s only true for indexed drawing, but you’ll always do indexed drawing on objects created in 3D apps). As well as holding the indices of the vertices, which describe how to render the mesh, the Model I/O submesh describes the materials. Meshes are held in vertex buffers, and materials are held in textures or scalar values.

Model I/O’s MDLSubmesh contains both index data and material data. Metal’s MTKSubmesh only contains index data. So you have to store the material data somewhere else.

When loading from Model I/O, you have this hierarchy:

MDLAsset 
  -> MDLMeshes 
    -> MDLSubmeshes

You want to have this hierarchy:

Model 
  -> Mesh (containing MTKMesh) 
    -> Submesh (containing MTKSubmeshes and Materials)
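Sketched as Swift types (hypothetical property names, not necessarily the book’s exact code):

```swift
import MetalKit

// Hypothetical material: the data MTKSubmesh drops but MDLSubmesh keeps.
struct Material {
  var baseColor: SIMD4<Float>         // scalar fallback
  var baseColorTexture: MTLTexture?   // optional texture map
}

// Submesh keeps Metal's index data *and* the material.
struct Submesh {
  let mtkSubmesh: MTKSubmesh  // index buffer: how to draw
  let material: Material      // what the surface looks like
}

struct Mesh {
  let mtkMesh: MTKMesh        // vertex buffers
  let submeshes: [Submesh]
}

struct Model {
  let meshes: [Mesh]
}
```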

You queried this code:

meshes = zip(mdlMeshes, mtkMeshes).map {
  Mesh(mdlMesh: $0.0, mtkMesh: $0.1)
}

You might be misunderstanding what Swift’s zip does. This is Apple’s example:

let words = ["one", "two", "three", "four"]
let naturalNumbers = 1...Int.max
let zipped = Array(zip(words, naturalNumbers))
// zipped == [("one", 1), ("two", 2), ("three", 3), ("four", 4)]

At this point of the code, you have mdlMeshes loaded and mtkMeshes created from these mdlMeshes. However, you still need to load the materials for each submesh. So for each mesh, you create a Mesh using both the mdlMesh and corresponding mtkMesh that you’ve just created.

Mesh.init will create Submeshes. Then each Submesh will load Model I/O’s material.

The zip and map will allow you to process each mesh.

The way the book loads Model, Mesh and Submesh is the easiest approach I’ve come up with so far. I’m open to suggestions :slight_smile:


Thanks. So much to digest.
