Face normal vs vertex normal

There’s quite a lot to unravel here. First normals, then colors.

This is a great article: Introduction to Shading (Normals, Vertex Normals and Facing Ratio)

These are the surface normals of a cube exported from a 3D app in a standard .obj format:

vn 0.0000 1.0000 0.0000
vn 0.0000 0.0000 1.0000
vn -1.0000 0.0000 0.0000
vn 0.0000 -1.0000 0.0000
vn 1.0000 0.0000 0.0000
vn 0.0000 0.0000 -1.0000

Six surface normals corresponding to six faces.

If you return the normal from a fragment shader in one of the sample projects, then you will see this visualised:

return float4(in.worldNormal, 1);

(Three sides will be black because those normals have negative components, and negative color values clamp to black.)

When you render, you send the GPU vertices, not faces, through the draw call. Model I/O conveniently imports the vertex information from the .obj file. For a cube, which has 8 corner positions, it expands them to 24 vertices, extracts the relevant face normal for each vertex, and interleaves the data in an MTLBuffer:

position normal position normal position normal …

In the book, we then use a vertex descriptor to tell the GPU how our data is formatted. That’s on page 55 of the book: in this example we would tell the vertex descriptor to use position and normal attributes and give it the size and type of each attribute. The pipeline holds this vertex descriptor, so that the vertex shader function can use the stage_in attribute to read the buffer via the provided Vertex struct (which matches the vertex descriptor structure). The Vertex struct will have attribute(0) for position and attribute(1) for normal.
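
If it helps to see it in code, here’s a minimal sketch of that descriptor, assuming interleaved SIMD3&lt;Float&gt; position and normal in buffer 0 (offsets and stride will differ if your layout is packed):

    import MetalKit

    let vertexDescriptor = MTLVertexDescriptor()

    // attribute(0): position at the start of each vertex
    vertexDescriptor.attributes[0].format = .float3
    vertexDescriptor.attributes[0].offset = 0
    vertexDescriptor.attributes[0].bufferIndex = 0

    // attribute(1): normal, directly after the position
    vertexDescriptor.attributes[1].format = .float3
    vertexDescriptor.attributes[1].offset = MemoryLayout<SIMD3<Float>>.stride
    vertexDescriptor.attributes[1].bufferIndex = 0

    // one vertex = one position + one normal
    vertexDescriptor.layouts[0].stride = MemoryLayout<SIMD3<Float>>.stride * 2

    // The pipeline holds the descriptor so the shader's [[stage_in]] struct
    // can decode the buffer:
    // pipelineDescriptor.vertexDescriptor = vertexDescriptor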

But you don’t have to use a vertex descriptor, of course, to send data to the GPU.

However, if you aren’t using a standard file format, Model I/O can’t help you compute the normals, so you’ll have to calculate them yourself. The scratchapixel article will help with that.
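
For reference, a face normal is just the normalized cross product of two triangle edges — a minimal sketch, assuming counter-clockwise winding:

    import simd

    // Face normal from three triangle corners (counter-clockwise winding).
    func faceNormal(_ p0: SIMD3<Float>,
                    _ p1: SIMD3<Float>,
                    _ p2: SIMD3<Float>) -> SIMD3<Float> {
        simd_normalize(simd_cross(p1 - p0, p2 - p0))
    }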

For colors, you have found that vertex colors are interpolated over the surface of each triangle. And you can hold a color for each vertex.

A Material holds all color and surface information, like red & shiny. A Submesh has multiple indices and one material. That’s the most efficient way of giving groups of triangles the same color. You can have one group of vertices that is red and shiny, and another group that is blue and dull.

To do that, group your indices into submeshes and then allocate that submesh a material.

You can of course stick with vertex colors and assign each vertex a color. You have the choice of interleaving the color into the position / normal buffer, so that it contains position / normal / color information for each vertex, or creating a new buffer with a color for each vertex and sending it to the GPU at a new buffer index.

You would of course make sure that the Vertex shader struct matches your buffer information.
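
As a sketch of those two layouts (the Vertex type and the buffer index here are my own illustrative choices, not the book’s):

    import MetalKit

    // Option 1: interleave the color with position and normal in one buffer.
    struct Vertex {
        var position: SIMD3<Float>
        var normal: SIMD3<Float>
        var color: SIMD4<Float>
    }

    // Option 2: keep the colors in their own buffer and bind it at a new index.
    func makeColorBuffer(device: MTLDevice,
                         colors: [SIMD4<Float>]) -> MTLBuffer? {
        device.makeBuffer(bytes: colors,
                          length: MemoryLayout<SIMD4<Float>>.stride * colors.count,
                          options: [])
    }

    // At draw time (buffer index 1 is an arbitrary free slot):
    // renderEncoder.setVertexBuffer(colorBuffer, offset: 0, index: 1)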

There is another great Metal book by Warren Moore which may help. Sometimes an explanation from another person may gel more with your thinking.

It’s in Objective-C and fairly outdated, but the principles of sending vertices to the GPU haven’t changed.

Hello Caroline,
I’m embarrassed to have taken up so much of your time on my rather garbled instalment-based explanation of my problem. Let me try to clarify one last time.

I have no difficulty with the maths involved. The normals are easily calculated and my model produces them. I also understand clearly how they are represented in a .obj file, but I don’t use such a file (see note at end of post). I can also see how indices for these normals are provided in the .obj file (see example below), and although I have these indices, I realise that I am not uploading them to the GPU in my implementation: I could upload a buffer, but I can’t see how to apply it.

Consider two triangles, PQR (in the yz-plane) and QRS (in the xy-plane) which have a touching side and which are not co-planar. We can give them coordinates, normals and indices:

v 0 0 1      // P
v 0 0 0      // Q
v 0 1 0      // R
v 1 0 0      // S
vn -1 0 0
vn 0 0 -1
f 1/∅/1   2/∅/1   3/∅/1
f 2/∅/2   3/∅/2   4/∅/2

where each f entry holds vertex/[texture]/normal indices (∅ is null).
Those vns are not vertex normals; they are face normals, so vn is a misnomer. They cannot be interleaved:

position normal position normal position normal …

since there are only 2 normals and 4 positions.

Now I could ignore normals and interleave colours with the vertices, but if I make the first three vertices RED and the last one BLUE I will get a red PQR triangle and a graded red-blue QRS triangle. Q and R cannot be both RED and BLUE!! My colours need to be associated with faces, not with vertices.

Now the real point is that I’m not using these file formats, but Metal is supposed to allow me to work without them. I can construct triangles using the [[vertex_id]] attribute in my shader to point at the vertex indices, but I want to point at a different set of indices, one per triangle, when it comes to colour, since clearly I have 4 vertices but only 2 triangles and 2 normals. There is not a one-to-one correspondence between vertex and index here.

The command:

    renderEncoder.drawIndexedPrimitives(type: .triangle, indexCount: model.indices.count, indexType: .uint16, indexBuffer: indexBuffer!, indexBufferOffset: 0)

lets me point at indexed vertices. What command will help me point at triangle indices, and where would I use it? In the fragment shader?

What am I missing? Surely there are Metal commands which allow me to access the functionality it clearly uses when it takes in Wavefront files?

Thanks so much for your patience in looking at this. Surely, the real benefit of your books is the forum.

HexagonTiledTorus copy.zip (117.8 KB)
Note: I attach (if you’re curious) an example of my model - I wouldn’t have a clue how to create this in Blender, as it is designed to produce geometrically decreasing hexagons (each composed of 6 almost equilateral triangles) which perfectly tile a torus. Don’t look at the code (which needs to be tidied up); just run it to see the wireframe. Comment out

renderEncoder.setTriangleFillMode(.lines)

in Renderer.draw(in:) to see the hexes, which become smaller as we head from the outer to the inner circumference. I’ve had to use graded colouring from centre to edge to distinguish the hexagons.

Pretty!

I’m out today, but will address this later.

Out of curiosity (I will still address the problem): for this particular application, is the hexagon mesh necessary? What if you created a hexagon texture in Photoshop or similar, and covered a plain old torus mesh with the texture?

One point - Metal doesn’t care about .objs or any file format - files are only conveniences for storing large amounts of vertices. The GPU only cares about the vertices handed to it, as described in Chapter 3. The vertices go down the pipeline and get sorted into triangles. So generating meshes in an app is very common.

And in my previous post, I was trying to explain that the GPU doesn’t have a concept of faces. If you want to use normals in a vertex function then the normals need to be per vertex. Alternatively, if you only want the normals in a fragment function, you could assign a face number to each vertex and send an array of normals to the fragment function and extract the normal by the face number.
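
A rough sketch of that second approach on the Swift side (the names are illustrative; in the fragment function you would look up faceNormals[in.faceNumber]):

    import MetalKit

    // Each vertex carries the number of the face it belongs to.
    struct VertexIn {
        var position: SIMD3<Float>
        var faceNumber: UInt32          // index into the face-normal array
    }

    // One normal per face, bound to the fragment stage in its own buffer.
    func bindFaceNormals(encoder: MTLRenderCommandEncoder,
                         device: MTLDevice,
                         faceNormals: [SIMD3<Float>]) {
        let buffer = device.makeBuffer(
            bytes: faceNormals,
            length: MemoryLayout<SIMD3<Float>>.stride * faceNormals.count,
            options: [])
        encoder.setFragmentBuffer(buffer, offset: 0, index: 0)
    }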

Yes, the hexagons (except the centre point) are what it’s about.

The trouble with that is twofold:

  1. normals to vertices are not actually perpendicular to faces
  2. vertices are shared by multiple faces, in my case three every time

Your example in the book, at the start of Chapter 5, produces colours from normals. I don’t believe these are vertex normals, or each face adjacent to a vertex would get the same colour. Indeed, the .obj file has only one vn attribute and four vertices, so I believe that under the hood it is really operating on a face normal.

I think you’re on the track of something here. I’ve begun to understand that this is resolved at the fragment function level. Nevertheless, assign[ing] a face number to each vertex is problematic, as each vertex is shared by three faces.

A simpler problem is simply to display a cube on-screen with a different uniform colour on each face, but without using a Wavefront file. I don’t know how to do that. Do you? All the tutorials I’ve found online which display cubes show the multi-coloured rainbow effect arising from interpolation!!! It’s easy to do with Blender because we colour the face, export a file with this nice vn attribute, and everything works. It seems to me that understanding how to do this with Metal, but eschewing Model I/O, would be a very valuable addition to your next edition.

All the best.

I agree about the example being a useful addition to the next edition. I don’t know if we will have the opportunity and / or the time and space to be able to do it, but if so we’ll certainly consider it.

I’m hoping that you know how to use git. I attach a project with three commits showing three different renders of a cube. First one is all red, second and third are separate color per face. You can checkout each commit and run the project to see how each is done.

Initial explanation:

A vertex is not a position. It may have a position attribute, but it is not itself a position.

A cube has 8 corners. A corner is a point in space. Each corner has three touching sides, so a cube actually has 8 corners (points in space), and 24 vertices.

Using indexing, you need only list the 8 points in space, and have 24 indices.

However, if you want to give a vertex a set of attributes, such as position, normal, color, then you’ll need to split out the vertices into 24 vertices.

Another consideration is that each vertex will have its own normal, but it might share a color with other vertices.

I’ve been talking about materials and submeshes to do this. Group the indices into groups of the same color. That’s a submesh.

The code:

Most of the action takes place in Primitive’s buildCube(). Renderer’s draw(in:) does the draw calls according to how the data is set up in Primitive. And the shader functions also match.

Commit #1 is a cube with 8 vertices and 24 indices. It does an indexed draw and the cube is all red. (The back indices are actually wrong, but that didn’t show up until #2 :stuck_out_tongue_winking_eye: ) Vertex just has a position.

Commit #2 is a cube with 24 vertices and no indices. It doesn’t do an indexed draw and has no indexBuffer, just a Vertex buffer. The vertices are created in buildCube(), which goes through the indices, separates them out, and assigns a color for each face. Vertex has a position and a color. The vertex shader passes the color through to the fragment shader, which uses the stage_in attribute to receive it.
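
As a sketch of that expansion step (my own type and function names — the project’s buildCube() is the real reference — assuming four indices per face as in Commit #1):

    struct Vertex {
        var position: SIMD3<Float>
        var color: SIMD4<Float>
    }

    // Expand indexed corners into standalone vertices, one color per face.
    func expandVertices(corners: [SIMD3<Float>],
                        indices: [UInt16],
                        faceColors: [SIMD4<Float>]) -> [Vertex] {
        indices.enumerated().map { i, index in
            Vertex(position: corners[Int(index)],
                   color: faceColors[i / 4])   // 4 indices per cube face
        }
    }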

Commit #3 is a cube with 8 vertices and 24 indices. The indices are grouped into 6 submeshes, and each submesh has its own color. This is most like the way the book does it. For example, the train has a submesh for the wheels, the chimney, the body, the chassis, etc. Instead of being attached to a vertex, the color goes directly to the fragment shader. This has the advantage that, as well as color, you can easily add a new attribute such as shiny to a submesh. The disadvantage is that there is a draw call for each submesh. If you look at the GPU debugger for this one, there are six draw calls (one set for solid and one set for wireframe). Obviously overkill for a cube, but of course it depends on your models.
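
And a sketch of the per-submesh draw loop from Commit #3 (again, illustrative type names rather than the project’s):

    import MetalKit

    struct Submesh {
        let indexBuffer: MTLBuffer
        let indexCount: Int
        var color: SIMD4<Float>
    }

    func draw(submeshes: [Submesh], encoder: MTLRenderCommandEncoder) {
        for submesh in submeshes {
            // The color bypasses the vertex stage and goes straight to
            // the fragment function.
            var color = submesh.color
            encoder.setFragmentBytes(&color,
                                     length: MemoryLayout<SIMD4<Float>>.stride,
                                     index: 0)
            // One draw call per submesh.
            encoder.drawIndexedPrimitives(type: .triangle,
                                          indexCount: submesh.indexCount,
                                          indexType: .uint16,
                                          indexBuffer: submesh.indexBuffer,
                                          indexBufferOffset: 0)
        }
    }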

MeshGeneration.zip (130.3 KB)

If you wanted more complex materials, such as color and shininess, in Commit #2 you could assign a face number to the vertex instead of adding the color directly. That face number could index into an array of materials which you send to the fragment shader as in Commit #3.


If you’re not so great at git, it can all be done in Xcode:


In the Source Control Navigator, select master and right click the commit you want to check out.

Caroline,
you’re a champion. I haven’t tried it yet but I most certainly will. I was beginning to understand it the way you presented commit 3 but your explanation and code will save me from going down dead-ends. Well done!


Caroline pointed me to this discussion in another thread. I was also interested in programmatically generating vertices (and normals, tangents, and bitangents) and the sample code she produced here has been a great help. Her book has made updating an app I wrote 12 years ago using ObjC and OpenGLES1.3 so much easier.


Ugh… No TRIANGLE_FAN in Metal… but progress!

image


In case anyone else needs to go from indices that represent points on a polygon (and used TRIANGLE_FAN in OpenGL), here is a little routine to convert the indices:

// Fan conversion: each polygon's first index is the fan centre;
// every remaining consecutive pair forms a triangle with it.
var indicesNew: [[UInt16]] = []
for index in indices {
    let firstIndex = index[0]
    for (num, value) in index.enumerated() where num > 0 && num < index.count - 1 {
        let value2 = index[num + 1]
        indicesNew.append([firstIndex, value, value2])
    }
}

It works but as I am new to Swift there are probably better ways to do it.

To maintain Caroline’s grouping structure, use this loop:

// Same conversion, but keep one index group per polygon so each
// can remain its own submesh.
for index in indices {
    var groupNew: [UInt16] = []
    let firstIndex = index[0]
    for (num, value) in index.enumerated() where num > 0 && num < index.count - 1 {
        let value2 = index[num + 1]
        groupNew.append(contentsOf: [firstIndex, value, value2])
    }
    indicesNew.append(groupNew)
}

With this I now have:

image

a randomly colored polyhedron!

And for tchelyzt, a Torus Slice:

image

Thanks Caroline,
I’m progressing now:

Screenshot 2020-03-16 at 17.03.00

Those look great :clap:!

Looks great tchelyzt! I think I’m finally figuring out what’s going on “behind the scenes” with ModelIO and figured I’d summarize here to (1) see if my understanding is correct and (2) maybe provide some insight.

When I save a polyhedron as an OBJ file it does as you say and gives collections of indices like:
f 1/1/1 2/2/1 3/3/1
f 4/4/2 5/5/2 6/6/2
f 7/7/3 8/8/3 9/9/3
f 2/65/22 1/66/22 23/67/22 22/68/22
f 3/61/21 54/62/21 53/63/21 1/64/21
f 30/181/51 23/182/51 1/183/51 53/184/51 13/185/51

Clearly the OBJ format considers these face normals, as the normal indices are not the same as the vertex indices. Thus, as shown above, vertex index 1 has four different normal indices (1, 21, 22, 51) depending on which face we’re considering.

As Metal cannot handle this, ModelIO simply adds more vertices and normals. Vertex 1 in the above case will be repeated four(?) times. Instead of 60 vertex coordinates in the OBJ file we get 348 in the Metal buffer. There are 62 normals in the OBJ file and 240 texture coordinates. It’s not clear how we get from 60, 62, and 240 to 348. The indices went from 60 in the OBJ to 116 in the Metal buffer.

ModelIO is not simply repeating a vertex for each face, as in this polyhedron all vertices are shared by four faces.

Coloring by normal shows that each face indeed has a constant normal.

image

I agree with your analysis but I can’t explain the numbers. I’d expect 4 (face) normals per vertex and possibly a (vertex) normal too.
I’ve just posted in Chapter 6: Textures (without ModelI/O) - #4 by tchelyzt to describe how this “normals” interpretation, intended to uniformly colour faces, falls down when I want to texture faces (without ModelIO).

As for your comment:

for me, the jury is still out. I kinda can’t believe that Metal cannot handle it, but I certainly don’t know how to ask Metal to do it.

Incidentally, how is your poly-sphere produced? In Blender?

My torus is designed to preserve hexagon similarity. It narrows the hexagons as they climb in the northern latitudes from the exquator towards the inquator and then mirrors that in the southern latitudes. Each successive hexagon bears the same proportionate down-scaling to its predecessor. I’m fooling around with the idea of an (impossible) toroidal planet on which a game could be played.

When I said “Metal cannot handle this” I didn’t mean that there was no way to get Metal to use face normals, as it clearly does when a model with face normals is imported with ModelIO. I meant that it doesn’t handle face normals in the way they are described in an OBJ file. I think Caroline is right in saying that Metal only knows about vertex normals, and I can see how in a typical model this would be all you care about, as you don’t want a faceted surface. Thus, to have more than one normal at a vertex, you must have overlapping vertices.

I got all my polyhedra about 12 years ago from Mathematica. At the time I think it had information for about 50 of them. In the latest version there is information for 201 different polyhedra with 126 having more than 15 vertices.

Interesting idea for a game. I’m trying to port a game I wrote many years ago using Obj-C and OpenGLES1.3.

In our private communication I’d mentioned a Boy Surface as an example of a one-sided surface (like a Klein bottle). While a Klein bottle can’t readily be parametrized, it seems a Boy Surface can.

If you let r range from 0 to 1, and theta from -pi to pi, you can get the coordinates for points on the surface with:

z = r E^(I theta);
a = z^6 + Sqrt[5] z^3 - 1;
m = {Im[z (z^4 - 1)/a], Re[z (z^4 + 1)/a], Im[(2/3) (z^6 + 1)/a] + 0.5};
m/(m.m)

Sorry for the Mathematica syntax but I think it is fairly easy to understand. This gives something that looks like:
image
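
For anyone who would rather stay in Swift, here is a rough port of that parametrization (my own translation of the Mathematica above — treat it as a sketch and verify against the original):

    import Foundation
    import simd

    // r in 0...1, theta in -pi...pi.
    func boySurfacePoint(r: Double, theta: Double) -> SIMD3<Double> {
        // Complex numbers as (re, im) pairs.
        func mul(_ a: SIMD2<Double>, _ b: SIMD2<Double>) -> SIMD2<Double> {
            SIMD2(a.x * b.x - a.y * b.y, a.x * b.y + a.y * b.x)
        }
        func div(_ a: SIMD2<Double>, _ b: SIMD2<Double>) -> SIMD2<Double> {
            let d = b.x * b.x + b.y * b.y
            return SIMD2((a.x * b.x + a.y * b.y) / d, (a.y * b.x - a.x * b.y) / d)
        }
        func pow(_ a: SIMD2<Double>, _ n: Int) -> SIMD2<Double> {
            var result = SIMD2(1.0, 0.0)
            for _ in 0..<n { result = mul(result, a) }
            return result
        }

        let one = SIMD2(1.0, 0.0)
        let z = SIMD2(r * cos(theta), r * sin(theta))    // z = r e^(i theta)
        let z3 = pow(z, 3), z4 = pow(z, 4), z6 = pow(z, 6)
        let a = z6 + 5.0.squareRoot() * z3 - one         // z^6 + sqrt(5) z^3 - 1

        let m = SIMD3(div(mul(z, z4 - one), a).y,        // Im[z (z^4 - 1)/a]
                      div(mul(z, z4 + one), a).x,        // Re[z (z^4 + 1)/a]
                      div((2.0 / 3.0) * (z6 + one), a).y + 0.5)
        return m / simd_dot(m, m)                        // m/(m.m)
    }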

I’ve 3D printed a couple of these guys (one with the top removed so you can see “inside”).

image


BTW, have you read the Ringworld science fiction novels?

No. Afraid not.
Think I’ve heard of them.

Just to backtrack a little so that I can catch up :smiley:. I don’t feel that quote is quite right.

Simplistically, the GPU takes a stream of vertices into a vertex function. That vertex function outputs a position. That’s the only really important thing about the vertex function.

The rasteriser takes those positions and fills out triangles in 2d. If you think of that 2d as a grid, then conceptually the triangles cover squares (fragments) in that grid.

The fragment function takes in the fragments and assigns a color to that fragment.

So the vertex function is for position, and the fragment function is for color. Anything else is extra.

You might use normal values to help calculate the color, for example, if a face points a certain way, then darken it.

That’s all on the GPU side.

Metal is an API that allows you to decide what the GPU will receive and change certain state properties on the GPU. If you use Metal vertex descriptors, then yes, there are certain properties that Metal ‘knows about’, such as Normals and Colors. But you can send the GPU any property that you care about.

I am slowly working on a parametric shader function where you calculate everything inside the function as @ericmock was describing up there. I wrote a tessellated version a while ago, but it is not quite right yet.


I agree that the quote is not quite right, as I pretty much contradicted myself later in the reply. Lol. Really, Metal knows about what you tell it. I’ve been looking through Apple’s sample code in the DynamicTerrainWithArgumentBuffers project. While I’m still very much overwhelmed by it, one thing is certain: they did A LOT of things on the GPU.