Face normal vs vertex normal

Ugh… No TRIANGLE_FAN in Metal… but progress!



In case anyone else needs to go from indices that represent points on a polygon (and used TRIANGLE_FAN in OpenGL), here is a little routine to convert the indices:

var indicesNew: [[UInt16]] = []
for index in indices {
    let firstIndex = index[0]
    for (num, value) in index.enumerated() {
        if num > 0 && num < index.count - 1 {
            let value2 = index[num + 1]
            indicesNew.append([firstIndex, value, value2])
        }
    }
}

It works but as I am new to Swift there are probably better ways to do it.
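For anyone wanting to sanity-check it, running the same loop over a single hexagonal fan (indices 0 through 5) gives the expected four triangles, all sharing the fan's first vertex:

```swift
// Sanity check: convert one hexagonal fan (indices 0...5) into triangles.
let indices: [[UInt16]] = [[0, 1, 2, 3, 4, 5]]

var indicesNew: [[UInt16]] = []
for index in indices {
    let firstIndex = index[0]
    for (num, value) in index.enumerated() {
        if num > 0 && num < index.count - 1 {
            indicesNew.append([firstIndex, value, index[num + 1]])
        }
    }
}
// A six-vertex fan yields n - 2 = 4 triangles:
// [[0, 1, 2], [0, 2, 3], [0, 3, 4], [0, 4, 5]]
print(indicesNew)
```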

To maintain Caroline’s grouping structure, use this loop:

var indicesNew: [[UInt16]] = []
for index in indices {
    var groupNew: [UInt16] = []
    let firstIndex = index[0]
    for (num, value) in index.enumerated() {
        if num > 0 && num < index.count - 1 {
            let value2 = index[num + 1]
            groupNew.append(contentsOf: [firstIndex, value, value2])
        }
    }
    indicesNew.append(groupNew)
}

With this I now have:


a randomly colored polyhedron!

And for tchelyzt, a Torus Slice:


Thanks Caroline,
I’m progressing now:


Those look great :clap:!

Looks great tchelyzt! I think I’m finally figuring out what’s going on “behind the scenes” with ModelIO and figured I’d summarize here to (1) see if my understanding is correct and (2) maybe provide some insight.

When I save a polyhedron as an OBJ file it does as you say and gives collections of indices like:
f 1/1/1 2/2/1 3/3/1
f 4/4/2 5/5/2 6/6/2
f 7/7/3 8/8/3 9/9/3
f 2/65/22 1/66/22 23/67/22 22/68/22
f 3/61/21 54/62/21 53/63/21 1/64/21
f 30/181/51 23/182/51 1/183/51 53/184/51 13/185/51

Clearly the OBJ format supports face normals, as the normal indices are not the same as the vertex indices. Thus, as shown above, vertex index 1 has four different normal indices (1, 21, 22, 51) depending on which face we’re considering.
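To make those triplets concrete: each face token has the form v/vt/vn (position / texture coordinate / normal index). A throwaway Swift helper (the function name is just for illustration) makes the split explicit:

```swift
// Hypothetical helper: split an OBJ face token "v/vt/vn" into its three indices.
func parseFaceToken(_ token: String) -> (vertex: Int, texcoord: Int, normal: Int) {
    let parts = token.split(separator: "/").map { Int($0)! }
    return (parts[0], parts[1], parts[2])
}

// Vertex 1 appears with two different normal indices (1 and 22),
// so the same position needs a different normal depending on the face:
let a = parseFaceToken("1/1/1")    // from "f 1/1/1 2/2/1 3/3/1"
let b = parseFaceToken("1/66/22")  // from "f 2/65/22 1/66/22 23/67/22 22/68/22"
```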

As Metal cannot handle this, ModelIO simply adds more vertices and normals. Vertex 1 in the above case will be repeated four(?) times. Instead of 60 vertex coordinates in the OBJ file we get 348 in the Metal buffer. There are 62 normals in the OBJ file and 240 texture coordinates. It’s not clear how we get from 60, 62, and 240 to 348. The indices went from 60 in the OBJ to 116 in the Metal buffer.

ModelIO is not simply repeating a vertex for each face as in this polyhedron all vertices are shared by four faces.

Coloring by normal shows that each face indeed has a constant normal.


I agree with your analysis but I can’t explain the numbers. I’d expect 4 (face) normals per vertex and possibly a (vertex) normal too.
I’ve just posted in Chapter 6: Textures (without ModelI/O) - #4 by tchelyzt to describe how this “normals” interpretation intended to uniformly colour faces falls down when I want to texture faces (without ModelIO).

As for your comment:

for me, the jury is still out. I kinda can’t believe that Metal cannot handle it, but I certainly don’t know how to ask Metal to do it.

Incidentally, how is your poly-sphere produced? In Blender?

My torus is designed to preserve hexagon similarity. It narrows the hexagons as they climb in the northern latitudes from the exquator towards the inquator and then mirrors that in the southern latitudes. Each successive hexagon bears the same proportionate down-scaling to its predecessor. I’m fooling around with the idea of an (impossible) toroidal planet on which a game could be played.

When I said “Metal cannot handle this” I didn’t mean that there was no way to get Metal to use face normals as it clearly does when a model with face normals is imported with ModelIO. I meant that it doesn’t handle face normals in the way they are described in an OBJ file. I think Caroline is right in saying that Metal only knows about vertex normals and I can see how in a typical model this would be all you care about as you don’t want a faceted surface. Thus to have more than one normal at a vertex you must have overlapping vertices.
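That "overlapping vertices" idea can be sketched in a few lines of Swift. This is not Model I/O's actual algorithm, and the names are made up, but conceptually the importer emits one output vertex per unique (position index, normal index) pair, so a position shared by faces with different normals gets duplicated:

```swift
// Conceptual sketch (not Model I/O's real implementation): de-index OBJ-style
// faces into one output vertex per unique (position, normal) pair.
struct Corner: Hashable {
    let position: Int
    let normal: Int
}

func deindex(_ faces: [[Corner]]) -> (vertexCount: Int, indices: [Int]) {
    var remap: [Corner: Int] = [:]   // (position, normal) pair -> new vertex slot
    var indices: [Int] = []
    for face in faces {
        for corner in face {
            if remap[corner] == nil { remap[corner] = remap.count }
            indices.append(remap[corner]!)
        }
    }
    return (remap.count, indices)
}

// Two positions shared by two faces with different normals become four vertices:
let faces = [
    [Corner(position: 1, normal: 1), Corner(position: 2, normal: 1)],
    [Corner(position: 1, normal: 22), Corner(position: 2, normal: 22)],
]
let result = deindex(faces)
// result.vertexCount == 4, even though only 2 positions exist.
```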

I got all my polyhedra about 12 years ago from Mathematica. At the time I think it had information for about 50 of them. In the latest version there is information for 201 different polyhedra with 126 having more than 15 vertices.

Interesting idea for a game. I’m trying to port a game I wrote many years ago using Obj-C and OpenGLES1.3.

In our private communication I’d mentioned a Boy Surface as an example of a one-sided surface (like a Klein bottle). While a Klein bottle can’t readily be parametrized, it seems a Boy Surface can.

If you let r range from 0 to 1, and theta from -pi to pi, you can get the coordinates for points on the surface with:

z = r E^(I theta);
a = z^6 + Sqrt[5] z^3 - 1;
m = {Im[z (z^4 - 1)/a], Re[z (z^4 + 1)/a], Im[(2/3) (z^6 + 1)/a] + 0.5};

Sorry for the Mathematica syntax but I think it is fairly easy to understand. This gives something that looks like:

I’ve 3D printed a couple of these guys (one with the top removed so you can see “inside”).
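If it helps, the snippet translates almost mechanically to Swift. The Complex struct below is a minimal stand-in written just for this sketch, not a library type:

```swift
import Foundation

// Minimal complex arithmetic, just enough to translate the Mathematica snippet.
struct Complex {
    var re, im: Double
    static func + (a: Complex, b: Complex) -> Complex { Complex(re: a.re + b.re, im: a.im + b.im) }
    static func - (a: Complex, b: Complex) -> Complex { Complex(re: a.re - b.re, im: a.im - b.im) }
    static func * (a: Complex, b: Complex) -> Complex {
        Complex(re: a.re * b.re - a.im * b.im, im: a.re * b.im + a.im * b.re)
    }
    static func / (a: Complex, b: Complex) -> Complex {
        let d = b.re * b.re + b.im * b.im
        return Complex(re: (a.re * b.re + a.im * b.im) / d, im: (a.im * b.re - a.re * b.im) / d)
    }
    static func * (s: Double, z: Complex) -> Complex { Complex(re: s * z.re, im: s * z.im) }
    func pow(_ n: Int) -> Complex {
        var r = Complex(re: 1, im: 0)
        for _ in 0..<n { r = r * self }
        return r
    }
}

// Boy surface point for 0 <= r <= 1, -pi <= theta <= pi,
// following z = r e^(i theta), a = z^6 + sqrt(5) z^3 - 1.
func boySurface(r: Double, theta: Double) -> (x: Double, y: Double, z: Double) {
    let z = Complex(re: r * cos(theta), im: r * sin(theta))
    let one = Complex(re: 1, im: 0)
    let a = z.pow(6) + sqrt(5.0) * z.pow(3) - one
    let x = (z * (z.pow(4) - one) / a).im
    let y = (z * (z.pow(4) + one) / a).re
    let h = ((2.0 / 3.0) * ((z.pow(6) + one) / a)).im + 0.5
    return (x, y, h)
}
```

At r = 0 the point is (0, 0, 0.5), since z = 0 makes a = -1 and kills the first two components.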



BTW, have you read the Ringworld science fiction novels?

No. Afraid not.
Think I’ve heard of them.

Just to backtrack a little so that I can catch up :smiley:. I don’t feel that quote is quite right.

Simplistically, the GPU takes in a stream of vertices into a vertex function. That vertex function outputs a position. That’s the only really important thing about the vertex function.

The rasteriser takes those positions and fills out triangles in 2d. If you think of that 2d as a grid, then conceptually the triangles cover squares (fragments) in that grid.

The fragment function takes in the fragments and assigns a color to that fragment.

So vertex function is for position, fragment function is for color. Anything else is extra.

You might use normal values to help calculate the color, for example, if a face points a certain way, then darken it.
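That darkening idea is just Lambert's cosine term. Sketched in Swift rather than in a fragment function (the names are mine, but simd mirrors the shading-language vector types):

```swift
import simd

// Scale a base color by how directly the normal points at the light
// (Lambert's cosine term) - the usual "darken faces pointing away" trick.
func shade(baseColor: SIMD3<Float>, normal: SIMD3<Float>, lightDirection: SIMD3<Float>) -> SIMD3<Float> {
    let n = normalize(normal)
    let l = normalize(-lightDirection)       // direction *towards* the light
    let diffuse = max(dot(n, l), 0)          // 1 facing the light, 0 facing away
    return baseColor * diffuse
}

// A face pointing straight at the light keeps its full color;
// one pointing away goes black.
let lit = shade(baseColor: [1, 0, 0], normal: [0, 0, 1], lightDirection: [0, 0, -1])
let unlit = shade(baseColor: [1, 0, 0], normal: [0, 0, -1], lightDirection: [0, 0, -1])
```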

That’s all on the GPU side.

Metal is an API that allows you to decide what the GPU will receive and change certain state properties on the GPU. If you use Metal vertex descriptors, then yes, there are certain properties that Metal ‘knows about’, such as Normals and Colors. But you can send the GPU any property that you care about.

I am slowly working on a parametric shader function where you calculate everything inside the function as @ericmock was describing up there. I wrote a tessellated version a while ago, but it is not quite right yet.


I agree that the quote is not quite right, as I pretty much contradicted myself later in the reply. Lol. Really, Metal knows about what you tell it about. I’ve been looking through Apple’s sample code in the DynamicTerrainWithArgumentBuffers project. While I’m still very much overwhelmed by it, one thing is certain, they did A LOT of things on the GPU.

That is their best sample in my opinion. It takes a long time to tear it apart and I haven’t completely succeeded yet. But it answers a lot of questions. And raises more :grimacing:.

Well I hope you can forgive the poor artwork - I’ll improve it if I’m able. But here is a textured version of my “planet”:


The code is inelegant but I’ve now realised how to do it much more nicely.

Well done :clap:

I’m afraid I’ve been a bit preoccupied :grimacing:. Have you resolved all your questions?

I’ll keep working on my parametric example, but hopefully you’re all beyond that now.

That looks as if it might be a prime candidate for argument buffers. The sample Eric mentioned - DynamicTerrainWithArgumentBuffers - and the accompanying WWDC video do much the same thing.

You could assign a custom attribute to each hexagon and, depending on the attribute, use that particular texture on the hexagon.

I’m not sure what argument buffers are, but they sound like they might describe my next solution. I’ve designed it but haven’t implemented it yet. In fact, right from the start I assumed that it was the right approach but was so feeble at shaders that I couldn’t attempt it. Now I think I know how. I can’t find a good tutorial or book on shaders. They all want me to copy what they do but nobody goes under the covers and explains.

Will revert in a few days with a full explanation which might help people who struggled like me.

Most people do most of their work in fragment shaders. That’s where you set the final color of each fragment, so that’s where you check your normals (which you’ve either calculated in the vertex shader or precalculated) and light the fragment accordingly.

This is an excellent site for fragment shaders that has many examples: https://thebookofshaders.com

There are two main sites where people share their shaders:

Vertex: https://www.vertexshaderart.com
Fragment: https://www.shadertoy.com

Both of these have tutorials.
These use glsl, but conceptually they are much the same.


I’ll take a look. I’ve implemented my “better” version and all my hexes have become near white!!

I’m sure the shader is responsible :slight_smile:

Very nice site!!

I’ve done something that I really don’t understand. I have a working vertex shader that takes a vertex in [[buffer(0)]], some uniforms and [[vertex_id]] and it does exactly what I wanted it to do.

Now I am changing my code to provide a simpler vertex (no uvs) and a table of uvs which can be looked up based on calculations involving some new uniforms. In order to avoid breaking my working model, I left all the parts in place and just added the new parameters (and extended the uniforms) to the function signature as below:

vertex VertexOut
vertex_main(constant VertexIn *vertices [[buffer(0)]],
            constant VertexUniforms &uniforms [[buffer(1)]],
            constant float2 *localUVs [[buffer(2)]],          // NEW STUFF (7 values)
            constant VertexIn *newVertices [[buffer(3)]],     // NEW STUFF
            uint id [[vertex_id]]) {

Compiling and running with no new code, everything works as before. The new parameters have no effect, as planned.


I realised that I have no idea where [[vertex_id]] comes from. Actually, [[buffer(0)]] and [[buffer(3)]] are different sizes, the former being a sub-mesh and the latter the entire mesh. Which element of these buffers would be selected by [[vertex_id]] if I included code? Clearly there is no one-to-one relation between their elements!

Naturally I don’t actually intend to use both buffers - this is just a careful build. But my question is: What is [[vertex_id]] and how does it know which buffer I mean?

You mentioned argument buffers in a recent post. My [[buffer(2)]] contains 7 uv values (representing each vertex on a hexagon) and I will use localUVs[uniforms.index] to select the appropriate one, allowing me to make six draw calls in my loop and build the hexagons - each triangle needs 3 of those 7 uvs. (In my new Vertex, each vertex knows which of the three it needs based on which of the 6 we’re drawing.)
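Concretely, the selection I have in mind looks something like this. The centre-plus-corners layout is my own design and purely illustrative: localUVs[0] is the hexagon centre, localUVs[1...6] the corners in order, and triangle i of the six draw calls takes the centre plus two adjacent corners:

```swift
// Illustrative layout (my own, not anything Metal prescribes):
// uv 0 = hexagon centre, uvs 1...6 = corners in winding order.
// Triangle i of the fan takes the centre and two adjacent corners,
// wrapping back to corner 1 on the last triangle.
func uvIndices(forTriangle i: Int) -> (Int, Int, Int) {
    (0, 1 + i, 1 + (i + 1) % 6)
}

// uvIndices(forTriangle: 0) == (0, 1, 2)
// uvIndices(forTriangle: 5) == (0, 6, 1)
```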

uniforms.index is a uint16_t. I hope that’s right.

Sorry to keep bending your ear like this, Caroline. I hope seeing my questions is valuable for you, at least.

The Metal Shading Language Spec document is here: https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf

vertex_id is defined as:
The per-vertex identifier, which includes the base vertex value if one is specified

Which isn’t a great explanation :slight_smile: .

When you do a draw call, you specify the number of vertices to be drawn. This will set up the necessary shader cores on the GPU to execute the vertex function in parallel. For example:

renderEncoder.drawPrimitives(type: .point, 
                             vertexStart: 0,
                             vertexCount: 1000)

Here I’m drawing 1000 vertices, starting at vertex 0. Each shader being executed in parallel will receive a value from 0 to 999 in vertex_id.

So in your case the index will be on the entire mesh rather than the submesh. But vertex_id doesn’t itself know about any buffers - it’s just how you apply it in your code.
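One way to picture it is a CPU-side loop, a mental model only, not how the hardware is implemented: vertex_id is nothing more than the loop counter, and whichever array your shader happens to subscript with it is the one it "indexes".

```swift
// Mental model of a draw call: the GPU runs the vertex function once per
// vertex, in parallel, handing each invocation its own id. vertex_id has no
// built-in tie to any buffer - only your indexing code creates that link.
func simulateDraw(vertexStart: Int, vertexCount: Int,
                  vertexFunction: (Int) -> Void) {
    for id in vertexStart..<(vertexStart + vertexCount) {
        vertexFunction(id)   // on the GPU these run concurrently
    }
}

var seenIds: [Int] = []
simulateDraw(vertexStart: 0, vertexCount: 4) { id in seenIds.append(id) }
// seenIds == [0, 1, 2, 3]
```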