Distorted texture when using vertices with z-axis

Hello everyone :slight_smile:

I'm currently working on building my own Space Invaders-type game, in 2D using Metal.

I have 3 backgrounds for the game, and I've split these three out into their own "renderers".

For example:

MainRenderer creates the commandEncoder, etc., and passes it into BackgroundRenderer, then on to MidgroundRenderer, and so on.

The Background Quad that fills the full screen was initially made like this:

vertices = [
Vertex(position: [-1, 1], textureCoordinate: [0, 1]),
Vertex(position: [1, 1], textureCoordinate: [1, 1]),
Vertex(position: [1, -1], textureCoordinate: [1, 0]),
Vertex(position: [-1, -1], textureCoordinate: [0, 0])
]

This all worked fine and I could draw separate layers so to speak.

Then I wanted to introduce depth testing between the layers, so I gave each renderer its own depthStencilState with a .less compare function. All good.
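
Roughly, each renderer builds that state like this (a simplified sketch, not the exact code from the repo):

let depthStencilDescriptor = MTLDepthStencilDescriptor()
depthStencilDescriptor.depthCompareFunction = .less
depthStencilDescriptor.isDepthWriteEnabled = true
// `device` is the MTLDevice the renderers already share.
depthStencilState = device.makeDepthStencilState(descriptor: depthStencilDescriptor)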

However, I wanted to use the z-axis of each layer to determine the depth value.

But when I add a z component to my background quad like this:

vertices = [
Vertex(position: [-1, 1, 1], textureCoordinate: [0, 1]),
Vertex(position: [1, 1, 1], textureCoordinate: [1, 1]),
Vertex(position: [1, -1, 1], textureCoordinate: [1, 0]),
Vertex(position: [-1, -1, 1], textureCoordinate: [0, 0])
]

my texture is all distorted. I can't figure out what the issue is, even after several hours with ChatGPT, haha.

I've tried rendering just the first layer with an orthographic projection matrix too, but it doesn't work.

Does anyone have any ideas or suggestions to try?

Thank you :slight_smile:

I’m probably not fully understanding this.

If you are just rendering a quad, you may not need a projection matrix at all. The Metal coordinate system goes from -1 to 1 in the x and y axes, and 0 to 1 in the z axis.

You could layer each quad with a z axis value of 0 to 1 and then just pass the vertex positions to the rasterizer without calculating any projection.
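
For example, something along these lines, reusing your Vertex type (just a sketch; the z values are arbitrary, as long as nearer layers get smaller values when the compare function is .less):

let backgroundZ: Float = 0.9   // furthest back
let midgroundZ: Float = 0.5
let foregroundZ: Float = 0.1   // nearest the camera

backgroundVertices = [
  Vertex(position: [-1, 1, backgroundZ], textureCoordinate: [0, 1]),
  Vertex(position: [1, 1, backgroundZ], textureCoordinate: [1, 1]),
  Vertex(position: [1, -1, backgroundZ], textureCoordinate: [1, 0]),
  Vertex(position: [-1, -1, backgroundZ], textureCoordinate: [0, 0])
]
// The midground and foreground quads follow the same pattern with midgroundZ and foregroundZ.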

And welcome to the forum :wave: !

Hi Caroline,

Thanks for your answer :slight_smile:

You're right, I don't need a projection matrix if I'm just rendering a quad. However, and I could be wrong, ChatGPT "said" that since I want some depth in my game, even though it's 2D, I could use the z value of each layer for the depth comparison.

So, Background would have z = 1, Midground z = 0.5, etc.

However, when I add the z value to my Quad, my texture goes bonkers, as you can see in the screenshot below.

Here's the repo in case you want to check it out. There are two branches: main (without the z value) and DepthTesting, which is not working: GitHub - Sockerjam/spaceshipgame

Here’s the texture with z value added.

This addresses the quad vertex position rendering error. I haven't looked at any other errors so far.

My suggestion is to always make sure that the GPU buffers contain what you think they do, using the Metal frame capture. All the following screen captures come from there.

This is the main branch buffer:

This is the depth branch buffer:

As you can see, the position values look wrong.

This is the buffer contents on the GPU, formatted into floats:

Each float takes up 4 bytes. You would expect from your structure that the floats making up Vertex 0 would look like: [-1.0, 1.0, 1.0, 0.0, 1.0]. However, there’s an extra 0.0 at row 3, and two more at rows 6 and 7.
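
If you want to cross-check the same bytes from the CPU side, a quick sketch like this prints the buffer as floats (I'm assuming the quad's buffer is called vertexBuffer):

// vertexBuffer is the MTLBuffer the quad's vertices are copied into (name assumed).
let floatCount = vertexBuffer.length / MemoryLayout<Float>.stride
let floats = vertexBuffer.contents().bindMemory(to: Float.self, capacity: floatCount)
for index in 0..<floatCount {
  print(index, floats[index])
}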

At the end of Quad.init, you can add a print statement:

print(
  MemoryLayout<Vertex>.stride,
  MemoryLayout<SIMD3<Float>>.stride,
  MemoryLayout<SIMD2<Float>>.stride,
  MemoryLayout<SIMD3<Float>>.stride +
        MemoryLayout<SIMD2<Float>>.stride)

The result of that is: 32 16 8 24.
You can see that the stride of Vertex is 32, and the stride of your vertex descriptor is 24. So the GPU is expecting 24 bytes for each vertex, whereas you’re supplying 32 bytes.
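
I haven't reproduced your vertex descriptor here, but one that ends up with a 24-byte stride generally looks something like this (the buffer index is an assumption):

let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float3                             // position
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[0].bufferIndex = 0
vertexDescriptor.attributes[1].format = .float2                             // textureCoordinate
vertexDescriptor.attributes[1].offset = MemoryLayout<SIMD3<Float>>.stride   // 16
vertexDescriptor.attributes[1].bufferIndex = 0
vertexDescriptor.layouts[0].stride =
  MemoryLayout<SIMD3<Float>>.stride + MemoryLayout<SIMD2<Float>>.stride     // 24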

There’s padding going on.

Copy your Vertex struct to OldVertex.

Change your Vertex struct to:

struct Position {
  var x: Float
  var y: Float
  var z: Float
}

struct Vertex {
  let position: Position
  let textureCoordinate: SIMD2<Float>

  init(position: [Float], textureCoordinate: SIMD2<Float>) {
    self.position = Position(
      x: position[0],
      y: position[1],
      z: position[2])
    self.textureCoordinate = textureCoordinate
  }
}

At the end of Quad.init, replace the print statement with:

print(MemoryLayout<Position>.stride, MemoryLayout<Position>.size, MemoryLayout<Position>.alignment)
print(MemoryLayout<OldVertex>.stride, MemoryLayout<OldVertex>.size, MemoryLayout<OldVertex>.alignment)
print(MemoryLayout<Vertex>.stride, MemoryLayout<Vertex>.size, MemoryLayout<Vertex>.alignment)

The result of this is:

12 12 4
32 24 16
24 24 8

The stride of Vertex is now 24 bytes, which matches the size of OldVertex.

The new Vertex doesn’t have any padding.

Notice the alignment property. Vertex is 8, whereas OldVertex is 16.

Check out this article: Size, Stride, Alignment by Greg Heo.

Towards the end of the article:

The alignment of a struct type is the maximum alignment out of all its properties.

The stride then becomes the size rounded up to the next multiple of the alignment.

The stride of a SIMD3<Float> is 16, so it has 4 bytes of padding. The stride of Position is 12, and it doesn't have any padding.
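
To make that rounding concrete, here's a small sketch using the numbers from the prints above:

// stride = size rounded up to the next multiple of alignment
func roundedStride(size: Int, alignment: Int) -> Int {
  return (size + alignment - 1) / alignment * alignment
}

print(roundedStride(size: 24, alignment: 16))   // 32 (OldVertex)
print(roundedStride(size: 24, alignment: 8))    // 24 (new Vertex)
print(roundedStride(size: 12, alignment: 4))    // 12 (Position)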

If you run the app with these changes, the Vertex MTLBuffer now contains these float values:

Notice that you still have an extra 0.0 at row 3, but the extra padding at rows 6 and 7 has gone.

The vertices render in the correct position:

(screenshot from Geometry in the Metal frame capture)

Wow, thank you so much for your help and for the big effort you put in, I really appreciate it, Caroline :slight_smile:

I’m just digesting it now but it’s really fascinating. And I haven’t looked much at the Metal debugger but it seems really great.

I’ll let you know how I get on!


I implemented the fixes you mentioned and managed to use the debugger myself to sort out another issue that came up, yay, haha.

However, I think my intuition about projection matrices, or about how the depth values work, is not that great.

As I implemented this quad with a z value of 1, it doesn't show if I don't have a projection and view matrix going. I'm not sure why, as Metal's NDC goes from 0 to 1 on the z axis? Shouldn't it show?

When I tried implementing an orthographic camera with a view matrix, I only get a small white square (I changed it to white to make debugging easier) in the centre of the screen. (Screenshot below.)

Even if I play around with the translation in my view matrix, the square doesn't get bigger or smaller.

You can have a look at the QuadFix branch if you have a chance :slight_smile:

Thank you so much again!

If you remove your projection and view matrices, your z depth will be 1.0. Change depthStencilDescriptor.depthCompareFunction to .lessEqual, and the quad will show, taking up the whole screen. If you really need .less, you can change the z value to 0.9999 and, as the z value is less than 1, the quad will show.

If you really need the orthographic projection matrix, you can make the quad bigger or smaller by changing viewSize in StaticCamera.

I see, thank you!

So if I understand it correctly, if you use a projectionMatrix that has a depth range of 10, what does that mean for my depthStencilState? Or will it always compare values from 0 to 1?

The matrix has no relevance to the depthStencilState.

  1. Matrices calculate the position in the vertex function.
  2. The rasterizer then refits the position so that it fits within NDC (depth of 0 to 1).
  3. You don’t want to render objects that are behind other objects, so before rendering a fragment, the depth of the current fragment is tested against the depth buffer.
  4. If the current fragment passes the depth test, the fragment renders.

The depth test type is the important thing here. Initially, the depth buffer will have 1.0 in it. If your depth test is .less, and your z value is 1.0, then your z value is not less than the depth buffer and the fragment won’t render. However, if you change the test to .lessEqual, then the fragment is equal to the depth buffer and will render.
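
That initial 1.0 comes from the depth attachment's clear value. If you're using MTKView, it's set up for you (clearDepth defaults to 1.0), but for reference, a sketch assuming the view is called metalView:

metalView.depthStencilPixelFormat = .depth32Float   // gives the view a depth buffer
metalView.clearDepth = 1.0                          // the depth buffer starts each frame at 1.0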

I see, thank you.

I was playing around with adding a 3D model to my game, and for that I used uniform.projectionMatrix * uniform.viewMatrix * uniform.modelMatrix * in.position to calculate the final position.

The near and far values for my projectionMatrix range from 0.1 to 10, and what I noticed is that, depending on where I set the z value for my model, it would not render.

I was having a look in the debugger under the vertex attributes, and if I positioned my model at 10 in my modelMatrix, it would yield a position of 0.998 for the z axis. At position 4 it yields a position output of 0.388. So of course it would render at 4, but not at 10.

So doesn't that mean that the matrices do have an effect on the depthStencilState? Or doesn't the depthStencilState setting affect the rasterizer?

Thanks Caroline :slight_smile:

I probably put that badly. The matrices are separate from the depth test. As you say, your matrices calculate the vertex position, and it's that position, after conversion to NDC (Normalised Device Coordinates), that the depth test later tests. You don't have to have matrices at all though.

In other words, yes, the matrices do affect the vertex position, but the depth stencil test happens later in the pipeline, only works on the end result of the position, and doesn't care whether or not there are matrices.
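
If it helps to see that concretely, you can reproduce the z value you saw in the debugger by applying the same matrices on the CPU and doing the perspective divide yourself. A sketch (the matrices here are identity placeholders; swap in your real uniform values):

import simd

// Placeholder matrices: substitute the values from your Uniforms.
let projectionMatrix = matrix_identity_float4x4
let viewMatrix = matrix_identity_float4x4
let modelMatrix = matrix_identity_float4x4

let modelPosition = SIMD4<Float>(0, 0, 0.5, 1)   // placeholder vertex position
// Same order as the vertex function: this gives the clip-space position.
let clip = projectionMatrix * viewMatrix * modelMatrix * modelPosition
// The rasterizer divides by w; the resulting z is what the depth test sees.
let ndcZ = clip.z / clip.w
print(ndcZ)   // must land in 0...1 and pass the compare function for the fragment to render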