How to make a texture?

Chapter 8 showed me how to load a texture, but I wanted to build a new texture from scratch. To keep it really easy (I thought), I’d build a tiny 5x5 2D texture with pixel format .a8Unorm, which I think means that each pixel has a single 8-bit channel (greyscale).

My data is a [UInt8] with a single white spot in the centre of a black background, which I load into an MTLBuffer.

let device = MTLCreateSystemDefaultDevice()!
let commandQueue = device.makeCommandQueue()!
let commandBuffer = commandQueue.makeCommandBuffer()!

let matrixIn: [UInt8] = [
   0,0,0,0,0,
   0,0,0,0,0,
   0,0,1,0,0,
   0,0,0,0,0,
   0,0,0,0,0
]
var bufferIn = device.makeBuffer(bytes: matrixIn,
                                 length: 25,
                                 options: .storageModeShared)!

When I examine this buffer, it is as expected:

var matrixOut = [UInt8]()

var start = bufferIn.contents()
  .bindMemory(to: UInt8.self, capacity: 25)

var bufferPointer = UnsafeBufferPointer(start: start, count: 25)
bufferPointer.forEach { matrixOut.append($0) }

for r in 0..<5 {
  print("|", terminator: "  ")
  for c in 0..<5 {
    print(matrixOut[r * 5 + c], terminator: "  ")
  }
  print("|")
}

/*
 |  0  0  0  0  0  |
 |  0  0  0  0  0  |
 |  0  0  1  0  0  |
 |  0  0  0  0  0  |
 |  0  0  0  0  0  |
 */

Now I try to make a texture with it:

let textureDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .a8Unorm,
                                                           width: 5,
                                                           height: 5,
                                                           mipmapped: false)
textureDesc.storageMode = .shared
textureDesc.usage = [.shaderRead, .renderTarget]
let textureIn = device.makeTexture(descriptor: textureDesc)!

// Fill textureIn

let bytesPerRow = MemoryLayout<UInt8>.stride * 5
var region = MTLRegion(origin: MTLOrigin(x: 0, y: 0, z: 0),
                       size: MTLSize(width: 5, height: 5, depth: 1))

textureIn.replace(region: region, mipmapLevel: 0, withBytes: bufferIn.contents(), bytesPerRow: bytesPerRow)

but when I examine textureIn.buffer, I find that it’s nil, so it looks as if replace has failed to load my data.

start = textureIn.buffer!.contents()
  .bindMemory(to: UInt8.self, capacity: 25)
// Unexpectedly found nil while unwrapping an Optional value. Playground execution failed.

bufferPointer = UnsafeBufferPointer(start: start, count: 25)
bufferPointer.forEach { matrixOut.append($0) }

for r in 0..<5 {
  print("|", terminator: "  ")
  for c in 0..<5 {
    print(matrixOut[r * 5 + c], terminator: "  ")
  }
  print("|")
}

Any idea what I missed?
textureIn.replace(…) seems to be the problem

[Incidentally, my plan was to subject this texture to a load of different MPS kernels and so see your Image Processing example from Chapter 30 at work.]

I’m not in completely the right head space, but I think that you use MTLTexture.buffer when you are sharing a resource. So when you change the MTLTexture.buffer, it changes the original MTLBuffer.

When you create an MTLTexture, if it’s not sharing a resource, then buffer will be nil.
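To make the "sharing" idea concrete, here's a small sketch (my own, not from the book) of the other way round: creating the texture *from a buffer* with MTLBuffer's makeTexture(descriptor:offset:bytesPerRow:), so that texture.buffer is non-nil and points back at the buffer. The alignment arithmetic is an assumption on my part; the exact bytesPerRow rules are device-specific.

```swift
import Metal

// Sketch: a linear (buffer-backed) texture, whose .buffer property is non-nil.
if let device = MTLCreateSystemDefaultDevice() {
  let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .a8Unorm,
                                                      width: 5,
                                                      height: 5,
                                                      mipmapped: false)
  desc.storageMode = .shared
  desc.usage = [.shaderRead]

  // Buffer-backed textures need bytesPerRow rounded up to the device's
  // linear-texture alignment.
  let align = device.minimumLinearTextureAlignment(for: .a8Unorm)
  let bytesPerRow = ((5 + align - 1) / align) * align

  let buffer = device.makeBuffer(length: bytesPerRow * 5,
                                 options: .storageModeShared)!
  if let texture = buffer.makeTexture(descriptor: desc,
                                      offset: 0,
                                      bytesPerRow: bytesPerRow) {
    // Unlike device.makeTexture(descriptor:), this texture shares the
    // buffer's storage, so texture.buffer is the buffer itself.
    print(texture.buffer != nil)
  }
}
```

With a texture made this way, writing through either the buffer's contents() or the texture changes the same memory.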

How about using getBytes:

textureIn.replace(
  region: region,
  mipmapLevel: 0,
  withBytes: bufferIn.contents(),
  bytesPerRow: bytesPerRow
)
var newArray = [UInt8](repeating: 0, count: 25)
region = MTLRegionMake2D(0, 0, textureIn.width, textureIn.height)
textureIn.getBytes(
  &newArray,
  bytesPerRow: 5,
  from: region,
  mipmapLevel: 0
)

Looks like your head space is clearer than mine, Caroline :thinking:

Your solution works, but I don’t understand why, or why what I did doesn’t. If the texture has some data following the ‘replace’, where on earth is it?

When you create an MTLTexture, if it’s not sharing a resource, then buffer will be nil.

I’ve no idea what this sharing means. I thought an MTLTexture was an MTLResource (by inheritance). Both are protocols, so just what the specific object is I do not know! ‘textureIn’ has no ‘parent’.

Anyhow, thanks. As always you are so helpful. I’m going to barrel on and try to use the template as if it exists :grinning:


Hi Caroline,
with your help I’ve been able to carry out some tests which may interest you (see attached playground).
MPSConvolve.playground.zip (13.3 KB)

I use an 11x11 texture with a centred value of 100 so that you can think of each output value as the percentage of each surrounding pixel included in the central pixel.

I’ve run an MPSImageGaussianBlur on the texture and you can see from the test that it has an effect over a 7x7 range and corresponds nicely with the matrix shown here in Wikipedia. This represents a Gaussian Blur with a standard deviation equal to the 4th root of 0.5, and you’ll note that its core nine pixels are close to (but only approximately) the kernel matrix (normalised) you call a Gaussian Blur in the book.
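To check that claim numerically, here’s a small pure-Swift sketch (my own numbers, not from the book or the playground): sampling a Gaussian with standard deviation 0.5^(1/4) on a 3x3 grid and normalising lands close to the familiar 1-2-1 kernel divided by 16.

```swift
import Foundation

// Sample a 2D Gaussian with sigma = 0.5^(1/4) at the nine 3x3 grid points.
let sigma = pow(0.5, 0.25)
var weights = [Double]()
for y in -1...1 {
  for x in -1...1 {
    weights.append(exp(-Double(x * x + y * y) / (2 * sigma * sigma)))
  }
}

// Normalise so the weights sum to 1.
let sum = weights.reduce(0, +)
let normalised = weights.map { $0 / sum }

// Centre ≈ 4/16 = 0.25, edges ≈ 2/16 = 0.125, corners ≈ 1/16 = 0.0625.
print(normalised.map { (16 * $0 * 1000).rounded() / 1000 })
```

So the 1-2-1 kernel really is (approximately) a Gaussian at that sigma, which matches what the test shows.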

I’ve also run an MPSImageConvolution on the same image with your kernel and learned that you need to input it in normalised form to get the desired effect. This will be important in my final example. Strangely, they don’t give us a ‘scale’ property, so we must do the normalising ourselves.
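Since there’s no ‘scale’ property, the normalising step is just dividing each weight by the kernel’s sum before handing the Float array to MPSImageConvolution(device:kernelWidth:kernelHeight:weights:). A minimal sketch of that step (pure Swift; the MPS call itself is left as a comment):

```swift
// The book's 3x3 blur kernel, whose weights sum to 16.
let kernel: [Float] = [1, 2, 1,
                       2, 4, 2,
                       1, 2, 1]

// Normalise so the weights sum to 1 — MPSImageConvolution won't do this for you.
let weightSum = kernel.reduce(0, +)
let normalisedKernel = kernel.map { $0 / weightSum }

// Then, with a Metal device in hand:
// let conv = MPSImageConvolution(device: device,
//                                kernelWidth: 3,
//                                kernelHeight: 3,
//                                weights: normalisedKernel)
print(normalisedKernel.reduce(0, +))
```

Without this step every output pixel comes out 16 times too big.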

If you comment out line 31, something surprising happens. I thought the program would crash because it shouldn’t be able to write to ‘textureOut’ but it can. I wonder why!!

Then I replaced ‘matrixIn’ with a uniform array of 100s to see how a solid matrix is treated, and the effect you describe in the book occurs, with a fall-off in effect near the edges. There is talk in the MPS documentation of EdgeMode functionality, but it doesn’t seem applicable here, and in any case doesn’t offer wrap-around as an option, which I was hoping for since I wanted to use this as a compute kernel rather than an image filter. Any computation in, say, a cellular automaton would be wrong here, and errors would propagate into the centre too.
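For what it’s worth, wrap-around would have to be done by hand. A CPU-side sketch (my own function, not anything MPS provides) of a toroidal 3x3 convolution might look like this:

```swift
// Convolve a width x height image with a 3x3 kernel, wrapping at the
// edges (torus topology) instead of clamping or zero-filling.
func convolveWrapped(_ image: [Float], width: Int, height: Int,
                     kernel: [Float]) -> [Float] {
  var out = [Float](repeating: 0, count: image.count)
  for y in 0..<height {
    for x in 0..<width {
      var acc: Float = 0
      for ky in -1...1 {
        for kx in -1...1 {
          // Wrap coordinates around the edges.
          let sy = (y + ky + height) % height
          let sx = (x + kx + width) % width
          acc += image[sy * width + sx] * kernel[(ky + 1) * 3 + (kx + 1)]
        }
      }
      out[y * width + x] = acc
    }
  }
  return out
}
```

With a normalised kernel, a uniform image of 100s stays 100 everywhere, edges included — exactly the property a cellular automaton needs and MPS’s edge modes don’t give.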

Finally, I had a feeling there was something wrong in your edge calculation and the last examples show that this is indeed the case. Your calculation:

(0 * 1 + 0 * 2 + 0 * 1 +
0 * 2 + 6 * 4 + 7 * 2 +
0 * 1 + 4 * 2 + 9 * 1) / 9 = 6

should divide by 16 and yield 3, as my result here shows. [Switch to UInt8 to see this clearly.]
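The arithmetic is easy to check in plain Swift (my own snippet): the divisor for a normalised blur is the kernel’s weight sum (16), not the number of taps (9).

```swift
// The corner-pixel neighbourhood from the example (out-of-image taps are 0)
// and the book's 3x3 blur kernel.
let pixels = [0, 0, 0,
              0, 6, 7,
              0, 4, 9]
let kernel = [1, 2, 1,
              2, 4, 2,
              1, 2, 1]

// Weighted sum: 6*4 + 7*2 + 4*2 + 9*1 = 55.
let weighted = zip(pixels, kernel).map { $0 * $1 }.reduce(0, +)

print(weighted / 16)  // 3 — integer truncation, matching the UInt8 result
print(weighted / 9)   // 6 — the divide-by-9 calculation
```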

Anyhow, it was nice to investigate and learn quite a lot about how to construct one-channel textures (maybe I need to look at multi-channels too).

I haven’t shared resources.

Yes, both MTLTexture and MTLBuffer are MTLResources.

The difference is that MTLTexture is formatted image data (see the old Metal Programming Guide).

I was researching use cases for sharing resources, and came across this nice example, where you scratch out a blurred picture to see the sharp picture underneath.

https://medium.com/@s1ddok/combine-the-power-of-coregraphics-and-metal-by-sharing-resource-memory-eabb4c1be615

When drawing a path, it’s easier to use Core Graphics, so this article shows how to create the MTLTexture from the Core Graphics path.

Yes, I would have thought it would crash. Perhaps the usage setting is advisory only with MPS. Apple says:

Metal can optimize operations for a given texture, based on its intended use. Set explicit usage options for a texture, if you know them in advance, before you use the texture.

Your result is very interesting. The book states that you only use the numbers from the convolution matrix where the image matrix is non-zero. However, your result shows that this is not true. As you say, you divide by 16, not 9, which gives the result of 3.

As you also say, if you normalise the book’s image matrix first, then you get the book’s result of 6. Hmmm.

I think you mean ‘the book’s kernel matrix’, and I don’t say that. Actually, if you don’t normalise, each pixel is just 16 times bigger.

Yes, I’ve seen this article but it’s way over my head. I’ve never worked with Core Graphics nor have any idea what a CGContext is, etc., etc. :grinning:

all the best