I’m thinking about the example in the early chapters where `ContentView()` calls `MetalView()`. It seems to me that if I had several such calls, each `MetalView` would produce the same image, using the same `Renderer`.
I’ve handled that by lifting the mesh creation into a struct (`CubeMesh`) that is passed in as `MetalView(mesh:)`, and in this way I was able to produce two differently sized cubes in views on the same screen. I had to obtain a device to do this, but I just grabbed the default device and didn’t pass it into the view. (Do I sometimes need to do that?)
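Roughly what I mean, as a sketch rather than my actual code (the geometry is a placeholder and `MetalView` is stubbed out here):

```swift
import SwiftUI
import MetalKit

// Sketch: the mesh is built outside the view, so each MetalView can differ.
struct CubeMesh {
    let vertexBuffer: MTLBuffer

    init(device: MTLDevice, size: Float) {
        // Real code would generate full cube geometry scaled by `size`;
        // a placeholder triangle keeps the sketch short.
        let vertices: [SIMD3<Float>] = [
            [-size, -size, 0], [size, -size, 0], [0, size, 0]
        ]
        vertexBuffer = device.makeBuffer(
            bytes: vertices,
            length: MemoryLayout<SIMD3<Float>>.stride * vertices.count)!
    }
}

// Stand-in for the wrapper around MTKView; the real one hosts the Metal view.
struct MetalView: View {
    let mesh: CubeMesh
    var body: some View { Color.black }
}

struct ContentView: View {
    // One default device, grabbed once and shared by both meshes.
    let device = MTLCreateSystemDefaultDevice()!

    var body: some View {
        VStack {
            MetalView(mesh: CubeMesh(device: device, size: 1.0))
            MetalView(mesh: CubeMesh(device: device, size: 0.5))
        }
    }
}
```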
That worked: two differently sized cubes appeared, but they were similar in every other respect, so it seems to me that I need to pre-build the renderers too (if, for example, I want different dynamics such as path or speed by running different shaders).
Am I thinking about this correctly? Will I end up replacing my view with `MetalView(contents: MyStruct)`, where everything I need will be specified in `MyStruct` before the call?
And what is happening on the GPU? Are the two views running in parallel using separate buffers, or what?
You’d probably have to do a fair bit of restructuring. There are a myriad of architectures you could use. The one in the book is only one possible one, and it’s over-simplified for learning purposes.
In the book, `GameController` is initialised in `MetalView`, so that would need to come out of there if `GameController` holds multiple views. Also, `InputController` would have to be per view rather than a singleton.
The crucial thing to understand is that an `MTKView` requires a delegate callback to do the drawing. `MTKViewDelegate` has two required methods: one that's called when the view changes size, and one that draws every frame.
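As a minimal sketch, those two methods look like this (the class name is arbitrary):

```swift
import MetalKit

final class ViewDelegate: NSObject, MTKViewDelegate {
    // Required: called whenever the view's drawable size changes,
    // e.g. on device rotation or window resize.
    func mtkView(_ view: MTKView, drawableSizeDidChange size: CGSize) {
        // Update projection matrices and any size-dependent state here.
    }

    // Required: called once per frame to do the actual drawing.
    func draw(in view: MTKView) {
        // Encode and commit a command buffer targeting view.currentDrawable.
    }
}
```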
A `GameController` could hold multiple views, and when you set up each view, you set up the delegate.
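A sketch of that setup, assuming a hypothetical `GameController` shape (not the book's exact class):

```swift
import MetalKit

final class GameController: NSObject, MTKViewDelegate {
    var views: [MTKView] = []

    // Each view is registered once; the controller becomes its delegate.
    func add(_ view: MTKView, device: MTLDevice) {
        view.device = device
        view.delegate = self
        views.append(view)
    }

    func mtkView(_ view: MTKView, drawableSizeDidChange size: CGSize) {
        // Per-view resize handling; `view` identifies which one changed.
    }

    func draw(in view: MTKView) {
        // `view` identifies which registered view is being drawn this frame.
    }
}
```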
Depending on how you set it up, you should only need one `Renderer` instance. You could tie the scene to the view and, in the delegate's `draw(in:)` method, send the view's scene to `draw` in `Renderer`.
To ensure just one `Renderer` instance, you could abstract the game scene. Have a `GameScene` protocol, which your game scenes conform to. So when you send the scene to `Renderer`, you're sending a `GameScene` rather than a specific scene.
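Putting those two ideas together might look something like this; `GameScene`, `SceneDelegate`, and `draw(scene:in:)` are assumed names, not the book's API:

```swift
import MetalKit

// Any concrete scene (menu, level, etc.) conforms to this.
protocol GameScene {
    mutating func update(deltaTime: Float)
}

final class Renderer {
    // One shared instance; it only ever sees the abstraction.
    func draw(scene: GameScene, in view: MTKView) {
        // Encode the scene's draw calls into view.currentDrawable.
    }
}

// One delegate per view, tying that view to its own scene,
// while all delegates share the single Renderer.
final class SceneDelegate: NSObject, MTKViewDelegate {
    var scene: GameScene
    let renderer: Renderer

    init(scene: GameScene, renderer: Renderer) {
        self.scene = scene
        self.renderer = renderer
    }

    func mtkView(_ view: MTKView, drawableSizeDidChange size: CGSize) { }

    func draw(in view: MTKView) {
        scene.update(deltaTime: 1 / Float(view.preferredFramesPerSecond))
        renderer.draw(scene: scene, in: view)
    }
}
```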
For the different shaders, you'll need different pipeline states. If you want different models to render differently, you could hold the pipeline state on each model. Or, again, have a `Model` protocol, with different types of models that conform to `Model`. Each type could hold its own pipeline state object, and you could sort the scene that way so that you're not constantly swapping GPU state (although that is fast).
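For example, sorting by pipeline state could look like this, assuming a hypothetical `Model` protocol:

```swift
import MetalKit

protocol Model {
    // Each model type owns the pipeline state built for its shader pair.
    var pipelineState: MTLRenderPipelineState { get }
    func render(encoder: MTLRenderCommandEncoder)
}

func drawModels(_ models: [any Model], encoder: MTLRenderCommandEncoder) {
    // Group by pipeline state so each state is set once per group
    // instead of once per model.
    let groups = Dictionary(grouping: models) {
        ObjectIdentifier($0.pipelineState)
    }
    for group in groups.values {
        encoder.setRenderPipelineState(group[0].pipelineState)
        for model in group {
            model.render(encoder: encoder)
        }
    }
}
```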
For your last question, your views will be totally separate, with their own drawable texture. I guess the GPU automatically schedules view rendering, just as it schedules any view drawing, but I don’t really know the full answer to that one.
Thank you Caroline. You’re very good to take so much trouble with that answer. That’s why I keep coming back to Kodeco.
You’ve given me lots to think about. I’ve already succeeded in putting two unrelated metal views on one screen and I plan to try out all your ideas.
All the best
Tchelyzt