Integrating with SceneKit

Hi,

I am integrating SceneKit with MetalKit. My main motivation is to use some low-level features from Metal, like viewports, while using the scene graph from SceneKit.

I needed a four-viewport CAD view and created a sample, but it's not working as expected. See my post:

https://forums.developer.apple.com/thread/109546

Also, I did not find a way to adapt the depth bias.

I am looking forward to that chapter, but I am unsure whether what I have tried so far is possible. Maybe you can tell me how an integration is possible, or to what extent.

Best regards, Murat

Hi Murat,

I’m not an expert in SceneKit, nor have I tested this to see how performant it is (I suspect it would be slow). But how about doing a first-pass render to a texture using SCNRenderer, blitting that texture to another texture to save it before rendering the second pass, and then repeating? You could then have four different SCNViews with planes that wear the four textures. Or blit each pass to a quarter of one texture.
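
Something like this minimal sketch of one pass is what I had in mind. I'm assuming you already have a device, a command buffer and a destination texture; the function name and parameters are mine, and I haven't profiled it (in practice you'd create the SCNRenderer once and reuse it):

import Metal
import SceneKit

// Render one SceneKit pass offscreen into a Metal texture.
func renderSceneOffscreen(scene: SCNScene,
                          device: MTLDevice,
                          commandBuffer: MTLCommandBuffer,
                          targetTexture: MTLTexture,
                          time: TimeInterval) {
  let renderer = SCNRenderer(device: device, options: nil)
  renderer.scene = scene

  // Describe the offscreen render target.
  let passDescriptor = MTLRenderPassDescriptor()
  passDescriptor.colorAttachments[0].texture = targetTexture
  passDescriptor.colorAttachments[0].loadAction = .clear
  passDescriptor.colorAttachments[0].clearColor =
      MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
  passDescriptor.colorAttachments[0].storeAction = .store

  let viewport = CGRect(x: 0, y: 0,
                        width: targetTexture.width,
                        height: targetTexture.height)
  renderer.render(atTime: time,
                  viewport: viewport,
                  commandBuffer: commandBuffer,
                  passDescriptor: passDescriptor)
}

You'd encode one of these per camera into the same command buffer, setting renderer.pointOfView each time, and then blit or composite the four textures.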

If you were doing it in Metal, Chapter 14, Multipass, has an example of displaying a blitted texture from a render pass on the screen.

I found this example of rendering to a texture from Lachlan Hurst, which uses SCNRenderer to render a scene offscreen to a texture that's then displayed in another scene: https://github.com/lachlanhurst/SceneKitOffscreenRendering

Hi Caroline,

In Chapter 22 (Integration with SceneKit):

We have lightPosition in world coordinates.
Then, in the vertex function, in.normal (in model space) is multiplied by scn_node.normalTransform,
which is supposed to be the inverse transpose of the model-view matrix.
Then we take the dot product of these, but it seems like they are in different coordinate spaces:
lightPosition is in world space and out.normal is in camera space (is it?).
Shading looks OK until you start moving the spaceship; then it breaks.

Shouldn’t we also move the lightPosition into camera space?
Also, how can I get the camera position in world space from the scn_node uniforms?

Thanks,

@waechterj - Thank you!

Yes, you’re absolutely right. The light position should be in camera space.

#include <metal_stdlib>
using namespace metal;

#include <SceneKit/scn_metal>

struct VertexIn {
  float4 position [[attribute(SCNVertexSemanticPosition)]];
  float3 normal [[attribute(SCNVertexSemanticNormal)]];
  float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

// Per-node data that SceneKit supplies automatically at buffer(1).
struct Uniforms {
  float4x4 modelViewProjectionTransform;
  float4x4 normalTransform;      // inverse transpose of modelViewTransform
  float4x4 modelViewTransform;
  float4x4 modelTransform;
};

struct VertexOut {
  float4 position [[position]];
  float3 normal;            // view space
  float4 viewLightPosition; // view space
  float4 viewPosition;      // view space
  float2 uv;
};

vertex VertexOut shipVertex(VertexIn in [[stage_in]],
                            constant float3& lightPosition [[buffer(2)]],
                            constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                            constant Uniforms& scn_node [[buffer(1)]]) {
  VertexOut out;
  out.position = scn_node.modelViewProjectionTransform * in.position;
  // normalTransform takes the model-space normal into view space.
  out.normal = (scn_node.normalTransform * float4(in.normal, 0)).xyz;
  // Move the world-space light into view space so the normal, light
  // and fragment position all live in the same coordinate space.
  out.viewLightPosition = scn_frame.viewTransform * float4(lightPosition, 1);
  out.viewPosition = scn_node.modelViewTransform * in.position;
  out.uv = in.uv;
  return out;
}

fragment float4 shipFragment(VertexOut in [[stage_in]],
                             texture2d<float> baseColorTexture [[texture(0)]]) {
  constexpr sampler s(filter::linear);
  float4 baseColor = baseColorTexture.sample(s, in.uv);

  // Direction from the fragment to the light, entirely in view space.
  float3 lightDirection = (normalize(in.viewLightPosition - in.viewPosition)).xyz;
  // Simple Lambertian diffuse shading.
  float diffuseIntensity = saturate(dot(normalize(in.normal), lightDirection));
  return baseColor * diffuseIntensity;
}
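
On the Swift side, SceneKit matches the custom lightPosition buffer by name. This is just a sketch of how I'd bind it, assuming an SCNProgram attached to the ship's material, and with an arbitrary example light value:

import SceneKit
import simd

// Build the SCNProgram and feed the world-space light position
// to the shader's lightPosition buffer once per frame.
func makeShipProgram() -> SCNProgram {
  let program = SCNProgram()
  program.vertexFunctionName = "shipVertex"
  program.fragmentFunctionName = "shipFragment"

  var lightPosition = SIMD3<Float>(1, 2, -2) // world space (example value)
  program.handleBinding(ofBufferNamed: "lightPosition",
                        frequency: .perFrame) { bufferStream, _, _, _ in
    bufferStream.writeBytes(&lightPosition,
                            count: MemoryLayout<SIMD3<Float>>.stride)
  }
  return program
}

Assign it with material.program = makeShipProgram(), and SceneKit fills in scn_frame and scn_node for you.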

I’m not 100% sure about getting the camera position in world space from the uniforms, but the docs suggest that SCNSceneBuffer.inverseViewTransform goes from “view space to world space”. That’s under frame-constant data in the Apple Developer Documentation.

Thank you, Caroline,

I just figured out how to get the camera position in world space:

scn_frame.inverseViewTransform * float4(0, 0, 0, 1)

In camera space it’s already at (0, 0, 0), so for the camera direction we can use the fragment position in view space (viewPosition in your example).

I’m trying to recreate what you are doing in the Materials chapter, except receiving everything we can from SceneKit. So naturally the next step is to use normals from the normal texture, and for that I need a tangent. The SceneKit docs mention an SCNVertexSemanticTangent attribute that is supposed to be automatically inferred from the UV coordinates. I’ve tried adding a tangent attribute to VertexIn, but I get the error “Vertex attribute 2 is not defined in the vertex descriptor”.
I could probably use Model I/O to add tangents and bitangents to the spaceship mesh, but what is the correct way of using SCNVertexSemanticTangent to get the inferred tangent?

Regards,

Jacob

I haven’t used SceneKit to that extent, so I can’t really help. However, I did find this recent answer to a four-year-old post (!) in the developer forum:

https://forums.developer.apple.com/thread/15431

They got the same message as you, and the suggested answer is to load the mesh using Model I/O.
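
In case it helps, here's a rough sketch of that suggestion. The function name, and the assumption that the ship lives in a single-mesh file that Model I/O can read, are mine:

import ModelIO
import SceneKit
import SceneKit.ModelIO

// Load a mesh with Model I/O, generate tangents from the existing
// normals and UVs, and bridge it back to SceneKit.
func loadGeometryWithTangents(from url: URL) -> SCNGeometry? {
  let asset = MDLAsset(url: url)
  guard let mesh = asset.object(at: 0) as? MDLMesh else { return nil }

  // Derive a tangent basis from the texture coordinates and normals.
  mesh.addTangentBasis(
    forTextureCoordinateAttributeNamed: MDLVertexAttributeTextureCoordinate,
    normalAttributeNamed: MDLVertexAttributeNormal,
    tangentAttributeNamed: MDLVertexAttributeTangent)

  return SCNGeometry(mdlMesh: mesh)
}

The returned geometry should then carry the tangent data that SCNVertexSemanticTangent expects in your vertex descriptor.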