In the Raytracing sample's shadeKernel, the intersectionBuffer (the Swift name for the MTLBuffer) is passed to the shader as a pointer to an array of Intersection structs:
kernel void shadeKernel(uint2 tid [[thread_position_in_grid]],
                        constant Uniforms & uniforms,
                        device Ray *rays,
                        device Ray *shadowRays,
                        device Intersection *intersections,
                        device float3 *vertexColors,
                        device float3 *vertexNormals,
                        device float2 *random,
                        texture2d<float, access::write> renderTarget)
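For reference, if I'm reading the sample right, Intersection is the shader-side mirror of MPSIntersectionDistancePrimitiveIndexCoordinates, i.e. something like this (my paraphrase, not copied from the sample):

struct Intersection {
    float distance;              // first member, so it sits at byte offset 0
    unsigned int primitiveIndex;
    float2 coordinates;
};                               // 16 bytes per element, vs. 4 for a bare float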
In the shadowKernel, however, the same intersectionBuffer is passed in as an array of plain float values:
kernel void shadowKernel(uint2 tid [[thread_position_in_grid]],
                         constant Uniforms & uniforms,
                         device Ray *shadowRays,
                         device float *intersections,
                         texture2d<float, access::read_write> renderTarget)
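And the body reads the distance by indexing that float pointer per ray, roughly like this (quoting from memory, so the details may be off):

unsigned int rayIdx = tid.y * uniforms.width + tid.x; // one entry per ray
float intersectionDistance = intersections[rayIdx];   // indexed with float stride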
The intersection distance, which is all that is used from the Intersection struct here, happens to be the first member of the struct. So intersections[0] should work either way. But in general, for intersections[rayIdx], aren't we reading arbitrary values, since the compiler will use the stride of a float (4 bytes) rather than the stride of Intersection to compute the address? It seems we should be declaring the parameter as device Intersection * and reading intersections[rayIdx].distance in the code.
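To make the address arithmetic I'm worried about concrete, here's a hypothetical kernel (the names strideDemo, asStructs, asFloats, and out are mine, not the sample's) that views the same kind of buffer both ways:

kernel void strideDemo(uint2 tid [[thread_position_in_grid]],
                       device Intersection *asStructs, // stride = sizeof(Intersection) = 16
                       device float *asFloats,         // stride = sizeof(float) = 4
                       device float *out)
{
    unsigned int rayIdx = tid.x;
    float viaStruct = asStructs[rayIdx].distance; // reads bytes [rayIdx*16, rayIdx*16+4)
    float viaFloat  = asFloats[rayIdx];           // reads bytes [rayIdx*4,  rayIdx*4+4)
    out[rayIdx] = viaStruct - viaFloat;           // zero only where the two views coincide
}

If both parameters were bound to a single buffer of Intersection structs, the two reads would only agree at rayIdx == 0.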
Or is there some implicit conversion going on that I don't understand? I suppose I could step through it in the shader debugger, but the code seems less clear than it should be.
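In other words, I would have expected the shadow kernel to be declared and used like this (my rewrite, not the shipped sample):

kernel void shadowKernel(uint2 tid [[thread_position_in_grid]],
                         constant Uniforms & uniforms,
                         device Ray *shadowRays,
                         device Intersection *intersections, // struct stride, not float stride
                         texture2d<float, access::read_write> renderTarget)
{
    if (tid.x < uniforms.width && tid.y < uniforms.height) {
        unsigned int rayIdx = tid.y * uniforms.width + tid.x;
        float intersectionDistance = intersections[rayIdx].distance;
        // (rest of the kernel unchanged)
    }
}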
Thanks again,
~chuck