Issue in Chapter 21

This post is helpful for understanding the syntax.

Continuing the discussion from Chapter 21 Shader Attributes:

But at this stage, when I “Build and Run”, I get this error:

**2020-05-23 18:17:03.344388+0200 Raytracing[18963:2774266] Metal GPU Frame Capture Enabled**

**2020-05-23 18:17:03.344670+0200 Raytracing[18963:2774266] Metal API Validation Enabled**

**validateComputeFunctionArguments:834: failed assertion `Compute Function(accumulatedKernel): Shader uses texture(t[1]) as read-write, but hardware does not support read-write texture of this pixel format.'**

**(lldb)**

Any clue?

:sob: I was almost reaching the end … so

Oh, and of course, I also have the issue with the “Final” project
:scream:

What GPU are you running your code on?

MacBook Pro (13-inch, 2018, Four Thunderbolt 3 Ports)
Enabled with
Intel Iris Plus Graphics 655, 1536 MB :fearful:

And by the way: thank you for writing this book. Very instructive so far :slight_smile:


So,

I did some googling with “hardware compatibility” in mind and in the search text fields.

I found this thread: iOS 13, iPad Pro now says hardware… | Apple Developer Forums
From there I applied the suggested workaround, which consists of:

In Shaders.metal, a revised kernel function:

kernel void accumulatedKernel(constant Uniforms &uniforms,
                              texture2d<float> rendTex,
                              texture2d<float, access::read> t,
                              texture2d<float, access::write> tout, //<<<<<<< added: separate write-only texture
                              uint2 tid [[thread_position_in_grid]])
{
    if (tid.x < uniforms.width && tid.y < uniforms.height) {
        float3 color = rendTex.read(tid).xyz;
        if (uniforms.frameIndex > 0) {
            float3 prevColor = t.read(tid).xyz;
            prevColor *= uniforms.frameIndex;
            color += prevColor;
            color /= (uniforms.frameIndex + 1);
        }
        tout.write(float4(color, 1), tid); //<<<<<<< changed: write to the write-only texture
    }
}

and in Renderer.swift under

// MARK: accumulation
computeEncoder?.setTexture(accumulationTarget, index: 1)
computeEncoder?.setTexture(accumulationTarget, index: 2) //FIXME: My own addition inspired by https://forums.developer.apple.com/thread/123657
computeEncoder?.setComputePipelineState(accumulatedPipeline)



and so far, I have a black screen (which is what I expected :partying_face:)

@behr Do you still have issues with this?

I’m having this problem too (I created a duplicate thread, so I’m pasting it in here; I’ll try the posted work-around tonight, but note that the work-around is advised against by Apple, I believe):

I’m hitting a failed assertion on the run before section 1.5 (“Create the acceleration structure”) on a 2018 MacBook Pro, and the problem persists through to the supplied (i.e. not coded by me) final project. The build succeeds, but then I immediately get:

“validateComputeFunctionArguments:834: failed assertion `Compute Function(accumulateKernel): Shader uses texture(t[1]) as read-write, but hardware does not support read-write texture of this pixel format.'”

(In the final project it moves to the shadowKernel:)

“validateComputeFunctionArguments:834: failed assertion `Compute Function(shadowKernel): Shader uses texture(renderTarget[0]) as read-write, but hardware does not support read-write texture of this pixel format.'”

It’s happening with both the Xcode beta and release versions. I found on the Apple developer forums that someone may have got it working by passing the texture twice, with read and write access, but this is apparently not a good idea. I’ll try it anyway to see if it works.

Hi there :slight_smile: - I just ran the supplied final project on my 2019 MacBook Pro in both Xcode 11 and Xcode 12 beta 3 on Catalina 10.15.6, and it seems to run fine on mine. Is your configuration any different?

If you’re using Xcode 12, does it run in Xcode 11?

What GPU do you have? You can find out by print(device.description).
I found it doesn’t work when I switch to the integrated Intel GPU. I don’t think there’s much I can suggest, I’m afraid.

Oops, I’m on beta 2, I’ll start downloading 3 now. The above fix has got me past the first mid-section run; I’m working on the others now also. Thanks Caroline!

I just edited my answer! I found it doesn’t work for me on Intel.

You can find out what GPUs you have with:

    let devices = MTLCopyAllDevices()
    for device in devices {
      print(device.description)
    }

Also, did you try running it on your iOS device instead of macOS, because that should work.

<CaptureMTLDevice: 0x600003310d20> → <MTLDebugDevice: 0x1022385b0> → <MTLIGAccelDevice: 0x102600000>
name = Intel(R) Iris™ Plus Graphics 655

As you say … if (device = Intel) { return sadTrombone.aif }

Haven’t tried iOS yet, but I’ve got to the build & run right before section 2 (Shadow Rays), and it’s still running with the above fix (using the texture twice with .read and .write permissions). It looks like I’ll need to implement the fix again for the Shadow Kernel, judging by what the final version is failing on, but I’ll let you know how it goes.
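In case anyone else gets there first, here’s roughly what I expect the Swift side of that second fix to look like. This is only a sketch: computeEncoder, renderTarget and shadowPipelineState are my guesses based on the error message and the accumulation change above, not necessarily the exact names used in the book.

    // MARK: shadow rays (sketch only - adjust the names to your project)
    // Bind renderTarget twice: index 0 for the read-only texture parameter,
    // index 1 for the extra write-only parameter added to shadowKernel.
    computeEncoder?.setTexture(renderTarget, index: 0)
    computeEncoder?.setTexture(renderTarget, index: 1)
    computeEncoder?.setComputePipelineState(shadowPipelineState)

The shadowKernel itself would then need the matching signature change: split its read_write renderTarget into one access::read texture and one access::write texture, the same pattern as accumulatedKernel above.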

(When I was saying “this is apparently not a good idea”, I was referring to:

“Note: It is invalid to declare two separate texture arguments (one read, one write) in a function signature and then set the same texture for both.”

You may have issues especially regarding synchronization with read/write between different kernel threads. I’m surprised that the Metal API validation isn’t complaining.

Unless you can make sure that all the hardware / OS you want to support have read-write texture support, you need to provide an implementation that doesn’t need this read-write support.

from the same link the OP provided (I think)…)

Yes, I can see how there might be synchronisation issues. Hopefully the iOS device will work for you.
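By the way, if you want to check up front whether a GPU supports read-write textures at all, MTLDevice exposes a readWriteTextureSupport tier. A minimal sketch below, building on the device loop above; as far as I remember the render target in this chapter is rgba32Float, and read_write access to that format needs Tier 2, while Tier 1 only covers the single-channel 32-bit formats:

    import Metal

    // Print the read-write texture tier of every GPU in the machine.
    for device in MTLCopyAllDevices() {
      switch device.readWriteTextureSupport {
      case .tier2:
        print("\(device.name): read-write textures fully supported (Tier 2)")
      case .tier1:
        print("\(device.name): read-write limited to r32Float/r32Uint/r32Sint (Tier 1)")
      default:
        print("\(device.name): no read-write texture support")
      }
    }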

Confirmed it works on my phone without fixes, and on my MacBook in the final state with the fix implemented twice (once for each kernel function that requires read and write access to the same texture). Thanks Caroline!
