Thanks for your article.
Well done, but I have a question.
On my development phone (iPhone 6s), if the position on AVCaptureDevice is set to back instead of front, the screen updates too slowly with the Comic Filter and Crystal Filter.
Is there any way to improve this?
The back camera definitely has the ability to capture a higher resolution, so that might be slowing it down. Does the Monochrome filter work well on the back camera? I can take a look to see if I can find an optimization for the code. I’ll try to get my hands on an iPhone 6s.
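If resolution turns out to be the bottleneck, one thing worth trying is capping the session preset so the filters have fewer pixels to process. A minimal sketch, assuming an already-configured AVCaptureSession (the captureSession name is a placeholder, not the article's code):

```swift
import AVFoundation

// Sketch only: cap the capture resolution so the CIFilters have fewer
// pixels to push through. `captureSession` stands in for whatever
// AVCaptureSession the sample project configures.
func lowerCaptureResolution(of captureSession: AVCaptureSession) {
  captureSession.beginConfiguration()
  // 1280x720 is far cheaper to filter than the back camera's default preset.
  if captureSession.canSetSessionPreset(.hd1280x720) {
    captureSession.sessionPreset = .hd1280x720
  }
  captureSession.commitConfiguration()
}
```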
Love the article. For some reason, while setting up the Combine pipeline, Xcode tells me that CGImage doesn’t have a method called create.
Any ideas?
Under the Extensions group, there’s a file called CGImageExtensions.swift, which should include an extension on CGImage to add the create method.
Can you see that in the downloaded material?
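If it’s missing for some reason, here’s a rough sketch of what such a helper might look like (the actual implementation in the downloaded materials may differ):

```swift
import CoreGraphics
import CoreImage
import CoreVideo

extension CGImage {
  // Sketch: build a CGImage from a camera CVPixelBuffer via Core Image.
  // The version shipped in the downloaded materials may differ.
  static func create(from pixelBuffer: CVPixelBuffer?) -> CGImage? {
    guard let pixelBuffer = pixelBuffer else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    return CIContext().createCGImage(ciImage, from: ciImage.extent)
  }
}
```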
Thanks for the great tutorial!
Can you tell whether or not this approach is efficient in terms of resources and performance?
For example, the simple Combine pipeline (without CIFilters) vs. the classic approach with UIViewRepresentable and AVCaptureVideoPreviewLayer?
Thanks for the kind words!
I have not done any performance tests; however, without the CIFilters, I would imagine using an AVCaptureVideoPreviewLayer would be faster. An Apple engineer explained to me that AVCaptureVideoPreviewLayer has the shortest path from the camera sensor to the screen pixels… so that will always win.
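For reference, the "classic" approach mentioned above looks roughly like this (a sketch only; CameraPreview and session are placeholder names, and the session is assumed to be already configured and running):

```swift
import SwiftUI
import UIKit
import AVFoundation

// Sketch of a UIViewRepresentable wrapping AVCaptureVideoPreviewLayer.
// `session` is assumed to be an already-configured, running AVCaptureSession.
struct CameraPreview: UIViewRepresentable {
  let session: AVCaptureSession

  // Backing the view with the preview layer keeps it sized to the view automatically.
  final class PreviewView: UIView {
    override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
    var previewLayer: AVCaptureVideoPreviewLayer {
      layer as! AVCaptureVideoPreviewLayer
    }
  }

  func makeUIView(context: Context) -> PreviewView {
    let view = PreviewView()
    view.previewLayer.session = session
    view.previewLayer.videoGravity = .resizeAspectFill
    return view
  }

  func updateUIView(_ uiView: PreviewView, context: Context) {}
}
```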
How would you update this guide to use the new built-in concurrency API, with async/await and Task, to replace the outgoing GCD/DispatchQueue approach?
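One direction that might work (a sketch, not from the tutorial; FrameStreamer and frameStreamer are made-up names) is to expose camera frames as an AsyncStream so downstream work can iterate them with for await inside a Task. Note that AVFoundation still requires a DispatchQueue for the sample buffer delegate callback itself:

```swift
import AVFoundation
import CoreMedia
import CoreVideo

// Sketch only: bridge the AVCaptureVideoDataOutput delegate into an AsyncStream.
final class FrameStreamer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
  private var continuation: AsyncStream<CVPixelBuffer>.Continuation?

  // Consumers iterate this stream with `for await`.
  lazy var frames: AsyncStream<CVPixelBuffer> = AsyncStream { [weak self] continuation in
    self?.continuation = continuation
  }

  func captureOutput(_ output: AVCaptureOutput,
                     didOutput sampleBuffer: CMSampleBuffer,
                     from connection: AVCaptureConnection) {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
      continuation?.yield(pixelBuffer)
    }
  }
}

// Usage sketch:
// Task {
//   for await pixelBuffer in frameStreamer.frames {
//     // apply the CIFilters and publish the resulting image here
//   }
// }
```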