Awesome tutorial! I would love to see more, like how you can extract an image based on the depth data. For example, in the last picture with the statue: instead of making the background transparent, how would you extract the outline of the statue as an image?
One option would be to update MaskParams.slope to a larger number. The steeper the slope, the narrower the range of depths captured by the mask. You can also play with MaskParams.width to focus in on the area you want to view.
After you have a narrow mask, you can apply it to the image to remove everything but the parts of the image at that depth.
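For the statue example specifically, here's a rough sketch of how you might cut the subject out once you have that narrow mask. This isn't from the tutorial itself: CIBlendWithMask is a standard Core Image filter, and the image/mask parameters below are placeholders for whatever your project already produces with MaskParams.

import CoreImage
import UIKit

// Rough sketch: keep only the masked subject and make everything else
// transparent. `image` is the original photo as a CIImage and `mask` is the
// narrow depth-based mask built with the tutorial's MaskParams.
func extractSubject(from image: CIImage, using mask: CIImage) -> UIImage? {
  // A fully transparent background the same size as the photo.
  let clearBackground = CIImage(color: .clear).cropped(to: image.extent)

  guard let blend = CIFilter(name: "CIBlendWithMask") else { return nil }
  blend.setValue(image, forKey: kCIInputImageKey)
  blend.setValue(clearBackground, forKey: kCIInputBackgroundImageKey)
  blend.setValue(mask, forKey: kCIInputMaskImageKey)

  guard let output = blend.outputImage else { return nil }

  // Render through a CIContext so the alpha channel survives.
  let context = CIContext()
  guard let cgImage = context.createCGImage(output, from: output.extent) else {
    return nil
  }
  return UIImage(cgImage: cgImage)
}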
Hi Yono, thank you so much for your wonderfully composed tutorial. I am currently following it and working on a project that automatically captures photos as depth map images with specific "layers" of depth. Your tutorial is very inspiring.
However, I have a quick question while I look for the optimal parameters for my case. How can we save the depth data along with the images that we take on a dual-camera iPhone?
I am using an iPhone 8+, taking photos both in portrait mode and with the Apple demo app AVCam. I know depth data is included, since I can see the depth effect in the Apple demo app WiggleMe. However, when I AirDrop those photos to my Mac (running macOS High Sierra) and load them into the Xcode project following your instructions (and changing the file names), I can see the images in the app, but the "Original", "Depth", "Mask", and "Filtered" buttons are all grayed out. I am not able to apply the Mask to the images.
Do you know if there is a specific way I should transfer the images from my iPhone to the Xcode project so I can use them in the app?
I think the issue might be AirDrop. When you take a portrait mode picture, there are actually 2 images stored. One with the depth data and one without. When you AirDrop, it looks like it sends the one without.
Try connecting your phone to your Mac with a Lightning cable and open up the built-in Image Capture app. You should see 2 images with the following naming convention:
IMG_####.jpg
IMG_E####.jpg
The one with the E is about 1/2 the size of the one without the E. You want the bigger one. When you AirDrop, it looks like it sends the smaller one, but removes the E.
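If you want to double-check whether a file that made it onto your Mac (or into your Xcode project) actually carries the depth data, here's a small sketch using ImageIO on iOS 11+. It simply asks the image source for a disparity auxiliary map; if that comes back nil, you've got the flattened copy.

import Foundation
import ImageIO

// Returns true if the image file contains a disparity (depth) auxiliary map.
func hasDisparityData(at url: URL) -> Bool {
  guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
    return false
  }
  return CGImageSourceCopyAuxiliaryDataInfoAtIndex(
    source, 0, kCGImageAuxiliaryDataTypeDisparity) != nil
}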
Hi @toojuice, thank you so much for the clarification! There are not many resources on this topic on the internet right now, and your tutorial is the best one by far. Thank you so much for your help! I am getting the expected results now, thank you! And much appreciated @shogunkaramazov for the reminder!
Hi,
First of all thank you for this great tutorial.
I want to create a video recorder with these features, so can you tell me how to show the live depth filter on the live camera feed? AVCaptureVideoDataOutput has didOutputSampleBuffer; is there any way to apply such a filter and show it on the live camera?
Thanks for the awesome tutorial - really interesting topic, very well written, clear and understandable!
I'm trying to build off this tutorial and incorporate the new CIDepthBlurEffect that Apple includes in iOS 11 to enhance the blur effect. I've gotten most of it working, but I'm finding the parameter "inputFocusRect", which sets what to focus on, hard to use and not very well documented. It's in CIVector format, and I want to be able to pass in touch points; however, they seem to be in different coordinate spaces. Do you have any knowledge of CIDepthBlurEffect and how to combine it with the great work shown in the tutorial?
I have a requirement to create a Portrait mode in my app, and our team wants automatic focus based on feature detection in the mask. Is that possible?
Another question is whether I can create a tap gesture that makes the focus (of the filter with the depth mask) land on the feature I tapped. This is basically what Apple implemented in their Portrait mode. I also managed to implement a touch gesture that focuses on a point on the screen using the regular camera, but with depth it seems different, since the focus is a single value, as opposed to regular camera focus, where I used functions provided by Apple that lock the focus according to a CGPoint... I hope this makes sense.
Thanks for the kind words! I haven't done a lot of work with depth maps and video, but check out this example project from WWDC that does what you're looking to do:
Thanks for the kind words! I believe that Core Image has its origin in the bottom-left corner of the screen, whereas UIKit has its origin in the top-left corner. So to convert between the two, you need to do something like:
let yPosInCoreImage = screenHeight - touchYPos
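For completeness, here's a rough sketch of how that flip could fit into building an inputFocusRect for CIDepthBlurEffect from a touch. The view-to-image scaling and the rectangle size are my assumptions, so treat them as starting points to experiment with rather than the documented behavior of the filter:

import CoreImage
import UIKit

// Convert a UIKit touch point into a CIVector rect for inputFocusRect.
// Assumes the rect should be in the image's own pixel space with a
// bottom-left origin.
func focusRect(for touch: CGPoint, in view: UIView, imageExtent: CGRect) -> CIVector {
  // Scale from view coordinates to image coordinates.
  let scaleX = imageExtent.width / view.bounds.width
  let scaleY = imageExtent.height / view.bounds.height
  let x = touch.x * scaleX
  // Flip y, since Core Image's origin is at the bottom-left.
  let y = imageExtent.height - (touch.y * scaleY)

  // A small square centered on the converted point; the size is arbitrary.
  let side = imageExtent.width * 0.1
  let rect = CGRect(x: x - side / 2, y: y - side / 2, width: side, height: side)
  return CIVector(cgRect: rect)
}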
Sorry for the slow response. I didn't get a notification that there was some activity on the forum for this tutorial.
Thanks for the kind words. I think there are two different concepts for focus that you are talking about here. One is the actual focus of the camera and the other is the artificial focus based on the depth map. For the depth map-based focus to work, the entire image has to be in focus (or at least the point you want to highlight needs to be).
So, assuming you have a focused image that includes depth data, you could take the value of the depth map at the touch location and use that as your focus depth. Then create the depth mask using this focus depth (just like in the tutorial).
Does that make sense?
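In case it helps, here's a rough sketch of sampling the depth map under a touch. It assumes you keep the depth map around as a CVPixelBuffer of 32-bit floats (for example, AVDepthData's depthDataMap converted to kCVPixelFormatType_DisparityFloat32) and that the point has already been converted into the map's pixel coordinates; if you only have the map as a CIImage, you'd render it into a buffer first.

import CoreGraphics
import CoreVideo

// Read the depth/disparity value at a given pixel coordinate.
func depthValue(at point: CGPoint, in depthMap: CVPixelBuffer) -> Float? {
  CVPixelBufferLockBaseAddress(depthMap, .readOnly)
  defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

  let width = CVPixelBufferGetWidth(depthMap)
  let height = CVPixelBufferGetHeight(depthMap)
  let x = Int(point.x), y = Int(point.y)
  guard x >= 0, x < width, y >= 0, y < height,
        let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }

  // Step to the start of row y, then read the x-th 32-bit float.
  let rowStart = base.advanced(by: y * CVPixelBufferGetBytesPerRow(depthMap))
  return rowStart.assumingMemoryBound(to: Float32.self)[x]
}

Whatever value comes back can then be plugged in as the focus depth when you build the mask, exactly as described above.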
Sorry for the slow reply. I didn't get a notification about your post.
Hi @toojuice,
Thank you for your reply. I checked the WWDC video and was able to create a video with a blurred background using the depth data via - (void)dataOutputSynchronizer:(AVCaptureDataOutputSynchronizer *)synchronizer didOutputSynchronizedDataCollection:(AVCaptureSynchronizedDataCollection *)synchronizedDataCollection
Again thank you for this awesome tutorial and the reply. :]
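For anyone else reading along, here's roughly what the Swift side of that synchronizer callback can look like. This is only a sketch, not the tutorial's code: the class name and the two output properties are placeholders for whatever you've already wired into your AVCaptureSession.

import AVFoundation

class DepthSyncHandler: NSObject, AVCaptureDataOutputSynchronizerDelegate {
  // Placeholders for the outputs you added to your capture session.
  let videoDataOutput = AVCaptureVideoDataOutput()
  let depthDataOutput = AVCaptureDepthDataOutput()

  func dataOutputSynchronizer(
    _ synchronizer: AVCaptureDataOutputSynchronizer,
    didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection
  ) {
    // Grab the depth data and the matching video frame, skipping dropped ones.
    guard
      let syncedDepth = synchronizedDataCollection
        .synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
      !syncedDepth.depthDataWasDropped,
      let syncedVideo = synchronizedDataCollection
        .synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData,
      !syncedVideo.sampleBufferWasDropped
    else { return }

    let depthData = syncedDepth.depthData        // AVDepthData for this frame
    let sampleBuffer = syncedVideo.sampleBuffer  // the matching video frame

    // From here, build the depth mask from depthData and blend it with the
    // video frame the same way the tutorial does for still images, then hand
    // the result to whatever view is displaying the preview.
    _ = (depthData, sampleBuffer)
  }
}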
I wonder how to load the original photo without the depth effect. I mean, I want to load IMG_####.jpg, not IMG_E####.jpg.
Some apps can read the original photo from portrait mode photos, for example the "Focos" app. I tried many ways, but I couldn't.
First, I got the PHAsset object via my own image picker and tried to load it with fullSizeImageURL from requestContentEditingInput. I also tried loading with CGImageSourceCreateWithURL and CGImageSourceCreateImageAtIndex, but I could only load photos with the depth effect applied.
For better blur, I think I should get the original photos, but I can't find the right way to do it.
// 3
if let imageSource = CGImageSourceCreateWithURL(filePath as CFURL, nil),
   let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) {
  let image = UIImage(cgImage: cgImage)
  // use image here
}
All results were the same: I could only load photos with the depth effect applied (the left photo below).
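One thing that might be worth trying, purely as a guess on my part and not something covered in the tutorial: ask the Photos framework explicitly for the unadjusted version of the asset via PHImageRequestOptions, which should correspond to IMG_####.jpg rather than the edited IMG_E####.jpg. I haven't verified that this strips the depth-of-field rendering in every case, so treat it as an experiment.

import Photos
import UIKit

// Request the unadjusted (pre-edit) image data for an asset.
func loadUnadjustedImage(for asset: PHAsset,
                         completion: @escaping (UIImage?) -> Void) {
  let options = PHImageRequestOptions()
  options.version = .unadjusted        // or .original to ignore every edit
  options.isNetworkAccessAllowed = true

  PHImageManager.default().requestImageData(for: asset, options: options) {
    data, _, _, _ in
    completion(data.flatMap { UIImage(data: $0) })
  }
}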