Kodeco Forums

Image Depth Maps Tutorial for iOS: Getting Started

Learn how to use the incredibly powerful image manipulation frameworks on iOS to work with image depth maps in only a few lines of code.


This is a companion discussion topic for the original entry at https://www.raywenderlich.com/314-image-depth-maps-tutorial-for-ios-getting-started

Awesome tutorial! I'd love to see more, like how to extract an image based on the depth data. For example, in the last picture with the statue: instead of making the background transparent, how would you extract the outline of the statue as an image?


Thanks so much!

One option would be to update MaskParams.slope to a larger number. The steeper the slope, the narrower the range of depths captured by the mask. You can also play with MaskParams.width to focus on the area you want to view.

After you have a narrow mask, you can apply it to the image to remove everything but the parts of the image at that depth.
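Here's a rough sketch of what I mean. The filter chain follows the band-pass approach from the tutorial, but treat the exact names and numbers as illustrative:

import CoreGraphics
import CoreImage

// Sketch: the steeper MaskParams.slope is, the narrower the band of depths the
// mask keeps; MaskParams.width widens or narrows the band around the focus depth.
struct MaskParams {
    static var slope: CGFloat = 4.0
    static var width: CGFloat = 0.1
}

func createMask(from depthImage: CIImage, focus: CGFloat) -> CIImage {
    let s1 = MaskParams.slope
    let s2 = -MaskParams.slope
    let filterWidth = 2 / MaskParams.slope + MaskParams.width
    let b1 = -s1 * (focus - filterWidth / 2)
    let b2 = -s2 * (focus + filterWidth / 2)

    // Ramp up to 1 as the depth approaches the focus band from below...
    let mask0 = depthImage
        .applyingFilter("CIColorMatrix", parameters: [
            "inputRVector": CIVector(x: s1, y: 0, z: 0, w: 0),
            "inputGVector": CIVector(x: 0, y: s1, z: 0, w: 0),
            "inputBVector": CIVector(x: 0, y: 0, z: s1, w: 0),
            "inputBiasVector": CIVector(x: b1, y: b1, z: b1, w: 0)])
        .applyingFilter("CIColorClamp")

    // ...and ramp back down to 0 past the band.
    let mask1 = depthImage
        .applyingFilter("CIColorMatrix", parameters: [
            "inputRVector": CIVector(x: s2, y: 0, z: 0, w: 0),
            "inputGVector": CIVector(x: 0, y: s2, z: 0, w: 0),
            "inputBVector": CIVector(x: 0, y: 0, z: s2, w: 0),
            "inputBiasVector": CIVector(x: b2, y: b2, z: b2, w: 0)])
        .applyingFilter("CIColorClamp")

    // Keep only pixels where both ramps are high, i.e. the band around the focus depth.
    return mask0.applyingFilter("CIDarkenBlendMode",
                                parameters: ["inputBackgroundImage": mask1])
}

Once the mask is narrow, feed it into CIBlendWithMask with a clear background image and you're left with just the statue.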

Does that make sense?

Hi Yono, thank you so much for your wonderfully composed tutorial. I am currently following it and working on a project that automatically captures photos as depth map images with specific “layers” of depth. Your tutorial is very inspiring.

However, I have a quick question while I am looking for the optimal parameters for my case: how can we save the depth data along with the images that we take on a dual-camera iPhone?

I am using an iPhone 8+, taking photos in portrait mode, and I have even used the Apple demo, AVCam, to take images. I know the depth data is included, since I can see the depth effect in the Apple demo app, WiggleMe. However, when I AirDrop those photos to my Mac (running macOS High Sierra) and load them into the Xcode project following your instructions (including changing the file names), I can see the images in the app, but the “Original”, “Depth”, “Mask” and “Filtered” buttons are all grayed out. I am not able to apply the Mask to the images.

Do you know if there is a specific way I should transfer the images from my iPhone to the Xcode project so I can use them in the app?

@toojuice Can you please help with this when you get a chance? Thank you - much appreciated! :]

Hi @henglee, thanks for the kind words!

I think the issue might be AirDrop. When you take a portrait mode picture, there are actually 2 images stored. One with the depth data and one without. When you AirDrop, it looks like it sends the one without.

Try connecting your phone to your Mac with a Lightning cable and open the built-in Image Capture app. You should see two images with the following naming convention:

  • IMG_####.jpg
  • IMG_E####.jpg

The one with the E is about 1/2 the size of the one without the E. You want the bigger one. When you AirDrop, it looks like it sends the smaller one, but removes the E.
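By the way, if you want a quick sanity check that a transferred file actually contains depth data before adding it to the project, something like this should work (just a sketch using ImageIO's auxiliary data support; the function name is mine):

import Foundation
import ImageIO

// Returns true if the image file carries disparity or depth auxiliary data.
func containsDepthData(at url: URL) -> Bool {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return false }
    let hasDisparity = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
        source, 0, kCGImageAuxiliaryDataTypeDisparity) != nil
    let hasDepth = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
        source, 0, kCGImageAuxiliaryDataTypeDepth) != nil
    return hasDisparity || hasDepth
}

If it returns false for a file, the depth data didn't survive the transfer, which would explain the grayed-out buttons in the sample app.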

Hi @toojuice, thank you so much for the clarification! There aren't many resources on this topic on the internet right now, and your tutorial is the best one by far! Thank you so much for your help; I am getting the expected results now! Much appreciated, @shogunkaramazov, for the reminder!


Hi,
First of all thank you for this great tutorial.
I want to create a video recorder with these functionalities, so can you tell me how to show the live depth filter on the live camera? Since AVCaptureVideoDataOutput has didOutputSampleBuffer, is there any option for applying such a filter to the live camera feed?

Hi Yono!

Thanks for the awesome tutorial - really interesting topic, very well written, clear and understandable!

I’m trying to build off this tutorial and incorporate the new CIDepthBlurEffect that Apple includes in iOS 11 to enhance the blur effect. I’ve gotten most of it working, but I’m finding that the parameter “inputFocusRect”, which sets what to focus on, is hard to use and not very well documented. It’s in CIVector format, and I want to be able to pass in touch points, but they seem to be in different coordinate spaces. Any knowledge of CIDepthBlurEffect and how to implement it alongside the great work shown in the tutorial?

Thanks!

Hi, thanks for an awesome tutorial!

I need to create a Portrait mode in my app, and our team wants automatic focus based on feature detection in the mask. Is that possible?
Another question: can I create a tap gesture that sets the focus (of the filter with the depth mask) on the feature I tapped? This is basically what Apple implemented in their Portrait mode. I also managed to implement a touch gesture that focuses on a point on the screen using the regular camera, but with depth it seems different, since the focus value is a single value, as opposed to regular camera focus, where I used Apple’s functions that lock the focus to a CGPoint.
I hope this makes sense.

Hi @souvickcse,

Thanks for the kind words! I haven’t done a lot of work with depth maps and video, but check out this example project from WWDC that does what you’re looking to do:

https://developer.apple.com/library/content/samplecode/AVCamPhotoFilter/Introduction/Intro.html
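At a very high level, the streaming side looks something like this (just a sketch: the class name is made up, and the sample project above shows how to synchronize the depth frames with the video frames properly using AVCaptureDataOutputSynchronizer):

import AVFoundation
import CoreImage
import Foundation

final class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    let depthOutput = AVCaptureDepthDataOutput()

    func configure() {
        guard let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input),
              session.canAddOutput(depthOutput)
        else { return }

        session.addInput(input)
        session.addOutput(depthOutput)
        depthOutput.isFilteringEnabled = true   // temporal smoothing of the depth stream
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth.queue"))
        session.startRunning()
    }

    // Called for every streamed depth frame; this is where you'd build the mask
    // from the tutorial and composite it with the matching video frame.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        let depthImage = CIImage(cvPixelBuffer: depthData.depthDataMap)
        _ = depthImage
    }
}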

Hope that helps!

(and sorry for the slow reply. I didn’t get a notification about your post)

Hi @geraldfinzi,

Thanks for the kind words! I believe that Core Image has its origin in the bottom left corner of the screen, whereas UIKit has its origin at the top left corner. So to convert between the two, you need to do something like:

let yPosInCoreImage = screenHeight - touchYPos
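For example, something like this (a rough sketch; it assumes the focus rect is built in the image's pixel coordinate space, and the size of the focus square is arbitrary):

import CoreGraphics
import CoreImage

// Map a UIKit touch point into Core Image coordinates and wrap it in a small
// rect suitable for a parameter like inputFocusRect.
func focusRect(for touch: CGPoint, imageExtent: CGRect, viewSize: CGSize) -> CIVector {
    // Scale from view points to image pixels.
    let scaleX = imageExtent.width / viewSize.width
    let scaleY = imageExtent.height / viewSize.height

    // Flip the y-axis: UIKit's origin is top left, Core Image's is bottom left.
    let x = touch.x * scaleX
    let y = imageExtent.height - (touch.y * scaleY)

    let side = imageExtent.width * 0.1   // arbitrary focus square size
    let rect = CGRect(x: x - side / 2, y: y - side / 2, width: side, height: side)
    return CIVector(cgRect: rect)
}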

Sorry for the slow response. I didn’t get a notification that there was some activity on the forum for this tutorial.

Hi @eyzuky,

Thanks for the kind words. I think there are two different concepts for focus that you are talking about here. One is the actual focus of the camera and the other is the artificial focus based on the depth map. For the depth map-based focus to work, the entire image has to be in focus (or at least the point you want to highlight needs to be).

So assuming you have a focused image that has depth data included, you could take the value of the depth map at the touch location and use that as your focus depth, then create the depth mask using this focus depth (just like in the tutorial).
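Something along these lines should work (just a sketch; it assumes the touch point has already been converted into the depth image's pixel coordinates, including the y-flip mentioned elsewhere in this thread):

import CoreGraphics
import CoreImage

// Read the depth/disparity value under the tapped pixel and use it as the
// focus depth for the mask, the same way the slider value is used in the tutorial.
func focusDepth(at point: CGPoint, in depthImage: CIImage, context: CIContext) -> CGFloat {
    var pixel = [Float32](repeating: 0, count: 4)
    context.render(depthImage,
                   toBitmap: &pixel,
                   rowBytes: 4 * MemoryLayout<Float32>.size,
                   bounds: CGRect(x: point.x, y: point.y, width: 1, height: 1),
                   format: .RGBAf,
                   colorSpace: nil)
    return CGFloat(pixel[0])   // the value lives in the red channel
}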

Does that make sense?

Sorry for the slow reply. I didn’t get a notification about your post.

Hi @toojuice,
Thank you for your reply. I checked the WWDC video, and I was able to create a video with a blurred background using depth data via - (void)dataOutputSynchronizer:(AVCaptureDataOutputSynchronizer *)synchronizer didOutputSynchronizedDataCollection:(AVCaptureSynchronizedDataCollection *)synchronizedDataCollection
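For anyone else following along, the Swift version of that callback looks roughly like this (a sketch only; the class and output names are illustrative, and the masking and video writing are omitted):

import AVFoundation
import CoreImage

final class DepthVideoRecorder: NSObject, AVCaptureDataOutputSynchronizerDelegate {
    let videoOutput = AVCaptureVideoDataOutput()
    let depthOutput = AVCaptureDepthDataOutput()

    func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                                didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
        // Pull the matching video frame and depth frame out of the collection.
        guard
            let syncedVideo = synchronizedDataCollection.synchronizedData(for: videoOutput)
                as? AVCaptureSynchronizedSampleBufferData,
            let syncedDepth = synchronizedDataCollection.synchronizedData(for: depthOutput)
                as? AVCaptureSynchronizedDepthData,
            !syncedVideo.sampleBufferWasDropped,
            !syncedDepth.depthDataWasDropped,
            let videoBuffer = CMSampleBufferGetImageBuffer(syncedVideo.sampleBuffer)
        else { return }

        let videoImage = CIImage(cvPixelBuffer: videoBuffer)
        let depthImage = CIImage(cvPixelBuffer: syncedDepth.depthData.depthDataMap)

        // Mask videoImage using depthImage (as in the tutorial), then hand the
        // composited frames to an AVAssetWriter to record the blurred video.
        _ = (videoImage, depthImage)
    }
}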
Again thank you for this awesome tutorial and the reply. :]


First, thanks for the great tutorial!!

I wonder how to load the original photo without the depth effect. I mean, I want to load IMG_####.jpg, not IMG_E####.jpg.
Some apps can read the original photo from portrait mode photos, for example the ‘focos’ app. I have tried many ways, but I couldn’t do it.

Can you please help me?
Thank you


@toojuice Can you please help with this when you get a chance? Thank you - much appreciated! :]

Hi @doonoo,

Are you trying to load the image from the main bundle or from the photo album via UIImagePickerController?

@toojuice
Sorry, my question was not clear enough.


First, I got the PHAsset object via my own image picker, and I tried to load it using fullSizeImageURL from requestContentEditingInput. I also tried loading it with CGImageSourceCreateWithURL and CGImageSourceCreateImageAtIndex, but I could only load the photos with the depth effect applied.
For better blur, I think I should get the original photos, but I can’t find the right way to do it.

Thank you


Hi @doonoo,

I’m not entirely sure what’s going wrong, since I don’t know what your code looks like. But, check out this WWDC 2017 video:

https://developer.apple.com/videos/play/wwdc2017/508?time=434

I’m linking to the video at the point where the presenter begins talking about using PhotoKit to read image depth data. Does this help?
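In case it helps, the basic PhotoKit flow I have in mind looks roughly like this (a sketch only; error handling is omitted, and the function name is mine):

import Photos
import CoreImage

// Load the full-size image for a PHAsset together with its disparity data.
func loadImageWithDepth(for asset: PHAsset,
                        completion: @escaping (CIImage?, CIImage?) -> Void) {
    let options = PHContentEditingInputRequestOptions()
    options.isNetworkAccessAllowed = true

    asset.requestContentEditingInput(with: options) { input, _ in
        guard let url = input?.fullSizeImageURL else {
            completion(nil, nil)
            return
        }
        let image = CIImage(contentsOf: url)
        // Core Image can pull the embedded disparity map out of the same file.
        let depth = CIImage(contentsOf: url, options: [.auxiliaryDisparity: true])
        completion(image, depth)
    }
}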

Thank you for your kindness!
Unfortunately, I have already watched that video.

I tried these approaches:

// 1
imageManager.requestImageData(for: self.asset!,
                              options: nil,
                              resultHandler: { (data, stringValue, orientation, _) in
    let image = UIImage(data: data!)
})

// 2
imageManager.requestImage(for: self.asset!,
                          targetSize: self.getFullSize(),
                          contentMode: .aspectFill,
                          options: imageOption,
                          resultHandler: { (image, _) in
    // use the returned image here
})

// 3
if let imageSource = CGImageSourceCreateWithURL(filePath as CFURL, nil) {
    let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil)
    let image = UIImage(cgImage: cgImage!)
}

All the results were the same: I could only load photos with the depth effect already applied (the left photo in the image below).

[attached image: depth]

How can I load the right photo?