@doonoo let me look into this more closely. I'll report back soon.
Hi @doonoo,
So while playing around, I noticed the same thing you did. I believe the HEIC file type has the ability to store multiple images in the container, so it could be that the main image is the one with the blurred background.
However, the depth data is always available, whether or not the image has the blurred background.
I did this experiment to prove it to myself:
- Take a portrait mode photo.
- Use `PHImageManager` with `PHAsset` to read the last photo taken (sketched in the code below).
- Display the photo and the depth data in a `UIImageView`.
- Go into the Photos app.
- Edit the last photo to remove the portrait effect.
- Open my test app and see both the unblurred photo and the depth data.
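For reference, here's a minimal sketch of how the read-the-last-photo steps might look (assuming photo library permission is already granted; this uses the pre-iOS 13 `requestImageData(for:options:)` API):

```swift
import UIKit
import Photos
import CoreImage

// Fetch the most recently taken photo in the library.
let fetchOptions = PHFetchOptions()
fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
fetchOptions.fetchLimit = 1

if let asset = PHAsset.fetchAssets(with: .image, options: fetchOptions).firstObject {
  PHImageManager.default().requestImageData(for: asset, options: nil) { data, _, _, _ in
    guard let data = data else { return }

    // The main image: blurred or not, depending on its current edit state.
    let photo = UIImage(data: data)

    // Ask Core Image for the auxiliary disparity map embedded in the file.
    // This is nil only when the file contains no depth/disparity data at all.
    let depthImage = CIImage(data: data, options: [.auxiliaryDisparity: true])

    print("photo: \(photo != nil), depth data: \(depthImage != nil)")
  }
}
```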
I think maybe a next step for you would be to look into the HEIC image format and see if you can read the original and the portrait photo from the same container. I'm guessing it might be there.
Another interesting test would be to change your camera to take pictures as JPGs. Then it might actually save two separate images: one for the original and one for the portrait.
You can access this by going to Settings app → Camera → Formats → Most Compatible
Hi, thanks for contributing!
Now I have a question: can we get the point cloud data from the picture, and how? Hoping for your answer!
@toojuice Do you have any feedback about this? Thank you - much appreciated! :]
Hi @zhanqan,
There is no point cloud for depth data, as it isn't obtained using a single depth sensor. It's generated by two slightly offset cameras imitating stereoscopic vision. Just to be clear, this tutorial is not about the camera and sensor used for Face ID. This is about the depth data you can get from the dual cameras on the back of an iPhone 7 Plus, iPhone 8 Plus, or iPhone X.
Not relevant anymore, thanks for your awesome reply
Hi,
I was wondering if you have any information on what values the raw depth data takes on. I know that when converting to greyscale, the depth data at the individual pixel level has a value between 0 and 255. However, from my understanding, the raw disparity data actually ranges anywhere from 0 to greater than 1 for objects closer than 1 meter. If the raw data does in fact have much more range than 1/255, I feel like it could be manipulated for better results.
Any insight on the topic would be a great help! Thanks!
@toojuice Can you please help with this when you get a chance? Thank you - much appreciated! :]
Hi @wedouglas,
Yes, you're right. The raw disparity data can be greater than 1. That's the reason we normalize the data before using it to create a mask.
Take a look at the code for `normalize()` in `CVPixelBufferExtension.swift`. You can see there that first the minimum and maximum disparity values are found, and then they are used to normalize all pixels to fall in the range between 0.0 and 1.0.
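From memory, the two-pass logic looks roughly like this (a sketch; the real `CVPixelBufferExtension.swift` may differ in details such as stride handling):

```swift
import CoreVideo

extension CVPixelBuffer {
  // Normalize a single-channel 32-bit float disparity buffer so all
  // values fall within 0.0...1.0. Assumes no row padding (i.e. that
  // bytesPerRow == width * 4), which holds for these depth maps.
  func normalize() {
    CVPixelBufferLockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))
    defer { CVPixelBufferUnlockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0)) }

    let width = CVPixelBufferGetWidth(self)
    let height = CVPixelBufferGetHeight(self)
    guard let base = CVPixelBufferGetBaseAddress(self) else { return }
    let pixels = base.assumingMemoryBound(to: Float.self)

    // Pass 1: find the minimum and maximum disparity in the buffer.
    var minValue = Float.greatestFiniteMagnitude
    var maxValue = -Float.greatestFiniteMagnitude
    for i in 0 ..< width * height {
      minValue = min(minValue, pixels[i])
      maxValue = max(maxValue, pixels[i])
    }

    // Pass 2: rescale every pixel into the 0.0...1.0 range.
    let range = maxValue - minValue
    for i in 0 ..< width * height {
      pixels[i] = (pixels[i] - minValue) / range
    }
  }
}
```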
Another option would be to clamp any values to a maximum of 1.0, but you will lose a little bit of depth information that way. There are perfectly good reasons to do this, though.
Yeah, that makes sense. I was thinking that maybe the depth values could be better manipulated if you were trying to focus on a very specific slice of distance in the image. For example, the raw depth data isn't terribly linear, so 0 to 0.5 in the raw data corresponds to a much greater physical distance than 0.5 to 5. Therefore, maybe it could be manipulated such that your mask uses 1-255 for a very narrow slice of physical space, and just 0 for everything else, rather than spreading 0-255 over the entire image.
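Something like this hypothetical remap function could express that idea, with `sliceMin` and `sliceMax` being raw disparity values you pick by hand:

```swift
// Map disparities inside [sliceMin, sliceMax] onto 0.0...1.0 and
// zero out everything outside the slice, instead of normalizing
// over the whole image's min/max range.
func remapSlice(_ disparity: Float, sliceMin: Float, sliceMax: Float) -> Float {
  guard disparity >= sliceMin, disparity <= sliceMax else { return 0 }
  return (disparity - sliceMin) / (sliceMax - sliceMin)
}
```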
One more question: do you know if there is an easy way to save the greyscale depth representation as an image itself? I tried saving the `UIImage` and `CIImage` to the library, but it doesn't seem to work. Maybe that's because those don't actually contain the real pixel data to create the image from?
Any suggestions?
How did you try to save the image? Using `UIImagePNGRepresentation`?
How do you get the depth image using the TrueDepth camera on an iPhone X? As far as I know, it is not a stereo camera. Could you do a tutorial on extracting depth images using this camera and storing them?
I've tried using both the PNG and JPEG representations like that. It seems like those aren't actually getting any data, or that the data is unsupported. From the documentation for `UIImagePNGRepresentation`:
A data object containing the PNG data, or nil if there was a problem generating the data. This function may return nil if the image has no data or if the underlying CGImageRef contains data in an unsupported bitmap format.
I believe I'm out of my depth here, no pun intended. Is there something about the data in the `CVPixelBuffer` that isn't supported by JPEG/PNG? If that's the case, any suggestions on how to convert this data accordingly?
OK. Let me experiment a little and I'll get back to you. I haven't tried this out yet.
I've done a little digging, and the reason `UIImagePNGRepresentation` is returning nil is because the `UIImage` for the depth data map is not based on a `CGImage`. It was created from a `CIImage`, which was created from a `CVPixelBuffer`.
If you want to be able to save the depth data map, you'll need to first create a `CGImage` and then create the `UIImage` from that. When you create a `CGImage`, you'll have to be careful with the orientation of the data, so keep that in mind.
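Something along these lines should work (a sketch; `savableImage(from:)` is just an illustrative name, and `depthCIImage` stands in for the `CIImage` you already have):

```swift
import UIKit
import CoreImage

// Render the CIImage into a CGImage first, so UIImagePNGRepresentation
// has real bitmap data to encode.
func savableImage(from depthCIImage: CIImage) -> UIImage? {
  let context = CIContext()
  guard let cgImage = context.createCGImage(depthCIImage,
                                            from: depthCIImage.extent) else {
    return nil
  }
  // Adjust the orientation here if your depth map comes out rotated.
  return UIImage(cgImage: cgImage, scale: 1.0, orientation: .up)
}
```

Once the `UIImage` is backed by a `CGImage`, `UIImagePNGRepresentation` should return non-nil data that you can write out or save to the photo library.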
@toojuice Can you post a tutorial about getting the depth map using the TrueDepth camera on the iPhone X?
I just want to read the image and save the depth. Can you help me with this, please?
Thank you
Thanks for the suggestion! I've forwarded the idea along to Ray :]
@toojuice I tried this code and added another picture, captured using Portrait mode on an iPhone X, named test10.jpg, but the program only reads test01.jpg through test09.jpg and nothing more!
In addition, I can only see the top half of the image in original mode! Do you know why?
Also, I would like to save the depth image before the colors are normalized. How can I do that? I appreciate your help.
Hi, in my app we are implementing a Portrait mode for blurring only the background of an image.
I've attached my result.
I assume Apple is doing some more post-processing on the depth data, but I'm not sure. Maybe I'm missing something obvious?
Would appreciate your input.
Thanks!
Adding a `test10.jpg` to the project works for me. Make sure that the extension is lowercase, i.e. `.jpg` and NOT `.JPG`.
As for saving the depth image, you'll need to first create a `CGImage` and then create the `UIImage` from that. When you create a `CGImage`, you'll have to be careful with the orientation of the data, so keep that in mind.
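For the orientation part, one approach is to rotate the `CIImage` itself before rendering (a sketch; `.right` is just an example orientation, and `depthCIImage` is a placeholder for the un-normalized depth map you created):

```swift
import UIKit
import CoreImage

// depthCIImage: the CIImage you created from the depth CVPixelBuffer.
func saveDepthMap(_ depthCIImage: CIImage) {
  // Rotate the depth map to match the photo's orientation first
  // (.right is just an example; use whatever matches your capture).
  let oriented = depthCIImage.oriented(.right)
  let context = CIContext()
  if let cgImage = context.createCGImage(oriented, from: oriented.extent) {
    // The UIImage is backed by a CGImage now, so saving it works.
    UIImageWriteToSavedPhotosAlbum(UIImage(cgImage: cgImage), nil, nil, nil)
  }
}
```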