@jack_wu, great tutorial! I must admit I'm new to Xcode and iOS development, but I'm able to follow along with your tutorial for the most part. I'm working on a research project right now and need to identify diagonal parallel-line features within images. I'd like to implement something in Swift like:
func foundDiagParallel(img: UIImage, width: Int, tolerance: Float) -> Bool {
// [img] is the image to inspect
// [width] is the maximum distance between the two parallel lines
// [tolerance] controls how close to parallel the two lines must be
// IMAGE PROCESSING
// return true if two parallel lines <= width apart and within the
// specified tolerance are found; return false otherwise
} // end func foundDiagParallel
In a perfect world (for simplicity), Core Image would let me build something like this; however, I'm not certain that it does. My question is: can you recommend the API that would be most capable (and simplest) for accomplishing this? I'm trying to avoid spinning up on Core Graphics, Core Image, OpenCV, or GPUImage only to learn that there is a better API than the one I happened to try first. Any insight would be most appreciated.
Suppose I have an image of a frame where the bezel is gray and the center is yellow. I want to lay this frame on top of another image and make the yellow part completely transparent. The image in the background still needs to be able to move up and down behind the bezel.
This was easy to do in C#, but I'm struggling to find a simple way to do it in Swift 3.0.
The code in this tutorial has some mistakes related to rounding.
ghostSize.width and ghostSize.height can be fractional. As a result, the index can go out of bounds in the loop, leading to an EXC_BAD_ACCESS exception.
For example, assume width = 750 and height = 742.5. Memory will be allocated for 750 * 742.5 = 556,875 elements, while the loop conditions are written as i < height and j < width. Since 742 is still < 742.5, the loop iterates over 743 rows (0 through 742, inclusive) of 750 elements each, which makes 743 * 750 = 557,250 > 556,875.
To avoid this, I suggest replacing the line
CGSize ghostSize = CGSizeMake(targetGhostWidth, (targetGhostWidth / ghostImageAspectRatio));
with
CGSize ghostSize = CGSizeMake(targetGhostWidth, (int) (targetGhostWidth / ghostImageAspectRatio));
A similar mistake affects ghostOrigin: its coordinates can be fractional as well. As a result, when we calculate offsetPixelCountForInput, the fractional part of ghostOrigin.y, multiplied by inputWidth, introduces an unexpected extra integer offset, and the picture ends up misplaced.
To avoid this, I suggest replacing the line
CGPoint ghostOrigin = CGPointMake(inputWidth * 0.5, inputHeight * 0.2);
with
CGPoint ghostOrigin = CGPointMake((int)(inputWidth * 0.5), (int)(inputHeight * 0.2));
This tutorial is more than six months old, so questions are no longer supported at the moment for it. We will update it as soon as possible. Thank you! :]