In chapter 4, on page 136, it says:
SqueezeNet exports a Core ML model ~4.7MB, so it’s a better option. But VisionFeaturePrint_Screen is built into iOS 12, so it produces a much smaller model — only ~41KB.
I am a bit confused. Can you explain exactly what you mean?
Hi @scy,
You are referring to the “Machine Learning by Tutorials” book. Mentioning the title helps anyone trying to answer you, since there is a large collection of books on this site.
Now, for your question: as the author explains, the first time you run the training, it downloads a pre-trained neural network to use as a feature extractor. VisionFeaturePrint_Screen is built into iOS 12 itself, so the exported Core ML model only needs to contain the small classifier that sits on top of it, which is why it is tiny (~41KB). SqueezeNet is not part of iOS, so its weights have to be bundled inside the exported model, making it much larger (~4.7MB); in exchange, that model also runs on iOS versions earlier than 12.
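To make the trade-off concrete, here is a rough sketch of the two training calls using Turi Create, the toolkit the book uses. The dataset path and label-extraction line are placeholders, not taken from the book, and the model name `VisionFeaturePrint_Screen` is as it appeared in Turi Create at the time (newer releases renamed it to `VisionFeaturePrint_Scene`):

```python
import turicreate as tc

# Load training images (folder path is a placeholder, not from the book);
# here each image's parent folder name is assumed to be its label.
data = tc.image_analysis.load_images("training_data/", with_path=True)
data["label"] = data["path"].apply(lambda p: p.split("/")[-2])

# Tiny model: the feature extractor already ships inside iOS 12+,
# so the exported .mlmodel contains only the classifier (~41KB).
tiny = tc.image_classifier.create(
    data, target="label", model="VisionFeaturePrint_Screen")
tiny.export_coreml("SnacksVision.mlmodel")

# Larger model: SqueezeNet's weights must be bundled inside the
# .mlmodel (~4.7MB), but it also runs on iOS versions before 12.
big = tc.image_classifier.create(
    data, target="label", model="squeezenet_v1.1")
big.export_coreml("SnacksSqueezeNet.mlmodel")
```

Comparing the two exported `.mlmodel` files on disk shows the size difference directly: only the SqueezeNet export carries the network's weights.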