Hi there! Thanks for the article. I have a question about the MLModelTrainer enum. Why isn't it a struct instead? Is there a particular reason for that?
@frostra1n Thanks for writing, and I hope you found the article useful!
To your question: no, there is no strong reason not to use a struct here. However, as a best practice, we use a case-less enum when a type has no stored properties. Such an enum can't be instantiated, so it acts as a simple namespace for static methods.
Hi @vsubrahmanian, very nice article! It really helped me create an app I wanted to build. I have one question about Core ML. To reduce the app size, I uploaded my models to the Core ML dashboard: I first created a collection and then added my models to it. It works great, but now I want to add more models to the collection, and there seems to be no option for doing so. The problem is that I want to keep adding models to my collection without having to update the collection identifier in the iOS app and publish a new app version every time. What do you think is the best way to do that?
Link from WWDC: Use model deployment and security with Core ML - WWDC20 - Videos - Apple Developer
@delaportas Thanks for your kind words. Glad the article was helpful in building your app! The problem you describe is interesting and very relevant here. Unfortunately, the Core ML model deployment mechanism Apple currently provides doesn't let you add new models to an existing collection, as you found, and I don't see a workaround to make that work either.
My approach in this scenario would be to use a CDN service that hosts the models along with a configuration file (JSON) containing the models' metadata and the URL paths to download them. As described in the Apple documentation linked below, you can then download, compile, and use the ML models within the app.
Hope this helps.
https://developer.apple.com/documentation/coreml/downloading_and_compiling_a_model_on_the_user_s_device
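If it helps, here's a rough sketch of the on-device compile step from that documentation; the function name, storage location, and error handling are my own assumptions, and it presumes the raw .mlmodel file (listed in your JSON configuration) has already been downloaded, e.g. with URLSession:

import CoreML
import Foundation

func installModel(downloadedAt tempURL: URL) throws -> MLModel {
    // Compile the raw .mlmodel into an optimized .mlmodelc bundle.
    let compiledURL = try MLModel.compileModel(at: tempURL)

    // Move the compiled model to Application Support so it persists across launches.
    let fileManager = FileManager.default
    let supportDir = try fileManager.url(for: .applicationSupportDirectory,
                                         in: .userDomainMask,
                                         appropriateFor: nil,
                                         create: true)
    let permanentURL = supportDir.appendingPathComponent(compiledURL.lastPathComponent)
    _ = try fileManager.replaceItemAt(permanentURL, withItemAt: compiledURL)

    // Load and return the installed model.
    return try MLModel(contentsOf: permanentURL)
}

With that in place, adding a new model is just a matter of updating the JSON configuration on the CDN; the app discovers and installs it without a new release.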
I see. Thanks for the answer. One more question: is there any specific reason the image gets scaled and cropped to 512 x 512? Could I keep the original dimensions and resolution?
@delaportas Sorry I missed your follow-up question.
Unfortunately, no. The Create ML model currently supports an input and output image size of 512 x 512 pixels only; this is a limitation of the framework. It doesn't depend on the dimensions of the style or input image, and there is currently no setting to adjust it. However, you can provide an appropriate crop option, as described in the tutorial:
import CoreML
import Vision

// Ask Core ML to center-crop and scale the input image to the model's expected size.
let imageOptions: [MLFeatureValue.ImageOption: Any] = [
    .cropAndScale: VNImageCropAndScaleOption.centerCrop.rawValue
]
Also, you can see the image dimension requirements under the Predictions tab when you open the .mlmodel file in Xcode. Hope this helps!
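For completeness, here's a rough sketch of how that options dictionary might be passed along when constructing the model's image input. The file path and pixel format below are illustrative assumptions on my part, and it reuses the imageOptions dictionary from the snippet above:

import CoreML
import CoreVideo
import Foundation

// Hypothetical input image location, purely for illustration.
let imageURL = URL(fileURLWithPath: "input.jpg")

// Core ML center-crops and scales the image to the 512 x 512 size the model expects.
let featureValue = try MLFeatureValue(
    imageAt: imageURL,
    pixelsWide: 512,
    pixelsHigh: 512,
    pixelFormatType: kCVPixelFormatType_32ARGB,
    options: imageOptions
)
// featureValue.imageBufferValue now holds a 512 x 512 CVPixelBuffer
// ready to be set on the model's input feature.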