It seems like everyone and their mother is getting into machine learning, Apple included. You can now train neural nets in Xcode!
In iOS 11, Apple added support for native neural net evaluation on iPhones and iPads with the new .mlmodel format. Developers can port their existing models to this format using Apple’s Python toolkit, but those without much machine learning experience who wanted to create their own models from scratch were left to fend for themselves.
In its ongoing effort to abstract away all the complicated parts of programming, Apple bundled its own neural net trainer with Xcode 10. In this article, we’ll use this tool to build what could possibly amount to the greatest use of machine learning and computer vision: a dog breed classification tool. Let’s dive in.
If you’re not familiar with the *hefty* download size of Xcode, you’re about to be. In order to utilize the new MLImageClassifierBuilder class, you will need to install a beta of Xcode 10 in addition to macOS 10.14, the newly announced Mojave. Both can be obtained from https://developer.apple.com/download/.
On a reasonably fast internet connection you’ll need about an hour and a half to download and install both. If you’re concerned about installing a beta operating system on your personal machine I wouldn’t worry. Early beta releases of macOS tend to be more stable than their mobile counterparts. After using macOS Mojave for a few days I have experienced no problems, and battery performance is quite comparable with macOS 10.13. As an added benefit, you get to experience the gorgeous new dark theme that Apple touts as one of the biggest features of Mojave.
Once you have both installed, you can get right into it. Open up your Xcode 10 beta, create a new playground by navigating to File->New->Playground, and choose the macOS blank template.
If you select iOS as the platform, the compiler will not be able to locate the necessary Swift modules for creating an image classification net, since the Create ML frameworks are macOS-only. You can create the image classification trainer with just a few lines of code.
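A minimal sketch of that snippet, assuming the CreateMLUI framework that ships with the Xcode 10 beta, looks like this:

```swift
// Runs in a macOS playground only; the CreateMLUI framework
// (home of MLImageClassifierBuilder) is not available on iOS.
import CreateMLUI

// Create the interactive image classifier trainer and show its
// drag-and-drop UI in the playground's live view.
let builder = MLImageClassifierBuilder()
builder.showInLiveView()
```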
After a few seconds, the code will evaluate and you will be prompted to press ⌥⌘↵ (Option-Command-Return). Upon doing so, a window should appear in the right pane of the editor with a box where you can drop training data.
As fun as the job of collecting and labelling thousands of dog images may appear, I guarantee you the task would get tedious. In order to train a model using Apple’s tool, images for each classification need to be placed inside of a folder with the label for that particular group. In our example, we need a folder containing a number of folders labelled as dog breeds. Each of these sub-folders would then contain images of that breed.
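As a rough illustration (the breed names here are only placeholders), the layout looks something like this:

```
DogBreeds/
├── golden_retriever/
│   ├── photo_001.jpg
│   └── photo_002.jpg
├── siberian_husky/
│   ├── photo_001.jpg
│   └── photo_002.jpg
└── samoyed/
    ├── photo_001.jpg
    └── photo_002.jpg
```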
Luckily, Stanford researchers already did all this for us! You can download an almost 1GB bundle of dog photos with the proper labelling scheme from this link. Each of the folders does have a pesky unique identifier in front of the dog breed, but we won’t worry about that.
Once the file has finished downloading, click on it and the .tar file should extract automatically using Apple’s built-in Archive Utility. Perusing the different photos is both fun and interesting, so I’d recommend doing that for a bit. Once you’re done, drag and drop the Images folder into the box marked for that purpose in Xcode.
After a few seconds, you should start seeing dog images flashing on the screen and information about training progress will be displayed in the console output. Beta versions of Xcode are woefully unstable, and I got a few crashes that interrupted my progress before I was able to successfully complete the model. If this happens to you, don’t despair. It will work. Eventually.
Training is relatively quick, depending on your hardware. On my 2016 MacBook Pro 15" with a discrete graphics processor, training took about 20 minutes. Activity Monitor reveals that training utilizes the GPU, so computers without a discrete graphics processor may take longer. Apple recently added support for external graphics processors, so something like this external AMD Radeon 580 enclosure could accelerate training even further. If you have a stronger graphics processor than the one in my MacBook, let me know how fast your training runs.
I was holding my laptop on my lap while training, and by the end it started to burn my legs. For the sake of the health of you and your computer I recommend that you position your computer in a way that enables it to properly vent hot air.
While the model is training, Xcode will print progress information to your console. Xcode will also set aside a small portion of the sample data to make sure that the model works as intended, running those images through the trained model to check its accuracy. After all is said and done, Xcode will output the results of this validation and allow you to save the .mlmodel.
Press the triangle next to the name of your net, and save the file to whatever location you choose as DogNet.mlmodel. Congrats! You have successfully made a DogNet that you can use in your iOS 11+ applications. To verify that this actually works, let’s create a new iOS app. Choose File->New->Project and select a single view iOS app. For some odd reason Xcode 9 won’t read .mlmodels generated by Xcode 10 (which might be intentional or might be a bug, I’m not sure), so you have to stick with Xcode 10.
(Screenshot: the error you get when attempting to import the file into Xcode 9.)
First, drag DogNet.mlmodel into the file navigator of your Xcode project, and make sure the “Add to Target” box is selected. Then, open up ViewController.swift, and paste the following code in viewDidLoad(). This code doesn’t handle errors, so if anything is wrong your app will crash.
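Here is a bare-bones sketch of what that code can look like, assuming Xcode generates a DogNet class from the imported model and that the test image is bundled as Chelsea.JPG (more on that in a moment):

```swift
// Paste inside viewDidLoad(). Note the try! calls and force-unwraps:
// if anything goes wrong, the app will crash, as mentioned above.
let dogNet = try! VNCoreMLModel(for: DogNet().model)

let request = VNCoreMLRequest(model: dogNet) { request, _ in
    // Print every breed the model considered along with its confidence.
    let results = request.results as! [VNClassificationObservation]
    for result in results {
        print("\(result.identifier) | \(result.confidence)")
    }
}

// Load the bundled test image and run the classification request on it.
let imageURL = Bundle.main.url(forResource: "Chelsea", withExtension: "JPG")!
let handler = VNImageRequestHandler(url: imageURL)
try! handler.perform([request])
```

Printing every observation, rather than just the top one, makes it easy to see how confident the model is in its runner-up guesses.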
You will also need to import Vision at the top of the file. I am using an image of my late golden retriever Chelsea, named Chelsea.JPG, which you can download from here.
Add the file to the Xcode project, and again make sure “Add to Target” is selected. Run the project, and take a look at the console output (ignore the iOS simulator; it should just show a blank screen):
Sure enough, the model identifies Chelsea’s beautiful fur and face as belonging to a golden retriever. It does this with reasonable confidence too! Now how about something harder, like a loaf of bread? Will our model guess corgi? Let’s find out!
Nope. Xcode gives us n02094114-Norfolk_terrier | 0.23143494 as the top result. I suppose this makes sense, given the stubby appearance and bready complexion of the Norfolk terrier (see below image). We did confuse our model a little bit, since a confidence of .23 is shaky at best.
I have trouble telling the difference between the Great Pyrenees and the Samoyed, so let’s see if the model can do any better.
Sure enough, the model says that this is a Samoyed with .91 confidence and a Great Pyrenees with .003. How about a husky vs. an Alaskan malamute?
Yet again, a decisive .81 for the malamute and a .13 for the husky. Good job, DogNet! As Xcode indicated with its 61% validation score, this model is far from perfect, but it does a decent job of recognizing bread and dogs alike.
If it seems like this was too easy and you didn’t actually do that much work, that’s because it was, and you didn’t. One of the most difficult parts of creating a neural net is often collecting and labelling the training data. Stanford researchers had to manually collect and classify those 20,000+ dog photos, as a net like this likely didn’t exist before they assembled their training set. If you want to go ahead and create your own training set, Apple recommends at least 10 photos for each category.
That’s all! If you enjoyed this post check out my website www.AlexWulff.com, my Medium profile, and my YouTube channel. If you like dog-related tech posts then you might want to check out my post If Arduino Boards were Dog Breeds.