Core ML: Easily Leverage the Power of Machine Learning

With Core ML you can create AI-powered mobile apps for image and text recognition, as well as games with machine-learning features. Find out what’s inside this new Apple framework and how you can leverage it.

Machine learning represents a huge opportunity for the mobile app development industry. But taking advantage of this powerful new tech isn’t always easy. It requires developers to build and train complex models that rely on equally complex algorithms.

So when Apple released Core ML, a lot of people got excited. In essence, Core ML is a machine-learning framework that makes it possible for developers to integrate both pre-packaged and third-party machine learning models into their apps.

In this article, we’re going to take a deep dive into Core ML. We’ll look at how it works, what it can do for you, and what its potential downsides are.

How Core ML works

Core ML acts as a modular intermediary between Apple’s earlier machine-learning frameworks (Accelerate and Metal Performance Shaders, or MPS) and the new domain-specific frameworks: Vision, Foundation, and GameplayKit.

On a practical level, this means companies and developers can make use of machine-learning features like image recognition and language processing with a minimum of data science expertise. By simplifying interaction with existing machine-learning frameworks, Core ML marks a major step toward mainstream, accessible ML functionality for businesses of all shapes and sizes.

  • Vision is an image recognition framework that allows for object detection and classification in images and videos.
  • Foundation (NLP) is a framework for natural language processing: sentiment analysis, text prediction, etc.
  • GameplayKit is a framework for building game logic, including the evaluation of learned decision trees.

Accelerate and MPS are powerful but low-level, which kept many developers from building apps directly on them. Core ML provides a developer-friendly interface that fixes this problem and allows for much easier and more efficient machine-learning app development.

Apple has included with Core ML a host of open-source image recognition models that make it easy to replicate features, like Face ID and the predictive keyboard, already available on a range of Apple devices. The company has also introduced a standardized model format (.mlmodel) that lets developers convert a range of trained third-party models to make them compatible with all Apple devices.
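To give a sense of how lightweight the developer-facing side is, here’s a minimal sketch of image classification with one of those open-source models. It assumes the MobileNet model has been downloaded from Apple’s site and added to an Xcode project, which auto-generates a MobileNet Swift class:

```swift
import CoreML
import Vision

// A minimal classification pass with one of Apple's open-source models.
// Assumes MobileNet.mlmodel has been added to the Xcode project, which
// auto-generates the `MobileNet` Swift class used below.
func classify(_ image: CGImage) throws {
    let model = try VNCoreMLModel(for: MobileNet().model)

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("Saw \(top.identifier) (confidence \(top.confidence))")
    }

    // Vision scales and converts the image to match the model's input.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```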

What Are the Opportunities of Core ML?

Core ML has a handful of obvious positive implications that you can take advantage of immediately. If your apps would benefit from image recognition (or any other machine-learning functionality), there’s little excuse not to include it.

The Tandem of Image Analysis and Text Mining

Because Core ML underpins the domain-specific libraries, it enables developers to use the Vision and Foundation frameworks in concert, combining image recognition and natural language processing capabilities. This ability to mix inputs and outputs is one of the framework’s standout features.

So a developer could, for example, build a pipeline that uses Vision to identify text boundaries in an image (such as on a sign), and then leverages NLP to extract the meaning of that text.
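A sketch of what that pipeline might look like follows. Note that it uses VNRecognizeTextRequest and the NaturalLanguage framework’s NLTagger, iOS 13+ APIs that postdate Core ML’s debut (the original-era equivalents were VNDetectTextRectanglesRequest plus NSLinguisticTagger); the signImage input is an assumed CGImage:

```swift
import Vision
import NaturalLanguage

// Step 1: detect and recognize the text in an image (iOS 13+ API).
// Step 2: run a part-of-speech tagger over whatever was read.
func readSign(in signImage: CGImage) throws {
    let textRequest = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        let text = observations
            .compactMap { $0.topCandidates(1).first?.string }
            .joined(separator: " ")

        // Hand the recognized string to the NaturalLanguage tagger.
        let tagger = NLTagger(tagSchemes: [.lexicalClass])
        tagger.string = text
        tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                             unit: .word,
                             scheme: .lexicalClass,
                             options: [.omitPunctuation, .omitWhitespace]) { tag, range in
            print("\(text[range]): \(tag?.rawValue ?? "unknown")")
            return true
        }
    }

    let handler = VNImageRequestHandler(cgImage: signImage, options: [:])
    try handler.perform([textRequest])
}
```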

On-Device Data Storage

Core ML runs entirely on device hardware; no internet connection is required. Because models run locally, there is no reliance on third-party servers, APIs, or a connection to remote storage.

The upshot of data being processed and stored on individual devices is a high level of privacy. Some have argued that an app’s ability to use machine learning to retrieve data for malicious purposes might present a more targeted safety threat (imagine an app filtering through your photos to pinpoint the most damaging ones).

But others have been quick to point out that this issue (if it is an issue) is not something inherent to Core ML. Such permissions are already granted to apps that don’t incorporate machine learning. As Rene Ritchie, writing at iMore, put it: “The bottom line here is that, while machine learning could theoretically be used to target specific data, it could only be used in situations where all data is already vulnerable.”

Compatibility with Third-Party Frameworks and Models

Yet another benefit comes from the model format Apple has introduced. It’s compatible with a wide array of other frameworks (Caffe, LibSVM, Keras, etc.), and Apple has provided a Python package, Core ML Tools, to handle the conversion. In addition, developers have access to a library of open-source models they can use directly.
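The conversion itself happens offline with the Core ML Tools Python package; once the resulting .mlmodel file is dropped into an Xcode project, loading it from Swift takes only a few lines. A sketch, assuming a hypothetical Keras model converted to SentimentClassifier.mlmodel (Xcode compiles it into an .mlmodelc bundle at build time):

```swift
import CoreML

// "SentimentClassifier" is a hypothetical Keras model converted with
// Core ML Tools; Xcode compiles the .mlmodel file into an .mlmodelc
// bundle when the app is built.
guard let url = Bundle.main.url(forResource: "SentimentClassifier",
                                withExtension: "mlmodelc") else {
    fatalError("Model is missing from the app bundle")
}
let model = try! MLModel(contentsOf: url)  // force-try for brevity in this sketch

// Inspect the inputs the converted model expects.
print(model.modelDescription.inputDescriptionsByName)
```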

Pitfalls of Core ML

Despite the obvious positive applications of Core ML, there are a few downsides. It’s important to remember that machine learning is a relatively recent technology, and there isn’t yet a fully developed consensus on what should be provided in a framework as deliberately accessible as Core ML.

Perhaps the most striking downside is that Core ML only allows you to run inference (make predictions). No training can occur on the device; it has to be done prior to conversion and integration. In this sense, cloud-based models have the advantage.

While it’s important to remember that Core ML is not intended for training models (its purpose is to allow easy integration of pre-trained models), it’s still worth asking if the trade-off is absolutely necessary.

Apple’s approach stands in contrast to federated learning, where training occurs on the device and only the resulting updates are uploaded to the cloud, combined with updates from other devices, and fed into the next version of the model. This protects privacy while still enabling training on a wide pool of data.

The upshot is that developers can’t offer any kind of on-device personalization: users aren’t able to receive outputs based on their historical preferences and tendencies.

How Can You Use This Technology?

OK, so we’ve given a broad overview. But what specific features does Core ML allow you to build into your app?

In theory, the scope of Core ML is huge. It can be used to integrate a multitude of functionalities, from real-time image recognition and emotion detection to personalization and speaker identification.

As has already been mentioned, there are two direct applications. First, developers can use the out-of-the-box models provided by Apple. Second, they can convert their own machine-learning models built with frameworks like Caffe and TensorFlow (soon after launch there was a big fuss about TensorFlow models not being supported, but a tailor-made converter was soon released).

The iPhone photo gallery, for example, already uses image recognition to group pictures together. The predictive keyboard is another well-known example of the use of Core ML on Apple devices.

Pinterest is also using Core ML to provide visual search features in its app. Users can take a picture, and the app will recommend pins based on features in the image. This function provides an extra layer of engagement for users by pinpointing preferences that wouldn’t normally be available with typical text-based inputs.

Then there’s Polar Album+, an image-based app that has used Core ML to provide a high degree of offline functionality. It analyzes photos and ranks those of the highest quality, which then appear at the top of the user’s photo stream.

The Nude app actually uses AI as its central selling point. Its ML model automatically recognizes nude photographs and stores them in a secure folder in the app before deleting them from the camera roll and iCloud. What’s more, it’s all done on the device, so there’s no worry about data ever leaving it.

While it is difficult to extend the functionality of Core ML beyond what’s offered (it’s not possible to add extra layers), a broad array of machine-learning algorithms are supported: deep neural networks, tree ensembles, linear models, support vector machines, etc. The one notable exception is a lack of support for unsupervised models: Core ML will only run models that were trained on labeled data.
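Whichever of those algorithm families a model comes from, the Swift-side prediction call looks the same. A sketch using the generic MLModel API with a hypothetical house-price regressor (the model and feature names are made up for illustration):

```swift
import CoreML

// The generic prediction API looks the same whether the underlying model
// is a deep neural network, a tree ensemble, a linear model, or an SVM.
// "housePriceModel" and the feature names are hypothetical.
func predictPrice(with housePriceModel: MLModel) throws {
    let input = try MLDictionaryFeatureProvider(dictionary: [
        "bedrooms": 3,
        "squareFeet": 1450
    ])
    let output = try housePriceModel.prediction(from: input)
    print(output.featureValue(for: "price") ?? "no prediction")
}
```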

ML in the Core of Your Mobile App

To many, machine learning still feels like a distant technology. This is particularly the case for small and medium-sized businesses that don’t necessarily have the development budget to justify investment in resource-heavy features.

Tools like Core ML, however, are closing the gap between an idea and its implementation, benefiting companies of all shapes and sizes.

Only a few years ago, building image recognition software into an app would have required in-depth knowledge and many hours of development. Now, a developer can use pre-trained image recognition models to power any mobile application.

If you’re thinking about using machine-learning models in your own apps, whether to boost engagement or strengthen your value proposition, now is the time to get started.
