

Build and deploy a machine learning model in 9 minutes using ONNX

Allen Hosler
Machine Learning Engineer

Reading time: 9 mins

What you’ll accomplish in this post:

If you’ve ever tried to take a machine learning model to production, you know just how difficult it can be. While there’s been immense progress in training and evaluating models, open source tooling for deploying models has lagged behind. This post covers an easy, secure, scalable workflow for machine learning deployment on Oracle Cloud Infrastructure (OCI) Data Science using ONNX.

Let’s say you have a client examining a set of handwritten surveys that they’d like to digitize. They’ve heard that this process can be much faster and cheaper using machine learning. What they want from you is an endpoint to which they can send images of handwritten digits and receive the corresponding digit values. In this article, we will walk through just how simple this workflow can be using OCI resources.

 

Build a machine learning model using AutoML

The first step in building any model is procuring the data. For simplicity, we’ll use the Handwritten Digits dataset from Sklearn.

 
# Open the scikit-learn Handwritten Digits dataset through the ADS dataset browser
from ads.dataset.dataset_browser import DatasetBrowser
ds = DatasetBrowser.sklearn().open("digits")

 

 

Handwritten Digits dataset from Sklearn

Next, you would normally clean the data, try a few different models, and run hyperparameter tuning on your favorites. Luckily, we have AutoML, which takes care of the cleaning, model selection, and model tuning for us.

 
import logging

from ads.automl.driver import AutoML
from ads.automl.provider import OracleAutoMLProvider

# Split the data, then let Oracle AutoML select and tune a model
# within an 80-second time budget
train, test = ds.train_test_split()
automl_tuner = AutoML(train, provider=OracleAutoMLProvider(n_jobs=-1, loglevel=logging.ERROR))
model, baseline = automl_tuner.train(random_state=42, time_budget=80)

AutoML for data cleaning, model selection, and model tuning

AutoML has selected a TorchMLPClassifier from the PyTorch library. Let’s run it on our test set and get the accuracy:

# Fraction of correct predictions on the held-out test set
sum(test.y == model.predict(test.X)) / len(test.y)

This yields a 99.5% accuracy, which will be sufficient for our use case.

 

Convert your model to ONNX format

You’ve spent the time building a great model, and now it’s time to put it into production.

Likely, you’ll want to deploy in one of three ways: containers, virtual machines, or serverless functions. There are tradeoffs to each, and your preference may come down to the specifics of your use case.

In this example, we’ll deploy using serverless functions. This solution works very well if you can keep your deployed image small, since smaller images stand up faster. Given that PyTorch is a large library, that would be a challenge if we couldn’t use ONNX.

ONNX, the Open Neural Network eXchange, is an open source project started in 2017 to create an interoperable, hardware-optimized ecosystem for AI models, meaning you could write a model in Python and deploy it in Java. Not only will the deployed model be relatively small, it will almost certainly be faster than the original Python model. So, what is ONNX?

ONNX is simply a format, a way of structuring metadata and parameters about a model. The ONNX library has tools to read and write ONNX models, make predictions, and draw graphs of the data flow.
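For example, here is a minimal sketch of reading and validating a serialized model with the onnx library (the file name model.onnx is a placeholder):

# Load and inspect an ONNX model with the onnx library
import onnx

onnx_model = onnx.load("model.onnx")                  # read the serialized model
onnx.checker.check_model(onnx_model)                  # validate its structure
print(onnx.helper.printable_graph(onnx_model.graph))  # text view of the data flow graph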

Finally, ONNX Runtime is a small (~40MB) library that loads a serialized ONNX model and, using hardware optimizations, calls “predict” on it. ONNX Runtime has bindings for Python, Java, and C++, among other languages, and it works with many architectures. All of this means that you can have a minuscule image that’s fast and interoperable.
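In Python, a minimal sketch of that looks like the following (model.onnx and the zero-filled input are placeholders):

# Run inference on a serialized model with ONNX Runtime
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
x = np.zeros((1, 64), dtype=np.float32)  # one 8x8 digit flattened to 64 features
outputs = session.run(None, {input_name: x})
print(outputs[0])  # predicted label(s)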

ONNX Runtime for machine learning

How does ONNX work?

ONNX is still quite young, at just three years old, yet popular machine learning libraries already support the ONNX format. A few, such as PyTorch and Oracle’s AutoML library, have built-in ONNX conversion support. Many others have converter functions in open source libraries, such as onnxmltools.
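For instance, converting a standalone scikit-learn model looks roughly like this using the skl2onnx converter library (the logistic regression here is just a stand-in for illustration):

# A rough sketch of converting a scikit-learn model to ONNX with skl2onnx
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
clf = LogisticRegression(max_iter=5000).fit(X, y)  # stand-in model for illustration

# Declare the input signature (64 float features), convert, and serialize
onnx_model = convert_sklearn(clf, initial_types=[("input", FloatTensorType([None, 64]))])
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())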

Now, let’s get back to our example. We want to convert our digit-classifying model to ONNX. For AutoML models, we can use the model’s prepare() method from ADS to automatically convert the model to ONNX and create a data transformer class that converts raw JSON into the ONNX-specified format. Beyond AutoML, prepare() will convert most models whose underlying estimator comes from the Sklearn, XGBoost, LightGBM, or PyTorch libraries.

# Convert the model to ONNX and write the artifact files to the target folder
model_artifact = model.prepare(
    "/home/datascience/digits_model/",
    force_overwrite=True,
    data_sample=test,
    include_data_sample=True,
    data_science_env=True,
)

Calling this prepare method creates a model artifact object and stores its contents in the specified folder (in this case "/home/datascience/digits_model/"). If we take a look inside this folder, we’ll see several files:

 

Model artifact object folder
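The exact contents vary by ADS version, but for a Functions deployment the folder typically includes files along these lines:

digits_model/
├── score.py           # inference script: load_model() and predict()
├── model.onnx         # the serialized ONNX model
├── func.py            # Oracle Functions handler that invokes score.predict()
├── func.yaml          # Function configuration (runtime, memory, timeout)
├── requirements.txt   # Python dependencies for the Function image
└── runtime.yaml       # ADS runtime metadata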

 

We will briefly walk through the files salient to this example, but for more details on each file, check out our documentation here, and run through the model_deployment.ipynb notebook in the mlcpu conda pack.

First, we will look at score.py. The score module is the brains of this deployment; it defines two functions and a transformer class.

The first function load_model uses the serialized ONNX model to create an ONNX runtime session, which is our deployed model.

The second function predict handles preparing the data, calling the runtime session (model), and returning the result.

Most of the data preparation is delegated to the OnnxTransformer class. This class helps us with Label Encoding, Imputation, and dtype casting. In some cases, users may want to make adjustments to score.py (based on how they want to send/receive data), but we won't need to for this project.

 

Inference script
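In outline, score.py looks something like this (a simplified sketch; the generated file differs in detail, and the "input" key is an assumption):

# A simplified sketch of score.py; the generated file differs in detail
import json
import os

import numpy as np
import onnxruntime as ort

def load_model(model_file_name="model.onnx"):
    # Create an ONNX runtime session from the serialized model --
    # this is our deployed model
    model_dir = os.path.dirname(os.path.realpath(__file__))
    return ort.InferenceSession(os.path.join(model_dir, model_file_name))

def predict(data, model=load_model()):
    # Prepare the data, call the runtime session, and return the result.
    # The generated script delegates label encoding, imputation, and dtype
    # casting to OnnxTransformer; here we simply cast to float32, and the
    # "input" key is an assumption
    x = np.asarray(json.loads(data)["input"], dtype=np.float32)
    input_name = model.get_inputs()[0].name
    prediction = model.run(None, {input_name: x})[0]
    return {"prediction": prediction.tolist()}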

The model.onnx file is partially human-readable: much of it reads like JSON, but the rest is not, and users should not edit this file directly. Here is the top of the model.onnx file in our example.

model.onnx file

Below we show the file hierarchy; however, it's not expected that we will need to edit any of these files for a standard deployment.

Functions workflow

 

Save your model to the model catalog

The purpose of the model catalog is to provide a managed and centralized storage space for models, to ensure that model artifacts are immutable, and to allow data scientists to share models and reproduce them as needed.

The model catalog can be accessed directly in a notebook session with Accelerated Data Science (ADS) or on the Oracle Cloud Infrastructure Console by going to the Data Science Projects page, selecting a project, and navigating to the Models tab. This Models page is our model catalog.

After a model and its artifacts are stored in the model catalog, they become available for other data scientists who are working on the same project. Read more here.

import os

# The notebook session exposes the compartment and project OCIDs as environment variables
compartment_id = os.environ["NB_SESSION_COMPARTMENT_OCID"]
project_id = os.environ["PROJECT_OCID"]

# Save the artifact to the model catalog
model_artifact.save(
    project_id=project_id,
    compartment_id=compartment_id,
    display_name="digits_model",
    description="Built using automl",
    training_script_path="/home/datascience/automl-deployment.ipynb",
    ignore_pending_changes=True,
)

And just like that, our model is logged in the model catalog for our current project!

 

Deploy your model as a serverless function

Oracle Cloud Infrastructure tenancy setup  

Before deployment, an admin of your Oracle Cloud Infrastructure tenancy needs to set up the appropriate policies and give your Oracle Cloud Infrastructure user access to a compartment where the Function can be deployed. The Oracle Functions team has provided a very easy-to-use onboarding guide that lays out all the steps involved in setting up the tenancy for Functions. Some of the steps include: 

  • Create groups and users
  • Create compartment for Functions
  • Create VCN and subnets
  • Create policies for user groups and Oracle Functions

We recommend you also read the section Configuring Your Tenancy for Function Development.

 

Launch the Cloud Shell

Launch Cloud Shell

In the Cloud Shell, run the following command.

oci data-science model get-artifact-content --model-id <model_ocid> --file <downloaded_artifact_file.zip>

You can get the model OCID by going into your Data Science project: under Resources on the left, click Models, then click “digits_model”. The file argument is the name you want for the downloaded artifact zip.

digits_model 

 

Set up serverless functions from the shell

1) Create an application

Log into the Oracle Cloud Infrastructure Console in the tenancy and region where you want to host your function. Go to Developer Services, then Functions. Create an Application, and select the VCN and subnet you created for your functions.

Set up serverless functions: Create an application

 

2) Select Cloud Shell Setup

You will see the list of applications that have been created. Click the name of the application you want to use, then click the Getting Started tab and choose the Cloud Shell Setup option. You will then see step-by-step instructions on how to set up the fn CLI, along with commands you can copy and paste into your Cloud Shell terminal.

Select Cloud Shell Setup



 

3) Use the context for your region

In your Cloud Shell terminal, you can list the contexts available to you using the command

fn list contexts

Use the context for your region. In the setup instructions, your OCI region name is already populated in the command; you can simply copy and paste it into your terminal.

fn use context <oci-region-name>

 

4) Update the context with the compartment ID you want to use to deploy your function. The compartment ID of the compartment you are currently using will be populated in the instruction:

fn update context oracle.compartment-id <compartment-ocid>
 

5) Update the context with the location of the Oracle Cloud Infrastructure Registry you want to use

fn update context registry <region-key>.ocir.io/<object-store-namespace>/<repo-name>

Your <object-store-namespace> and <region-key> are populated in the command in the step-by-step instructions. <repo-name> is the name of the repository you want to push your image to; if the repo does not yet exist, it will be created automatically when you first push an image.

<object-store-namespace> is the auto-generated Object Storage namespace string of the tenancy where your repositories are created (as shown on the Oracle Cloud Infrastructure Tenancy Information page).

<region-key> is the key of the Oracle Cloud Infrastructure Registry region where your repositories are created. For example, <region-key> for us-ashburn-1 is iad.
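For example, with a hypothetical namespace and repo name in the Ashburn region, the fully populated command would look like this:

fn update context registry iad.ocir.io/mytenancynamespace/digits-repo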

 

6) Generate an AuthToken to enable login to Oracle Cloud Infrastructure Registry

You need to create an auth token to enable login to the Registry (OCIR). The link to create the token is in the step-by-step instructions. You can visit this link for additional information about auth tokens.

 

7) Log into OCIR using the Auth Token as your password

docker login -u '<object-store-namespace>/<user-name>' <region-key>.ocir.io

The <object-store-namespace>, <user-name>, and <region-key> are already populated in the instruction command.

When prompted to enter the password, use the Auth Token you have generated.

 

Create, deploy and invoke Functions

8) Deploy your Function

Go inside the model artifact folder. You will use the deploy command, which builds your image, pushes it to your repo in OCIR, and deploys the Function.

fn --verbose deploy --app <my-app>

 

9) Invoke your Function 

You can pass a JSON payload to your Function (remember to use the input convention described above) using a simple cat command. For example, below we pass a list of feature vectors stored in the file data-sample.json to <my-function>. Don’t forget to specify the content type, which is application/json in most cases:

cat data-sample.json | fn invoke <my-app> <my-function> --content-type application/json

Sample data-sample.json:

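As a minimal sketch, you could build the payload from one row of the test set (the "input" key and nesting are assumptions; match them to the input convention in your generated score.py):

# Build data-sample.json from one test row; the "input" key and nesting
# are assumptions -- match them to the convention in your score.py
import json

sample = {"input": test.X.iloc[[0]].values.tolist()}  # one 64-feature digit vector
with open("data-sample.json", "w") as f:
    json.dump(sample, f)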

Sample output:

{"prediction": 
    "4"
}

You can verify that the Function is packaged with the app using the fn inspect command:

fn inspect function <my-app> <my-function>
 

Easy machine learning deployment

Congratulations! We now have a deployed model that we can call with a JSON payload of a handwritten digit, receiving the corresponding digit value in return. If you want to go further, you can set up an API Gateway in front of this Function.

 

Learn more about data science

New to OCI Data Science? Build your own machine learning models on OCI Data Science for free, using $300 in free cloud credits. 

I also invite you to participate in Oracle Developer Live: AI and ML for Your Enterprise on January 28 and February 2. The event features technical sessions and hands-on labs covering topics such as AutoML and the machine learning lifecycle. Register today!
