.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/plot_train_convert_predict.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here ` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_plot_train_convert_predict.py:

.. _l-logreg-example:

Train, convert and predict with ONNX Runtime
============================================

This example demonstrates an end-to-end scenario, from the training of a
machine learning model to its use in its converted form.

.. contents::
    :local:

Train a logistic regression
+++++++++++++++++++++++++++

The first step consists in retrieving the iris dataset.

.. GENERATED FROM PYTHON SOURCE LINES 23-33

.. code-block:: default

    from sklearn.datasets import load_iris

    iris = load_iris()
    X, y = iris.data, iris.target

    from sklearn.model_selection import train_test_split

    X_train, X_test, y_train, y_test = train_test_split(X, y)

.. GENERATED FROM PYTHON SOURCE LINES 34-35

Then we fit a model.

.. GENERATED FROM PYTHON SOURCE LINES 35-41

.. code-block:: default

    from sklearn.linear_model import LogisticRegression

    clr = LogisticRegression()
    clr.fit(X_train, y_train)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/runner/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:444: ConvergenceWarning: lbfgs failed to converge (status=1):
    STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

    Increase the number of iterations (max_iter) or scale the data as shown in:
        https://scikit-learn.org/stable/modules/preprocessing.html
    Please also refer to the documentation for alternative solver options:
        https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
      n_iter_i = _check_optimize_result(

.. raw:: html

    <pre>LogisticRegression()</pre>
.. GENERATED FROM PYTHON SOURCE LINES 42-44

We compute the predictions on the test set and show the confusion matrix.

.. GENERATED FROM PYTHON SOURCE LINES 44-49

.. code-block:: default

    from sklearn.metrics import confusion_matrix

    pred = clr.predict(X_test)
    print(confusion_matrix(y_test, pred))

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [[10  0  0]
     [ 0 12  1]
     [ 0  0 15]]

.. GENERATED FROM PYTHON SOURCE LINES 50-56

Conversion to ONNX format
+++++++++++++++++++++++++

We use the module `sklearn-onnx `_ to convert
the model into ONNX format.

.. GENERATED FROM PYTHON SOURCE LINES 56-65

.. code-block:: default

    from skl2onnx import convert_sklearn
    from skl2onnx.common.data_types import FloatTensorType

    initial_type = [("float_input", FloatTensorType([None, 4]))]
    onx = convert_sklearn(clr, initial_types=initial_type)
    with open("logreg_iris.onnx", "wb") as f:
        f.write(onx.SerializeToString())

.. GENERATED FROM PYTHON SOURCE LINES 66-68

We load the model with ONNX Runtime and look at its input and output.

.. GENERATED FROM PYTHON SOURCE LINES 68-76

.. code-block:: default

    import onnxruntime as rt

    sess = rt.InferenceSession("logreg_iris.onnx", providers=rt.get_available_providers())

    print("input name='{}' and shape={}".format(sess.get_inputs()[0].name, sess.get_inputs()[0].shape))
    print("output name='{}' and shape={}".format(sess.get_outputs()[0].name, sess.get_outputs()[0].shape))

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    input name='float_input' and shape=[None, 4]
    output name='output_label' and shape=[None]

.. GENERATED FROM PYTHON SOURCE LINES 77-78

We compute the predictions.

.. GENERATED FROM PYTHON SOURCE LINES 78-87

.. code-block:: default

    input_name = sess.get_inputs()[0].name
    label_name = sess.get_outputs()[0].name

    import numpy

    pred_onx = sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0]
    print(confusion_matrix(pred, pred_onx))

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [[10  0  0]
     [ 0 12  0]
     [ 0  0 16]]
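The confusion matrix above compares the two label vectors against each other.
A direct element-wise check works as well; the short sketch below is not part
of the generated example and uses illustrative arrays, not the values from
the run above.

```python
import numpy

# Stand-ins for the label vectors produced by scikit-learn and by
# ONNX Runtime (illustrative values only).
pred = numpy.array([0, 1, 2, 2, 1])
pred_onx = numpy.array([0, 1, 2, 2, 1])

# True only if every label matches.
identical = numpy.array_equal(pred, pred_onx)
print(identical)  # → True
```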
.. GENERATED FROM PYTHON SOURCE LINES 88-97

The predictions are perfectly identical.

Probabilities
+++++++++++++

Probabilities are needed to compute other relevant metrics such as the
ROC curve. Let's see how to get them first with scikit-learn.

.. GENERATED FROM PYTHON SOURCE LINES 97-101

.. code-block:: default

    prob_sklearn = clr.predict_proba(X_test)
    print(prob_sklearn[:3])

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [[5.28232833e-03 6.68980759e-01 3.25736912e-01]
     [9.04332610e-06 1.59729212e-02 9.84018035e-01]
     [9.86751252e-01 1.32487125e-02 3.49731273e-08]]

.. GENERATED FROM PYTHON SOURCE LINES 102-104

And then with ONNX Runtime. The probabilities are returned as a list of
dictionaries, one per observation.

.. GENERATED FROM PYTHON SOURCE LINES 104-112

.. code-block:: default

    prob_name = sess.get_outputs()[1].name
    prob_rt = sess.run([prob_name], {input_name: X_test.astype(numpy.float32)})[0]

    import pprint

    pprint.pprint(prob_rt[0:3])

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [{0: 0.005282332189381123, 1: 0.6689808368682861, 2: 0.3257368505001068},
     {0: 9.04333865037188e-06, 1: 0.015972934663295746, 2: 0.9840180277824402},
     {0: 0.9867513179779053, 1: 0.013248717412352562, 2: 3.497315503864229e-08}]

.. GENERATED FROM PYTHON SOURCE LINES 113-114

Let's benchmark.

.. GENERATED FROM PYTHON SOURCE LINES 114-132

.. code-block:: default

    from timeit import Timer


    def speed(inst, number=10, repeat=20):
        timer = Timer(inst, globals=globals())
        raw = numpy.array(timer.repeat(repeat, number=number))
        ave = raw.sum() / len(raw) / number
        mi, ma = raw.min() / number, raw.max() / number
        print("Average %1.3g min=%1.3g max=%1.3g" % (ave, mi, ma))
        return ave


    print("Execution time for clr.predict")
    speed("clr.predict(X_test)")

    print("Execution time for ONNX Runtime")
    speed("sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0]")

.. rst-class:: sphx-glr-script-out
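The list-of-dictionaries format shown above is convenient to read but not
what metric functions such as ``roc_curve`` expect. A minimal stand-alone
sketch, using made-up values in the same format, rebuilds the dense
``(n_samples, n_classes)`` matrix that ``predict_proba`` would return
(skl2onnx also offers a conversion option to drop the ZipMap operator and
emit a plain matrix directly).

```python
import numpy

# Output in the format shown above: one {class: probability}
# dictionary per observation (illustrative values).
prob_rt_sample = [
    {0: 0.005282, 1: 0.668981, 2: 0.325737},
    {0: 0.000009, 1: 0.015973, 2: 0.984018},
]

# Rebuild a dense (n_samples, n_classes) array, the shape
# scikit-learn's predict_proba returns.
classes = sorted(prob_rt_sample[0])
prob_array = numpy.array([[row[c] for c in classes] for row in prob_rt_sample])
print(prob_array.shape)  # → (2, 3)
```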
.. code-block:: none

    Execution time for clr.predict
    Average 4.41e-05 min=4.27e-05 max=5.36e-05
    Execution time for ONNX Runtime
    Average 1.87e-05 min=1.81e-05 max=2.34e-05

    1.8663004999694978e-05

.. GENERATED FROM PYTHON SOURCE LINES 133-136

Let's benchmark a scenario similar to what a webservice experiences:
the model has to do one prediction at a time as opposed to a batch
of predictions.

.. GENERATED FROM PYTHON SOURCE LINES 136-158

.. code-block:: default

    def loop(X_test, fct, n=None):
        nrow = X_test.shape[0]
        if n is None:
            n = nrow
        for i in range(0, n):
            im = i % nrow
            fct(X_test[im : im + 1])


    print("Execution time for clr.predict")
    speed("loop(X_test, clr.predict, 100)")


    def sess_predict(x):
        return sess.run([label_name], {input_name: x.astype(numpy.float32)})[0]


    print("Execution time for sess_predict")
    speed("loop(X_test, sess_predict, 100)")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Execution time for clr.predict
    Average 0.00408 min=0.00405 max=0.00415
    Execution time for sess_predict
    Average 0.000876 min=0.000868 max=0.000904

    0.0008763897650001694

.. GENERATED FROM PYTHON SOURCE LINES 159-160

Let's do the same for the probabilities.

.. GENERATED FROM PYTHON SOURCE LINES 160-172

.. code-block:: default

    print("Execution time for predict_proba")
    speed("loop(X_test, clr.predict_proba, 100)")


    def sess_predict_proba(x):
        return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]


    print("Execution time for sess_predict_proba")
    speed("loop(X_test, sess_predict_proba, 100)")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Execution time for predict_proba
    Average 0.00605 min=0.00601 max=0.0061
    Execution time for sess_predict_proba
    Average 0.000899 min=0.000893 max=0.00091

    0.0008988612749999448

.. GENERATED FROM PYTHON SOURCE LINES 173-177

This second comparison is fairer because, in this experiment, ONNX Runtime
computes both the label and the probabilities in every call.
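Much of the gap between the batch benchmark and the one-row-at-a-time
scenario comes from per-call overhead, which is paid once per request. A
small self-contained sketch, using a plain numpy matrix product as a
stand-in for a model (it does not involve ONNX Runtime at all), shows the
same effect.

```python
import numpy
from timeit import timeit

# A stand-in "model": one matrix product on 100 rows of 4 features.
X = numpy.random.rand(100, 4).astype(numpy.float32)
W = numpy.random.rand(4, 3).astype(numpy.float32)

# One batched call versus 100 single-row calls.
batch = timeit(lambda: X @ W, number=1000)
single = timeit(lambda: [X[i:i + 1] @ W for i in range(100)], number=1000)

# Per-call overhead makes the row-by-row loop slower.
print(single > batch)  # → True
```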
.. GENERATED FROM PYTHON SOURCE LINES 179-183

Benchmark with RandomForest
+++++++++++++++++++++++++++

We first train and save a model in ONNX format.

.. GENERATED FROM PYTHON SOURCE LINES 183-193

.. code-block:: default

    from sklearn.ensemble import RandomForestClassifier

    rf = RandomForestClassifier()
    rf.fit(X_train, y_train)

    initial_type = [("float_input", FloatTensorType([1, 4]))]
    onx = convert_sklearn(rf, initial_types=initial_type)
    with open("rf_iris.onnx", "wb") as f:
        f.write(onx.SerializeToString())

.. GENERATED FROM PYTHON SOURCE LINES 194-195

We compare.

.. GENERATED FROM PYTHON SOURCE LINES 195-209

.. code-block:: default

    sess = rt.InferenceSession("rf_iris.onnx", providers=rt.get_available_providers())


    def sess_predict_proba_rf(x):
        return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]


    print("Execution time for predict_proba")
    speed("loop(X_test, rf.predict_proba, 100)")

    print("Execution time for sess_predict_proba")
    speed("loop(X_test, sess_predict_proba_rf, 100)")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Execution time for predict_proba
    Average 0.707 min=0.705 max=0.709
    Execution time for sess_predict_proba
    Average 0.00104 min=0.00103 max=0.00107

    0.001041621885000268

.. GENERATED FROM PYTHON SOURCE LINES 210-211

Let's see with different numbers of trees.

.. GENERATED FROM PYTHON SOURCE LINES 211-240
.. code-block:: default

    measures = []

    for n_trees in range(5, 51, 5):
        print(n_trees)
        rf = RandomForestClassifier(n_estimators=n_trees)
        rf.fit(X_train, y_train)
        initial_type = [("float_input", FloatTensorType([1, 4]))]
        onx = convert_sklearn(rf, initial_types=initial_type)
        with open("rf_iris_%d.onnx" % n_trees, "wb") as f:
            f.write(onx.SerializeToString())
        sess = rt.InferenceSession("rf_iris_%d.onnx" % n_trees, providers=rt.get_available_providers())

        def sess_predict_proba_loop(x):
            return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]

        tsk = speed("loop(X_test, rf.predict_proba, 100)", number=5, repeat=5)
        trt = speed("loop(X_test, sess_predict_proba_loop, 100)", number=5, repeat=5)
        measures.append({"n_trees": n_trees, "sklearn": tsk, "rt": trt})

    from pandas import DataFrame

    df = DataFrame(measures)
    ax = df.plot(x="n_trees", y="sklearn", label="scikit-learn", c="blue", logy=True)
    df.plot(x="n_trees", y="rt", label="onnxruntime", ax=ax, c="green", logy=True)
    ax.set_xlabel("Number of trees")
    ax.set_ylabel("Prediction time (s)")
    ax.set_title("Speed comparison between scikit-learn and ONNX Runtime\nFor a random forest on Iris dataset")
    ax.legend()

.. image-sg:: /auto_examples/images/sphx_glr_plot_train_convert_predict_001.png
    :alt: Speed comparison between scikit-learn and ONNX Runtime For a random forest on Iris dataset
    :srcset: /auto_examples/images/sphx_glr_plot_train_convert_predict_001.png
    :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out
.. code-block:: none

    5
    Average 0.0513 min=0.0511 max=0.0517
    Average 0.000868 min=0.000859 max=0.000893
    10
    Average 0.0862 min=0.0858 max=0.0873
    Average 0.000884 min=0.000877 max=0.000902
    15
    Average 0.121 min=0.121 max=0.121
    Average 0.000895 min=0.000888 max=0.000915
    20
    Average 0.155 min=0.155 max=0.155
    Average 0.00091 min=0.000899 max=0.000925
    25
    Average 0.19 min=0.189 max=0.19
    Average 0.000923 min=0.000915 max=0.00094
    30
    Average 0.224 min=0.224 max=0.225
    Average 0.000937 min=0.000928 max=0.000956
    35
    Average 0.258 min=0.258 max=0.259
    Average 0.000946 min=0.00094 max=0.000963
    40
    Average 0.293 min=0.293 max=0.294
    Average 0.000966 min=0.000955 max=0.000991
    45
    Average 0.328 min=0.328 max=0.328
    Average 0.000971 min=0.000964 max=0.000986
    50
    Average 0.363 min=0.362 max=0.365
    Average 0.001 min=0.00097 max=0.0011

.. rst-class:: sphx-glr-timing

**Total running time of the script:** ( 3 minutes 16.934 seconds)

.. _sphx_glr_download_auto_examples_plot_train_convert_predict.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_train_convert_predict.py `

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_train_convert_predict.ipynb `

.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_