Quality and Explainability
OpenScale can monitor various performance characteristics of the deployed model to ensure it meets quality thresholds.
Quality (performance) monitors allow users to track the performance of production AI models and their impact on business goals. We will use a Jupyter notebook in the project you imported to enable these additional capabilities in the subscription.
In Watson Studio, select the project that you previously imported and click the 'Assets' tab at the top of the project page.
Under the 'Notebooks' section, click the 'quality-explainability-monitors' notebook, then click the pencil icon to open the notebook for editing and running.
After the notebook environment starts up, scroll down to the section titled 'Configure Service Credentials'. Copy and paste the Watson Machine Learning service credentials and the Cloud API key that you saved to a text editor earlier.
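The credentials cell typically looks something like the sketch below. The key names follow the standard IBM Cloud service-credential format, but the placeholder values and variable names here are illustrative; paste the exact values you saved earlier into the notebook's own cell.

```python
# Hypothetical shape of the 'Configure Service Credentials' cell.
# Replace the placeholders with the values you copied earlier;
# the variable names are illustrative assumptions, not required names.
WML_CREDENTIALS = {
    "apikey": "<your Watson Machine Learning API key>",
    "url": "https://us-south.ml.cloud.ibm.com",  # region-specific endpoint
}
CLOUD_API_KEY = "<your IBM Cloud API key>"
```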
Go back to the first cell in the notebook and run the notebook. You can run the cells individually by clicking on each cell and then clicking the Run button at the top of the notebook.
The quality monitor scans the requests sent to your model deployment (i.e., the payload) to let you know how well your model predicts outcomes. Quality metrics are calculated hourly, when OpenScale sends a manually labeled feedback data set to the deployed model.
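Conceptually, each feedback record pairs the model's input features with the known correct outcome, so OpenScale can compare the model's prediction against ground truth. A minimal sketch of that shape, using the `fields`/`values` scoring-payload convention with hypothetical feature names (your real records must match your model's schema):

```python
# Illustrative feedback payload: input features plus a manually
# labeled ground-truth outcome in the final column.
# Field names and values are hypothetical, not the workshop's actual schema.
feedback_payload = {
    "fields": ["age", "income", "loan_amount", "label"],
    "values": [
        [31, 42000, 5000, "No Risk"],   # model inputs + true outcome
        [55, 18000, 12000, "Risk"],
    ],
}

# Each row must supply every feature field plus the label field.
assert all(len(row) == len(feedback_payload["fields"])
           for row in feedback_payload["values"])
```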
Open the Watson OpenScale dashboard.
When the dashboard loads, click on the 'Model Monitors' tab; you will see the one deployment you configured in the previous section.
We now have an alert on the Quality of the model.
Click on the deployment tile to open the details page, then click the 'Area under ROC' option in the left panel.
We have set a threshold of 70%, and based on the feedback data loaded in the notebook, the model is performing below that threshold.
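Area under ROC is the probability that the model scores a randomly chosen positive example higher than a randomly chosen negative one, so a value below 0.70 means the model's ranking of outcomes misses that bar. A small pure-Python illustration of the metric (not OpenScale's implementation; the labels and scores are made up):

```python
def area_under_roc(labels, scores):
    """AUC as the probability that a random positive example is scored
    above a random negative one (ties count half). Illustrative only."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical feedback batch: 1 = positive class, scores from the model.
labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.5, 0.2, 0.3, 0.7, 0.1]
auc = area_under_roc(labels, scores)
print(f"AUC = {auc:.2f}")  # → AUC = 0.69, below the 0.70 threshold
```

A result like this is what trips the quality alert: the monitor compares the computed metric against the configured threshold and flags the deployment when it falls short.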
Feel free to explore the other quality metrics for the model. Click on the blue dot (which represents the quality run we initiated from the Jupyter notebook) to view more details for a particular point on the performance graph.