
Model explainability refers to the process of relating the prediction of a machine learning (ML) model to the input feature values of an instance in humanly understandable terms. This field is often referred to as explainable artificial intelligence (XAI). Amazon SageMaker Clarify is a feature of Amazon SageMaker that enables data scientists and ML engineers to explain the predictions of their ML models. It uses model-agnostic methods like SHapley Additive exPlanations (SHAP) for feature attribution. Apart from supporting explanations for tabular data, Clarify also supports explainability for both computer vision (CV) and natural language processing (NLP) using the same SHAP algorithm. In this post, we illustrate the use of Clarify for explaining NLP models. Specifically, we show how you can explain the predictions of a text classification model that has been trained using the SageMaker BlazingText algorithm. This helps you understand which parts or words of the text are most important to the predictions made by the model. Among other things, these observations can then be used to improve processes such as data acquisition (to reduce bias in the dataset) and model validation (to ensure that models are performing as intended), and to earn the trust of all stakeholders when the model is deployed. This can be a key requirement in many application domains, such as sentiment analysis, legal reviews, and medical diagnosis.
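
To make this concrete, the following is a minimal sketch of how a Clarify explainability job for text data is typically configured with the SageMaker Python SDK. The bucket paths, model name, and column header are placeholders; the model that such a job explains is the one we build in the rest of this post.

```python
from sagemaker import Session, get_execution_role
from sagemaker.clarify import (
    DataConfig,
    ModelConfig,
    SageMakerClarifyProcessor,
    SHAPConfig,
    TextConfig,
)

session = Session()
role = get_execution_role()

# Processor that runs the Clarify explainability job.
clarify_processor = SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Attribute predictions to individual tokens (words) using SHAP.
text_config = TextConfig(granularity="token", language="english")
shap_config = SHAPConfig(
    baseline=[["<UNK>"]],        # baseline text used when tokens are masked out
    num_samples=1000,
    agg_method="mean_abs",
    save_local_shap_values=True,
    text_config=text_config,
)

# Placeholder model and data locations -- replace with your own.
model_config = ModelConfig(
    model_name="text-classification-pipeline-model",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    content_type="text/csv",
    accept_type="text/csv",
)
data_config = DataConfig(
    s3_data_input_path="s3://your-bucket/clarify/input/test.csv",
    s3_output_path="s3://your-bucket/clarify/output",
    headers=["review_text"],
    dataset_type="text/csv",
)

clarify_processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```

The job writes per-token SHAP values to the output path, which you can then visualize to see which words contributed most to each prediction.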

We also provide a general design pattern that you can use when working with Clarify and any of the SageMaker algorithms. SageMaker algorithms have fixed input and output data formats. For example, the BlazingText algorithm container accepts inputs in JSON format. But customers often require specific formats that are compatible with their data pipelines. We present a couple of options that you can follow to use Clarify. In the option described here, we use the inference pipeline feature of SageMaker hosting. An inference pipeline is a SageMaker model composed of a sequence of containers that process inference requests. You can use inference pipelines to deploy a combination of your own custom models and SageMaker built-in algorithms packaged in different containers. The following diagram illustrates an example. For more information, refer to Hosting models along with pre-processing logic as serial inference pipeline behind one endpoint.
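
As a rough illustration of the inference pipeline idea, the sketch below chains two containers behind a single endpoint with the SageMaker Python SDK's PipelineModel. The image URIs, artifact locations, and names are placeholders rather than the actual containers used in this post.

```python
from sagemaker import Session, get_execution_role
from sagemaker.model import Model
from sagemaker.pipeline import PipelineModel

session = Session()
role = get_execution_role()

# First container in the sequence, e.g. a custom preprocessing image.
preprocess_model = Model(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/my-preprocessor:latest",
    role=role,
    sagemaker_session=session,
)

# Second container, e.g. a SageMaker built-in algorithm's inference image.
algorithm_model = Model(
    image_uri="<built-in-algorithm-inference-image-uri>",
    model_data="s3://your-bucket/training-output/model.tar.gz",
    role=role,
    sagemaker_session=session,
)

# The pipeline model invokes the containers in order: the output of one
# container becomes the input of the next, all behind one endpoint.
pipeline_model = PipelineModel(
    name="my-inference-pipeline",
    models=[preprocess_model, algorithm_model],
    role=role,
    sagemaker_session=session,
)

# Deploy all containers behind a single real-time endpoint.
pipeline_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="my-inference-pipeline-endpoint",
)
```
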
Because Clarify supports only CSV and JSON Lines as input, you need to complete the following steps:

1. Create a model and a container to convert the data from CSV (or JSON Lines) to JSON (a sketch of this conversion logic follows the list).
2. After the model training step with the BlazingText algorithm, directly deploy the model. This deploys the model using the BlazingText container, which accepts JSON as input. When you use a different algorithm, SageMaker creates the model using that algorithm's container.
3. Use the preceding two models to create a PipelineModel. This chains the two models in a linear sequence and creates a single model. For an example, refer to Inference pipeline with Scikit-learn and Linear Learner.
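
The conversion container in the first step can be as simple as a small web server that receives the CSV (or JSON Lines) payload and re-emits it in the JSON format that the BlazingText container expects. The following Flask-based handler is only a sketch under that assumption; a real bring-your-own container also needs the usual SageMaker serving scaffolding (Dockerfile, serve entry point, and so on), and the naive line splitting here is for illustration only.

```python
# app.py -- minimal format-conversion handler for the first container in the
# inference pipeline. It accepts text/csv or JSON Lines (one document per line)
# and returns the JSON payload that the BlazingText container accepts.
import json

from flask import Flask, Response, request

app = Flask(__name__)


@app.route("/ping", methods=["GET"])
def ping():
    # Health check required by SageMaker hosting.
    return Response(status=200)


@app.route("/invocations", methods=["POST"])
def invocations():
    body = request.get_data().decode("utf-8")
    content_type = request.content_type or "text/csv"

    if "json" in content_type:
        # JSON Lines: one JSON object per line, e.g. {"review_text": "..."}
        instances = [
            next(iter(json.loads(line).values()))
            for line in body.splitlines()
            if line.strip()
        ]
    else:
        # CSV: one raw text document per line (naive parsing for illustration).
        instances = [line.strip().strip('"') for line in body.splitlines() if line.strip()]

    # BlazingText real-time endpoints accept JSON of the form {"instances": [...]}.
    return Response(json.dumps({"instances": instances}), mimetype="application/json")
```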

With this solution, we have successfully created a single model whose input is compatible with Clarify and that Clarify can use to generate explanations. This option demonstrates how you can reconcile the different data formats used by Clarify and SageMaker algorithms by bringing your own container for hosting the SageMaker model.

The following diagram illustrates the architecture and the steps that are involved in the solution:

1. Use the BlazingText algorithm via the SageMaker Estimator to train a text classification model.
