watsonx.governance: In-depth Exploration of Transaction Explainability
(W7S184G-SPVC)
Overview
In this course, the learner is introduced to the concepts of lifecycle governance of AI models and uses watsonx.governance to implement governance in a real-life use case. The learner explores the capabilities of watsonx.governance, such as lineage and metadata generation for AI models and the evaluation of deployed predictive AI models for drift, bias, and fairness. watsonx.governance can accelerate responsible, transparent, and explainable AI workflows for all types of AI models. The course guides the learner through a complete business data science project, which begins by introducing and manipulating data in watsonx and ends with evaluating and explaining the decisions of the deployed production model.
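For orientation only, the short sketch below (not part of the course materials or the watsonx.governance API) illustrates the kind of fairness metric the course configures as a monitor: the disparate impact ratio of favorable outcomes between a monitored and a reference group. The function name, group labels, and example data are hypothetical.

```python
# Illustrative sketch: disparate impact ratio on model decisions.
# Not the watsonx.governance API; names and data are hypothetical.
import numpy as np

def disparate_impact(predictions: np.ndarray, group: np.ndarray,
                     favorable: int = 1, monitored: str = "female",
                     reference: str = "male") -> float:
    """Ratio of favorable-outcome rates: monitored group / reference group.

    Values well below 1.0 suggest the monitored group receives the
    favorable outcome less often than the reference group.
    """
    rate_monitored = np.mean(predictions[group == monitored] == favorable)
    rate_reference = np.mean(predictions[group == reference] == favorable)
    return rate_monitored / rate_reference

# Hypothetical model decisions and a protected attribute.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sex = np.array(["female", "female", "male", "male",
                "female", "male", "female", "male"])
print(f"Disparate impact: {disparate_impact(preds, sex):.2f}")
```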
Audience
Data Analysts, Data Scientists, Business Analysts, Researchers, and those interested in the governance of AI models.
Prerequisites
None
Objectives
- Explain the importance of AI Governance
- Configure watsonx.governance to monitor predictive AI models
- Build, deploy, and govern a predictive AI model by using watsonx.governance
- Evaluate an AI model for drift, bias, and fairness
- Examine model transactions for fairness and explainability
- Outline the pillars of AI Ethics
- Compare and contrast local and global explanations of transactions (illustrated in the sketch after this list)
- Configure the LIME and SHAP interpretability tools in watsonx.governance
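As a taste of local versus global explanations, here is a minimal sketch using the open-source shap package on a scikit-learn model. It is not the watsonx.governance configuration itself, which the course covers through the service's explainability monitors; the dataset, model, and sample sizes are illustrative assumptions.

```python
# Illustrative only: contrasting a local and a global explanation with shap,
# outside watsonx.governance. Model, data, and sample sizes are assumptions.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(),
                      LogisticRegression(max_iter=1000)).fit(X, y)

# Model-agnostic explainer over the predicted probability of the positive class.
explainer = shap.Explainer(lambda d: model.predict_proba(d)[:, 1], X.iloc[:100])
explanation = explainer(X.iloc[:20])

# Local explanation: per-feature contributions for a single transaction.
local_contributions = explanation[0].values
# Global explanation: mean absolute contribution across the sampled transactions.
global_importance = abs(explanation.values).mean(axis=0)
print(dict(zip(X.columns, global_importance.round(3))))
```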
Course Outline
- Introduction
- 1. Create and deploy a predictive model in watsonx
- 2. Govern a predictive model in watsonx.governance
- 3. Explain transactions with watsonx.governance
- Epilogue