This repository is for Duke's AI MEng course, AIPI 590: Emerging Trends in Explainable Artificial Intelligence (XAI). This course is taught by Dr. Brinnae Bent and launches in Fall 2024.
In this repository, you will find example Colab notebooks covering the following topics:
- Local Explanations: Techniques for understanding individual predictions.
- Global Explanations: Methods for understanding model behavior as a whole.
- Counterfactual Explanations: Exploring "what-if" scenarios to understand model decisions.
- Saliency Maps: Visualizing the parts of the input that most influence the output.
- Testing with Concept Activation Vectors (TCAV): Evaluating and interpreting the influence of human-friendly concepts on model predictions.
- Embedding Visualization: Techniques for visualizing high-dimensional data embeddings.
- Regression Models: Approaches for making regression models more interpretable.
- Generalized Models: Understanding and interpreting generalized linear and additive models.
- Decision Trees: Using decision trees for interpretable predictions.
- RuleFit Algorithm: Combining rule-based and linear models for interpretability.
- Adversarial Attacks: Techniques for generating and understanding adversarial attacks.
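As a flavor of what the notebooks cover, here is a minimal sketch of one technique from the list above, a counterfactual explanation. It uses a toy hand-weighted logistic regression (the weights, inputs, and step size are illustrative assumptions, not taken from any course notebook) and nudges the input along the model's gradient until the predicted class flips:

```python
import numpy as np

# Toy logistic-regression "model" with fixed, illustrative weights
w = np.array([1.5, -2.0])
b = 0.5

def predict_proba(x):
    """Probability of the positive class for input x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Original input: the model classifies it as negative (probability < 0.5)
x0 = np.array([-1.0, 0.5])

# Counterfactual search: take small gradient steps that increase the
# predicted probability until the decision flips to positive.
x = x0.copy()
for _ in range(1000):
    p = predict_proba(x)
    if p >= 0.5:
        break
    grad = p * (1 - p) * w                    # d(probability)/d(x) for logistic regression
    x += 0.05 * grad / np.linalg.norm(grad)   # unit-length step toward the boundary

print("original prediction:   ", predict_proba(x0))
print("counterfactual input:  ", x)
print("counterfactual prediction:", predict_proba(x))
```

The resulting `x` answers the "what-if" question: what is a nearby input that the model would have classified differently? The course notebooks build on this idea with real models and datasets.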
To get started, open the notebooks in Google Colab using the provided links. Each notebook contains detailed explanations and code examples.
For your own assignments, please use the template.ipynb notebook in the templates/ directory to get started.
These examples are designed to run in Google Colab but can be modified for local development. The required Python libraries are specified in each notebook.
Contributions are welcome! Please submit a pull request or open an issue to discuss any changes.
The examples in this repository can be used with attribution.