
shap 1.3
by majiayu000

191 Favorites · 136 Upvotes · 0 Downvotes

Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.
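
A minimal sketch of that workflow in Python with the shap package and a tree-based model. The XGBoost regressor, shap's bundled California housing dataset, and the "MedInc" feature are illustrative assumptions, not part of the skill itself; any supported model and dataset can be substituted.

import shap
import xgboost
from sklearn.model_selection import train_test_split

# Illustrative data: the California housing dataset shipped with shap
X, y = shap.datasets.california()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any tree-based model works here (XGBoost, LightGBM, Random Forest, ...)
model = xgboost.XGBRegressor(n_estimators=100).fit(X_train, y_train)

# shap.Explainer picks an appropriate algorithm for the model type;
# for tree ensembles that is the exact, fast tree explainer
explainer = shap.Explainer(model)
shap_values = explainer(X_test)

# Local explanation: per-feature contributions to a single prediction
shap.plots.waterfall(shap_values[0])

# Global explanations: distribution and mean magnitude of feature effects
shap.plots.beeswarm(shap_values)
shap.plots.bar(shap_values)

# Dependence of the model output on one feature ("MedInc" is illustrative)
shap.plots.scatter(shap_values[:, "MedInc"], color=shap_values)

For deep learning or arbitrary black-box models, the same workflow applies with shap.DeepExplainer or shap.KernelExplainer (or shap.Explainer with a masker) in place of the tree explainer.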

Tags: explainability

Rating: 1.3
Installs: 0
Category: Machine Learning

Quick Review

No summary available.

LLM Signals

Description coverage: -
Task knowledge: -
Structure: -
Novelty: -

GitHub Signals

49
7
1
1
Last commit: today

Publisher

majiayu000 (Skill Author)

Related Skills

ml-pipeline by Jeffallan (6.4)
model-pruning by zechenzhangAGI (7.0)
sparse-autoencoder-training by zechenzhangAGI (7.6)
model-merging by zechenzhangAGI (7.0)