peft

by majiayu000

138 Favorites
74 Upvotes
0 Downvotes

Parameter-efficient fine-tuning with LoRA and Unsloth. Covers LoraConfig, target module selection, QLoRA for 4-bit training, adapter merging, and Unsloth optimizations for 2x faster training.
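As a rough sketch of the workflow the description refers to (not taken from the skill's own SKILL.md, which is not shown here), the snippet below wires together a LoraConfig, QLoRA-style 4-bit loading, and adapter merging using Hugging Face PEFT with bitsandbytes. The base model name, target modules, and hyperparameter values are illustrative assumptions, not values prescribed by the skill.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works

# QLoRA: load the frozen base model in 4-bit NF4 to cut memory during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; r/alpha/dropout are illustrative
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# ... run training here, e.g. with transformers.Trainer or trl.SFTTrainer ...

# Adapter merging: fold the LoRA deltas back into the base weights for deployment.
# With a 4-bit base it is common to reload the base in full precision before merging.
merged_model = model.merge_and_unload()
```

Unsloth wraps the same flow behind FastLanguageModel.from_pretrained and FastLanguageModel.get_peft_model while swapping in fused kernels, which appears to be where the "2x faster training" claim in the description comes from.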

fine-tuning

Rating: 1.3
Installs: 0
Category: Machine Learning

Quick Review

No summary available.

LLM Signals

Description coverage: –
Task knowledge: –
Structure: –
Novelty: –

GitHub Signals

49
7
1
1
Last commit 0 days ago

Publisher

majiayu000 (Skill Author)

Related Skills

ml-pipeline by Jeffallan (rating 6.4)
model-pruning by zechenzhangAGI (rating 7.0)
sparse-autoencoder-training by zechenzhangAGI (rating 7.6)
model-merging by zechenzhangAGI (rating 7.0)