TacoSkill LAB

The marketplace for AI agent skills

© 2026 TacoSkill LAB. All rights reserved.


agent-evaluation

Rating: 1.3
by majiayu000

74 Favorites · 122 Upvotes · 0 Downvotes

Evaluate and improve Claude Code commands, skills, and agents. Use when testing prompt effectiveness, validating context engineering choices, or measuring improvement quality.

evaluation

Rating: 1.3
Installs: 0
Category: AI & LLM

Quick Review

No summary available.

LLM Signals

Description coverage: -
Task knowledge: -
Structure: -
Novelty: -

GitHub Signals

49
7
1
1
Last commit: today

Publisher

majiayu000
Skill Author


Related Skills

  • mcp-developer by Jeffallan (6.4)
  • prompt-engineer by Jeffallan (7.0)
  • fine-tuning-expert by Jeffallan (6.4)
  • rag-architect by Jeffallan (7.0)