promptfoo-evaluation

by majiayu000

137 Favorites · 59 Upvotes · 0 Downvotes

Configures and runs LLM evaluations using the Promptfoo framework. Use when setting up prompt testing, creating evaluation configs (promptfooconfig.yaml), writing Python custom assertions, implementing llm-rubric for LLM-as-judge grading, or managing few-shot examples in prompts. Triggers on keywords such as "promptfoo", "eval", "LLM evaluation", "prompt testing", or "model comparison".
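A minimal sketch of the kind of promptfooconfig.yaml this skill sets up, assuming a hypothetical sentiment-classification task; the prompt text, the {{text}} variable, and the openai:gpt-4o-mini provider are illustrative choices, not part of the skill. The prompt carries two few-shot examples, and the test pairs a deterministic contains check with an llm-rubric LLM-as-judge assertion:

```yaml
# promptfooconfig.yaml: illustrative sketch, not the skill's actual output.
description: Sentiment classification eval
prompts:
  - |
    Classify the sentiment of the input as positive or negative.
    Example: "I love this product" -> positive
    Example: "It broke after one day" -> negative
    Input: "{{text}}" ->
providers:
  - openai:gpt-4o-mini
tests:
  - vars:
      text: Absolutely fantastic, would buy again
    assert:
      - type: contains       # deterministic string check
        value: positive
      - type: llm-rubric     # LLM-as-judge: a grader model scores the output
        value: The response is exactly one word, either positive or negative.
```

Running `npx promptfoo@latest eval` in the directory containing this file executes the tests; `promptfoo view` opens the results in a browser.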
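For checks a rubric cannot express reliably, Promptfoo also supports Python custom assertions: the config references a script via type: python and value: file://..., and the script exposes a get_assert(output, context) function returning a bool, a float score, or a GradingResult-style dict. A minimal sketch, assuming the hypothetical file name custom_assert.py and the label set from the example config above:

```python
# custom_assert.py: minimal Python custom assertion sketch for Promptfoo.
# Referenced from promptfooconfig.yaml as:
#   - type: python
#     value: file://custom_assert.py
from typing import Any, Dict


def get_assert(output: str, context: Dict[str, Any]) -> Dict[str, Any]:
    """Pass only if the model answered with a single allowed label."""
    answer = output.strip().lower().rstrip(".")
    allowed = {"positive", "negative"}  # labels assumed from the example config
    ok = answer in allowed
    return {
        "pass": ok,                   # hard pass/fail for the test case
        "score": 1.0 if ok else 0.0,  # numeric score Promptfoo aggregates
        "reason": f"got {answer!r}, expected one of {sorted(allowed)}",
    }
```

Returning the dict form rather than a bare bool keeps the failure reason visible in the eval report.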

evaluation

Rating: 1.3
Installs: 0
Category: AI & LLM

Quick Review

No summary available.

LLM Signals

Description coverage: -
Task knowledge: -
Structure: -
Novelty: -

GitHub Signals

49 · 7 · 1 · 1
Last commit: 0 days ago

Publisher

majiayu000 (Skill Author)

Related Skills

mcp-developer by Jeffallan · 6.4
prompt-engineer by Jeffallan · 7.0
fine-tuning-expert by Jeffallan · 6.4
rag-architect by Jeffallan · 7.0