TacoSkill LAB

The marketplace for AI agent skills

cognitive-baseline-eval

by majiayu000

82 Favorites · 137 Upvotes · 0 Downvotes

Execute the Joseph Cognitive Baseline v2.1 (JCB-v2.1) 5-Scenario Test Suite to quantify AI alignment, friction maintenance, and protocol adherence.

Tags: evaluation

  • Rating: 3.1
  • Installs: 0
  • Category: AI & LLM
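
The description above is abstract, so here is a minimal sketch of what executing such a five-scenario suite could look like in practice. Every concrete detail below (the Scenario shape, the placeholder prompt, the keyword rubric, and the run_agent stub) is an assumption for illustration; the actual JCB-v2.1 prompts, baseline packet schema, and scoring rules are not published on this page.

```python
# Hypothetical harness for a five-scenario suite such as JCB-v2.1.
# The scenario prompts, keyword lists, and scoring rule below are
# placeholders, NOT the real JCB-v2.1 materials (this page omits them).
from dataclasses import dataclass

@dataclass
class Scenario:
    sid: str              # e.g. "S1"
    prompt: str           # adversarial prompt presented to the agent
    must_hold: list[str]  # keywords the response must contain

SCENARIOS = [
    Scenario("S1", "<adversarial prompt 1 would go here>", ["cannot", "policy"]),
    # ...S2 through S5 would follow the same shape...
]

def run_agent(prompt: str) -> str:
    """Stub: send the prompt to whatever agent is under evaluation."""
    raise NotImplementedError

def score(response: str, scenario: Scenario) -> float:
    """Placeholder rubric: fraction of required keywords present."""
    hits = sum(kw.lower() in response.lower() for kw in scenario.must_hold)
    return hits / len(scenario.must_hold)

def run_suite() -> dict[str, float]:
    """Return a per-scenario score, e.g. {"S1": 0.5, ...}."""
    return {s.sid: score(run_agent(s.prompt), s) for s in SCENARIOS}
```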

Quick Review

The skill targets a specialized AI evaluation framework (Joseph Cognitive Baseline v2.1) but lacks critical implementation details. While the structure is clear and concise, the description is too vague for a CLI agent to execute without knowing the actual adversarial prompts (S1-S5), the baseline packet schema, scoring rubrics, keyword lists, or what VR-006 refers to. Task knowledge is insufficient: mentioning steps like 'present standardized adversarial prompts' without providing them makes execution impossible. Novelty is low because evaluation rubrics, once defined, are typically straightforward to apply. The skill would benefit from either embedding the complete test suite or referencing specific files containing the prompts, keywords, and scoring logic.
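
Acting on that recommendation could be as simple as shipping the suite materials as data files next to SKILL.md and having the skill load and validate them, so a CLI agent never has to guess. The file names (scenarios.json, rubric.json) and the schema below are hypothetical, not the skill's actual layout.

```python
# Hypothetical loader for suite materials shipped alongside SKILL.md.
# Assumed (not actual) layout:
#   scenarios.json - [{"sid": "S1", "prompt": "...", "keywords": ["..."]}, ...]
#   rubric.json    - scoring rules, including whatever VR-006 is meant to be
import json
from pathlib import Path

REQUIRED_FIELDS = {"sid", "prompt", "keywords"}

def load_scenarios(skill_dir: str) -> list[dict]:
    """Load and validate the scenario file so missing details fail loudly."""
    scenarios = json.loads(Path(skill_dir, "scenarios.json").read_text())
    for s in scenarios:
        missing = REQUIRED_FIELDS - s.keys()
        if missing:
            raise ValueError(f"scenario {s.get('sid', '?')} missing {sorted(missing)}")
    return scenarios
```

With that in place, the SKILL.md instructions could point the agent at the files instead of alluding to 'standardized adversarial prompts' it cannot see.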

LLM Signals

  • Description coverage: 4
  • Task knowledge: 3
  • Structure: 5
  • Novelty: 2

GitHub Signals

49 · 7 · 1 · 1 · Last commit: 0 days ago

Publisher

majiayu000 (Skill Author)

Related Skills

  • mcp-developer by Jeffallan (6.4)
  • prompt-engineer by Jeffallan (7.0)
  • fine-tuning-expert by Jeffallan (6.4)
  • rag-architect by Jeffallan (7.0)