TacoSkill LAB
© 2026 TacoSkill LAB

scholar-evaluation

Rating: 7.7

by K-Dense-AI

52 Favorites · 180 Upvotes · 0 Downvotes

Systematically evaluate scholarly work using the ScholarEval framework, providing structured assessment across research quality dimensions including problem formulation, methodology, analysis, and writing with quantitative scoring and actionable feedback.
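The description implies a structured review: named quality dimensions, each with a quantitative score and actionable feedback. As a purely illustrative sketch of that shape (the dimension names follow the description above; the dataclass, the 0–10 scale, and the unweighted mean are assumptions, not the actual ScholarEval implementation):

```python
from dataclasses import dataclass, field

# Hypothetical record for one dimension of a ScholarEval-style review.
@dataclass
class DimensionScore:
    name: str          # e.g. "methodology"
    score: float       # assumed 0-10 scale
    feedback: str      # actionable comment for the authors

@dataclass
class Evaluation:
    dimensions: list[DimensionScore] = field(default_factory=list)

    def overall(self) -> float:
        """Unweighted mean of dimension scores (weighting scheme assumed)."""
        return sum(d.score for d in self.dimensions) / len(self.dimensions)

review = Evaluation([
    DimensionScore("problem formulation", 8.0, "Clearly motivated gap."),
    DimensionScore("methodology", 7.0, "Add an ablation for the key component."),
    DimensionScore("analysis", 7.5, "Report confidence intervals."),
    DimensionScore("writing", 8.5, "Tighten the related-work section."),
])
print(round(review.overall(), 2))  # 7.75
```

Keeping per-dimension feedback alongside the number is what makes the output actionable rather than a bare grade.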

Tag: evaluation

Rating: 7.7
Installs: 0
Category: AI & LLM

Quick Review

Excellent skill for systematic scholarly evaluation. The description clearly covers capabilities, enabling a CLI agent to invoke it for academic assessment tasks. Task knowledge is comprehensive, with a detailed 6-step workflow, dimension-based criteria, scoring rubrics, and referenced framework documentation. Structure is well organized with clear sections, though SKILL.md is somewhat lengthy. Novelty is good: while paper evaluation exists, the ScholarEval framework integration, quantitative scoring, multi-dimensional analysis, and actionable feedback generation provide meaningful value beyond basic critique. The skill effectively reduces token cost for thorough academic assessments that would otherwise require extensive prompting.

LLM Signals

Description coverage: 9
Task knowledge: 9
Structure: 8
Novelty: 7
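For illustration, an unweighted mean of the four sub-scores comes out above the 7.7 headline rating, which suggests the headline is not a simple average of these signals (the actual weighting is not disclosed on the page):

```python
# LLM signal sub-scores as listed above.
signals = {
    "Description coverage": 9,
    "Task knowledge": 9,
    "Structure": 8,
    "Novelty": 7,
}

# Unweighted mean, for comparison with the 7.7 headline rating.
mean = sum(signals.values()) / len(signals)
print(mean)  # 8.25
```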

GitHub Signals

6,871
818
49
3
Last commit 1 day ago


Publisher

K-Dense-AI

Skill Author

Related Skills

rag-architect · Jeffallan · 7.0
prompt-engineer · Jeffallan · 7.0
fine-tuning-expert · Jeffallan · 6.4
mcp-developer · Jeffallan · 6.4