project-planning
Transform specifications into structured implementation plans with architecture design and task breakdown
subagent-testing
TDD-style testing methodology for skills using fresh subagent instances to prevent priming bias and validate skill effectiveness. Triggers: test skill, validate skill, skill testing, subagent testing, fresh instance testing, TDD for skills, skill validation. Use when: validating skill improvements, testing skill effectiveness, preventing priming bias, measuring skill impact on behavior. DO NOT use when: implementing skills (use skill-authoring instead), creating hooks (use hook-authoring instead).
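A minimal sketch of the comparison this methodology performs, assuming a hypothetical `run_fresh_instance` callable that launches an unprimed subagent with or without the skill loaded and grades its output; how fresh instances are actually spawned is left to the skill itself.

```python
from typing import Callable

def evaluate_skill(task_prompt: str,
                   run_fresh_instance: Callable[[str, bool], bool],
                   trials: int = 3) -> dict[str, float]:
    # Each trial gets a fresh, unprimed instance so earlier runs cannot bias later ones.
    baseline = [run_fresh_instance(task_prompt, False) for _ in range(trials)]
    with_skill = [run_fresh_instance(task_prompt, True) for _ in range(trials)]
    return {
        "baseline_pass_rate": sum(baseline) / trials,
        "skill_pass_rate": sum(with_skill) / trials,
    }
```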
delegation-core
Delegate tasks to external LLM services (Gemini, Qwen) with quota, logging, and error handling.
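A minimal sketch of what such a delegation wrapper could look like, assuming the provider CLI reads the prompt on stdin; the actual command names, flags, quota bookkeeping, and log format belong to gemini-delegation and qwen-delegation.

```python
import logging
import subprocess

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("delegation")

def delegate(cli: str, prompt: str, remaining_quota: int, timeout: int = 120) -> str:
    # Quota check before spending an external call.
    if remaining_quota <= 0:
        raise RuntimeError(f"{cli}: quota exhausted, handle the task locally instead")
    log.info("delegating %d chars to %s", len(prompt), cli)
    try:
        # Prompt is passed on stdin here to avoid shell quoting; real CLIs may expect a flag.
        result = subprocess.run(
            [cli], input=prompt, capture_output=True, text=True,
            timeout=timeout, check=True,
        )
    except subprocess.TimeoutExpired:
        log.error("%s timed out after %ss", cli, timeout)
        raise
    except subprocess.CalledProcessError as exc:
        log.error("%s failed: %s", cli, exc.stderr.strip())
        raise
    return result.stdout
```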
optimizing-large-skills
Systematic methodology to reduce skill file size through externalization, consolidation, and progressive loading patterns. Triggers: large skill, skill optimization, skill size, 300 lines, inline code, skill refactoring, skill context reduction, skill modularization. Use when: skills exceed 300 lines, contain many (10+) code blocks with similar functionality, mix heavy inline Python with markdown, or embed functions longer than 20 lines. DO NOT use when: skill is under 300 lines and well-organized. DO NOT use when: creating new skills - use modular-skills instead. Consult this skill when skills-eval shows "Large skill file" warnings.
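As a rough illustration of the thresholds above (not the skill's own tooling), a quick audit might count lines and fenced code blocks per skill file; the `skills/` directory and `SKILL.md` filename are assumptions about the layout.

```python
from pathlib import Path

LINE_LIMIT = 300
BLOCK_LIMIT = 10

def audit_skill(path: Path) -> list[str]:
    lines = path.read_text(encoding="utf-8").splitlines()
    fence_marker = "`" * 3
    fence_lines = sum(1 for line in lines if line.lstrip().startswith(fence_marker))
    warnings = []
    if len(lines) > LINE_LIMIT:
        warnings.append(f"{path}: {len(lines)} lines (limit {LINE_LIMIT})")
    if fence_lines // 2 > BLOCK_LIMIT:
        warnings.append(f"{path}: ~{fence_lines // 2} fenced code blocks; consider externalizing scripts")
    return warnings

if __name__ == "__main__":
    for skill_file in Path("skills").rglob("SKILL.md"):
        for warning in audit_skill(skill_file):
            print(warning)
```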
gemini-delegation
Gemini CLI delegation workflow implementing delegation-core for Google's Gemini models.
project-specification
Transform project brief into detailed, testable specifications using spec-driven development methodology
rigorous-reasoning
Triggers: conflict, disagreement, debate, ethical, controversial, pushback. Prevents sycophantic reasoning through checklist-based analysis, categorical integrity, and evidence-following to uncomfortable conclusions. Triggers (conflict-based): disagreement, conflict, debate, ethical question, controversial claim, interpersonal analysis, competing values, adjudication, "who is right", moral dilemma, harm assessment. Triggers (red-flag self-monitoring): "I agree that", "You're right", "Great point", "Absolutely", "That's a fair point", "I can see why", agreement without validation, softening conclusions, hedging without evidence, retracting under pressure. Use when: analyzing conflicts or disagreements, evaluating ethical claims, adjudicating competing positions, noticing sycophantic thought patterns, making truth claims in contested territory. DO NOT use when: routine implementation tasks with no contested claims. DO NOT use when: simple factual questions with clear answers. CRITICAL: This skill overrides default conversational tendencies toward agreement. Agreement requires validity, accuracy, or truth, not politeness.
development-workflow
Detailed development workflow with modular patterns for git, code review, testing, documentation, and deployment.
response-compression
Triggers: verbose, bloat, concise, compress, direct, efficient response. Eliminates response bloat including emojis, filler words, hedging language, hype, and unnecessary framing. Includes termination and directness guidelines.
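To make "filler" concrete, a tiny detection sketch follows; the pattern list is illustrative only and is not the skill's real inventory.

```python
import re

# Deliberately small, illustrative patterns for common filler and hedging phrases.
FILLER_PATTERNS = [
    r"\bit(?:'s| is) (?:important|worth) (?:to note|noting)\b",
    r"\bin order to\b",
    r"\bbasically\b",
    r"\bsimply put\b",
]

def find_filler(text: str) -> list[str]:
    hits: list[str] = []
    for pattern in FILLER_PATTERNS:
        hits.extend(match.group(0) for match in re.finditer(pattern, text, re.IGNORECASE))
    return hits
```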
style-learner
Triggers: writing style, voice, tone, style guide, exemplar, style learning. Learn and extract writing style patterns from exemplar text for consistent application. Triggers: learn style, extract style, style profile, writing voice, tone analysis, style guide generation, exemplar analysis. Use when: creating a style guide from existing content, ensuring consistency across documents, learning a specific author's voice, customizing AI output style. DO NOT use when: detecting AI slop - use slop-detector instead. DO NOT use when: just need to clean up existing content - use doc-generator with --remediate. Use this skill to build style profiles from exemplar text.
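A minimal sketch of the idea, not the skill's actual profile format: derive a few measurable style signals from exemplar text that can later be checked against generated content.

```python
import re
import statistics

def style_profile(exemplar: str) -> dict[str, float]:
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", exemplar.strip()) if s]
    words = exemplar.split()
    return {
        "avg_sentence_length": statistics.mean(len(s.split()) for s in sentences) if sentences else 0.0,
        "avg_word_length": statistics.mean(len(w) for w in words) if words else 0.0,
        "contraction_rate": len(re.findall(r"\b\w+'\w+\b", exemplar)) / max(len(words), 1),
        "exclamation_rate": exemplar.count("!") / max(len(sentences), 1),
    }
```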
decisive-action
Triggers: question threshold, decisive, autonomous, clarifying questions. Guidance on when to ask clarifying questions versus proceeding with standard approaches. Reduces interaction rounds while preventing wrong assumptions.
review-chamber
Capture and retrieve PR review knowledge in project memory palaces
shell-review
Audit shell scripts for correctness, portability, and common pitfalls. Triggers: shell script, bash, sh, script review, pipeline, exit code. Use when: reviewing shell scripts, CI scripts, hook scripts, wrapper scripts. DO NOT use when: creating new scripts - use attune:workflow-setup.
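A rough sketch of a few mechanical checks in the same spirit; it is not the skill's checklist and does not replace a proper review or shellcheck.

```python
from pathlib import Path

def quick_shell_audit(path: Path) -> list[str]:
    text = path.read_text(encoding="utf-8")
    findings = []
    if "set -e" not in text and "set -o errexit" not in text:
        findings.append("no 'set -e': mid-script failures will be silently ignored")
    if "`" in text:
        findings.append("backtick command substitution: prefer $(...) for nesting and readability")
    if text.startswith("#!/bin/sh") and "[[" in text:
        findings.append("'[[' under a /bin/sh shebang is not POSIX portable")
    return findings
```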
clear-context
Automatic context management with graceful handoff to a continuation subagent. Triggers: context pressure, 80% threshold, auto-clear, context full, continuation, session state, checkpoint. Use when: context usage approaches 80% during long-running tasks. This skill enables automatic continuation without a manual /clear. The key insight: subagents have fresh context windows, so delegating the remaining work to a continuation subagent achieves an effective "auto-clear" without stopping the workflow.
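A hypothetical sketch of the handoff step: once estimated context usage crosses the threshold, package only what the continuation subagent needs. The field names and threshold check are illustrative, not a real schema.

```python
import json
from typing import Optional

THRESHOLD = 0.80

def build_handoff(context_used: float,
                  completed: list[str],
                  remaining: list[str],
                  key_decisions: dict) -> Optional[str]:
    # Below the threshold, keep working in the current session.
    if context_used < THRESHOLD:
        return None
    # Above it, the continuation's fresh context window should not inherit
    # the full transcript, only the distilled state it needs to continue.
    handoff = {
        "completed_steps": completed,
        "remaining_steps": remaining,
        "key_decisions": key_decisions,
    }
    return json.dumps(handoff, indent=2)
```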
workflow-setup
Configure GitHub Actions workflows for CI/CD (test, lint, typecheck, publish)
war-room
Multi-LLM deliberation framework for strategic decisions through pressure-based expert consultation
slop-detector
Triggers: ai slop, ai-generated, llm markers, chatgpt phrases, claude tells. Detect and flag AI-generated content markers in documentation and prose. Triggers: slop detection, ai cleanup, humanize text, remove ai markers, detect chatgpt, detect llm, writing quality, ai tells. Use when: reviewing documentation for AI markers, cleaning up LLM-generated content, auditing prose quality, preparing content for publication. DO NOT use when: generating new content - use doc-generator instead. DO NOT use when: learning writing styles - use style-learner instead. Use this skill to identify and remediate AI slop in existing content.
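A tiny sketch of the marker scan; the phrases here are illustrative examples only, while the real word lists belong to the shared scribe module described further down this catalog.

```python
MARKER_PHRASES = [
    "delve into",
    "it is important to note",
    "in today's fast-paced world",
    "serves as a testament",
    "rich tapestry",
]

def flag_slop(text: str) -> list[tuple[int, str]]:
    # Return (line number, phrase) pairs so findings can be reviewed in place.
    flags = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        lowered = line.lower()
        flags.extend((lineno, phrase) for phrase in MARKER_PHRASES if phrase in lowered)
    return flags
```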
workflow-monitor
Triggers: workflow error, inefficient execution, workflow failure, execution monitor. Monitor workflow executions for errors and inefficiencies. When issues are detected, automatically create GitHub issues for workflow improvements via /fix-workflow. Use when: workflows fail, timeout, or show inefficient patterns. DO NOT use when: normal workflow execution, simple command errors.
bloat-detector
Detect codebase bloat through progressive analysis: dead code, duplication, complexity, documentation bloat. Triggers: bloat detection, dead code, code cleanup, duplication, technical debt, unused code. Use when: context usage high, quarterly maintenance, pre-release cleanup, before refactoring. DO NOT use when: active feature development, time-sensitive bugs, codebase < 1000 lines.
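One narrow slice of the analysis as a sketch: hash sliding windows of lines to surface verbatim duplication. The window size is arbitrary, and the dead-code and complexity passes need real tools (coverage, AST analysis) on top of this.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

WINDOW = 6  # lines per compared block

def find_duplicate_blocks(paths: list[Path]) -> dict[str, list[tuple[Path, int]]]:
    seen: dict[str, list[tuple[Path, int]]] = defaultdict(list)
    for path in paths:
        lines = [line.strip() for line in path.read_text(encoding="utf-8").splitlines()]
        for i in range(len(lines) - WINDOW + 1):
            chunk = "\n".join(lines[i:i + WINDOW])
            if chunk.strip():  # skip windows that are all blank
                digest = hashlib.sha1(chunk.encode()).hexdigest()
                seen[digest].append((path, i + 1))
    # Keep only digests seen at more than one location.
    return {digest: hits for digest, hits in seen.items() if len(hits) > 1}
```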
project-brainstorming
Socratic questioning and ideation methodology for project conception using structured brainstorming frameworks
methodology-curator
Surfaces expert frameworks and proven methodologies before creating OR evaluating skills, hooks, agents, or commands. Helps select approaches from domain masters. Triggers: methodology, framework, expert approach, best practices, masters, proven method, domain expertise, how should I approach, what's the best way, evaluate skill, review methodology, is this grounded, optimization check. Use when: starting skill/hook/agent creation, evaluating existing skills for methodology gaps, seeking domain expertise, wanting to ground work in proven methodologies, before brainstorming, quick optimization check on existing work. DO NOT use when: you already have a specific methodology in mind, working on implementation details, fixing syntax/structural issues.
fpf-review
Triggers: architecture review, FPF, functional programming framework, systems architecture. Architecture review using FPF (Functional Programming Framework) methodology. Evaluates codebases through functional, practical, and foundation perspectives. Use when: conducting architecture reviews, evaluating system design. DO NOT use when: simple code reviews, bug fixes, documentation updates.
code-quality-principles
Triggers: KISS, YAGNI, SOLID, clean code, code quality, refactor, design principles. Provides guidance on fundamental software design principles to reduce complexity, prevent over-engineering, and improve maintainability.
precommit-setup
Configure comprehensive three-layer pre-commit quality system with linting, type checking, and testing enforcement
shared
Shared utilities and constants for scribe plugin skills. This module provides common patterns, word lists, and utilities used across slop-detector, style-learner, and doc-generator.
session-management
Triggers: session, resume, rename, checkpoint. Manage Claude Code sessions with naming, checkpointing, and resume strategies. Use when: organizing long-running work, creating debug checkpoints, managing PR reviews.
architecture-paradigms
Interactive selector and router for architecture paradigms. Triggers: architecture selection, pattern comparison, system design, ADR creation, architecture decision, paradigm evaluation, new system architecture, architecture planning, which architecture, compare architectures. Use when: selecting architecture patterns for new systems, comparing paradigm trade-offs, creating architecture decision records, evaluating architecture fit for team size and domain complexity, planning implementation roadmaps. DO NOT use when: implementing a specific known paradigm - use the specific architecture-paradigm-* skill (hexagonal, layered, microservices, etc.) instead. DO NOT use when: reviewing existing architecture - use architecture-review instead. Use this skill BEFORE making architecture decisions. Check this skill even if unsure whether you need it.
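A toy scoring sketch to illustrate the trade-off comparison; the paradigm list and weights are placeholders, not the skill's actual decision model.

```python
# paradigm: (small-team fit, large-team fit, complex-domain fit), higher is better
PARADIGM_FIT = {
    "layered": (3, 2, 1),
    "hexagonal": (2, 3, 3),
    "microservices": (1, 3, 2),
}

def rank_paradigms(team_is_small: bool, domain_is_complex: bool) -> list[tuple[str, int]]:
    ranking = []
    for name, (small_fit, large_fit, complex_fit) in PARADIGM_FIT.items():
        score = small_fit if team_is_small else large_fit
        if domain_is_complex:
            score += complex_fit
        ranking.append((name, score))
    return sorted(ranking, key=lambda item: item[1], reverse=True)
```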
makefile-generation
Generate language-specific Makefile with common development targets
doc-generator
Triggers: documentation, generate docs, write docs, technical writing. Generate or remediate documentation with human-quality writing and style adherence. Triggers: generate documentation, write readme, create guide, doc generation, technical writing, remediate docs, polish content, clean up docs. Use when: creating new documentation, rewriting AI-generated content, applying style profiles to content, polishing drafts. DO NOT use when: just detecting slop - use slop-detector for analysis only. DO NOT use when: learning styles - use style-learner first. Use this skill to produce human-quality documentation.
project-execution
Systematic task execution with checkpoint validation, progress tracking, and quality gates
qwen-delegation
Qwen CLI delegation workflow implementing delegation-core for Alibaba's Qwen models.
debug-helper
Systematic debugging approach for identifying and fixing issues

