gsd-build/get-shit-done
A lightweight and powerful meta-prompting, context engineering, and spec-driven development system for Claude Code, by TÂCHES.
Stars: 46,427
Forks: 3,760
Watchers: 200
Open Issues: 114
Safety Rating A
The repository appears to be a legitimate, well-maintained open source developer tool with 46k+ stars, active CI, and an MIT license. No hardcoded secrets, obfuscated code, malicious patterns, or suspicious dependencies were identified. The project itself acknowledges and addresses prompt injection risks within its own workflow (since user-controlled text can flow into LLM system prompts), and includes a dedicated security module and CI test for injection scanning. The README contains no instructions designed to manipulate AI analysts processing this repository. Overall, this is a Safe project.
ℹ AI-assisted review, not a professional security audit.
AI Analysis
GET SHIT DONE (GSD) is a meta-prompting, context engineering, and spec-driven development system designed as a command layer on top of AI coding tools such as Claude Code, OpenCode, Gemini CLI, Codex, Copilot, Cursor, and Windsurf. It provides a structured workflow (new-project → discuss-phase → plan-phase → execute-phase → verify-work → ship) that combats 'context rot' by using parallel subagent orchestration, XML-structured plans, atomic git commits, and modular planning artifacts. Installed via npx, it adds slash-command workflows to supported AI coding runtimes and manages project state through a structured `.planning/` directory.
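The `.planning/` directory mentioned above might look roughly like the sketch below. This is a hypothetical layout for illustration only; the actual file and folder names are defined by GSD itself and may differ:

```text
.planning/                  # hypothetical structure, not verified against the repo
├── project.md              # high-level spec produced by new-project
└── phase-01/
    ├── discussion.md       # notes captured during discuss-phase
    ├── plan.xml            # XML-structured plan from plan-phase
    └── verification.md     # results recorded by verify-work
```

The point is the general shape: each phase of the workflow leaves a persistent, modular artifact on disk, so later sessions can reload state without replaying the full conversation history.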
Use Cases
- Structured AI-assisted software development using spec-driven workflows
- Managing long-running multi-phase software projects with Claude Code or other AI coding tools
- Reducing context window degradation during AI coding sessions via subagent orchestration
- Generating and executing atomic, dependency-aware implementation plans with parallel execution
- User acceptance testing and automated verification of AI-generated code
- Onboarding AI tools to existing codebases via codebase mapping and brownfield analysis
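The "atomic, dependency-aware implementation plans with parallel execution" use case above can be sketched generically. The following is a minimal illustration using only Python's standard library, not GSD's actual implementation (GSD orchestrates LLM subagents rather than thread pools, and the `plan`, `run_task`, and `execute` names here are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# Hypothetical plan: task name -> set of prerequisite task names.
plan = {
    "schema": set(),
    "api": {"schema"},
    "ui": {"schema"},
    "tests": {"api", "ui"},
}

def run_task(name: str) -> str:
    # Placeholder for executing one atomic plan step
    # (in GSD this would be a subagent doing real work).
    return f"done:{name}"

def execute(plan: dict[str, set[str]]) -> list[str]:
    results: list[str] = []
    ts = TopologicalSorter(plan)
    ts.prepare()
    with ThreadPoolExecutor() as pool:
        while ts.is_active():
            # All tasks whose dependencies are satisfied form one "wave".
            ready = list(ts.get_ready())
            # Independent tasks in the same wave run in parallel.
            for name, result in zip(ready, pool.map(run_task, ready)):
                results.append(result)
                ts.done(name)
    return results

results = execute(plan)
# "schema" always finishes first; "tests" always finishes last,
# because the topological order respects the declared dependencies.
```

The sketch shows why dependency-aware plans parallelize well: `api` and `ui` share a prerequisite but not each other, so they can run concurrently once `schema` completes.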
Security Findings (1)
The README explicitly discusses prompt injection as a known risk ('any user-controlled text flowing into planning artifacts is a potential indirect prompt injection vector') and describes built-in mitigations. This is a legitimate security disclosure, not an attempt to manipulate analysts. No actual injection patterns targeting AI analysts were detected in the content reviewed.