Commit a6c6f19 — drop: skill
1 parent 92c9c90

8 files changed

Lines changed: 762 additions & 0 deletions

Lines changed: 130 additions & 0 deletions
@@ -0,0 +1,130 @@
---
name: new-resource
description: Implement a new cloudscale Terraform resource and data source following established provider patterns. Use when adding a new resource type — produces resource, data source, acceptance tests, and docs.
disable-model-invocation: true
argument-hint: <resource-name> [additional instructions...]
context: fork
---

# Implement new cloudscale Terraform resource: `$ARGUMENTS`

You are implementing a new resource in the cloudscale Terraform provider.

- **Resource name** (snake_case): `$ARGUMENTS[0]`
- **Additional instructions**: everything in `$ARGUMENTS` beyond the first word — treat these as constraints, pointers, or details that take precedence over defaults

**Do not write any code yet.** Follow the phases below in order.

---

## Phase 1: Explore

Before planning anything, read the codebase to understand current patterns and the SDK surface for the new resource.

### 1a. Fetch the cloudscale API reference

Use the `WebFetch` tool (or spawn an `Explore` subagent with WebFetch) to retrieve the cloudscale API docs:

```
WebFetch: https://www.cloudscale.ch/en/api/v1
```

Scan the page for the section covering `$ARGUMENTS`. Extract:
- The REST endpoints (e.g. `GET /v1/servers`, `POST /v1/servers`)
- All request/response fields, their types, and whether they are required, optional, or read-only
- Any notable constraints (immutable fields, allowed values, parent-resource relationships)

Keep these notes — they inform the schema design in Phase 2.

### 1b. Find the SDK types for this resource

Use `go doc` (run from `$PWD`) to explore the SDK — it resolves the correct version via `go.mod` automatically, so no digging through module paths is needed.

First, look up the SDK import path from `go.mod`:

```
!grep cloudscale-go-sdk go.mod | awk '{print $1}'
```

This yields something like `github.com/cloudscale-ch/cloudscale-go-sdk/v8` — use that value as `$SDK_PKG` below.
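
To sanity-check the pipeline, here is how it behaves on a sample `go.mod` require line (the module version shown is illustrative, not authoritative):

```
# Simulate the lookup against a sample go.mod require line.
printf '\tgithub.com/cloudscale-ch/cloudscale-go-sdk/v8 v8.0.0\n' > /tmp/go.mod.sample
SDK_PKG=$(grep cloudscale-go-sdk /tmp/go.mod.sample | awk '{print $1}')
echo "$SDK_PKG"   # github.com/cloudscale-ch/cloudscale-go-sdk/v8
```

Because `awk` splits on whitespace and ignores leading indentation, `$1` is the module path regardless of how the require line is indented.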

Start by listing all exported names to find the relevant types:

```
!go doc $SDK_PKG
```

Then read the full struct, request type, and service interface for `$ARGUMENTS`:

```
!go doc $SDK_PKG.TypeName
```

Repeat for the request struct and service interface. You want to know:
- The main struct and all its fields
- The request struct used for Create/Update
- The service interface: which of `Create`, `Get`, `List`, `Update`, `Delete` exist, and their signatures
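
For orientation, an SDK service for a resource typically has roughly the following shape. This is a hypothetical sketch (the `ServerGroup` names are invented for illustration); confirm the real fields and method set with `go doc`:

```go
package main

import "context"

// Hypothetical shapes — the real cloudscale SDK types differ per resource.
type ServerGroup struct {
	HREF string
	UUID string
	Name string
}

type ServerGroupRequest struct {
	Name string
}

// A service interface typically exposes some subset of these five methods.
type ServerGroupService interface {
	Create(ctx context.Context, req *ServerGroupRequest) (*ServerGroup, error)
	Get(ctx context.Context, uuid string) (*ServerGroup, error)
	List(ctx context.Context) ([]ServerGroup, error)
	Update(ctx context.Context, uuid string, req *ServerGroupRequest) error
	Delete(ctx context.Context, uuid string) error
}

func main() {}
```

Not every resource supports every method, which is exactly what the `go doc` pass should establish.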

### 1c. Read current patterns from the load balancer

The load balancer is the canonical reference implementation. Read these files to understand current code patterns before writing anything:
- `cloudscale/resource_cloudscale_load_balancer.go`
- `cloudscale/datasource_cloudscale_load_balancer.go`
- `cloudscale/resource_cloudscale_load_balancer_test.go`
- `cloudscale/datasource_cloudscale_load_balancer_test.go`
- `docs/resources/load_balancer.md`
- `docs/data-sources/load_balancer.md`

Also read the shared helpers that all resources rely on:
- `cloudscale/resources.go`
- `cloudscale/datasources.go`
- `cloudscale/util.go`
- `cloudscale/schema_type.go`
- `cloudscale/provider.go`

---

## Phase 2: Plan

Based on your reading, produce a written implementation plan. Cover:

1. **Schema fields** — for each field: name, type, resource (Required/Optional/Computed/ForceNew), data source (Optional filter or Computed output), any nested structure
2. **Create** — which fields are sent; whether a status wait loop is needed; whether this is a child resource requiring a parent UUID
3. **Update** — which fields are mutable vs ForceNew
4. **Import** — simple passthrough or custom split-ID for child resources
5. **Data source filter fields** — which fields can be used to look up the resource
6. **Files to create** — list all 7 with a one-line description
7. **Open questions** — anything ambiguous about the API or schema that needs clarification

End with: _"Does this plan look correct? Should I proceed with implementation?"_

**Wait for explicit user confirmation before writing any code.**

---

## Phase 3: Implement

After the user approves the plan, implement all files in this order:

1. `cloudscale/resource_cloudscale_$ARGUMENTS.go`
2. `cloudscale/datasource_cloudscale_$ARGUMENTS.go`
3. `cloudscale/resource_cloudscale_$ARGUMENTS_test.go`
4. `cloudscale/datasource_cloudscale_$ARGUMENTS_test.go`
5. `docs/resources/$ARGUMENTS.md`
6. `docs/data-sources/$ARGUMENTS.md`
7. Register in `cloudscale/provider.go`

Stay strictly consistent with the patterns you read in Phase 1. When in doubt, copy the load balancer pattern exactly and adapt only what must change for the new resource.

---

## Phase 4: Verify

After all files are written, run:

```
go build ./cloudscale/
go vet ./cloudscale/
```

Report the output. Fix any errors before declaring done.

Lines changed: 115 additions & 0 deletions
@@ -0,0 +1,115 @@
---
name: review-tests
description: Review Terraform acceptance tests for thoroughness and best practices. Analyzes test assertions, identifies missing checks, and suggests improvements. Use when adding new tests, reviewing test quality, or after implementing a new resource.
argument-hint: <file-or-resource-name> [commit-ref]
disable-model-invocation: false
user-invocable: true
---

# Terraform Acceptance Test Reviewer

You are an expert reviewer of Terraform provider acceptance tests. Your goal is to analyze test files for thoroughness, identify missing assertions, and suggest concrete improvements following the established patterns in this codebase.

**Target:** $ARGUMENTS

If a commit ref is provided (e.g. `fc65edc8`), review the test code introduced in that commit. Otherwise, review the test file for the named resource.

Do not apply any changes. Present findings and wait for the user to decide what to fix.

---

## Phase 1: Gather Context

Before reviewing, you need to understand the resource and existing patterns.

### 1a. Identify the resource schema

Read the resource implementation file (e.g. `cloudscale/resource_cloudscale_<name>.go`) and extract:
- Every schema field: name, type, Required/Optional/Computed/ForceNew
- The `Read` function to see which fields are populated from the API response

### 1b. Identify the SDK struct

Run `go doc` to find the SDK struct for the resource:

```
!grep cloudscale-go-sdk go.mod | awk '{print $1}'
!go doc <sdk-pkg>.<TypeName>
```

Note all struct fields — these map to testable attributes (e.g. `UUID`, `HREF`, `Name`, `SizeGB`).

### 1c. Read the test file

Read the full test file (e.g. `cloudscale/resource_cloudscale_<name>_test.go`). For each test function, catalog:
- Which resources are created in the config
- Which variables are declared and what they hold
- Every assertion in the `Check:` block

### 1d. Read reference test patterns

Read at least one well-established test file for comparison:
- `cloudscale/resource_cloudscale_load_balancer_test.go` (canonical reference)

Also check `cloudscale/utils_test.go` for available shared test helpers.

---

## Phase 2: Analyze

For each test function, check against these rules. Every rule violation is a finding.

### Assertion completeness

- **Every schema field must be asserted.** For each field in the resource schema, there must be a corresponding assertion. No field should go unchecked.
- **Use exact values (`TestCheckResourceAttr`) whenever the expected value is known.** Never use `TestCheckResourceAttrSet` when you can determine the expected value from the test config or base config helpers.
- **Use `TestCheckResourceAttrPtr`** for `id` (against `&struct.UUID`) and `href` (against `&struct.HREF`). These verify that Terraform state matches the actual API object.
- **Use `TestCheckResourceAttrPair`** when the value is inherited or copied from another resource (e.g. `zone_slug` inherited from a source volume, `source_volume_uuid` matching the source's `id`).
- **`TestCheckResourceAttrSet` is only acceptable for `href`** when no struct pointer is available, or for truly unpredictable dynamic values. It should be rare.
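
To make the distinction concrete, here is a toy, stdlib-only sketch of the semantics these helpers implement against Terraform's flat attribute state. It is illustrative only; the real check functions live in the terraform-plugin-sdk:

```go
package main

import "fmt"

// checkAttr mirrors TestCheckResourceAttr: the attribute must equal an exact value.
func checkAttr(state map[string]string, key, want string) error {
	got, ok := state[key]
	if !ok {
		return fmt.Errorf("%s: attribute not set", key)
	}
	if got != want {
		return fmt.Errorf("%s: got %q, want %q", key, got, want)
	}
	return nil
}

// checkAttrSet mirrors TestCheckResourceAttrSet: the attribute must merely exist.
func checkAttrSet(state map[string]string, key string) error {
	if _, ok := state[key]; !ok {
		return fmt.Errorf("%s: attribute not set", key)
	}
	return nil
}

func main() {
	// Hypothetical state for a volume resource; values are invented.
	state := map[string]string{
		"name":    "web-volume",
		"size_gb": "50",
		"href":    "https://api.cloudscale.ch/v1/volumes/123",
	}
	fmt.Println(checkAttr(state, "size_gb", "50"))  // exact match passes: <nil>
	fmt.Println(checkAttrSet(state, "href"))        // existence only: <nil>
	fmt.Println(checkAttr(state, "size_gb", "100")) // mismatch is reported
}
```

The point of preferring `checkAttr` over `checkAttrSet` is visible here: the existence check would happily pass for a wrong value.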

### API existence checks

- **Assert API existence of ALL resources created by the test config**, not just the resource under test. If the config creates a source volume, a snapshot, and a restored volume, all three must have existence checks.
- Use the shared helpers from `utils_test.go` (e.g. `testAccCheckCloudscaleVolumeExists`, `testAccCheckCloudscaleVolumeSnapshotExists`, `testAccCheckCloudscaleServerExists`).

### Variable naming

- **Variable names must be descriptive and unambiguous.** Avoid generic names like `volume` when there are multiple volumes in play. Use names like `sourceVolume`, `restoredVolume`, `afterImport`, `afterUpdate`.

### Import test completeness

Every resource should have an import test. A complete import test follows this pattern:

1. **Create step** — create the resource with full assertions
2. **Import step** — `ImportState: true`, `ImportStateVerify: true`, with `ImportStateVerifyIgnore` only for write-only fields (fields used at creation but not returned by the API, e.g. `ssh_keys`, `volume_snapshot_uuid`)
3. **Error step** — attempt import with `ImportStateId: "does-not-exist"`, expect an error matching `Cannot import non-existent remote object`
4. **Post-import step** — re-apply the config, verify the resource still exists, and use the `IsSame` helper to confirm identity

### List and map fields

- **List counts** must be asserted with `.#` notation (e.g. `server_uuids.#` = `"0"` or `"1"`)
- **Map counts** must be asserted with `.%` notation (e.g. `tags.%` = `"0"` when no tags)
- Individual map entries should be asserted when set (e.g. `tags.my-key` = `"value"`)
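
The `.#` and `.%` keys come from Terraform's flat attribute encoding of lists and maps in state. A minimal stdlib sketch of that encoding (resource and field names invented for illustration):

```go
package main

import (
	"fmt"
	"strconv"
)

// flatten encodes a list and a map the way flat Terraform state attributes do:
// lists get a ".#" count plus indexed entries, maps get a ".%" count plus one
// entry per key.
func flatten(listKey string, list []string, mapKey string, m map[string]string) map[string]string {
	attrs := map[string]string{}
	attrs[listKey+".#"] = strconv.Itoa(len(list))
	for i, v := range list {
		attrs[fmt.Sprintf("%s.%d", listKey, i)] = v
	}
	attrs[mapKey+".%"] = strconv.Itoa(len(m))
	for k, v := range m {
		attrs[mapKey+"."+k] = v
	}
	return attrs
}

func main() {
	attrs := flatten("server_uuids", []string{"uuid-1"}, "tags", map[string]string{"team": "storage"})
	fmt.Println(attrs["server_uuids.#"]) // 1
	fmt.Println(attrs["server_uuids.0"]) // uuid-1
	fmt.Println(attrs["tags.%"])         // 1
	fmt.Println(attrs["tags.team"])      // storage
}
```

This is why a count assertion plus per-entry assertions together pin down the whole collection.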

### Schema-test alignment

- If the schema declares `ConflictsWith`, there should be a test that exercises each conflict path
- If a field is `ForceNew`, the update test should NOT attempt to change it in-place (it should trigger a recreate or be tested separately)

---

## Phase 3: Report

Present your findings as a structured report. Group by test function.

For each test function, list:
1. **Status**: Pass (no issues) or Needs Improvement
2. **Missing assertions**: list each missing field and what assertion type to use
3. **Weak assertions**: list each `AttrSet` that should be an exact value, pair, or ptr check
4. **Missing existence checks**: list resources created but not existence-checked
5. **Naming issues**: list any ambiguous variable names
6. **Import gaps**: list any missing import test steps

End the report with a prioritized summary of all changes needed, ordered by severity.

**Do not make any code changes.** Wait for the user to tell you which findings to fix.

.claude/skills/spec/SKILL.md

Lines changed: 64 additions & 0 deletions
@@ -0,0 +1,64 @@
---
name: spec-v1
description: Interview-driven specification generator. Explores the codebase, interviews the user to clarify requirements, and creates a git branch and a markdown spec file.
argument-hint: <feature-description>
disable-model-invocation: false
user-invocable: true
---

# Interview-Driven Specification Generator

You are an expert Technical Product Manager and Lead Software Engineer. Your goal is to take the raw feature request, explore the codebase, rigorously clarify its requirements through an interactive interview, and then generate a comprehensive, actionable Markdown specification.

**Feature Request:** $ARGUMENTS

Do not write any application code during this process. Your only outputs should be codebase exploration, questions to the user, a Git branch creation, and a Markdown specification file.

## Phase 1: Codebase Exploration
Before asking any questions, you must understand the existing context of the requested feature.
1. **Explore the Codebase**: Use the Explore subagent (or your native `Glob`, `Grep`, and `Read` tools) to investigate the current project structure.
2. **Identify Impact**: Locate the relevant existing modules, functions, classes, or configuration files that the new feature will affect or interact with.
3. **Gather Context**: Use this architectural context to formulate smarter, more targeted questions for the interview phase.

## Phase 2: The Interview Mode
Once you understand the codebase context, initiate an interview process to eliminate ambiguity.

**IMPORTANT:** You MUST use the `AskUserQuestion` tool for every question in this phase. Do NOT ask questions via plain text output — always use the `AskUserQuestion` tool so the user gets a proper interactive prompt. Each call to `AskUserQuestion` should contain up to 3 highly specific questions grouped together.

1. **Analyze the Request**: Identify missing information regarding core logic, data structures, error handling, edge cases, system boundaries, and external dependencies.
2. **Ask Targeted Questions**: Use `AskUserQuestion` to ask up to 3 highly specific questions at a time. Focus on the most critical unknowns that would prevent a developer from building the feature correctly, citing your codebase findings where relevant.
3. **Iterate**: Wait for the user's response via `AskUserQuestion`. If the answers introduce new complexities, ask follow-up questions — again using `AskUserQuestion`.
4. **Seek Approval**: Once you have a complete picture of the feature, use `AskUserQuestion` to provide a brief summary of the proposed scope and ask the user, "Does this look correct, or should we adjust anything before I generate the spec?"

## Phase 3: Repository Setup
Once the user explicitly approves the feature scope from the interview, prepare the workspace:

1. **Create Branch**: Use the `Bash` tool to create and check out a new Git branch named appropriately for the feature (e.g., `git checkout -b feature/your-feature-name`).
2. **Prepare Directory**: Ensure a `specs/` directory exists in the root of the project. Create it using `Bash` if it does not.
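
The two setup steps can be sketched as shell commands. This sketch runs in a throwaway repository so it is safe to try; in the skill itself the commands run inside the project, and the branch name is a placeholder:

```
# Illustrative only: stand up a throwaway repo, then do the Phase 3 setup.
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git checkout -qb feature/example-feature
mkdir -p specs
git branch --show-current
```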

## Phase 4: Specification Generation
Synthesize the initial prompt, your codebase exploration findings, and all interview answers into a highly structured Markdown document. Use the `Write` tool to save this file in the `specs/` directory (e.g., `specs/feature-name.md`).

The Markdown file MUST follow this exact structure:

### 1. Overview
A clear, 2-3 sentence description of the feature and its purpose within the system.

### 2. Architecture & Data Structures
- Define any new data models, types, interfaces, or classes required.
- List any input/output boundaries, commands, or system interactions that need to be created or modified.
- Detail how this feature integrates with the existing architecture you found in Phase 1.

### 3. Edge Cases & Error Handling
- Explicitly list edge cases discussed during the interview.
- Define how the system behaves when things go wrong (e.g., invalid input, missing dependencies, unexpected states).

### 4. Implementation Checklist
Provide a strict, sequential list of actionable tasks formatted as markdown checkboxes (`- [ ]`). Break tasks down so that each represents a single logical commit.
- **Task 1:** (e.g., Setup / Core data structures)
- **Task 2:** (e.g., Main logic implementation)
- **Task 3:** (e.g., Integration with existing modules)
- **Task 4:** (e.g., Testing and error handling)
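
A minimal skeleton of a generated spec file, with placeholder content, might look like this (illustrative only; headings follow the structure above):

```markdown
### 1. Overview
Two to three sentences describing the feature and its purpose in the system.

### 2. Architecture & Data Structures
- New types, models, or interfaces required
- Boundaries and integrations with existing modules found in Phase 1

### 3. Edge Cases & Error Handling
- Edge case raised during the interview
- Behavior on invalid input or a missing dependency

### 4. Implementation Checklist
- [ ] Task 1: Setup / core data structures
- [ ] Task 2: Main logic implementation
- [ ] Task 3: Integration with existing modules
- [ ] Task 4: Testing and error handling
```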

## Phase 5: Handoff
After the specification is successfully written and saved, inform the user that the planning phase is complete. Provide a brief summary of the generated file's location and the active Git branch, and state that they can now begin implementation.
