---
permalink: /agents
title: Agentic Testing
---

# Agentic Testing

CodeceptJS ships an **MCP server and a skillset** that let an AI agent (Claude Code, Cursor, Codex, others) write and fix tests by driving the real browser. The agent runs the same `I.*` commands the test does, reads how the page responds, and only commits the lines that succeeded.

## Why MCP

The traditional agent testing loop is test/fix/retry: the agent executes a test, watches it fail, reads the artifacts, edits the code, and reruns. Each fix is an informed guess — based on the ARIA tree, the HTML, and a screenshot — and the agent reruns the whole test hoping the guess was enough. When the guess is wrong and a single run takes over a minute, iteration can eat dozens of minutes and a lot of wasted tokens.

To improve that flow, the agent can spawn a browser and open the page the way the test does. This lets it interact with the page more freely and perform multi-step actions. But putting that experience back into test code is not efficient either: actions executed in the browser may not be relevant in test context, so the agent ends up in another guess-and-try loop.

The problem is that **the test runs in a different context than the agent**.

The agent can launch a test but can't control it while it's running. It can't access the browser. It can't set a breakpoint.

This is where CodeceptJS MCP steps in. Connected to the agent, it can:

- run a test and pause it on failure
- interact with the browser in a test context
- test locators and perform actions live while the test is running
- write successful actions to the test file

This lets the agent get a test working in one iteration. The agent can live-write the test before your eyes by exploring the page and performing actions that eventually land in the CodeceptJS test file.

**Live debugging of tests** is what CodeceptJS MCP provides. The agent receives feedback faster — not from a whole test execution but from specific actions on a specific page — so it can adjust and react faster, trying different approaches.

The MCP server is the agent-facing equivalent of the `pause()` REPL — same access, driven by tool calls instead of keystrokes. Full tool reference at [/mcp](/mcp).

## The loop

Whether the agent is writing a new test or fixing an old one, it follows the same cycle.

1. **Open the page.** Run a stub test (new work) or set a breakpoint at the failing step (fix). The browser lands at the right starting point and yields control to the agent.
2. **Read the page.** MCP saves HTML, ARIA, and a screenshot of the page to files (and the agent can call the `snapshot` tool to refresh them). The agent reads those files before deciding what to try next, keeping its token usage under control.
3. **Run a CodeceptJS command.** The agent tries `I.*` commands like `I.click('Add to cart')`, `I.fillField('Email', secret(process.env.EMAIL))`, `I.see('Confirmed')`. On success, that line goes into the test — same syntax.
4. **Check the result.** The response after each command shows the new page state. If the expected change happened (the URL updated, the modal opened), the line goes into the verified sequence. If not, the agent reads the page again and tries a different locator or a wait.
5. **Move forward.** The agent looks at the new state and chooses the next command. Steps 2–4 repeat until the scenario is whole.
6. **Commit to the file.** The agent edits the test — replaces `pause()` (new tests) or the broken line (fixes) with the verified sequence — then reruns end-to-end and reads the trace to confirm.

## How the agent reads the page

MCP commands are token-efficient — they don't stream large HTML pages back to the model. MCP writes artifacts to disk under `output/trace_*/` and returns file paths. The agent reads each artifact with its own bash tools — `cat`, `grep`, `jq`.

A `run_code` response, for example, looks like this:

```json
{
  "status": "success",
  "artifacts": {
    "url": "http://localhost:8000/",
    "html": "file:///output/trace_run_code_.../mcp_page.html",
    "aria": "file:///output/trace_run_code_.../mcp_aria.txt",
    "screenshot": "file:///output/trace_run_code_.../mcp_screenshot.png",
    "console": "file:///output/trace_run_code_.../mcp_console.json",
    "storage": "file:///output/trace_run_code_.../mcp_storage.json"
  }
}
```

Only `url` is inline. The rest are paths the agent opens with the right tool:

| Artifact | How the agent reads it |
|----------|------------------------|
| `*_screenshot.png` | As an image — most agents are multimodal |
| `*_aria.txt` | Whole — small and structured |
| `*_page.html` | With `grep` — too large for context, searchable for specific elements/attributes |
| `*_console.json` | With `jq` — filter for errors, 4xx/5xx, deprecation warnings |
| `*_storage.json` | Whole — cookies and `localStorage` snapshot |
| `trace.md` | Whole — markdown index linking every step to its artifacts |

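The `jq`-style console filtering above can be sketched in plain Node. The entry shape (`{ type, text }`) is an assumption for illustration, not the documented schema of `mcp_console.json`:

```javascript
// Sketch: surface console entries worth the agent's attention.
// Entry shape { type, text } is assumed; inspect a real mcp_console.json first.
const interesting = (entries) =>
  entries.filter(
    (e) =>
      e.type === 'error' ||
      /\b[45]\d\d\b/.test(e.text) ||   // 4xx/5xx status codes in messages
      /deprecat/i.test(e.text)         // deprecation warnings
  );

// Example input mimicking a saved console artifact:
const sample = [
  { type: 'log', text: 'app booted' },
  { type: 'error', text: 'Uncaught TypeError: cart is undefined' },
  { type: 'warning', text: 'GET /api/cart 404 (Not Found)' },
  { type: 'warning', text: 'componentWillMount is deprecated' },
];

console.log(interesting(sample).length); // → 3
```
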
Saved HTML is formatted, with non-semantic noise stripped out: `<style>`, `<script>`, Tailwind-style utility classes, and inline `style=""` attributes. `grep` can then find the right branch of the tree without wading through raw page source. ARIA snapshots are smaller and more structured than HTML, which is why the agent prefers them when picking locators.
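
A minimal sketch of that cleanup idea; the real MCP implementation may differ, and dropping every `class=""` attribute (rather than just utility-class noise) is a simplification:

```javascript
// Strip <script>/<style> blocks, inline style="" attributes, and class=""
// lists so grep has less noise to wade through. Regex-based: good enough
// for a sketch, not a real HTML parser.
const stripForGrep = (html) =>
  html
    .replace(/<(script|style)\b[\s\S]*?<\/\1>/gi, '')
    .replace(/\s+style="[^"]*"/gi, '')
    .replace(/\s+class="[^"]*"/gi, '');

const raw =
  '<div class="p-4 flex gap-2" style="color:red">' +
  '<script>track()</script><button>Add to cart</button></div>';

console.log(stripForGrep(raw));
// → <div><button>Add to cart</button></div>
```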

## Setup

Once CodeceptJS is installed, launch the MCP server with:

```bash
npx codeceptjs-mcp
```
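
Most MCP clients read an `mcpServers` config block (Claude Code's `.mcp.json` and Cursor's `.cursor/mcp.json` both use this shape); the server name `codeceptjs` here is arbitrary:

```json
{
  "mcpServers": {
    "codeceptjs": {
      "command": "npx",
      "args": ["codeceptjs-mcp"]
    }
  }
}
```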

> See [/mcp](/mcp) for detailed client setup.

We recommend pairing CodeceptJS MCP with the skills bundle.

Install for any agent:

```bash
npx skills add codeceptjs/skills
```

Or, in Claude Code:

```text
/plugin marketplace add codeceptjs/skills
/plugin install codeceptjs@codeceptjs-skills
```

## Usage Examples

With MCP and skills connected, the agent follows predefined workflows and handles common testing scenarios effectively:

### Writing a new test

You ask: "Add a test for the checkout flow."

The agent writes a stub:

```javascript
Scenario('checkout', ({ I }) => {
  I.amOnPage('/cart')
  pause()
})
```

It runs the stub. The browser opens at `/cart` and yields control at `pause()`. The agent reads the ARIA tree, runs `I.click('Add to cart')`, sees the cart total update — that line goes into the verified sequence. It runs `I.fillField('Email', '...')`, sees the field accept the value, records it. Through `I.click('Continue to payment')`, `I.see('Payment')`, `I.fillField('Card', secret(process.env.TEST_CARD))`, `I.click('Pay')`, `I.see('Order confirmed')` — each command commits only after the response confirms it worked.

When the scenario is whole, the agent edits the test file: replaces `pause()` with the verified sequence, renames the scenario, wraps credentials with `secret()`. It reruns the file end-to-end with `aiTrace` on and hands you the diff.
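
With those steps verified, the committed file might look like this; the locator texts and email value are illustrative, carried over from the walkthrough above:

```javascript
Scenario('customer completes checkout', ({ I }) => {
  I.amOnPage('/cart')
  I.click('Add to cart')
  I.fillField('Email', 'user@example.com') // illustrative value
  I.click('Continue to payment')
  I.see('Payment')
  I.fillField('Card', secret(process.env.TEST_CARD))
  I.click('Pay')
  I.see('Order confirmed')
})
```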

### Fixing a failing test

A test fails. You point the agent at the scenario.

It opens `output/trace_<TestName>_*/trace.md` from the last run, reads the steps, and finds the one marked failed. Most of the time the screenshot and ARIA from that step explain the cause — "Save" is now "Save changes," or a spinner is gating the next action. The agent patches the line and reruns.

When the trace doesn't say enough, the agent passes a step number to `run_test` so the test pauses right before the failing step. From the live page, it tries `I.click({ role: 'button', name: 'Save changes' })` and sees the modal close. Or `I.waitForInvisible('.spinner', 10)` followed by the original click — and watches it pass. Whatever holds goes into the test.

The fix lands with a one-line note explaining what changed.

### Auto-fixing on CI

After a failed run, the agent reads every trace under `output/`, clusters failures by signature, and patches what fits a small set of safe fixes (locator drift, missing waits, raw `I.wait(N)` replacement). It reruns only the failing scenarios, compares against the baseline, and writes a markdown report at `output/ci-fix.md`.

If the fix held, the PR goes green. If it didn't, every edit is rolled back with `git checkout` and the report says which patterns the agent couldn't safely handle. No half-applied fixes left behind, no `retries: 3` masking the problem.
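
A sketch of that wiring for GitHub Actions, assuming the Claude Code CLI in headless mode (`claude -p`); the step layout and prompt are illustrative:

```yaml
- name: Run tests
  id: e2e
  run: npx codeceptjs run
  continue-on-error: true

- name: Attempt safe auto-fixes
  if: steps.e2e.outcome == 'failure'
  run: |
    claude -p "Use the ci-fix-tests skill: read traces under output/, apply only safe fixes, rerun failing scenarios, write output/ci-fix.md"

- name: Upload fix report
  if: always()
  uses: actions/upload-artifact@v4
  with:
    name: ci-fix-report
    path: output/ci-fix.md
```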

## Skills bundle

Skills teach the agent best practices for using CodeceptJS. Plug them in when you develop tests with agents, and update them regularly to stay current with recommended usage.

| Skill | Use case |
|-------|----------|
| `writing-codeceptjs-tests` | Author or extend a scenario. Runs the loop above with a stub-and-pause flow for greenfield work, incremental `run_code` for known flows. |
| `debugging-codeceptjs-tests` | A test is failing or flaky. Reads the trace, decides whether to patch from the trace alone or set a breakpoint on the live page. |
| `ci-fix-tests` | Conservative auto-repair on CI. |
| `refactoring-codeceptjs-tests` | Extract page objects, tame long locators, move raw browser code into helpers. Proposes changes in batches. |
| `codeceptjs-fundamentals` | Core CodeceptJS concepts and APIs. |
| `codeceptjs-exploration` | Pick a stable locator from messy markup. |
| `codeceptjs-run-analysis` | Read trace artifacts, cluster CI failures into root causes, verify a fix held across many traces. |
| `codeceptjs-auth` | Authenticate efficiently with the `auth` plugin. |

## Pointers

- [/mcp](/mcp) — full MCP tool reference, client setup
- [/aitrace](/aitrace) — trace plugin configuration and capture options
- [/debugging](/debugging) — pause modes, IDE setup, the `pause` plugin
- [skills repo](https://github.com/codeceptjs/skills) — source and install for non-Claude clients