docs/docs/concepts/context-and-sessions.md (6 additions, 6 deletions)
@@ -4,8 +4,8 @@ description: "How Component, Backend, Context, and Session fit together in Melle
 # diataxis: explanation
 ---

-Every call to an LLM in Mellea passes through four layers: [**Component**](../guide/glossary#component), [**Backend**](../guide/glossary#backend),
-[**Context**](../guide/glossary#context), and **Session**. Understanding how these fit together explains both why
+Every call to an LLM in Mellea passes through four layers: [**Component**](../reference/glossary#component), [**Backend**](../reference/glossary#backend),
+[**Context**](../reference/glossary#context), and **Session**. Understanding how these fit together explains both why
 Mellea is structured the way it is and how to extend it effectively.

 > **Looking to use this in code?** See [Context and Sessions](../how-to/use-context-and-sessions) for practical examples and session extension patterns.
@@ -30,7 +30,7 @@ raw text or a parsed representation of a model output.
 ### Backends

 A `Backend` takes a `Component`, formats it into a prompt, sends it to an LLM, and
-returns the model output as a [`ModelOutputThunk`](../guide/glossary#modeloutputthunk). The `Thunk` is a lazy wrapper: it
+returns the model output as a [`ModelOutputThunk`](../reference/glossary#modeloutputthunk). The `Thunk` is a lazy wrapper: it
 holds the raw model output and parses it on access (via `.value` or `str()`).

 The backend is responsible for:
@@ -52,15 +52,15 @@ The context serves two purposes:

 1. **Prompt construction** — the backend calls `ctx.view_for_generation()` to get
    the components that should appear in the prompt. For `ChatContext`, this includes
-   all prior turns. For [`SimpleContext`](../guide/glossary#simplecontext), it includes only the current instruction.
+   all prior turns. For [`SimpleContext`](../reference/glossary#simplecontext), it includes only the current instruction.

 2. **Validation** — during the IVR loop, requirement validators receive the
    `Context` object. They can call `ctx.last_output()` to inspect the most recent
    model output, or examine the full history for more complex checks.

 ### Sessions

-[`MelleaSession`](../guide/glossary#melleasession) is the developer-facing layer. It wraps a backend and a context,
+[`MelleaSession`](../reference/glossary#melleasession) is the developer-facing layer. It wraps a backend and a context,
 exposes the `instruct()`, `chat()`, `validate()`, and other methods you use in your
 code, and handles the bookkeeping that ties components, context updates, and backend
 calls together.
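The layering described in this hunk can be caricatured in plain Python for intuition. All class names and signatures below are hypothetical stand-ins, not Mellea's actual API; a real backend would call an LLM where the stub returns a placeholder.

```python
class Component:
    """Something that can be rendered into a prompt."""
    def __init__(self, description: str):
        self.description = description

class Backend:
    """Formats a component plus history into a prompt and calls the model."""
    def generate(self, component: Component, history: list[str]) -> str:
        # A real backend would send `prompt` to an LLM; we return a
        # placeholder for illustration.
        prompt = "\n".join(history + [component.description])
        return f"<model output ({len(prompt)} prompt chars)>"

class ChatContext:
    """Accumulates prior turns for prompt construction."""
    def __init__(self):
        self.turns: list[str] = []
    def view_for_generation(self) -> list[str]:
        return self.turns
    def add(self, text: str) -> None:
        self.turns.append(text)

class Session:
    """Developer-facing layer: ties component, backend, and context together."""
    def __init__(self, backend: Backend, ctx: ChatContext):
        self.backend, self.ctx = backend, ctx
    def instruct(self, description: str) -> str:
        comp = Component(description)
        out = self.backend.generate(comp, self.ctx.view_for_generation())
        # Bookkeeping: record both the instruction and the model output.
        self.ctx.add(description)
        self.ctx.add(out)
        return out

m = Session(Backend(), ChatContext())
result = m.instruct("Summarize the design.")
print(result)
print(len(m.ctx.turns))  # → 2: the instruction and the output were recorded
```

The point of the sketch is the division of labor: the session never formats prompts itself, and the backend never mutates the context.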
@@ -201,7 +201,7 @@ print(last.value)
 turn = m.ctx.last_turn()
 ```

-`last_turn()` returns a [`ContextTurn`](../guide/glossary#contextturn) with `.input` and `.output` fields. It is
+`last_turn()` returns a [`ContextTurn`](../reference/glossary#contextturn) with `.input` and `.output` fields. It is
 useful for observability or when you need to log exactly what the model received and
docs/docs/concepts/generative-functions.md (1 addition, 1 deletion)
@@ -6,7 +6,7 @@ description: "How the @generative decorator turns a Python function signature in

 In classical programming, a pure function takes inputs and produces outputs deterministically.
 In a generative program, a function can have the same interface but delegate its implementation
-to an LLM. Mellea calls these [**generative functions**](../guide/glossary#generative-function) and provides the [`@generative`](../guide/glossary#generative) decorator
+to an LLM. Mellea calls these [**generative functions**](../reference/glossary#generative-function) and provides the [`@generative`](../reference/glossary#generative) decorator
 to define them.

 > **Looking to use this in code?** See [Generative Functions](../how-to/generative-functions) for practical examples and API details.
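The idea of delegating a function body to a model can be sketched with a hand-rolled decorator. This is a toy, not Mellea's `@generative`: the `call_llm` stub and the prompt format are invented for illustration, where the real decorator renders the signature and docstring through templates and parses typed output.

```python
import functools

def generative(fn):
    """Toy stand-in for an LLM-backed decorator: instead of executing the
    function body, build a prompt from its name, arguments, and docstring,
    and hand that prompt to a (stubbed) model."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        prompt = f"{fn.__name__}{args}{kwargs}: {fn.__doc__}"
        return call_llm(prompt)  # in Mellea this would go through a Backend
    return wrapper

def call_llm(prompt: str) -> str:
    # Stub model for illustration; a real one would return generated text.
    return f"<llm answer to: {prompt[:40]}...>"

@generative
def summarize(text: str) -> str:
    """Summarize the text in one sentence."""

print(summarize("A long article about generative programming."))
```

Note that `summarize` has an empty body: the signature and docstring alone define the behavior, which is the core of the generative-function idea.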
docs/docs/concepts/generative-programming.md (7 additions, 7 deletions)
@@ -4,7 +4,7 @@ description: "The ideas behind Mellea — what generative programs are, why they
 # diataxis: explanation
 ---

-A [_generative program_](../guide/glossary#generative-program) is any program that contains calls to an LLM. This covers
+A [_generative program_](../reference/glossary#generative-program) is any program that contains calls to an LLM. This covers
 everything from a simple prompt wrapper to a complex multi-step reasoning system.
 The term is deliberately broad: what matters is not how many LLM calls a program
 makes, but the structural challenges that arise when you combine stochastic LLM
@@ -32,7 +32,7 @@ unchecked through the system.

 ## Requirements as the core tool

-The primary mechanism Mellea provides for managing stochasticity is [_requirements_](../guide/glossary#requirement).
+The primary mechanism Mellea provides for managing stochasticity is [_requirements_](../reference/glossary#requirement).
 A requirement is a validation function that checks whether an LLM output meets a
 specified criterion:

@@ -51,7 +51,7 @@ result = m.instruct(
 ```

 When the model's output fails a requirement, Mellea can retry the generation with
-feedback — the [_Instruct–Validate–Repair_ (IVR)](../guide/glossary#ivr-instruct-validate-repair) loop. This transforms a
+feedback — the [_Instruct–Validate–Repair_ (IVR)](../reference/glossary#ivr-instruct-validate-repair) loop. This transforms a
 probabilistically unreliable call into one with measurable, controllable reliability:
 set a `loop_budget` and the probability of the output satisfying your requirements
 approaches 1 as budget increases.
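The "approaches 1" claim in this hunk is ordinary arithmetic under a simplifying assumption: if each attempt is independent and passes validation with probability p, then at least one of n attempts passes with probability 1 - (1 - p)^n. The numbers below are illustrative, not taken from the docs.

```python
def pass_probability(p: float, loop_budget: int) -> float:
    """Chance that at least one of `loop_budget` independent attempts
    passes, given per-attempt pass probability `p` (a simplifying
    assumption: real retries with feedback are not independent)."""
    return 1 - (1 - p) ** loop_budget

for n in (1, 2, 5):
    print(n, round(pass_probability(0.6, n), 4))
# → 1 0.6 / 2 0.84 / 5 0.9898
```

In practice repair feedback should make later attempts better than independent draws, so this is a pessimistic lower bound on the benefit of a larger `loop_budget`.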
@@ -66,7 +66,7 @@ Not all requirements can be checked cheaply. A constraint like "this JSON is
 syntactically valid" can be verified in microseconds; a constraint like "this
 answer is grounded in the provided context" may require a second model call.

-Mellea's [sampling strategies](../guide/glossary#sampling-strategy) control how retries work:
+Mellea's [sampling strategies](../reference/glossary#sampling-strategy) control how retries work:

 - **`RejectionSamplingStrategy`** — retry until a requirement passes or the budget
   is exhausted. The simplest strategy; good for cheap validators.
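Rejection sampling in the sense above can be sketched generically. The generator and validator below are deterministic stubs for illustration; this is the shape of the loop, not Mellea's implementation.

```python
def rejection_sample(generate, validators, loop_budget):
    """Generic rejection-sampling loop: regenerate until every validator
    passes or the budget is exhausted. Returns (last_output, passed)."""
    last = None
    for _ in range(loop_budget):
        last = generate()
        if all(check(last) for check in validators):
            return last, True
    return last, False

# Stubbed "model" for illustration: fails twice, then produces a valid draft.
drafts = iter(["way too long", "still too long", "ok"])
generate = drafts.__next__
short_enough = lambda s: len(s) <= 5

print(rejection_sample(generate, [short_enough], loop_budget=5))
# → ('ok', True)
```

The budget bounds cost: with expensive (model-based) validators, each failed iteration pays for both a generation and a validation call, which is why cheap validators suit this strategy best.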
@@ -100,10 +100,10 @@ large enough to exceed model limits or degrade output quality.

 Mellea addresses this through explicit context management:

-- **[`SimpleContext`](../guide/glossary#context)** (default) resets history on each call. The model sees only
+- **[`SimpleContext`](../reference/glossary#context)** (default) resets history on each call. The model sees only
   the current instruction. This is usually the right choice for independent calls.
-- **[`ChatContext`](../guide/glossary#context)** preserves history for multi-turn conversations.
-- **[Components](../guide/glossary#component)** ([`@mify`](../guide/glossary#mify--mify), [`@generative`](../guide/glossary#generative)) encapsulate the context needed for a
+- **[`ChatContext`](../reference/glossary#context)** preserves history for multi-turn conversations.
+- **[Components](../reference/glossary#component)** ([`@mify`](../reference/glossary#mify--mify), [`@generative`](../reference/glossary#generative)) encapsulate the context needed for a
   single call, keeping context management compositional rather than global.
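The difference between the two context policies is small enough to show in a toy. The class names are borrowed from the docs, but these bodies are hypothetical illustrations, not Mellea's implementations.

```python
class SimpleContext:
    """Resets on each call: the prompt contains only the current instruction."""
    def view_for_generation(self, history, current):
        return [current]

class ChatContext:
    """Preserves history: the prompt contains all prior turns plus the current one."""
    def view_for_generation(self, history, current):
        return history + [current]

history = ["Write a haiku.", "<model output>"]
print(SimpleContext().view_for_generation(history, "Translate it."))
# → ['Translate it.']
print(ChatContext().view_for_generation(history, "Translate it."))
# → ['Write a haiku.', '<model output>', 'Translate it.']
```

With `SimpleContext`, "Translate it." would confuse the model (there is nothing to translate in the prompt), which is exactly the trade-off the docs describe: reset for independent calls, preserve for conversations.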
-`instruct()` is the primary API in Mellea. It builds a structured [`Instruction`](../guide/glossary#component)
+`instruct()` is the primary API in Mellea. It builds a structured [`Instruction`](../reference/glossary#component)
 component — not a raw chat message — with a description, requirements, user variables,
 grounding context, few-shot examples, and images. The instruction is rendered through
-[Jinja2](https://jinja.palletsprojects.com/) templates and run through an [instruct–validate–repair (IVR)](../guide/glossary#ivr-instruct-validate-repair) loop by default.
+[Jinja2](https://jinja.palletsprojects.com/) templates and run through an [instruct–validate–repair (IVR)](../reference/glossary#ivr-instruct-validate-repair) loop by default.

 ## Basic `instruct()`

@@ -23,7 +23,7 @@ print(str(email))
 # Output will vary — LLM responses depend on model and temperature.
 ```

-`instruct()` returns a [`ModelOutputThunk`](../guide/glossary#modeloutputthunk). Access the result as a string with
+`instruct()` returns a [`ModelOutputThunk`](../reference/glossary#modeloutputthunk). Access the result as a string with
 `str(email)` or via `email.value`.

 ## User variables
@@ -76,7 +76,7 @@ print(str(email))

 ## Custom validation functions

-For deterministic checks, attach a `validation_fn` to a [`Requirement`](../guide/glossary#requirement):
+For deterministic checks, attach a `validation_fn` to a [`Requirement`](../reference/glossary#requirement):

 ```python
 from mellea import start_session
@@ -129,7 +129,7 @@ print(str(email))

 ## Sampling strategies and the IVR loop

-By default, `instruct()` uses [`RejectionSamplingStrategy`](../guide/glossary#sampling-strategy)`(loop_budget=2)`: it
+By default, `instruct()` uses [`RejectionSamplingStrategy`](../reference/glossary#sampling-strategy)`(loop_budget=2)`: it
 generates once, validates all requirements, and retries up to two times if any fail.

 Configure the loop explicitly with `strategy`:
@@ -160,7 +160,7 @@ else:
 print(str(result.sample_generations[0].value))
 ```

-With `return_sampling_results=True`, `instruct()` returns a [`SamplingResult`](../guide/glossary#samplingresult) instead
+With `return_sampling_results=True`, `instruct()` returns a [`SamplingResult`](../reference/glossary#samplingresult) instead
 of a `ModelOutputThunk`. This lets you inspect whether validation passed and access
-- **[Component](../guide/glossary#component) Lifecycle** — before and after component execution (`component_pre_execute`, `component_post_success`, `component_post_error`)
+- **[Component](../reference/glossary#component) Lifecycle** — before and after component execution (`component_pre_execute`, `component_post_success`, `component_post_error`)
 - **Generation Pipeline** — before and after LLM calls (`generation_pre_call`, `generation_post_call`, `generation_error`)
-- **Validation** — before and after [requirement](../guide/glossary#requirement) checks (`validation_pre_check`, `validation_post_check`)
+- **Validation** — before and after [requirement](../reference/glossary#requirement) checks (`validation_pre_check`, `validation_post_check`)