Commit d5daf83

docs(workspace): document table-export endpoint for Snowflake and BigQuery
DMD-1332 Add a new page describing POST /v2/storage/workspaces/{id}/table-export, its request/response, supported backends (Snowflake + BigQuery), formats (CSV + Parquet + gzip), and limitations. Link to it from the Unloading Data section of the workspace index. Pairs with keboola/connection#7188.
1 parent 604c0bf commit d5daf83

2 files changed, 86 additions & 0 deletions

workspace/index.md

Lines changed: 4 additions & 0 deletions
@@ -294,6 +294,10 @@ or [File Output Mapping](/transformations/mappings/#file-output-mapping) (or bot
 Unloading data is useful, for example, when your ad-hoc analysis leads to
 valuable results, or when you trained a new model which you'd like to use in transformations.

+For Snowflake and BigQuery workspaces, you can also export a single table directly from the workspace schema to
+[File Storage](/storage/files/) via the Storage API. See
+[Exporting Workspace Tables to Files](/workspace/table-export/) for details.
+
 ### Data Persistency (beta)
 When this feature is enabled in a project, your data in workspaces can be kept. This way you can, when you return, start where you left off without losing data or time by importing the data again or executing scripts to get to the right stage.

workspace/table-export.md

Lines changed: 82 additions & 0 deletions
@@ -0,0 +1,82 @@
---
title: Exporting Workspace Tables to Files
permalink: /workspace/table-export/
---

* TOC
{:toc}

You can export a table that lives in a Snowflake or BigQuery workspace schema to [File Storage](/storage/files/)
via the Storage API. The export runs as an asynchronous storage job, and the job result contains the ID of the
exported file, which can then be downloaded like any other Keboola file.

This is useful when you prepare data in a workspace (for example, via the [SQL Editor](/workspace/sql-editor/) or a
custom integration) and need to move the resulting table out of the workspace without going through Storage
output mapping.

Currently supported backends:

- **Snowflake**
- **BigQuery**

## Endpoint

```
POST https://connection.keboola.com/v2/storage/workspaces/{workspace_id}/table-export
X-StorageApi-Token: your_token
Content-Type: application/json

{
    "tableName": "my_table",
    "fileName": "custom_export",
    "fileType": "csv",
    "gzip": true
}
```

### Request Body

| Field       | Type    | Required | Description |
|-------------|---------|----------|-------------|
| `tableName` | string  | yes      | Name of the table (or view) to export from the workspace schema. |
| `fileName`  | string  | yes      | Name that will be used for the resulting file in File Storage. |
| `fileType`  | string  | no       | Output format: `csv` (default) or `parquet`. |
| `gzip`      | boolean | no       | When `true`, the exported file is gzip-compressed. Default `false`. Ignored for Parquet. |

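As a minimal sketch, the request can be submitted from Python with the `requests` library; the stack URL, workspace ID, table name, and token below are placeholders to adjust for your project:

```python
import os

import requests

STACK_URL = "https://connection.keboola.com"  # placeholder; use your project's stack URL
WORKSPACE_ID = 123456                         # placeholder workspace ID
TOKEN = os.environ["STORAGE_API_TOKEN"]       # token with access to the workspace

# Submit the export; the API answers with HTTP 202 and an asynchronous storage job.
response = requests.post(
    f"{STACK_URL}/v2/storage/workspaces/{WORKSPACE_ID}/table-export",
    headers={"X-StorageApi-Token": TOKEN},
    json={
        "tableName": "my_table",
        "fileName": "custom_export",
        "fileType": "csv",
        "gzip": True,
    },
    timeout=60,
)
response.raise_for_status()
job = response.json()
print("Storage job id:", job["id"])  # the job can be polled as described below
```
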
### Response

The endpoint returns a standard asynchronous [storage job](/overview/#storage-jobs) with HTTP 202. When the job
finishes, its `results` contain the ID of the exported file:

```json
{
    "file": {
        "id": 12345678
    }
}
```

Download the file with the standard [file download](/integrate/storage/api/importer/#download-a-file) flow.

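As a hedged sketch of that flow, the snippet below polls the generic [storage jobs](/overview/#storage-jobs) endpoint (`GET /v2/storage/jobs/{job_id}`) and reads the file ID from the job results; the stack URL, token, and job ID are placeholders:

```python
import os
import time

import requests

STACK_URL = "https://connection.keboola.com"  # placeholder; use your project's stack URL
TOKEN = os.environ["STORAGE_API_TOKEN"]       # placeholder token
JOB_ID = 987654321                            # id returned by the table-export call

# Poll the storage job until it reaches a terminal status.
while True:
    job = requests.get(
        f"{STACK_URL}/v2/storage/jobs/{JOB_ID}",
        headers={"X-StorageApi-Token": TOKEN},
        timeout=60,
    ).json()
    if job["status"] in ("success", "error"):
        break
    time.sleep(2)

if job["status"] != "success":
    raise RuntimeError(f"Export failed: {job}")

# The job results have the shape shown above.
file_id = job["results"]["file"]["id"]
print("Exported file id:", file_id)  # use this in the file download flow
```
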
## Backend-Specific Notes

### Snowflake

- Supports **CSV** and **Parquet**.
- Works with all project file storage providers (AWS S3, Azure Blob Storage, Google Cloud Storage).

### BigQuery

- Supports **CSV** and **Parquet**.
- Available for BigQuery projects only; the exported file always lands in the project's GCS file storage.

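Both backends accept the same request; as a small sketch reusing the placeholder names above, a Parquet export only changes the payload, and `gzip` can be left out because it is ignored for Parquet:

```python
# Placeholder payload for a Parquet export; POST it to the same
# table-export endpoint shown earlier. "gzip" is omitted because the
# API ignores it for Parquet output.
parquet_payload = {
    "tableName": "my_table",
    "fileName": "custom_export",
    "fileType": "parquet",
}
```
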
## Limitations

- The workspace must be a **table workspace**. File/Python/R workspaces are not supported.
- **Reader account** workspaces cannot export data through this endpoint.
- The workspace must use a supported backend (Snowflake or BigQuery).

## API Reference

See the full request/response specification in the
[Storage API reference](https://keboolastorageapi.docs.apiary.io/#reference/workspaces/export-table-from-workspace/export-table-from-workspace).
