---
title: Exporting Workspace Tables to Files
permalink: /workspace/table-export/
---

* TOC
{:toc}

You can export a table that lives in a Snowflake or BigQuery workspace schema to [File Storage](/storage/files/)
via the Storage API. The export runs as an asynchronous storage job, and the job result contains the ID of the
exported file, which can then be downloaded like any other Keboola file.

This is useful when you build data in a workspace (for example, via the [SQL Editor](/workspace/sql-editor/) or a
custom integration) and need to move the resulting table outside of the workspace without going through Storage
output mapping.

Currently supported backends:

- **Snowflake**
- **BigQuery**

## Endpoint

```
POST https://connection.keboola.com/v2/storage/workspaces/{workspace_id}/table-export
X-StorageApi-Token: your_token
Content-Type: application/json

{
  "tableName": "my_table",
  "fileName": "custom_export",
  "fileType": "csv",
  "gzip": true
}
```

### Request Body

| Field       | Type    | Required | Description                                                                                  |
|-------------|---------|----------|----------------------------------------------------------------------------------------------|
| `tableName` | string  | yes      | Name of the table (or view) to export from the workspace schema.                            |
| `fileName`  | string  | yes      | Name used for the resulting file in File Storage.                                           |
| `fileType`  | string  | no       | Output format: `csv` (default) or `parquet`.                                                 |
| `gzip`      | boolean | no       | When `true`, the exported file is gzip-compressed. Defaults to `false`. Ignored for Parquet. |

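For illustration, a minimal request body for a Parquet export of the same (hypothetical) table could look like this;
the `gzip` flag is omitted because it is ignored for Parquet:

```json
{
  "tableName": "my_table",
  "fileName": "custom_export",
  "fileType": "parquet"
}
```
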
### Response

The endpoint returns a standard asynchronous [storage job](/overview/#storage-jobs) with HTTP 202. When the job
finishes, its `results` contain the ID of the exported file:

```json
{
  "file": {
    "id": 12345678
  }
}
```

Download the file with the standard [file download](/integrate/storage/api/importer/#download-a-file) flow.
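
Putting it together, the sketch below submits the export, polls the resulting storage job, and reads the file ID
from the job results. It is a minimal example, not a full client: it assumes the `requests` library, the US stack
URL from the example above, placeholder token and workspace ID values, and the standard storage job polling
endpoint (`GET /v2/storage/jobs/{job_id}`).

```python
import time

import requests

# Placeholder values -- replace with your stack URL, token, and workspace ID.
BASE_URL = "https://connection.keboola.com/v2/storage"
HEADERS = {"X-StorageApi-Token": "your_token"}
WORKSPACE_ID = 123456

# Submit the export; the endpoint answers with an asynchronous storage job (HTTP 202).
job = requests.post(
    f"{BASE_URL}/workspaces/{WORKSPACE_ID}/table-export",
    headers=HEADERS,
    json={"tableName": "my_table", "fileName": "custom_export", "fileType": "csv", "gzip": True},
).json()

# Poll the storage job until it finishes.
while job["status"] not in ("success", "error"):
    time.sleep(2)
    job = requests.get(f"{BASE_URL}/jobs/{job['id']}", headers=HEADERS).json()

if job["status"] == "error":
    raise RuntimeError(f"Export failed: {job}")

# The job results carry the ID of the exported file; feed it to the file
# download flow linked above to fetch the actual data.
file_id = job["results"]["file"]["id"]
print(f"Exported file ID: {file_id}")
```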

## Backend-Specific Notes

### Snowflake

- Supports **CSV** and **Parquet**.
- Works with all project file storage providers (AWS S3, Azure Blob Storage, Google Cloud Storage).

### BigQuery

- Supports **CSV** and **Parquet**.
- Available for BigQuery projects only; the exported file always lands in the project's GCS file storage.

## Limitations

- The workspace must be a **table workspace**. File/Python/R workspaces are not supported.
- **Reader account** workspaces cannot export data through this endpoint.
- The workspace must use a supported backend (Snowflake or BigQuery).

## API Reference

See the full request/response specification in the
[Storage API reference](https://keboolastorageapi.docs.apiary.io/#reference/workspaces/export-table-from-workspace/export-table-from-workspace).