4 changes: 4 additions & 0 deletions workspace/index.md
@@ -294,6 +294,10 @@ or [File Output Mapping](/transformations/mappings/#file-output-mapping) (or bot
Unloading data is useful, for example, when your ad-hoc analysis produces
valuable results, or when you've trained a new model that you'd like to use in transformations.

For Snowflake and BigQuery workspaces, you can also export a single table directly from the workspace schema to
[File Storage](/storage/files/) via the Storage API. See
[Exporting Workspace Tables to Files](/workspace/table-export/) for details.

### Data Persistency (beta)
When this feature is enabled in a project, data in your workspaces is preserved. When you return, you can pick up where you left off without re-importing data or re-running scripts to rebuild your working state.

82 changes: 82 additions & 0 deletions workspace/table-export.md
@@ -0,0 +1,82 @@
---
title: Exporting Workspace Tables to Files
permalink: /workspace/table-export/
---

* TOC
{:toc}

You can export a table that lives in a Snowflake or BigQuery workspace schema to [File Storage](/storage/files/)
via the Storage API. The export runs as an asynchronous storage job, and the job result contains the file ID of the
exported file, which can then be downloaded like any other Keboola file.

This is useful when you build data in a workspace (for example, via the [SQL Editor](/workspace/sql-editor/) or a
custom integration) and need to move the resulting table outside of the workspace without going through Storage
output mapping.

Currently supported backends:

- **Snowflake**
- **BigQuery**

## Endpoint

```
POST https://connection.keboola.com/v2/storage/workspaces/{workspace_id}/table-export
X-StorageApi-Token: your_token
Content-Type: application/json

{
"tableName": "my_table",
"fileName": "custom_export",
"fileType": "csv",
"gzip": true
}
```
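As a sketch, the request above can be assembled and sent from Python using only the standard library (the stack URL, workspace ID, and token below are placeholders; adjust them for your project):

```python
import json
import urllib.request

def build_export_request(stack_url, workspace_id, token, table_name,
                         file_name, file_type="csv", gzip=False):
    """Assemble the table-export request without sending it, so the
    pieces can be inspected or passed to any HTTP client."""
    url = f"{stack_url}/v2/storage/workspaces/{workspace_id}/table-export"
    headers = {
        "X-StorageApi-Token": token,
        "Content-Type": "application/json",
    }
    payload = {
        "tableName": table_name,   # table (or view) in the workspace schema
        "fileName": file_name,     # name of the resulting file in File Storage
        "fileType": file_type,     # "csv" (default) or "parquet"
        "gzip": gzip,              # ignored for Parquet
    }
    return url, headers, payload

# Placeholder stack URL, workspace ID, and token:
url, headers, payload = build_export_request(
    "https://connection.keboola.com", "1234", "your_token",
    table_name="my_table", file_name="custom_export", gzip=True)
req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                             headers=headers, method="POST")
# urllib.request.urlopen(req)  # responds with HTTP 202 and a storage job
```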

### Request Body

| Field | Type | Required | Description |
|-------------|---------|----------|--------------------------------------------------------------------------------------------|
| `tableName` | string | yes | Name of the table (or view) to export from the workspace schema. |
| `fileName` | string | yes | Name that will be used for the resulting file in File Storage. |
| `fileType` | string | no | Output format: `csv` (default) or `parquet`. |
| `gzip` | boolean | no | When `true`, the exported file is gzip-compressed. Default `false`. Ignored for Parquet. |

### Response

The endpoint returns a standard asynchronous [storage job](/overview/#storage-jobs) with HTTP 202. When the job
finishes, its `results` contain the ID of the exported file:

```json
{
"file": {
"id": 12345678
}
}
```

Download the file with the standard [file download](/integrate/storage/api/importer/#download-a-file) flow.
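A minimal polling sketch for the asynchronous job, assuming a finished job carries `results.file.id` as shown above (the `fetch_job` callable stands in for a real HTTP client that retrieves the job by ID; the status values `success` and `error` are the standard storage job states):

```python
import time

def wait_for_export(fetch_job, job_id, poll_seconds=2.0, timeout=300.0):
    """Poll a storage job until it finishes and return the exported
    file ID from its results.

    `fetch_job(job_id)` must return the job as a dict; injecting a
    callable keeps the polling logic independent of the HTTP client.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_job(job_id)
        status = job.get("status")
        if status == "success":
            return job["results"]["file"]["id"]
        if status == "error":
            raise RuntimeError(f"Export job {job_id} failed: {job.get('error')}")
        time.sleep(poll_seconds)
    raise TimeoutError(f"Export job {job_id} did not finish in {timeout}s")
```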

## Backend-Specific Notes

### Snowflake

- Supports **CSV** and **Parquet**.
- Works with all project file storage providers (AWS S3, Azure Blob Storage, Google Cloud Storage).

### BigQuery

- Supports **CSV** and **Parquet**.
- Available for BigQuery projects only; the exported file always lands in the project's GCS file storage.

## Limitations

- The workspace must be a **table workspace**. File/Python/R workspaces are not supported.
- **Reader account** workspaces cannot export data through this endpoint.
- The workspace must use a supported backend (Snowflake or BigQuery).

## API Reference

See the full request/response specification in the
[Storage API reference](https://keboolastorageapi.docs.apiary.io/#reference/workspaces/export-table-from-workspace/export-table-from-workspace).