Commit 1935ab7

Merge pull request #6709 from keboola/vb/DMD-1057/remove_rs_prod
refactor: remove Redshift from production code (DMD-1057 Phase 3)
2 parents (b70db64 + 686c548), commit 1935ab7

1 file changed: 9 additions & 100 deletions

File changed: apiary.apib
@@ -589,7 +589,6 @@ and the associated administrator (`admin`) with organization (if any). If the pr
     "rowsCount": 1036,
     "hasMysql": false,
     "hasSynapse": false,
-    "hasRedshift": false,
     "hasSnowflake": true,
     "hasExasol": false,
     "hasTeradata": false,
@@ -742,7 +741,6 @@ This call can be executed by all tokens.
     "rowsCount": 1036,
     "hasMysql": false,
     "hasSynapse": false,
-    "hasRedshift": false,
     "hasSnowflake": true,
     "hasExasol": false,
     "hasTeradata": false,
@@ -958,8 +956,8 @@ the KBC Component architecture, see the [Developers documentation](https://devel
 # Group Buckets
 [Buckets](https://help.keboola.com/storage/buckets/) are containers for one or more data tables.
 Access to buckets can be limited by access tokens. Each bucket has a *backend* in which all tables are created:
-- Snowlake (default)
-- Redshift
+- Snowflake (default)
+- BigQuery
 
 ## Create or List Buckets [/v2/storage/buckets]
 ### List all buckets [GET /v2/storage/buckets?include={include}]
@@ -1063,7 +1061,6 @@ existing bucket from another project (see below).
     + backend (optional, enum[string]) - Bucket backend type; the default value is determined by the project settings.
         + Members
            + snowflake
-           + redshift
            + bigquery
     + displayName (optional) - Bucket displayName, this name is displayed in UI and can be changed anytime; only alphanumeric characters,underscores and dashes are allowed.
     + color (optional) - Bucket color. Accept valid CSS values for colors.
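With `redshift` gone from the enum above, a bucket-creation payload can only name the remaining backends. A minimal sketch of assembling such a payload; the helper name is ours, `name`/`stage` are illustrative fields, and only the `backend` member list is taken from the hunk:

```python
import json

# Backends still accepted by the Create Bucket call after this change;
# "redshift" has been removed from the enum (names from the hunk above).
ALLOWED_BACKENDS = {"snowflake", "bigquery"}

def build_create_bucket_payload(name, stage="in", backend=None):
    """Build a Create Bucket payload; backend is optional and falls back
    to the project default when omitted (per the attribute description)."""
    payload = {"name": name, "stage": stage}
    if backend is not None:
        if backend not in ALLOWED_BACKENDS:
            # Requests naming "redshift" would now be rejected server-side.
            raise ValueError("unsupported backend: " + backend)
        payload["backend"] = backend
    return payload

print(json.dumps(build_create_bucket_payload("my-bucket", backend="snowflake")))
```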
@@ -3133,7 +3130,7 @@ To enable compression of API response traffic, please include the following HTTP
            + json - will return JSON formatted output
         + Default: rfc
     + whereFilters (optional, array[WhereFiltersObject])
-    + orderBy (optional, array[OrderByObject]) - Not supported for Redshift
+    + orderBy (optional, array[OrderByObject])
     + fulltextSearch (optional, string) - Makes fulltext search over all data in table. It cannot be combined with `whereFilters`. Snowflake/BigQuery only
 
 + Request with default `rfc` format (`CSV` data)
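After this hunk, `orderBy` carries no backend restriction. A hypothetical helper assembling the export parameters; the field names (`format`, `whereFilters`, `orderBy`, `fulltextSearch`) and the "cannot be combined" constraint come from the documented attributes above, while the validation logic and example values are ours:

```python
import json

def build_export_params(where_filters=None, order_by=None, fulltext=None):
    """Assemble async-export parameters as documented above (illustrative)."""
    if fulltext is not None and where_filters:
        # The docs state fulltextSearch cannot be combined with whereFilters.
        raise ValueError("fulltextSearch cannot be combined with whereFilters")
    params = {"format": "rfc"}  # rfc is the documented default format
    if where_filters:
        params["whereFilters"] = where_filters
    if order_by:
        # No backend restriction on orderBy after this commit.
        params["orderBy"] = order_by
    if fulltext is not None:
        params["fulltextSearch"] = fulltext
    return params

print(json.dumps(build_export_params(order_by=[{"column": "id", "order": "ASC"}])))
```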
@@ -3407,7 +3404,7 @@ will show information about the created file and caching:
            + ne - Not equals - can be used with multiple values.
         + Default: eq
     + whereFilters (optional, array[WhereFiltersObject])
-    + orderBy (optional, array[OrderByObject]) - Not supported for Redshift
+    + orderBy (optional, array[OrderByObject])
     + gzip (optional, boolean) - The response will be gzipped if set to true.
     + includeInternalTimestamp (optional, boolean) - Include internal _timestamp column in the exported data (column is included even when not in columns list). This is timestamp of row last change. (Available only for Snowflake)
     + fileType (optional, enum[string]) - Type of the file to be created in File Storage.
@@ -3835,51 +3832,6 @@ Use the optional `force` parameter to delete its aliases too.
 
 
 
-## Table Optimize [/v2/storage/tables/{table_id}/optimize]
-
-### Optimize table [POST]
-This is a utility command implemented only for projects with the Redshift backend.
-Redshift tables with a lot of small-increment loads bloat in size. An optimize command is automatically scheduled to fix
-this issue. This API call can be used to trigger immediate optimization of a table.
-
-+ Parameters
-    + table_id (required) - Table Id
-
-+ Request
-    + Headers
-
-            X-StorageApi-Token: your_token
-
-+ Response 200 (application/json)
-
-        {
-            "id": 245,
-            "status": "waiting",
-            "url": "/v2/storage/jobs/245",
-            "tableId": "in.c-API-tests.MyLanguages_test",
-            "operationName": "tableOptimize",
-            "operationParams": {
-                "queue": "main_fast"
-            },
-            "createdTime": "2016-10-17T10:31:52+0200",
-            "startTime": null,
-            "endTime": null,
-            "runId": null,
-            "results": null,
-            "creatorToken": {
-                "id": "31",
-                "description": "dev@keboola.com"
-            },
-            "metrics": {
-                "inCompressed": false,
-                "inBytes": 0,
-                "inBytesUncompressed": 0,
-                "outCompressed": false,
-                "outBytes": 0,
-                "outBytesUncompressed": 0
-            }
-        }
-
 
 ## List Tables [/v2/storage/tables?include={include}]
 
@@ -4040,7 +3992,7 @@ attributes and information about the containing bucket.
 ### Add Column to Table [POST]
 Adds a new column to an existing table. This request is [asynchronous](#introduction/synchronous-and-asynchronous-calls).
 
-*Attribute definition and basetype is allowed (and required) for typed tables created via [table definition](#reference/tables/create-table-definition)*. Redshift backend is not supported.
+*Attribute definition and basetype is allowed (and required) for typed tables created via [table definition](#reference/tables/create-table-definition)*.
 
 + Parameters
     + table_id (required) - Table Id
@@ -4252,7 +4204,7 @@ This request is [asynchronous](#introduction/synchronous-and-asynchronous-calls)
         + table (string) - Exact name of table present in the workspace when doing this API call
         + column (string) - Name of the column, which values (all rows) will be used as values to DELETE rows by. Datatype of this column has to match with datatype of the column specified in root of this API call.
 
-    + dataType (optional, enum[string]) - Not supported for Redshift - for comparing (`[gt|lt|le|ge]`) numeric values you have to specify data type. BigQuery supports all listed types and converts them to BigQuery equivalent in the background
+    + dataType (optional, enum[string]) - For comparing (`[gt|lt|le|ge]`) numeric values you have to specify data type. BigQuery supports all listed types and converts them to BigQuery equivalent in the background
         + Members
            + INTEGER - for numbers without a decimal point (Snowflake, BigQuery)
            + DOUBLE - for number with a decimal point (Snowflake, BigQuery)
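The rule above (numeric `gt`/`lt`/`ge`/`le` comparisons need an explicit `dataType`) now applies uniformly, with no Redshift carve-out. A sketch of building such a comparison attribute; the dict shape and helper are illustrative, while the operator and type member names come from the docs:

```python
# Operators that compare magnitudes and therefore need an explicit dataType.
NUMERIC_OPERATORS = {"gt", "lt", "ge", "le"}

def comparison(column, operator, values, data_type=None):
    """Build a comparison attribute set (illustrative shape, documented names)."""
    if operator in NUMERIC_OPERATORS and data_type is None:
        # Per the docs, numeric comparisons require a declared data type,
        # e.g. INTEGER or DOUBLE.
        raise ValueError("numeric comparison requires an explicit dataType")
    attrs = {
        "column": column,
        "operator": operator,
        "values": [str(v) for v in values],
    }
    if data_type is not None:
        attrs["dataType"] = data_type
    return attrs

print(comparison("rowsCount", "gt", [1000], data_type="INTEGER"))
```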
@@ -4844,7 +4796,6 @@ NOTE: To create reader workspace, use Create Configuration Workspaces endpoint i
 + Attributes
     + backend: snowflake (optional, enum[string]) - Workspace backend. When omitted, the default backend is used.
         + Members
-           + redshift
            + snowflake
            + bigquery
            + abs - Azure blob storage file workspace
@@ -4926,7 +4877,6 @@ Creates a new workspace for [Development Branch](#reference/development-branches
 + Attributes
     + backend: snowflake (optional, enum[string]) - Workspace backend. When omitted, the default backend is used.
         + Members
-           + redshift
            + snowflake
            + bigquery
            + abs - Azure blob storage file workspace
@@ -5124,20 +5074,7 @@ Loads tables from Storage into a Workspace. BigQuery supports only loading as vi
         Default: true
     + convertEmptyValuesToNull (optional, boolean) - Empty values replaced by NULL (ignored for tables with types)
         Default: false
-    + compression (optional, enum[string]) - For Redshift only
-        + Members
-           + RAW
-           + BYTEDICT
-           + DELTA
-           + DELTA32K
-           + LZO
-           + MOSTLY8
-           + MOSTLY16
-           + MOSTLY32
-           + RUNLENGTH
-           + TEXT255
-           + TEXT32K
-           + ZSTD
+
     + whereColumn (optional) - **Deprecated** (use whereFilters instead). Column for [filtering](#reference/tables/unload-data-asynchronously/asynchronous-export)
     + whereValues[] (optional) - **Deprecated** (use whereFilters instead). Values for filtering
     + whereOperator (optional, enum[string]) - **Deprecated** (use whereFilters instead). Comparison operator
@@ -5146,13 +5083,6 @@ Loads tables from Storage into a Workspace. BigQuery supports only loading as vi
            + ne - Not equal to
         + Default: eq
     + whereFilters (optional, array[WhereFiltersObject])
-    + sortKey[] (optional, array) - Redshift only - Column(s) to be used as a sort key
-    + distStyle (optional, enum[string]) - Redshift only - Distribution style (even, all, or key)
-        + Members
-           + even
-           + all
-           + key
-    + distKey (optional) - Redshift only - Column to use for the key distribution style
     + incremental (optional, boolean) - Rows will be appended to an existing table, not supported in file workspaces (load is always full)
         Default: false
     + overwrite (optional, boolean) - When preserve is true duplicate tables will be overwritten
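With the two hunks above, a workspace input-mapping entry no longer accepts the Redshift-only tuning keys (`compression`, `sortKey`, `distStyle`, `distKey`). An illustrative sketch of such an entry; the key names and defaults are from the remaining documented attributes, the helper and the example table ID are made up:

```python
def build_load_input(source, destination, incremental=False, overwrite=False):
    """Build one Load Tables input-mapping entry (illustrative sketch)."""
    return {
        "source": source,            # table ID in Storage
        "destination": destination,  # table name inside the workspace
        "incremental": incremental,  # append rows instead of a full load
        "overwrite": overwrite,      # replace duplicates when preserve is true
    }

# Hypothetical table ID; no Redshift tuning keys remain in the entry.
print(build_load_input("in.c-main.customers", "customers"))
```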
@@ -5272,20 +5202,7 @@ Loads tables from Storage into a Workspace in [Development Branch](#reference/de
         Default: true
     + convertEmptyValuesToNull (optional, boolean) - Empty values replaced by NULL
         Default: false
-    + compression (optional, enum[string]) - For Redshift only
-        + Members
-           + RAW
-           + BYTEDICT
-           + DELTA
-           + DELTA32K
-           + LZO
-           + MOSTLY8
-           + MOSTLY16
-           + MOSTLY32
-           + RUNLENGTH
-           + TEXT255
-           + TEXT32K
-           + ZSTD
+
     + whereColumn (optional) - **Deprecated** (use whereFilters instead). Column for [filtering](#reference/tables/unload-data-asynchronously/asynchronous-export)
     + whereValues[] (optional) - **Deprecated** (use whereFilters instead). Values for filtering
     + whereOperator (optional, enum[string]) - **Deprecated** (use whereFilters instead). Comparison operator
@@ -5294,13 +5211,6 @@ Loads tables from Storage into a Workspace in [Development Branch](#reference/de
            + ne - Not equal to
         + Default: eq
     + whereFilters (optional, array[WhereFiltersObject])
-    + sortKey[] (optional, array) - Redshift only - Column(s) to be used as a sort key
-    + distStyle (optional, enum[string]) - Redshift only - Distribution style (even, all, or key)
-        + Members
-           + even
-           + all
-           + key
-    + distKey (optional) - Redshift only - Column to use for the key distribution style
     + incremental (optional, boolean) - Rows will be appended to an existing table, not supported in file workspaces (load is always full)
         Default: false
     + overwrite (optional, boolean) - When preserve is true duplicate tables will be overwritten
@@ -8977,7 +8887,6 @@ Creates a new workspace for an existing configuration in [Development Branch](#r
 + Attributes
     + backend: snowflake (optional, enum[string]) - Workspace backend. When omitted, the default backend is used.
         + Members
-           + redshift
            + snowflake
            + bigquery
            + abs - Azure blob storage file workspace
@@ -10422,7 +10331,7 @@ Requesting changes will remove all approvals and move the Merge request to `deve
            + le - Less than or equals - Snowflake only
         + Default: eq
     + values (required, array[string]) - array of variables to compare
-    + dataType (optional, enum[string]) - Not supported for Redshift - for comparing (`[gt|lt|le|ge]`) numeric values you have to specify data type. BigQuery supports all listed types and converts them to BigQuery equivalent in the background
+    + dataType (optional, enum[string]) - For comparing (`[gt|lt|le|ge]`) numeric values you have to specify data type. BigQuery supports all listed types and converts them to BigQuery equivalent in the background
         + Members
            + INTEGER - for numbers without a decimal point (Snowflake, BigQuery)
            + DOUBLE - for number with a decimal point (Snowflake, BigQuery)
