apiary.apib (9 additions, 100 deletions)
Original file line number
Diff line number
Diff line change
@@ -589,7 +589,6 @@ and the associated administrator (`admin`) with organization (if any). If the pr
             "rowsCount":1036,
             "hasMysql":false,
             "hasSynapse":false,
-            "hasRedshift":false,
             "hasSnowflake":true,
             "hasExasol":false,
             "hasTeradata":false,
@@ -742,7 +741,6 @@ This call can be executed by all tokens.
             "rowsCount":1036,
             "hasMysql":false,
             "hasSynapse":false,
-            "hasRedshift":false,
             "hasSnowflake":true,
             "hasExasol":false,
             "hasTeradata":false,
@@ -958,8 +956,8 @@ the KBC Component architecture, see the [Developers documentation](https://devel
 # Group Buckets
 [Buckets](https://help.keboola.com/storage/buckets/) are containers for one or more data tables.
 Access to buckets can be limited by access tokens. Each bucket has a *backend* in which all tables are created:
--Snowlake (default)
--Redshift
+-Snowflake (default)
+-BigQuery
 
 ## Create or List Buckets [/v2/storage/buckets]
 ### List all buckets [GET /v2/storage/buckets?include={include}]
@@ -1063,7 +1061,6 @@ existing bucket from another project (see below).
     + backend (optional, enum[string]) - Bucket backend type; the default value is determined by the project settings.
         + Members
             + snowflake
-            + redshift
             + bigquery
     + displayName (optional) - Bucket displayName, this name is displayed in UI and can be changed anytime; only alphanumeric characters,underscores and dashes are allowed.
     + color (optional) - Bucket color. Accept valid CSS values for colors.
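The bucket-creation parameters above map onto a simple form-encoded request body. A minimal sketch, assuming the usual form encoding of `POST /v2/storage/buckets`; the `stage` field and the helper name are illustrative and not part of this diff:

```python
from urllib.parse import urlencode

def create_bucket_body(name, stage="in", backend=None, display_name=None):
    """Build a form body for creating a bucket.

    `backend` may be omitted (the project default is used), otherwise it
    must be one of the remaining enum members: "snowflake" or "bigquery"
    ("redshift" was removed in this change).
    """
    fields = {"name": name, "stage": stage}
    if backend is not None:
        if backend not in ("snowflake", "bigquery"):
            raise ValueError(f"unsupported backend: {backend}")
        fields["backend"] = backend
    if display_name is not None:
        fields["displayName"] = display_name
    return urlencode(fields)

# Example: a Snowflake-backed input bucket
body = create_bucket_body("my-bucket", backend="snowflake")
```

Validating the enum client-side is a design choice, not an API requirement; the server enforces the member list either way.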
@@ -3133,7 +3130,7 @@ To enable compression of API response traffic, please include the following HTTP
-    + orderBy (optional, array[OrderByObject]) - Not supported for Redshift
+    + orderBy (optional, array[OrderByObject])
     + gzip (optional, boolean) - The response will be gzipped if set to true.
     + includeInternalTimestamp (optional, boolean) - Include internal _timestamp column in the exported data (column is included even when not in columns list). This is timestamp of row last change. (Available only for Snowflake)
     + fileType (optional, enum[string]) - Type of the file to be created in File Storage.
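The export parameters above translate into a small JSON body. A sketch under the assumption that only non-default values are sent; the parameter names are the ones listed in the hunk, while the helper itself is illustrative:

```python
import json

def export_params(columns=None, order_by=None, gzip=False,
                  include_internal_timestamp=False, file_type=None):
    """Assemble a JSON body for an async table export.

    Defaults are omitted so the server applies its own. Note that
    orderBy is now accepted on every backend (the Redshift restriction
    was dropped in this change); includeInternalTimestamp remains
    Snowflake-only.
    """
    params = {}
    if columns:
        params["columns"] = columns
    if order_by:
        params["orderBy"] = order_by
    if gzip:
        params["gzip"] = True
    if include_internal_timestamp:
        params["includeInternalTimestamp"] = True
    if file_type:
        params["fileType"] = file_type
    return json.dumps(params)
```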
@@ -3835,51 +3832,6 @@ Use the optional `force` parameter to delete its aliases too.
-This is a utility command implemented only for projects with the Redshift backend.
-Redshift tables with a lot of small-increment loads bloat in size. An optimize command is automatically scheduled to fix
-this issue. This API call can be used to trigger immediate optimization of a table.
-
-+ Parameters
-    + table_id (required) - Table Id
-
-+ Request
-    + Headers
-
-            X-StorageApi-Token: your_token
-
-+ Response 200 (application/json)
-
-        {
-            "id": 245,
-            "status": "waiting",
-            "url": "/v2/storage/jobs/245",
-            "tableId": "in.c-API-tests.MyLanguages_test",
-            "operationName": "tableOptimize",
-            "operationParams": {
-                "queue": "main_fast"
-            },
-            "createdTime": "2016-10-17T10:31:52+0200",
-            "startTime": null,
-            "endTime": null,
-            "runId": null,
-            "results": null,
-            "creatorToken": {
-                "id": "31",
-                "description": "dev@keboola.com"
-            },
-            "metrics": {
-                "inCompressed": false,
-                "inBytes": 0,
-                "inBytesUncompressed": 0,
-                "outCompressed": false,
-                "outBytes": 0,
-                "outBytesUncompressed": 0
-            }
-        }
-
 
 ## List Tables [/v2/storage/tables?include={include}]
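The deleted response body above is the generic asynchronous-job document that other Storage calls still return (a `status` of `waiting`, then a terminal state, with the job resource reachable at `url`). A hypothetical polling sketch over such a document; `fetch_job` is a caller-supplied stand-in for an HTTP GET of the job's `url`, and the non-terminal status names are an assumption drawn from the example, not a full enumeration:

```python
import time

def wait_for_job(fetch_job, job, poll_seconds=1, timeout=60):
    """Poll an async Storage job document until it leaves a
    non-terminal state.

    fetch_job(url) must GET the job resource and return parsed JSON;
    how it authenticates (X-StorageApi-Token header) is up to the caller.
    """
    deadline = time.time() + timeout
    while job["status"] in ("waiting", "processing"):
        if time.time() > deadline:
            raise TimeoutError(f"job {job['id']} did not finish")
        time.sleep(poll_seconds)
        job = fetch_job(job["url"])
    return job
```

Separating the HTTP call out as `fetch_job` keeps the loop testable without a live project.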
@@ -4040,7 +3992,7 @@ attributes and information about the containing bucket.
 ### Add Column to Table [POST]
 Adds a new column to an existing table. This request is [asynchronous](#introduction/synchronous-and-asynchronous-calls).
 
-*Attribute definition and basetype is allowed (and required) for typed tables created via [table definition](#reference/tables/create-table-definition)*. Redshift backend is not supported.
+*Attribute definition and basetype is allowed (and required) for typed tables created via [table definition](#reference/tables/create-table-definition)*.
 
 + Parameters
     + table_id (required) - Table Id
@@ -4252,7 +4204,7 @@ This request is [asynchronous](#introduction/synchronous-and-asynchronous-calls)
     + table (string) - Exact name of table present in the workspace when doing this API call
     + column (string) - Name of the column, which values (all rows) will be used as values to DELETE rows by. Datatype of this column has to match with datatype of the column specified in root of this API call.
 
-    + dataType (optional, enum[string]) - Not supported for Redshift - for comparing (`[gt|lt|le|ge]`) numeric values you have to specify data type. BigQuery supports all listed types and converts them to BigQuery equivalent in the background
+    + dataType (optional, enum[string]) - For comparing (`[gt|lt|le|ge]`) numeric values you have to specify data type. BigQuery supports all listed types and converts them to BigQuery equivalent in the background
         + Members
             + INTEGER - for numbers without a decimal point (Snowflake, BigQuery)
             + DOUBLE - for number with a decimal point (Snowflake, BigQuery)
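The comparison rule above (numeric operators require a `dataType`) can be made concrete with a small builder. This is a sketch, not the exact request envelope: the field names follow the parameters listed here, but the surrounding structure of the delete-rows call is an assumption:

```python
def delete_rows_filter(column, operator, values, data_type=None):
    """Build one filter entry for a delete-rows style call.

    For the numeric comparison operators (gt/lt/ge/le) a dataType such
    as INTEGER or DOUBLE must be supplied, as documented above; for
    equality-style operators it may be omitted.
    """
    entry = {"column": column, "operator": operator, "values": values}
    if operator in ("gt", "lt", "ge", "le"):
        if data_type is None:
            raise ValueError("numeric comparison requires dataType")
        entry["dataType"] = data_type
    return entry
```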
@@ -4844,7 +4796,6 @@ NOTE: To create reader workspace, use Create Configuration Workspaces endpoint i
 + Attributes
     + backend: snowflake (optional, enum[string]) - Workspace backend. When omitted, the default backend is used.
         + Members
-            + redshift
            + snowflake
             + bigquery
             + abs - Azure blob storage file workspace
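Since `redshift` is no longer an enum member, a client can validate the remaining workspace backends before sending the create call. A sketch; the helper and its strictness are illustrative, and omitting `backend` defers to the project default exactly as the attribute description says:

```python
# Remaining members of the workspace backend enum ("redshift" removed)
WORKSPACE_BACKENDS = {"snowflake", "bigquery", "abs"}

def workspace_attributes(backend=None):
    """Build the attribute set for creating a workspace.

    Returning an empty dict when backend is None lets the server pick
    the project's default backend.
    """
    attrs = {}
    if backend is not None:
        if backend not in WORKSPACE_BACKENDS:
            raise ValueError(f"unknown workspace backend: {backend}")
        attrs["backend"] = backend
    return attrs
```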
@@ -4926,7 +4877,6 @@ Creates a new workspace for [Development Branch](#reference/development-branches
 + Attributes
     + backend: snowflake (optional, enum[string]) - Workspace backend. When omitted, the default backend is used.
         + Members
-            + redshift
             + snowflake
             + bigquery
             + abs - Azure blob storage file workspace
@@ -5124,20 +5074,7 @@ Loads tables from Storage into a Workspace. BigQuery supports only loading as vi
       Default: true
     + convertEmptyValuesToNull (optional, boolean) - Empty values replaced by NULL (ignored for tables with types)
       Default: false
-    + compression (optional, enum[string]) - For Redshift only
-    + sortKey[] (optional, array) - Redshift only - Column(s) to be used as a sort key
-    + distStyle (optional, enum[string]) - Redshift only - Distribution style (even, all, or key)
-        + Members
-            + even
-            + all
-            + key
-    + distKey (optional) - Redshift only - Column to use for the key distribution style
     + incremental (optional, boolean) - Rows will be appended to an existing table, not supported in file workspaces (load is always full)
       Default: false
     + overwrite (optional, boolean) - When preserve is true duplicate tables will be overwritten
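The surviving load options can be sketched as one input-mapping entry. The `source`/`destination` envelope follows the usual shape of workspace load calls, but treat the helper as illustrative rather than the exact wire format:

```python
def workspace_load_input(source, destination, incremental=False,
                         overwrite=False, convert_empty_to_null=False):
    """Build one input entry for loading a Storage table into a workspace.

    The Redshift-specific options removed above (compression, sortKey,
    distStyle, distKey) are deliberately absent; only the remaining
    flags are modeled, and defaults are omitted from the payload.
    """
    entry = {"source": source, "destination": destination}
    if incremental:
        entry["incremental"] = True   # append instead of a full load
    if overwrite:
        entry["overwrite"] = True     # with preserve=true, replace duplicates
    if convert_empty_to_null:
        entry["convertEmptyValuesToNull"] = True
    return entry
```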
@@ -8977,7 +8887,6 @@ Creates a new workspace for an existing configuration in [Development Branch](#r
 + Attributes
     + backend: snowflake (optional, enum[string]) - Workspace backend. When omitted, the default backend is used.
         + Members
-            + redshift
             + snowflake
             + bigquery
             + abs - Azure blob storage file workspace
@@ -10422,7 +10331,7 @@ Requesting changes will remove all approvals and move the Merge request to `deve
             + le - Less than or equals - Snowflake only
           Default: eq
     + values (required, array[string]) - array of variables to compare
-    + dataType (optional, enum[string]) - Not supported for Redshift - for comparing (`[gt|lt|le|ge]`) numeric values you have to specify data type. BigQuery supports all listed types and converts them to BigQuery equivalent in the background
+    + dataType (optional, enum[string]) - For comparing (`[gt|lt|le|ge]`) numeric values you have to specify data type. BigQuery supports all listed types and converts them to BigQuery equivalent in the background
         + Members
             + INTEGER - for numbers without a decimal point (Snowflake, BigQuery)
             + DOUBLE - for number with a decimal point (Snowflake, BigQuery)