
Commit d4ce6ed

devin-ai-integration[bot] authored and Jakub Kotek committed
docs: remove references to Synapse and Redshift backends
Synapse and Redshift are no longer offered as transformation backends. This removes their mentions from the overview, transformations, mappings, tutorial, architecture guide, and project limits pages. Writer/connector documentation for Synapse and Redshift is preserved as those are external database targets, not Keboola backends.

Co-Authored-By: Jakub Kotek <jakub.kotek@keboola.com>
1 parent b51eb45 · commit d4ce6ed

6 files changed: 16 additions & 25 deletions


management/project/limits/index.md

Lines changed: 1 addition & 5 deletions
@@ -120,11 +120,7 @@ The platform limits may be **soft** limits or **hard** limits. They are also lik
 development continues and often can be mitigated by a good project design. Contact us for advice if you are
 concerned about any of them!
 
-For example, the [Redshift backend](/storage/#backend-properties) allows the maximum table cell size of 64kB. This
-is a hard limit and nothing can be done about it as long as Redshift is a hard requirement (the Snowflake backend
-can take larger cells).
-
-As another example, you should not have more than 200 tables in a single bucket. This is a soft limit related to
+For example, you should not have more than 200 tables in a single bucket. This is a soft limit related to
 how we believe the Storage component should be used. Nothing prevents you from exceeding that limit but the
 component performance may degrade.
 

overview/index.md

Lines changed: 5 additions & 5 deletions
@@ -11,7 +11,7 @@ Designed for data engineers, analysts, and scientists, Keboola **simplifies data
 
 Key features of Keboola:
 - **Data Integration:** Effortlessly extract data from various sources like databases, cloud services, and APIs. Load it seamlessly into destinations of your choice for comprehensive analysis.
-- **Data Storage:** Use Keboola's robust data warehousing (Snowflake, BigQuery, Redshift, Synapse, etc.) for secure and accessible data storage.
+- **Data Storage:** Use Keboola's robust data warehousing (Snowflake, BigQuery, etc.) for secure and accessible data storage.
 - **Data Manipulation:** With our extensive toolset, clean, enrich, and transform your data using SQL, Python, R, and more directly within Keboola.
 - **Automation:** Automate your data workflows end-to-end with Keboola's intuitive Flows, saving time and reducing manual errors.
 
@@ -23,7 +23,7 @@ Keboola supports various deployment models to suit your specific needs:
 
 - **Fully Managed:** Let us handle everything for you.
 - **Multi-Tenant:** Let us fully manage and maintain all resources.
-- **Multi-Tenant with BYO Database:** Use your data storage (Snowflake, BigQuery, Redshift, Synapse, etc.) while we manage the rest.
+- **Multi-Tenant with BYO Database:** Use your data storage (Snowflake, BigQuery, etc.) while we manage the rest.
 - **Single-Tenant:** Deploy Keboola in your cloud environment (AWS, Azure, GCP) for maximum control and security.
 
 ## Keboola Architecture
@@ -47,13 +47,13 @@ to gather data from various sources. They can connect to APIs of external servic
 - [Table Storage](https://help.keboola.com/storage/tables/), where all data tables are organized into buckets, further categorized into in and out stages.
 
 This component acts as a middle layer that works with various [backend](/transformations/#backends) database systems like
-[Snowflake](https://www.snowflake.com/), [Redshift](https://aws.amazon.com/redshift/), [BigQuery](https://cloud.google.com/bigquery/),
-[Synapse](https://azure.microsoft.com/en-us/services/synapse-analytics/), [and others](https://help.keboola.com/transformations/#backends). It provides a key Storage API for working with data,
+[Snowflake](https://www.snowflake.com/), [BigQuery](https://cloud.google.com/bigquery/),
+[and others](https://help.keboola.com/transformations/#backends). It provides a key Storage API for working with data,
 making it easier to connect with other parts of the system and third-party applications.
 
 ### Transformations & Workspaces
 [Transformations](/transformations/) allow you to manipulate data in your project. They are the tasks you want to perform and enable you to write custom scripts
-in [SQL](https://en.wikipedia.org/wiki/SQL) (Snowflake, Redshift, BigQuery, etc.), dbt, [Python](https://www.python.org/about/),
+in [SQL](https://en.wikipedia.org/wiki/SQL) (Snowflake, BigQuery, etc.), dbt, [Python](https://www.python.org/about/),
 and [R](https://www.r-project.org/about.html).
 
 All transformations operate on a copy of Storage data in an isolated environment — a [workspace](/workspace/), guaranteeing safety for your
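
Side note for readers of this hunk: the Storage API it mentions is the programmatic entry point to Keboola Storage. As a minimal sketch of what a call looks like, assuming the `requests` library, a placeholder token, and the US-region `connection.keboola.com` endpoint (adjust for your stack), listing the buckets that hold these tables is a single authenticated GET:

```python
import requests

# Placeholders: substitute your project's Storage API token and regional endpoint.
STORAGE_API_TOKEN = "your-storage-api-token"
BASE_URL = "https://connection.keboola.com/v2/storage"

# List all buckets in the project; tables are organized into buckets,
# categorized into in and out stages (as the hunk above describes).
resp = requests.get(
    f"{BASE_URL}/buckets",
    headers={"X-StorageApi-Token": STORAGE_API_TOKEN},
    timeout=30,
)
resp.raise_for_status()
for bucket in resp.json():
    print(bucket["id"], bucket["stage"])
```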

transformations/index.md

Lines changed: 6 additions & 9 deletions
@@ -54,20 +54,18 @@ in Storage. This means, for example, that you can rename tables in Storage witho
 There are a number of staging options that influence the transformation script code, too.
 Typically, you will create an SQL transformation that works with data in a Snowflake database or a Python script
 that works with CSV files on a "local" disk (local from the script's perspective). However, it is possible to have
-a Python script that works with data in a Synapse database or with data on Azure Blob Storage (ABS).
+a Python script that works with CSV files on Azure Blob Storage (ABS).
 The transformations are very flexible, though all the combinations might not be available in all projects at all times.
 
 ## Backends
 The **Transformation Script** is code that defines what happens with the data when the
 tables from the input mapping are taken, modified, and produced into the tables referenced in the output mapping.
 
 A backend is the engine running the transformation script. It is a database server
-([Amazon Redshift](https://aws.amazon.com/redshift/),
-[Snowflake](https://www.snowflake.com/),
+([Snowflake](https://www.snowflake.com/),
 [Exasol](https://www.exasol.com/),
 [Teradata](https://www.teradata.com/),
-[Microsoft Synapse](https://azure.microsoft.com/en-us/services/synapse-analytics/) on Azure Stack),
-[BigQuery](https://cloud.google.com/bigquery),
+[BigQuery](https://cloud.google.com/bigquery)),
 or a language interpreter
 ([Python](https://www.python.org/about/),
 [R](https://www.r-project.org/about.html)).
@@ -100,9 +98,8 @@ and play with your arbitrary transformation scripts on copies of your tables
 without affecting data in your Storage or your transformations. You can convert a workspace to a transformation
 and vice versa.
 
-For Redshift and Synapse, you'll get a separate database
-to which the data from input mapping can be loaded. You'll obtain database credentials, which you can
-use with a database client of your choice. You can do the same for Snowflake. In addition, we provide access
+You'll obtain database credentials, which you can
+use with a database client of your choice for Snowflake. In addition, we provide access
 to the [Snowflake web interface](https://docs.snowflake.com/en/user-guide/ui-snowsight-gs). Therefore, you can
 develop transformations without downloading and installing a database client.
 
@@ -271,4 +268,4 @@ When triggered
 
 With the [read-only input mapping](/transformations/mappings/#read-only-input-mapping) feature, you can access all buckets (your own or linked) in transformations. Your transformation user
 has read-only access to buckets (and their tables), so you can access such data. So, there is no need to specify standard input mapping
-for your transformations. The name of the backend object (database, schema, etc.) depends on the backend you use, and it contains the bucket ID (not the bucket name).
+for your transformations. The name of the backend object (database, schema, etc.) depends on the backend you use, and it contains the bucket ID (not the bucket name).
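
To illustrate the workspace text above: once Keboola provisions a Snowflake workspace, the credentials can be used from any client. Below is a minimal sketch using the `snowflake-connector-python` package; every credential value, plus the `in.c-main` bucket and `customers` table, is a made-up placeholder, and the quoted-schema naming follows the read-only input mapping note in the last hunk (the backend object name carries the bucket ID):

```python
import snowflake.connector

# All values are placeholders for the credentials Keboola generates
# when you create a Snowflake workspace.
conn = snowflake.connector.connect(
    account="xy12345.eu-central-1",
    user="WORKSPACE_USER",
    password="WORKSPACE_PASSWORD",
    warehouse="WORKSPACE_WAREHOUSE",
    database="WORKSPACE_DB",
    schema="WORKSPACE_SCHEMA",
)

# With read-only input mapping, the object name contains the bucket ID
# (e.g., "in.c-main"), not the bucket's display name.
cur = conn.cursor()
try:
    cur.execute('SELECT COUNT(*) FROM "in.c-main"."customers"')
    print(cur.fetchone()[0])
finally:
    cur.close()
    conn.close()
```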

transformations/mappings/index.md

Lines changed: 2 additions & 4 deletions
@@ -63,8 +63,7 @@ Depending on the transformation types, you can either build your transformations
 with database tables or with CSV files. Furthermore, the CSV files can be placed locally with the transformation
 script or they can be placed on a remote storage such as [Amazon S3](https://aws.amazon.com/s3/) or
 [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/).
-The supported database types are [Snowflake](https://www.snowflake.com/),
-[Redshift](https://aws.amazon.com/redshift/), and [Synapse](https://azure.microsoft.com/en-us/services/synapse-analytics/).
+The supported database type is [Snowflake](https://www.snowflake.com/).
 
 {: .image-popup}
 ![Table Input mapping](/transformations/mappings/table-input-mapping.png)
@@ -246,8 +245,7 @@ Depending on the transformation backend, the table output mapping process can do
 the project [Storage tables](/storage/tables/).
 - In case of **File Staging** --- import the specified *CSV files* into project [Storage tables](/storage/tables/).
 
-The supported staging database types are as follows: [Snowflake](https://www.snowflake.com/),
-[Redshift](https://aws.amazon.com/redshift/), and [Synapse](https://azure.microsoft.com/en-us/services/synapse-analytics/).
+The supported staging database type is [Snowflake](https://www.snowflake.com/).
 The supported staging for CSV files is a storage local to the transformation.
 
 {: .image-popup}
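
For orientation while reading these two hunks: a table input mapping pairs a Storage table (source) with the name the transformation script sees (destination), and a table output mapping reverses the direction. The sketch below shows roughly the shape such a mapping takes; every bucket and table name is hypothetical, and the keys are illustrative rather than a definitive schema of the Keboola transformation configuration:

```python
# Illustrative only: names and keys are placeholders, not a definitive
# schema of the transformation configuration.
mapping_config = {
    "input": {
        "tables": [
            {
                "source": "in.c-main.customers",  # Storage table to stage
                "destination": "customers",       # name the script works with
            }
        ]
    },
    "output": {
        "tables": [
            {
                "source": "customers_cleaned",    # table the script produces
                "destination": "out.c-main.customers_cleaned",  # Storage target
            }
        ]
    },
}
```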

tutorial/ad-hoc/index.md

Lines changed: 1 addition & 1 deletion
@@ -283,7 +283,7 @@ For more information about workspaces (including disk and memory limits), see th
 
 ## Final Note
 This is the end of our stroll around Keboola. On our walk, we missed quite a few things:
-Applications, Python and R transformations, Redshift and Snowflake features, to name a few.
+Applications, Python and R transformations, Snowflake features, to name a few.
 However, teaching you everything was not really the point of this tutorial.
 We wanted to show you how Keboola can help in connecting different systems together.
 

tutorial/onboarding/architecture-guide/index.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ storage, and processes. It provides a structured framework for distributing task
 Each project includes the following key components:
 
 1. **Storage**
-   - Relational databases such as Snowflake, Redshift, Synapse, Exasol, and others
+   - Relational databases such as Snowflake, Exasol, and others
    - Object storage options like S3 or Azure Blob Storage
 2. **Components**
    - Data sources and destinations for loading data into Keboola Storage and exporting to databases, services, or applications
