diff --git a/compose.yaml b/compose.yaml
index f1e6d7d0..a63691d8 100644
--- a/compose.yaml
+++ b/compose.yaml
@@ -19,6 +19,7 @@ services:
- --system-parameter-default=max_clusters=100
- --system-parameter-default=max_sources=100
- --system-parameter-default=max_aws_privatelink_connections=10
+ - --system-parameter-default=enable_create_table_from_source=on
- --all-features
environment:
MZ_NO_TELEMETRY: 1
@@ -48,6 +49,7 @@ services:
- --system-parameter-default=max_sources=100
- --system-parameter-default=max_aws_privatelink_connections=10
- --system-parameter-default=transaction_isolation=serializable
+ - --system-parameter-default=enable_create_table_from_source=on
- --all-features
environment:
MZ_NO_TELEMETRY: 1
diff --git a/docs/data-sources/source_reference.md b/docs/data-sources/source_reference.md
new file mode 100644
index 00000000..eefa72c9
--- /dev/null
+++ b/docs/data-sources/source_reference.md
@@ -0,0 +1,53 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "materialize_source_reference Data Source - terraform-provider-materialize"
+subcategory: ""
+description: |-
+ The materialize_source_reference data source retrieves a list of available upstream references for a given Materialize source. These references represent potential tables that can be created based on the source, but they do not necessarily indicate references the source is already ingesting. This allows users to see all upstream data that could be materialized into tables.
+---
+
+# materialize_source_reference (Data Source)
+
+The `materialize_source_reference` data source retrieves a list of *available* upstream references for a given Materialize source. These references represent potential tables that can be created based on the source, but they do not necessarily indicate references the source is already ingesting. This allows users to see all upstream data that could be materialized into tables.
+
+## Example Usage
+
+```terraform
+data "materialize_source_reference" "source_references" {
+ source_id = materialize_source_mysql.test.id
+}
+
+output "source_references" {
+  value = data.materialize_source_reference.source_references.references
+}
+```
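+
+The returned references can also drive table creation. As a hypothetical sketch (assuming the MySQL source from the example above and the `materialize_source_table_mysql` resource), one table could be created per discovered reference:
+
+```terraform
+resource "materialize_source_table_mysql" "from_reference" {
+  # Key each table by the name of the upstream reference.
+  for_each = {
+    for ref in data.materialize_source_reference.source_references.references :
+    ref.name => ref
+  }
+
+  name = each.value.name
+
+  source {
+    name = materialize_source_mysql.test.name
+  }
+
+  upstream_name        = each.value.name
+  upstream_schema_name = each.value.namespace
+}
+```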
+
+
+## Schema
+
+### Required
+
+- `source_id` (String) The ID of the source to get references for
+
+### Optional
+
+- `region` (String) The region in which the resource is located.
+
+### Read-Only
+
+- `id` (String) The ID of this resource.
+- `references` (List of Object) The source references (see [below for nested schema](#nestedatt--references))
+
+
+### Nested Schema for `references`
+
+Read-Only:
+
+- `columns` (List of String)
+- `name` (String)
+- `namespace` (String)
+- `source_database_name` (String)
+- `source_name` (String)
+- `source_schema_name` (String)
+- `source_type` (String)
+- `updated_at` (String)
diff --git a/docs/data-sources/source_table.md b/docs/data-sources/source_table.md
new file mode 100644
index 00000000..29a9fa8b
--- /dev/null
+++ b/docs/data-sources/source_table.md
@@ -0,0 +1,65 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "materialize_source_table Data Source - terraform-provider-materialize"
+subcategory: ""
+description: |-
+
+---
+
+# materialize_source_table (Data Source)
+
+
+
+## Example Usage
+
+```terraform
+data "materialize_source_table" "all" {}
+
+data "materialize_source_table" "materialize" {
+ database_name = "materialize"
+}
+
+data "materialize_source_table" "materialize_schema" {
+ database_name = "materialize"
+ schema_name = "schema"
+}
+```
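+
+The resulting `tables` list can be referenced like any other attribute, for example in an output (illustrative):
+
+```terraform
+output "materialize_source_tables" {
+  value = data.materialize_source_table.materialize.tables
+}
+```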
+
+
+## Schema
+
+### Optional
+
+- `database_name` (String) Limit tables to a specific database
+- `region` (String) The region in which the resource is located.
+- `schema_name` (String) Limit tables to a specific schema within a specific database
+
+### Read-Only
+
+- `id` (String) The ID of this resource.
+- `tables` (List of Object) The source tables in the account (see [below for nested schema](#nestedatt--tables))
+
+
+### Nested Schema for `tables`
+
+Read-Only:
+
+- `comment` (String)
+- `database_name` (String)
+- `id` (String)
+- `name` (String)
+- `owner_name` (String)
+- `schema_name` (String)
+- `source` (List of Object) (see [below for nested schema](#nestedobjatt--tables--source))
+- `source_type` (String)
+- `upstream_name` (String)
+- `upstream_schema_name` (String)
+
+
+### Nested Schema for `tables.source`
+
+Read-Only:
+
+- `database_name` (String)
+- `name` (String)
+- `schema_name` (String)
diff --git a/docs/data-sources/table.md b/docs/data-sources/table.md
index 2a29c7f4..61cc3f5d 100644
--- a/docs/data-sources/table.md
+++ b/docs/data-sources/table.md
@@ -10,7 +10,20 @@ description: |-
+## Example Usage
+```terraform
+data "materialize_table" "all" {}
+
+data "materialize_table" "materialize" {
+ database_name = "materialize"
+}
+
+data "materialize_table" "materialize_schema" {
+ database_name = "materialize"
+ schema_name = "schema"
+}
+```
## Schema
diff --git a/docs/guides/materialize_source_table.md b/docs/guides/materialize_source_table.md
new file mode 100644
index 00000000..1cb57137
--- /dev/null
+++ b/docs/guides/materialize_source_table.md
@@ -0,0 +1,243 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+# template file: templates/guides/materialize_source_table.md.tmpl
+page_title: "Source Table Migration Guide"
+subcategory: ""
+description: |-
+ Guide for migrating to the new materialize_source_table_{source_type} resources.
+---
+
+# Source versioning: migrating to `materialize_source_table_{source_type}` Resource
+
+In previous versions of the Materialize Terraform provider, source tables were defined within the source resource itself and were considered subsources of the source rather than separate entities.
+
+This guide will walk you through the process of migrating your existing source table definitions to the new `materialize_source_table_{source_type}` resource.
+
+For each MySQL and Postgres source, you will need to create a new `materialize_source_table_{source_type}` resource for each table that was previously defined within the source resource. This ensures that the tables are preserved during the migration. For Kafka sources, you will need to create a `materialize_source_table_kafka` resource with the same name as the Kafka source to hold the data for the Kafka topic.
+
+## Old Approach
+
+Previously, source tables were defined directly within the source resource:
+
+### Example: MySQL Source
+
+```hcl
+resource "materialize_source_mysql" "mysql_source" {
+ name = "mysql_source"
+ cluster_name = "cluster_name"
+
+ mysql_connection {
+ name = materialize_connection_mysql.mysql_connection.name
+ }
+
+ table {
+ upstream_name = "mysql_table1"
+ upstream_schema_name = "shop"
+ name = "mysql_table1_local"
+ }
+}
+```
+
+### Example: Kafka Source
+
+```hcl
+resource "materialize_source_kafka" "example_source_kafka_format_text" {
+ name = "source_kafka_text"
+ comment = "source kafka comment"
+ cluster_name = materialize_cluster.cluster_source.name
+ topic = "topic1"
+
+ kafka_connection {
+ name = materialize_connection_kafka.kafka_connection.name
+ schema_name = materialize_connection_kafka.kafka_connection.schema_name
+ database_name = materialize_connection_kafka.kafka_connection.database_name
+ }
+ key_format {
+ text = true
+ }
+ value_format {
+ text = true
+ }
+}
+```
+
+## New Approach
+
+The new approach separates source definitions and table definitions. You will now create the source without specifying the tables, and then define each table using the `materialize_source_table_{source_type}` resource.
+
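+As a minimal sketch of the new layout (reusing the MySQL example above; resource names are illustrative):
+
+```hcl
+# The source no longer declares any tables.
+resource "materialize_source_mysql" "mysql_source" {
+  name         = "mysql_source"
+  cluster_name = "cluster_name"
+
+  mysql_connection {
+    name = materialize_connection_mysql.mysql_connection.name
+  }
+}
+
+# Each upstream table becomes its own resource.
+resource "materialize_source_table_mysql" "mysql_table1_local" {
+  name = "mysql_table1_local"
+
+  source {
+    name = materialize_source_mysql.mysql_source.name
+  }
+
+  upstream_name        = "mysql_table1"
+  upstream_schema_name = "shop"
+}
+```
+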
+## Manual Migration Process
+
+This manual migration process requires users to create new source tables using the new `materialize_source_table_{source_type}` resource and then remove the old ones. We'll cover examples for both MySQL and Kafka sources.
+
+### Step 1: Define `materialize_source_table_{source_type}` Resources
+
+Before making any changes to your existing source resources, create new `materialize_source_table_{source_type}` resources for each table that is currently defined within your sources.
+
+#### MySQL Example:
+
+```hcl
+resource "materialize_source_table_mysql" "mysql_table_from_source" {
+ name = "mysql_table1_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_mysql.mysql_source.name
+ // Define the schema and database for the source if needed
+ }
+
+ upstream_name = "mysql_table1"
+ upstream_schema_name = "shop"
+
+  exclude_columns = ["about"]
+}
+```
+
+#### Kafka Example:
+
+```hcl
+resource "materialize_source_table_kafka" "kafka_table_from_source" {
+ name = "kafka_table_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+  source {
+ name = materialize_source_kafka.kafka_source.name
+ }
+
+ key_format {
+ text = true
+ }
+
+ value_format {
+ text = true
+ }
+
+}
+```
+
+### Step 2: Apply the Changes
+
+Run `terraform plan` and `terraform apply` to create the new `materialize_source_table_{source_type}` resources.
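+
+For example (the `-target` flag is optional and only shown as a way to create the new table resources before touching anything else):
+
+```bash
+terraform plan
+terraform apply
+
+# Optionally, create only the new table resources on the first apply:
+terraform apply -target=materialize_source_table_mysql.mysql_table_from_source
+```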
+
+### Step 3: Remove Table Blocks from Source Resources
+
+Once the new `materialize_source_table_{source_type}` resources are successfully created, remove all the deprecated and table-specific attributes from your source resources.
+
+#### MySQL Example:
+
+For MySQL sources, remove the `table` block and any table-specific attributes from the source resource:
+
+```hcl
+resource "materialize_source_mysql" "mysql_source" {
+ name = "mysql_source"
+ cluster_name = "cluster_name"
+
+ mysql_connection {
+ name = materialize_connection_mysql.mysql_connection.name
+ }
+
+ // Remove the table blocks from here
+ - table {
+ - upstream_name = "mysql_table1"
+ - upstream_schema_name = "shop"
+ - name = "mysql_table1_local"
+ -
+ - ignore_columns = ["about"]
+  - }
+ ...
+}
+```
+
+#### Kafka Example:
+
+For Kafka sources, remove the `format`, `include_key`, `include_headers`, and other table-specific attributes from the source resource:
+
+```hcl
+resource "materialize_source_kafka" "kafka_source" {
+ name = "kafka_source"
+ cluster_name = "cluster_name"
+
+ kafka_connection {
+ name = materialize_connection_kafka.kafka_connection.name
+ }
+
+ topic = "example_topic"
+
+ lifecycle {
+ ignore_changes = [
+ include_key,
+ include_headers,
+ format,
+      # Add other attributes here as needed
+ ]
+ }
+ // Remove the format, include_key, include_headers, and other table-specific attributes
+}
+```
+
+In the `lifecycle` block, add the `ignore_changes` meta-argument so that Terraform does not try to update these attributes on subsequent applies. Because they are now defined in the new `materialize_source_table_{source_type}` resources rather than in the source resource itself, the state no longer carries complete information about them.
+
+### Step 4: Update Terraform State
+
+After removing the `table` blocks and the table/topic specific attributes from your source resources, run `terraform plan` and `terraform apply` again to update the Terraform state and apply the changes.
+
+### Step 5: Verify the Migration
+
+After applying the changes, verify that your tables are still correctly set up in Materialize by checking the table definitions using Materialize's SQL commands.
+
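+For example, to list the tables in the current schema:
+
+```sql
+SHOW TABLES;
+```
+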
+For a more detailed view of a specific table, you can use the `SHOW CREATE TABLE` command:
+
+```sql
+SHOW CREATE TABLE materialize.public.mysql_table1_from_source;
+```
+
+## Importing Existing Tables
+
+To import existing tables into your Terraform state, use the following command:
+
+```bash
+terraform import materialize_source_table_{source_type}.table_name <region>:<table_id>
+```
+
+Replace `{source_type}` with the appropriate source type (e.g., `mysql`, `kafka`), `<region>` with the region in which the database is located (e.g. `aws/us-east-1`), and `<table_id>` with the ID of the table.
+
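+For example, importing a MySQL source table might look like this (the region and table ID are placeholders):
+
+```bash
+terraform import materialize_source_table_mysql.mysql_table_from_source aws/us-east-1:<table_id>
+```
+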
+### Important Note on Importing
+
+Due to limitations in the current read function, not all properties of the source tables are available when importing. To work around this, you'll need to use the `ignore_changes` lifecycle meta-argument for certain attributes that can't be read back from the state.
+
+For example, for a Kafka source table:
+
+```hcl
+resource "materialize_source_table_kafka" "kafka_table_from_source" {
+ name = "kafka_table_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+  source {
+    name = materialize_source_kafka.kafka_source.name
+  }
+
+ include_key = true
+ include_headers = true
+
+ envelope {
+ upsert = true
+ }
+
+ lifecycle {
+ ignore_changes = [
+ include_key,
+ include_headers,
+      envelope,
+      # Add other attributes here as needed
+ ]
+ }
+}
+```
+
+This `ignore_changes` block tells Terraform to ignore changes to these attributes during subsequent applies, preventing Terraform from trying to update these values based on incomplete information from the state.
+
+After importing, you may need to manually update these ignored attributes in your Terraform configuration to match the actual state in Materialize.
+
+## Future Improvements
+
+Webhook sources have not yet been migrated to the new model. Once this changes, the migration process will be updated to include them.
diff --git a/docs/resources/source_kafka.md b/docs/resources/source_kafka.md
index 9095035f..3df0f869 100644
--- a/docs/resources/source_kafka.md
+++ b/docs/resources/source_kafka.md
@@ -56,19 +56,19 @@ resource "materialize_source_kafka" "example_source_kafka" {
- `cluster_name` (String) The cluster to maintain this source.
- `comment` (String) Comment on an object in the database.
- `database_name` (String) The identifier for the source database in Materialize. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
-- `envelope` (Block List, Max: 1) How Materialize should interpret records (e.g. append-only, upsert).. (see [below for nested schema](#nestedblock--envelope))
+- `envelope` (Block List, Max: 1, Deprecated) How Materialize should interpret records (e.g. append-only, upsert). Deprecated: Use the new `materialize_source_table_kafka` resource instead. (see [below for nested schema](#nestedblock--envelope))
- `expose_progress` (Block List, Max: 1) The name of the progress collection for the source. If this is not specified, the collection will be named `_progress`. (see [below for nested schema](#nestedblock--expose_progress))
- `format` (Block List, Max: 1) How to decode raw bytes from different formats into data structures Materialize can understand at runtime. (see [below for nested schema](#nestedblock--format))
-- `include_headers` (Boolean) Include message headers.
-- `include_headers_alias` (String) Provide an alias for the headers column.
-- `include_key` (Boolean) Include a column containing the Kafka message key.
-- `include_key_alias` (String) Provide an alias for the key column.
-- `include_offset` (Boolean) Include an offset column containing the Kafka message offset.
-- `include_offset_alias` (String) Provide an alias for the offset column.
-- `include_partition` (Boolean) Include a partition column containing the Kafka message partition
-- `include_partition_alias` (String) Provide an alias for the partition column.
-- `include_timestamp` (Boolean) Include a timestamp column containing the Kafka message timestamp.
-- `include_timestamp_alias` (String) Provide an alias for the timestamp column.
+- `include_headers` (Boolean, Deprecated) Include message headers. Deprecated: Use the new `materialize_source_table_kafka` resource instead.
+- `include_headers_alias` (String, Deprecated) Provide an alias for the headers column. Deprecated: Use the new `materialize_source_table_kafka` resource instead.
+- `include_key` (Boolean, Deprecated) Include a column containing the Kafka message key. Deprecated: Use the new `materialize_source_table_kafka` resource instead.
+- `include_key_alias` (String, Deprecated) Provide an alias for the key column. Deprecated: Use the new `materialize_source_table_kafka` resource instead.
+- `include_offset` (Boolean, Deprecated) Include an offset column containing the Kafka message offset. Deprecated: Use the new `materialize_source_table_kafka` resource instead.
+- `include_offset_alias` (String, Deprecated) Provide an alias for the offset column. Deprecated: Use the new `materialize_source_table_kafka` resource instead.
+- `include_partition` (Boolean, Deprecated) Include a partition column containing the Kafka message partition. Deprecated: Use the new `materialize_source_table_kafka` resource instead.
+- `include_partition_alias` (String, Deprecated) Provide an alias for the partition column. Deprecated: Use the new `materialize_source_table_kafka` resource instead.
+- `include_timestamp` (Boolean, Deprecated) Include a timestamp column containing the Kafka message timestamp. Deprecated: Use the new `materialize_source_table_kafka` resource instead.
+- `include_timestamp_alias` (String, Deprecated) Provide an alias for the timestamp column. Deprecated: Use the new `materialize_source_table_kafka` resource instead.
- `key_format` (Block List, Max: 1) Set the key format explicitly. (see [below for nested schema](#nestedblock--key_format))
- `ownership_role` (String) The owernship role of the object.
- `region` (String) The region to use for the resource connection. If not set, the default region is used.
diff --git a/docs/resources/source_load_generator.md b/docs/resources/source_load_generator.md
index a5d18c33..cb3f0bda 100644
--- a/docs/resources/source_load_generator.md
+++ b/docs/resources/source_load_generator.md
@@ -40,6 +40,7 @@ resource "materialize_source_load_generator" "example_source_load_generator" {
### Optional
+- `all_tables` (Boolean) Whether to include all tables in the source. Compatible with `auction_options`, `marketing_options`, and `tpch_options`. If not specified, use the `materialize_source_table_load_generator` resource to specify tables to include.
- `auction_options` (Block List, Max: 1) Auction Options. (see [below for nested schema](#nestedblock--auction_options))
- `cluster_name` (String) The cluster to maintain this source.
- `comment` (String) Comment on an object in the database.
diff --git a/docs/resources/source_mysql.md b/docs/resources/source_mysql.md
index 99f3563e..41246367 100644
--- a/docs/resources/source_mysql.md
+++ b/docs/resources/source_mysql.md
@@ -36,10 +36,6 @@ resource "materialize_source_mysql" "test" {
name = "mysql_table2_local"
}
}
-
-# CREATE SOURCE schema.source_mysql
-# FROM MYSQL CONNECTION "database"."schema"."mysql_connection" (PUBLICATION 'mz_source')
-# FOR TABLES (shop.mysql_table1 AS mysql_table1_local, shop.mysql_table2 AS mysql_table2_local);
```
@@ -52,16 +48,17 @@ resource "materialize_source_mysql" "test" {
### Optional
+- `all_tables` (Boolean, Deprecated) Include all tables in the source. If `table` is specified, this will be ignored.
- `cluster_name` (String) The cluster to maintain this source.
- `comment` (String) Comment on an object in the database.
- `database_name` (String) The identifier for the source database in Materialize. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
- `expose_progress` (Block List, Max: 1) The name of the progress collection for the source. If this is not specified, the collection will be named `_progress`. (see [below for nested schema](#nestedblock--expose_progress))
-- `ignore_columns` (List of String) Ignore specific columns when reading data from MySQL. Can only be updated in place when also updating a corresponding `table` attribute.
+- `ignore_columns` (List of String, Deprecated) Ignore specific columns when reading data from MySQL. Can only be updated in place when also updating a corresponding `table` attribute. Deprecated: Use the new `materialize_source_table_mysql` resource instead.
- `ownership_role` (String) The owernship role of the object.
- `region` (String) The region to use for the resource connection. If not set, the default region is used.
- `schema_name` (String) The identifier for the source schema in Materialize. Defaults to `public`.
-- `table` (Block Set) Specify the tables to be included in the source. If not specified, all tables are included. (see [below for nested schema](#nestedblock--table))
-- `text_columns` (List of String) Decode data as text for specific columns that contain MySQL types that are unsupported in Materialize. Can only be updated in place when also updating a corresponding `table` attribute.
+- `table` (Block Set, Deprecated) Specify the tables to be included in the source. Deprecated: Use the new `materialize_source_table_mysql` resource instead. (see [below for nested schema](#nestedblock--table))
+- `text_columns` (List of String, Deprecated) Decode data as text for specific columns that contain MySQL types that are unsupported in Materialize. Can only be updated in place when also updating a corresponding `table` attribute. Deprecated: Use the new `materialize_source_table_mysql` resource instead.
### Read-Only
diff --git a/docs/resources/source_postgres.md b/docs/resources/source_postgres.md
index cda942fb..fad5965b 100644
--- a/docs/resources/source_postgres.md
+++ b/docs/resources/source_postgres.md
@@ -52,7 +52,6 @@ resource "materialize_source_postgres" "example_source_postgres" {
- `name` (String) The identifier for the source.
- `postgres_connection` (Block List, Min: 1, Max: 1) The PostgreSQL connection to use in the source. (see [below for nested schema](#nestedblock--postgres_connection))
- `publication` (String) The PostgreSQL publication (the replication data set containing the tables to be streamed to Materialize).
-- `table` (Block Set, Min: 1) Creates subsources for specific tables in the Postgres connection. (see [below for nested schema](#nestedblock--table))
### Optional
@@ -63,7 +62,8 @@ resource "materialize_source_postgres" "example_source_postgres" {
- `ownership_role` (String) The owernship role of the object.
- `region` (String) The region to use for the resource connection. If not set, the default region is used.
- `schema_name` (String) The identifier for the source schema in Materialize. Defaults to `public`.
-- `text_columns` (List of String) Decode data as text for specific columns that contain PostgreSQL types that are unsupported in Materialize. Can only be updated in place when also updating a corresponding `table` attribute.
+- `table` (Block Set, Deprecated) Creates subsources for specific tables in the Postgres connection. Deprecated: Use the new `materialize_source_table_postgres` resource instead. (see [below for nested schema](#nestedblock--table))
+- `text_columns` (List of String, Deprecated) Decode data as text for specific columns that contain PostgreSQL types that are unsupported in Materialize. Can only be updated in place when also updating a corresponding `table` attribute. Deprecated: Use the new `materialize_source_table_postgres` resource instead.
### Read-Only
@@ -84,32 +84,32 @@ Optional:
- `schema_name` (String) The postgres_connection schema name. Defaults to `public`.
-
-### Nested Schema for `table`
+
+### Nested Schema for `expose_progress`
Required:
-- `upstream_name` (String) The name of the table in the upstream Postgres database.
+- `name` (String) The expose_progress name.
Optional:
-- `database_name` (String) The database of the table in Materialize.
-- `name` (String) The name of the table in Materialize.
-- `schema_name` (String) The schema of the table in Materialize.
-- `upstream_schema_name` (String) The schema of the table in the upstream Postgres database.
+- `database_name` (String) The expose_progress database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The expose_progress schema name. Defaults to `public`.
-
-### Nested Schema for `expose_progress`
+
+### Nested Schema for `table`
Required:
-- `name` (String) The expose_progress name.
+- `upstream_name` (String) The name of the table in the upstream Postgres database.
Optional:
-- `database_name` (String) The expose_progress database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
-- `schema_name` (String) The expose_progress schema name. Defaults to `public`.
+- `database_name` (String) The database of the table in Materialize.
+- `name` (String) The name of the table in Materialize.
+- `schema_name` (String) The schema of the table in Materialize.
+- `upstream_schema_name` (String) The schema of the table in the upstream Postgres database.
## Import
diff --git a/docs/resources/source_table_kafka.md b/docs/resources/source_table_kafka.md
new file mode 100644
index 00000000..3c5e1859
--- /dev/null
+++ b/docs/resources/source_table_kafka.md
@@ -0,0 +1,383 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "materialize_source_table_kafka Resource - terraform-provider-materialize"
+subcategory: ""
+description: |-
+ A Kafka source describes a Kafka cluster you want Materialize to read data from.
+---
+
+# materialize_source_table_kafka (Resource)
+
+A Kafka source describes a Kafka cluster you want Materialize to read data from.
+
+## Example Usage
+
+```terraform
+resource "materialize_source_table_kafka" "kafka_source_table" {
+ name = "kafka_source_table"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_kafka.test_source_kafka.name
+ schema_name = materialize_source_kafka.test_source_kafka.schema_name
+ database_name = materialize_source_kafka.test_source_kafka.database_name
+ }
+
+ topic = "terraform"
+ include_key = true
+ include_key_alias = "message_key"
+ include_headers = true
+ include_headers_alias = "message_headers"
+ include_partition = true
+ include_partition_alias = "message_partition"
+ include_offset = true
+ include_offset_alias = "message_offset"
+ include_timestamp = true
+ include_timestamp_alias = "message_timestamp"
+
+
+ key_format {
+ text = true
+ }
+ value_format {
+ json = true
+ }
+
+ envelope {
+ upsert = true
+ upsert_options {
+ value_decoding_errors {
+ inline {
+ enabled = true
+ alias = "decoding_error"
+ }
+ }
+ }
+ }
+
+ ownership_role = "mz_system"
+ comment = "This is a test Kafka source table"
+}
+```
+
+
+## Schema
+
+### Required
+
+- `name` (String) The identifier for the source table.
+- `source` (Block List, Min: 1, Max: 1) The source this table is created from. (see [below for nested schema](#nestedblock--source))
+
+### Optional
+
+- `comment` (String) Comment on an object in the database.
+- `database_name` (String) The identifier for the source table database in Materialize. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `envelope` (Block List, Max: 1) How Materialize should interpret records (e.g. append-only, upsert). (see [below for nested schema](#nestedblock--envelope))
+- `expose_progress` (Block List, Max: 1) The name of the progress collection for the source. If this is not specified, the collection will be named `<src_name>_progress`. (see [below for nested schema](#nestedblock--expose_progress))
+- `format` (Block List, Max: 1) How to decode raw bytes from different formats into data structures Materialize can understand at runtime. (see [below for nested schema](#nestedblock--format))
+- `include_headers` (Boolean) Include message headers.
+- `include_headers_alias` (String) Provide an alias for the headers column.
+- `include_key` (Boolean) Include a column containing the Kafka message key.
+- `include_key_alias` (String) Provide an alias for the key column.
+- `include_offset` (Boolean) Include an offset column containing the Kafka message offset.
+- `include_offset_alias` (String) Provide an alias for the offset column.
+- `include_partition` (Boolean) Include a partition column containing the Kafka message partition.
+- `include_partition_alias` (String) Provide an alias for the partition column.
+- `include_timestamp` (Boolean) Include a timestamp column containing the Kafka message timestamp.
+- `include_timestamp_alias` (String) Provide an alias for the timestamp column.
+- `key_format` (Block List, Max: 1) Set the key format explicitly. (see [below for nested schema](#nestedblock--key_format))
+- `ownership_role` (String) The ownership role of the object.
+- `region` (String) The region to use for the resource connection. If not set, the default region is used.
+- `schema_name` (String) The identifier for the source table schema in Materialize. Defaults to `public`.
+- `topic` (String) The name of the Kafka topic in the Kafka cluster.
+- `value_format` (Block List, Max: 1) Set the value format explicitly. (see [below for nested schema](#nestedblock--value_format))
+
+### Read-Only
+
+- `id` (String) The ID of this resource.
+- `qualified_sql_name` (String) The fully qualified name of the source table.
+
+
+### Nested Schema for `source`
+
+Required:
+
+- `name` (String) The source name.
+
+Optional:
+
+- `database_name` (String) The source database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The source schema name. Defaults to `public`.
+
+
+
+### Nested Schema for `envelope`
+
+Optional:
+
+- `debezium` (Boolean) Use the Debezium envelope, which uses a diff envelope to handle CRUD operations.
+- `none` (Boolean) Use an append-only envelope. This means that records will only be appended and cannot be updated or deleted.
+- `upsert` (Boolean) Use the upsert envelope, which uses message keys to handle CRUD operations.
+- `upsert_options` (Block List, Max: 1) Options for the upsert envelope. (see [below for nested schema](#nestedblock--envelope--upsert_options))
+
+
+### Nested Schema for `envelope.upsert_options`
+
+Optional:
+
+- `value_decoding_errors` (Block List, Max: 1) Specify how to handle value decoding errors in the upsert envelope. (see [below for nested schema](#nestedblock--envelope--upsert_options--value_decoding_errors))
+
+
+### Nested Schema for `envelope.upsert_options.value_decoding_errors`
+
+Optional:
+
+- `inline` (Block List, Max: 1) Configuration for inline value decoding errors. (see [below for nested schema](#nestedblock--envelope--upsert_options--value_decoding_errors--inline))
+
+
+### Nested Schema for `envelope.upsert_options.value_decoding_errors.inline`
+
+Optional:
+
+- `alias` (String) Specify an alias for the value decoding errors column, to use an alternative name for the error column. If not specified, the column name will be `error`.
+- `enabled` (Boolean) Enable inline value decoding errors.
+
+
+
+
+
+
+### Nested Schema for `expose_progress`
+
+Required:
+
+- `name` (String) The expose_progress name.
+
+Optional:
+
+- `database_name` (String) The expose_progress database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The expose_progress schema name. Defaults to `public`.
+
+
+
+### Nested Schema for `format`
+
+Optional:
+
+- `avro` (Block List, Max: 1) Avro format. (see [below for nested schema](#nestedblock--format--avro))
+- `bytes` (Boolean) BYTES format.
+- `csv` (Block List, Max: 2) CSV format. (see [below for nested schema](#nestedblock--format--csv))
+- `json` (Boolean) JSON format.
+- `protobuf` (Block List, Max: 1) Protobuf format. (see [below for nested schema](#nestedblock--format--protobuf))
+- `text` (Boolean) Text format.
+
+
+### Nested Schema for `format.avro`
+
+Required:
+
+- `schema_registry_connection` (Block List, Min: 1, Max: 1) The name of a schema registry connection. (see [below for nested schema](#nestedblock--format--avro--schema_registry_connection))
+
+Optional:
+
+- `key_strategy` (String) How Materialize will define the Avro schema reader key strategy.
+- `value_strategy` (String) How Materialize will define the Avro schema reader value strategy.
+
+
+### Nested Schema for `format.avro.schema_registry_connection`
+
+Required:
+
+- `name` (String) The schema_registry_connection name.
+
+Optional:
+
+- `database_name` (String) The schema_registry_connection database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The schema_registry_connection schema name. Defaults to `public`.
+
+
+
+
+### Nested Schema for `format.csv`
+
+Optional:
+
+- `column` (Number) The columns to use for the source.
+- `delimited_by` (String) The delimiter to use for the source.
+- `header` (List of String) The number of columns and the name of each column using the header row.
+
+
+
+### Nested Schema for `format.protobuf`
+
+Required:
+
+- `message` (String) The name of the Protobuf message to use for the source.
+- `schema_registry_connection` (Block List, Min: 1, Max: 1) The name of a schema registry connection. (see [below for nested schema](#nestedblock--format--protobuf--schema_registry_connection))
+
+
+### Nested Schema for `format.protobuf.schema_registry_connection`
+
+Required:
+
+- `name` (String) The schema_registry_connection name.
+
+Optional:
+
+- `database_name` (String) The schema_registry_connection database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The schema_registry_connection schema name. Defaults to `public`.
+
+
+
+
+
+### Nested Schema for `key_format`
+
+Optional:
+
+- `avro` (Block List, Max: 1) Avro format. (see [below for nested schema](#nestedblock--key_format--avro))
+- `bytes` (Boolean) BYTES format.
+- `csv` (Block List, Max: 2) CSV format. (see [below for nested schema](#nestedblock--key_format--csv))
+- `json` (Boolean) JSON format.
+- `protobuf` (Block List, Max: 1) Protobuf format. (see [below for nested schema](#nestedblock--key_format--protobuf))
+- `text` (Boolean) Text format.
+
+
+### Nested Schema for `key_format.avro`
+
+Required:
+
+- `schema_registry_connection` (Block List, Min: 1, Max: 1) The name of a schema registry connection. (see [below for nested schema](#nestedblock--key_format--avro--schema_registry_connection))
+
+Optional:
+
+- `key_strategy` (String) How Materialize will define the Avro schema reader key strategy.
+- `value_strategy` (String) How Materialize will define the Avro schema reader value strategy.
+
+
+### Nested Schema for `key_format.avro.schema_registry_connection`
+
+Required:
+
+- `name` (String) The schema_registry_connection name.
+
+Optional:
+
+- `database_name` (String) The schema_registry_connection database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The schema_registry_connection schema name. Defaults to `public`.
+
+
+
+
+### Nested Schema for `key_format.csv`
+
+Optional:
+
+- `column` (Number) The columns to use for the source.
+- `delimited_by` (String) The delimiter to use for the source.
+- `header` (List of String) The number of columns and the name of each column using the header row.
+
+
+
+### Nested Schema for `key_format.protobuf`
+
+Required:
+
+- `message` (String) The name of the Protobuf message to use for the source.
+- `schema_registry_connection` (Block List, Min: 1, Max: 1) The name of a schema registry connection. (see [below for nested schema](#nestedblock--key_format--protobuf--schema_registry_connection))
+
+
+### Nested Schema for `key_format.protobuf.schema_registry_connection`
+
+Required:
+
+- `name` (String) The schema_registry_connection name.
+
+Optional:
+
+- `database_name` (String) The schema_registry_connection database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The schema_registry_connection schema name. Defaults to `public`.
+
+
+
+
+
+### Nested Schema for `value_format`
+
+Optional:
+
+- `avro` (Block List, Max: 1) Avro format. (see [below for nested schema](#nestedblock--value_format--avro))
+- `bytes` (Boolean) BYTES format.
+- `csv` (Block List, Max: 2) CSV format. (see [below for nested schema](#nestedblock--value_format--csv))
+- `json` (Boolean) JSON format.
+- `protobuf` (Block List, Max: 1) Protobuf format. (see [below for nested schema](#nestedblock--value_format--protobuf))
+- `text` (Boolean) Text format.
+
+
+### Nested Schema for `value_format.avro`
+
+Required:
+
+- `schema_registry_connection` (Block List, Min: 1, Max: 1) The name of a schema registry connection. (see [below for nested schema](#nestedblock--value_format--avro--schema_registry_connection))
+
+Optional:
+
+- `key_strategy` (String) How Materialize will define the Avro schema reader key strategy.
+- `value_strategy` (String) How Materialize will define the Avro schema reader value strategy.
+
+
+### Nested Schema for `value_format.avro.schema_registry_connection`
+
+Required:
+
+- `name` (String) The schema_registry_connection name.
+
+Optional:
+
+- `database_name` (String) The schema_registry_connection database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The schema_registry_connection schema name. Defaults to `public`.
+
+
+
+
+### Nested Schema for `value_format.csv`
+
+Optional:
+
+- `column` (Number) The columns to use for the source.
+- `delimited_by` (String) The delimiter to use for the source.
+- `header` (List of String) The number of columns and the name of each column using the header row.
+
+
+
+### Nested Schema for `value_format.protobuf`
+
+Required:
+
+- `message` (String) The name of the Protobuf message to use for the source.
+- `schema_registry_connection` (Block List, Min: 1, Max: 1) The name of a schema registry connection. (see [below for nested schema](#nestedblock--value_format--protobuf--schema_registry_connection))
+
+
+### Nested Schema for `value_format.protobuf.schema_registry_connection`
+
+Required:
+
+- `name` (String) The schema_registry_connection name.
+
+Optional:
+
+- `database_name` (String) The schema_registry_connection database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The schema_registry_connection schema name. Defaults to `public`.
+
+## Import
+
+Import is supported using the following syntax:
+
+```shell
+# Source tables can be imported using the source table id:
+terraform import materialize_source_table_kafka.example_source_table_kafka <region>:<table_id>
+
+# The source table id and other information can be found in the `mz_catalog.mz_tables` table
+# The region is the region where the database is located (e.g. aws/us-east-1)
+```
diff --git a/docs/resources/source_table_load_generator.md b/docs/resources/source_table_load_generator.md
new file mode 100644
index 00000000..c422cc5f
--- /dev/null
+++ b/docs/resources/source_table_load_generator.md
@@ -0,0 +1,78 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "materialize_source_table_load_generator Resource - terraform-provider-materialize"
+subcategory: ""
+description: |-
+
+---
+
+# materialize_source_table_load_generator (Resource)
+
+
+
+## Example Usage
+
+```terraform
+resource "materialize_source_table_load_generator" "load_generator_table_from_source" {
+ name = "load_generator_table_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+  # The load generator source must be one of the `auction_options`, `marketing_options`, or `tpch_options` load generator types.
+ source {
+ name = materialize_source_load_generator.example.name
+ schema_name = materialize_source_load_generator.example.schema_name
+ database_name = materialize_source_load_generator.example.database_name
+ }
+
+ upstream_name = "load_generator_table_name" # The name of the table from the load generator
+
+}
+```
+
+
+## Schema
+
+### Required
+
+- `name` (String) The identifier for the table.
+- `source` (Block List, Min: 1, Max: 1) The source this table is created from. Compatible with `auction_options`, `marketing_options`, and `tpch_options` load generator sources. (see [below for nested schema](#nestedblock--source))
+- `upstream_name` (String) The name of the table in the upstream database.
+
+### Optional
+
+- `comment` (String) Comment on an object in the database.
+- `database_name` (String) The identifier for the table database in Materialize. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `ownership_role` (String) The ownership role of the object.
+- `region` (String) The region to use for the resource connection. If not set, the default region is used.
+- `schema_name` (String) The identifier for the table schema in Materialize. Defaults to `public`.
+- `upstream_schema_name` (String) The schema of the table in the upstream database.
+
+### Read-Only
+
+- `id` (String) The ID of this resource.
+- `qualified_sql_name` (String) The fully qualified name of the table.
+
+
+### Nested Schema for `source`
+
+Required:
+
+- `name` (String) The source name.
+
+Optional:
+
+- `database_name` (String) The source database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The source schema name. Defaults to `public`.
+
+## Import
+
+Import is supported using the following syntax:
+
+```shell
+# Source tables can be imported using the source table id:
+terraform import materialize_source_table_load_generator.example_source_table_loadgen <region>:<table_id>
+
+# The source table id and other information can be found in the `mz_catalog.mz_tables` table
+# The region is the region where the database is located (e.g. aws/us-east-1)
+```
diff --git a/docs/resources/source_table_mysql.md b/docs/resources/source_table_mysql.md
new file mode 100644
index 00000000..163fd51b
--- /dev/null
+++ b/docs/resources/source_table_mysql.md
@@ -0,0 +1,85 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "materialize_source_table_mysql Resource - terraform-provider-materialize"
+subcategory: ""
+description: |-
+
+---
+
+# materialize_source_table_mysql (Resource)
+
+
+
+## Example Usage
+
+```terraform
+resource "materialize_source_table_mysql" "mysql_table_from_source" {
+ name = "mysql_table_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_mysql.example.name
+ schema_name = materialize_source_mysql.example.schema_name
+ database_name = materialize_source_mysql.example.database_name
+ }
+
+ upstream_name = "mysql_table_name" # The name of the table in the MySQL database
+  upstream_schema_name = "mysql_db_name"    # The name of the MySQL database (schema) that contains the table
+
+ text_columns = [
+ "updated_at"
+ ]
+
+  exclude_columns = ["about"]
+}
+```
+
+
+## Schema
+
+### Required
+
+- `name` (String) The identifier for the table.
+- `source` (Block List, Min: 1, Max: 1) The source this table is created from. (see [below for nested schema](#nestedblock--source))
+- `upstream_name` (String) The name of the table in the upstream database.
+
+### Optional
+
+- `comment` (String) Comment on an object in the database.
+- `database_name` (String) The identifier for the table database in Materialize. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `exclude_columns` (List of String) Exclude specific columns when reading data from MySQL. This option used to be called `ignore_columns`.
+- `ownership_role` (String) The ownership role of the object.
+- `region` (String) The region to use for the resource connection. If not set, the default region is used.
+- `schema_name` (String) The identifier for the table schema in Materialize. Defaults to `public`.
+- `text_columns` (List of String) Columns to be decoded as text.
+- `upstream_schema_name` (String) The schema of the table in the upstream database.
+
+### Read-Only
+
+- `id` (String) The ID of this resource.
+- `qualified_sql_name` (String) The fully qualified name of the table.
+
+
+### Nested Schema for `source`
+
+Required:
+
+- `name` (String) The source name.
+
+Optional:
+
+- `database_name` (String) The source database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The source schema name. Defaults to `public`.
+
+## Import
+
+Import is supported using the following syntax:
+
+```shell
+# Source tables can be imported using the source table id:
+terraform import materialize_source_table_mysql.example_source_table_mysql <region>:<table_id>
+
+# The source table id and other information can be found in the `mz_catalog.mz_tables` table
+# The region is the region where the database is located (e.g. aws/us-east-1)
+```
diff --git a/docs/resources/source_table_postgres.md b/docs/resources/source_table_postgres.md
new file mode 100644
index 00000000..ffe107c3
--- /dev/null
+++ b/docs/resources/source_table_postgres.md
@@ -0,0 +1,83 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "materialize_source_table_postgres Resource - terraform-provider-materialize"
+subcategory: ""
+description: |-
+
+---
+
+# materialize_source_table_postgres (Resource)
+
+
+
+## Example Usage
+
+```terraform
+resource "materialize_source_table_postgres" "postgres_table_from_source" {
+ name = "postgres_table_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_postgres.example.name
+ schema_name = materialize_source_postgres.example.schema_name
+ database_name = materialize_source_postgres.example.database_name
+ }
+
+ upstream_name = "postgres_table_name" # The name of the table in the postgres database
+  upstream_schema_name = "postgres_schema_name" # The name of the schema in the Postgres database that contains the table
+
+ text_columns = [
+ "updated_at"
+ ]
+
+}
+```
+
+
+## Schema
+
+### Required
+
+- `name` (String) The identifier for the table.
+- `source` (Block List, Min: 1, Max: 1) The source this table is created from. (see [below for nested schema](#nestedblock--source))
+- `upstream_name` (String) The name of the table in the upstream database.
+
+### Optional
+
+- `comment` (String) Comment on an object in the database.
+- `database_name` (String) The identifier for the table database in Materialize. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `ownership_role` (String) The ownership role of the object.
+- `region` (String) The region to use for the resource connection. If not set, the default region is used.
+- `schema_name` (String) The identifier for the table schema in Materialize. Defaults to `public`.
+- `text_columns` (List of String) Columns to be decoded as text.
+- `upstream_schema_name` (String) The schema of the table in the upstream database.
+
+### Read-Only
+
+- `id` (String) The ID of this resource.
+- `qualified_sql_name` (String) The fully qualified name of the table.
+
+
+### Nested Schema for `source`
+
+Required:
+
+- `name` (String) The source name.
+
+Optional:
+
+- `database_name` (String) The source database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The source schema name. Defaults to `public`.
+
+## Import
+
+Import is supported using the following syntax:
+
+```shell
+# Source tables can be imported using the source table id:
+terraform import materialize_source_table_postgres.example_source_table_postgres <region>:<table_id>
+
+# The source table id and other information can be found in the `mz_catalog.mz_tables` table
+# The region is the region where the database is located (e.g. aws/us-east-1)
+```
diff --git a/docs/resources/source_table_webhook.md b/docs/resources/source_table_webhook.md
new file mode 100644
index 00000000..cc7e1414
--- /dev/null
+++ b/docs/resources/source_table_webhook.md
@@ -0,0 +1,144 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "materialize_source_table_webhook Resource - terraform-provider-materialize"
+subcategory: ""
+description: |-
+ A webhook source table allows reading data directly from webhooks.
+---
+
+# materialize_source_table_webhook (Resource)
+
+A webhook source table allows reading data directly from webhooks.
+
+## Example Usage
+
+```terraform
+resource "materialize_source_table_webhook" "example_webhook" {
+ name = "example_webhook"
+ body_format = "json"
+ check_expression = "headers->'x-mz-api-key' = secret"
+ include_headers {
+ not = ["x-mz-api-key"]
+ }
+
+ check_options {
+ field {
+ headers = true
+ }
+ }
+
+ check_options {
+ field {
+ secret {
+ name = materialize_secret.password.name
+ database_name = materialize_secret.password.database_name
+ schema_name = materialize_secret.password.schema_name
+ }
+ }
+ alias = "secret"
+ }
+}
+
+# CREATE TABLE example_webhook FROM WEBHOOK
+# BODY FORMAT json
+# INCLUDE HEADERS ( NOT 'x-mz-api-key' )
+# CHECK (
+# WITH ( HEADERS, SECRET materialize.public.password AS secret)
+# headers->'x-mz-api-key' = secret
+# );
+```
+
+
+## Schema
+
+### Required
+
+- `body_format` (String) The body format of the webhook.
+- `name` (String) The identifier for the table.
+
+### Optional
+
+- `check_expression` (String) The check expression for the webhook.
+- `check_options` (Block List) The check options for the webhook. (see [below for nested schema](#nestedblock--check_options))
+- `comment` (String) Comment on an object in the database.
+- `database_name` (String) The identifier for the table database in Materialize. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `include_header` (Block List) Map a header value from a request into a column. (see [below for nested schema](#nestedblock--include_header))
+- `include_headers` (Block List, Max: 1) Include headers in the webhook. (see [below for nested schema](#nestedblock--include_headers))
+- `ownership_role` (String) The ownership role of the object.
+- `region` (String) The region to use for the resource connection. If not set, the default region is used.
+- `schema_name` (String) The identifier for the table schema in Materialize. Defaults to `public`.
+
+### Read-Only
+
+- `id` (String) The ID of this resource.
+- `qualified_sql_name` (String) The fully qualified name of the table.
+
+
+### Nested Schema for `check_options`
+
+Required:
+
+- `field` (Block List, Min: 1, Max: 1) The field for the check options. (see [below for nested schema](#nestedblock--check_options--field))
+
+Optional:
+
+- `alias` (String) The alias for the check options.
+- `bytes` (Boolean) Change type to `bytea`.
+
+
+### Nested Schema for `check_options.field`
+
+Optional:
+
+- `body` (Boolean) The body for the check options.
+- `headers` (Boolean) The headers for the check options.
+- `secret` (Block List, Max: 1) The secret for the check options. (see [below for nested schema](#nestedblock--check_options--field--secret))
+
+
+### Nested Schema for `check_options.field.secret`
+
+Required:
+
+- `name` (String) The secret name.
+
+Optional:
+
+- `database_name` (String) The secret database name. Defaults to `MZ_DATABASE` environment variable if set or `materialize` if environment variable is not set.
+- `schema_name` (String) The secret schema name. Defaults to `public`.
+
+
+
+
+
+### Nested Schema for `include_header`
+
+Required:
+
+- `header` (String) The name for the header.
+
+Optional:
+
+- `alias` (String) The alias for the header.
+- `bytes` (Boolean) Change type to `bytea`.
+
+
+
+### Nested Schema for `include_headers`
+
+Optional:
+
+- `all` (Boolean) Include all headers.
+- `not` (List of String) Headers that should be excluded.
+- `only` (List of String) Headers that should be included.
+
+## Import
+
+Import is supported using the following syntax:
+
+```shell
+# Source tables can be imported using the source table id:
+terraform import materialize_source_table_webhook.example_source_table_webhook <region>:<table_id>
+
+# The source table id and other information can be found in the `mz_catalog.mz_tables` table
+# The region is the region where the database is located (e.g. aws/us-east-1)
+```
diff --git a/docs/resources/source_webhook.md b/docs/resources/source_webhook.md
index 5b9e1024..091737d8 100644
--- a/docs/resources/source_webhook.md
+++ b/docs/resources/source_webhook.md
@@ -3,12 +3,12 @@
page_title: "materialize_source_webhook Resource - terraform-provider-materialize"
subcategory: ""
description: |-
- A webhook source describes a webhook you want Materialize to read data from.
+ A webhook source describes a webhook you want Materialize to read data from. This resource is deprecated and will be removed in a future release. Please use materialize_source_table_webhook instead.
---
# materialize_source_webhook (Resource)
-A webhook source describes a webhook you want Materialize to read data from.
+A webhook source describes a webhook you want Materialize to read data from. This resource is deprecated and will be removed in a future release. Please use materialize_source_table_webhook instead.
## Example Usage
diff --git a/docs/resources/table_grant.md b/docs/resources/table_grant.md
index 9ed3f62a..9efdb4e3 100644
--- a/docs/resources/table_grant.md
+++ b/docs/resources/table_grant.md
@@ -53,4 +53,4 @@ Import is supported using the following syntax:
terraform import materialize_table_grant.example :GRANT|TABLE|||
# The region is the region where the database is located (e.g. aws/us-east-1)
-```
\ No newline at end of file
+```
diff --git a/examples/data-sources/materialize_source_reference/data-source.tf b/examples/data-sources/materialize_source_reference/data-source.tf
new file mode 100644
index 00000000..b4c430ee
--- /dev/null
+++ b/examples/data-sources/materialize_source_reference/data-source.tf
@@ -0,0 +1,7 @@
+data "materialize_source_reference" "source_references" {
+ source_id = materialize_source_mysql.test.id
+}
+
+output "source_references" {
+  value = data.materialize_source_reference.source_references.references
+}
diff --git a/examples/data-sources/materialize_source_table/data-source.tf b/examples/data-sources/materialize_source_table/data-source.tf
new file mode 100644
index 00000000..30e5c940
--- /dev/null
+++ b/examples/data-sources/materialize_source_table/data-source.tf
@@ -0,0 +1,10 @@
+data "materialize_source_table" "all" {}
+
+data "materialize_source_table" "materialize" {
+ database_name = "materialize"
+}
+
+data "materialize_source_table" "materialize_schema" {
+ database_name = "materialize"
+ schema_name = "schema"
+}
diff --git a/examples/data-sources/materialize_table/data-source.tf b/examples/data-sources/materialize_table/data-source.tf
new file mode 100644
index 00000000..9f79d339
--- /dev/null
+++ b/examples/data-sources/materialize_table/data-source.tf
@@ -0,0 +1,10 @@
+data "materialize_table" "all" {}
+
+data "materialize_table" "materialize" {
+ database_name = "materialize"
+}
+
+data "materialize_table" "materialize_schema" {
+ database_name = "materialize"
+ schema_name = "schema"
+}
diff --git a/examples/resources/materialize_source_mysql/resource.tf b/examples/resources/materialize_source_mysql/resource.tf
index e891a474..72fbb0f6 100644
--- a/examples/resources/materialize_source_mysql/resource.tf
+++ b/examples/resources/materialize_source_mysql/resource.tf
@@ -21,7 +21,3 @@ resource "materialize_source_mysql" "test" {
name = "mysql_table2_local"
}
}
-
-# CREATE SOURCE schema.source_mysql
-# FROM MYSQL CONNECTION "database"."schema"."mysql_connection" (PUBLICATION 'mz_source')
-# FOR TABLES (shop.mysql_table1 AS mysql_table1_local, shop.mysql_table2 AS mysql_table2_local);
diff --git a/examples/resources/materialize_source_table_kafka/import.sh b/examples/resources/materialize_source_table_kafka/import.sh
new file mode 100644
index 00000000..b4d52540
--- /dev/null
+++ b/examples/resources/materialize_source_table_kafka/import.sh
@@ -0,0 +1,5 @@
+# Source tables can be imported using the source table id:
+terraform import materialize_source_table_kafka.example_source_table_kafka <region>:<table_id>
+
+# The source table id and other information can be found in the `mz_catalog.mz_tables` table
+# The region is the region where the database is located (e.g. aws/us-east-1)
diff --git a/examples/resources/materialize_source_table_kafka/resource.tf b/examples/resources/materialize_source_table_kafka/resource.tf
new file mode 100644
index 00000000..ec2ceffb
--- /dev/null
+++ b/examples/resources/materialize_source_table_kafka/resource.tf
@@ -0,0 +1,46 @@
+resource "materialize_source_table_kafka" "kafka_source_table" {
+ name = "kafka_source_table"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_kafka.test_source_kafka.name
+ schema_name = materialize_source_kafka.test_source_kafka.schema_name
+ database_name = materialize_source_kafka.test_source_kafka.database_name
+ }
+
+ topic = "terraform"
+ include_key = true
+ include_key_alias = "message_key"
+ include_headers = true
+ include_headers_alias = "message_headers"
+ include_partition = true
+ include_partition_alias = "message_partition"
+ include_offset = true
+ include_offset_alias = "message_offset"
+ include_timestamp = true
+ include_timestamp_alias = "message_timestamp"
+
+
+ key_format {
+ text = true
+ }
+ value_format {
+ json = true
+ }
+
+ envelope {
+ upsert = true
+ upsert_options {
+ value_decoding_errors {
+ inline {
+ enabled = true
+ alias = "decoding_error"
+ }
+ }
+ }
+ }
+
+ ownership_role = "mz_system"
+ comment = "This is a test Kafka source table"
+}
diff --git a/examples/resources/materialize_source_table_load_generator/import.sh b/examples/resources/materialize_source_table_load_generator/import.sh
new file mode 100644
index 00000000..c673df14
--- /dev/null
+++ b/examples/resources/materialize_source_table_load_generator/import.sh
@@ -0,0 +1,5 @@
+# Source tables can be imported using the source table id:
+terraform import materialize_source_table_load_generator.example_source_table_loadgen :
+
+# The source table id and other information can be found in the `mz_catalog.mz_tables` table
+# The region is the region where the database is located (e.g. aws/us-east-1)
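+
+# For example, with a hypothetical region and table id:
+# terraform import materialize_source_table_load_generator.example_source_table_loadgen aws/us-east-1:u123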
diff --git a/examples/resources/materialize_source_table_load_generator/resource.tf b/examples/resources/materialize_source_table_load_generator/resource.tf
new file mode 100644
index 00000000..4bc698ea
--- /dev/null
+++ b/examples/resources/materialize_source_table_load_generator/resource.tf
@@ -0,0 +1,15 @@
+resource "materialize_source_table_load_generator" "load_generator_table_from_source" {
+ name = "load_generator_table_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+ # The parent load generator source must be a multi-output type, i.e. one configured with `auction_options`, `marketing_options`, or `tpch_options`.
+ source {
+ name = materialize_source_load_generator.example.name
+ schema_name = materialize_source_load_generator.example.schema_name
+ database_name = materialize_source_load_generator.example.database_name
+ }
+
+ upstream_name = "load_generator_table_name" # The name of the table from the load generator
+
+}
diff --git a/examples/resources/materialize_source_table_mysql/import.sh b/examples/resources/materialize_source_table_mysql/import.sh
new file mode 100644
index 00000000..1d910379
--- /dev/null
+++ b/examples/resources/materialize_source_table_mysql/import.sh
@@ -0,0 +1,5 @@
+# Source tables can be imported using the source table id:
+terraform import materialize_source_table_mysql.example_source_table_mysql :
+
+# The source table id and other information can be found in the `mz_catalog.mz_tables` table
+# The region is the region where the database is located (e.g. aws/us-east-1)
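+
+# For example, with a hypothetical region and table id:
+# terraform import materialize_source_table_mysql.example_source_table_mysql aws/us-east-1:u123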
diff --git a/examples/resources/materialize_source_table_mysql/resource.tf b/examples/resources/materialize_source_table_mysql/resource.tf
new file mode 100644
index 00000000..bedac347
--- /dev/null
+++ b/examples/resources/materialize_source_table_mysql/resource.tf
@@ -0,0 +1,20 @@
+resource "materialize_source_table_mysql" "mysql_table_from_source" {
+ name = "mysql_table_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_mysql.example.name
+ schema_name = materialize_source_mysql.example.schema_name
+ database_name = materialize_source_mysql.example.database_name
+ }
+
+ upstream_name = "mysql_table_name" # The name of the table in the MySQL database
+ upstream_schema_name = "mysql_db_name" # The name of the MySQL database (schema) that contains the upstream table
+
+ text_columns = [
+ "updated_at"
+ ]
+
+ ignore_columns = ["about"]
+}
diff --git a/examples/resources/materialize_source_table_postgres/import.sh b/examples/resources/materialize_source_table_postgres/import.sh
new file mode 100644
index 00000000..91e794a4
--- /dev/null
+++ b/examples/resources/materialize_source_table_postgres/import.sh
@@ -0,0 +1,5 @@
+# Source tables can be imported using the source table id:
+terraform import materialize_source_table_postgres.example_source_table_postgres :
+
+# The source table id and other information can be found in the `mz_catalog.mz_tables` table
+# The region is the region where the database is located (e.g. aws/us-east-1)
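+
+# For example, with a hypothetical region and table id:
+# terraform import materialize_source_table_postgres.example_source_table_postgres aws/us-east-1:u123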
diff --git a/examples/resources/materialize_source_table_postgres/resource.tf b/examples/resources/materialize_source_table_postgres/resource.tf
new file mode 100644
index 00000000..60bac8be
--- /dev/null
+++ b/examples/resources/materialize_source_table_postgres/resource.tf
@@ -0,0 +1,19 @@
+resource "materialize_source_table_postgres" "postgres_table_from_source" {
+ name = "postgres_table_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_postgres.example.name
+ schema_name = materialize_source_postgres.example.schema_name
+ database_name = materialize_source_postgres.example.database_name
+ }
+
+ upstream_name = "postgres_table_name" # The name of the table in the Postgres database
+ upstream_schema_name = "postgres_schema_name" # The name of the schema in the Postgres database
+
+ text_columns = [
+ "updated_at"
+ ]
+
+}
diff --git a/examples/resources/materialize_source_table_webhook/import.sh b/examples/resources/materialize_source_table_webhook/import.sh
new file mode 100644
index 00000000..6d13c91c
--- /dev/null
+++ b/examples/resources/materialize_source_table_webhook/import.sh
@@ -0,0 +1,5 @@
+# Source tables can be imported using the source table id:
+terraform import materialize_source_table_webhook.example_source_table_webhook :
+
+# The source table id and other information can be found in the `mz_catalog.mz_tables` table
+# The region is the region where the database is located (e.g. aws/us-east-1)
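+
+# For example, with a hypothetical region and table id:
+# terraform import materialize_source_table_webhook.example_source_table_webhook aws/us-east-1:u123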
diff --git a/examples/resources/materialize_source_table_webhook/resource.tf b/examples/resources/materialize_source_table_webhook/resource.tf
new file mode 100644
index 00000000..81d2053f
--- /dev/null
+++ b/examples/resources/materialize_source_table_webhook/resource.tf
@@ -0,0 +1,33 @@
+resource "materialize_source_table_webhook" "example_webhook" {
+ name = "example_webhook"
+ body_format = "json"
+ check_expression = "headers->'x-mz-api-key' = secret"
+ include_headers {
+ not = ["x-mz-api-key"]
+ }
+
+ check_options {
+ field {
+ headers = true
+ }
+ }
+
+ check_options {
+ field {
+ secret {
+ name = materialize_secret.password.name
+ database_name = materialize_secret.password.database_name
+ schema_name = materialize_secret.password.schema_name
+ }
+ }
+ alias = "secret"
+ }
+}
+
+# CREATE TABLE example_webhook FROM WEBHOOK
+# BODY FORMAT json
+# INCLUDE HEADERS ( NOT 'x-mz-api-key' )
+# CHECK (
+# WITH ( HEADERS, SECRET materialize.public.password AS secret)
+# headers->'x-mz-api-key' = secret
+# );
diff --git a/integration/postgres/postgres_bootstrap.sql b/integration/postgres/postgres_bootstrap.sql
index 0fc7ca01..39163abb 100644
--- a/integration/postgres/postgres_bootstrap.sql
+++ b/integration/postgres/postgres_bootstrap.sql
@@ -11,7 +11,8 @@ CREATE TABLE table2 (
);
CREATE TABLE table3 (
- id INT GENERATED ALWAYS AS IDENTITY
+ id INT GENERATED ALWAYS AS IDENTITY,
+ updated_at timestamp NOT NULL
);
-- Enable REPLICA for both tables
@@ -24,4 +25,4 @@ CREATE PUBLICATION mz_source FOR TABLE table1, table2, table3;
INSERT INTO table1 VALUES (1), (2), (3), (4), (5);
INSERT INTO table2 VALUES (1, NOW()), (2, NOW()), (3, NOW()), (4, NOW()), (5, NOW());
-INSERT INTO table3 VALUES (1), (2), (3), (4), (5);
+INSERT INTO table3 VALUES (1, NOW()), (2, NOW()), (3, NOW()), (4, NOW()), (5, NOW());
diff --git a/integration/source.tf b/integration/source.tf
index 4ff3e287..798c8938 100644
--- a/integration/source.tf
+++ b/integration/source.tf
@@ -70,6 +70,23 @@ resource "materialize_source_load_generator" "load_generator_auction" {
}
}
+# Create source table from Auction load generator source
+resource "materialize_source_table_load_generator" "load_generator_auction_table" {
+ name = "load_gen_auction_table"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_load_generator.load_generator_auction.name
+ schema_name = materialize_source_load_generator.load_generator_auction.schema_name
+ database_name = materialize_source_load_generator.load_generator_auction.database_name
+ }
+
+ comment = "source table load generator comment"
+
+ upstream_name = "bids"
+}
+
resource "materialize_source_load_generator" "load_generator_marketing" {
name = "load_gen_marketing"
schema_name = materialize_schema.schema.name
@@ -82,6 +99,23 @@ resource "materialize_source_load_generator" "load_generator_marketing" {
}
}
+# Create source table from Marketing load generator source
+resource "materialize_source_table_load_generator" "load_generator_marketing_table" {
+ name = "load_gen_marketing_table"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_load_generator.load_generator_marketing.name
+ schema_name = materialize_source_load_generator.load_generator_marketing.schema_name
+ database_name = materialize_source_load_generator.load_generator_marketing.database_name
+ }
+
+ comment = "source table load generator comment"
+
+ upstream_name = "leads"
+}
+
resource "materialize_source_load_generator" "load_generator_tpch" {
name = "load_gen_tpch"
schema_name = materialize_schema.schema.name
@@ -144,6 +178,26 @@ resource "materialize_source_postgres" "example_source_postgres" {
}
}
+# Create source table from Postgres source
+resource "materialize_source_table_postgres" "source_table_postgres" {
+ name = "source_table2_postgres"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_postgres.example_source_postgres.name
+ schema_name = materialize_source_postgres.example_source_postgres.schema_name
+ database_name = materialize_source_postgres.example_source_postgres.database_name
+ }
+
+ upstream_name = "table2"
+ upstream_schema_name = "public"
+
+ text_columns = [
+ "updated_at"
+ ]
+}
+
resource "materialize_source_kafka" "example_source_kafka_format_text" {
name = "source_kafka_text"
comment = "source kafka comment"
@@ -168,6 +222,60 @@ resource "materialize_source_kafka" "example_source_kafka_format_text" {
depends_on = [materialize_sink_kafka.sink_kafka]
}
+# Create source table from Kafka source
+resource "materialize_source_table_kafka" "source_table_kafka" {
+ name = "source_table_kafka"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_kafka.example_source_kafka_format_text.name
+ schema_name = materialize_source_kafka.example_source_kafka_format_text.schema_name
+ database_name = materialize_source_kafka.example_source_kafka_format_text.database_name
+ }
+
+ topic = "topic1"
+
+ key_format {
+ text = true
+ }
+ value_format {
+ json = true
+ }
+
+ include_key = true
+ include_key_alias = "message_key"
+ include_headers = true
+ include_headers_alias = "message_headers"
+ include_partition = true
+ include_partition_alias = "message_partition"
+ include_offset = true
+ include_offset_alias = "message_offset"
+ include_timestamp = true
+ include_timestamp_alias = "message_timestamp"
+
+}
+
+resource "materialize_source_table_kafka" "source_table_kafka_no_topic" {
+ name = "source_table_kafka_no_topic"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_kafka.example_source_kafka_format_text.name
+ schema_name = materialize_source_kafka.example_source_kafka_format_text.schema_name
+ database_name = materialize_source_kafka.example_source_kafka_format_text.database_name
+ }
+
+ key_format {
+ text = true
+ }
+ value_format {
+ json = true
+ }
+
+}
+
resource "materialize_source_kafka" "example_source_kafka_format_bytes" {
name = "source_kafka_bytes"
cluster_name = materialize_cluster.cluster_source.name
@@ -185,6 +293,27 @@ resource "materialize_source_kafka" "example_source_kafka_format_bytes" {
depends_on = [materialize_sink_kafka.sink_kafka]
}
+# Create source table from Kafka source with bytes format
+resource "materialize_source_table_kafka" "source_table_kafka_bytes" {
+ name = "source_table_kafka_bytes"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_kafka.example_source_kafka_format_bytes.name
+ schema_name = materialize_source_kafka.example_source_kafka_format_bytes.schema_name
+ database_name = materialize_source_kafka.example_source_kafka_format_bytes.database_name
+ }
+
+ topic = "topic1"
+
+ format {
+ bytes = true
+ }
+
+ depends_on = [materialize_sink_kafka.sink_kafka]
+}
+
resource "materialize_source_kafka" "example_source_kafka_format_avro" {
name = "source_kafka_avro"
cluster_name = materialize_cluster.cluster_source.name
@@ -211,6 +340,33 @@ resource "materialize_source_kafka" "example_source_kafka_format_avro" {
depends_on = [materialize_sink_kafka.sink_kafka]
}
+# Source table from Kafka source with Avro format
+resource "materialize_source_table_kafka" "source_table_kafka_avro" {
+ name = "source_table_kafka_avro"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_kafka.example_source_kafka_format_avro.name
+ schema_name = materialize_source_kafka.example_source_kafka_format_avro.schema_name
+ database_name = materialize_source_kafka.example_source_kafka_format_avro.database_name
+ }
+
+ topic = "topic1"
+
+ format {
+ avro {
+ schema_registry_connection {
+ name = materialize_connection_confluent_schema_registry.schema_registry.name
+ schema_name = materialize_connection_confluent_schema_registry.schema_registry.schema_name
+ database_name = materialize_connection_confluent_schema_registry.schema_registry.database_name
+ }
+ }
+ }
+
+ depends_on = [materialize_sink_kafka.sink_kafka]
+}
+
resource "materialize_source_webhook" "example_webhook_source" {
name = "example_webhook_source"
comment = "source webhook comment"
@@ -271,6 +427,22 @@ resource "materialize_source_mysql" "test" {
}
}
+# Create source table from MySQL source
+resource "materialize_source_table_mysql" "source_table_mysql" {
+ name = "source_table1_mysql"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_mysql.test.name
+ schema_name = materialize_source_mysql.test.schema_name
+ database_name = materialize_source_mysql.test.database_name
+ }
+
+ upstream_name = "mysql_table1"
+ upstream_schema_name = "shop"
+}
+
resource "materialize_source_grant" "source_grant_select" {
role_name = materialize_role.role_1.name
privilege = "SELECT"
@@ -317,6 +489,48 @@ resource "materialize_source_kafka" "kafka_upsert_options_source" {
include_key_alias = "key_alias"
}
+# Create source table from Kafka source with upsert options
+resource "materialize_source_table_kafka" "source_table_kafka_upsert_options" {
+ name = "source_table_kafka_upsert_options"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_kafka.kafka_upsert_options_source.name
+ schema_name = materialize_source_kafka.kafka_upsert_options_source.schema_name
+ database_name = materialize_source_kafka.kafka_upsert_options_source.database_name
+ }
+
+ topic = "topic1"
+
+ key_format {
+ text = true
+ }
+ value_format {
+ text = true
+ }
+
+
+ envelope {
+ upsert = true
+ upsert_options {
+ value_decoding_errors {
+ inline {
+ enabled = true
+ alias = "decoding_error"
+ }
+ }
+ }
+ }
+
+ include_timestamp_alias = "timestamp_alias"
+ include_offset = true
+ include_offset_alias = "offset_alias"
+ include_partition = true
+ include_partition_alias = "partition_alias"
+ include_key_alias = "key_alias"
+}
+
output "qualified_load_generator" {
value = materialize_source_load_generator.load_generator.qualified_sql_name
}
diff --git a/mocks/cloud/Dockerfile b/mocks/cloud/Dockerfile
index adbd4af3..53a2198b 100644
--- a/mocks/cloud/Dockerfile
+++ b/mocks/cloud/Dockerfile
@@ -1,5 +1,5 @@
# Start from the official Golang base image
-FROM golang:1.22 as builder
+FROM golang:1.22 AS builder
# Set the Current Working Directory inside the container
WORKDIR /app
diff --git a/pkg/datasources/datasource_source_reference.go b/pkg/datasources/datasource_source_reference.go
new file mode 100644
index 00000000..2b6e4572
--- /dev/null
+++ b/pkg/datasources/datasource_source_reference.go
@@ -0,0 +1,118 @@
+package datasources
+
+import (
+ "context"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+)
+
+func SourceReference() *schema.Resource {
+ return &schema.Resource{
+ ReadContext: sourceReferenceRead,
+ Description: "The `materialize_source_reference` data source retrieves a list of *available* upstream references for a given Materialize source. These references represent potential tables that can be created based on the source, but they do not necessarily indicate references the source is already ingesting. This allows users to see all upstream data that could be materialized into tables.",
+ Schema: map[string]*schema.Schema{
+ "source_id": {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "The ID of the source to get references for",
+ },
+ "references": {
+ Type: schema.TypeList,
+ Computed: true,
+ Description: "The source references",
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "namespace": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The namespace of the reference",
+ },
+ "name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The name of the reference",
+ },
+ "updated_at": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The last update timestamp of the reference",
+ },
+ "columns": {
+ Type: schema.TypeList,
+ Computed: true,
+ Description: "The columns of the reference",
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ },
+ "source_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The name of the source",
+ },
+ "source_schema_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The schema name of the source",
+ },
+ "source_database_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The database name of the source",
+ },
+ "source_type": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The type of the source",
+ },
+ },
+ },
+ },
+ "region": RegionSchema(),
+ },
+ }
+}
+
+func sourceReferenceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+ sourceID := d.Get("source_id").(string)
+ sourceID = utils.ExtractId(sourceID)
+
+ var diags diag.Diagnostics
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ sourceReference, err := materialize.ListSourceReferences(metaDb, sourceID)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ referenceFormats := []map[string]interface{}{}
+ for _, sr := range sourceReference {
+ referenceMap := map[string]interface{}{
+ "namespace": sr.Namespace.String,
+ "name": sr.Name.String,
+ "updated_at": sr.UpdatedAt.String,
+ "columns": sr.Columns,
+ "source_name": sr.SourceName.String,
+ "source_schema_name": sr.SourceSchemaName.String,
+ "source_database_name": sr.SourceDBName.String,
+ "source_type": sr.SourceType.String,
+ }
+ referenceFormats = append(referenceFormats, referenceMap)
+ }
+
+ if err := d.Set("references", referenceFormats); err != nil {
+ return diag.FromErr(err)
+ }
+
+ d.SetId(utils.TransformIdWithRegion(string(region), sourceID))
+
+ return diags
+}
diff --git a/pkg/datasources/datasource_source_reference_test.go b/pkg/datasources/datasource_source_reference_test.go
new file mode 100644
index 00000000..8dfa0774
--- /dev/null
+++ b/pkg/datasources/datasource_source_reference_test.go
@@ -0,0 +1,46 @@
+package datasources
+
+import (
+ "context"
+ "testing"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ "github.com/DATA-DOG/go-sqlmock"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/stretchr/testify/require"
+)
+
+func TestSourceReferenceDatasource(t *testing.T) {
+ r := require.New(t)
+
+ in := map[string]interface{}{
+ "source_id": "source-id",
+ }
+ d := schema.TestResourceDataRaw(t, SourceReference().Schema, in)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ predicate := `WHERE sr.source_id = 'source-id'`
+ testhelpers.MockSourceReferenceScan(mock, predicate)
+
+ if err := sourceReferenceRead(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+
+ // Verify the results
+ references := d.Get("references").([]interface{})
+ r.Equal(1, len(references))
+
+ reference := references[0].(map[string]interface{})
+ r.Equal("namespace", reference["namespace"])
+ r.Equal("reference_name", reference["name"])
+ r.Equal("2023-10-01T12:34:56Z", reference["updated_at"])
+ r.Equal([]interface{}{"column1", "column2"}, reference["columns"])
+ r.Equal("source_name", reference["source_name"])
+ r.Equal("source_schema_name", reference["source_schema_name"])
+ r.Equal("source_database_name", reference["source_database_name"])
+ r.Equal("source_type", reference["source_type"])
+ })
+}
diff --git a/pkg/datasources/datasource_source_table.go b/pkg/datasources/datasource_source_table.go
new file mode 100644
index 00000000..8d39b625
--- /dev/null
+++ b/pkg/datasources/datasource_source_table.go
@@ -0,0 +1,157 @@
+package datasources
+
+import (
+ "context"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+)
+
+func SourceTable() *schema.Resource {
+ return &schema.Resource{
+ ReadContext: sourceTableRead,
+ Schema: map[string]*schema.Schema{
+ "database_name": {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Limit tables to a specific database",
+ },
+ "schema_name": {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Limit tables to a specific schema within a specific database",
+ RequiredWith: []string{"database_name"},
+ },
+ "tables": {
+ Type: schema.TypeList,
+ Computed: true,
+ Description: "The source tables in the account",
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "id": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The ID of the source table",
+ },
+ "name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The name of the source table",
+ },
+ "schema_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The schema name of the source table",
+ },
+ "database_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The database name of the source table",
+ },
+ "source": {
+ Type: schema.TypeList,
+ Computed: true,
+ Description: "Information about the source",
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The name of the source",
+ },
+ "schema_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The schema name of the source",
+ },
+ "database_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The database name of the source",
+ },
+ },
+ },
+ },
+ "source_type": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The type of the source",
+ },
+ "upstream_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The name of the upstream table",
+ },
+ "upstream_schema_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The schema name of the upstream table",
+ },
+ "comment": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The comment on the source table",
+ },
+ "owner_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "The name of the owner of the source table",
+ },
+ },
+ },
+ },
+ "region": RegionSchema(),
+ },
+ }
+}
+
+func sourceTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ var diags diag.Diagnostics
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ dataSource, err := materialize.ListSourceTables(metaDb, schemaName, databaseName)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ tableFormats := []map[string]interface{}{}
+ for _, p := range dataSource {
+ tableMap := map[string]interface{}{
+ "id": p.TableId.String,
+ "name": p.TableName.String,
+ "schema_name": p.SchemaName.String,
+ "database_name": p.DatabaseName.String,
+ "source_type": p.SourceType.String,
+ "upstream_name": p.UpstreamName.String,
+ "upstream_schema_name": p.UpstreamSchemaName.String,
+ "comment": p.Comment.String,
+ "owner_name": p.OwnerName.String,
+ }
+
+ sourceMap := map[string]interface{}{
+ "name": p.SourceName.String,
+ "schema_name": p.SourceSchemaName.String,
+ "database_name": p.SourceDatabaseName.String,
+ }
+ tableMap["source"] = []interface{}{sourceMap}
+
+ tableFormats = append(tableFormats, tableMap)
+ }
+
+ if err := d.Set("tables", tableFormats); err != nil {
+ return diag.FromErr(err)
+ }
+
+ SetId(string(region), "source_tables", databaseName, schemaName, d)
+
+ return diags
+}
diff --git a/pkg/datasources/datasource_source_table_test.go b/pkg/datasources/datasource_source_table_test.go
new file mode 100644
index 00000000..59413f73
--- /dev/null
+++ b/pkg/datasources/datasource_source_table_test.go
@@ -0,0 +1,53 @@
+package datasources
+
+import (
+ "context"
+ "testing"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ "github.com/DATA-DOG/go-sqlmock"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/stretchr/testify/require"
+)
+
+func TestSourceTableDatasource(t *testing.T) {
+ r := require.New(t)
+
+ in := map[string]interface{}{
+ "schema_name": "schema",
+ "database_name": "database",
+ }
+ d := schema.TestResourceDataRaw(t, SourceTable().Schema, in)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ p := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema'`
+ testhelpers.MockSourceTableScan(mock, p)
+
+ if err := sourceTableRead(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+
+ // Verify the results
+ tables := d.Get("tables").([]interface{})
+ r.Equal(1, len(tables))
+
+ table := tables[0].(map[string]interface{})
+ r.Equal("u1", table["id"])
+ r.Equal("table", table["name"])
+ r.Equal("schema", table["schema_name"])
+ r.Equal("database", table["database_name"])
+ r.Equal("KAFKA", table["source_type"])
+ r.Equal("table", table["upstream_name"])
+ r.Equal("schema", table["upstream_schema_name"])
+ r.Equal("comment", table["comment"])
+ r.Equal("materialize", table["owner_name"])
+
+ source := table["source"].([]interface{})[0].(map[string]interface{})
+ r.Equal("source", source["name"])
+ r.Equal("public", source["schema_name"])
+ r.Equal("materialize", source["database_name"])
+ })
+}
diff --git a/pkg/datasources/datasource_table.go b/pkg/datasources/datasource_table.go
index 699ab894..e2ffddf8 100644
--- a/pkg/datasources/datasource_table.go
+++ b/pkg/datasources/datasource_table.go
@@ -32,20 +32,24 @@ func Table() *schema.Resource {
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"id": {
- Type: schema.TypeString,
- Computed: true,
+ Description: "The unique identifier for the table",
+ Type: schema.TypeString,
+ Computed: true,
},
"name": {
- Type: schema.TypeString,
- Computed: true,
+ Description: "The name of the table",
+ Type: schema.TypeString,
+ Computed: true,
},
"schema_name": {
- Type: schema.TypeString,
- Computed: true,
+ Description: "The schema of the table",
+ Type: schema.TypeString,
+ Computed: true,
},
"database_name": {
- Type: schema.TypeString,
- Computed: true,
+ Description: "The database of the table",
+ Type: schema.TypeString,
+ Computed: true,
},
},
},
diff --git a/pkg/materialize/format_specs.go b/pkg/materialize/format_specs.go
index ad3178fd..d9c51d48 100644
--- a/pkg/materialize/format_specs.go
+++ b/pkg/materialize/format_specs.go
@@ -73,7 +73,7 @@ func GetFormatSpecStruc(v interface{}) SourceFormatSpecStruct {
}
if protobuf, ok := u["protobuf"]; ok && protobuf != nil && len(protobuf.([]interface{})) > 0 {
if csr, ok := protobuf.([]interface{})[0].(map[string]interface{})["schema_registry_connection"]; ok {
- message := protobuf.([]interface{})[0].(map[string]interface{})["message_name"].(string)
+ message := protobuf.([]interface{})[0].(map[string]interface{})["message"].(string)
format.Protobuf = &ProtobufFormatSpec{
SchemaRegistryConnection: GetIdentifierSchemaStruct(csr),
MessageName: message,
diff --git a/pkg/materialize/source_load_generator.go b/pkg/materialize/source_load_generator.go
index 24c5d52c..2aec6865 100644
--- a/pkg/materialize/source_load_generator.go
+++ b/pkg/materialize/source_load_generator.go
@@ -127,6 +127,7 @@ type SourceLoadgenBuilder struct {
clusterName string
size string
loadGeneratorType string
+ allTables bool
counterOptions CounterOptions
auctionOptions AuctionOptions
marketingOptions MarketingOptions
@@ -157,6 +158,11 @@ func (b *SourceLoadgenBuilder) LoadGeneratorType(l string) *SourceLoadgenBuilder
return b
}
+func (b *SourceLoadgenBuilder) AllTables() *SourceLoadgenBuilder {
+ b.allTables = true
+ return b
+}
+
func (b *SourceLoadgenBuilder) ExposeProgress(e IdentifierSchemaStruct) *SourceLoadgenBuilder {
b.exposeProgress = e
return b
@@ -251,7 +257,9 @@ func (b *SourceLoadgenBuilder) Create() error {
// Include for multi-output sources
if b.loadGeneratorType == "AUCTION" || b.loadGeneratorType == "MARKETING" || b.loadGeneratorType == "TPCH" {
- q.WriteString(` FOR ALL TABLES`)
+ if b.allTables {
+ q.WriteString(` FOR ALL TABLES`)
+ }
}
if b.exposeProgress.Name != "" {
diff --git a/pkg/materialize/source_load_generator_test.go b/pkg/materialize/source_load_generator_test.go
index d70ed3c5..781771f2 100644
--- a/pkg/materialize/source_load_generator_test.go
+++ b/pkg/materialize/source_load_generator_test.go
@@ -44,6 +44,7 @@ func TestSourceLoadgenAuctionCreate(t *testing.T) {
b := NewSourceLoadgenBuilder(db, sourceLoadgen)
b.LoadGeneratorType("AUCTION")
+ b.AllTables()
b.AuctionOptions(AuctionOptions{
TickInterval: "1s",
})
@@ -65,6 +66,7 @@ func TestSourceLoadgenMarketingCreate(t *testing.T) {
b := NewSourceLoadgenBuilder(db, sourceLoadgen)
b.LoadGeneratorType("MARKETING")
+ b.AllTables()
b.MarketingOptions(MarketingOptions{
TickInterval: "1s",
})
@@ -86,6 +88,7 @@ func TestSourceLoadgenTPCHParamsCreate(t *testing.T) {
b := NewSourceLoadgenBuilder(db, sourceLoadgen)
b.LoadGeneratorType("TPCH")
+ b.AllTables()
b.TPCHOptions(TPCHOptions{
TickInterval: "1s",
ScaleFactor: 0.01,
diff --git a/pkg/materialize/source_mysql.go b/pkg/materialize/source_mysql.go
index f90d79b3..97db730a 100644
--- a/pkg/materialize/source_mysql.go
+++ b/pkg/materialize/source_mysql.go
@@ -15,6 +15,7 @@ type SourceMySQLBuilder struct {
ignoreColumns []string
textColumns []string
tables []TableStruct
+ allTables bool
exposeProgress IdentifierSchemaStruct
}
@@ -55,6 +56,11 @@ func (b *SourceMySQLBuilder) Tables(tables []TableStruct) *SourceMySQLBuilder {
return b
}
+func (b *SourceMySQLBuilder) AllTables() *SourceMySQLBuilder {
+ b.allTables = true
+ return b
+}
+
func (b *SourceMySQLBuilder) ExposeProgress(e IdentifierSchemaStruct) *SourceMySQLBuilder {
b.exposeProgress = e
return b
@@ -111,7 +117,9 @@ func (b *SourceMySQLBuilder) Create() error {
}
q.WriteString(`)`)
} else {
- q.WriteString(` FOR ALL TABLES`)
+ if b.allTables {
+ q.WriteString(` FOR ALL TABLES`)
+ }
}
if b.exposeProgress.Name != "" {
diff --git a/pkg/materialize/source_mysql_test.go b/pkg/materialize/source_mysql_test.go
index c9735126..e4c4f3d6 100644
--- a/pkg/materialize/source_mysql_test.go
+++ b/pkg/materialize/source_mysql_test.go
@@ -22,6 +22,7 @@ func TestSourceMySQLAllTablesCreate(t *testing.T) {
b := NewSourceMySQLBuilder(db, sourceMySQL)
b.MySQLConnection(IdentifierSchemaStruct{Name: "mysql_connection", SchemaName: "schema", DatabaseName: "database"})
+ b.AllTables()
if err := b.Create(); err != nil {
t.Fatal(err)
diff --git a/pkg/materialize/source_postgres.go b/pkg/materialize/source_postgres.go
index d744af9b..2c18633f 100644
--- a/pkg/materialize/source_postgres.go
+++ b/pkg/materialize/source_postgres.go
@@ -80,26 +80,28 @@ func (b *SourcePostgresBuilder) Create() error {
q.WriteString(fmt.Sprintf(` (%s)`, p))
- q.WriteString(` FOR TABLES (`)
- for i, t := range b.table {
- if t.UpstreamSchemaName == "" {
- t.UpstreamSchemaName = b.SchemaName
- }
- if t.Name == "" {
- t.Name = t.UpstreamName
- }
- if t.SchemaName == "" {
- t.SchemaName = b.SchemaName
- }
- if t.DatabaseName == "" {
- t.DatabaseName = b.DatabaseName
- }
- q.WriteString(fmt.Sprintf(`%s.%s AS %s.%s.%s`, QuoteIdentifier(t.UpstreamSchemaName), QuoteIdentifier(t.UpstreamName), QuoteIdentifier(t.DatabaseName), QuoteIdentifier(t.SchemaName), QuoteIdentifier(t.Name)))
- if i < len(b.table)-1 {
- q.WriteString(`, `)
+ if len(b.table) > 0 {
+ q.WriteString(` FOR TABLES (`)
+ for i, t := range b.table {
+ if t.UpstreamSchemaName == "" {
+ t.UpstreamSchemaName = b.SchemaName
+ }
+ if t.Name == "" {
+ t.Name = t.UpstreamName
+ }
+ if t.SchemaName == "" {
+ t.SchemaName = b.SchemaName
+ }
+ if t.DatabaseName == "" {
+ t.DatabaseName = b.DatabaseName
+ }
+ q.WriteString(fmt.Sprintf(`%s.%s AS %s.%s.%s`, QuoteIdentifier(t.UpstreamSchemaName), QuoteIdentifier(t.UpstreamName), QuoteIdentifier(t.DatabaseName), QuoteIdentifier(t.SchemaName), QuoteIdentifier(t.Name)))
+ if i < len(b.table)-1 {
+ q.WriteString(`, `)
+ }
}
+ q.WriteString(`)`)
}
- q.WriteString(`)`)
if b.exposeProgress.Name != "" {
q.WriteString(fmt.Sprintf(` EXPOSE PROGRESS AS %s`, b.exposeProgress.QualifiedName()))
diff --git a/pkg/materialize/source_reference.go b/pkg/materialize/source_reference.go
new file mode 100644
index 00000000..408d657e
--- /dev/null
+++ b/pkg/materialize/source_reference.go
@@ -0,0 +1,90 @@
+package materialize
+
+import (
+ "database/sql"
+ "fmt"
+
+ "github.com/jmoiron/sqlx"
+ "github.com/lib/pq"
+)
+
+type SourceReferenceParams struct {
+ SourceId sql.NullString `db:"source_id"`
+ Namespace sql.NullString `db:"namespace"`
+ Name sql.NullString `db:"name"`
+ UpdatedAt sql.NullString `db:"updated_at"`
+ Columns pq.StringArray `db:"columns"`
+ SourceName sql.NullString `db:"source_name"`
+ SourceSchemaName sql.NullString `db:"source_schema_name"`
+ SourceDBName sql.NullString `db:"source_database_name"`
+ SourceType sql.NullString `db:"source_type"`
+}
+
+var sourceReferenceQuery = NewBaseQuery(`
+ SELECT
+ sr.source_id,
+ sr.namespace,
+ sr.name,
+ sr.updated_at,
+ sr.columns,
+ s.name AS source_name,
+ ss.name AS source_schema_name,
+ sd.name AS source_database_name,
+ s.type AS source_type
+ FROM mz_internal.mz_source_references sr
+ JOIN mz_sources s ON sr.source_id = s.id
+ JOIN mz_schemas ss ON s.schema_id = ss.id
+ JOIN mz_databases sd ON ss.database_id = sd.id
+`)
+
+func SourceReferenceId(conn *sqlx.DB, sourceId string) (string, error) {
+ p := map[string]string{
+ "sr.source_id": sourceId,
+ }
+ q := sourceReferenceQuery.QueryPredicate(p)
+
+ var s SourceReferenceParams
+ if err := conn.Get(&s, q); err != nil {
+ return "", err
+ }
+
+ return s.SourceId.String, nil
+}
+
+func ScanSourceReference(conn *sqlx.DB, id string) (SourceReferenceParams, error) {
+ q := sourceReferenceQuery.QueryPredicate(map[string]string{"sr.source_id": id})
+
+ var s SourceReferenceParams
+ if err := conn.Get(&s, q); err != nil {
+ return s, err
+ }
+
+ return s, nil
+}
+
+func refreshSourceReferences(conn *sqlx.DB, sourceName, schemaName, databaseName string) error {
+ query := fmt.Sprintf(`ALTER SOURCE %s REFRESH REFERENCES`, QualifiedName(databaseName, schemaName, sourceName))
+ _, err := conn.Exec(query)
+ return err
+}
+
+func ListSourceReferences(conn *sqlx.DB, id string) ([]SourceReferenceParams, error) {
+ source, err := ScanSource(conn, id)
+ if err == nil {
+ if err := refreshSourceReferences(conn, source.SourceName.String, source.SchemaName.String, source.DatabaseName.String); err != nil {
+ return nil, fmt.Errorf("error refreshing source references: %v", err)
+ }
+ }
+
+ p := map[string]string{
+ "sr.source_id": id,
+ }
+ q := sourceReferenceQuery.QueryPredicate(p)
+
+ var references []SourceReferenceParams
+ if err := conn.Select(&references, q); err != nil {
+ return references, err
+ }
+
+ return references, nil
+}
diff --git a/pkg/materialize/source_reference_test.go b/pkg/materialize/source_reference_test.go
new file mode 100644
index 00000000..b34bfbe4
--- /dev/null
+++ b/pkg/materialize/source_reference_test.go
@@ -0,0 +1,95 @@
+package materialize
+
+import (
+ "testing"
+
+ sqlmock "github.com/DATA-DOG/go-sqlmock"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/jmoiron/sqlx"
+ "github.com/lib/pq"
+)
+
+func TestSourceReferenceId(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectQuery(
+ `SELECT sr\.source_id, sr\.namespace, sr\.name, sr\.updated_at, sr\.columns, s\.name AS source_name, ss\.name AS source_schema_name, sd\.name AS source_database_name, s\.type AS source_type
+ FROM mz_internal\.mz_source_references sr
+ JOIN mz_sources s ON sr\.source_id = s\.id
+ JOIN mz_schemas ss ON s\.schema_id = ss\.id
+ JOIN mz_databases sd ON ss\.database_id = sd\.id
+ WHERE sr\.source_id = 'test-source-id'`,
+ ).
+ WillReturnRows(sqlmock.NewRows([]string{"source_id"}).AddRow("test-source-id"))
+
+ result, err := SourceReferenceId(db, "test-source-id")
+ if err != nil {
+ t.Fatalf("unexpected error: %v", err)
+ }
+ if result != "test-source-id" {
+ t.Errorf("expected source id to be 'test-source-id', got %v", result)
+ }
+ })
+}
+
+func TestScanSourceReference(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectQuery(
+ `SELECT sr\.source_id, sr\.namespace, sr\.name, sr\.updated_at, sr\.columns, s\.name AS source_name, ss\.name AS source_schema_name, sd\.name AS source_database_name, s\.type AS source_type
+ FROM mz_internal\.mz_source_references sr
+ JOIN mz_sources s ON sr\.source_id = s\.id
+ JOIN mz_schemas ss ON s\.schema_id = ss\.id
+ JOIN mz_databases sd ON ss\.database_id = sd\.id
+ WHERE sr\.source_id = 'test-source-id'`,
+ ).
+ WillReturnRows(sqlmock.NewRows([]string{"source_id", "namespace", "name", "updated_at", "columns", "source_name", "source_schema_name", "source_database_name", "source_type"}).
+ AddRow("test-source-id", "test-namespace", "test-name", "2024-10-28", pq.StringArray{"col1", "col2"}, "source-name", "source-schema-name", "source-database-name", "source-type"))
+
+ result, err := ScanSourceReference(db, "test-source-id")
+ if err != nil {
+ t.Fatalf("unexpected error: %v", err)
+ }
+ if result.SourceId.String != "test-source-id" {
+ t.Errorf("expected source id to be 'test-source-id', got %v", result.SourceId.String)
+ }
+ })
+}
+
+func TestRefreshSourceReferences(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `ALTER SOURCE "test-database"\."test-schema"\."test-source" REFRESH REFERENCES`,
+ ).
+ WillReturnResult(sqlmock.NewResult(1, 1))
+
+ err := refreshSourceReferences(db, "test-source", "test-schema", "test-database")
+ if err != nil {
+ t.Fatalf("unexpected error: %v", err)
+ }
+ })
+}
+
+func TestListSourceReferences(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectQuery(
+ `SELECT sr\.source_id, sr\.namespace, sr\.name, sr\.updated_at, sr\.columns, s\.name AS source_name, ss\.name AS source_schema_name, sd\.name AS source_database_name, s\.type AS source_type
+ FROM mz_internal\.mz_source_references sr
+ JOIN mz_sources s ON sr\.source_id = s\.id
+ JOIN mz_schemas ss ON s\.schema_id = ss\.id
+ JOIN mz_databases sd ON ss\.database_id = sd\.id
+ WHERE sr\.source_id = 'test-source-id'`,
+ ).
+ WillReturnRows(sqlmock.NewRows([]string{"source_id", "namespace", "name", "updated_at", "columns", "source_name", "source_schema_name", "source_database_name", "source_type"}).
+ AddRow("test-source-id", "test-namespace", "test-name", "2024-10-28", pq.StringArray{"col1", "col2"}, "source-name", "source-schema-name", "source-database-name", "source-type"))
+
+ result, err := ListSourceReferences(db, "test-source-id")
+ if err != nil {
+ t.Fatalf("unexpected error: %v", err)
+ }
+ if len(result) != 1 {
+ t.Errorf("expected 1 result, got %d", len(result))
+ }
+ if result[0].SourceId.String != "test-source-id" {
+ t.Errorf("expected source id to be 'test-source-id', got %v", result[0].SourceId.String)
+ }
+ })
+}
diff --git a/pkg/materialize/source_table.go b/pkg/materialize/source_table.go
new file mode 100644
index 00000000..c1932cce
--- /dev/null
+++ b/pkg/materialize/source_table.go
@@ -0,0 +1,196 @@
+package materialize
+
+import (
+ "database/sql"
+ "fmt"
+ "strings"
+
+ "github.com/jmoiron/sqlx"
+ "github.com/lib/pq"
+)
+
+type SourceTableParams struct {
+ TableId sql.NullString `db:"id"`
+ TableName sql.NullString `db:"name"`
+ SchemaName sql.NullString `db:"schema_name"`
+ DatabaseName sql.NullString `db:"database_name"`
+ SourceName sql.NullString `db:"source_name"`
+ SourceSchemaName sql.NullString `db:"source_schema_name"`
+ SourceDatabaseName sql.NullString `db:"source_database_name"`
+ SourceType sql.NullString `db:"source_type"`
+ UpstreamName sql.NullString `db:"upstream_table_name"`
+ UpstreamSchemaName sql.NullString `db:"upstream_schema_name"`
+ TextColumns pq.StringArray `db:"text_columns"`
+ Comment sql.NullString `db:"comment"`
+ OwnerName sql.NullString `db:"owner_name"`
+ Privileges pq.StringArray `db:"privileges"`
+}
+
+var sourceTableQuery = NewBaseQuery(`
+ SELECT
+ mz_tables.id,
+ mz_tables.name,
+ mz_schemas.name AS schema_name,
+ mz_databases.name AS database_name,
+ mz_sources.name AS source_name,
+ source_schemas.name AS source_schema_name,
+ source_databases.name AS source_database_name,
+ mz_sources.type AS source_type,
+ COALESCE(mz_kafka_source_tables.topic,
+ mz_mysql_source_tables.table_name,
+ mz_postgres_source_tables.table_name) AS upstream_table_name,
+ COALESCE(mz_mysql_source_tables.schema_name,
+ mz_postgres_source_tables.schema_name) AS upstream_schema_name,
+ comments.comment AS comment,
+ mz_roles.name AS owner_name,
+ mz_tables.privileges
+ FROM mz_tables
+ JOIN mz_schemas
+ ON mz_tables.schema_id = mz_schemas.id
+ JOIN mz_databases
+ ON mz_schemas.database_id = mz_databases.id
+ JOIN mz_sources
+ ON mz_tables.source_id = mz_sources.id
+ JOIN mz_schemas AS source_schemas
+ ON mz_sources.schema_id = source_schemas.id
+ JOIN mz_databases AS source_databases
+ ON source_schemas.database_id = source_databases.id
+ LEFT JOIN mz_internal.mz_kafka_source_tables
+ ON mz_tables.id = mz_kafka_source_tables.id
+ LEFT JOIN mz_internal.mz_mysql_source_tables
+ ON mz_tables.id = mz_mysql_source_tables.id
+ LEFT JOIN mz_internal.mz_postgres_source_tables
+ ON mz_tables.id = mz_postgres_source_tables.id
+ JOIN mz_roles
+ ON mz_tables.owner_id = mz_roles.id
+ LEFT JOIN (
+ SELECT id, comment
+ FROM mz_internal.mz_comments
+ WHERE object_type = 'table'
+ AND object_sub_id IS NULL
+ ) comments
+ ON mz_tables.id = comments.id
+`)
+
+func SourceTableId(conn *sqlx.DB, obj MaterializeObject) (string, error) {
+ p := map[string]string{
+ "mz_tables.name": obj.Name,
+ "mz_schemas.name": obj.SchemaName,
+ "mz_databases.name": obj.DatabaseName,
+ }
+ q := sourceTableQuery.QueryPredicate(p)
+
+ var t SourceTableParams
+ if err := conn.Get(&t, q); err != nil {
+ return "", err
+ }
+
+ return t.TableId.String, nil
+}
+
+func ScanSourceTable(conn *sqlx.DB, id string) (SourceTableParams, error) {
+ q := sourceTableQuery.QueryPredicate(map[string]string{"mz_tables.id": id})
+
+ var t SourceTableParams
+ if err := conn.Get(&t, q); err != nil {
+ return t, err
+ }
+
+ return t, nil
+}
+
+type SourceTableBuilder struct {
+ ddl Builder
+ tableName string
+ schemaName string
+ databaseName string
+ source IdentifierSchemaStruct
+ upstreamName string
+ upstreamSchemaName string
+ conn *sqlx.DB
+}
+
+func NewSourceTableBuilder(conn *sqlx.DB, obj MaterializeObject) *SourceTableBuilder {
+ return &SourceTableBuilder{
+ ddl: Builder{conn, Table},
+ tableName: obj.Name,
+ schemaName: obj.SchemaName,
+ databaseName: obj.DatabaseName,
+ conn: conn,
+ }
+}
+
+func (b *SourceTableBuilder) QualifiedName() string {
+ return QualifiedName(b.databaseName, b.schemaName, b.tableName)
+}
+
+func (b *SourceTableBuilder) Source(s IdentifierSchemaStruct) *SourceTableBuilder {
+ b.source = s
+ return b
+}
+
+func (b *SourceTableBuilder) UpstreamName(n string) *SourceTableBuilder {
+ b.upstreamName = n
+ return b
+}
+
+func (b *SourceTableBuilder) UpstreamSchemaName(n string) *SourceTableBuilder {
+ b.upstreamSchemaName = n
+ return b
+}
+
+func (b *SourceTableBuilder) Rename(newName string) error {
+ oldName := b.QualifiedName()
+ b.tableName = newName
+ newName = b.QualifiedName()
+ return b.ddl.rename(oldName, newName)
+}
+
+func (b *SourceTableBuilder) Drop() error {
+ qn := b.QualifiedName()
+ return b.ddl.drop(qn)
+}
+
+// BaseCreate provides a template for the Create method
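+// For example, with an upstream reference set and no additional options, the
+// emitted DDL has the shape (illustrative only):
+//
+//	CREATE TABLE "db"."schema"."table" FROM SOURCE "db"."schema"."source" (REFERENCE "upstream_schema"."upstream_table");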
+func (b *SourceTableBuilder) BaseCreate(sourceType string, additionalOptions func() string) error {
+ q := strings.Builder{}
+ q.WriteString(fmt.Sprintf(`CREATE TABLE %s`, b.QualifiedName()))
+ q.WriteString(fmt.Sprintf(` FROM SOURCE %s`, b.source.QualifiedName()))
+
+ // Reference is not required for Kafka sources and single-output load generator sources
+ if b.upstreamName != "" {
+ q.WriteString(` (REFERENCE `)
+
+ if b.upstreamSchemaName != "" {
+ q.WriteString(fmt.Sprintf(`%s.`, QuoteIdentifier(b.upstreamSchemaName)))
+ }
+ q.WriteString(QuoteIdentifier(b.upstreamName))
+
+ q.WriteString(")")
+ }
+
+ if additionalOptions != nil {
+ options := additionalOptions()
+ if options != "" {
+ q.WriteString(options)
+ }
+ }
+
+ q.WriteString(`;`)
+ return b.ddl.exec(q.String())
+}
+
+func ListSourceTables(conn *sqlx.DB, schemaName, databaseName string) ([]SourceTableParams, error) {
+ p := map[string]string{
+ "mz_schemas.name": schemaName,
+ "mz_databases.name": databaseName,
+ }
+ q := sourceTableQuery.QueryPredicate(p)
+
+ var c []SourceTableParams
+ if err := conn.Select(&c, q); err != nil {
+ return c, err
+ }
+
+ return c, nil
+}
diff --git a/pkg/materialize/source_table_kafka.go b/pkg/materialize/source_table_kafka.go
new file mode 100644
index 00000000..a6d7b41d
--- /dev/null
+++ b/pkg/materialize/source_table_kafka.go
@@ -0,0 +1,389 @@
+package materialize
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/jmoiron/sqlx"
+)
+
+type SourceTableKafkaParams struct {
+ SourceTableParams
+}
+
+var sourceTableKafkaQuery = `
+ SELECT
+ mz_tables.id,
+ mz_tables.name,
+ mz_schemas.name AS schema_name,
+ mz_databases.name AS database_name,
+ mz_sources.name AS source_name,
+ source_schemas.name AS source_schema_name,
+ source_databases.name AS source_database_name,
+ mz_kafka_source_tables.topic AS upstream_table_name,
+ mz_sources.type AS source_type,
+ comments.comment AS comment,
+ mz_roles.name AS owner_name,
+ mz_tables.privileges
+ FROM mz_tables
+ JOIN mz_schemas
+ ON mz_tables.schema_id = mz_schemas.id
+ JOIN mz_databases
+ ON mz_schemas.database_id = mz_databases.id
+ JOIN mz_sources
+ ON mz_tables.source_id = mz_sources.id
+ JOIN mz_schemas AS source_schemas
+ ON mz_sources.schema_id = source_schemas.id
+ JOIN mz_databases AS source_databases
+ ON source_schemas.database_id = source_databases.id
+ LEFT JOIN mz_internal.mz_kafka_source_tables
+ ON mz_tables.id = mz_kafka_source_tables.id
+ JOIN mz_roles
+ ON mz_tables.owner_id = mz_roles.id
+ LEFT JOIN (
+ SELECT id, comment
+ FROM mz_internal.mz_comments
+ WHERE object_type = 'table'
+ AND object_sub_id IS NULL
+ ) comments
+ ON mz_tables.id = comments.id
+`
+
+func SourceTableKafkaId(conn *sqlx.DB, obj MaterializeObject) (string, error) {
+ p := map[string]string{
+ "mz_tables.name": obj.Name,
+ "mz_schemas.name": obj.SchemaName,
+ "mz_databases.name": obj.DatabaseName,
+ }
+ q := NewBaseQuery(sourceTableKafkaQuery).QueryPredicate(p)
+
+ var t SourceTableKafkaParams
+ if err := conn.Get(&t, q); err != nil {
+ return "", err
+ }
+
+ return t.TableId.String, nil
+}
+
+func ScanSourceTableKafka(conn *sqlx.DB, id string) (SourceTableKafkaParams, error) {
+ q := NewBaseQuery(sourceTableKafkaQuery).QueryPredicate(map[string]string{"mz_tables.id": id})
+
+ var params SourceTableKafkaParams
+ if err := conn.Get(&params, q); err != nil {
+ return params, err
+ }
+
+ return params, nil
+}
+
+type SourceTableKafkaBuilder struct {
+ *SourceTableBuilder
+ includeKey bool
+ includeHeaders bool
+ includePartition bool
+ includeOffset bool
+ includeTimestamp bool
+ keyAlias string
+ headersAlias string
+ partitionAlias string
+ offsetAlias string
+ timestampAlias string
+ format SourceFormatSpecStruct
+ keyFormat SourceFormatSpecStruct
+ valueFormat SourceFormatSpecStruct
+ envelope KafkaSourceEnvelopeStruct
+ exposeProgress IdentifierSchemaStruct
+}
+
+func NewSourceTableKafkaBuilder(conn *sqlx.DB, obj MaterializeObject) *SourceTableKafkaBuilder {
+ return &SourceTableKafkaBuilder{
+ SourceTableBuilder: NewSourceTableBuilder(conn, obj),
+ }
+}
+
+func (b *SourceTableKafkaBuilder) IncludeKey() *SourceTableKafkaBuilder {
+ b.includeKey = true
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) IncludeHeaders() *SourceTableKafkaBuilder {
+ b.includeHeaders = true
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) IncludePartition() *SourceTableKafkaBuilder {
+ b.includePartition = true
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) IncludeOffset() *SourceTableKafkaBuilder {
+ b.includeOffset = true
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) IncludeTimestamp() *SourceTableKafkaBuilder {
+ b.includeTimestamp = true
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) IncludeKeyAlias(alias string) *SourceTableKafkaBuilder {
+ b.includeKey = true
+ b.keyAlias = alias
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) IncludeHeadersAlias(alias string) *SourceTableKafkaBuilder {
+ b.includeHeaders = true
+ b.headersAlias = alias
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) IncludePartitionAlias(alias string) *SourceTableKafkaBuilder {
+ b.includePartition = true
+ b.partitionAlias = alias
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) IncludeOffsetAlias(alias string) *SourceTableKafkaBuilder {
+ b.includeOffset = true
+ b.offsetAlias = alias
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) IncludeTimestampAlias(alias string) *SourceTableKafkaBuilder {
+ b.includeTimestamp = true
+ b.timestampAlias = alias
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) Format(f SourceFormatSpecStruct) *SourceTableKafkaBuilder {
+ b.format = f
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) Envelope(e KafkaSourceEnvelopeStruct) *SourceTableKafkaBuilder {
+ b.envelope = e
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) KeyFormat(k SourceFormatSpecStruct) *SourceTableKafkaBuilder {
+ b.keyFormat = k
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) ValueFormat(v SourceFormatSpecStruct) *SourceTableKafkaBuilder {
+ b.valueFormat = v
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) ExposeProgress(e IdentifierSchemaStruct) *SourceTableKafkaBuilder {
+ b.exposeProgress = e
+ return b
+}
+
+func (b *SourceTableKafkaBuilder) Create() error {
+ return b.BaseCreate("kafka", func() string {
+ q := strings.Builder{}
+ var options []string
+
+ // Format
+ if b.format.Avro != nil {
+ if b.format.Avro.SchemaRegistryConnection.Name != "" {
+ options = append(options, fmt.Sprintf(`FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION %s`, QualifiedName(b.format.Avro.SchemaRegistryConnection.DatabaseName, b.format.Avro.SchemaRegistryConnection.SchemaName, b.format.Avro.SchemaRegistryConnection.Name)))
+ }
+ if b.format.Avro.KeyStrategy != "" {
+ options = append(options, fmt.Sprintf(`KEY STRATEGY %s`, b.format.Avro.KeyStrategy))
+ }
+ if b.format.Avro.ValueStrategy != "" {
+ options = append(options, fmt.Sprintf(`VALUE STRATEGY %s`, b.format.Avro.ValueStrategy))
+ }
+ }
+
+ if b.format.Protobuf != nil {
+ if b.format.Protobuf.SchemaRegistryConnection.Name != "" && b.format.Protobuf.MessageName != "" {
+ options = append(options, fmt.Sprintf(`FORMAT PROTOBUF MESSAGE %s USING CONFLUENT SCHEMA REGISTRY CONNECTION %s`, QuoteString(b.format.Protobuf.MessageName), QualifiedName(b.format.Protobuf.SchemaRegistryConnection.DatabaseName, b.format.Protobuf.SchemaRegistryConnection.SchemaName, b.format.Protobuf.SchemaRegistryConnection.Name)))
+ } else if b.format.Protobuf.SchemaRegistryConnection.Name != "" {
+ options = append(options, fmt.Sprintf(`FORMAT PROTOBUF USING CONFLUENT SCHEMA REGISTRY CONNECTION %s`, QualifiedName(b.format.Protobuf.SchemaRegistryConnection.DatabaseName, b.format.Protobuf.SchemaRegistryConnection.SchemaName, b.format.Protobuf.SchemaRegistryConnection.Name)))
+ }
+ }
+
+ if b.format.Csv != nil {
+ if b.format.Csv.Columns > 0 {
+ options = append(options, fmt.Sprintf(`FORMAT CSV WITH %d COLUMNS`, b.format.Csv.Columns))
+ }
+ if b.format.Csv.Header != nil {
+ options = append(options, fmt.Sprintf(`FORMAT CSV WITH HEADER ( %s )`, strings.Join(b.format.Csv.Header, ", ")))
+ }
+ if b.format.Csv.DelimitedBy != "" {
+ options = append(options, fmt.Sprintf(`DELIMITER %s`, QuoteString(b.format.Csv.DelimitedBy)))
+ }
+ }
+
+ if b.format.Bytes {
+ options = append(options, `FORMAT BYTES`)
+ }
+ if b.format.Text {
+ options = append(options, `FORMAT TEXT`)
+ }
+ if b.format.Json {
+ options = append(options, `FORMAT JSON`)
+ }
+
+ // Key Format
+ if b.keyFormat.Avro != nil {
+ if b.keyFormat.Avro.SchemaRegistryConnection.Name != "" {
+ options = append(options, fmt.Sprintf(`KEY FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION %s`, QualifiedName(b.keyFormat.Avro.SchemaRegistryConnection.DatabaseName, b.keyFormat.Avro.SchemaRegistryConnection.SchemaName, b.keyFormat.Avro.SchemaRegistryConnection.Name)))
+ }
+ if b.keyFormat.Avro.KeyStrategy != "" {
+ options = append(options, fmt.Sprintf(`KEY STRATEGY %s`, b.keyFormat.Avro.KeyStrategy))
+ }
+ if b.keyFormat.Avro.ValueStrategy != "" {
+ options = append(options, fmt.Sprintf(`VALUE STRATEGY %s`, b.keyFormat.Avro.ValueStrategy))
+ }
+ }
+
+ if b.keyFormat.Protobuf != nil {
+ if b.keyFormat.Protobuf.SchemaRegistryConnection.Name != "" && b.keyFormat.Protobuf.MessageName != "" {
+ options = append(options, fmt.Sprintf(`KEY FORMAT PROTOBUF MESSAGE %s USING CONFLUENT SCHEMA REGISTRY CONNECTION %s`, QuoteString(b.keyFormat.Protobuf.MessageName), QualifiedName(b.keyFormat.Protobuf.SchemaRegistryConnection.DatabaseName, b.keyFormat.Protobuf.SchemaRegistryConnection.SchemaName, b.keyFormat.Protobuf.SchemaRegistryConnection.Name)))
+ } else if b.keyFormat.Protobuf.SchemaRegistryConnection.Name != "" {
+ options = append(options, fmt.Sprintf(`KEY FORMAT PROTOBUF USING CONFLUENT SCHEMA REGISTRY CONNECTION %s`, QualifiedName(b.keyFormat.Protobuf.SchemaRegistryConnection.DatabaseName, b.keyFormat.Protobuf.SchemaRegistryConnection.SchemaName, b.keyFormat.Protobuf.SchemaRegistryConnection.Name)))
+ }
+ }
+
+ if b.keyFormat.Csv != nil {
+ if b.keyFormat.Csv.Columns > 0 {
+ options = append(options, fmt.Sprintf(`KEY FORMAT CSV WITH %d COLUMNS`, b.keyFormat.Csv.Columns))
+ }
+ if b.keyFormat.Csv.Header != nil {
+ options = append(options, fmt.Sprintf(`KEY FORMAT CSV WITH HEADER ( %s )`, strings.Join(b.keyFormat.Csv.Header, ", ")))
+ }
+ if b.keyFormat.Csv.DelimitedBy != "" {
+ options = append(options, fmt.Sprintf(`KEY DELIMITER %s`, QuoteString(b.keyFormat.Csv.DelimitedBy)))
+ }
+ }
+
+ if b.keyFormat.Bytes {
+ options = append(options, `KEY FORMAT BYTES`)
+ }
+ if b.keyFormat.Text {
+ options = append(options, `KEY FORMAT TEXT`)
+ }
+ if b.keyFormat.Json {
+ options = append(options, `KEY FORMAT JSON`)
+ }
+
+ // Value Format
+ if b.valueFormat.Avro != nil {
+ if b.valueFormat.Avro.SchemaRegistryConnection.Name != "" {
+ options = append(options, fmt.Sprintf(`VALUE FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION %s`, QualifiedName(b.valueFormat.Avro.SchemaRegistryConnection.DatabaseName, b.valueFormat.Avro.SchemaRegistryConnection.SchemaName, b.valueFormat.Avro.SchemaRegistryConnection.Name)))
+ }
+ if b.valueFormat.Avro.KeyStrategy != "" {
+ options = append(options, fmt.Sprintf(`KEY STRATEGY %s`, b.valueFormat.Avro.KeyStrategy))
+ }
+ if b.valueFormat.Avro.ValueStrategy != "" {
+ options = append(options, fmt.Sprintf(`VALUE STRATEGY %s`, b.valueFormat.Avro.ValueStrategy))
+ }
+ }
+
+ if b.valueFormat.Protobuf != nil {
+ if b.valueFormat.Protobuf.SchemaRegistryConnection.Name != "" && b.valueFormat.Protobuf.MessageName != "" {
+ options = append(options, fmt.Sprintf(`VALUE FORMAT PROTOBUF MESSAGE %s USING CONFLUENT SCHEMA REGISTRY CONNECTION %s`, QuoteString(b.valueFormat.Protobuf.MessageName), QualifiedName(b.valueFormat.Protobuf.SchemaRegistryConnection.DatabaseName, b.valueFormat.Protobuf.SchemaRegistryConnection.SchemaName, b.valueFormat.Protobuf.SchemaRegistryConnection.Name)))
+ } else if b.valueFormat.Protobuf.SchemaRegistryConnection.Name != "" {
+ options = append(options, fmt.Sprintf(`VALUE FORMAT PROTOBUF USING CONFLUENT SCHEMA REGISTRY CONNECTION %s`, QualifiedName(b.valueFormat.Protobuf.SchemaRegistryConnection.DatabaseName, b.valueFormat.Protobuf.SchemaRegistryConnection.SchemaName, b.valueFormat.Protobuf.SchemaRegistryConnection.Name)))
+ }
+ }
+
+ if b.valueFormat.Csv != nil {
+ if b.valueFormat.Csv.Columns > 0 {
+ options = append(options, fmt.Sprintf(`VALUE FORMAT CSV WITH %d COLUMNS`, b.valueFormat.Csv.Columns))
+ }
+ if b.valueFormat.Csv.Header != nil {
+ options = append(options, fmt.Sprintf(`VALUE FORMAT CSV WITH HEADER ( %s )`, strings.Join(b.valueFormat.Csv.Header, ", ")))
+ }
+ if b.valueFormat.Csv.DelimitedBy != "" {
+ options = append(options, fmt.Sprintf(`VALUE DELIMITER %s`, QuoteString(b.valueFormat.Csv.DelimitedBy)))
+ }
+ }
+
+ if b.valueFormat.Bytes {
+ options = append(options, `VALUE FORMAT BYTES`)
+ }
+ if b.valueFormat.Text {
+ options = append(options, `VALUE FORMAT TEXT`)
+ }
+ if b.valueFormat.Json {
+ options = append(options, `VALUE FORMAT JSON`)
+ }
+
+ // Metadata
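+ // The selected options are joined into one clause, e.g. INCLUDE KEY AS "message_key", HEADERS, PARTITION, OFFSET, TIMESTAMP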
+ var metadataOptions []string
+ if b.includeKey {
+ if b.keyAlias != "" {
+ metadataOptions = append(metadataOptions, fmt.Sprintf("KEY AS %s", QuoteIdentifier(b.keyAlias)))
+ } else {
+ metadataOptions = append(metadataOptions, "KEY")
+ }
+ }
+ if b.includeHeaders {
+ if b.headersAlias != "" {
+ metadataOptions = append(metadataOptions, fmt.Sprintf("HEADERS AS %s", QuoteIdentifier(b.headersAlias)))
+ } else {
+ metadataOptions = append(metadataOptions, "HEADERS")
+ }
+ }
+ if b.includePartition {
+ if b.partitionAlias != "" {
+ metadataOptions = append(metadataOptions, fmt.Sprintf("PARTITION AS %s", QuoteIdentifier(b.partitionAlias)))
+ } else {
+ metadataOptions = append(metadataOptions, "PARTITION")
+ }
+ }
+ if b.includeOffset {
+ if b.offsetAlias != "" {
+ metadataOptions = append(metadataOptions, fmt.Sprintf("OFFSET AS %s", QuoteIdentifier(b.offsetAlias)))
+ } else {
+ metadataOptions = append(metadataOptions, "OFFSET")
+ }
+ }
+ if b.includeTimestamp {
+ if b.timestampAlias != "" {
+ metadataOptions = append(metadataOptions, fmt.Sprintf("TIMESTAMP AS %s", QuoteIdentifier(b.timestampAlias)))
+ } else {
+ metadataOptions = append(metadataOptions, "TIMESTAMP")
+ }
+ }
+ if len(metadataOptions) > 0 {
+ options = append(options, fmt.Sprintf(`INCLUDE %s`, strings.Join(metadataOptions, ", ")))
+ }
+
+ // Envelope
+ if b.envelope.Debezium {
+ options = append(options, `ENVELOPE DEBEZIUM`)
+ }
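+ // ENVELOPE UPSERT may carry inline value-decoding error options, e.g. ENVELOPE UPSERT (VALUE DECODING ERRORS = (INLINE AS "decoding_error"))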
+ if b.envelope.Upsert {
+ upsertOption := "ENVELOPE UPSERT"
+ if b.envelope.UpsertOptions != nil {
+ inlineOptions := b.envelope.UpsertOptions.ValueDecodingErrors.Inline
+ if inlineOptions.Enabled {
+ upsertOption += " (VALUE DECODING ERRORS = (INLINE"
+ if inlineOptions.Alias != "" {
+ upsertOption += fmt.Sprintf(" AS %s", QuoteIdentifier(inlineOptions.Alias))
+ }
+ upsertOption += "))"
+ }
+ }
+ options = append(options, upsertOption)
+ }
+ if b.envelope.None {
+ options = append(options, `ENVELOPE NONE`)
+ }
+
+ // Expose Progress
+ if b.exposeProgress.Name != "" {
+ options = append(options, fmt.Sprintf(`EXPOSE PROGRESS AS %s`, b.exposeProgress.QualifiedName()))
+ }
+
+ q.WriteString(strings.Join(options, " "))
+ return " " + q.String()
+ })
+}
diff --git a/pkg/materialize/source_table_kafka_test.go b/pkg/materialize/source_table_kafka_test.go
new file mode 100644
index 00000000..85ca8a8c
--- /dev/null
+++ b/pkg/materialize/source_table_kafka_test.go
@@ -0,0 +1,123 @@
+package materialize
+
+import (
+ "testing"
+
+ sqlmock "github.com/DATA-DOG/go-sqlmock"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/jmoiron/sqlx"
+)
+
+func TestResourceSourceTableKafkaCreate(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."source"
+ FROM SOURCE "database"."schema"."kafka_source"
+ \(REFERENCE "topic"\)
+ FORMAT JSON
+ INCLUDE KEY AS "message_key", HEADERS AS "message_headers", PARTITION AS "message_partition"
+ ENVELOPE UPSERT
+ EXPOSE PROGRESS AS "database"."schema"."progress";`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ o := MaterializeObject{Name: "source", SchemaName: "schema", DatabaseName: "database"}
+ b := NewSourceTableKafkaBuilder(db, o)
+ b.Source(IdentifierSchemaStruct{Name: "kafka_source", DatabaseName: "database", SchemaName: "schema"})
+ b.UpstreamName("topic")
+ b.Format(SourceFormatSpecStruct{Json: true})
+ b.IncludeKey()
+ b.IncludeKeyAlias("message_key")
+ b.IncludeHeaders()
+ b.IncludeHeadersAlias("message_headers")
+ b.IncludePartition()
+ b.IncludePartitionAlias("message_partition")
+ b.Envelope(KafkaSourceEnvelopeStruct{Upsert: true})
+ b.ExposeProgress(IdentifierSchemaStruct{Name: "progress", DatabaseName: "database", SchemaName: "schema"})
+
+ if err := b.Create(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableKafkaCreateWithAvroFormat(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."source"
+ FROM SOURCE "database"."schema"."kafka_source"
+ \(REFERENCE "topic"\)
+ FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION "database"."schema"."schema_registry"
+ KEY STRATEGY EXTRACT
+ VALUE STRATEGY EXTRACT
+ INCLUDE TIMESTAMP
+ ENVELOPE DEBEZIUM;`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ o := MaterializeObject{Name: "source", SchemaName: "schema", DatabaseName: "database"}
+ b := NewSourceTableKafkaBuilder(db, o)
+ b.Source(IdentifierSchemaStruct{Name: "kafka_source", DatabaseName: "database", SchemaName: "schema"})
+ b.UpstreamName("topic")
+ b.Format(SourceFormatSpecStruct{
+ Avro: &AvroFormatSpec{
+ SchemaRegistryConnection: IdentifierSchemaStruct{Name: "schema_registry", DatabaseName: "database", SchemaName: "schema"},
+ KeyStrategy: "EXTRACT",
+ ValueStrategy: "EXTRACT",
+ },
+ })
+ b.IncludeTimestamp()
+ b.Envelope(KafkaSourceEnvelopeStruct{Debezium: true})
+
+ if err := b.Create(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableKafkaCreateWithUpsertOptions(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."source"
+ FROM SOURCE "database"."schema"."kafka_source"
+ \(REFERENCE "topic"\)
+ FORMAT JSON
+ INCLUDE KEY, HEADERS, PARTITION, OFFSET, TIMESTAMP
+ ENVELOPE UPSERT \(VALUE DECODING ERRORS = \(INLINE AS "my_error_col"\)\)
+ EXPOSE PROGRESS AS "database"."schema"."progress";`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ o := MaterializeObject{Name: "source", SchemaName: "schema", DatabaseName: "database"}
+ b := NewSourceTableKafkaBuilder(db, o)
+ b.Source(IdentifierSchemaStruct{Name: "kafka_source", DatabaseName: "database", SchemaName: "schema"})
+ b.UpstreamName("topic")
+ b.Format(SourceFormatSpecStruct{Json: true})
+ b.IncludeKey()
+ b.IncludeHeaders()
+ b.IncludePartition()
+ b.IncludeOffset()
+ b.IncludeTimestamp()
+ b.Envelope(KafkaSourceEnvelopeStruct{
+ Upsert: true,
+ UpsertOptions: &UpsertOptionsStruct{
+ ValueDecodingErrors: struct {
+ Inline struct {
+ Enabled bool
+ Alias string
+ }
+ }{
+ Inline: struct {
+ Enabled bool
+ Alias string
+ }{
+ Enabled: true,
+ Alias: "my_error_col",
+ },
+ },
+ },
+ })
+ b.ExposeProgress(IdentifierSchemaStruct{Name: "progress", DatabaseName: "database", SchemaName: "schema"})
+
+ if err := b.Create(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
diff --git a/pkg/materialize/source_table_load_generator.go b/pkg/materialize/source_table_load_generator.go
new file mode 100644
index 00000000..c3f81906
--- /dev/null
+++ b/pkg/materialize/source_table_load_generator.go
@@ -0,0 +1,19 @@
+package materialize
+
+import (
+ "github.com/jmoiron/sqlx"
+)
+
+type SourceTableLoadGenBuilder struct {
+ *SourceTableBuilder
+}
+
+func NewSourceTableLoadGenBuilder(conn *sqlx.DB, obj MaterializeObject) *SourceTableLoadGenBuilder {
+ return &SourceTableLoadGenBuilder{
+ SourceTableBuilder: NewSourceTableBuilder(conn, obj),
+ }
+}
+
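+// Create issues a CREATE TABLE ... FROM SOURCE statement; load generator tables take no source-specific options.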
+func (b *SourceTableLoadGenBuilder) Create() error {
+ return b.BaseCreate("load-generator", nil)
+}
diff --git a/pkg/materialize/source_table_load_generator_test.go b/pkg/materialize/source_table_load_generator_test.go
new file mode 100644
index 00000000..31136e90
--- /dev/null
+++ b/pkg/materialize/source_table_load_generator_test.go
@@ -0,0 +1,56 @@
+package materialize
+
+import (
+ "testing"
+
+ sqlmock "github.com/DATA-DOG/go-sqlmock"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/jmoiron/sqlx"
+)
+
+var sourceTableLoadGen = MaterializeObject{Name: "table", SchemaName: "schema", DatabaseName: "database"}
+
+func TestSourceTableLoadgenCreate(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table"
+ FROM SOURCE "materialize"."public"."source"
+ \(REFERENCE "upstream_schema"."upstream_table"\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTableLoadGenBuilder(db, sourceTableLoadGen)
+ b.Source(IdentifierSchemaStruct{Name: "source", SchemaName: "public", DatabaseName: "materialize"})
+ b.UpstreamName("upstream_table")
+ b.UpstreamSchemaName("upstream_schema")
+
+ if err := b.Create(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTableLoadGenRename(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `ALTER TABLE "database"."schema"."table" RENAME TO "database"."schema"."new_table";`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTableLoadGenBuilder(db, sourceTableLoadGen)
+ if err := b.Rename("new_table"); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTableLoadGenDrop(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `DROP TABLE "database"."schema"."table";`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTableLoadGenBuilder(db, sourceTableLoadGen)
+ if err := b.Drop(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
diff --git a/pkg/materialize/source_table_mysql.go b/pkg/materialize/source_table_mysql.go
new file mode 100644
index 00000000..c9aa375c
--- /dev/null
+++ b/pkg/materialize/source_table_mysql.go
@@ -0,0 +1,129 @@
+package materialize
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/jmoiron/sqlx"
+ "github.com/lib/pq"
+)
+
+// MySQL-specific params and query
+type SourceTableMySQLParams struct {
+ SourceTableParams
+ ExcludeColumns pq.StringArray `db:"exclude_columns"`
+ TextColumns pq.StringArray `db:"text_columns"`
+}
+
+var sourceTableMySQLQuery = `
+ SELECT
+ mz_tables.id,
+ mz_tables.name,
+ mz_schemas.name AS schema_name,
+ mz_databases.name AS database_name,
+ mz_sources.name AS source_name,
+ source_schemas.name AS source_schema_name,
+ source_databases.name AS source_database_name,
+ mz_mysql_source_tables.table_name AS upstream_table_name,
+ mz_mysql_source_tables.schema_name AS upstream_schema_name,
+ mz_sources.type AS source_type,
+ comments.comment AS comment,
+ mz_roles.name AS owner_name,
+ mz_tables.privileges
+ FROM mz_tables
+ JOIN mz_schemas
+ ON mz_tables.schema_id = mz_schemas.id
+ JOIN mz_databases
+ ON mz_schemas.database_id = mz_databases.id
+ JOIN mz_sources
+ ON mz_tables.source_id = mz_sources.id
+ JOIN mz_schemas AS source_schemas
+ ON mz_sources.schema_id = source_schemas.id
+ JOIN mz_databases AS source_databases
+ ON source_schemas.database_id = source_databases.id
+ LEFT JOIN mz_internal.mz_mysql_source_tables
+ ON mz_tables.id = mz_mysql_source_tables.id
+ JOIN mz_roles
+ ON mz_tables.owner_id = mz_roles.id
+ LEFT JOIN (
+ SELECT id, comment
+ FROM mz_internal.mz_comments
+ WHERE object_type = 'table'
+ AND object_sub_id IS NULL
+ ) comments
+ ON mz_tables.id = comments.id
+`
+
+func SourceTableMySQLId(conn *sqlx.DB, obj MaterializeObject) (string, error) {
+ p := map[string]string{
+ "mz_tables.name": obj.Name,
+ "mz_schemas.name": obj.SchemaName,
+ "mz_databases.name": obj.DatabaseName,
+ }
+ q := NewBaseQuery(sourceTableMySQLQuery).QueryPredicate(p)
+
+ var t SourceTableParams
+ if err := conn.Get(&t, q); err != nil {
+ return "", err
+ }
+
+ return t.TableId.String, nil
+}
+
+func ScanSourceTableMySQL(conn *sqlx.DB, id string) (SourceTableMySQLParams, error) {
+ q := NewBaseQuery(sourceTableMySQLQuery).QueryPredicate(map[string]string{"mz_tables.id": id})
+
+ var params SourceTableMySQLParams
+ if err := conn.Get(&params, q); err != nil {
+ return params, err
+ }
+
+ return params, nil
+}
+
+// SourceTableMySQLBuilder for MySQL sources
+type SourceTableMySQLBuilder struct {
+ *SourceTableBuilder
+ textColumns []string
+ excludeColumns []string
+}
+
+func NewSourceTableMySQLBuilder(conn *sqlx.DB, obj MaterializeObject) *SourceTableMySQLBuilder {
+ return &SourceTableMySQLBuilder{
+ SourceTableBuilder: NewSourceTableBuilder(conn, obj),
+ }
+}
+
+func (b *SourceTableMySQLBuilder) TextColumns(c []string) *SourceTableMySQLBuilder {
+ b.textColumns = c
+ return b
+}
+
+func (b *SourceTableMySQLBuilder) ExcludeColumns(c []string) *SourceTableMySQLBuilder {
+ b.excludeColumns = c
+ return b
+}
+
+func (b *SourceTableMySQLBuilder) Create() error {
+ return b.BaseCreate("mysql", func() string {
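+ // Builds the optional WITH clause, e.g. WITH (TEXT COLUMNS (col1, col2), EXCLUDE COLUMNS (col3))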
+ q := strings.Builder{}
+ var options []string
+ if len(b.textColumns) > 0 {
+ s := strings.Join(b.textColumns, ", ")
+ options = append(options, fmt.Sprintf(`TEXT COLUMNS (%s)`, s))
+ }
+
+ if len(b.excludeColumns) > 0 {
+ s := strings.Join(b.excludeColumns, ", ")
+ options = append(options, fmt.Sprintf(`EXCLUDE COLUMNS (%s)`, s))
+ }
+
+ if len(options) > 0 {
+ q.WriteString(" WITH (")
+ q.WriteString(strings.Join(options, ", "))
+ q.WriteString(")")
+ }
+
+ return q.String()
+ })
+}
diff --git a/pkg/materialize/source_table_mysql_test.go b/pkg/materialize/source_table_mysql_test.go
new file mode 100644
index 00000000..31b1082b
--- /dev/null
+++ b/pkg/materialize/source_table_mysql_test.go
@@ -0,0 +1,59 @@
+package materialize
+
+import (
+ "testing"
+
+ sqlmock "github.com/DATA-DOG/go-sqlmock"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/jmoiron/sqlx"
+)
+
+var sourceTableMySQL = MaterializeObject{Name: "table", SchemaName: "schema", DatabaseName: "database"}
+
+func TestSourceTableCreateWithMySQLSource(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table"
+ FROM SOURCE "materialize"."public"."source"
+ \(REFERENCE "upstream_schema"."upstream_table"\)
+ WITH \(TEXT COLUMNS \(column1, column2\), EXCLUDE COLUMNS \(exclude1, exclude2\)\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTableMySQLBuilder(db, sourceTableMySQL)
+ b.Source(IdentifierSchemaStruct{Name: "source", SchemaName: "public", DatabaseName: "materialize"})
+ b.UpstreamName("upstream_table")
+ b.UpstreamSchemaName("upstream_schema")
+ b.TextColumns([]string{"column1", "column2"})
+ b.ExcludeColumns([]string{"exclude1", "exclude2"})
+
+ if err := b.Create(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTableMySQLRename(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `ALTER TABLE "database"."schema"."table" RENAME TO "database"."schema"."new_table";`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTableMySQLBuilder(db, sourceTableMySQL)
+ if err := b.Rename("new_table"); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTableMySQLDrop(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `DROP TABLE "database"."schema"."table";`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTableMySQLBuilder(db, sourceTableMySQL)
+ if err := b.Drop(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
diff --git a/pkg/materialize/source_table_postgres.go b/pkg/materialize/source_table_postgres.go
new file mode 100644
index 00000000..ad658bbb
--- /dev/null
+++ b/pkg/materialize/source_table_postgres.go
@@ -0,0 +1,119 @@
+package materialize
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/jmoiron/sqlx"
+ "github.com/lib/pq"
+)
+
+// Postgres-specific params and query
+type SourceTablePostgresParams struct {
+ SourceTableParams
+ // Add upstream table and schema name once supported
+ IgnoreColumns pq.StringArray `db:"ignore_columns"`
+ TextColumns pq.StringArray `db:"text_columns"`
+}
+
+var sourceTablePostgresQuery = `
+ SELECT
+ mz_tables.id,
+ mz_tables.name,
+ mz_schemas.name AS schema_name,
+ mz_databases.name AS database_name,
+ mz_sources.name AS source_name,
+ source_schemas.name AS source_schema_name,
+ source_databases.name AS source_database_name,
+ mz_postgres_source_tables.table_name AS upstream_table_name,
+ mz_postgres_source_tables.schema_name AS upstream_schema_name,
+ mz_sources.type AS source_type,
+ comments.comment AS comment,
+ mz_roles.name AS owner_name,
+ mz_tables.privileges
+ FROM mz_tables
+ JOIN mz_schemas
+ ON mz_tables.schema_id = mz_schemas.id
+ JOIN mz_databases
+ ON mz_schemas.database_id = mz_databases.id
+ JOIN mz_sources
+ ON mz_tables.source_id = mz_sources.id
+ JOIN mz_schemas AS source_schemas
+ ON mz_sources.schema_id = source_schemas.id
+ JOIN mz_databases AS source_databases
+ ON source_schemas.database_id = source_databases.id
+ LEFT JOIN mz_internal.mz_postgres_source_tables
+ ON mz_tables.id = mz_postgres_source_tables.id
+ JOIN mz_roles
+ ON mz_tables.owner_id = mz_roles.id
+ LEFT JOIN (
+ SELECT id, comment
+ FROM mz_internal.mz_comments
+ WHERE object_type = 'table'
+ AND object_sub_id IS NULL
+ ) comments
+ ON mz_tables.id = comments.id
+`
+
+func SourceTablePostgresId(conn *sqlx.DB, obj MaterializeObject) (string, error) {
+ p := map[string]string{
+ "mz_tables.name": obj.Name,
+ "mz_schemas.name": obj.SchemaName,
+ "mz_databases.name": obj.DatabaseName,
+ }
+ q := NewBaseQuery(sourceTablePostgresQuery).QueryPredicate(p)
+
+ var t SourceTableParams
+ if err := conn.Get(&t, q); err != nil {
+ return "", err
+ }
+
+ return t.TableId.String, nil
+}
+
+func ScanSourceTablePostgres(conn *sqlx.DB, id string) (SourceTablePostgresParams, error) {
+ q := NewBaseQuery(sourceTablePostgresQuery).QueryPredicate(map[string]string{"mz_tables.id": id})
+
+ var params SourceTablePostgresParams
+ if err := conn.Get(&params, q); err != nil {
+ return params, err
+ }
+
+ return params, nil
+}
+
+// SourceTablePostgresBuilder for Postgres sources
+type SourceTablePostgresBuilder struct {
+ *SourceTableBuilder
+ textColumns []string
+}
+
+func NewSourceTablePostgresBuilder(conn *sqlx.DB, obj MaterializeObject) *SourceTablePostgresBuilder {
+ return &SourceTablePostgresBuilder{
+ SourceTableBuilder: NewSourceTableBuilder(conn, obj),
+ }
+}
+
+func (b *SourceTablePostgresBuilder) TextColumns(c []string) *SourceTablePostgresBuilder {
+ b.textColumns = c
+ return b
+}
+
+func (b *SourceTablePostgresBuilder) Create() error {
+ return b.BaseCreate("postgres", func() string {
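+ // Builds the optional WITH clause, e.g. WITH (TEXT COLUMNS (col1, col2))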
+ q := strings.Builder{}
+ var options []string
+ if len(b.textColumns) > 0 {
+ s := strings.Join(b.textColumns, ", ")
+ options = append(options, fmt.Sprintf(`TEXT COLUMNS (%s)`, s))
+ }
+
+ if len(options) > 0 {
+ q.WriteString(" WITH (")
+ q.WriteString(strings.Join(options, ", "))
+ q.WriteString(")")
+ }
+
+ return q.String()
+ })
+}
diff --git a/pkg/materialize/source_table_postgres_test.go b/pkg/materialize/source_table_postgres_test.go
new file mode 100644
index 00000000..8d120adc
--- /dev/null
+++ b/pkg/materialize/source_table_postgres_test.go
@@ -0,0 +1,58 @@
+package materialize
+
+import (
+ "testing"
+
+ sqlmock "github.com/DATA-DOG/go-sqlmock"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/jmoiron/sqlx"
+)
+
+var sourceTablePostgres = MaterializeObject{Name: "table", SchemaName: "schema", DatabaseName: "database"}
+
+func TestSourceTablePostgresCreate(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table"
+ FROM SOURCE "materialize"."public"."source"
+ \(REFERENCE "upstream_schema"."upstream_table"\)
+ WITH \(TEXT COLUMNS \(column1, column2\)\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTablePostgresBuilder(db, sourceTablePostgres)
+ b.Source(IdentifierSchemaStruct{Name: "source", SchemaName: "public", DatabaseName: "materialize"})
+ b.UpstreamName("upstream_table")
+ b.UpstreamSchemaName("upstream_schema")
+ b.TextColumns([]string{"column1", "column2"})
+
+ if err := b.Create(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTablePostgresRename(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `ALTER TABLE "database"."schema"."table" RENAME TO "database"."schema"."new_table";`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTablePostgresBuilder(db, sourceTablePostgres)
+ if err := b.Rename("new_table"); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTablePostgresDrop(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `DROP TABLE "database"."schema"."table";`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTablePostgresBuilder(db, sourceTablePostgres)
+ if err := b.Drop(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
diff --git a/pkg/materialize/source_table_test.go b/pkg/materialize/source_table_test.go
new file mode 100644
index 00000000..0d3d5548
--- /dev/null
+++ b/pkg/materialize/source_table_test.go
@@ -0,0 +1 @@
+package materialize
diff --git a/pkg/materialize/source_table_webhook.go b/pkg/materialize/source_table_webhook.go
new file mode 100644
index 00000000..6bf93408
--- /dev/null
+++ b/pkg/materialize/source_table_webhook.go
@@ -0,0 +1,231 @@
+package materialize
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/jmoiron/sqlx"
+)
+
+// SourceTableWebhookParams contains the parameters for a webhook source table
+type SourceTableWebhookParams struct {
+ SourceTableParams
+}
+
+// Query to get webhook source table information
+var sourceTableWebhookQuery = `
+ SELECT
+ mz_tables.id,
+ mz_tables.name,
+ mz_schemas.name AS schema_name,
+ mz_databases.name AS database_name,
+ mz_sources.type AS source_type,
+ comments.comment AS comment,
+ mz_roles.name AS owner_name,
+ mz_tables.privileges
+ FROM mz_tables
+ JOIN mz_schemas
+ ON mz_tables.schema_id = mz_schemas.id
+ JOIN mz_databases
+ ON mz_schemas.database_id = mz_databases.id
+ LEFT JOIN mz_sources
+ ON mz_tables.source_id = mz_sources.id
+ JOIN mz_roles
+ ON mz_tables.owner_id = mz_roles.id
+ LEFT JOIN (
+ SELECT id, comment
+ FROM mz_internal.mz_comments
+ WHERE object_type = 'table'
+ AND object_sub_id IS NULL
+ ) comments
+ ON mz_tables.id = comments.id
+`
+
+// SourceTableWebhookId retrieves the ID of a webhook source table
+func SourceTableWebhookId(conn *sqlx.DB, obj MaterializeObject) (string, error) {
+ p := map[string]string{
+ "mz_tables.name": obj.Name,
+ "mz_schemas.name": obj.SchemaName,
+ "mz_databases.name": obj.DatabaseName,
+ }
+ q := NewBaseQuery(sourceTableWebhookQuery).QueryPredicate(p)
+
+ var t SourceTableParams
+ if err := conn.Get(&t, q); err != nil {
+ return "", err
+ }
+
+ return t.TableId.String, nil
+}
+
+// ScanSourceTableWebhook scans a webhook source table by ID
+func ScanSourceTableWebhook(conn *sqlx.DB, id string) (SourceTableWebhookParams, error) {
+ q := NewBaseQuery(sourceTableWebhookQuery).QueryPredicate(map[string]string{"mz_tables.id": id})
+
+ var params SourceTableWebhookParams
+ if err := conn.Get(&params, q); err != nil {
+ return params, err
+ }
+
+ return params, nil
+}
+
+// SourceTableWebhookBuilder builds webhook source tables
+type SourceTableWebhookBuilder struct {
+ ddl Builder
+ tableName string
+ schemaName string
+ databaseName string
+ bodyFormat string
+ includeHeader []HeaderStruct
+ includeHeaders IncludeHeadersStruct
+ checkOptions []CheckOptionsStruct
+ checkExpression string
+}
+
+// NewSourceTableWebhookBuilder creates a new webhook source table builder
+func NewSourceTableWebhookBuilder(conn *sqlx.DB, obj MaterializeObject) *SourceTableWebhookBuilder {
+ return &SourceTableWebhookBuilder{
+ ddl: Builder{conn, Table},
+ tableName: obj.Name,
+ schemaName: obj.SchemaName,
+ databaseName: obj.DatabaseName,
+ }
+}
+
+// QualifiedName returns the fully qualified name of the table
+func (b *SourceTableWebhookBuilder) QualifiedName() string {
+ return QualifiedName(b.databaseName, b.schemaName, b.tableName)
+}
+
+// BodyFormat sets the body format
+func (b *SourceTableWebhookBuilder) BodyFormat(f string) *SourceTableWebhookBuilder {
+ b.bodyFormat = f
+ return b
+}
+
+// IncludeHeader adds header inclusions
+func (b *SourceTableWebhookBuilder) IncludeHeader(h []HeaderStruct) *SourceTableWebhookBuilder {
+ b.includeHeader = h
+ return b
+}
+
+// IncludeHeaders sets headers to include
+func (b *SourceTableWebhookBuilder) IncludeHeaders(h IncludeHeadersStruct) *SourceTableWebhookBuilder {
+ b.includeHeaders = h
+ return b
+}
+
+// CheckOptions sets the check options
+func (b *SourceTableWebhookBuilder) CheckOptions(o []CheckOptionsStruct) *SourceTableWebhookBuilder {
+ b.checkOptions = o
+ return b
+}
+
+// CheckExpression sets the check expression
+func (b *SourceTableWebhookBuilder) CheckExpression(e string) *SourceTableWebhookBuilder {
+ b.checkExpression = e
+ return b
+}
+
+// Drop removes the webhook source table
+func (b *SourceTableWebhookBuilder) Drop() error {
+ qn := b.QualifiedName()
+ return b.ddl.drop(qn)
+}
+
+func (b *SourceTableWebhookBuilder) Rename(newName string) error {
+ oldName := b.QualifiedName()
+ b.tableName = newName
+ newName = b.QualifiedName()
+ return b.ddl.rename(oldName, newName)
+}
+
+// Create creates the webhook source table
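+// The generated statement has the shape CREATE TABLE <name> FROM WEBHOOK BODY FORMAT <format> [INCLUDE HEADER ...] [INCLUDE HEADERS (...)] [CHECK (...)];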
+func (b *SourceTableWebhookBuilder) Create() error {
+ q := strings.Builder{}
+ q.WriteString(fmt.Sprintf(`CREATE TABLE %s FROM WEBHOOK`, b.QualifiedName()))
+
+ // Add webhook-specific options
+ var options []string
+
+ // Body Format
+ options = append(options, fmt.Sprintf(`BODY FORMAT %s`, b.bodyFormat))
+
+ // Include Header
+ if len(b.includeHeader) > 0 {
+ for _, h := range b.includeHeader {
+ headerOption := fmt.Sprintf(`INCLUDE HEADER %s`, QuoteString(h.Header))
+ if h.Alias != "" {
+ headerOption += fmt.Sprintf(` AS %s`, h.Alias)
+ }
+ if h.Bytes {
+ headerOption += ` BYTES`
+ }
+ options = append(options, headerOption)
+ }
+ }
+
+ // Include Headers
+ if b.includeHeaders.All || len(b.includeHeaders.Only) > 0 || len(b.includeHeaders.Not) > 0 {
+ headerOption := `INCLUDE HEADERS`
+ var headers []string
+
+ for _, h := range b.includeHeaders.Only {
+ headers = append(headers, QuoteString(h))
+ }
+ for _, h := range b.includeHeaders.Not {
+ headers = append(headers, fmt.Sprintf("NOT %s", QuoteString(h)))
+ }
+
+ if len(headers) > 0 {
+ headerOption += fmt.Sprintf(` (%s)`, strings.Join(headers, ", "))
+ }
+ options = append(options, headerOption)
+ }
+
+ // Check Options and Expression
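+ // Produces a clause of the shape CHECK ( WITH (<options>) <expression> )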
+ if len(b.checkOptions) > 0 || b.checkExpression != "" {
+ checkOption := "CHECK ("
+
+ if len(b.checkOptions) > 0 {
+ var checkOpts []string
+ for _, opt := range b.checkOptions {
+ var o string
+ if opt.Field.Body {
+ o = "BODY"
+ }
+ if opt.Field.Headers {
+ o = "HEADERS"
+ }
+ if opt.Field.Secret.Name != "" {
+ o = "SECRET " + opt.Field.Secret.QualifiedName()
+ }
+ if opt.Alias != "" {
+ o += fmt.Sprintf(" AS %s", opt.Alias)
+ }
+ if opt.Bytes {
+ o += " BYTES"
+ }
+ checkOpts = append(checkOpts, o)
+ }
+ checkOption += fmt.Sprintf(" WITH (%s)", strings.Join(checkOpts, ", "))
+ }
+
+ if b.checkExpression != "" {
+ if len(b.checkOptions) > 0 {
+ checkOption += " "
+ }
+ checkOption += b.checkExpression
+ }
+
+ checkOption += ")"
+ options = append(options, checkOption)
+ }
+
+ if len(options) > 0 {
+ q.WriteString(" ")
+ q.WriteString(strings.Join(options, " "))
+ }
+
+ q.WriteString(";")
+ return b.ddl.exec(q.String())
+}
diff --git a/pkg/materialize/source_table_webhook_test.go b/pkg/materialize/source_table_webhook_test.go
new file mode 100644
index 00000000..2d981b77
--- /dev/null
+++ b/pkg/materialize/source_table_webhook_test.go
@@ -0,0 +1,209 @@
+package materialize
+
+import (
+ "testing"
+
+ sqlmock "github.com/DATA-DOG/go-sqlmock"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/jmoiron/sqlx"
+)
+
+var sourceTableWebhook = MaterializeObject{Name: "webhook_table", SchemaName: "schema", DatabaseName: "database"}
+
+func TestSourceTableWebhookCreateExposeHeaders(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."webhook_table"
+ FROM WEBHOOK BODY FORMAT JSON INCLUDE HEADER 'timestamp' AS ts
+ INCLUDE HEADER 'x-event-type' AS event_type;`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ var includeHeader = []HeaderStruct{
+ {
+ Header: "timestamp",
+ Alias: "ts",
+ },
+ {
+ Header: "x-event-type",
+ Alias: "event_type",
+ },
+ }
+
+ b := NewSourceTableWebhookBuilder(db, sourceTableWebhook)
+ b.BodyFormat("JSON")
+ b.IncludeHeader(includeHeader)
+
+ if err := b.Create(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTableWebhookCreateIncludeHeaders(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."webhook_table"
+ FROM WEBHOOK BODY FORMAT JSON INCLUDE HEADERS \(NOT 'authorization', NOT 'x-api-key'\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTableWebhookBuilder(db, sourceTableWebhook)
+ b.BodyFormat("JSON")
+ b.IncludeHeaders(IncludeHeadersStruct{
+ Not: []string{"authorization", "x-api-key"},
+ })
+
+ if err := b.Create(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTableWebhookCreateValidated(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."webhook_table"
+ FROM WEBHOOK BODY FORMAT JSON CHECK
+ \( WITH \(HEADERS, BODY AS request_body, SECRET "database"."schema"."my_webhook_shared_secret"\)
+ decode\(headers->'x-signature', 'base64'\) = hmac\(request_body, my_webhook_shared_secret, 'sha256'\)\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ var checkOptions = []CheckOptionsStruct{
+ {
+ Field: FieldStruct{Headers: true},
+ },
+ {
+ Field: FieldStruct{Body: true},
+ Alias: "request_body",
+ },
+ {
+ Field: FieldStruct{
+ Secret: IdentifierSchemaStruct{
+ DatabaseName: "database",
+ SchemaName: "schema",
+ Name: "my_webhook_shared_secret",
+ },
+ },
+ },
+ }
+
+ b := NewSourceTableWebhookBuilder(db, sourceTableWebhook)
+ b.BodyFormat("JSON")
+ b.CheckOptions(checkOptions)
+ b.CheckExpression("decode(headers->'x-signature', 'base64') = hmac(request_body, my_webhook_shared_secret, 'sha256')")
+
+ if err := b.Create(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTableWebhookCreateSegment(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."webhook_table"
+ FROM WEBHOOK BODY FORMAT JSON INCLUDE HEADER 'event-type' AS event_type INCLUDE HEADERS CHECK
+ \( WITH \(BODY BYTES, HEADERS, SECRET "database"."schema"."my_webhook_shared_secret" AS secret BYTES\)
+ decode\(headers->'x-signature', 'hex'\) = hmac\(body, secret, 'sha1'\)\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ var includeHeader = []HeaderStruct{
+ {
+ Header: "event-type",
+ Alias: "event_type",
+ },
+ }
+ var checkOptions = []CheckOptionsStruct{
+ {
+ Field: FieldStruct{Body: true},
+ Bytes: true,
+ },
+ {
+ Field: FieldStruct{Headers: true},
+ },
+ {
+ Field: FieldStruct{
+ Secret: IdentifierSchemaStruct{
+ DatabaseName: "database",
+ SchemaName: "schema",
+ Name: "my_webhook_shared_secret",
+ },
+ },
+ Alias: "secret",
+ Bytes: true,
+ },
+ }
+
+ b := NewSourceTableWebhookBuilder(db, sourceTableWebhook)
+ b.BodyFormat("JSON")
+ b.IncludeHeader(includeHeader)
+ b.IncludeHeaders(IncludeHeadersStruct{All: true})
+ b.CheckOptions(checkOptions)
+ b.CheckExpression("decode(headers->'x-signature', 'hex') = hmac(body, secret, 'sha1')")
+
+ if err := b.Create(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTableWebhookCreateRudderstack(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."webhook_table" FROM WEBHOOK BODY FORMAT JSON CHECK \( WITH \(HEADERS, BODY AS request_body, SECRET "database"."schema"."my_webhook_shared_secret"\) headers->'authorization' = rudderstack_shared_secret\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ var checkOptions = []CheckOptionsStruct{
+ {
+ Field: FieldStruct{Headers: true},
+ },
+ {
+ Field: FieldStruct{Body: true},
+ Alias: "request_body",
+ },
+ {
+ Field: FieldStruct{
+ Secret: IdentifierSchemaStruct{
+ DatabaseName: "database",
+ SchemaName: "schema",
+ Name: "my_webhook_shared_secret",
+ },
+ },
+ },
+ }
+
+ b := NewSourceTableWebhookBuilder(db, sourceTableWebhook)
+ b.BodyFormat("JSON")
+ b.CheckOptions(checkOptions)
+ b.CheckExpression("headers->'authorization' = rudderstack_shared_secret")
+
+ if err := b.Create(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTableWebhookRename(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `ALTER TABLE "database"."schema"."webhook_table" RENAME TO "database"."schema"."new_webhook_table";`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTableWebhookBuilder(db, sourceTableWebhook)
+ if err := b.Rename("new_webhook_table"); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestSourceTableWebhookDrop(t *testing.T) {
+ testhelpers.WithMockDb(t, func(db *sqlx.DB, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(
+ `DROP TABLE "database"."schema"."webhook_table";`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ b := NewSourceTableWebhookBuilder(db, sourceTableWebhook)
+ if err := b.Drop(); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
diff --git a/pkg/provider/acceptance_cluster_test.go b/pkg/provider/acceptance_cluster_test.go
index 0ab70632..83182933 100644
--- a/pkg/provider/acceptance_cluster_test.go
+++ b/pkg/provider/acceptance_cluster_test.go
@@ -459,7 +459,6 @@ func testAccManagedClusterResourceAlterGraceful(clusterName, clusterSize string,
enabled = true
timeout = "10m"
on_timeout = "%[4]s"
-
}
}
`,
diff --git a/pkg/provider/acceptance_datasource_source_reference_test.go b/pkg/provider/acceptance_datasource_source_reference_test.go
new file mode 100644
index 00000000..c20a44f2
--- /dev/null
+++ b/pkg/provider/acceptance_datasource_source_reference_test.go
@@ -0,0 +1,172 @@
+package provider
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-testing/helper/acctest"
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+)
+
+func TestAccDataSourceSourceReference_basic(t *testing.T) {
+ addTestTopic()
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccDataSourceSourceReferenceConfig(nameSpace),
+ Check: resource.ComposeTestCheckFunc(
+ resource.TestCheckResourceAttrSet("data.materialize_source_reference.kafka", "source_id"),
+ resource.TestCheckResourceAttrSet("data.materialize_source_reference.postgres", "source_id"),
+ resource.TestCheckResourceAttrSet("data.materialize_source_reference.mysql", "source_id"),
+
+ // Check total references
+ resource.TestCheckResourceAttr("data.materialize_source_reference.kafka", "references.#", "1"),
+ resource.TestCheckResourceAttr("data.materialize_source_reference.postgres", "references.#", "3"),
+ resource.TestCheckResourceAttr("data.materialize_source_reference.mysql", "references.#", "4"),
+
+ // Check Postgres reference attributes
+ resource.TestCheckResourceAttr("data.materialize_source_reference.postgres", "references.0.namespace", "public"),
+ resource.TestCheckResourceAttrSet("data.materialize_source_reference.postgres", "references.0.name"),
+ resource.TestCheckResourceAttr("data.materialize_source_reference.postgres", "references.0.source_name", fmt.Sprintf("%s_source_postgres", nameSpace)),
+ resource.TestCheckResourceAttr("data.materialize_source_reference.postgres", "references.0.source_type", "postgres"),
+ resource.TestCheckResourceAttrSet("data.materialize_source_reference.postgres", "references.0.updated_at"),
+
+ // Check MySQL reference attributes
+ resource.TestCheckResourceAttr("data.materialize_source_reference.mysql", "references.0.namespace", "shop"),
+ resource.TestCheckResourceAttrSet("data.materialize_source_reference.mysql", "references.0.name"),
+ resource.TestCheckResourceAttr("data.materialize_source_reference.mysql", "references.0.source_name", fmt.Sprintf("%s_source_mysql", nameSpace)),
+ resource.TestCheckResourceAttr("data.materialize_source_reference.mysql", "references.1.source_type", "mysql"),
+ resource.TestCheckResourceAttrSet("data.materialize_source_reference.mysql", "references.1.updated_at"),
+
+ // Check Kafka reference attributes
+ resource.TestCheckResourceAttr("data.materialize_source_reference.kafka", "references.0.name", "terraform"),
+ resource.TestCheckResourceAttr("data.materialize_source_reference.kafka", "references.0.source_name", fmt.Sprintf("%s_source_kafka", nameSpace)),
+ resource.TestCheckResourceAttr("data.materialize_source_reference.kafka", "references.0.source_type", "kafka"),
+ resource.TestCheckResourceAttrSet("data.materialize_source_reference.kafka", "references.0.updated_at"),
+ ),
+ },
+ },
+ })
+}
+
+func testAccDataSourceSourceReferenceConfig(nameSpace string) string {
+ return fmt.Sprintf(`
+ // Postgres setup
+ resource "materialize_secret" "postgres_password" {
+ name = "%[1]s_secret_postgres"
+ value = "c2VjcmV0Cg=="
+ }
+
+ resource "materialize_connection_postgres" "postgres_connection" {
+ name = "%[1]s_connection_postgres"
+ host = "postgres"
+ port = 5432
+ user {
+ text = "postgres"
+ }
+ password {
+ name = materialize_secret.postgres_password.name
+ }
+ database = "postgres"
+ }
+
+ resource "materialize_source_postgres" "test_source_postgres" {
+ name = "%[1]s_source_postgres"
+ cluster_name = "quickstart"
+
+ postgres_connection {
+ name = materialize_connection_postgres.postgres_connection.name
+ }
+ publication = "mz_source"
+ }
+
+ resource "materialize_source_table_postgres" "table_from_source_pg" {
+ name = "%[1]s_table"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_postgres.test_source_postgres.name
+ }
+
+ upstream_name = "table2"
+ upstream_schema_name = "public"
+ }
+
+ // MySQL setup
+ resource "materialize_secret" "mysql_password" {
+ name = "%[1]s_secret_mysql"
+ value = "c2VjcmV0Cg=="
+ }
+
+ resource "materialize_connection_mysql" "mysql_connection" {
+ name = "%[1]s_connection_mysql"
+ host = "mysql"
+ port = 3306
+ user {
+ text = "repluser"
+ }
+ password {
+ name = materialize_secret.mysql_password.name
+ }
+ }
+
+ resource "materialize_source_mysql" "test_source_mysql" {
+ name = "%[1]s_source_mysql"
+ cluster_name = "quickstart"
+
+ mysql_connection {
+ name = materialize_connection_mysql.mysql_connection.name
+ }
+ }
+
+ // Kafka setup
+ resource "materialize_connection_kafka" "kafka_connection" {
+ name = "%[1]s_connection_kafka"
+ kafka_broker {
+ broker = "redpanda:9092"
+ }
+ security_protocol = "PLAINTEXT"
+ }
+
+ resource "materialize_source_kafka" "test_source_kafka" {
+ name = "%[1]s_source_kafka"
+ cluster_name = "quickstart"
+ topic = "terraform"
+
+ kafka_connection {
+ name = materialize_connection_kafka.kafka_connection.name
+ }
+ value_format {
+ json = true
+ }
+ key_format {
+ json = true
+ }
+ }
+
+ data "materialize_source_reference" "kafka" {
+ source_id = materialize_source_kafka.test_source_kafka.id
+ depends_on = [
+ materialize_source_kafka.test_source_kafka
+ ]
+ }
+
+ data "materialize_source_reference" "postgres" {
+ source_id = materialize_source_postgres.test_source_postgres.id
+ depends_on = [
+ materialize_source_postgres.test_source_postgres
+ ]
+ }
+
+ data "materialize_source_reference" "mysql" {
+ source_id = materialize_source_mysql.test_source_mysql.id
+ depends_on = [
+ materialize_source_mysql.test_source_mysql
+ ]
+ }
+ `, nameSpace)
+}
diff --git a/pkg/provider/acceptance_datasource_source_table_test.go b/pkg/provider/acceptance_datasource_source_table_test.go
new file mode 100644
index 00000000..ab7befe3
--- /dev/null
+++ b/pkg/provider/acceptance_datasource_source_table_test.go
@@ -0,0 +1,67 @@
+package provider
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-testing/helper/acctest"
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+)
+
+func TestAccDataSourceSourceTable_basic(t *testing.T) {
+ nameSpace := acctest.RandomWithPrefix("tf_test")
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccDataSourceSourceTable(nameSpace),
+ Check: resource.ComposeTestCheckFunc(
+ resource.TestCheckResourceAttr("data.materialize_source_table.test", "tables.0.name", fmt.Sprintf("%s_table", nameSpace)),
+ resource.TestCheckResourceAttr("data.materialize_source_table.test", "tables.0.schema_name", "public"),
+ resource.TestCheckResourceAttr("data.materialize_source_table.test", "tables.0.database_name", "materialize"),
+ resource.TestCheckResourceAttr("data.materialize_source_table.test", "tables.0.source.#", "1"),
+ resource.TestCheckResourceAttr("data.materialize_source_table.test", "tables.0.source.0.name", fmt.Sprintf("%s_source", nameSpace)),
+ resource.TestCheckResourceAttr("data.materialize_source_table.test", "tables.0.source.0.schema_name", "public"),
+ resource.TestCheckResourceAttr("data.materialize_source_table.test", "tables.0.source.0.database_name", "materialize"),
+ resource.TestCheckResourceAttr("data.materialize_source_table.test", "tables.0.source_type", "load-generator"),
+ resource.TestCheckResourceAttr("data.materialize_source_table.test", "tables.0.comment", "test comment"),
+ resource.TestCheckResourceAttrSet("data.materialize_source_table.test", "tables.0.owner_name"),
+ ),
+ },
+ },
+ })
+}
+
+func testAccDataSourceSourceTable(nameSpace string) string {
+ return fmt.Sprintf(`
+resource "materialize_source_load_generator" "test" {
+ name = "%[1]s_source"
+ schema_name = "public"
+ database_name = "materialize"
+ load_generator_type = "AUCTION"
+ auction_options {
+ tick_interval = "1s"
+ }
+}
+
+resource "materialize_source_table_load_generator" "test" {
+ name = "%[1]s_table"
+ schema_name = "public"
+ database_name = "materialize"
+ source {
+ name = materialize_source_load_generator.test.name
+ schema_name = materialize_source_load_generator.test.schema_name
+ database_name = materialize_source_load_generator.test.database_name
+ }
+ upstream_name = "bids"
+ comment = "test comment"
+}
+
+data "materialize_source_table" "test" {
+ schema_name = "public"
+ database_name = "materialize"
+ depends_on = [materialize_source_table_load_generator.test]
+}
+`, nameSpace)
+}
diff --git a/pkg/provider/acceptance_source_table_kafka_test.go b/pkg/provider/acceptance_source_table_kafka_test.go
new file mode 100644
index 00000000..37c6d8d9
--- /dev/null
+++ b/pkg/provider/acceptance_source_table_kafka_test.go
@@ -0,0 +1,229 @@
+package provider
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/hashicorp/terraform-plugin-testing/helper/acctest"
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+)
+
+func TestAccSourceTableKafka_basic(t *testing.T) {
+ addTestTopic()
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: nil,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTableKafkaBasicResource(nameSpace),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableExists("materialize_source_table_kafka.test_kafka"),
+ resource.TestMatchResourceAttr("materialize_source_table_kafka.test_kafka", "id", terraformObjectIdRegex),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "name", nameSpace+"_table_kafka"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "database_name", "materialize"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "topic", "terraform"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "include_key", "true"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "include_key_alias", "message_key"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "include_headers", "true"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "include_headers_alias", "message_headers"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "include_partition", "true"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "include_partition_alias", "message_partition"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "include_offset", "true"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "include_offset_alias", "message_offset"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "include_timestamp", "true"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "include_timestamp_alias", "message_timestamp"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "key_format.0.text", "true"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "value_format.0.json", "true"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "envelope.0.upsert", "true"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "envelope.0.upsert_options.0.value_decoding_errors.0.inline.0.enabled", "true"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "envelope.0.upsert_options.0.value_decoding_errors.0.inline.0.alias", "decoding_error"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "ownership_role", "mz_system"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "comment", "This is a test Kafka source table"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "source.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "source.0.name", nameSpace+"_source_kafka"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "source.0.schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test_kafka", "source.0.database_name", "materialize"),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccSourceTableKafka_update(t *testing.T) {
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: nil,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTableKafkaResource(nameSpace, "terraform", "mz_system", ""),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableExists("materialize_source_table_kafka.test"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test", "name", nameSpace+"_table"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test", "topic", "terraform"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test", "ownership_role", "mz_system"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test", "comment", ""),
+ ),
+ },
+ {
+ Config: testAccSourceTableKafkaResource(nameSpace, "terraform", nameSpace+"_role", "Updated comment"),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableExists("materialize_source_table_kafka.test"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test", "name", nameSpace+"_table"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test", "topic", "terraform"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test", "ownership_role", nameSpace+"_role"),
+ resource.TestCheckResourceAttr("materialize_source_table_kafka.test", "comment", "Updated comment"),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccSourceTableKafka_disappears(t *testing.T) {
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: testAccCheckAllSourceTableDestroyed,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTableKafkaResource(nameSpace, "kafka_table2", "mz_system", ""),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableExists("materialize_source_table_kafka.test"),
+ testAccCheckObjectDisappears(
+ materialize.MaterializeObject{
+ ObjectType: "TABLE",
+ Name: nameSpace + "_table",
+ },
+ ),
+ ),
+ PlanOnly: true,
+ ExpectNonEmptyPlan: true,
+ },
+ },
+ })
+}
+
+func testAccSourceTableKafkaBasicResource(nameSpace string) string {
+ return fmt.Sprintf(`
+ resource "materialize_connection_kafka" "kafka_connection" {
+ name = "%[1]s_connection_kafka"
+ kafka_broker {
+ broker = "redpanda:9092"
+ }
+ security_protocol = "PLAINTEXT"
+ }
+
+ resource "materialize_source_kafka" "test_source_kafka" {
+ name = "%[1]s_source_kafka"
+ cluster_name = "quickstart"
+ topic = "terraform"
+
+ kafka_connection {
+ name = materialize_connection_kafka.kafka_connection.name
+ }
+ }
+
+ resource "materialize_source_table_kafka" "test_kafka" {
+ name = "%[1]s_table_kafka"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_kafka.test_source_kafka.name
+ }
+
+ topic = "terraform"
+ include_key = true
+ include_key_alias = "message_key"
+ include_headers = true
+ include_headers_alias = "message_headers"
+ include_partition = true
+ include_partition_alias = "message_partition"
+ include_offset = true
+ include_offset_alias = "message_offset"
+ include_timestamp = true
+ include_timestamp_alias = "message_timestamp"
+
+ key_format {
+ text = true
+ }
+ value_format {
+ json = true
+ }
+
+ envelope {
+ upsert = true
+ upsert_options {
+ value_decoding_errors {
+ inline {
+ enabled = true
+ alias = "decoding_error"
+ }
+ }
+ }
+ }
+
+ ownership_role = "mz_system"
+ comment = "This is a test Kafka source table"
+ }
+ `, nameSpace)
+}
+
+func testAccSourceTableKafkaResource(nameSpace, upstreamName, ownershipRole, comment string) string {
+ return fmt.Sprintf(`
+ resource "materialize_connection_kafka" "kafka_connection" {
+ name = "%[1]s_connection_kafka"
+ kafka_broker {
+ broker = "redpanda:9092"
+ }
+ security_protocol = "PLAINTEXT"
+ }
+
+ resource "materialize_source_kafka" "test_source_kafka" {
+ name = "%[1]s_source_kafka"
+ cluster_name = "quickstart"
+ topic = "terraform"
+
+ kafka_connection {
+ name = materialize_connection_kafka.kafka_connection.name
+ }
+
+ key_format {
+ json = true
+ }
+ value_format {
+ json = true
+ }
+ }
+
+ resource "materialize_role" "test_role" {
+ name = "%[1]s_role"
+ }
+
+ resource "materialize_source_table_kafka" "test" {
+ name = "%[1]s_table"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_kafka.test_source_kafka.name
+ schema_name = "public"
+ database_name = "materialize"
+ }
+
+ topic = "%[2]s"
+
+ ownership_role = "%[3]s"
+ comment = "%[4]s"
+
+ depends_on = [materialize_role.test_role]
+ }
+ `, nameSpace, upstreamName, ownershipRole, comment)
+}
diff --git a/pkg/provider/acceptance_source_table_load_generator_test.go b/pkg/provider/acceptance_source_table_load_generator_test.go
new file mode 100644
index 00000000..2ceb95bc
--- /dev/null
+++ b/pkg/provider/acceptance_source_table_load_generator_test.go
@@ -0,0 +1,208 @@
+package provider
+
+import (
+ "database/sql"
+ "fmt"
+ "testing"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+ "github.com/hashicorp/terraform-plugin-testing/helper/acctest"
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+ "github.com/hashicorp/terraform-plugin-testing/terraform"
+)
+
+func TestAccSourceTableLoadGen_basic(t *testing.T) {
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: nil,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTableLoadGenBasicResource(nameSpace),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableLoadGenExists("materialize_source_table_load_generator.test_loadgen"),
+ resource.TestMatchResourceAttr("materialize_source_table_load_generator.test_loadgen", "id", terraformObjectIdRegex),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "name", nameSpace+"_table_loadgen2"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "database_name", "materialize"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "upstream_name", "bids"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "source.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "source.0.name", nameSpace+"_loadgen2"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "source.0.schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "source.0.database_name", "materialize"),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccSourceTableLoadGen_update(t *testing.T) {
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: testAccCheckAllSourceTableLoadGenDestroyed,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTableLoadGenResource(nameSpace, "bids", "", ""),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableLoadGenExists("materialize_source_table_load_generator.test_loadgen"),
+ resource.TestMatchResourceAttr("materialize_source_table_load_generator.test_loadgen", "id", terraformObjectIdRegex),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "name", nameSpace+"_table_loadgen"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "database_name", "materialize"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "upstream_name", "bids"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "source.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "source.0.name", nameSpace+"_loadgen"),
+ ),
+ },
+ {
+ Config: testAccSourceTableLoadGenResource(nameSpace, "bids", nameSpace+"_role", "Updated comment"),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableLoadGenExists("materialize_source_table_load_generator.test_loadgen"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "name", nameSpace+"_table_loadgen"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "database_name", "materialize"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "upstream_name", "bids"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "source.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "source.0.name", nameSpace+"_loadgen"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "ownership_role", nameSpace+"_role"),
+ resource.TestCheckResourceAttr("materialize_source_table_load_generator.test_loadgen", "comment", "Updated comment"),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccSourceTableLoadGen_disappears(t *testing.T) {
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: testAccCheckAllSourceTableLoadGenDestroyed,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTableLoadGenResource(nameSpace, "bids", "mz_system", ""),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableLoadGenExists("materialize_source_table_load_generator.test_loadgen"),
+ testAccCheckObjectDisappears(
+ materialize.MaterializeObject{
+ ObjectType: "TABLE",
+ Name: nameSpace + "_table_loadgen",
+ },
+ ),
+ ),
+ PlanOnly: true,
+ ExpectNonEmptyPlan: true,
+ },
+ },
+ })
+}
+
+func testAccSourceTableLoadGenBasicResource(nameSpace string) string {
+ return fmt.Sprintf(`
+ resource "materialize_source_load_generator" "test_loadgen" {
+ name = "%[1]s_loadgen2"
+ load_generator_type = "AUCTION"
+
+ schema_name = "public"
+ database_name = "materialize"
+
+ auction_options {
+ tick_interval = "500ms"
+ }
+ }
+
+ resource "materialize_source_table_load_generator" "test_loadgen" {
+ name = "%[1]s_table_loadgen2"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_load_generator.test_loadgen.name
+ schema_name = "public"
+ database_name = "materialize"
+ }
+
+ upstream_name = "bids"
+ }
+ `, nameSpace)
+}
+
+func testAccSourceTableLoadGenResource(nameSpace, upstreamName, ownershipRole, comment string) string {
+ return fmt.Sprintf(`
+ resource "materialize_source_load_generator" "test_loadgen" {
+ name = "%[1]s_loadgen"
+ load_generator_type = "AUCTION"
+
+ schema_name = "public"
+ database_name = "materialize"
+
+ auction_options {
+ tick_interval = "500ms"
+ }
+ }
+
+ resource "materialize_role" "test_role" {
+ name = "%[1]s_role"
+ }
+
+ resource "materialize_source_table_load_generator" "test_loadgen" {
+ name = "%[1]s_table_loadgen"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_load_generator.test_loadgen.name
+ schema_name = "public"
+ database_name = "materialize"
+ }
+
+ upstream_name = "%[2]s"
+ ownership_role = "%[3]s"
+ comment = "%[4]s"
+
+ depends_on = [materialize_role.test_role]
+ }
+ `, nameSpace, upstreamName, ownershipRole, comment)
+}
+
+func testAccCheckSourceTableLoadGenExists(name string) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ meta := testAccProvider.Meta()
+ db, _, err := utils.GetDBClientFromMeta(meta, nil)
+ if err != nil {
+ return fmt.Errorf("error getting DB client: %s", err)
+ }
+ r, ok := s.RootModule().Resources[name]
+ if !ok {
+ return fmt.Errorf("source table not found: %s", name)
+ }
+ _, err = materialize.ScanSourceTable(db, utils.ExtractId(r.Primary.ID))
+ return err
+ }
+}
+
+func testAccCheckAllSourceTableLoadGenDestroyed(s *terraform.State) error {
+ meta := testAccProvider.Meta()
+ db, _, err := utils.GetDBClientFromMeta(meta, nil)
+ if err != nil {
+ return fmt.Errorf("error getting DB client: %s", err)
+ }
+
+ for _, r := range s.RootModule().Resources {
+ if r.Type != "materialize_source_table_load_generator" {
+ continue
+ }
+
+ _, err := materialize.ScanSourceTable(db, utils.ExtractId(r.Primary.ID))
+ if err == nil {
+ return fmt.Errorf("source table %v still exists", utils.ExtractId(r.Primary.ID))
+ } else if err != sql.ErrNoRows {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/pkg/provider/acceptance_source_table_mysql_test.go b/pkg/provider/acceptance_source_table_mysql_test.go
new file mode 100644
index 00000000..a78709b9
--- /dev/null
+++ b/pkg/provider/acceptance_source_table_mysql_test.go
@@ -0,0 +1,258 @@
+package provider
+
+import (
+ "database/sql"
+ "fmt"
+ "testing"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+ "github.com/hashicorp/terraform-plugin-testing/helper/acctest"
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+ "github.com/hashicorp/terraform-plugin-testing/terraform"
+)
+
+func TestAccSourceTableMySQL_basic(t *testing.T) {
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: nil,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTableMySQLBasicResource(nameSpace),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableExists("materialize_source_table_mysql.test_mysql"),
+ resource.TestMatchResourceAttr("materialize_source_table_mysql.test_mysql", "id", terraformObjectIdRegex),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test_mysql", "name", nameSpace+"_table_mysql"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test_mysql", "database_name", "materialize"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test_mysql", "schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test_mysql", "upstream_name", "mysql_table1"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test_mysql", "upstream_schema_name", "shop"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test_mysql", "exclude_columns.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test_mysql", "exclude_columns.0", "banned"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test_mysql", "source.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test_mysql", "source.0.name", nameSpace+"_source_mysql"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test_mysql", "source.0.schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test_mysql", "source.0.database_name", "materialize"),
+ ),
+ },
+ {
+ ResourceName: "materialize_source_table_mysql.test_mysql",
+ ImportState: true,
+ ImportStateVerify: false,
+ },
+ },
+ })
+}
+
+func TestAccSourceTableMySQL_update(t *testing.T) {
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: nil,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTableMySQLResource(nameSpace, "mysql_table2", "mz_system", ""),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableExists("materialize_source_table_mysql.test"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "name", nameSpace+"_table"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "upstream_name", "mysql_table2"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "ownership_role", "mz_system"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "comment", ""),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "source.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "source.0.name", nameSpace+"_source_mysql"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "source.0.schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "source.0.database_name", "materialize"),
+ ),
+ },
+ {
+ Config: testAccSourceTableMySQLResource(nameSpace, "mysql_table1", nameSpace+"_role", "Updated comment"),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableExists("materialize_source_table_mysql.test"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "name", nameSpace+"_table"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "upstream_name", "mysql_table1"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "ownership_role", nameSpace+"_role"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "comment", "Updated comment"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "source.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "source.0.name", nameSpace+"_source_mysql"),
+ resource.TestCheckResourceAttr("materialize_source_table_mysql.test", "source.0.schema_name", "public"),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccSourceTableMySQL_disappears(t *testing.T) {
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: testAccCheckAllSourceTableDestroyed,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTableMySQLResource(nameSpace, "mysql_table2", "mz_system", ""),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTableExists("materialize_source_table_mysql.test"),
+ testAccCheckObjectDisappears(
+ materialize.MaterializeObject{
+ ObjectType: "TABLE",
+ Name: nameSpace + "_table",
+ },
+ ),
+ ),
+ PlanOnly: true,
+ ExpectNonEmptyPlan: true,
+ },
+ },
+ })
+}
+
+func testAccSourceTableMySQLBasicResource(nameSpace string) string {
+ return fmt.Sprintf(`
+ resource "materialize_secret" "mysql_password" {
+ name = "%[1]s_secret_mysql"
+ value = "c2VjcmV0Cg=="
+ }
+
+ resource "materialize_connection_mysql" "mysql_connection" {
+ name = "%[1]s_connection_mysql"
+ host = "mysql"
+ port = 3306
+ user {
+ text = "repluser"
+ }
+ password {
+ name = materialize_secret.mysql_password.name
+ }
+ }
+
+ resource "materialize_source_mysql" "test_source_mysql" {
+ name = "%[1]s_source_mysql"
+ cluster_name = "quickstart"
+
+ mysql_connection {
+ name = materialize_connection_mysql.mysql_connection.name
+ }
+
+ table {
+ upstream_name = "mysql_table1"
+ upstream_schema_name = "shop"
+ name = "mysql_table1_local"
+ }
+ }
+
+ resource "materialize_source_table_mysql" "test_mysql" {
+ name = "%[1]s_table_mysql"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_mysql.test_source_mysql.name
+ }
+
+ upstream_name = "mysql_table1"
+ upstream_schema_name = "shop"
+ exclude_columns = ["banned"]
+ }
+ `, nameSpace)
+}
+
+func testAccSourceTableMySQLResource(nameSpace, upstreamName, ownershipRole, comment string) string {
+ return fmt.Sprintf(`
+ resource "materialize_secret" "mysql_password" {
+ name = "%[1]s_secret_mysql"
+ value = "c2VjcmV0Cg=="
+ }
+
+ resource "materialize_connection_mysql" "mysql_connection" {
+ name = "%[1]s_connection_mysql"
+ host = "mysql"
+ port = 3306
+ user {
+ text = "repluser"
+ }
+ password {
+ name = materialize_secret.mysql_password.name
+ }
+ }
+
+ resource "materialize_source_mysql" "test_source_mysql" {
+ name = "%[1]s_source_mysql"
+ cluster_name = "quickstart"
+
+ mysql_connection {
+ name = materialize_connection_mysql.mysql_connection.name
+ }
+
+ table {
+ upstream_name = "mysql_table1"
+ upstream_schema_name = "shop"
+ name = "mysql_table1_local"
+ }
+ }
+
+ resource "materialize_role" "test_role" {
+ name = "%[1]s_role"
+ }
+
+ resource "materialize_source_table_mysql" "test" {
+ name = "%[1]s_table"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_mysql.test_source_mysql.name
+ schema_name = "public"
+ database_name = "materialize"
+ }
+
+ upstream_name = "%[2]s"
+ upstream_schema_name = "shop"
+
+ ownership_role = "%[3]s"
+ comment = "%[4]s"
+
+ depends_on = [materialize_role.test_role]
+ }
+ `, nameSpace, upstreamName, ownershipRole, comment)
+}
+
+func testAccCheckSourceTableExists(name string) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ meta := testAccProvider.Meta()
+ db, _, err := utils.GetDBClientFromMeta(meta, nil)
+ if err != nil {
+ return fmt.Errorf("error getting DB client: %s", err)
+ }
+ r, ok := s.RootModule().Resources[name]
+ if !ok {
+ return fmt.Errorf("source table not found: %s", name)
+ }
+ _, err = materialize.ScanSourceTable(db, utils.ExtractId(r.Primary.ID))
+ return err
+ }
+}
+
+func testAccCheckAllSourceTableDestroyed(s *terraform.State) error {
+ meta := testAccProvider.Meta()
+ db, _, err := utils.GetDBClientFromMeta(meta, nil)
+ if err != nil {
+ return fmt.Errorf("error getting DB client: %s", err)
+ }
+
+ for _, r := range s.RootModule().Resources {
+ if r.Type != "materialize_source_table_mysql" {
+ continue
+ }
+
+ _, err := materialize.ScanSourceTable(db, utils.ExtractId(r.Primary.ID))
+ if err == nil {
+ return fmt.Errorf("source table %v still exists", utils.ExtractId(r.Primary.ID))
+ } else if err != sql.ErrNoRows {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/pkg/provider/acceptance_source_table_postgres_test.go b/pkg/provider/acceptance_source_table_postgres_test.go
new file mode 100644
index 00000000..9807ef15
--- /dev/null
+++ b/pkg/provider/acceptance_source_table_postgres_test.go
@@ -0,0 +1,272 @@
+package provider
+
+import (
+ "database/sql"
+ "fmt"
+ "testing"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+ "github.com/hashicorp/terraform-plugin-testing/helper/acctest"
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+ "github.com/hashicorp/terraform-plugin-testing/terraform"
+)
+
+func TestAccSourceTablePostgres_basic(t *testing.T) {
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: nil,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTablePostgresBasicResource(nameSpace),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTablePostgresExists("materialize_source_table_postgres.test_postgres"),
+ resource.TestMatchResourceAttr("materialize_source_table_postgres.test_postgres", "id", terraformObjectIdRegex),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test_postgres", "name", nameSpace+"_table_postgres"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test_postgres", "database_name", "materialize"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test_postgres", "schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test_postgres", "text_columns.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test_postgres", "text_columns.0", "updated_at"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test_postgres", "upstream_name", "table2"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test_postgres", "upstream_schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test_postgres", "source.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test_postgres", "source.0.name", nameSpace+"_source_postgres"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test_postgres", "source.0.schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test_postgres", "source.0.database_name", "materialize"),
+ ),
+ },
+ {
+ ResourceName: "materialize_source_table_postgres.test_postgres",
+ ImportState: true,
+ ImportStateVerify: false,
+ },
+ },
+ })
+}
+
+func TestAccSourceTablePostgres_update(t *testing.T) {
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: nil,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTablePostgresResource(nameSpace, "table2", "mz_system", ""),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTablePostgresExists("materialize_source_table_postgres.test"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "name", nameSpace+"_table"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "upstream_name", "table2"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "text_columns.#", "2"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "ownership_role", "mz_system"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "comment", ""),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "source.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "source.0.name", nameSpace+"_source"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "source.0.schema_name", "public"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "source.0.database_name", "materialize"),
+ ),
+ },
+ {
+ Config: testAccSourceTablePostgresResource(nameSpace, "table3", nameSpace+"_role", "Updated comment"),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTablePostgresExists("materialize_source_table_postgres.test"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "name", nameSpace+"_table"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "upstream_name", "table3"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "text_columns.#", "2"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "ownership_role", nameSpace+"_role"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "comment", "Updated comment"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "source.#", "1"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "source.0.name", nameSpace+"_source"),
+ resource.TestCheckResourceAttr("materialize_source_table_postgres.test", "source.0.schema_name", "public"),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccSourceTablePostgres_disappears(t *testing.T) {
+ nameSpace := acctest.RandStringFromCharSet(10, acctest.CharSetAlpha)
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ ProviderFactories: testAccProviderFactories,
+ CheckDestroy: testAccCheckAllSourceTablePostgresDestroyed,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccSourceTablePostgresResource(nameSpace, "table2", "mz_system", ""),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSourceTablePostgresExists("materialize_source_table_postgres.test"),
+ testAccCheckObjectDisappears(
+ materialize.MaterializeObject{
+ ObjectType: "TABLE",
+ Name: nameSpace + "_table",
+ },
+ ),
+ ),
+ PlanOnly: true,
+ ExpectNonEmptyPlan: true,
+ },
+ },
+ })
+}
+
+func testAccSourceTablePostgresBasicResource(nameSpace string) string {
+ return fmt.Sprintf(`
+ resource "materialize_secret" "postgres_password" {
+ name = "%[1]s_secret_postgres"
+ value = "c2VjcmV0Cg=="
+ }
+
+ resource "materialize_connection_postgres" "postgres_connection" {
+ name = "%[1]s_connection_postgres"
+ host = "postgres"
+ port = 5432
+ user {
+ text = "postgres"
+ }
+ password {
+ name = materialize_secret.postgres_password.name
+ }
+ database = "postgres"
+ }
+
+ resource "materialize_source_postgres" "test_source_postgres" {
+ name = "%[1]s_source_postgres"
+ cluster_name = "quickstart"
+
+ postgres_connection {
+ name = materialize_connection_postgres.postgres_connection.name
+ }
+ publication = "mz_source"
+ table {
+ upstream_name = "table2"
+ upstream_schema_name = "public"
+ }
+ }
+
+ resource "materialize_source_table_postgres" "test_postgres" {
+ name = "%[1]s_table_postgres"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_postgres.test_source_postgres.name
+ }
+
+ upstream_name = "table2"
+ upstream_schema_name = "public"
+
+ text_columns = [
+ "updated_at"
+ ]
+ }
+ `, nameSpace)
+}
+
+func testAccSourceTablePostgresResource(nameSpace, upstreamName, ownershipRole, comment string) string {
+ return fmt.Sprintf(`
+ resource "materialize_secret" "postgres_password" {
+ name = "%[1]s_secret"
+ value = "c2VjcmV0Cg=="
+ }
+
+ resource "materialize_connection_postgres" "postgres_connection" {
+ name = "%[1]s_connection"
+ host = "postgres"
+ port = 5432
+ user {
+ text = "postgres"
+ }
+ password {
+ name = materialize_secret.postgres_password.name
+ database_name = materialize_secret.postgres_password.database_name
+ schema_name = materialize_secret.postgres_password.schema_name
+ }
+ database = "postgres"
+ }
+
+ resource "materialize_source_postgres" "test_source" {
+ name = "%[1]s_source"
+ cluster_name = "quickstart"
+
+ postgres_connection {
+ name = materialize_connection_postgres.postgres_connection.name
+ schema_name = materialize_connection_postgres.postgres_connection.schema_name
+ database_name = materialize_connection_postgres.postgres_connection.database_name
+ }
+ publication = "mz_source"
+ table {
+ upstream_name = "%[2]s"
+ upstream_schema_name = "public"
+ }
+ }
+
+ resource "materialize_role" "test_role" {
+ name = "%[1]s_role"
+ }
+
+ resource "materialize_source_table_postgres" "test" {
+ name = "%[1]s_table"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_postgres.test_source.name
+ schema_name = "public"
+ database_name = "materialize"
+ }
+
+ upstream_name = "%[2]s"
+ upstream_schema_name = "public"
+
+ text_columns = [
+ "updated_at",
+ "id"
+ ]
+
+ ownership_role = "%[3]s"
+ comment = "%[4]s"
+
+ depends_on = [materialize_role.test_role]
+ }
+ `, nameSpace, upstreamName, ownershipRole, comment)
+}
+
+func testAccCheckSourceTablePostgresExists(name string) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ meta := testAccProvider.Meta()
+ db, _, err := utils.GetDBClientFromMeta(meta, nil)
+ if err != nil {
+ return fmt.Errorf("error getting DB client: %s", err)
+ }
+ r, ok := s.RootModule().Resources[name]
+ if !ok {
+ return fmt.Errorf("source table not found: %s", name)
+ }
+ _, err = materialize.ScanSourceTable(db, utils.ExtractId(r.Primary.ID))
+ return err
+ }
+}
+
+func testAccCheckAllSourceTablePostgresDestroyed(s *terraform.State) error {
+ meta := testAccProvider.Meta()
+ db, _, err := utils.GetDBClientFromMeta(meta, nil)
+ if err != nil {
+ return fmt.Errorf("error getting DB client: %s", err)
+ }
+
+ for _, r := range s.RootModule().Resources {
+ if r.Type != "materialize_source_table_postgres" {
+ continue
+ }
+
+ _, err := materialize.ScanSourceTable(db, utils.ExtractId(r.Primary.ID))
+ if err == nil {
+ return fmt.Errorf("source table %v still exists", utils.ExtractId(r.Primary.ID))
+ } else if err != sql.ErrNoRows {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/pkg/provider/provider.go b/pkg/provider/provider.go
index ce9eb5ac..572c0cbb 100644
--- a/pkg/provider/provider.go
+++ b/pkg/provider/provider.go
@@ -133,6 +133,11 @@ func Provider(version string) *schema.Provider {
"materialize_source_grant": resources.GrantSource(),
"materialize_system_parameter": resources.SystemParameter(),
"materialize_table": resources.Table(),
+ "materialize_source_table_kafka": resources.SourceTableKafka(),
+ "materialize_source_table_load_generator": resources.SourceTableLoadGen(),
+ "materialize_source_table_mysql": resources.SourceTableMySQL(),
+ "materialize_source_table_postgres": resources.SourceTablePostgres(),
+ "materialize_source_table_webhook": resources.SourceTableWebhook(),
"materialize_table_grant": resources.GrantTable(),
"materialize_table_grant_default_privilege": resources.GrantTableDefaultPrivilege(),
"materialize_type": resources.Type(),
@@ -158,6 +163,8 @@ func Provider(version string) *schema.Provider {
"materialize_secret": datasources.Secret(),
"materialize_sink": datasources.Sink(),
"materialize_source": datasources.Source(),
+ "materialize_source_reference": datasources.SourceReference(),
+ "materialize_source_table": datasources.SourceTable(),
"materialize_scim_groups": datasources.SCIMGroups(),
"materialize_scim_configs": datasources.SCIMConfigs(),
"materialize_sso_config": datasources.SSOConfig(),
diff --git a/pkg/resources/resource_source_kafka.go b/pkg/resources/resource_source_kafka.go
index 57ff25b7..6cc7ce1f 100644
--- a/pkg/resources/resource_source_kafka.go
+++ b/pkg/resources/resource_source_kafka.go
@@ -32,62 +32,72 @@ var sourceKafkaSchema = map[string]*schema.Schema{
ForceNew: true,
},
"include_key": {
- Description: "Include a column containing the Kafka message key.",
+ Description: "Include a column containing the Kafka message key. Deprecated: Use the new `materialize_source_table_kafka` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_kafka` resource instead.",
Type: schema.TypeBool,
Optional: true,
ForceNew: true,
},
"include_key_alias": {
- Description: "Provide an alias for the key column.",
+ Description: "Provide an alias for the key column. Deprecated: Use the new `materialize_source_table_kafka` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_kafka` resource instead.",
Type: schema.TypeString,
Optional: true,
ForceNew: true,
},
"include_headers": {
- Description: "Include message headers.",
+ Description: "Include message headers. Deprecated: Use the new `materialize_source_table_kafka` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_kafka` resource instead.",
Type: schema.TypeBool,
Optional: true,
ForceNew: true,
Default: false,
},
"include_headers_alias": {
- Description: "Provide an alias for the headers column.",
+ Description: "Provide an alias for the headers column. Deprecated: Use the new `materialize_source_table_kafka` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_kafka` resource instead.",
Type: schema.TypeString,
Optional: true,
ForceNew: true,
},
"include_partition": {
- Description: "Include a partition column containing the Kafka message partition",
+ Description: "Include a partition column containing the Kafka message partition. Deprecated: Use the new `materialize_source_table_kafka` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_kafka` resource instead.",
Type: schema.TypeBool,
Optional: true,
ForceNew: true,
},
"include_partition_alias": {
- Description: "Provide an alias for the partition column.",
+ Description: "Provide an alias for the partition column. Deprecated: Use the new `materialize_source_table_kafka` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_kafka` resource instead.",
Type: schema.TypeString,
Optional: true,
ForceNew: true,
},
"include_offset": {
- Description: "Include an offset column containing the Kafka message offset.",
+ Description: "Include an offset column containing the Kafka message offset. Deprecated: Use the new `materialize_source_table_kafka` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_kafka` resource instead.",
Type: schema.TypeBool,
Optional: true,
ForceNew: true,
},
"include_offset_alias": {
- Description: "Provide an alias for the offset column.",
+ Description: "Provide an alias for the offset column. Deprecated: Use the new `materialize_source_table_kafka` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_kafka` resource instead.",
Type: schema.TypeString,
Optional: true,
ForceNew: true,
},
"include_timestamp": {
- Description: "Include a timestamp column containing the Kafka message timestamp.",
+ Description: "Include a timestamp column containing the Kafka message timestamp. Deprecated: Use the new `materialize_source_table_kafka` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_kafka` resource instead.",
Type: schema.TypeBool,
Optional: true,
ForceNew: true,
},
"include_timestamp_alias": {
- Description: "Provide an alias for the timestamp column.",
+ Description: "Provide an alias for the timestamp column. Deprecated: Use the new `materialize_source_table_kafka` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_kafka` resource instead.",
Type: schema.TypeString,
Optional: true,
ForceNew: true,
@@ -96,7 +106,8 @@ var sourceKafkaSchema = map[string]*schema.Schema{
"key_format": FormatSpecSchema("key_format", "Set the key format explicitly.", false),
"value_format": FormatSpecSchema("value_format", "Set the value format explicitly.", false),
"envelope": {
- Description: "How Materialize should interpret records (e.g. append-only, upsert)..",
+ Description: "How Materialize should interpret records (e.g. append-only, upsert). Deprecated: Use the new `materialize_source_table_kafka` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_kafka` resource instead.",
Type: schema.TypeList,
MaxItems: 1,
Elem: &schema.Resource{
diff --git a/pkg/resources/resource_source_load_generator.go b/pkg/resources/resource_source_load_generator.go
index 30ce5d81..7b396f36 100644
--- a/pkg/resources/resource_source_load_generator.go
+++ b/pkg/resources/resource_source_load_generator.go
@@ -174,6 +174,14 @@ var sourceLoadgenSchema = map[string]*schema.Schema{
ForceNew: true,
ConflictsWith: []string{"counter_options", "auction_options", "marketing_options", "tpch_options"},
},
+ "all_tables": {
+ Description: "Whether to include all tables in the source. Compatible with `auction_options`, `marketing_options`, and `tpch_options`. If not specified, use the `materialize_source_table_load_generator` resource to specify tables to include.",
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ ConflictsWith: []string{"counter_options", "key_value_options"},
+ ForceNew: true,
+ },
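+ // Illustrative only: a minimal HCL sketch (assumed names) of a load generator
+ // source that ingests every table up front instead of attaching them later via
+ // materialize_source_table_load_generator:
+ //
+ //   resource "materialize_source_load_generator" "example" {
+ //     name                = "example_loadgen"
+ //     load_generator_type = "AUCTION"
+ //     all_tables          = true
+ //   }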
"expose_progress": IdentifierSchema(IdentifierSchemaParams{
Elem: "expose_progress",
Description: "The name of the progress collection for the source. If this is not specified, the collection will be named `_progress`.",
@@ -251,6 +259,11 @@ func sourceLoadgenCreate(ctx context.Context, d *schema.ResourceData, meta any)
b.KeyValueOptions(o)
}
+ // all_tables
+ if v, ok := d.GetOk("all_tables"); ok && v.(bool) {
+ b.AllTables()
+ }
+
// create resource
if err := b.Create(); err != nil {
return diag.FromErr(err)
diff --git a/pkg/resources/resource_source_load_generator_test.go b/pkg/resources/resource_source_load_generator_test.go
index 8ba0ac9d..414addd0 100644
--- a/pkg/resources/resource_source_load_generator_test.go
+++ b/pkg/resources/resource_source_load_generator_test.go
@@ -19,6 +19,7 @@ var inSourceLoadgen = map[string]interface{}{
"cluster_name": "cluster",
"expose_progress": []interface{}{map[string]interface{}{"name": "progress"}},
"load_generator_type": "TPCH",
+ "all_tables": true,
"tpch_options": []interface{}{map[string]interface{}{
"tick_interval": "1s",
"scale_factor": 0.5,
diff --git a/pkg/resources/resource_source_mysql.go b/pkg/resources/resource_source_mysql.go
index 6421d76a..6950818f 100644
--- a/pkg/resources/resource_source_mysql.go
+++ b/pkg/resources/resource_source_mysql.go
@@ -27,19 +27,22 @@ var sourceMySQLSchema = map[string]*schema.Schema{
ForceNew: true,
}),
"ignore_columns": {
- Description: "Ignore specific columns when reading data from MySQL. Can only be updated in place when also updating a corresponding `table` attribute.",
+ Description: "Ignore specific columns when reading data from MySQL. Can only be updated in place when also updating a corresponding `table` attribute. Deprecated: Use the new `materialize_source_table_mysql` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_mysql` resource instead.",
Type: schema.TypeList,
Elem: &schema.Schema{Type: schema.TypeString},
Optional: true,
},
"text_columns": {
- Description: "Decode data as text for specific columns that contain MySQL types that are unsupported in Materialize. Can only be updated in place when also updating a corresponding `table` attribute.",
+ Description: "Decode data as text for specific columns that contain MySQL types that are unsupported in Materialize. Can only be updated in place when also updating a corresponding `table` attribute. Deprecated: Use the new `materialize_source_table_mysql` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_mysql` resource instead.",
Type: schema.TypeList,
Elem: &schema.Schema{Type: schema.TypeString},
Optional: true,
},
"table": {
- Description: "Specify the tables to be included in the source. If not specified, all tables are included.",
+ Description: "Specify the tables to be included in the source. Deprecated: Use the new `materialize_source_table_mysql` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_mysql` resource instead.",
Type: schema.TypeSet,
Optional: true,
Elem: &schema.Resource{
@@ -76,6 +79,13 @@ var sourceMySQLSchema = map[string]*schema.Schema{
},
},
},
+ "all_tables": {
+ Description: "Include all tables in the source. If `table` is specified, this will be ignored.",
+ Deprecated: "Use the new `materialize_source_table_mysql` resource instead.",
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ },
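+ // Note: when enabled, the builder's AllTables() call (see sourceMySQLCreate
+ // below) is expected to add a FOR ALL TABLES clause to the CREATE SOURCE
+ // statement; this describes the intent, not a guarantee of the exact SQL emitted.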
"expose_progress": IdentifierSchema(IdentifierSchemaParams{
Elem: "expose_progress",
Description: "The name of the progress collection for the source. If this is not specified, the collection will be named `_progress`.",
@@ -131,6 +141,10 @@ func sourceMySQLCreate(ctx context.Context, d *schema.ResourceData, meta any) di
b.Tables(t)
}
+ if v, ok := d.GetOk("all_tables"); ok && v.(bool) {
+ b.AllTables()
+ }
+
if v, ok := d.GetOk("ignore_columns"); ok && len(v.([]interface{})) > 0 {
columns, err := materialize.GetSliceValueString("ignore_columns", v.([]interface{}))
if err != nil {
diff --git a/pkg/resources/resource_source_postgres.go b/pkg/resources/resource_source_postgres.go
index c95e9887..ead886d1 100644
--- a/pkg/resources/resource_source_postgres.go
+++ b/pkg/resources/resource_source_postgres.go
@@ -33,14 +33,17 @@ var sourcePostgresSchema = map[string]*schema.Schema{
ForceNew: true,
},
"text_columns": {
- Description: "Decode data as text for specific columns that contain PostgreSQL types that are unsupported in Materialize. Can only be updated in place when also updating a corresponding `table` attribute.",
+ Description: "Decode data as text for specific columns that contain PostgreSQL types that are unsupported in Materialize. Can only be updated in place when also updating a corresponding `table` attribute. Deprecated: Use the new `materialize_source_table_postgres` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_postgres` resource instead.",
Type: schema.TypeList,
Elem: &schema.Schema{Type: schema.TypeString},
Optional: true,
},
"table": {
- Description: "Creates subsources for specific tables in the Postgres connection.",
+ Description: "Creates subsources for specific tables in the Postgres connection. Deprecated: Use the new `materialize_source_table_postgres` resource instead.",
+ Deprecated: "Use the new `materialize_source_table_postgres` resource instead.",
Type: schema.TypeSet,
+ Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"upstream_name": {
@@ -74,8 +77,6 @@ var sourcePostgresSchema = map[string]*schema.Schema{
},
},
},
- Required: true,
- MinItems: 1,
},
"expose_progress": IdentifierSchema(IdentifierSchemaParams{
Elem: "expose_progress",
@@ -203,6 +204,7 @@ func sourcePostgresCreate(ctx context.Context, d *schema.ResourceData, meta any)
}
if v, ok := d.GetOk("table"); ok {
+ log.Printf("[WARN] The 'table' field in materialize_source_postgres is deprecated. Use the new `materialize_source_table_postgres` resource instead.")
tables := v.(*schema.Set).List()
t := materialize.GetTableStruct(tables)
b.Table(t)
@@ -289,6 +291,7 @@ func sourcePostgresUpdate(ctx context.Context, d *schema.ResourceData, meta any)
}
if d.HasChange("table") {
+ log.Printf("[WARN] The 'table' field in materialize_source_postgres is deprecated. Use the new `materialize_source_table_postgres` resource instead.")
ot, nt := d.GetChange("table")
addTables := materialize.DiffTableStructs(nt.(*schema.Set).List(), ot.(*schema.Set).List())
dropTables := materialize.DiffTableStructs(ot.(*schema.Set).List(), nt.(*schema.Set).List())
diff --git a/pkg/resources/resource_source_table.go b/pkg/resources/resource_source_table.go
new file mode 100644
index 00000000..91788358
--- /dev/null
+++ b/pkg/resources/resource_source_table.go
@@ -0,0 +1,126 @@
+package resources
+
+import (
+ "context"
+ "database/sql"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+)
+
+func sourceTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+ i := d.Id()
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ t, err := materialize.ScanSourceTable(metaDb, utils.ExtractId(i))
+ if err == sql.ErrNoRows {
+ d.SetId("")
+ return nil
+ } else if err != nil {
+ return diag.FromErr(err)
+ }
+
+ d.SetId(utils.TransformIdWithRegion(string(region), i))
+
+ if err := d.Set("name", t.TableName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("schema_name", t.SchemaName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("database_name", t.DatabaseName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ source := []interface{}{
+ map[string]interface{}{
+ "name": t.SourceName.String,
+ "schema_name": t.SourceSchemaName.String,
+ "database_name": t.SourceDatabaseName.String,
+ },
+ }
+ if err := d.Set("source", source); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("ownership_role", t.OwnerName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("comment", t.Comment.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ return nil
+}
+
+func sourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ tableName := d.Get("name").(string)
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ metaDb, _, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: tableName, SchemaName: schemaName, DatabaseName: databaseName}
+
+ if d.HasChange("name") {
+ oldName, newName := d.GetChange("name")
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: oldName.(string), SchemaName: schemaName, DatabaseName: databaseName}
+ b := materialize.NewSourceTableBuilder(metaDb, o)
+ if err := b.Rename(newName.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if d.HasChange("ownership_role") {
+ _, newRole := d.GetChange("ownership_role")
+ b := materialize.NewOwnershipBuilder(metaDb, o)
+
+ if err := b.Alter(newRole.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if d.HasChange("comment") {
+ _, newComment := d.GetChange("comment")
+ b := materialize.NewCommentBuilder(metaDb, o)
+
+ if err := b.Object(newComment.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ return sourceTableRead(ctx, d, meta)
+}
+
+func sourceTableDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ tableName := d.Get("name").(string)
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ metaDb, _, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: tableName, SchemaName: schemaName, DatabaseName: databaseName}
+ b := materialize.NewSourceTableBuilder(metaDb, o)
+
+ if err := b.Drop(); err != nil {
+ return diag.FromErr(err)
+ }
+
+ return nil
+}
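+
+// The helpers above (sourceTableRead, sourceTableUpdate, sourceTableDelete) are
+// shared building blocks for the per-connector source table resources; for
+// example, SourceTableKafka (resource_source_table_kafka.go) wires its
+// DeleteContext to sourceTableDelete while providing Kafka-specific create and
+// read functions.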
diff --git a/pkg/resources/resource_source_table_kafka.go b/pkg/resources/resource_source_table_kafka.go
new file mode 100644
index 00000000..d1adf4a5
--- /dev/null
+++ b/pkg/resources/resource_source_table_kafka.go
@@ -0,0 +1,417 @@
+package resources
+
+import (
+ "context"
+ "database/sql"
+ "log"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+)
+
+var sourceTableKafkaSchema = map[string]*schema.Schema{
+ "name": ObjectNameSchema("source table", true, false),
+ "schema_name": SchemaNameSchema("source table", false),
+ "database_name": DatabaseNameSchema("source table", false),
+ "qualified_sql_name": QualifiedNameSchema("source table"),
+ "comment": CommentSchema(false),
+ "source": IdentifierSchema(IdentifierSchemaParams{
+ Elem: "source",
+ Description: "The source this table is created from.",
+ Required: true,
+ ForceNew: true,
+ }),
+ "topic": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Computed: true,
+ Description: "The name of the Kafka topic in the Kafka cluster.",
+ },
+ "include_key": {
+ Description: "Include a column containing the Kafka message key.",
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ },
+ "include_key_alias": {
+ Description: "Provide an alias for the key column.",
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+ "include_headers": {
+ Description: "Include message headers.",
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ Default: false,
+ },
+ "include_headers_alias": {
+ Description: "Provide an alias for the headers column.",
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+ "include_partition": {
+ Description: "Include a partition column containing the Kafka message partition",
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ },
+ "include_partition_alias": {
+ Description: "Provide an alias for the partition column.",
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+ "include_offset": {
+ Description: "Include an offset column containing the Kafka message offset.",
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ },
+ "include_offset_alias": {
+ Description: "Provide an alias for the offset column.",
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+ "include_timestamp": {
+ Description: "Include a timestamp column containing the Kafka message timestamp.",
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ },
+ "include_timestamp_alias": {
+ Description: "Provide an alias for the timestamp column.",
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+ "format": FormatSpecSchema("format", "How to decode raw bytes from different formats into data structures Materialize can understand at runtime.", false),
+ "key_format": FormatSpecSchema("key_format", "Set the key format explicitly.", false),
+ "value_format": FormatSpecSchema("value_format", "Set the value format explicitly.", false),
+ "envelope": {
+ Description: "How Materialize should interpret records (e.g. append-only, upsert)..",
+ Type: schema.TypeList,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "upsert": {
+ Description: "Use the upsert envelope, which uses message keys to handle CRUD operations.",
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ ConflictsWith: []string{"envelope.0.debezium", "envelope.0.none"},
+ },
+ "debezium": {
+ Description: "Use the Debezium envelope, which uses a diff envelope to handle CRUD operations.",
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ ConflictsWith: []string{"envelope.0.upsert", "envelope.0.none", "envelope.0.upsert_options"},
+ },
+ "none": {
+ Description: "Use an append-only envelope. This means that records will only be appended and cannot be updated or deleted.",
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ ConflictsWith: []string{"envelope.0.upsert", "envelope.0.debezium", "envelope.0.upsert_options"},
+ },
+ "upsert_options": {
+ Description: "Options for the upsert envelope.",
+ Type: schema.TypeList,
+ MaxItems: 1,
+ Optional: true,
+ ForceNew: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "value_decoding_errors": {
+ Description: "Specify how to handle value decoding errors in the upsert envelope.",
+ Type: schema.TypeList,
+ MaxItems: 1,
+ Optional: true,
+ ForceNew: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "inline": {
+ Description: "Configuration for inline value decoding errors.",
+ Type: schema.TypeList,
+ MaxItems: 1,
+ Optional: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "enabled": {
+ Description: "Enable inline value decoding errors.",
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ },
+ "alias": {
+ Description: "Specify an alias for the value decoding errors column, to use an alternative name for the error column. If not specified, the column name will be `error`.",
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ Optional: true,
+ ForceNew: true,
+ },
+ "expose_progress": IdentifierSchema(IdentifierSchemaParams{
+ Elem: "expose_progress",
+ Description: "The name of the progress collection for the source. If this is not specified, the collection will be named `_progress`.",
+ Required: false,
+ ForceNew: true,
+ }),
+ "ownership_role": OwnershipRoleSchema(),
+ "region": RegionSchema(),
+}
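+
+// Illustrative only: a minimal HCL sketch (assumed names) for this resource,
+// building a table from an existing materialize_source_kafka source:
+//
+//	resource "materialize_source_table_kafka" "example" {
+//	  name  = "events_table"
+//	  topic = "events"
+//
+//	  source {
+//	    name          = materialize_source_kafka.example.name
+//	    schema_name   = "public"
+//	    database_name = "materialize"
+//	  }
+//
+//	  envelope {
+//	    upsert = true
+//	  }
+//	}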
+
+func SourceTableKafka() *schema.Resource {
+ return &schema.Resource{
+ Description: "A Kafka source describes a Kafka cluster you want Materialize to read data from.",
+
+ CreateContext: sourceTableKafkaCreate,
+ ReadContext: sourceTableKafkaRead,
+ UpdateContext: sourceTableKafkaUpdate,
+ DeleteContext: sourceTableDelete,
+
+ Importer: &schema.ResourceImporter{
+ StateContext: schema.ImportStatePassthroughContext,
+ },
+
+ Schema: sourceTableKafkaSchema,
+ }
+}
+
+func sourceTableKafkaCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ sourceName := d.Get("name").(string)
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: sourceName, SchemaName: schemaName, DatabaseName: databaseName}
+ b := materialize.NewSourceTableKafkaBuilder(metaDb, o)
+
+ source := materialize.GetIdentifierSchemaStruct(d.Get("source"))
+ b.Source(source)
+
+ b.UpstreamName(d.Get("topic").(string))
+
+ if v, ok := d.GetOk("include_key"); ok && v.(bool) {
+ if alias, ok := d.GetOk("include_key_alias"); ok {
+ b.IncludeKeyAlias(alias.(string))
+ } else {
+ b.IncludeKey()
+ }
+ }
+
+ if v, ok := d.GetOk("include_partition"); ok && v.(bool) {
+ if alias, ok := d.GetOk("include_partition_alias"); ok {
+ b.IncludePartitionAlias(alias.(string))
+ } else {
+ b.IncludePartition()
+ }
+ }
+
+ if v, ok := d.GetOk("include_offset"); ok && v.(bool) {
+ if alias, ok := d.GetOk("include_offset_alias"); ok {
+ b.IncludeOffsetAlias(alias.(string))
+ } else {
+ b.IncludeOffset()
+ }
+ }
+
+ if v, ok := d.GetOk("include_timestamp"); ok && v.(bool) {
+ if alias, ok := d.GetOk("include_timestamp_alias"); ok {
+ b.IncludeTimestampAlias(alias.(string))
+ } else {
+ b.IncludeTimestamp()
+ }
+ }
+
+ if v, ok := d.GetOk("include_headers"); ok && v.(bool) {
+ if alias, ok := d.GetOk("include_headers_alias"); ok {
+ b.IncludeHeadersAlias(alias.(string))
+ } else {
+ b.IncludeHeaders()
+ }
+ }
+
+ if v, ok := d.GetOk("format"); ok {
+ format := materialize.GetFormatSpecStruc(v)
+ b.Format(format)
+ }
+
+ if v, ok := d.GetOk("key_format"); ok {
+ format := materialize.GetFormatSpecStruc(v)
+ b.KeyFormat(format)
+ }
+
+ if v, ok := d.GetOk("value_format"); ok {
+ format := materialize.GetFormatSpecStruc(v)
+ b.ValueFormat(format)
+ }
+
+ if v, ok := d.GetOk("envelope"); ok {
+ envelope := materialize.GetSourceKafkaEnvelopeStruct(v)
+ b.Envelope(envelope)
+ }
+
+ if v, ok := d.GetOk("expose_progress"); ok {
+ e := materialize.GetIdentifierSchemaStruct(v)
+ b.ExposeProgress(e)
+ }
+
+ // create resource
+ if err := b.Create(); err != nil {
+ return diag.FromErr(err)
+ }
+
+ // ownership
+ if v, ok := d.GetOk("ownership_role"); ok {
+ ownership := materialize.NewOwnershipBuilder(metaDb, o)
+
+ if err := ownership.Alter(v.(string)); err != nil {
+ log.Printf("[DEBUG] resource failed ownership, dropping object: %s", o.Name)
+ b.Drop()
+ return diag.FromErr(err)
+ }
+ }
+
+ // object comment
+ if v, ok := d.GetOk("comment"); ok {
+ comment := materialize.NewCommentBuilder(metaDb, o)
+
+ if err := comment.Object(v.(string)); err != nil {
+ log.Printf("[DEBUG] resource failed comment, dropping object: %s", o.Name)
+ b.Drop()
+ return diag.FromErr(err)
+ }
+ }
+
+ // set id
+ i, err := materialize.SourceTableKafkaId(metaDb, o)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ d.SetId(utils.TransformIdWithRegion(string(region), i))
+
+ return sourceTableKafkaRead(ctx, d, meta)
+}
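+
+// Illustrative only: for the fixture used in the unit tests, the create path
+// above is expected to issue SQL of roughly this shape (see
+// resource_source_table_kafka_test.go):
+//
+//	CREATE TABLE "database"."schema"."table"
+//	FROM SOURCE "materialize"."public"."kafka_source"
+//	(REFERENCE "topic")
+//	FORMAT JSON
+//	INCLUDE KEY AS "message_key", HEADERS AS "message_headers", PARTITION AS "message_partition"
+//	ENVELOPE UPSERT (VALUE DECODING ERRORS = (INLINE AS "decoding_error"));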
+
+func sourceTableKafkaRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+ i := d.Id()
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ t, err := materialize.ScanSourceTableKafka(metaDb, utils.ExtractId(i))
+ if err == sql.ErrNoRows {
+ d.SetId("")
+ return nil
+ } else if err != nil {
+ return diag.FromErr(err)
+ }
+
+ d.SetId(utils.TransformIdWithRegion(string(region), i))
+
+ if err := d.Set("name", t.TableName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("schema_name", t.SchemaName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("database_name", t.DatabaseName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ source := []interface{}{
+ map[string]interface{}{
+ "name": t.SourceName.String,
+ "schema_name": t.SourceSchemaName.String,
+ "database_name": t.SourceDatabaseName.String,
+ },
+ }
+ if err := d.Set("source", source); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("topic", t.UpstreamName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ // TODO: include envelope_type, key_format and value_format from mz_internal.mz_kafka_source_tables
+
+ if err := d.Set("ownership_role", t.OwnerName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("comment", t.Comment.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ return nil
+}
+
+func sourceTableKafkaUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ tableName := d.Get("name").(string)
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ metaDb, _, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: tableName, SchemaName: schemaName, DatabaseName: databaseName}
+
+ if d.HasChange("name") {
+ oldName, newName := d.GetChange("name")
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: oldName.(string), SchemaName: schemaName, DatabaseName: databaseName}
+ b := materialize.NewSourceTableKafkaBuilder(metaDb, o)
+ if err := b.Rename(newName.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if d.HasChange("ownership_role") {
+ _, newRole := d.GetChange("ownership_role")
+ b := materialize.NewOwnershipBuilder(metaDb, o)
+
+ if err := b.Alter(newRole.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if d.HasChange("comment") {
+ _, newComment := d.GetChange("comment")
+ b := materialize.NewCommentBuilder(metaDb, o)
+
+ if err := b.Object(newComment.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ return sourceTableKafkaRead(ctx, d, meta)
+}
diff --git a/pkg/resources/resource_source_table_kafka_test.go b/pkg/resources/resource_source_table_kafka_test.go
new file mode 100644
index 00000000..0088b876
--- /dev/null
+++ b/pkg/resources/resource_source_table_kafka_test.go
@@ -0,0 +1,508 @@
+package resources
+
+import (
+ "context"
+ "testing"
+
+ sqlmock "github.com/DATA-DOG/go-sqlmock"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/stretchr/testify/require"
+)
+
+var inSourceTableKafka = map[string]interface{}{
+ "name": "table",
+ "schema_name": "schema",
+ "database_name": "database",
+ "source": []interface{}{
+ map[string]interface{}{
+ "name": "kafka_source",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ "topic": "topic",
+ "include_key": true,
+ "include_key_alias": "message_key",
+ "include_headers": true,
+ "include_headers_alias": "message_headers",
+ "include_partition": true,
+ "include_partition_alias": "message_partition",
+ "format": []interface{}{
+ map[string]interface{}{
+ "json": true,
+ },
+ },
+ "envelope": []interface{}{
+ map[string]interface{}{
+ "upsert": true,
+ "upsert_options": []interface{}{
+ map[string]interface{}{
+ "value_decoding_errors": []interface{}{
+ map[string]interface{}{
+ "inline": []interface{}{
+ map[string]interface{}{
+ "enabled": true,
+ "alias": "decoding_error",
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+}
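+
+// Note: the fixture above mirrors the resource schema and is passed to
+// schema.TestResourceDataRaw in each test; the expected SQL strings escape
+// regex metacharacters because go-sqlmock matches queries as regular
+// expressions by default.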
+
+func TestResourceSourceTableKafkaCreate(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableKafka().Schema, inSourceTableKafka)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table"
+ FROM SOURCE "materialize"."public"."kafka_source"
+ \(REFERENCE "topic"\)
+ FORMAT JSON
+ INCLUDE KEY AS "message_key", HEADERS AS "message_headers", PARTITION AS "message_partition"
+ ENVELOPE UPSERT \(VALUE DECODING ERRORS = \(INLINE AS "decoding_error"\)\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'table'`
+ testhelpers.MockSourceTableKafkaScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableKafkaScan(mock, pp)
+
+ if err := sourceTableKafkaCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableKafkaRead(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableKafka().Schema, inSourceTableKafka)
+ d.SetId("u1")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableKafkaScan(mock, pp)
+
+ if err := sourceTableKafkaRead(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+
+ r.Equal("table", d.Get("name").(string))
+ r.Equal("schema", d.Get("schema_name").(string))
+ r.Equal("database", d.Get("database_name").(string))
+ })
+}
+
+func TestResourceSourceTableKafkaUpdate(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableKafka().Schema, inSourceTableKafka)
+ d.SetId("u1")
+ d.Set("name", "old_table")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(`ALTER TABLE "database"."schema"."" RENAME TO "database"."schema"."table"`).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableKafkaScan(mock, pp)
+
+ if err := sourceTableKafkaUpdate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableKafkaDelete(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableKafka().Schema, inSourceTableKafka)
+ d.SetId("u1")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(`DROP TABLE "database"."schema"."table"`).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ if err := sourceTableDelete(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableKafkaCreateWithAvroFormat(t *testing.T) {
+ r := require.New(t)
+ inSourceTableKafkaAvro := map[string]interface{}{
+ "name": "table_avro",
+ "schema_name": "schema",
+ "database_name": "database",
+ "source": []interface{}{
+ map[string]interface{}{
+ "name": "kafka_source",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ "topic": "topic",
+ "format": []interface{}{
+ map[string]interface{}{
+ "avro": []interface{}{
+ map[string]interface{}{
+ "schema_registry_connection": []interface{}{
+ map[string]interface{}{
+ "name": "sr_conn",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ },
+ },
+ },
+ },
+ "envelope": []interface{}{
+ map[string]interface{}{
+ "debezium": true,
+ },
+ },
+ }
+ d := schema.TestResourceDataRaw(t, SourceTableKafka().Schema, inSourceTableKafkaAvro)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table_avro"
+ FROM SOURCE "materialize"."public"."kafka_source"
+ \(REFERENCE "topic"\)
+ FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION "materialize"."public"."sr_conn"
+ ENVELOPE DEBEZIUM;`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'table_avro'`
+ testhelpers.MockSourceTableKafkaScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableKafkaScan(mock, pp)
+
+ if err := sourceTableKafkaCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableKafkaCreateIncludeTrueNoAlias(t *testing.T) {
+ r := require.New(t)
+
+ // Copy the shared fixture so the mutations below do not leak into other tests.
+ testInSourceTableKafka := make(map[string]interface{}, len(inSourceTableKafka))
+ for k, v := range inSourceTableKafka {
+ testInSourceTableKafka[k] = v
+ }
+ testInSourceTableKafka["include_key"] = true
+ delete(testInSourceTableKafka, "include_key_alias")
+ testInSourceTableKafka["include_headers"] = true
+ delete(testInSourceTableKafka, "include_headers_alias")
+ testInSourceTableKafka["include_partition"] = true
+ delete(testInSourceTableKafka, "include_partition_alias")
+ testInSourceTableKafka["include_offset"] = true
+ testInSourceTableKafka["include_timestamp"] = true
+
+ d := schema.TestResourceDataRaw(t, SourceTableKafka().Schema, testInSourceTableKafka)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table"
+ FROM SOURCE "materialize"."public"."kafka_source"
+ \(REFERENCE "topic"\)
+ FORMAT JSON
+ INCLUDE KEY, HEADERS, PARTITION, OFFSET, TIMESTAMP
+ ENVELOPE UPSERT \(VALUE DECODING ERRORS = \(INLINE AS "decoding_error"\)\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'table'`
+ testhelpers.MockSourceTableKafkaScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableKafkaScan(mock, pp)
+
+ if err := sourceTableKafkaCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableKafkaCreateIncludeFalseWithAlias(t *testing.T) {
+ r := require.New(t)
+
+ // Copy the shared fixture so the mutations below do not leak into other tests.
+ testInSourceTableKafka := make(map[string]interface{}, len(inSourceTableKafka))
+ for k, v := range inSourceTableKafka {
+ testInSourceTableKafka[k] = v
+ }
+ testInSourceTableKafka["include_key"] = false
+ testInSourceTableKafka["include_headers"] = false
+ testInSourceTableKafka["include_partition"] = false
+ testInSourceTableKafka["include_offset"] = false
+ testInSourceTableKafka["include_timestamp"] = false
+
+ d := schema.TestResourceDataRaw(t, SourceTableKafka().Schema, testInSourceTableKafka)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table"
+ FROM SOURCE "materialize"."public"."kafka_source"
+ \(REFERENCE "topic"\)
+ FORMAT JSON
+ ENVELOPE UPSERT \(VALUE DECODING ERRORS = \(INLINE AS "decoding_error"\)\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'table'`
+ testhelpers.MockSourceTableKafkaScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableKafkaScan(mock, pp)
+
+ if err := sourceTableKafkaCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableKafkaCreateWithCSVFormat(t *testing.T) {
+ r := require.New(t)
+ inSourceTableKafkaCSV := map[string]interface{}{
+ "name": "table_csv",
+ "schema_name": "schema",
+ "database_name": "database",
+ "source": []interface{}{
+ map[string]interface{}{
+ "name": "kafka_source",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ "topic": "topic",
+ "format": []interface{}{
+ map[string]interface{}{
+ "csv": []interface{}{
+ map[string]interface{}{
+ "delimited_by": ",",
+ "header": []interface{}{"column1", "column2", "column3"},
+ },
+ },
+ },
+ },
+ }
+ d := schema.TestResourceDataRaw(t, SourceTableKafka().Schema, inSourceTableKafkaCSV)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table_csv"
+ FROM SOURCE "materialize"."public"."kafka_source"
+ \(REFERENCE "topic"\)
+ FORMAT CSV WITH HEADER \( column1, column2, column3 \) DELIMITER ',';`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'table_csv'`
+ testhelpers.MockSourceTableKafkaScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableKafkaScan(mock, pp)
+
+ if err := sourceTableKafkaCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableKafkaCreateWithKeyAndValueFormat(t *testing.T) {
+ r := require.New(t)
+ inSourceTableKafkaKeyValue := map[string]interface{}{
+ "name": "table_key_value",
+ "schema_name": "schema",
+ "database_name": "database",
+ "source": []interface{}{
+ map[string]interface{}{
+ "name": "kafka_source",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ "topic": "topic",
+ "key_format": []interface{}{
+ map[string]interface{}{
+ "json": true,
+ },
+ },
+ "value_format": []interface{}{
+ map[string]interface{}{
+ "avro": []interface{}{
+ map[string]interface{}{
+ "schema_registry_connection": []interface{}{
+ map[string]interface{}{
+ "name": "sr_conn",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+ d := schema.TestResourceDataRaw(t, SourceTableKafka().Schema, inSourceTableKafkaKeyValue)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table_key_value"
+ FROM SOURCE "materialize"."public"."kafka_source"
+ \(REFERENCE "topic"\)
+ KEY FORMAT JSON
+ VALUE FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY CONNECTION "materialize"."public"."sr_conn";`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'table_key_value'`
+ testhelpers.MockSourceTableKafkaScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableKafkaScan(mock, pp)
+
+ if err := sourceTableKafkaCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableKafkaCreateWithProtobufFormat(t *testing.T) {
+ r := require.New(t)
+ inSourceTableKafkaProtobuf := map[string]interface{}{
+ "name": "table_protobuf",
+ "schema_name": "schema",
+ "database_name": "database",
+ "source": []interface{}{
+ map[string]interface{}{
+ "name": "kafka_source",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ "topic": "topic",
+ "format": []interface{}{
+ map[string]interface{}{
+ "protobuf": []interface{}{
+ map[string]interface{}{
+ "schema_registry_connection": []interface{}{
+ map[string]interface{}{
+ "name": "sr_conn",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ "message": "MyMessage",
+ },
+ },
+ },
+ },
+ "envelope": []interface{}{
+ map[string]interface{}{
+ "none": true,
+ },
+ },
+ }
+ d := schema.TestResourceDataRaw(t, SourceTableKafka().Schema, inSourceTableKafkaProtobuf)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table_protobuf"
+ FROM SOURCE "materialize"."public"."kafka_source"
+ \(REFERENCE "topic"\)
+ FORMAT PROTOBUF MESSAGE 'MyMessage' USING CONFLUENT SCHEMA REGISTRY CONNECTION "materialize"."public"."sr_conn"
+ ENVELOPE NONE;`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'table_protobuf'`
+ testhelpers.MockSourceTableKafkaScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableKafkaScan(mock, pp)
+
+ if err := sourceTableKafkaCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableKafkaCreateWithNoTopic(t *testing.T) {
+ r := require.New(t)
+ inSourceTableKafkaNoTopic := map[string]interface{}{
+ "name": "no_topic",
+ "schema_name": "schema",
+ "database_name": "database",
+ "source": []interface{}{
+ map[string]interface{}{
+ "name": "kafka_source",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ "format": []interface{}{
+ map[string]interface{}{
+ "json": true,
+ },
+ },
+ "envelope": []interface{}{
+ map[string]interface{}{
+ "none": true,
+ },
+ },
+ }
+ d := schema.TestResourceDataRaw(t, SourceTableKafka().Schema, inSourceTableKafkaNoTopic)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."no_topic"
+ FROM SOURCE "materialize"."public"."kafka_source"
+ FORMAT JSON
+ ENVELOPE NONE;`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'no_topic'`
+ testhelpers.MockSourceTableKafkaScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableKafkaScan(mock, pp)
+
+ if err := sourceTableKafkaCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
diff --git a/pkg/resources/resource_source_table_load_generator.go b/pkg/resources/resource_source_table_load_generator.go
new file mode 100644
index 00000000..39d321e1
--- /dev/null
+++ b/pkg/resources/resource_source_table_load_generator.go
@@ -0,0 +1,110 @@
+package resources
+
+import (
+ "context"
+ "log"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+)
+
+var sourceTableLoadGenSchema = map[string]*schema.Schema{
+ "name": ObjectNameSchema("table", true, false),
+ "schema_name": SchemaNameSchema("table", false),
+ "database_name": DatabaseNameSchema("table", false),
+ "qualified_sql_name": QualifiedNameSchema("table"),
+ "source": IdentifierSchema(IdentifierSchemaParams{
+ Elem: "source",
+ Description: "The source this table is created from. Compatible with `auction_options`, `marketing_options`, and `tpch_options` load generator sources.",
+ Required: true,
+ ForceNew: true,
+ }),
+ "upstream_name": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "The name of the table in the upstream database.",
+ },
+ "upstream_schema_name": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Description: "The schema of the table in the upstream database.",
+ },
+ "comment": CommentSchema(false),
+ "ownership_role": OwnershipRoleSchema(),
+ "region": RegionSchema(),
+}
+
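+// SourceTableLoadGen returns the Terraform resource for tables created from an
+// existing load generator source (auction, marketing, or TPCH). The upstream_name
+// and optional upstream_schema_name attributes select which generated table the
+// new table references.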
+func SourceTableLoadGen() *schema.Resource {
+ return &schema.Resource{
+ CreateContext: sourceTableLoadGenCreate,
+ ReadContext: sourceTableRead,
+ UpdateContext: sourceTableUpdate,
+ DeleteContext: sourceTableDelete,
+
+ Importer: &schema.ResourceImporter{
+ StateContext: schema.ImportStatePassthroughContext,
+ },
+
+ Schema: sourceTableLoadGenSchema,
+ }
+}
+
+func sourceTableLoadGenCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ tableName := d.Get("name").(string)
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: tableName, SchemaName: schemaName, DatabaseName: databaseName}
+ b := materialize.NewSourceTableLoadGenBuilder(metaDb, o)
+
+ source := materialize.GetIdentifierSchemaStruct(d.Get("source"))
+ b.Source(source)
+
+ b.UpstreamName(d.Get("upstream_name").(string))
+
+ if v, ok := d.GetOk("upstream_schema_name"); ok {
+ b.UpstreamSchemaName(v.(string))
+ }
+
+ if err := b.Create(); err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Handle ownership
+ if v, ok := d.GetOk("ownership_role"); ok {
+ ownership := materialize.NewOwnershipBuilder(metaDb, o)
+ if err := ownership.Alter(v.(string)); err != nil {
+ log.Printf("[DEBUG] resource failed ownership, dropping object: %s", o.Name)
+ b.Drop()
+ return diag.FromErr(err)
+ }
+ }
+
+ // Handle comments
+ if v, ok := d.GetOk("comment"); ok {
+ comment := materialize.NewCommentBuilder(metaDb, o)
+ if err := comment.Object(v.(string)); err != nil {
+ log.Printf("[DEBUG] resource failed comment, dropping object: %s", o.Name)
+ b.Drop()
+ return diag.FromErr(err)
+ }
+ }
+
+ i, err := materialize.SourceTableId(metaDb, o)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ d.SetId(utils.TransformIdWithRegion(string(region), i))
+
+ return sourceTableRead(ctx, d, meta)
+}
diff --git a/pkg/resources/resource_source_table_load_generator_test.go b/pkg/resources/resource_source_table_load_generator_test.go
new file mode 100644
index 00000000..6643830b
--- /dev/null
+++ b/pkg/resources/resource_source_table_load_generator_test.go
@@ -0,0 +1,113 @@
+package resources
+
+import (
+ "context"
+ "testing"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ sqlmock "github.com/DATA-DOG/go-sqlmock"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/stretchr/testify/require"
+)
+
+var inSourceTableLoadGen = map[string]interface{}{
+ "name": "table",
+ "schema_name": "schema",
+ "database_name": "database",
+ "source": []interface{}{
+ map[string]interface{}{
+ "name": "loadgen",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ "upstream_name": "upstream_table",
+ "upstream_schema_name": "upstream_schema",
+ "text_columns": []interface{}{"column1", "column2"},
+ "ignore_columns": []interface{}{"column3", "column4"},
+}
+
+func TestResourceSourceTableLoadGenCreate(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableLoadGen().Schema, inSourceTableLoadGen)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table"
+ FROM SOURCE "materialize"."public"."loadgen"
+ \(REFERENCE "upstream_schema"."upstream_table"\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'table'`
+ testhelpers.MockSourceTableScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableScan(mock, pp)
+
+ if err := sourceTableLoadGenCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableLoadGenRead(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableLoadGen().Schema, inSourceTableLoadGen)
+ d.SetId("u1")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableScan(mock, pp)
+
+ if err := sourceTableRead(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+
+ r.Equal("table", d.Get("name").(string))
+ r.Equal("schema", d.Get("schema_name").(string))
+ r.Equal("database", d.Get("database_name").(string))
+ })
+}
+
+func TestResourceSourceTableLoadGenUpdate(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableLoadGen().Schema, inSourceTableLoadGen)
+ d.SetId("u1")
+ d.Set("name", "old_table")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(`ALTER TABLE "database"."schema"."" RENAME TO "database"."schema"."table"`).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableScan(mock, pp)
+
+ if err := sourceTableUpdate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableLoadGenDelete(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableLoadGen().Schema, inSourceTableLoadGen)
+ d.SetId("u1")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(`DROP TABLE "database"."schema"."table"`).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ if err := sourceTableDelete(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
diff --git a/pkg/resources/resource_source_table_mysql.go b/pkg/resources/resource_source_table_mysql.go
new file mode 100644
index 00000000..78b321e6
--- /dev/null
+++ b/pkg/resources/resource_source_table_mysql.go
@@ -0,0 +1,243 @@
+package resources
+
+import (
+ "context"
+ "database/sql"
+ "log"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+)
+
+var sourceTableMySQLSchema = map[string]*schema.Schema{
+ "name": ObjectNameSchema("table", true, false),
+ "schema_name": SchemaNameSchema("table", false),
+ "database_name": DatabaseNameSchema("table", false),
+ "qualified_sql_name": QualifiedNameSchema("table"),
+ "source": IdentifierSchema(IdentifierSchemaParams{
+ Elem: "source",
+ Description: "The source this table is created from.",
+ Required: true,
+ ForceNew: true,
+ }),
+ "upstream_name": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "The name of the table in the upstream database.",
+ },
+ "upstream_schema_name": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Description: "The schema of the table in the upstream database.",
+ },
+ "text_columns": {
+ Description: "Columns to be decoded as text.",
+ Type: schema.TypeList,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ Optional: true,
+ ForceNew: true,
+ },
+ "exclude_columns": {
+ Description: "Exclude specific columns when reading data from MySQL. This option used to be called `ignore_columns`.",
+ Type: schema.TypeList,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ Optional: true,
+ ForceNew: true,
+ },
+ "comment": CommentSchema(false),
+ "ownership_role": OwnershipRoleSchema(),
+ "region": RegionSchema(),
+}
+
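+// SourceTableMySQL returns the Terraform resource for tables created from MySQL
+// sources. An illustrative configuration sketch, assuming the resource is
+// registered as materialize_source_table_mysql (registration lives outside this
+// file) and using made-up upstream object names:
+//
+//   resource "materialize_source_table_mysql" "example" {
+//     name          = "mysql_table"
+//     schema_name   = "public"
+//     database_name = "materialize"
+//
+//     source {
+//       name          = "mysql_source"
+//       schema_name   = "public"
+//       database_name = "materialize"
+//     }
+//
+//     upstream_name        = "orders"
+//     upstream_schema_name = "shop"
+//
+//     text_columns    = ["status"]
+//     exclude_columns = ["secret_token"]
+//   }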
+func SourceTableMySQL() *schema.Resource {
+ return &schema.Resource{
+ CreateContext: sourceTableMySQLCreate,
+ ReadContext: sourceTableMySQLRead,
+ UpdateContext: sourceTableMySQLUpdate,
+ DeleteContext: sourceTableDelete,
+
+ Importer: &schema.ResourceImporter{
+ StateContext: schema.ImportStatePassthroughContext,
+ },
+
+ Schema: sourceTableMySQLSchema,
+ }
+}
+
+func sourceTableMySQLCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ tableName := d.Get("name").(string)
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: tableName, SchemaName: schemaName, DatabaseName: databaseName}
+ b := materialize.NewSourceTableMySQLBuilder(metaDb, o)
+
+ source := materialize.GetIdentifierSchemaStruct(d.Get("source"))
+ b.Source(source)
+
+ b.UpstreamName(d.Get("upstream_name").(string))
+
+ if v, ok := d.GetOk("upstream_schema_name"); ok {
+ b.UpstreamSchemaName(v.(string))
+ }
+
+ if v, ok := d.GetOk("text_columns"); ok {
+ textColumns, err := materialize.GetSliceValueString("text_columns", v.([]interface{}))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ b.TextColumns(textColumns)
+ }
+
+ if v, ok := d.GetOk("exclude_columns"); ok && len(v.([]interface{})) > 0 {
+ columns, err := materialize.GetSliceValueString("exclude_columns", v.([]interface{}))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ b.ExcludeColumns(columns)
+ }
+
+ if err := b.Create(); err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Handle ownership
+ if v, ok := d.GetOk("ownership_role"); ok {
+ ownership := materialize.NewOwnershipBuilder(metaDb, o)
+ if err := ownership.Alter(v.(string)); err != nil {
+ log.Printf("[DEBUG] resource failed ownership, dropping object: %s", o.Name)
+ b.Drop()
+ return diag.FromErr(err)
+ }
+ }
+
+ // Handle comments
+ if v, ok := d.GetOk("comment"); ok {
+ comment := materialize.NewCommentBuilder(metaDb, o)
+ if err := comment.Object(v.(string)); err != nil {
+ log.Printf("[DEBUG] resource failed comment, dropping object: %s", o.Name)
+ b.Drop()
+ return diag.FromErr(err)
+ }
+ }
+
+ i, err := materialize.SourceTableMySQLId(metaDb, o)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ d.SetId(utils.TransformIdWithRegion(string(region), i))
+
+ return sourceTableMySQLRead(ctx, d, meta)
+}
+
+func sourceTableMySQLUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ tableName := d.Get("name").(string)
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ metaDb, _, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: tableName, SchemaName: schemaName, DatabaseName: databaseName}
+
+ if d.HasChange("name") {
+ oldName, newName := d.GetChange("name")
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: oldName.(string), SchemaName: schemaName, DatabaseName: databaseName}
+ b := materialize.NewSourceTableBuilder(metaDb, o)
+ if err := b.Rename(newName.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if d.HasChange("ownership_role") {
+ _, newRole := d.GetChange("ownership_role")
+ b := materialize.NewOwnershipBuilder(metaDb, o)
+
+ if err := b.Alter(newRole.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if d.HasChange("comment") {
+ _, newComment := d.GetChange("comment")
+ b := materialize.NewCommentBuilder(metaDb, o)
+
+ if err := b.Object(newComment.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ return sourceTableMySQLRead(ctx, d, meta)
+}
+
+func sourceTableMySQLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+ i := d.Id()
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ t, err := materialize.ScanSourceTableMySQL(metaDb, utils.ExtractId(i))
+ if err == sql.ErrNoRows {
+ d.SetId("")
+ return nil
+ } else if err != nil {
+ return diag.FromErr(err)
+ }
+
+ d.SetId(utils.TransformIdWithRegion(string(region), i))
+
+ if err := d.Set("name", t.TableName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("schema_name", t.SchemaName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("database_name", t.DatabaseName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ source := []interface{}{
+ map[string]interface{}{
+ "name": t.SourceName.String,
+ "schema_name": t.SourceSchemaName.String,
+ "database_name": t.SourceDatabaseName.String,
+ },
+ }
+ if err := d.Set("source", source); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("upstream_name", t.UpstreamName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("upstream_schema_name", t.UpstreamSchemaName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("ownership_role", t.OwnerName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("comment", t.Comment.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ return nil
+}
diff --git a/pkg/resources/resource_source_table_mysql_test.go b/pkg/resources/resource_source_table_mysql_test.go
new file mode 100644
index 00000000..57f2bb1d
--- /dev/null
+++ b/pkg/resources/resource_source_table_mysql_test.go
@@ -0,0 +1,113 @@
+package resources
+
+import (
+ "context"
+ "testing"
+
+ sqlmock "github.com/DATA-DOG/go-sqlmock"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/stretchr/testify/require"
+)
+
+var inSourceTableMySQL = map[string]interface{}{
+ "name": "table",
+ "schema_name": "schema",
+ "database_name": "database",
+ "source": []interface{}{
+ map[string]interface{}{
+ "name": "source",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ "upstream_name": "upstream_table",
+ "upstream_schema_name": "upstream_schema",
+ "text_columns": []interface{}{"column1", "column2"},
+ "exclude_columns": []interface{}{"column3", "column4"},
+}
+
+func TestResourceSourceTableMySQLCreate(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableMySQL().Schema, inSourceTableMySQL)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."table"
+ FROM SOURCE "materialize"."public"."source"
+ \(REFERENCE "upstream_schema"."upstream_table"\)
+ WITH \(TEXT COLUMNS \(column1, column2\), EXCLUDE COLUMNS \(column3, column4\)\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'table'`
+ testhelpers.MockSourceTableMySQLScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableMySQLScan(mock, pp)
+
+ if err := sourceTableMySQLCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableMySQLRead(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableMySQL().Schema, inSourceTableMySQL)
+ d.SetId("u1")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableMySQLScan(mock, pp)
+
+ if err := sourceTableMySQLRead(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+
+ r.Equal("table", d.Get("name").(string))
+ r.Equal("schema", d.Get("schema_name").(string))
+ r.Equal("database", d.Get("database_name").(string))
+ })
+}
+
+func TestResourceSourceTableMySQLUpdate(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableMySQL().Schema, inSourceTableMySQL)
+ d.SetId("u1")
+ d.Set("name", "old_table")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(`ALTER TABLE "database"."schema"."" RENAME TO "database"."schema"."table"`).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableMySQLScan(mock, pp)
+
+ if err := sourceTableMySQLUpdate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableMySQLDelete(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableMySQL().Schema, inSourceTableMySQL)
+ d.SetId("u1")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(`DROP TABLE "database"."schema"."table"`).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ if err := sourceTableDelete(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
diff --git a/pkg/resources/resource_source_table_postgres.go b/pkg/resources/resource_source_table_postgres.go
new file mode 100644
index 00000000..47079777
--- /dev/null
+++ b/pkg/resources/resource_source_table_postgres.go
@@ -0,0 +1,228 @@
+package resources
+
+import (
+ "context"
+ "database/sql"
+ "log"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+)
+
+var sourceTablePostgresSchema = map[string]*schema.Schema{
+ "name": ObjectNameSchema("table", true, false),
+ "schema_name": SchemaNameSchema("table", false),
+ "database_name": DatabaseNameSchema("table", false),
+ "qualified_sql_name": QualifiedNameSchema("table"),
+ "source": IdentifierSchema(IdentifierSchemaParams{
+ Elem: "source",
+ Description: "The source this table is created from.",
+ Required: true,
+ ForceNew: true,
+ }),
+ "upstream_name": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "The name of the table in the upstream database.",
+ },
+ "upstream_schema_name": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Description: "The schema of the table in the upstream database.",
+ },
+ "text_columns": {
+ Description: "Columns to be decoded as text.",
+ Type: schema.TypeList,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ Optional: true,
+ ForceNew: true,
+ },
+ "comment": CommentSchema(false),
+ "ownership_role": OwnershipRoleSchema(),
+ "region": RegionSchema(),
+}
+
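+// SourceTablePostgres returns the Terraform resource for tables created from
+// Postgres sources. It mirrors the MySQL variant above, except that only
+// text_columns is exposed; there is no exclude_columns option for Postgres here.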
+func SourceTablePostgres() *schema.Resource {
+ return &schema.Resource{
+ CreateContext: sourceTablePostgresCreate,
+ ReadContext: sourceTablePostgresRead,
+ UpdateContext: sourceTablePostgresUpdate,
+ DeleteContext: sourceTableDelete,
+
+ Importer: &schema.ResourceImporter{
+ StateContext: schema.ImportStatePassthroughContext,
+ },
+
+ Schema: sourceTablePostgresSchema,
+ }
+}
+
+func sourceTablePostgresCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ tableName := d.Get("name").(string)
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: tableName, SchemaName: schemaName, DatabaseName: databaseName}
+ b := materialize.NewSourceTablePostgresBuilder(metaDb, o)
+
+ source := materialize.GetIdentifierSchemaStruct(d.Get("source"))
+ b.Source(source)
+
+ b.UpstreamName(d.Get("upstream_name").(string))
+
+ if v, ok := d.GetOk("upstream_schema_name"); ok {
+ b.UpstreamSchemaName(v.(string))
+ }
+
+ if v, ok := d.GetOk("text_columns"); ok {
+ textColumns, err := materialize.GetSliceValueString("text_columns", v.([]interface{}))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ b.TextColumns(textColumns)
+ }
+
+ if err := b.Create(); err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Handle ownership
+ if v, ok := d.GetOk("ownership_role"); ok {
+ ownership := materialize.NewOwnershipBuilder(metaDb, o)
+ if err := ownership.Alter(v.(string)); err != nil {
+ log.Printf("[DEBUG] resource failed ownership, dropping object: %s", o.Name)
+ b.Drop()
+ return diag.FromErr(err)
+ }
+ }
+
+ // Handle comments
+ if v, ok := d.GetOk("comment"); ok {
+ comment := materialize.NewCommentBuilder(metaDb, o)
+ if err := comment.Object(v.(string)); err != nil {
+ log.Printf("[DEBUG] resource failed comment, dropping object: %s", o.Name)
+ b.Drop()
+ return diag.FromErr(err)
+ }
+ }
+
+ i, err := materialize.SourceTablePostgresId(metaDb, o)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ d.SetId(utils.TransformIdWithRegion(string(region), i))
+
+ return sourceTablePostgresRead(ctx, d, meta)
+}
+
+func sourceTablePostgresUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ tableName := d.Get("name").(string)
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ metaDb, _, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: tableName, SchemaName: schemaName, DatabaseName: databaseName}
+
+ if d.HasChange("name") {
+ oldName, newName := d.GetChange("name")
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: oldName.(string), SchemaName: schemaName, DatabaseName: databaseName}
+ b := materialize.NewSourceTableBuilder(metaDb, o)
+ if err := b.Rename(newName.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if d.HasChange("ownership_role") {
+ _, newRole := d.GetChange("ownership_role")
+ b := materialize.NewOwnershipBuilder(metaDb, o)
+
+ if err := b.Alter(newRole.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if d.HasChange("comment") {
+ _, newComment := d.GetChange("comment")
+ b := materialize.NewCommentBuilder(metaDb, o)
+
+ if err := b.Object(newComment.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ return sourceTablePostgresRead(ctx, d, meta)
+}
+
+func sourceTablePostgresRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+ i := d.Id()
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ t, err := materialize.ScanSourceTablePostgres(metaDb, utils.ExtractId(i))
+ if err == sql.ErrNoRows {
+ d.SetId("")
+ return nil
+ } else if err != nil {
+ return diag.FromErr(err)
+ }
+
+ d.SetId(utils.TransformIdWithRegion(string(region), i))
+
+ if err := d.Set("name", t.TableName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("schema_name", t.SchemaName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("database_name", t.DatabaseName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ source := []interface{}{
+ map[string]interface{}{
+ "name": t.SourceName.String,
+ "schema_name": t.SourceSchemaName.String,
+ "database_name": t.SourceDatabaseName.String,
+ },
+ }
+ if err := d.Set("source", source); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("upstream_name", t.UpstreamName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("upstream_schema_name", t.UpstreamSchemaName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("ownership_role", t.OwnerName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("comment", t.Comment.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ return nil
+}
diff --git a/pkg/resources/resource_source_table_postgres_test.go b/pkg/resources/resource_source_table_postgres_test.go
new file mode 100644
index 00000000..44747742
--- /dev/null
+++ b/pkg/resources/resource_source_table_postgres_test.go
@@ -0,0 +1,112 @@
+package resources
+
+import (
+ "context"
+ "testing"
+
+ sqlmock "github.com/DATA-DOG/go-sqlmock"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/stretchr/testify/require"
+)
+
+var inSourceTablePostgres = map[string]interface{}{
+ "name": "table",
+ "schema_name": "schema",
+ "database_name": "database",
+ "source": []interface{}{
+ map[string]interface{}{
+ "name": "source",
+ "schema_name": "public",
+ "database_name": "materialize",
+ },
+ },
+ "upstream_name": "upstream_table",
+ "upstream_schema_name": "upstream_schema",
+ "text_columns": []interface{}{"column1", "column2"},
+ "ignore_columns": []interface{}{"column3", "column4"},
+}
+
+func TestResourceSourceTablePostgresCreate(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTablePostgres().Schema, inSourceTablePostgres)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(`CREATE TABLE "database"."schema"."table"
+ FROM SOURCE "materialize"."public"."source"
+ \(REFERENCE "upstream_schema"."upstream_table"\)
+ WITH \(TEXT COLUMNS \(column1, column2\)\);`).
+ WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'table'`
+ testhelpers.MockSourceTablePostgresScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTablePostgresScan(mock, pp)
+
+ if err := sourceTablePostgresCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTablePostgresRead(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTablePostgres().Schema, inSourceTablePostgres)
+ d.SetId("u1")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTablePostgresScan(mock, pp)
+
+ if err := sourceTablePostgresRead(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+
+ r.Equal("table", d.Get("name").(string))
+ r.Equal("schema", d.Get("schema_name").(string))
+ r.Equal("database", d.Get("database_name").(string))
+ })
+}
+
+func TestResourceSourceTablePostgresUpdate(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTablePostgres().Schema, inSourceTablePostgres)
+ d.SetId("u1")
+ d.Set("name", "old_table")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(`ALTER TABLE "database"."schema"."" RENAME TO "database"."schema"."table"`).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTablePostgresScan(mock, pp)
+
+ if err := sourceTablePostgresUpdate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTablePostgresDelete(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTablePostgres().Schema, inSourceTablePostgres)
+ d.SetId("u1")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(`DROP TABLE "database"."schema"."table"`).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ if err := sourceTableDelete(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
diff --git a/pkg/resources/resource_source_table_webhook.go b/pkg/resources/resource_source_table_webhook.go
new file mode 100644
index 00000000..21e04a6d
--- /dev/null
+++ b/pkg/resources/resource_source_table_webhook.go
@@ -0,0 +1,367 @@
+package resources
+
+import (
+ "context"
+ "database/sql"
+ "log"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/materialize"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+)
+
+var sourceTableWebhookSchema = map[string]*schema.Schema{
+ "name": ObjectNameSchema("table", true, false),
+ "schema_name": SchemaNameSchema("table", false),
+ "database_name": DatabaseNameSchema("table", false),
+ "qualified_sql_name": QualifiedNameSchema("table"),
+ "comment": CommentSchema(false),
+ "body_format": {
+ Description: "The body format of the webhook.",
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ ValidateFunc: validation.StringInSlice([]string{
+ "TEXT",
+ "JSON",
+ "BYTES",
+ }, true),
+ },
+ "include_header": {
+ Description: "Map a header value from a request into a column.",
+ Type: schema.TypeList,
+ Optional: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "header": {
+ Description: "The name for the header.",
+ Type: schema.TypeString,
+ Required: true,
+ },
+ "alias": {
+ Description: "The alias for the header.",
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "bytes": {
+ Description: "Change type to `bytea`.",
+ Type: schema.TypeBool,
+ Optional: true,
+ },
+ },
+ },
+ ForceNew: true,
+ },
+ "include_headers": {
+ Description: "Include headers in the webhook.",
+ Type: schema.TypeList,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "all": {
+ Description: "Include all headers.",
+ Type: schema.TypeBool,
+ Optional: true,
+ ConflictsWith: []string{"include_headers.0.only", "include_headers.0.not"},
+ AtLeastOneOf: []string{"include_headers.0.all", "include_headers.0.only", "include_headers.0.not"},
+ },
+ "only": {
+ Description: "Headers that should be included.",
+ Type: schema.TypeList,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ Optional: true,
+ ConflictsWith: []string{"include_headers.0.all"},
+ AtLeastOneOf: []string{"include_headers.0.all", "include_headers.0.only", "include_headers.0.not"},
+ },
+ "not": {
+ Description: "Headers that should be excluded.",
+ Type: schema.TypeList,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ Optional: true,
+ ConflictsWith: []string{"include_headers.0.all"},
+ AtLeastOneOf: []string{"include_headers.0.all", "include_headers.0.only", "include_headers.0.not"},
+ },
+ },
+ },
+ Optional: true,
+ MinItems: 1,
+ MaxItems: 1,
+ ForceNew: true,
+ },
+ "check_options": {
+ Description: "The check options for the webhook.",
+ Type: schema.TypeList,
+ Optional: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "field": {
+ Description: "The field for the check options.",
+ Type: schema.TypeList,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "body": {
+ Description: "The body for the check options.",
+ Type: schema.TypeBool,
+ Optional: true,
+ },
+ "headers": {
+ Description: "The headers for the check options.",
+ Type: schema.TypeBool,
+ Optional: true,
+ },
+ "secret": IdentifierSchema(IdentifierSchemaParams{
+ Elem: "secret",
+ Description: "The secret for the check options.",
+ Required: false,
+ ForceNew: true,
+ }),
+ },
+ },
+ MinItems: 1,
+ MaxItems: 1,
+ Required: true,
+ },
+ "alias": {
+ Description: "The alias for the check options.",
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "bytes": {
+ Description: "Change type to `bytea`.",
+ Type: schema.TypeBool,
+ Optional: true,
+ },
+ },
+ },
+ ForceNew: true,
+ },
+ "check_expression": {
+ Description: "The check expression for the webhook.",
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+ "ownership_role": OwnershipRoleSchema(),
+ "region": RegionSchema(),
+}
+
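+// SourceTableWebhook returns the Terraform resource for webhook source tables,
+// the replacement pointed to by the materialize_source_webhook deprecation
+// message below. A minimal configuration sketch for the
+// materialize_source_table_webhook resource:
+//
+//   resource "materialize_source_table_webhook" "example" {
+//     name          = "webhook_table"
+//     schema_name   = "public"
+//     database_name = "materialize"
+//     body_format   = "json"
+//
+//     include_headers {
+//       all = true
+//     }
+//   }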
+func SourceTableWebhook() *schema.Resource {
+ return &schema.Resource{
+ Description: "A webhook source table allows reading data directly from webhooks.",
+
+ CreateContext: sourceTableWebhookCreate,
+ ReadContext: sourceTableWebhookRead,
+ UpdateContext: sourceTableWebhookUpdate,
+ DeleteContext: sourceTableDelete,
+
+ Importer: &schema.ResourceImporter{
+ StateContext: schema.ImportStatePassthroughContext,
+ },
+
+ Schema: sourceTableWebhookSchema,
+ }
+}
+
+func sourceTableWebhookCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ tableName := d.Get("name").(string)
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: tableName, SchemaName: schemaName, DatabaseName: databaseName}
+ b := materialize.NewSourceTableWebhookBuilder(metaDb, o)
+
+ b.BodyFormat(d.Get("body_format").(string))
+
+ if v, ok := d.GetOk("include_header"); ok {
+ var headers []materialize.HeaderStruct
+ for _, header := range v.([]interface{}) {
+ h := header.(map[string]interface{})
+ headers = append(headers, materialize.HeaderStruct{
+ Header: h["header"].(string),
+ Alias: h["alias"].(string),
+ Bytes: h["bytes"].(bool),
+ })
+ }
+ b.IncludeHeader(headers)
+ }
+
+ if v, ok := d.GetOk("include_headers"); ok {
+ var i materialize.IncludeHeadersStruct
+ u := v.([]interface{})[0].(map[string]interface{})
+
+ if v, ok := u["all"]; ok {
+ i.All = v.(bool)
+ }
+
+ if v, ok := u["only"]; ok {
+ o, err := materialize.GetSliceValueString("only", v.([]interface{}))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ i.Only = o
+ }
+
+ if v, ok := u["not"]; ok {
+ n, err := materialize.GetSliceValueString("not", v.([]interface{}))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ i.Not = n
+ }
+ b.IncludeHeaders(i)
+ }
+
+ if v, ok := d.GetOk("check_options"); ok {
+ var options []materialize.CheckOptionsStruct
+ for _, option := range v.([]interface{}) {
+ t := option.(map[string]interface{})
+ fieldMap := t["field"].([]interface{})[0].(map[string]interface{})
+
+ var secret = materialize.IdentifierSchemaStruct{}
+ if secretMap, ok := fieldMap["secret"].([]interface{}); ok && len(secretMap) > 0 && secretMap[0] != nil {
+ secret = materialize.GetIdentifierSchemaStruct(secretMap)
+ }
+
+ field := materialize.FieldStruct{
+ Body: fieldMap["body"].(bool),
+ Headers: fieldMap["headers"].(bool),
+ Secret: secret,
+ }
+
+ options = append(options, materialize.CheckOptionsStruct{
+ Field: field,
+ Alias: t["alias"].(string),
+ Bytes: t["bytes"].(bool),
+ })
+ }
+ b.CheckOptions(options)
+ }
+
+ if v, ok := d.GetOk("check_expression"); ok {
+ b.CheckExpression(v.(string))
+ }
+
+ // Create resource
+ if err := b.Create(); err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Handle ownership
+ if v, ok := d.GetOk("ownership_role"); ok {
+ ownership := materialize.NewOwnershipBuilder(metaDb, o)
+ if err := ownership.Alter(v.(string)); err != nil {
+ log.Printf("[DEBUG] resource failed ownership, dropping object: %s", o.Name)
+ b.Drop()
+ return diag.FromErr(err)
+ }
+ }
+
+ // Handle comments
+ if v, ok := d.GetOk("comment"); ok {
+ comment := materialize.NewCommentBuilder(metaDb, o)
+ if err := comment.Object(v.(string)); err != nil {
+ log.Printf("[DEBUG] resource failed comment, dropping object: %s", o.Name)
+ b.Drop()
+ return diag.FromErr(err)
+ }
+ }
+
+ // Set ID
+ i, err := materialize.SourceTableWebhookId(metaDb, o)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ d.SetId(utils.TransformIdWithRegion(string(region), i))
+
+ return sourceTableWebhookRead(ctx, d, meta)
+}
+
+func sourceTableWebhookRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+ i := d.Id()
+
+ metaDb, region, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ t, err := materialize.ScanSourceTableWebhook(metaDb, utils.ExtractId(i))
+ if err == sql.ErrNoRows {
+ d.SetId("")
+ return nil
+ } else if err != nil {
+ return diag.FromErr(err)
+ }
+
+ d.SetId(utils.TransformIdWithRegion(string(region), i))
+
+ if err := d.Set("name", t.TableName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("schema_name", t.SchemaName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("database_name", t.DatabaseName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("ownership_role", t.OwnerName.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("comment", t.Comment.String); err != nil {
+ return diag.FromErr(err)
+ }
+
+ return nil
+}
+
+func sourceTableWebhookUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ tableName := d.Get("name").(string)
+ schemaName := d.Get("schema_name").(string)
+ databaseName := d.Get("database_name").(string)
+
+ metaDb, _, err := utils.GetDBClientFromMeta(meta, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: tableName, SchemaName: schemaName, DatabaseName: databaseName}
+
+ if d.HasChange("name") {
+ oldName, newName := d.GetChange("name")
+ o := materialize.MaterializeObject{ObjectType: "TABLE", Name: oldName.(string), SchemaName: schemaName, DatabaseName: databaseName}
+ b := materialize.NewSourceTableBuilder(metaDb, o)
+ if err := b.Rename(newName.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if d.HasChange("ownership_role") {
+ _, newRole := d.GetChange("ownership_role")
+ b := materialize.NewOwnershipBuilder(metaDb, o)
+
+ if err := b.Alter(newRole.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if d.HasChange("comment") {
+ _, newComment := d.GetChange("comment")
+ b := materialize.NewCommentBuilder(metaDb, o)
+
+ if err := b.Object(newComment.(string)); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ return sourceTableWebhookRead(ctx, d, meta)
+}
diff --git a/pkg/resources/resource_source_table_webhook_test.go b/pkg/resources/resource_source_table_webhook_test.go
new file mode 100644
index 00000000..9a7f7e8d
--- /dev/null
+++ b/pkg/resources/resource_source_table_webhook_test.go
@@ -0,0 +1,116 @@
+package resources
+
+import (
+ "context"
+ "testing"
+
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/testhelpers"
+ "github.com/MaterializeInc/terraform-provider-materialize/pkg/utils"
+
+ sqlmock "github.com/DATA-DOG/go-sqlmock"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/stretchr/testify/require"
+)
+
+var inSourceTableWebhook = map[string]interface{}{
+ "name": "webhook_table",
+ "schema_name": "schema",
+ "database_name": "database",
+ "body_format": "JSON",
+ "include_headers": []interface{}{
+ map[string]interface{}{
+ "all": true,
+ },
+ },
+ "check_options": []interface{}{
+ map[string]interface{}{
+ "field": []interface{}{map[string]interface{}{
+ "body": true,
+ }},
+ "alias": "bytes",
+ },
+ map[string]interface{}{
+ "field": []interface{}{map[string]interface{}{
+ "headers": true,
+ }},
+ "alias": "headers",
+ },
+ },
+ "check_expression": "check_expression",
+}
+
+func TestResourceSourceTableWebhookCreate(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableWebhook().Schema, inSourceTableWebhook)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Create
+ mock.ExpectExec(
+ `CREATE TABLE "database"."schema"."webhook_table" FROM WEBHOOK BODY FORMAT JSON INCLUDE HEADERS CHECK \( WITH \(BODY AS bytes\, HEADERS AS headers\) check_expression\);`,
+ ).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Id
+ ip := `WHERE mz_databases.name = 'database' AND mz_schemas.name = 'schema' AND mz_tables.name = 'webhook_table'`
+ testhelpers.MockSourceTableWebhookScan(mock, ip)
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableWebhookScan(mock, pp)
+
+ if err := sourceTableWebhookCreate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableWebhookDelete(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableWebhook().Schema, inSourceTableWebhook)
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(`DROP TABLE "database"."schema"."webhook_table";`).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ if err := sourceTableDelete(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableWebhookUpdate(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableWebhook().Schema, inSourceTableWebhook)
+ d.SetId("u1")
+ d.Set("name", "webhook_table")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ mock.ExpectExec(`ALTER TABLE "database"."schema"."" RENAME TO "database"."schema"."webhook_table"`).WillReturnResult(sqlmock.NewResult(1, 1))
+
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableWebhookScan(mock, pp)
+
+ if err := sourceTableWebhookUpdate(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
+
+func TestResourceSourceTableWebhookRead(t *testing.T) {
+ r := require.New(t)
+ d := schema.TestResourceDataRaw(t, SourceTableWebhook().Schema, inSourceTableWebhook)
+ d.SetId("u1")
+ r.NotNil(d)
+
+ testhelpers.WithMockProviderMeta(t, func(db *utils.ProviderMeta, mock sqlmock.Sqlmock) {
+ // Query Params
+ pp := `WHERE mz_tables.id = 'u1'`
+ testhelpers.MockSourceTableWebhookScan(mock, pp)
+
+ if err := sourceTableWebhookRead(context.TODO(), d, db); err != nil {
+ t.Fatal(err)
+ }
+ })
+}
diff --git a/pkg/resources/resource_source_webhook.go b/pkg/resources/resource_source_webhook.go
index 617753e2..ae803727 100644
--- a/pkg/resources/resource_source_webhook.go
+++ b/pkg/resources/resource_source_webhook.go
@@ -158,7 +158,12 @@ var sourceWebhookSchema = map[string]*schema.Schema{
func SourceWebhook() *schema.Resource {
return &schema.Resource{
- Description: "A webhook source describes a webhook you want Materialize to read data from.",
+ Description: "A webhook source describes a webhook you want Materialize to read data from. " +
+ "This resource is deprecated and will be removed in a future release. " +
+ "Please use materialize_source_table_webhook instead.",
+
+ DeprecationMessage: "This resource is deprecated and will be removed in a future release. " +
+ "Please use materialize_source_table_webhook instead.",
CreateContext: sourceWebhookCreate,
ReadContext: sourceRead,
diff --git a/pkg/testhelpers/mock_scans.go b/pkg/testhelpers/mock_scans.go
index 9107bd91..87e7c43d 100644
--- a/pkg/testhelpers/mock_scans.go
+++ b/pkg/testhelpers/mock_scans.go
@@ -569,6 +569,45 @@ func MockSourceScan(mock sqlmock.Sqlmock, predicate string) {
mock.ExpectQuery(q).WillReturnRows(ir)
}
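+
+// MockSourceScanWithType behaves like MockSourceScan but returns the caller-supplied
+// value in the source_type column, so tests can exercise type-specific code paths.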
+func MockSourceScanWithType(mock sqlmock.Sqlmock, predicate string, sourceType string) {
+ b := `
+ SELECT
+ mz_sources.id,
+ mz_sources.name,
+ mz_schemas.name AS schema_name,
+ mz_databases.name AS database_name,
+ mz_sources.type AS source_type,
+ COALESCE\(mz_sources.size, mz_clusters.size\) AS size,
+ mz_sources.envelope_type,
+ mz_connections.name as connection_name,
+ mz_clusters.name as cluster_name,
+ comments.comment AS comment,
+ mz_roles.name AS owner_name,
+ mz_sources.privileges
+ FROM mz_sources
+ JOIN mz_schemas
+ ON mz_sources.schema_id = mz_schemas.id
+ JOIN mz_databases
+ ON mz_schemas.database_id = mz_databases.id
+ LEFT JOIN mz_connections
+ ON mz_sources.connection_id = mz_connections.id
+ LEFT JOIN mz_clusters
+ ON mz_sources.cluster_id = mz_clusters.id
+ JOIN mz_roles
+ ON mz_sources.owner_id = mz_roles.id
+ LEFT JOIN \(
+ SELECT id, comment
+ FROM mz_internal.mz_comments
+ WHERE object_type = 'source'
+ \) comments
+ ON mz_sources.id = comments.id`
+
+ q := mockQueryBuilder(b, predicate, "")
+ ir := mock.NewRows([]string{"id", "name", "schema_name", "database_name", "source_type", "size", "envelope_type", "connection_name", "cluster_name", "owner_name", "privileges"}).
+ AddRow("u1", "source", "schema", "database", sourceType, "small", "BYTES", "conn", "cluster", "joe", defaultPrivilege)
+ mock.ExpectQuery(q).WillReturnRows(ir)
+}
+
func MockSubsourceScan(mock sqlmock.Sqlmock, predicate string) {
b := `
WITH dependencies AS \(
@@ -736,6 +775,254 @@ func MockTableScan(mock sqlmock.Sqlmock, predicate string) {
mock.ExpectQuery(q).WillReturnRows(ir)
}
+func MockSourceTableMySQLScan(mock sqlmock.Sqlmock, predicate string) {
+ b := `
+ SELECT
+ mz_tables.id,
+ mz_tables.name,
+ mz_schemas.name AS schema_name,
+ mz_databases.name AS database_name,
+ mz_sources.name AS source_name,
+ source_schemas.name AS source_schema_name,
+ source_databases.name AS source_database_name,
+ mz_mysql_source_tables.table_name AS upstream_table_name,
+ mz_mysql_source_tables.schema_name AS upstream_schema_name,
+ mz_sources.type AS source_type,
+ comments.comment AS comment,
+ mz_roles.name AS owner_name,
+ mz_tables.privileges
+ FROM mz_tables
+ JOIN mz_schemas
+ ON mz_tables.schema_id = mz_schemas.id
+ JOIN mz_databases
+ ON mz_schemas.database_id = mz_databases.id
+ JOIN mz_sources
+ ON mz_tables.source_id = mz_sources.id
+ JOIN mz_schemas AS source_schemas
+ ON mz_sources.schema_id = source_schemas.id
+ JOIN mz_databases AS source_databases
+ ON source_schemas.database_id = source_databases.id
+ LEFT JOIN mz_internal.mz_mysql_source_tables
+ ON mz_tables.id = mz_mysql_source_tables.id
+ JOIN mz_roles
+ ON mz_tables.owner_id = mz_roles.id
+ LEFT JOIN \(
+ SELECT id, comment
+ FROM mz_internal.mz_comments
+ WHERE object_type = 'table'
+ AND object_sub_id IS NULL
+ \) comments
+ ON mz_tables.id = comments.id`
+
+ q := mockQueryBuilder(b, predicate, "")
+ ir := mock.NewRows([]string{"id", "name", "schema_name", "database_name", "source_name", "source_schema_name", "source_database_name", "upstream_table_name", "upstream_schema_name", "source_type", "comment", "owner_name", "privileges"}).
+ AddRow("u1", "table", "schema", "database", "source", "public", "materialize", "upstream_table", "upstream_schema", "mysql", "comment", "materialize", defaultPrivilege)
+ mock.ExpectQuery(q).WillReturnRows(ir)
+}
+
+func MockSourceTablePostgresScan(mock sqlmock.Sqlmock, predicate string) {
+ b := `
+ SELECT
+ mz_tables.id,
+ mz_tables.name,
+ mz_schemas.name AS schema_name,
+ mz_databases.name AS database_name,
+ mz_sources.name AS source_name,
+ source_schemas.name AS source_schema_name,
+ source_databases.name AS source_database_name,
+ mz_postgres_source_tables.table_name AS upstream_table_name,
+ mz_postgres_source_tables.schema_name AS upstream_schema_name,
+ mz_sources.type AS source_type,
+ comments.comment AS comment,
+ mz_roles.name AS owner_name,
+ mz_tables.privileges
+ FROM mz_tables
+ JOIN mz_schemas
+ ON mz_tables.schema_id = mz_schemas.id
+ JOIN mz_databases
+ ON mz_schemas.database_id = mz_databases.id
+ JOIN mz_sources
+ ON mz_tables.source_id = mz_sources.id
+ JOIN mz_schemas AS source_schemas
+ ON mz_sources.schema_id = source_schemas.id
+ JOIN mz_databases AS source_databases
+ ON source_schemas.database_id = source_databases.id
+ LEFT JOIN mz_internal.mz_postgres_source_tables
+ ON mz_tables.id = mz_postgres_source_tables.id
+ JOIN mz_roles
+ ON mz_tables.owner_id = mz_roles.id
+ LEFT JOIN \(
+ SELECT id, comment
+ FROM mz_internal.mz_comments
+ WHERE object_type = 'table'
+ AND object_sub_id IS NULL
+ \) comments
+ ON mz_tables.id = comments.id`
+
+ q := mockQueryBuilder(b, predicate, "")
+ ir := mock.NewRows([]string{"id", "name", "schema_name", "database_name", "source_name", "source_schema_name", "source_database_name", "upstream_table_name", "upstream_schema_name", "source_type", "comment", "owner_name", "privileges"}).
+ AddRow("u1", "table", "schema", "database", "source", "public", "materialize", "upstream_table", "upstream_schema", "postgres", "comment", "materialize", defaultPrivilege)
+ mock.ExpectQuery(q).WillReturnRows(ir)
+}
+
+func MockSourceTableKafkaScan(mock sqlmock.Sqlmock, predicate string) {
+ b := `
+ SELECT
+ mz_tables.id,
+ mz_tables.name,
+ mz_schemas.name AS schema_name,
+ mz_databases.name AS database_name,
+ mz_sources.name AS source_name,
+ source_schemas.name AS source_schema_name,
+ source_databases.name AS source_database_name,
+ mz_kafka_source_tables.topic AS upstream_table_name,
+ mz_sources.type AS source_type,
+ comments.comment AS comment,
+ mz_roles.name AS owner_name,
+ mz_tables.privileges
+ FROM mz_tables
+ JOIN mz_schemas
+ ON mz_tables.schema_id = mz_schemas.id
+ JOIN mz_databases
+ ON mz_schemas.database_id = mz_databases.id
+ JOIN mz_sources
+ ON mz_tables.source_id = mz_sources.id
+ JOIN mz_schemas AS source_schemas
+ ON mz_sources.schema_id = source_schemas.id
+ JOIN mz_databases AS source_databases
+ ON source_schemas.database_id = source_databases.id
+ LEFT JOIN mz_internal.mz_kafka_source_tables
+ ON mz_tables.id = mz_kafka_source_tables.id
+ JOIN mz_roles
+ ON mz_tables.owner_id = mz_roles.id
+ LEFT JOIN \(
+ SELECT id, comment
+ FROM mz_internal.mz_comments
+ WHERE object_type = 'table'
+ AND object_sub_id IS NULL
+ \) comments
+ ON mz_tables.id = comments.id`
+
+ q := mockQueryBuilder(b, predicate, "")
+ ir := mock.NewRows([]string{"id", "name", "schema_name", "database_name", "source_name", "source_schema_name", "source_database_name", "upstream_table_name", "source_type", "comment", "owner_name", "privileges"}).
+ AddRow("u1", "table", "schema", "database", "source", "public", "materialize", "topic", "kafka", "comment", "materialize", defaultPrivilege)
+ mock.ExpectQuery(q).WillReturnRows(ir)
+}
+
+func MockSourceTableScan(mock sqlmock.Sqlmock, predicate string) {
+ b := `
+ SELECT
+ mz_tables.id,
+ mz_tables.name,
+ mz_schemas.name AS schema_name,
+ mz_databases.name AS database_name,
+ mz_sources.name AS source_name,
+ source_schemas.name AS source_schema_name,
+ source_databases.name AS source_database_name,
+ mz_sources.type AS source_type,
+ COALESCE\(mz_kafka_source_tables.topic,
+ mz_mysql_source_tables.table_name,
+ mz_postgres_source_tables.table_name\) AS upstream_table_name,
+ COALESCE\(mz_mysql_source_tables.schema_name,
+ mz_postgres_source_tables.schema_name\) AS upstream_schema_name,
+ comments.comment AS comment,
+ mz_roles.name AS owner_name,
+ mz_tables.privileges
+ FROM mz_tables
+ JOIN mz_schemas
+ ON mz_tables.schema_id = mz_schemas.id
+ JOIN mz_databases
+ ON mz_schemas.database_id = mz_databases.id
+ JOIN mz_sources
+ ON mz_tables.source_id = mz_sources.id
+ JOIN mz_schemas AS source_schemas
+ ON mz_sources.schema_id = source_schemas.id
+ JOIN mz_databases AS source_databases
+ ON source_schemas.database_id = source_databases.id
+ LEFT JOIN mz_internal.mz_kafka_source_tables
+ ON mz_tables.id = mz_kafka_source_tables.id
+ LEFT JOIN mz_internal.mz_mysql_source_tables
+ ON mz_tables.id = mz_mysql_source_tables.id
+ LEFT JOIN mz_internal.mz_postgres_source_tables
+ ON mz_tables.id = mz_postgres_source_tables.id
+ JOIN mz_roles
+ ON mz_tables.owner_id = mz_roles.id
+ LEFT JOIN \(
+ SELECT id, comment
+ FROM mz_internal.mz_comments
+ WHERE object_type = 'table'
+ AND object_sub_id IS NULL
+ \) comments
+ ON mz_tables.id = comments.id`
+
+ q := mockQueryBuilder(b, predicate, "")
+ ir := mock.NewRows([]string{"id", "name", "schema_name", "database_name", "source_name", "source_schema_name", "source_database_name", "upstream_table_name", "upstream_schema_name", "source_type", "comment", "owner_name", "privileges"}).
+ AddRow("u1", "table", "schema", "database", "source", "public", "materialize", "table", "schema", "KAFKA", "comment", "materialize", defaultPrivilege)
+ mock.ExpectQuery(q).WillReturnRows(ir)
+}
+
+func MockSourceTableWebhookScan(mock sqlmock.Sqlmock, predicate string) {
+ b := `
+ SELECT
+ mz_tables.id,
+ mz_tables.name,
+ mz_schemas.name AS schema_name,
+ mz_databases.name AS database_name,
+ mz_sources.type AS source_type,
+ comments.comment AS comment,
+ mz_roles.name AS owner_name,
+ mz_tables.privileges
+ FROM mz_tables
+ JOIN mz_schemas
+ ON mz_tables.schema_id = mz_schemas.id
+ JOIN mz_databases
+ ON mz_schemas.database_id = mz_databases.id
+ JOIN mz_roles
+ ON mz_tables.owner_id = mz_roles.id
+ LEFT JOIN \(
+ SELECT id, comment
+ FROM mz_internal.mz_comments
+ WHERE object_type = 'table'
+ AND object_sub_id IS NULL
+ \) comments
+ ON mz_tables.id = comments.id`
+
+ q := mockQueryBuilder(b, predicate, "")
+ ir := mock.NewRows([]string{"id", "name", "schema_name", "database_name", "source_type", "comment", "owner_name", "privileges"}).
+ AddRow("u1", "table", "schema", "database", "webhook", "comment", "materialize", defaultPrivilege)
+ mock.ExpectQuery(q).WillReturnRows(ir)
+}
+
+func MockSourceReferenceScan(mock sqlmock.Sqlmock, predicate string) {
+ b := `
+ SELECT
+ sr.source_id,
+ sr.namespace,
+ sr.name,
+ sr.updated_at,
+ sr.columns,
+ s.name AS source_name,
+ ss.name AS source_schema_name,
+ sd.name AS source_database_name,
+ s.type AS source_type
+ FROM mz_internal.mz_source_references sr
+ JOIN mz_sources s ON sr.source_id = s.id
+ JOIN mz_schemas ss ON s.schema_id = ss.id
+ JOIN mz_databases sd ON ss.database_id = sd.id`
+
+ q := mockQueryBuilder(b, predicate, "")
+ ir := mock.NewRows([]string{
+ "source_id", "namespace", "name", "updated_at", "columns",
+ "source_name", "source_schema_name", "source_database_name", "source_type",
+ }).AddRow(
+ "source-id", "namespace", "reference_name", "2023-10-01T12:34:56Z",
+ pq.StringArray{"column1", "column2"},
+ "source_name", "source_schema_name", "source_database_name", "source_type",
+ )
+
+ mock.ExpectQuery(q).WillReturnRows(ir)
+}
+
func MockTypeScan(mock sqlmock.Sqlmock, predicate string) {
b := `
SELECT
diff --git a/templates/guides/materialize_source_table.md.tmpl b/templates/guides/materialize_source_table.md.tmpl
new file mode 100644
index 00000000..a41d6bea
--- /dev/null
+++ b/templates/guides/materialize_source_table.md.tmpl
@@ -0,0 +1,243 @@
+---
+{{ printf "# generated by https://github.com/hashicorp/terraform-plugin-docs" }}
+{{ printf "# template file: templates/guides/materialize_source_table.md.tmpl" }}
+page_title: "Source Table Migration Guide"
+subcategory: ""
+description: |-
+ Guide for migrating to the new materialize_source_table_{source_type} resources.
+---
+
+# Source versioning: migrating to `materialize_source_table_{source_type}` Resource
+
+In previous versions of the Materialize Terraform provider, source tables were defined within the source resource itself and were considered subsources of the source rather than separate entities.
+
+This guide will walk you through the process of migrating your existing source table definitions to the new `materialize_source_table_{source_type}` resource.
+
+For each MySQL and Postgres source, you will need to create a new `materialize_source_table_{source_type}` resource for each table that was previously defined within the source resource. This ensures that the tables are preserved during the migration process. For Kafka sources, you will need to create a `materialize_source_table_kafka` table with the same name as the Kafka source to contain the data for the Kafka topic.
+
+## Old Approach
+
+Previously, source tables were defined directly within the source resource:
+
+### Example: MySQL Source
+
+```hcl
+resource "materialize_source_mysql" "mysql_source" {
+ name = "mysql_source"
+ cluster_name = "cluster_name"
+
+ mysql_connection {
+ name = materialize_connection_mysql.mysql_connection.name
+ }
+
+ table {
+ upstream_name = "mysql_table1"
+ upstream_schema_name = "shop"
+ name = "mysql_table1_local"
+ }
+}
+```
+
+### Example: Kafka Source
+
+```hcl
+resource "materialize_source_kafka" "example_source_kafka_format_text" {
+ name = "source_kafka_text"
+ comment = "source kafka comment"
+ cluster_name = materialize_cluster.cluster_source.name
+ topic = "topic1"
+
+ kafka_connection {
+ name = materialize_connection_kafka.kafka_connection.name
+ schema_name = materialize_connection_kafka.kafka_connection.schema_name
+ database_name = materialize_connection_kafka.kafka_connection.database_name
+ }
+ key_format {
+ text = true
+ }
+ value_format {
+ text = true
+ }
+}
+```
+
+## New Approach
+
+The new approach separates source definitions and table definitions. You will now create the source without specifying the tables, and then define each table using the `materialize_source_table_{source_type}` resource.
+
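+For illustration, here is a minimal sketch (reusing the MySQL names from the examples in this guide) of what the new layout looks like: the source is declared without any `table` blocks, and each upstream table becomes its own resource.
+
+```hcl
+resource "materialize_source_mysql" "mysql_source" {
+  name         = "mysql_source"
+  cluster_name = "cluster_name"
+
+  mysql_connection {
+    name = materialize_connection_mysql.mysql_connection.name
+  }
+
+  # No table blocks here anymore
+}
+
+resource "materialize_source_table_mysql" "mysql_table1" {
+  name          = "mysql_table1_local"
+  schema_name   = "public"
+  database_name = "materialize"
+
+  source {
+    name = materialize_source_mysql.mysql_source.name
+  }
+
+  upstream_name        = "mysql_table1"
+  upstream_schema_name = "shop"
+}
+```
+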
+## Manual Migration Process
+
+This manual migration process requires users to create new source tables using the new `materialize_source_table_{source_type}` resource and then remove the old ones. We'll cover examples for both MySQL and Kafka sources.
+
+### Step 1: Define `materialize_source_table_{source_type}` Resources
+
+Before making any changes to your existing source resources, create new `materialize_source_table_{source_type}` resources for each table that is currently defined within your sources.
+
+#### MySQL Example:
+
+```hcl
+resource "materialize_source_table_mysql" "mysql_table_from_source" {
+ name = "mysql_table1_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source {
+ name = materialize_source_mysql.mysql_source.name
+ // Define the schema and database for the source if needed
+ }
+
+ upstream_name = "mysql_table1"
+ upstream_schema_name = "shop"
+
+ ignore_columns = ["about"]
+}
+```
+
+#### Kafka Example:
+
+```hcl
+resource "materialize_source_table_kafka" "kafka_table_from_source" {
+ name = "kafka_table_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source_name {
+ name = materialize_source_kafka.kafka_source.name
+ }
+
+ key_format {
+ text = true
+ }
+
+ value_format {
+ text = true
+ }
+
+}
+```
+
+### Step 2: Apply the Changes
+
+Run `terraform plan` and `terraform apply` to create the new `materialize_source_table_{source_type}` resources.
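+
+For example (a minimal sketch; the resource address matches the MySQL example above and should be adjusted to your own configuration), you can also limit the first apply to the new table resources:
+
+```bash
+# Review the planned changes
+terraform plan
+
+# Create the new table resources
+terraform apply
+
+# Optionally, target a specific new table resource first
+terraform plan -target=materialize_source_table_mysql.mysql_table_from_source
+terraform apply -target=materialize_source_table_mysql.mysql_table_from_source
+```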
+
+### Step 3: Remove Table Blocks from Source Resources
+
+Once the new `materialize_source_table_{source_type}` resources are successfully created, remove all the deprecated and table-specific attributes from your source resources.
+
+#### MySQL Example:
+
+For MySQL sources, remove the `table` block and any table-specific attributes from the source resource:
+
+```hcl
+resource "materialize_source_mysql" "mysql_source" {
+ name = "mysql_source"
+ cluster_name = "cluster_name"
+
+ mysql_connection {
+ name = materialize_connection_mysql.mysql_connection.name
+ }
+
+ // Remove the table blocks from here
+ - table {
+ - upstream_name = "mysql_table1"
+ - upstream_schema_name = "shop"
+ - name = "mysql_table1_local"
+ -
+ - ignore_columns = ["about"]
+  - }
+ ...
+}
+```
+
+#### Kafka Example:
+
+For Kafka sources, remove the `format`, `include_key`, `include_headers`, and other table-specific attributes from the source resource:
+
+```hcl
+resource "materialize_source_kafka" "kafka_source" {
+ name = "kafka_source"
+ cluster_name = "cluster_name"
+
+ kafka_connection {
+ name = materialize_connection_kafka.kafka_connection.name
+ }
+
+ topic = "example_topic"
+
+ lifecycle {
+ ignore_changes = [
+ include_key,
+ include_headers,
+ format,
+ ...
+ ]
+ }
+ // Remove the format, include_key, include_headers, and other table-specific attributes
+}
+```
+
+In the `lifecycle` block, add the `ignore_changes` meta-argument to prevent Terraform from trying to update these attributes on subsequent applies. These attributes are no longer defined in the source resource itself but in the new `materialize_source_table_{source_type}` resources, so without `ignore_changes` Terraform would attempt to update them based on incomplete information from the state.
+
+### Step 4: Update Terraform State
+
+After removing the `table` blocks and the table/topic specific attributes from your source resources, run `terraform plan` and `terraform apply` again to update the Terraform state and apply the changes.
+
+### Step 5: Verify the Migration
+
+After applying the changes, verify that your tables are still correctly set up in Materialize by checking the table definitions using Materialize's SQL commands.
+
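+For example, assuming the new tables were created in `materialize.public` as in the examples above, you can list them with:
+
+```sql
+SHOW TABLES FROM materialize.public;
+```
+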
+For a more detailed view of a specific table, you can use the `SHOW CREATE TABLE` command:
+
+```sql
+SHOW CREATE TABLE materialize.public.mysql_table1_from_source;
+```
+
+## Importing Existing Tables
+
+To import existing tables into your Terraform state, use the following command:
+
+```bash
+terraform import materialize_source_table_{source_type}.table_name <region>:<table_id>
+```
+
+Replace `{source_type}` with the appropriate source type (e.g., `mysql`, `kafka`), `<region>` with the actual region, and `<table_id>` with the table ID.
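+
+For example, to import the MySQL table created earlier in this guide (the region and table ID below are placeholders; substitute the values from your own environment):
+
+```bash
+terraform import materialize_source_table_mysql.mysql_table_from_source aws/us-east-1:u123
+```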
+
+### Important Note on Importing
+
+Due to limitations in the current read function, not all properties of the source tables are available when importing. To work around this, you'll need to use the `ignore_changes` lifecycle meta-argument for certain attributes that cannot be read back into the state.
+
+For example, for a Kafka source table:
+
+```hcl
+resource "materialize_source_table_kafka" "kafka_table_from_source" {
+ name = "kafka_table_from_source"
+ schema_name = "public"
+ database_name = "materialize"
+
+ source_name = materialize_source_kafka.kafka_source.name
+
+ include_key = true
+ include_headers = true
+
+ envelope {
+ upsert = true
+ }
+
+ lifecycle {
+ ignore_changes = [
+ include_key,
+ include_headers,
+      envelope,
+      # ... add other attributes here as needed
+ ]
+ }
+}
+```
+
+This `ignore_changes` block tells Terraform to ignore changes to these attributes during subsequent applies, preventing Terraform from trying to update these values based on incomplete information from the state.
+
+After importing, you may need to manually update these ignored attributes in your Terraform configuration to match the actual state in Materialize.
+
+## Future Improvements
+
+Webhook sources have not yet been migrated to the new model. Once they are, this migration guide will be updated to include them.
diff --git a/templates/resources/table_grant.md.tmpl b/templates/resources/table_grant.md.tmpl
index bebb5df8..e2cf0934 100644
--- a/templates/resources/table_grant.md.tmpl
+++ b/templates/resources/table_grant.md.tmpl
@@ -21,4 +21,4 @@ description: |-
Import is supported using the following syntax:
-{{ codefile "shell" (printf "%s%s%s" "examples/resources/" .Name "/import.sh") }}
\ No newline at end of file
+{{ codefile "shell" (printf "%s%s%s" "examples/resources/" .Name "/import.sh") }}