Commit: GHA Terraform (#101)
dehume authored Apr 5, 2023
1 parent 5be4fcd commit e5df9da
Showing 24 changed files with 260 additions and 64 deletions.
29 changes: 29 additions & 0 deletions .github/workflows/documentation.yml
@@ -0,0 +1,29 @@
name: documentation
on:
  pull_request:
    paths:
      - pkg/**

jobs:
  documentation:
    runs-on: ubuntu-latest

    permissions:
      contents: write

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - uses: actions/setup-go@v3
        with:
          go-version: 1.19

      - run: go mod download

      - run: go generate ./...

      - uses: stefanzweifel/git-auto-commit-action@v4
        with:
          commit_message: Terraform Docs
1 change: 1 addition & 0 deletions .github/workflows/integration.yml
@@ -3,6 +3,7 @@ on:
  pull_request:
    paths:
      - pkg/**
+     - integration/**

jobs:
  integration:
16 changes: 16 additions & 0 deletions .github/workflows/terraform.yml
@@ -0,0 +1,16 @@
name: terraform
on:
  pull_request:
    paths:
      - examples/**
      - integration/**

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - uses: hashicorp/setup-terraform@v2

      - run: terraform fmt -recursive -check -diff
5 changes: 2 additions & 3 deletions .github/workflows/test.yml
@@ -1,9 +1,8 @@
name: test
on:
  pull_request:
-   paths-ignore:
-     - README.md
-     - CHANGELOG.md
+   paths:
+     - pkg/**

jobs:
  test:
3 changes: 1 addition & 2 deletions CONTRIBUTING.md
@@ -28,8 +28,7 @@ make install
The documentation is generated from the provider's schema. To generate the documentation, run:

```bash
- terraform fmt -recursive ./examples/
- go run github.com/hashicorp/terraform-plugin-docs/cmd/tfplugindocs
+ make docs
```

## Testing the Provider
4 changes: 4 additions & 0 deletions GNUmakefile
@@ -25,3 +25,7 @@ install:
.PHONY: testacc
testacc:
	TF_ACC=1 go test ./... -v $(TESTARGS) -timeout 120m

.PHONY: docs
docs:
	go generate ./...
2 changes: 1 addition & 1 deletion docs/resources/connection_aws_privatelink.md
@@ -47,7 +47,7 @@ resource "materialize_connection_aws_privatelink" "example_privatelink_connectio

- `connection_type` (String) The type of connection.
- `id` (String) The ID of this resource.
- - `qualified_name` (String) The fully qualified name of the connection.
+ - `qualified_sql_name` (String) The fully qualified name of the connection.

## Import

2 changes: 1 addition & 1 deletion docs/resources/connection_confluent_schema_registry.md
@@ -58,7 +58,7 @@ resource "materialize_connection_confluent_schema_registry" "example_confluent_s

- `connection_type` (String) The type of connection.
- `id` (String) The ID of this resource.
- - `qualified_name` (String) The fully qualified name of the connection.
+ - `qualified_sql_name` (String) The fully qualified name of the connection.

<a id="nestedblock--aws_privatelink"></a>
### Nested Schema for `aws_privatelink`
2 changes: 1 addition & 1 deletion docs/resources/connection_kafka.md
@@ -94,7 +94,7 @@ resource "materialize_connection_kafka" "example_kafka_connection_multiple_broke

- `connection_type` (String) The type of connection.
- `id` (String) The ID of this resource.
- - `qualified_name` (String) The fully qualified name of the connection.
+ - `qualified_sql_name` (String) The fully qualified name of the connection.

<a id="nestedblock--kafka_broker"></a>
### Nested Schema for `kafka_broker`
2 changes: 1 addition & 1 deletion docs/resources/connection_postgres.md
@@ -69,7 +69,7 @@ resource "materialize_connection_postgres" "example_postgres_connection" {

- `connection_type` (String) The type of connection.
- `id` (String) The ID of this resource.
- - `qualified_name` (String) The fully qualified name of the connection.
+ - `qualified_sql_name` (String) The fully qualified name of the connection.

<a id="nestedblock--user"></a>
### Nested Schema for `user`
2 changes: 1 addition & 1 deletion docs/resources/connection_ssh_tunnel.md
@@ -48,7 +48,7 @@ resource "materialize_connection_ssh_tunnel" "example_ssh_connection" {

- `connection_type` (String) The type of connection.
- `id` (String) The ID of this resource.
- - `qualified_name` (String) The fully qualified name of the connection.
+ - `qualified_sql_name` (String) The fully qualified name of the connection.

## Import

87 changes: 87 additions & 0 deletions docs/resources/index.md
@@ -0,0 +1,87 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "materialize_index Resource - terraform-provider-materialize"
subcategory: ""
description: |-
An in-memory index on a source, view, or materialized view.
---

# materialize_index (Resource)

An in-memory index on a source, view, or materialized view.

## Example Usage

```terraform
resource "materialize_index" "loadgen_index" {
  name         = "example_index"
  cluster_name = "cluster"
  method       = "ARRANGEMENT"
  obj_name {
    name          = "source"
    schema_name   = "schema"
    database_name = "database"
  }
}
# CREATE INDEX index
# IN CLUSTER cluster
# ON "database"."schema"."source"
# USING ARRANGEMENT
```

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `cluster_name` (String) The cluster to maintain this index. If not specified, defaults to the active cluster.
- `obj_name` (Block List, Min: 1, Max: 1) The name of the source, view, or materialized view on which you want to create an index. (see [below for nested schema](#nestedblock--obj_name))

### Optional

- `col_expr` (Block List) The expressions to use as the key for the index. (see [below for nested schema](#nestedblock--col_expr))
- `default` (Boolean) Creates a default index in which all inferred columns are used.
- `method` (String) The name of the index method to use.
- `name` (String) The identifier for the index.

### Read-Only

- `database_name` (String) The identifier for the index database.
- `id` (String) The ID of this resource.
- `qualified_sql_name` (String) The fully qualified name of the view.
- `schema_name` (String) The identifier for the index schema.

<a id="nestedblock--obj_name"></a>
### Nested Schema for `obj_name`

Required:

- `name` (String) The obj_name name.

Optional:

- `database_name` (String) The obj_name database name.
- `schema_name` (String) The obj_name schema name.


<a id="nestedblock--col_expr"></a>
### Nested Schema for `col_expr`

Required:

- `field` (String) The name of the option you want to set.

Optional:

- `val` (String) The value for the option.
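
As a rough illustration (not taken from this commit), the `col_expr` block combines with `obj_name` along these lines; the resource name, object names, and the `user_id` field are all hypothetical:

```terraform
# Illustrative sketch only: an index keyed on an explicit column expression.
# Every name below is invented for the example.
resource "materialize_index" "idx_example" {
  name   = "idx_example"
  method = "ARRANGEMENT"
  obj_name {
    name          = "my_view"
    schema_name   = "public"
    database_name = "materialize"
  }
  col_expr {
    field = "user_id"
  }
}
```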

## Import

Import is supported using the following syntax:

```shell
# Indexes can be imported using the index id:
terraform import materialize_index.example_index <index_id>
```
2 changes: 1 addition & 1 deletion docs/resources/materialized_view.md
@@ -54,7 +54,7 @@ resource "materialize_materialized_view" "simple_materialized_view" {
### Read-Only

- `id` (String) The ID of this resource.
- - `qualified_name` (String) The fully qualified name of the materialized view.
+ - `qualified_sql_name` (String) The fully qualified name of the materialized view.

## Import

2 changes: 1 addition & 1 deletion docs/resources/schema.md
@@ -33,7 +33,7 @@ resource "materialize_schema" "example_schema" {
### Read-Only

- `id` (String) The ID of this resource.
- - `qualified_name` (String) The fully qualified name of the schema.
+ - `qualified_sql_name` (String) The fully qualified name of the schema.

## Import

2 changes: 1 addition & 1 deletion docs/resources/secret.md
@@ -35,7 +35,7 @@ resource "materialize_secret" "example_secret" {
### Read-Only

- `id` (String) The ID of this resource.
- - `qualified_name` (String) The fully qualified name of the secret.
+ - `qualified_sql_name` (String) The fully qualified name of the secret.

## Import

6 changes: 3 additions & 3 deletions docs/resources/sink_kafka.md
@@ -20,11 +20,11 @@ resource "materialize_sink_kafka" "example_sink_kafka" {
from {
name = "table"
}
topic = "test_avro_topic"
topic = "test_avro_topic"
format {
avro {
schema_registry_connection {
name = "csr_connection"
name = "csr_connection"
database_name = "database"
schema_name = "schema"
}
@@ -70,7 +70,7 @@ resource "materialize_sink_kafka" "example_sink_kafka" {
### Read-Only

- `id` (String) The ID of this resource.
- - `qualified_name` (String) The fully qualified name of the sink.
+ - `qualified_sql_name` (String) The fully qualified name of the sink.
- `sink_type` (String) The type of sink.

<a id="nestedblock--from"></a>
16 changes: 8 additions & 8 deletions docs/resources/source_kafka.md
@@ -59,10 +59,10 @@ resource "materialize_source_kafka" "example_source_kafka" {
- `envelope` (Block List, Max: 1) How Materialize should interpret records (e.g. append-only, upsert). (see [below for nested schema](#nestedblock--envelope))
- `format` (Block List, Max: 1) How to decode raw bytes from different formats into data structures Materialize can understand at runtime. (see [below for nested schema](#nestedblock--format))
- `include_headers` (Boolean) Include message headers.
- - `include_key` (String) Include a column containing the Kafka message key. If the key is encoded using a format that includes schemas, the column will take its name from the schema. For unnamed formats (e.g. TEXT), the column will be named "key".
- - `include_offset` (String) Include an offset column containing the Kafka message offset.
- - `include_partition` (String) Include a partition column containing the Kafka message partition
- - `include_timestamp` (String) Include a timestamp column containing the Kafka message timestamp.
+ - `include_key` (Boolean) Include a column containing the Kafka message key. If the key is encoded using a format that includes schemas, the column will take its name from the schema. For unnamed formats (e.g. TEXT), the column will be named "key".
+ - `include_offset` (Boolean) Include an offset column containing the Kafka message offset.
+ - `include_partition` (Boolean) Include a partition column containing the Kafka message partition
+ - `include_timestamp` (Boolean) Include a timestamp column containing the Kafka message timestamp.
- `key_format` (Block List, Max: 1) Set the key format explicitly. (see [below for nested schema](#nestedblock--key_format))
- `primary_key` (List of String) Declare a set of columns as a primary key.
- `schema_name` (String) The identifier for the source schema.
@@ -74,7 +74,7 @@ resource "materialize_source_kafka" "example_source_kafka" {
### Read-Only

- `id` (String) The ID of this resource.
- - `qualified_name` (String) The fully qualified name of the source.
+ - `qualified_sql_name` (String) The fully qualified name of the source.
- `source_type` (String) The type of source.

<a id="nestedblock--kafka_connection"></a>
@@ -142,7 +142,7 @@ Optional:

Optional:

- - `columns` (Number) The columns to use for the source.
+ - `column` (Number) The columns to use for the source.
- `delimited_by` (String) The delimiter to use for the source.
- `header` (List of String) The number of columns and the name of each column using the header row.
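
For illustration, the CSV options above sit inside a source's `format` block; a rough sketch follows (the resource, connection, and topic names are invented, and the layout is inferred from the schema above rather than a tested configuration):

```terraform
# Hypothetical sketch: a Kafka source decoding 3-column CSV records.
resource "materialize_source_kafka" "csv_example" {
  name  = "csv_example"       # invented name
  topic = "example_topic"     # invented topic
  kafka_connection {
    name = "kafka_connection" # invented connection
  }
  format {
    csv {
      column       = 3
      delimited_by = ","
    }
  }
}
```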

@@ -212,7 +212,7 @@ Optional:

Optional:

- - `columns` (Number) The columns to use for the source.
+ - `column` (Number) The columns to use for the source.
- `delimited_by` (String) The delimiter to use for the source.
- `header` (List of String) The number of columns and the name of each column using the header row.

@@ -282,7 +282,7 @@ Optional:

Optional:

- - `columns` (Number) The columns to use for the source.
+ - `column` (Number) The columns to use for the source.
- `delimited_by` (String) The delimiter to use for the source.
- `header` (List of String) The number of columns and the name of each column using the header row.

56 changes: 48 additions & 8 deletions docs/resources/source_load_generator.md
@@ -33,28 +33,68 @@ resource "materialize_source_load_generator" "example_source_load_generator"

### Required

- - `load_generator_type` (String) The load generator types: [AUCTION COUNTER NONE].
+ - `load_generator_type` (String) The load generator types: [AUCTION COUNTER TPCH].
- `name` (String) The identifier for the source.

### Optional

+ - `auction_options` (Block List) Auction Options. (see [below for nested schema](#nestedblock--auction_options))
  - `cluster_name` (String) The cluster to maintain this source. If not specified, the size option must be specified.
+ - `counter_options` (Block List) Counter Options. (see [below for nested schema](#nestedblock--counter_options))
  - `database_name` (String) The identifier for the source database.
- - `max_cardinality` (Boolean) Valid for the COUNTER generator. Causes the generator to delete old values to keep the collection at most a given size. Defaults to unlimited.
- - `scale_factor` (Number) The scale factor for the TPCH generator. Defaults to 0.01 (~ 10MB).
  - `schema_name` (String) The identifier for the source schema.
  - `size` (String) The size of the source.
- - `table` (Block List) Creates subsources for specific tables. (see [below for nested schema](#nestedblock--table))
- - `tick_interval` (String) The interval at which the next datum should be emitted. Defaults to one second.
+ - `tpch_options` (Block List) TPCH Options. (see [below for nested schema](#nestedblock--tpch_options))

### Read-Only

- `id` (String) The ID of this resource.
- - `qualified_name` (String) The fully qualified name of the source.
+ - `qualified_sql_name` (String) The fully qualified name of the source.
- `source_type` (String) The type of source.

- <a id="nestedblock--table"></a>
- ### Nested Schema for `table`
+ <a id="nestedblock--auction_options"></a>
+ ### Nested Schema for `auction_options`

Optional:

- `scale_factor` (Number) The scale factor for the generator. Defaults to 0.01 (~ 10MB).
- `table` (Block List) Creates subsources for specific tables. (see [below for nested schema](#nestedblock--auction_options--table))
- `tick_interval` (String) The interval at which the next datum should be emitted. Defaults to one second.

<a id="nestedblock--auction_options--table"></a>
### Nested Schema for `auction_options.table`

Required:

- `name` (String) The name of the table.

Optional:

- `alias` (String) The alias of the table.



<a id="nestedblock--counter_options"></a>
### Nested Schema for `counter_options`

Optional:

- `max_cardinality` (Number) Causes the generator to delete old values to keep the collection at most a given size. Defaults to unlimited.
- `scale_factor` (Number) The scale factor for the generator. Defaults to 0.01 (~ 10MB).
- `tick_interval` (String) The interval at which the next datum should be emitted. Defaults to one second.


<a id="nestedblock--tpch_options"></a>
### Nested Schema for `tpch_options`

Optional:

- `scale_factor` (Number) The scale factor for the generator. Defaults to 0.01 (~ 10MB).
- `table` (Block List) Creates subsources for specific tables. (see [below for nested schema](#nestedblock--tpch_options--table))
- `tick_interval` (String) The interval at which the next datum should be emitted. Defaults to one second.

<a id="nestedblock--tpch_options--table"></a>
### Nested Schema for `tpch_options.table`

Required:

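
A TPCH source using the `tpch_options` block above might be sketched as follows; the resource name, scale factor, and tick interval are illustrative, not taken from the commit:

```terraform
# Illustrative sketch: a TPCH load generator source.
resource "materialize_source_load_generator" "tpch_example" {
  name                = "tpch_example" # invented name
  load_generator_type = "TPCH"
  tpch_options {
    scale_factor  = 0.01
    tick_interval = "1s"
  }
}
```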