Resource read should return API errors #40

Merged 1 commit into Mongey:master on Jan 5, 2022

Conversation

@tundsta commented on Jun 24, 2021

There's a bug in the error handling for connectorRead(). API errors from Kafka Connect calls are caught, but the error isn't returned and propagated to the SDK, so the framework assumes the read succeeded. This has implications for both terraform plan and terraform apply.
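
For illustration, the failure mode is a Read function that checks the API error but never returns it. A rough sketch of that anti-pattern (the kc client type, ConnectorRequest, and GetConnector here are stand-ins for the provider's Kafka Connect client, not its exact code):

func connectorRead(d *schema.ResourceData, meta interface{}) error {
   c := meta.(kc.Client)
   connector, err := c.GetConnector(kc.ConnectorRequest{Name: d.Id()})
   if err == nil {
      // Remote state is only copied into the resource data on success.
      d.Set("config", connector.Config)
      d.Set("name", connector.Name)
   }
   // BUG: err is dropped here, so the SDK treats every read as
   // successful and keeps stale state when the API call fails.
   return nil
}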

On a plan, API errors during the read are silently ignored; remote synchronisation fails and stale state is used to present the diff:

    kafka-connect_connector.psqlsource: Refreshing state... [id=customer.connector]
    2021-06-24T14:39:00.881+0100 [DEBUG] provider.terraform-provider-kafka-connect_v0.2.3: 2021/06/24 14:39:00 [INFO] Attempting to read remote data for connector customer.connector
    2021-06-24T14:39:00.881+0100 [DEBUG] provider.terraform-provider-kafka-connect_v0.2.3: 2021/06/24 14:39:00 [INFO] Current local config nonsensitive values are: map[auto.create:true connector.class:io.debezium.connector.postgresql.PostgresConnector database.dbname:customers database.hostname:postgres
    ...
    ...
    2021-06-24T14:39:11.723+0100 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/mongey/kafka-connect/0.2.3/linux_amd64/terraform-provider-kafka-connect_v0.2.3 pid=158158
    2021-06-24T14:39:11.723+0100 [DEBUG] provider: plugin exited
    
    Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
      ~ update in-place
    
    Terraform will perform the following actions:
    
      # kafka-connect_connector.psqlsource will be updated in-place
      ~ resource "kafka-connect_connector" "psqlsource" {
          ~ config           = {
              ~ "snapshot.mode"                  = "initial" -> "exported"
                # (22 unchanged elements hidden)
            }
            id               = "customer.connector"
            name             = "customer.connector"
            # (1 unchanged attribute hidden)
        }
    
    Plan: 0 to add, 1 to change, 0 to destroy.

This also causes issues on an apply, when update is called: the read incorrectly reports success as above. Even if the update fails and returns an error, the read's swallowed error means the Terraform state file is updated as if the read had succeeded, producing an inconsistent state:

❯ terraform apply -auto-approve

kafka-connect_connector.psqlsource: Refreshing state... [id=customer.connector]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # kafka-connect_connector.psqlsource will be updated in-place
  ~ resource "kafka-connect_connector" "psqlsource" {
      ~ config           = {
          ~ "snapshot.mode"                  = "exported" -> "initial"
            # (22 unchanged elements hidden)
        }
        id               = "customer.connector"
        name             = "customer.connector"
        # (1 unchanged attribute hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
kafka-connect_connector.psqlsource: Modifying... [id=customer.connector]
kafka-connect_connector.psqlsource: Still modifying... [id=customer.connector, 10s elapsed]
╷
│ Error: Put http://localhost:8083/connectors/customer.connector/config: dial tcp 127.0.0.1:8083: connect: connection refused
│ 
│   with kafka-connect_connector.psqlsource,
│   on main.tf line 14, in resource "kafka-connect_connector" "psqlsource":
│   14: resource "kafka-connect_connector" "psqlsource" {

To avoid inconsistent state, the read should follow best practice and return an error for failed API responses, e.g.:

func resourceExampleSimpleRead(d *schema.ResourceData, meta interface{}) error {
   client := meta.(*ProviderApi).client
   resource, err := client.GetResource(d.Id())
   if err != nil {
      // Returning the error marks the read as failed, so the SDK
      // never records stale state as current.
      return fmt.Errorf("error getting resource %s: %s", d.Id(), err)
   }
   d.Set("name", resource.Name)
   d.Set("type", resource.Type)
   if err := d.Set("tags", resource.TagMap); err != nil {
      return fmt.Errorf("error setting tags for resource %s: %s", d.Id(), err)
   }
   return nil
}
@jamestait commented:

This fixes #23. Manual test case:

provider "kafka-connect" {
  url = "http://localhost"
}

resource "kafka-connect_connector" "source" {
  name = "testing.source.test"

  config = {
    "auto.create" : "true",
    "connector.class" : "io.debezium.connector.postgresql.PostgresConnector",
    "database.dbname" : "test",
    "database.hostname" : "test-database",
    "database.password" : "test",
    "database.port" : "5432",
    "database.server.name" : "localhost",
    "database.user" : "test",
    "decimal.handling.mode" : "string",
    "errors.retry.delay.max.ms" : "60000",
    "errors.retry.timeout" : "-1",
    "heartbeat.action.query" : "CREATE TABLE IF NOT EXISTS public.debezium_heartbeat_test (id SERIAL PRIMARY KEY, ts TIMESTAMP WITH TIME ZONE); INSERT INTO public.debezium_heartbeat_test (id, ts) VALUES (1, NOW()) ON CONFLICT(id) DO UPDATE SET ts=EXCLUDED.ts",
    "heartbeat.interval.ms" : "360000",
    "hstore.handling.mode" : "json",
    "key.converter" : "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable" : "true",
    "name" : "testing.source.test",
    "plugin.name" : "pgoutput",
    "slot.name" : "repl_slot_test",
    "snapshot.mode" : "never",
    "table.include.list" : "test, test2",
    "value.converter" : "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable" : "false"
  }
}

This resource existed prior to the run, but with only test in the table.include.list property. Result with current master:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

kafka-connect_connector.source: Refreshing state... [id=testing.source.test]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # kafka-connect_connector.source will be updated in-place
  ~ resource "kafka-connect_connector" "source" {
      ~ config           = {
            "auto.create"                    = "true"
            "connector.class"                = "io.debezium.connector.postgresql.PostgresConnector"
            "database.dbname"                = "test"
            "database.hostname"              = "test-database"
            "database.password"              = "test"
            "database.port"                  = "5432"
            "database.server.name"           = "localhost"
            "database.user"                  = "test"
            "decimal.handling.mode"          = "string"
            "errors.retry.delay.max.ms"      = "60000"
            "errors.retry.timeout"           = "-1"
            "heartbeat.action.query"         = "CREATE TABLE IF NOT EXISTS public.debezium_heartbeat_test (id SERIAL PRIMARY KEY, ts TIMESTAMP WITH TIME ZONE); INSERT INTO public.debezium_heartbeat_test (id, ts) VALUES (1, NOW()) ON CONFLICT(id) DO UPDATE SET ts=EXCLUDED.ts"
            "heartbeat.interval.ms"          = "360000"
            "hstore.handling.mode"           = "json"
            "key.converter"                  = "org.apache.kafka.connect.json.JsonConverter"
            "key.converter.schemas.enable"   = "true"
            "name"                           = "testing.source.test"
            "plugin.name"                    = "pgoutput"
            "slot.name"                      = "repl_slot_test"
            "snapshot.mode"                  = "never"
          ~ "table.include.list"             = "test" -> "test, test2"
            "value.converter"                = "org.apache.kafka.connect.json.JsonConverter"
            "value.converter.schemas.enable" = "false"
        }
        config_sensitive = (sensitive value)
        id               = "testing.source.test"
        name             = "testing.source.test"
    }

Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------

With the fix:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

kafka-connect_connector.source: Refreshing state... [id=testing.source.test]

Error: Get connector : <html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body bgcolor="white">
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx/1.13.10</center>
</body>
</html>

The state file is not updated.

@tundsta I think it'd be great if you could add an automated test for this as well.
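
For example, something along the lines of the sketch below, using net/http/httptest to stand in for the Connect REST API. kafkaConnectorResource, connectorRead, and newClient are guesses at the provider's internals rather than its actual helpers:

import (
   "net/http"
   "net/http/httptest"
   "testing"

   "github.com/hashicorp/terraform-plugin-sdk/helper/schema" // or the /v2 path, matching the provider's SDK version
)

func TestConnectorReadReturnsAPIError(t *testing.T) {
   // Fake Kafka Connect endpoint that always fails, mimicking the
   // 503 in the manual test above.
   srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
      http.Error(w, "Service Temporarily Unavailable", http.StatusServiceUnavailable)
   }))
   defer srv.Close()

   d := schema.TestResourceDataRaw(t, kafkaConnectorResource().Schema, map[string]interface{}{})
   d.SetId("customer.connector")

   if err := connectorRead(d, newClient(srv.URL)); err == nil {
      t.Fatal("expected connectorRead to return the API error, got nil")
   }
}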

@Mongey merged commit 43a3cbd into Mongey:master on Jan 5, 2022