Release 0.4.0 #61

Merged (54 commits, merged Feb 10, 2021)
Conversation

@danitso-dp
Collaborator

danitso-dp commented Jan 2, 2021

Community Note

  • Please vote on this pull request by adding a 👍 reaction to the original pull request comment to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for pull request followers and do not help prioritize the request

Closes #47
Closes #51
Closes #52
Closes #53
Closes #54
Closes #55
Closes #56
Closes #57
Closes #59
Closes #60

Release note for CHANGELOG:

See references to PRs.

blz-ea and others added 30 commits October 15, 2020 19:51
 - Utilize asynchronous Functions for proxmox
 - Fix required disk interface parameter
 - Fix datastore move to same datastore error
 - Fix compare sizes for resize disk
@luhahn
Contributor

luhahn commented Jan 11, 2021

I'll check if I can patch that :)

But you didn't define the initialization {} field, right?

@ryan4yin
Contributor

@luhahn thanks 👍

@ryan4yin
Contributor

> I'll check if I can patch that :)
>
> But you didn't define the initialization {} field, right?

No, I defined the initialization. The configuration I used: https://github.com/ryan4yin/pulumi-proxmox#examples

@luhahn
Contributor

luhahn commented Jan 11, 2021

Hm, if initialization is defined, the provider will try to create the cloud-init file; without it, it won't.
I'll check what can be done.
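
To make the trigger concrete, here is a minimal sketch, assuming the same attribute names as the example configuration later in this thread (an illustration, not the exact schema):

resource "proxmox_virtual_environment_vm" "example" {
  name      = "example-vm"
  node_name = "node01"

  # Presence of this block makes the provider generate and attach a cloud-init file;
  # omit the whole block and no cloud-init file is created.
  initialization {
    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }
  }
}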

@luhahn
Contributor

luhahn commented Jan 11, 2021

@ryan4yin OK, I had a look at the code. The problem with existing cloud-init files is that we currently put cloud-init and the CD-ROM on fixed IDE devices: cloud-init on ide2 and the CD on ide3. If the VM you are cloning from already has a cloud-init device and you want the provider to dynamically use that device instead of the fixed ones, we need to change a lot of the current logic.

This is because Terraform providers use the read and update functions to determine whether changes have been made, so we need to make quite a few changes in the code.

Since I was already at it, I tried to set the interfaces for the CD and cloud-init dynamically, like the disk interfaces. I also added some checks to see whether a cloud-init device already exists, but I am stuck on how to update those correctly without Terraform deleting them all the time.

Another idea would be to delete the current cloud-init device, which was inherited from your clone, and override its settings with the ones from your initialization configuration.

The last solution would be for you to simply remove the initialization config.
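
As a rough sketch of that last option, cloning a template that already carries its own cloud-init device and simply omitting the initialization block could look like this (the clone block and the VM ID are assumptions for illustration, not taken from this thread):

resource "proxmox_virtual_environment_vm" "cloned_vm" {
  name      = "cloned-vm"
  node_name = "node01"

  clone {
    vm_id = 9000 # hypothetical template VM that already has a cloud-init device
  }

  # No initialization {} block here, so the provider does not try to create its
  # own cloud-init file on the fixed ide2 device and leaves the template's in place.
}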

@ryan4yin
Contributor

ryan4yin commented Jan 12, 2021

@luhahn thanks, I got it. For now I've solved this by manually deleting the cloud-init disk from the template VM.

@luhahn
Contributor

luhahn commented Jan 14, 2021

The current state of this PR has been running in our test/development cluster for quite some time now. Except for the existing cloud-init issue, I couldn't find any bugs in cloning :)

@mleone87
Contributor

mleone87 commented Jan 17, 2021

Hi, first comment on this provider!

I downloaded the PR code and built it for my test setup.
I use the following resource definition, partially modified from the upstream documentation (mainly removing obsolete TF syntax, the outputs and the password resource).
The problem is that the resource is constantly deleted and recreated, with or without any modification:

resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
  name        = "vm1"
  description = "Managed by Terraform333"

  node_name = "node01"
  vm_id     = 4321

  agent {
    enabled = true
  }

  cpu {
    type = "host"
    cores = 2
  }

  disk {
    interface    = "virtio"
    datastore_id = "nvme250"
    file_id      = proxmox_virtual_environment_file.ubuntu_cloud_image.id
  }

  initialization {
    datastore_id = "nvme250"
    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }

    user_account {
      keys     = [trimspace(tls_private_key.ubuntu_vm_key.public_key_openssh)]
      password = "ubuntu"
      username = "ubuntu"
    }
  }

  network_device {}

  operating_system {
    type = "l26"
  }

  serial_device {}
}

resource "proxmox_virtual_environment_file" "ubuntu_cloud_image" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = "node01"

  source_file {
    path = "~/ubuntu1804.img"
  }
}

resource "tls_private_key" "ubuntu_vm_key" {
  algorithm = "RSA"
  rsa_bits  = 2048
}


Edit: is this because the source disk is defined as an img from a datastore?

@luhahn
Contributor

luhahn commented Jan 20, 2021

can you provide more information, like logs?

@mleone87
Contributor

mleone87 commented Jan 20, 2021

> can you provide more information, like logs?

Sure, if you want I can upload the full copy-paste of my terminal (the full logs) somewhere, but this is the terraform plan that follows a successful apply.

terraform plan
proxmox_virtual_environment_file.ubuntu_cloud_image: Refreshing state... [id=local:iso/ubuntu1804.img]
tls_private_key.ubuntu_vm_key: Refreshing state... [id=bb9473be8766c8e0d60f1097b385a98a446efc80]
proxmox_virtual_environment_vm.ubuntu_vm: Refreshing state... [id=4321]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # proxmox_virtual_environment_vm.ubuntu_vm must be replaced
-/+ resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
      ~ id                      = "4321" -> (known after apply)
      ~ ipv4_addresses          = [
          - [
              - "127.0.0.1",
            ],
          - [
              - "192.168.10.98",
            ],
        ] -> (known after apply)
      ~ ipv6_addresses          = [
          - [
              - "::1",
            ],
          - [
              - "fe80::4497:e4ff:fe7d:2024",
            ],
        ] -> (known after apply)
      ~ mac_addresses           = [
          - "00:00:00:00:00:00",
          - "46:97:E4:7D:20:24",
        ] -> (known after apply)
        name                    = "vm1"
      ~ network_interface_names = [
          - "lo",
          - "eth0",
        ] -> (known after apply)
        # (17 unchanged attributes hidden)


      ~ cpu {
          - flags        = [] -> null
            # (6 unchanged attributes hidden)
        }

      ~ disk {
          + file_id      = "local:iso/ubuntu1804.img" # forces replacement
          ~ interface    = "scsi0" -> "virtio"
            # (3 unchanged attributes hidden)
        }

      ~ initialization {
            # (1 unchanged attribute hidden)

          ~ ip_config {
              ~ ipv4 {
                    # (1 unchanged attribute hidden)
                }
            }

          ~ user_account {
              ~ password = (sensitive value)
                # (2 unchanged attributes hidden)
            }
        }

      ~ network_device {
          - mac_address = "46:97:E4:7D:20:24" -> null
            # (5 unchanged attributes hidden)
        }


        # (3 unchanged blocks hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Changes to Outputs:
  ~ ubuntu_vm_ip = [
      - [
          - "127.0.0.1",
        ],
      - [
          - "192.168.10.98",
        ],
    ] -> (known after apply)

------------------------------------------------------------------------

I can understand why Terraform would destroy a VM created that way, since it seems correct from its perspective, but then I cannot figure out how we could create a new VM (not a clone) with an image uploaded that way.

EDIT:

This should fix it; I didn't think about it before:


  lifecycle {
    ignore_changes = [
      disk[0].file_id,
    ]
  }
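
For clarity, this lifecycle block belongs inside the VM resource itself; an abbreviated sketch based on the configuration above:

resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
  # ... other arguments as in the configuration above ...

  disk {
    interface    = "virtio"
    datastore_id = "nvme250"
    file_id      = proxmox_virtual_environment_file.ubuntu_cloud_image.id
  }

  lifecycle {
    ignore_changes = [
      disk[0].file_id,
    ]
  }
}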

mleone87 and others added 2 commits January 20, 2021 17:24
 - Fix a small typo in the resourceVirtualEnvironmentVMCreateCustom function and make the provider capable of importing disks with the correct interface
 - Fix a typo and make disk import coherent with the disk interface
@danitso-dp
Collaborator Author

I've finally managed to finish a pile of very important tasks, so I'll be dedicating a lot of time next week to get v0.4.0 released.

Based on your testing and what I managed to test earlier, it seems we're close to a stable version so I won't pull in more PRs. Instead, I'll begin work on the v0.5.0 release next week once v0.4.0 is released.

@danitso-dp
Collaborator Author

I can't find any major issues with this release so I'll take one last look at the documentation and changelog before releasing it. I'll also push the final changes to the release flow before merging this PR.

@danitso-dp
Collaborator Author

Merging this now. Any issues will be fixed against the master branch.
