[BUG] Erroneous lock files are created on PVE node when vm_id argument is omitted from VM resource #43
Comments
A brief update: I've tested this with the most recent commits.
I've tried to extend the example configuration:

```hcl
resource "proxmox_virtual_environment_vm" "example_new" {
  name      = "terraform-provider-proxmox-example-new"
  node_name = data.proxmox_virtual_environment_nodes.example.names[0]
  pool_id   = proxmox_virtual_environment_pool.example.id

  clone {
    vm_id = proxmox_virtual_environment_vm.example_template.id
  }

  memory {
    dedicated = 768
  }

  connection {
    type        = "ssh"
    agent       = false
    host        = element(element(self.ipv4_addresses, index(self.network_interface_names, "eth0")), 0)
    private_key = tls_private_key.example.private_key_pem
    user        = "ubuntu"
  }

  provisioner "remote-exec" {
    inline = [
      "echo Welcome to $(hostname)!",
    ]
  }
}
```

Which results in the following plan:

```
Terraform will perform the following actions:
  # proxmox_virtual_environment_vm.example_new will be created
  + resource "proxmox_virtual_environment_vm" "example_new" {
      + acpi                    = true
      + bios                    = "seabios"
      + id                      = (known after apply)
      + ipv4_addresses          = (known after apply)
      + ipv6_addresses          = (known after apply)
      + keyboard_layout         = "en-us"
      + mac_addresses           = (known after apply)
      + name                    = "terraform-provider-proxmox-example-new"
      + network_interface_names = (known after apply)
      + node_name               = "proxmox"
      + on_boot                 = false
      + pool_id                 = "terraform-provider-proxmox-example"
      + reboot                  = false
      + started                 = true
      + tablet_device           = true
      + template                = false
      + timeout_clone           = 1800
      + timeout_move_disk       = 1800
      + timeout_reboot          = 1800
      + timeout_shutdown_vm     = 1800
      + timeout_start_vm        = 1800
      + timeout_stop_vm         = 300
      + vm_id                   = -1

      + clone {
          + full    = true
          + retries = 1
          + vm_id   = 2040
        }

      + memory {
          + dedicated = 768
          + floating  = 0
          + shared    = 0
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
```

Which produces a clone call similar to this:
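For illustration only, here is a minimal Go sketch of roughly what a clone request against the standard Proxmox VE API endpoint (`POST /api2/json/nodes/{node}/qemu/{vmid}/clone`) looks like; the host and token are made-up placeholders, the node, template ID, and pool come from the plan above, and the provider's actual request construction may differ:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// Hypothetical host; node name, template ID, and pool are taken
	// from the plan output shown above.
	const (
		host       = "https://proxmox.example.com:8006" // assumption
		node       = "proxmox"
		templateID = 2040
	)

	form := url.Values{}
	form.Set("newid", "100") // the ID the new VM ended up with
	form.Set("full", "1")
	form.Set("pool", "terraform-provider-proxmox-example")

	endpoint := fmt.Sprintf("%s/api2/json/nodes/%s/qemu/%d/clone", host, node, templateID)
	req, err := http.NewRequest(http.MethodPost, endpoint, strings.NewReader(form.Encode()))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.Header.Set("Authorization", "PVEAPIToken=user@pam!token=secret") // placeholder credentials

	fmt.Println(req.Method, req.URL.String())
}
```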
Which results in a new VM being created with ID 100. If I run another plan, no changes to the resource show up:

```
Terraform will perform the following actions:

Plan: 0 to add, 0 to change, 0 to destroy.
```

@fabacab can you check whether you still experience the described issue?
This bug's a little strange because I'm not 100% sure what the correct behavior should be, except that I think what I'm seeing is definitely not correct. I discovered this by looking more closely into the issue described in #40 where the provider reports a timeout. It's possible this issue is a duplicate of that bug or simply a symptom of it. I'm not sure.
What happens is that if a `proxmox_virtual_environment_vm` resource is created without a `vm_id` argument, then a file named `/run/lock/qemu-server/lock--1.conf`, that is `lock-${VM_ID}` (where `${VM_ID}` is `-1`, as in negative one), is created. However, the correct lock file, namely something like `lock-105.conf` where `105` is the actual next incremental Proxmox VE guest ID number, is also created.
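For what it's worth, the odd double dash in the filename is just the `-1` sentinel formatted into the `lock-${VM_ID}` template; a tiny Go sketch to illustrate:

```go
package main

import "fmt"

func main() {
	// Formatting the lock file name from the lock-${VM_ID} template shows
	// why a VM ID of -1 yields the double-dash filename seen on the node.
	for _, vmID := range []int{-1, 105} {
		fmt.Printf("/run/lock/qemu-server/lock-%d.conf\n", vmID)
	}
	// Output:
	// /run/lock/qemu-server/lock--1.conf
	// /run/lock/qemu-server/lock-105.conf
}
```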
Note that if a `vm_id` is supplied by the Terraform configuration's author, then this issue does not manifest. However, it's likely to, because the `vm_id` argument is marked as optional in the Provider's documentation.

I also see code that seems to indicate that the `vm_id` is automatically determined (commit 9775ede), but this does not seem to be working because the initial value of `-1` is ultimately making its way somewhere to the Proxmox VE node's backend regardless.
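For reference, one common way a client can determine the next free guest ID is Proxmox VE's `GET /api2/json/cluster/nextid` endpoint. The sketch below is a hypothetical illustration of that approach (the helper name `nextFreeVMID`, host, and token are made up; it is not the provider's actual code from commit 9775ede, and it assumes the endpoint returns the ID as a JSON string):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strconv"
)

// nextFreeVMID is a hypothetical helper: it asks the cluster for the next
// unused guest ID instead of ever falling back to a -1 sentinel.
func nextFreeVMID(client *http.Client, host, token string) (int, error) {
	req, err := http.NewRequest(http.MethodGet, host+"/api2/json/cluster/nextid", nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Authorization", "PVEAPIToken="+token)

	resp, err := client.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	// Assumes a response shaped like {"data": "<id>"} with the ID as a string.
	var body struct {
		Data string `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return 0, err
	}
	return strconv.Atoi(body.Data)
}

func main() {
	id, err := nextFreeVMID(http.DefaultClient, "https://proxmox.example.com:8006", "user@pam!token=secret")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("next free VM ID:", id)
}
```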
To Reproduce

1. Define a `proxmox_virtual_environment_vm` resource (or, presumably, this would also show up with a `proxmox_virtual_environment_container` resource), except omit the `vm_id` argument.
2. Run `terraform plan`; you'll see output with a line including `+ vm_id = -1`, showing that the `-1` value has not correctly picked up the next incremental numeric value for a Proxmox VE guest ID.
3. Before running `terraform apply`, get a shell on the Proxmox VE node itself and run `watch ls /run/lock/qemu-server`.
4. While `watch`ing the `/run/lock/qemu-server` directory listing on the Proxmox VE node in one terminal, apply the Terraform configuration with `terraform apply`.
5. Watch the erroneous lock file (`lock--1.conf`) appear.

Expected behavior
The Provider code should automatically detect the correct Proxmox VE guest ID number to use. It should not use `-1` or create references to an ID with number `-1` anywhere on the Proxmox VE backend system.
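A minimal sketch of the kind of guard this expected behavior implies, using hypothetical names (`resolveVMID`, `idAllocator`) rather than the provider's real API:

```go
package main

import (
	"errors"
	"fmt"
)

// idAllocator stands in for whatever can produce the next free guest ID
// (for example, a client wrapping GET /cluster/nextid).
type idAllocator func() (int, error)

// resolveVMID returns the configured ID when one is given, allocates one
// otherwise, and refuses to let a -1 sentinel (or any non-positive ID)
// escape toward the Proxmox VE backend.
func resolveVMID(configured int, alloc idAllocator) (int, error) {
	id := configured
	if id <= 0 {
		var err error
		if id, err = alloc(); err != nil {
			return 0, fmt.Errorf("allocating VM ID: %w", err)
		}
	}
	if id <= 0 {
		return 0, errors.New("refusing to use a non-positive VM ID")
	}
	return id, nil
}

func main() {
	alloc := func() (int, error) { return 105, nil } // stubbed allocator
	id, err := resolveVMID(0, alloc)                 // no vm_id configured
	fmt.Println(id, err)                             // 105 <nil>
}
```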