SSH is not automatically reloaded when included via include_role due to Ansible limitations (#301)
Comments
Thank you for the report! I do not think there is any other simple way for us to trigger the reload effectively without handlers. We could theoretically "emulate" handlers by setting some facts/variables through the play, but this does not sound like a very elegant solution. Would calling the …? @richm, any thoughts?
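For reference, a rough sketch of that fact-based "emulation" idea; the task and variable names are illustrative only, not anything the role actually does:

```yaml
- name: Make a configuration change
  template:
    src: sshd_config.j2                            # illustrative template
    dest: /etc/ssh/sshd_config.d/00-example.conf
  register: __sshd_config_result

- name: Remember that a reload is needed (handler emulation via a fact)
  set_fact:
    __sshd_needs_reload: true
  when: __sshd_config_result is changed

- name: Reload sshd at whatever point in the play we choose
  service:
    name: sshd
    state: reloaded
  when: __sshd_needs_reload | default(false)
```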
@micolous our recommendation would be to use …
Would calling it from inside of the role, before returning, cause any issues (other than making sure the handlers are invoked)?
That's generally not recommended in my experience with Ansible. There are two cases to consider, and each has its drawbacks. If we really do need the user to be able to say "I want the sshd role to restart the service immediately after making a change that requires it", then we need to introduce some additional logic like this:

```yaml
- name: Make some config change
  template: ...
  register: __role_config

- name: Notify handler
  debug:
    msg: Handler notified
  changed_when: true   # debug reports "ok" on its own, so notify would never fire without this
  notify: handler
  when:
    - __role_config is changed
    - not role_restart_service

- name: Restart service
  service:
    name: servit
    state: restarted
  when:
    - __role_config is changed
    - role_restart_service | bool
```
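For context, this is roughly how a caller would opt in, using the illustrative `role_restart_service` variable from the snippet above (not a documented variable of the real role):

```yaml
- hosts: all
  tasks:
    - name: Configure sshd and restart it immediately on change
      include_role:
        name: my_sshd_role          # hypothetical role name for illustration
      vars:
        role_restart_service: true  # opts in to the immediate-restart branch above
```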
So this really sounds to me like something that should be mentioned in the documentation. Even though this might be clear to experienced Ansible users, and it might not affect users running just a single role in the most common cases, some might get burnt by this.
It's covered in the Ansible docs under playbook handlers. Going from this design, the author of the playbook should start a separate play if sshd needs to be restarted before some other tasks happen; handlers run at the end of a play, so the configuration isn't fully applied until the play ends. It's worth a mention in the docs with a link to the upstream issue for clarity. I don't think we should try to restart sshd in the role beyond the use of handlers.
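For illustration, a minimal sketch of the play-level approaches described here, with illustrative host and role names; both rely on the role's handler actually being notified, which is exactly what the linked Ansible limitation can interfere with:

```yaml
# Option 1: force any pending handlers to run mid-play with meta: flush_handlers.
- hosts: sshd_hosts
  tasks:
    - name: Apply sshd configuration
      include_role:
        name: willshersystems.sshd

    - name: Run pending handlers now instead of at the end of the play
      meta: flush_handlers

    - name: Tasks that depend on the reloaded sshd go here
      debug:
        msg: sshd has been reloaded by this point (if the handler fired)

# Option 2: start a separate play; handlers are flushed when the previous play ends.
- hosts: sshd_hosts
  tasks:
    - name: Continue with work that needs the reloaded sshd
      debug:
        msg: This play starts after the previous play's handlers have run
```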
Fixes: willshersystems#301
Signed-off-by: Jakub Jelen <[email protected]>
When using this role via `include_role` on an Ubuntu 20.04.6 target, SSH is not automatically reloaded on config change. This is because of an Ansible limitation: ansible/ansible#26537, ansible/proposals#136.
My task definition (below) is itself included via `include_tasks`, because I'm using another pre-canned playbook for the system:
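For illustration only, a hypothetical sketch of what such a task file might look like; the role name, option names, and values are assumptions, not the actual definition:

```yaml
# Hypothetical tasks file included via include_tasks; all names and values
# here are illustrative only.
- name: Apply sshd configuration via the role
  include_role:
    name: willshersystems.sshd
  vars:
    sshd:                          # assumed dict of sshd_config options
      PasswordAuthentication: "no"
      X11Forwarding: "no"
```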
When running the playbook, I can see it has updated the config successfully (in `/etc/ssh/sshd_config.d/00-ansible_system_role.conf`), but `ss -tnl` still shows `sshd` listening on `[::]:22`.

The Ansible debugging output seems to indicate that it tried to validate the new config, and then start `sshd` if it wasn't already running (which is a no-op), but config changes don't seem to trigger `Reload_sshd` at all.

I've worked around this in my task definition by adding another step to manually reload `sshd`:
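As a minimal sketch of such a manual reload step (assuming the Ubuntu service unit name `ssh`; RHEL-family systems use `sshd`):

```yaml
# Hypothetical manual workaround: reload sshd explicitly after the role runs,
# because the role's Reload_sshd handler did not fire in this setup.
- name: Reload sshd to pick up the new configuration
  service:
    name: ssh          # Ubuntu/Debian unit name; use "sshd" on RHEL-family systems
    state: reloaded
```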
Expected behaviour

sshd should be reloaded automatically when its configuration changes, even when the role is included via include_role.
Versions