Support guidelines
I've found a bug and checked that:
- the documentation does not mention anything about my problem
- there are no open or closed issues that are related to my problem
Description
We have deployed the Helm chart to an OpenShift cluster.
The service account is allowed to use the privileged SCC.
Expected behaviour
The container should start the required services and serve the GUI.
Actual behaviour
The log says:
[services.d] starting services
[services.d] done.
But the container fails the readiness probe.
When I open a terminal in the pod and execute `/init` by hand, the php-fpm process starts and the GUI is accessible.
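For reference, the failing probe is the chart's readiness probe; a typical HTTP-shaped probe is sketched below. The path, port, and timings are assumptions for illustration, not the chart's actual values:

```yaml
# Hypothetical readiness probe shape; path/port/timings are assumptions.
readinessProbe:
  httpGet:
    path: /login
    port: 8000
  initialDelaySeconds: 30
  periodSeconds: 10
```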
Steps to reproduce
Create a service account with roleRef `system:openshift:scc:privileged`.
Deploy the Helm chart and set the service account.
The stateful set starts the pod, but the container fails the readiness probe.
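The SCC binding from step 1 can be expressed as manifests roughly like the following; the namespace and service-account name are assumptions for illustration, not values taken from the chart:

```yaml
# Sketch of the reproduction setup; resource names and namespace are assumptions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: librenms
  namespace: librenms
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: librenms-privileged
  namespace: librenms
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged
subjects:
  - kind: ServiceAccount
    name: librenms
    namespace: librenms
```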
Docker info
Not possible because of OpenShift.
Docker Compose config
No response
Logs
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-fix-logs.sh: executing...
chown: changing ownership of '/proc/self/fd/1': Permission denied
chown: changing ownership of '/proc/self/fd/2': Permission denied
[cont-init.d] 00-fix-logs.sh: exited 0.
[cont-init.d] 01-fix-uidgid.sh: executing...
[cont-init.d] 01-fix-uidgid.sh: exited 0.
[cont-init.d] 02-fix-perms.sh: executing...
Fixing perms...
[cont-init.d] 02-fix-perms.sh: exited 0.
[cont-init.d] 03-config.sh: executing...
Setting timezone to UTC...
Setting PHP-FPM configuration...
Setting PHP INI configuration...
Setting OpCache configuration...
Setting Nginx configuration...
Updating SNMP community...
Initializing LibreNMS files / folders...
Setting LibreNMS configuration...
Checking LibreNMS plugins...
Fixing perms...
Checking additional Monitoring plugins...
Checking alert templates...
[cont-init.d] 03-config.sh: exited 0.
[cont-init.d] 04-svc-main.sh: executing...
Waiting 60s for database to be ready...
Database ready!
Updating database schema...
INFO Nothing to migrate.
INFO Seeding database.
Database\Seeders\DefaultAlertTemplateSeeder ........................ RUNNING
Database\Seeders\DefaultAlertTemplateSeeder ...................... 1 ms DONE
Database\Seeders\ConfigSeeder ...................................... RUNNING
Database\Seeders\ConfigSeeder .................................... 1 ms DONE
Database\Seeders\RolesSeeder ....................................... RUNNING
Database\Seeders\RolesSeeder .................................... 13 ms DONE
Clear cache
INFO Application cache cleared successfully.
INFO Configuration cached successfully.
[cont-init.d] 04-svc-main.sh: exited 0.
[cont-init.d] 05-svc-dispatcher.sh: executing...
[cont-init.d] 05-svc-dispatcher.sh: exited 0.
[cont-init.d] 06-svc-syslogng.sh: executing...
[cont-init.d] 06-svc-syslogng.sh: exited 0.
[cont-init.d] 07-svc-cron.sh: executing...
Creating LibreNMS daily.sh cron task with the following period fields: 15 0 * * *
Creating LibreNMS cron artisan schedule:run
Fixing crontabs permissions...
[cont-init.d] 07-svc-cron.sh: exited 0.
[cont-init.d] 08-svc-snmptrapd.sh: executing...
[cont-init.d] 08-svc-snmptrapd.sh: exited 0.
[cont-init.d] ~-socklog: executing...
[cont-init.d] ~-socklog: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Additional info
We first had other problems but solved them by starting the pod with the privileged service account.
When I go to the pod terminal and execute `/init`, the services start AND we are able to open the GUI (full log below).
When we close the terminal window, the service disappears.
By setting the env vars `GUID` and `PUID` to 0 I got the services to start, but unfortunately php-fpm does not accept it:
ALERT: [pool www] user has not been defined
ALERT: [pool www] user has not been defined
ERROR: failed to post process the configuration
ERROR: failed to post process the configuration
ERROR: FPM initialization failed
ERROR: FPM initialization failed
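For context, php-fpm requires every pool to define a `user` whenever the master process runs as UID 0; without it, startup aborts with exactly the `[pool www] user has not been defined` alert shown above. A minimal pool fragment that would satisfy that check is sketched below; the file name, account names, and listen address are assumptions, not the image's actual configuration:

```ini
; Hypothetical pool fragment, e.g. www.conf in the pool directory.
[www]
; Mandatory when the FPM master runs as root:
user  = librenms
group = librenms
listen = 127.0.0.1:9000
```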
Log output after executing `/init` manually in the pod terminal (the GUI is reachable until the terminal is closed):
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-fix-logs.sh: executing...
[cont-init.d] 00-fix-logs.sh: exited 0.
[cont-init.d] 01-fix-uidgid.sh: executing...
[cont-init.d] 01-fix-uidgid.sh: exited 0.
[cont-init.d] 02-fix-perms.sh: executing...
Fixing perms...
[cont-init.d] 02-fix-perms.sh: exited 0.
[cont-init.d] 03-config.sh: executing...
Setting timezone to UTC...
Setting PHP-FPM configuration...
Setting PHP INI configuration...
Setting OpCache configuration...
Setting Nginx configuration...
Updating SNMP community...
Initializing LibreNMS files / folders...
Setting LibreNMS configuration...
Checking LibreNMS plugins...
Fixing perms...
Checking additional Monitoring plugins...
Checking alert templates...
[cont-init.d] 03-config.sh: exited 0.
[cont-init.d] 04-svc-main.sh: executing...
Waiting 60s for database to be ready...
Database ready!
Updating database schema...
INFO Nothing to migrate.
INFO Seeding database.
Database\Seeders\DefaultAlertTemplateSeeder .............................................................................................. RUNNING
Database\Seeders\DefaultAlertTemplateSeeder ............................................................................................ 1 ms DONE
Database\Seeders\ConfigSeeder ............................................................................................................ RUNNING
Database\Seeders\ConfigSeeder .......................................................................................................... 1 ms DONE
Database\Seeders\RolesSeeder ............................................................................................................. RUNNING
Database\Seeders\RolesSeeder .......................................................................................................... 12 ms DONE
Clear cache
INFO Application cache cleared successfully.
INFO Configuration cached successfully.
[cont-init.d] 04-svc-main.sh: exited 0.
[cont-init.d] 05-svc-dispatcher.sh: executing...
[cont-init.d] 05-svc-dispatcher.sh: exited 0.
[cont-init.d] 06-svc-syslogng.sh: executing...
[cont-init.d] 06-svc-syslogng.sh: exited 0.
[cont-init.d] 07-svc-cron.sh: executing...
Creating LibreNMS daily.sh cron task with the following period fields: 15 0 * * *
Creating LibreNMS cron artisan schedule:run
Fixing crontabs permissions...
[cont-init.d] 07-svc-cron.sh: exited 0.
[cont-init.d] 08-svc-snmptrapd.sh: executing...
[cont-init.d] 08-svc-snmptrapd.sh: exited 0.
[cont-init.d] ~-socklog: executing...
[cont-init.d] ~-socklog: exited 0.
[cont-init.d] done.
[services.d] starting services
crond: crond (busybox 1.36.1) started, log level 8
[services.d] done.
2024/12/05 13:12:45 [notice] 1523#1523: using the "epoll" event method
2024/12/05 13:12:45 [notice] 1523#1523: nginx/1.24.0
2024/12/05 13:12:45 [notice] 1523#1523: OS: Linux 5.14.0-284.73.1.el9_2.x86_64
2024/12/05 13:12:45 [notice] 1523#1523: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/12/05 13:12:45 [notice] 1523#1523: start worker processes
2024/12/05 13:12:45 [notice] 1523#1523: start worker process 1557
2024/12/05 13:12:45 [notice] 1523#1523: start worker process 1559
2024/12/05 13:12:45 [notice] 1523#1523: start worker process 1560
2024/12/05 13:12:45 [notice] 1523#1523: start worker process 1561
2024/12/05 13:12:45 [notice] 1523#1523: start worker process 1562
2024/12/05 13:12:45 [notice] 1523#1523: start worker process 1563
s6-log: fatal: unable to lock /var/log/socklog/cron/lock: Resource busy
[05-Dec-2024 13:12:45] NOTICE: fpm is running, pid 1519
[05-Dec-2024 13:12:45] NOTICE: ready to handle connections
s6-log: fatal: unable to lock /var/log/socklog/cron/lock: Resource busy
s6-log: fatal: unable to lock /var/log/socklog/cron/lock: Resource busy
s6-log: fatal: unable to lock /var/log/socklog/cron/lock: Resource busy
s6-log: fatal: unable to lock /var/log/socklog/cron/lock: Resource busy
s6-log: fatal: unable to lock /var/log/socklog/cron/lock: Resource busy
s6-log: fatal: unable to lock /var/log/socklog/cron/lock: Resource busy
s6-log: fatal: unable to lock /var/log/socklog/cron/lock: Resource busy
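The repeated `s6-log: fatal: unable to lock ...: Resource busy` lines are consistent with a second supervision tree competing for a log-directory lock that the first instance still holds, as would happen when `/init` is run by hand in a container whose init already ran. A minimal sketch of that locking behavior using `flock(1)` (the lock path and timings are invented for the demo, not taken from the image):

```shell
# Demo (not the container's actual code) of the flock semantics behind
# "s6-log: fatal: unable to lock ...: Resource busy".
lock=/tmp/s6log-demo.lock
# First "logger" takes an exclusive, non-blocking lock on fd 9 and holds it.
( flock -n 9 && sleep 2 ) 9>"$lock" &
sleep 0.5
# A second process now fails to take the same lock non-blockingly.
if flock -n "$lock" -c true; then
    echo "lock acquired"
else
    echo "Resource busy"
fi
wait
```

Run as-is, this prints "Resource busy": the second `flock -n` fails while the background holder keeps the lock.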