
Could not find device path for /mnt #40

Open
tjorim opened this issue Dec 29, 2024 · 8 comments
Labels
bug Something isn't working

Comments

tjorim commented Dec 29, 2024

Describe the bug
Error in the HA logs

Expected behavior
No errors

Screenshots
No screenshot, but a copy of the log:

This error originated from a custom integration.

Logger: custom_components.unraid.api.disk_state
Source: custom_components/unraid/api/disk_state.py:64
integration: UNRAID (documentation, issues)
First occurred: 20:20:37 (34 occurrences)
Last logged: 21:47:41

Could not find device path for /mnt

Unraid (please complete the following information):

  • Version 7.0.0-rc.2

Home Assistant (please complete the following information):

  • Version 2025.1.0b3

Home Assistant installation type (please complete the following information):
Home Assistant OS




tjorim added the bug label Dec 29, 2024
MountArarat commented

Same issue here.

domalab (Owner) commented Dec 30, 2024

I made some updates today. Feel free to download the latest source code, overwrite your existing Unraid integration, and restart Home Assistant. Let me know if it fixes the issue.

tjorim (Author) commented Jan 9, 2025

Issue still present on 2025.01.08

domalab (Owner) commented Jan 9, 2025

@tjorim @MountArarat Can you please run the following commands from the Unraid console and provide the output:

First, let's verify the mount points and disk structure:

List all mount points under /mnt

ls -la /mnt/

Show all mounted filesystems under /mnt with their device paths

findmnt -t ext4,xfs,btrfs /mnt/

Check disk devices and their mappings

lsblk -f

Check the SMART status and disk states:

For each disk (sdb, sdc, sdd, etc.)

smartctl -n standby -j /dev/sdb
smartctl -n standby -j /dev/sdc
smartctl -n standby -j /dev/sdd
smartctl -n standby -j /dev/sde

Alternative check using hdparm

hdparm -C /dev/sdb
hdparm -C /dev/sdc
hdparm -C /dev/sdd

Validate the disk mappings in Unraid:

Check disk configuration

cat /boot/config/disk.cfg

Check disk mappings

cat /var/local/emhttp/disks.ini

Check current array status

mdcmd status

Test the specific df command that's causing issues:

Original problematic command

df -B1 /mnt/disk* /mnt/cache /mnt/* 2>/dev/null

New proposed command

df -B1 /mnt/disk[0-9]* /mnt/cache* /mnt/pool* 2>/dev/null

Test each pattern separately

df -B1 /mnt/disk[0-9]*
df -B1 /mnt/cache*
df -B1 /mnt/pool*

Debug the device path resolution:

For each disk, test the findmnt command

findmnt -n -o SOURCE /mnt/disk1
findmnt -n -o SOURCE /mnt/disk2
findmnt -n -o SOURCE /mnt/cache

This will help:

  • Verify the disk structure
  • Confirm SMART status behavior
  • Validate mount points
  • Check whether the new df command patterns work correctly
  • Debug device path resolution
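The findmnt checks above are the decisive test: the integration can only resolve a device path when findmnt returns a real /dev/… source, while the bare /mnt path yields rootfs (or nothing). A minimal sketch of that interpretation in Python (hypothetical helper; the actual disk_state.py logic may differ):

```python
from typing import Optional


def device_from_findmnt(output: str) -> Optional[str]:
    """Interpret the output of `findmnt -n -o SOURCE <mount-point>`.

    Hypothetical helper: only real block devices count. rootfs, shfs,
    tmpfs, or empty output (as for the bare /mnt path) yields None,
    which is presumably what triggers the
    "Could not find device path" log message.
    """
    source = output.strip()
    if not source.startswith("/dev/"):
        return None
    return source


print(device_from_findmnt("/dev/md1p1\n"))  # -> /dev/md1p1
print(device_from_findmnt("rootfs"))        # -> None
```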

domalab (Owner) commented Jan 9, 2025

@tjorim @MountArarat Can you also try updating the api/disk_operations.py file?

Find the async def get_individual_disk_usage(self) -> List[Dict[str, Any]]:

Replace it with this code snippet:

async def get_individual_disk_usage(self) -> List[Dict[str, Any]]:
    """Get usage information for individual disks."""
    try:
        disks = []
        # Get usage for mounted disks with more specific path patterns
        # Note: Using specific patterns to avoid capturing system paths
        usage_result = await self.execute_command(
            "df -B1 /mnt/disk[0-9]* /mnt/cache* 2>/dev/null | "
            "awk 'NR>1 {print $6,$2,$3,$4}'"
        )

        if usage_result.exit_status == 0:
            for line in usage_result.stdout.splitlines():
                try:
                    mount_point, total, used, free = line.split()
                    disk_name = mount_point.replace('/mnt/', '')

                    # Skip invalid or system disks while allowing custom pools
                    if not is_valid_disk_name(disk_name):
                        _LOGGER.debug("Skipping invalid disk name: %s", disk_name)
                        continue

                    # Get current disk state
                    state = await self._state_manager.get_disk_state(disk_name)

                    disk_info = {
                        "name": disk_name,
                        "mount_point": mount_point,
                        "total": int(total),
                        "used": int(used),
                        "free": int(free),
                        "percentage": round((int(used) / int(total) * 100), 1) if int(total) > 0 else 0,
                        "state": state.value,
                        "smart_data": {},  # Will be populated by update_disk_status
                        "smart_status": "Unknown",
                        "temperature": None,
                        "device": None,
                    }

                    # Update disk status with SMART data if disk is active
                    if state == DiskState.ACTIVE:
                        disk_info = await self.update_disk_status(disk_info)

                    disks.append(disk_info)

                except (ValueError, IndexError) as err:
                    _LOGGER.debug("Error parsing disk usage line '%s': %s", line, err)
                    continue

            return disks

        return []

    except Exception as err:
        _LOGGER.error("Error getting disk usage: %s", err)
        return []

Then restart Home Assistant.
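Note that the snippet calls is_valid_disk_name, which is defined elsewhere in the integration. For reference, a hypothetical sketch of what such a filter could look like, based on the system paths seen under /mnt (the real implementation may differ):

```python
import re

# System mount points under /mnt that should never become disk sensors
# (names taken from a typical Unraid /mnt listing; this set is an assumption)
_SYSTEM_NAMES = {"addons", "disks", "remotes", "rootshare", "user", "user0"}


def is_valid_disk_name(name: str) -> bool:
    """Accept array disks (disk1..diskN), cache pools, and custom pools,
    while rejecting Unraid system mount points."""
    if name in _SYSTEM_NAMES:
        return False
    if re.fullmatch(r"disk\d+", name):   # disk1, disk2, ... but not "disks"
        return True
    if name.startswith("cache"):         # cache, cache2, ...
        return True
    # Allow custom pool names: plain lowercase identifiers
    return bool(re.fullmatch(r"[a-z][a-z0-9_]*", name))
```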

tjorim (Author) commented Jan 10, 2025

First, let's verify the mount points and disk structure:
List all mount points under /mnt

ls -la /mnt/

total 16
drwxr-xr-x 13 root   root  260 Dec 28 20:11 ./
drwxr-xr-x 20 root   root  440 Jan  6 14:52 ../
drwxr-xr-x  6 nobody users 140 Dec 28 20:11 RecycleBin/
drwxrwxrwt  2 nobody users  40 Dec 28 20:11 addons/
drwxrwxrwx  1 nobody users  68 Dec 28 20:21 cache/
drwxrwxrwx  7 nobody users  78 Dec 28 20:21 disk1/
drwxrwxrwx  2 nobody users   6 Dec 28 20:21 disk2/
drwxrwxrwx  2 nobody users   6 Dec 28 20:21 disk3/
drwxrwxrwt  2 nobody users  40 Dec 28 20:11 disks/
drwxrwxrwt  2 nobody users  40 Dec 28 20:11 remotes/
drwxrwxrwt  2 nobody users  40 Dec 28 20:11 rootshare/
drwxrwxrwx  1 nobody users  78 Dec 28 20:21 user/
drwxrwxrwx  1 nobody users  78 Dec 28 20:21 user0/

Show all mounted filesystems under /mnt with their device paths

findmnt -t ext4,xfs,btrfs /mnt/

(no output)

Check disk devices and their mappings

lsblk -f

NAME        FSTYPE   FSVER LABEL  UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
loop0       squashfs 4.0                                                 14.9G     4% /usr
loop1       squashfs 4.0                                                 14.9G     4% /lib
loop2       btrfs                 6c3b9702-d31c-4a9a-beaf-ee116ba6fd54    6.8G    70% /var/lib/docker/btrfs
                                                                                      /var/lib/docker
loop3       btrfs                 953b9064-4a07-4a86-b883-4ff206f613e9  904.4M     1% /etc/libvirt
sda                                                                                   
sdb                                                                                   
└─sdb1      vfat     FAT32 UNRAID 272C-EBE2                              12.9G    10% /boot
sdc                                                                                   
└─sdc1      xfs                   5408d75d-e084-4920-ba94-3dbe68ab3084                
sdd                                                                                   
└─sdd1      xfs                   c25a1741-654f-4d1d-993a-13dab762e59e                
sde                                                                                   
└─sde1      xfs                   a05f5077-6d2e-4e2a-b77e-957bad403646                
sdf                                                                                   
└─sdf1      xfs                   360d906b-e8e5-4a17-94d0-bb1f7289e35c                
md1p1                                                                     3.2T    13% /mnt/disk1
md2p1                                                                     3.6T     1% /mnt/disk2
md3p1                                                                     3.6T     1% /mnt/disk3
nvme1n1                                                                               
└─nvme1n1p1 btrfs                 ac4e7e68-2a4a-4d35-8087-1107238141ce  389.6G    16% /mnt/cache
nvme0n1                                                                               
└─nvme0n1p1 btrfs                 ac4e7e68-2a4a-4d35-8087-1107238141ce                

Check the SMART status and disk states:
For each disk (sdb, sdc, sdd, etc)

smartctl -n standby -j /dev/sdb

{
  "json_format_version": [
    1,
    0
  ],
  "smartctl": {
    "version": [
      7,
      4
    ],
    "pre_release": false,
    "svn_revision": "5530",
    "platform_info": "x86_64-linux-6.6.66-Unraid",
    "build_info": "(local build)",
    "argv": [
      "smartctl",
      "-n",
      "standby",
      "-j",
      "/dev/sdb"
    ],
    "messages": [
      {
        "string": "/dev/sdb: Unknown USB bridge [0x0781:0x5571 (0x100)]",
        "severity": "error"
      }
    ],
    "exit_status": 1
  },
  "local_time": {
    "time_t": 1736506984,
    "asctime": "Fri Jan 10 12:03:04 2025 CET"
  }
}

smartctl -n standby -j /dev/sdc

{
  "json_format_version": [
    1,
    0
  ],
  "smartctl": {
    "version": [
      7,
      4
    ],
    "pre_release": false,
    "svn_revision": "5530",
    "platform_info": "x86_64-linux-6.6.66-Unraid",
    "build_info": "(local build)",
    "argv": [
      "smartctl",
      "-n",
      "standby",
      "-j",
      "/dev/sdc"
    ],
    "messages": [
      {
        "string": "Device is in STANDBY mode, exit(2)",
        "severity": "information"
      }
    ],
    "exit_status": 2
  },
  "local_time": {
    "time_t": 1736507019,
    "asctime": "Fri Jan 10 12:03:39 2025 CET"
  },
  "device": {
    "name": "/dev/sdc",
    "info_name": "/dev/sdc [SAT]",
    "type": "sat",
    "protocol": "ATA"
  }
}

smartctl -n standby -j /dev/sdd

{
  "json_format_version": [
    1,
    0
  ],
  "smartctl": {
    "version": [
      7,
      4
    ],
    "pre_release": false,
    "svn_revision": "5530",
    "platform_info": "x86_64-linux-6.6.66-Unraid",
    "build_info": "(local build)",
    "argv": [
      "smartctl",
      "-n",
      "standby",
      "-j",
      "/dev/sdd"
    ],
    "messages": [
      {
        "string": "Device is in STANDBY mode, exit(2)",
        "severity": "information"
      }
    ],
    "exit_status": 2
  },
  "local_time": {
    "time_t": 1736507042,
    "asctime": "Fri Jan 10 12:04:02 2025 CET"
  },
  "device": {
    "name": "/dev/sdd",
    "info_name": "/dev/sdd [SAT]",
    "type": "sat",
    "protocol": "ATA"
  }
}

smartctl -n standby -j /dev/sde

{
  "json_format_version": [
    1,
    0
  ],
  "smartctl": {
    "version": [
      7,
      4
    ],
    "pre_release": false,
    "svn_revision": "5530",
    "platform_info": "x86_64-linux-6.6.66-Unraid",
    "build_info": "(local build)",
    "argv": [
      "smartctl",
      "-n",
      "standby",
      "-j",
      "/dev/sde"
    ],
    "messages": [
      {
        "string": "Device is in STANDBY mode, exit(2)",
        "severity": "information"
      }
    ],
    "exit_status": 2
  },
  "local_time": {
    "time_t": 1736507061,
    "asctime": "Fri Jan 10 12:04:21 2025 CET"
  },
  "device": {
    "name": "/dev/sde",
    "info_name": "/dev/sde [SAT]",
    "type": "sat",
    "protocol": "ATA"
  }
}

Alternative check using hdparm

hdparm -C /dev/sdb

/dev/sdb:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 14 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 drive state is:  standby

hdparm -C /dev/sdc

/dev/sdc:
 drive state is:  standby

hdparm -C /dev/sdd

/dev/sdd:
 drive state is:  standby

Validate the disk mappings in Unraid:
Check disk configuration

cat /boot/config/disk.cfg

# Generated settings:
startArray="yes"
spindownDelay="30"
spinupGroups="no"
shutdownTimeout="90"
luksKeyfile="/root/keyfile"
poll_attributes="1800"
defaultFsType="xfs"
queueDepth="auto"
nr_requests="Auto"
md_scheduler="auto"
md_num_stripes="1280"
md_queue_limit="80"
md_sync_limit="5"
md_write_method="auto"
diskIdSlot.0="-"
diskSpindownDelay.0="-1"
diskSpinupGroup.0=""
diskIdSlot.1="-"
diskSpindownDelay.1="-1"
diskSpinupGroup.1=""
diskFsType.1="xfs"
diskFsProfile.1=""
diskFsWidth.1="0"
diskFsGroups.1="0"
diskAutotrim.1="off"
diskCompression.1="off"
diskComment.1=""
diskWarning.1=""
diskCritical.1=""
diskExport.1="e"
diskCaseSensitive.1="auto"
diskSecurity.1="public"
diskReadList.1=""
diskWriteList.1=""
diskVolsizelimit.1=""
diskExportNFS.1="-"
diskExportNFSFsid.1="0"
diskSecurityNFS.1="public"
diskHostListNFS.1=""
diskIdSlot.2="-"
diskSpindownDelay.2="-1"
diskSpinupGroup.2=""
diskFsType.2="xfs"
diskFsProfile.2=""
diskFsWidth.2="0"
diskFsGroups.2="0"
diskAutotrim.2="off"
diskCompression.2="off"
diskComment.2=""
diskWarning.2=""
diskCritical.2=""
diskExport.2="e"
diskCaseSensitive.2="auto"
diskSecurity.2="public"
diskReadList.2=""
diskWriteList.2=""
diskVolsizelimit.2=""
diskExportNFS.2="-"
diskExportNFSFsid.2="0"
diskSecurityNFS.2="public"
diskHostListNFS.2=""
diskIdSlot.3="-"
diskSpindownDelay.3="-1"
diskSpinupGroup.3=""
diskFsType.3="xfs"
diskFsProfile.3=""
diskFsWidth.3="0"
diskFsGroups.3="0"
diskAutotrim.3="off"
diskCompression.3="off"
diskComment.3=""
diskWarning.3=""
diskCritical.3=""
diskExport.3="e"
diskCaseSensitive.3="auto"
diskSecurity.3="public"
diskReadList.3=""
diskWriteList.3=""
diskVolsizelimit.3=""
diskExportNFS.3="-"
diskExportNFSFsid.3="0"
diskSecurityNFS.3="public"
diskHostListNFS.3=""
diskIdSlot.29="-"
diskSpindownDelay.29="-1"
diskSpinupGroup.29=""

Check disk mappings

cat /var/local/emhttp/disks.ini

["parity"]
idx="0"
name="parity"
device="sdc"
id="WDC_WD40EFZX-68AWUN0_WD-WX92DA06LT6S"
transport="ata"
size="3907018532"
status="DISK_OK"
format="GPT: 4KiB-aligned"
rotational="1"
discard="0"
removable="0"
spundown="1"
temp="*"
numReads="976755672"
numWrites="1656"
numErrors="0"
type="Parity"
color="green-blink"
spindownDelay="-1"
spinupGroup=""
idSb="WDC_WD40EFZX-68AWUN0_WD-WX92DA06LT6S"
sizeSb="3907018532"
["disk1"]
idx="1"
name="disk1"
device="sdd"
id="WDC_WD40EFZX-68AWUN0_WD-WX92DA02KKT2"
transport="ata"
size="3907018532"
status="DISK_OK"
format="GPT: 4KiB-aligned"
rotational="1"
discard="0"
removable="0"
spundown="1"
temp="*"
numReads="976756266"
numWrites="554"
numErrors="0"
type="Data"
color="green-blink"
spindownDelay="-1"
spinupGroup=""
idSb="WDC_WD40EFZX-68AWUN0_WD-WX92DA02KKT2"
sizeSb="3907018532"
deviceSb="md1p1"
luksState="0"
fsType="xfs"
fsStatus="Mounted"
autotrim="off"
compression="off"
warning=""
critical=""
exportable="no"
comment=""
fsColor="green-on"
fsSize="3905110812"
fsFree="3413458892"
fsUsed="491651920"
["disk2"]
idx="2"
name="disk2"
device="sde"
id="WDC_WD40EFZX-68AWUN0_WD-WX92DA02KYYA"
transport="ata"
size="3907018532"
status="DISK_OK"
format="GPT: 4KiB-aligned"
rotational="1"
discard="0"
removable="0"
spundown="1"
temp="*"
numReads="976755612"
numWrites="551"
numErrors="0"
type="Data"
color="green-blink"
spindownDelay="-1"
spinupGroup=""
idSb="WDC_WD40EFZX-68AWUN0_WD-WX92DA02KYYA"
sizeSb="3907018532"
deviceSb="md2p1"
luksState="0"
fsType="xfs"
fsStatus="Mounted"
autotrim="off"
compression="off"
warning=""
critical=""
exportable="no"
comment=""
fsColor="green-on"
fsSize="3905110812"
fsFree="3877850892"
fsUsed="27259920"
["disk3"]
idx="3"
name="disk3"
device="sdf"
id="WDC_WD40EFZX-68AWUN0_WD-WX92DA06LDPE"
transport="ata"
size="3907018532"
status="DISK_OK"
format="GPT: 4KiB-aligned"
rotational="1"
discard="0"
removable="0"
spundown="1"
temp="*"
numReads="976755612"
numWrites="551"
numErrors="0"
type="Data"
color="green-blink"
spindownDelay="-1"
spinupGroup=""
idSb="WDC_WD40EFZX-68AWUN0_WD-WX92DA06LDPE"
sizeSb="3907018532"
deviceSb="md3p1"
luksState="0"
fsType="xfs"
fsStatus="Mounted"
autotrim="off"
compression="off"
warning=""
critical=""
exportable="no"
comment=""
fsColor="green-on"
fsSize="3905110812"
fsFree="3877850892"
fsUsed="27259920"
["parity2"]
idx="29"
name="parity2"
device=""
id=""
transport=""
size="0"
status="DISK_NP_DSBL"
format="-"
rotational=""
discard=""
removable=""
spundown="0"
temp="*"
numReads="0"
numWrites="0"
numErrors="0"
type="Parity"
color="grey-off"
spindownDelay="-1"
spinupGroup=""
idSb=""
sizeSb="0"
["cache"]
idx="30"
name="cache"
device="nvme1n1"
id="Samsung_SSD_970_EVO_Plus_500GB_S4EVNM0R218303Z"
transport="nvme"
size="488385560"
status="DISK_OK"
format="MBR: 1MiB-aligned"
rotational="0"
discard="1"
removable="0"
spundown="0"
temp="31"
numReads="91823"
numWrites="766782"
numErrors="0"
type="Cache"
color="green-on"
spindownDelay="-1"
spinupGroup=""
idSb="Samsung_SSD_970_EVO_Plus_500GB_S4EVNM0R218303Z"
sizeSb="488385560"
deviceSb="nvme1n1p1"
luksState="0"
fsType="btrfs"
fsStatus="Mounted"
autotrim="on"
compression="off"
warning=""
critical=""
exportable="no"
comment=""
fsColor="green-on"
fsSize="488385560"
fsFree="408556884"
fsUsed="77881772"
fsProfile="raid1"
fsWidth="2"
fsGroups="1"
state="STARTED"
slots="2"
devices="2"
devicesSb="2"
uuid="ac4e7e68-2a4a-4d35-8087-1107238141ce"
shareEnabled="yes"
shareFloor="0"
nameOrig="cache"
["cache2"]
idx="31"
name="cache2"
device="nvme0n1"
id="Samsung_SSD_970_EVO_Plus_500GB_S4EVNMFN724172A"
transport="nvme"
size="488385560"
status="DISK_OK"
format="MBR: 1MiB-aligned"
rotational="0"
discard="1"
removable="0"
spundown="0"
temp="34"
numReads="51802"
numWrites="766743"
numErrors="0"
type="Cache"
color="green-on"
spindownDelay="-1"
spinupGroup=""
idSb="Samsung_SSD_970_EVO_Plus_500GB_S4EVNMFN724172A"
sizeSb="488385560"
deviceSb="nvme0n1p1"
luksState="0"
["flash"]
idx="32"
name="flash"
device="sdb"
id="Cruzer_Fit"
transport="usb"
size="15015904"
status="DISK_OK"
format="unknown"
rotational="1"
discard="0"
removable="1"
spundown="0"
temp="*"
numReads="12221"
numWrites="4862"
numErrors="0"
type="Flash"
color="green-on"
fsType="vfat"
fsStatus="Mounted"
autotrim="off"
compression="off"
warning=""
critical=""
exportable="yes"
comment="Unraid OS boot device"
fsColor="yellow-on"
fsSize="15000232"
fsFree="13568752"
fsUsed="1431480"

Check current array status

mdcmd status

sbName=/boot/config/super.dat
sbVersion=2.9.17
sbCreated=1618326190
sbUpdated=1736231273
sbEvents=310
sbState=1
sbNumDisks=5
sbLabel=0781-5571-2002-123120085936
sbSynced=1736205306
sbSynced2=1736231273
sbSyncErrs=0
sbSyncExit=0
mdVersion=2.9.33
mdState=STARTED
mdNumDisks=4
mdNumDisabled=1
mdNumReplaced=0
mdNumInvalid=1
mdNumMissing=0
mdNumWrong=0
mdNumNew=0
mdSwapP=0
mdSwapQ=0
mdResyncAction=check P
mdResyncSize=3907018532
mdResyncCorr=0
mdResync=0
mdResyncPos=0
mdResyncDt=0
mdResyncDb=0
diskNumber.0=0
diskName.0=
diskSize.0=3907018532
diskState.0=7
diskId.0=WDC_WD40EFZX-68AWUN0_WD-WX92DA06LT6S
rdevNumber.0=0
rdevStatus.0=DISK_OK
rdevName.0=sdc
rdevOffset.0=64
rdevSize.0=3907018532
rdevId.0=WDC_WD40EFZX-68AWUN0_WD-WX92DA06LT6S
rdevReads.0=976755672
rdevWrites.0=1656
rdevNumErrors.0=0
diskNumber.1=1
diskName.1=md1p1
diskSize.1=3907018532
diskState.1=7
diskId.1=WDC_WD40EFZX-68AWUN0_WD-WX92DA02KKT2
rdevNumber.1=1
rdevStatus.1=DISK_OK
rdevName.1=sdd
rdevOffset.1=64
rdevSize.1=3907018532
rdevId.1=WDC_WD40EFZX-68AWUN0_WD-WX92DA02KKT2
rdevReads.1=976756266
rdevWrites.1=554
rdevNumErrors.1=0
diskNumber.2=2
diskName.2=md2p1
diskSize.2=3907018532
diskState.2=7
diskId.2=WDC_WD40EFZX-68AWUN0_WD-WX92DA02KYYA
rdevNumber.2=2
rdevStatus.2=DISK_OK
rdevName.2=sde
rdevOffset.2=64
rdevSize.2=3907018532
rdevId.2=WDC_WD40EFZX-68AWUN0_WD-WX92DA02KYYA
rdevReads.2=976755612
rdevWrites.2=551
rdevNumErrors.2=0
diskNumber.3=3
diskName.3=md3p1
diskSize.3=3907018532
diskState.3=7
diskId.3=WDC_WD40EFZX-68AWUN0_WD-WX92DA06LDPE
rdevNumber.3=3
rdevStatus.3=DISK_OK
rdevName.3=sdf
rdevOffset.3=64
rdevSize.3=3907018532
rdevId.3=WDC_WD40EFZX-68AWUN0_WD-WX92DA06LDPE
rdevReads.3=976755612
rdevWrites.3=551
rdevNumErrors.3=0
diskNumber.4=4
diskName.4=
diskSize.4=0
diskState.4=0
diskId.4=
rdevNumber.4=4
rdevStatus.4=DISK_NP
rdevName.4=
rdevOffset.4=0
rdevSize.4=0
rdevId.4=
rdevReads.4=0
rdevWrites.4=0
rdevNumErrors.4=0
diskNumber.5=5
diskName.5=
diskSize.5=0
diskState.5=0
diskId.5=
rdevNumber.5=5
rdevStatus.5=DISK_NP
rdevName.5=
rdevOffset.5=0
rdevSize.5=0
rdevId.5=
rdevReads.5=0
rdevWrites.5=0
rdevNumErrors.5=0
diskNumber.6=6
diskName.6=
diskSize.6=0
diskState.6=0
diskId.6=
rdevNumber.6=6
rdevStatus.6=DISK_NP
rdevName.6=
rdevOffset.6=0
rdevSize.6=0
rdevId.6=
rdevReads.6=0
rdevWrites.6=0
rdevNumErrors.6=0
diskNumber.7=7
diskName.7=
diskSize.7=0
diskState.7=0
diskId.7=
rdevNumber.7=7
rdevStatus.7=DISK_NP
rdevName.7=
rdevOffset.7=0
rdevSize.7=0
rdevId.7=
rdevReads.7=0
rdevWrites.7=0
rdevNumErrors.7=0
diskNumber.8=8
diskName.8=
diskSize.8=0
diskState.8=0
diskId.8=
rdevNumber.8=8
rdevStatus.8=DISK_NP
rdevName.8=
rdevOffset.8=0
rdevSize.8=0
rdevId.8=
rdevReads.8=0
rdevWrites.8=0
rdevNumErrors.8=0
diskNumber.9=9
diskName.9=
diskSize.9=0
diskState.9=0
diskId.9=
rdevNumber.9=9
rdevStatus.9=DISK_NP
rdevName.9=
rdevOffset.9=0
rdevSize.9=0
rdevId.9=
rdevReads.9=0
rdevWrites.9=0
rdevNumErrors.9=0
diskNumber.10=10
diskName.10=
diskSize.10=0
diskState.10=0
diskId.10=
rdevNumber.10=10
rdevStatus.10=DISK_NP
rdevName.10=
rdevOffset.10=0
rdevSize.10=0
rdevId.10=
rdevReads.10=0
rdevWrites.10=0
rdevNumErrors.10=0
diskNumber.11=11
diskName.11=
diskSize.11=0
diskState.11=0
diskId.11=
rdevNumber.11=11
rdevStatus.11=DISK_NP
rdevName.11=
rdevOffset.11=0
rdevSize.11=0
rdevId.11=
rdevReads.11=0
rdevWrites.11=0
rdevNumErrors.11=0
diskNumber.12=12
diskName.12=
diskSize.12=0
diskState.12=0
diskId.12=
rdevNumber.12=12
rdevStatus.12=DISK_NP
rdevName.12=
rdevOffset.12=0
rdevSize.12=0
rdevId.12=
rdevReads.12=0
rdevWrites.12=0
rdevNumErrors.12=0
diskNumber.13=13
diskName.13=
diskSize.13=0
diskState.13=0
diskId.13=
rdevNumber.13=13
rdevStatus.13=DISK_NP
rdevName.13=
rdevOffset.13=0
rdevSize.13=0
rdevId.13=
rdevReads.13=0
rdevWrites.13=0
rdevNumErrors.13=0
diskNumber.14=14
diskName.14=
diskSize.14=0
diskState.14=0
diskId.14=
rdevNumber.14=14
rdevStatus.14=DISK_NP
rdevName.14=
rdevOffset.14=0
rdevSize.14=0
rdevId.14=
rdevReads.14=0
rdevWrites.14=0
rdevNumErrors.14=0
diskNumber.15=15
diskName.15=
diskSize.15=0
diskState.15=0
diskId.15=
rdevNumber.15=15
rdevStatus.15=DISK_NP
rdevName.15=
rdevOffset.15=0
rdevSize.15=0
rdevId.15=
rdevReads.15=0
rdevWrites.15=0
rdevNumErrors.15=0
diskNumber.16=16
diskName.16=
diskSize.16=0
diskState.16=0
diskId.16=
rdevNumber.16=16
rdevStatus.16=DISK_NP
rdevName.16=
rdevOffset.16=0
rdevSize.16=0
rdevId.16=
rdevReads.16=0
rdevWrites.16=0
rdevNumErrors.16=0
diskNumber.17=17
diskName.17=
diskSize.17=0
diskState.17=0
diskId.17=
rdevNumber.17=17
rdevStatus.17=DISK_NP
rdevName.17=
rdevOffset.17=0
rdevSize.17=0
rdevId.17=
rdevReads.17=0
rdevWrites.17=0
rdevNumErrors.17=0
diskNumber.18=18
diskName.18=
diskSize.18=0
diskState.18=0
diskId.18=
rdevNumber.18=18
rdevStatus.18=DISK_NP
rdevName.18=
rdevOffset.18=0
rdevSize.18=0
rdevId.18=
rdevReads.18=0
rdevWrites.18=0
rdevNumErrors.18=0
diskNumber.19=19
diskName.19=
diskSize.19=0
diskState.19=0
diskId.19=
rdevNumber.19=19
rdevStatus.19=DISK_NP
rdevName.19=
rdevOffset.19=0
rdevSize.19=0
rdevId.19=
rdevReads.19=0
rdevWrites.19=0
rdevNumErrors.19=0
diskNumber.20=20
diskName.20=
diskSize.20=0
diskState.20=0
diskId.20=
rdevNumber.20=20
rdevStatus.20=DISK_NP
rdevName.20=
rdevOffset.20=0
rdevSize.20=0
rdevId.20=
rdevReads.20=0
rdevWrites.20=0
rdevNumErrors.20=0
diskNumber.21=21
diskName.21=
diskSize.21=0
diskState.21=0
diskId.21=
rdevNumber.21=21
rdevStatus.21=DISK_NP
rdevName.21=
rdevOffset.21=0
rdevSize.21=0
rdevId.21=
rdevReads.21=0
rdevWrites.21=0
rdevNumErrors.21=0
diskNumber.22=22
diskName.22=
diskSize.22=0
diskState.22=0
diskId.22=
rdevNumber.22=22
rdevStatus.22=DISK_NP
rdevName.22=
rdevOffset.22=0
rdevSize.22=0
rdevId.22=
rdevReads.22=0
rdevWrites.22=0
rdevNumErrors.22=0
diskNumber.23=23
diskName.23=
diskSize.23=0
diskState.23=0
diskId.23=
rdevNumber.23=23
rdevStatus.23=DISK_NP
rdevName.23=
rdevOffset.23=0
rdevSize.23=0
rdevId.23=
rdevReads.23=0
rdevWrites.23=0
rdevNumErrors.23=0
diskNumber.24=24
diskName.24=
diskSize.24=0
diskState.24=0
diskId.24=
rdevNumber.24=24
rdevStatus.24=DISK_NP
rdevName.24=
rdevOffset.24=0
rdevSize.24=0
rdevId.24=
rdevReads.24=0
rdevWrites.24=0
rdevNumErrors.24=0
diskNumber.25=25
diskName.25=
diskSize.25=0
diskState.25=0
diskId.25=
rdevNumber.25=25
rdevStatus.25=DISK_NP
rdevName.25=
rdevOffset.25=0
rdevSize.25=0
rdevId.25=
rdevReads.25=0
rdevWrites.25=0
rdevNumErrors.25=0
diskNumber.26=26
diskName.26=
diskSize.26=0
diskState.26=0
diskId.26=
rdevNumber.26=26
rdevStatus.26=DISK_NP
rdevName.26=
rdevOffset.26=0
rdevSize.26=0
rdevId.26=
rdevReads.26=0
rdevWrites.26=0
rdevNumErrors.26=0
diskNumber.27=27
diskName.27=
diskSize.27=0
diskState.27=0
diskId.27=
rdevNumber.27=27
rdevStatus.27=DISK_NP
rdevName.27=
rdevOffset.27=0
rdevSize.27=0
rdevId.27=
rdevReads.27=0
rdevWrites.27=0
rdevNumErrors.27=0
diskNumber.28=28
diskName.28=
diskSize.28=0
diskState.28=0
diskId.28=
rdevNumber.28=28
rdevStatus.28=DISK_NP
rdevName.28=
rdevOffset.28=0
rdevSize.28=0
rdevId.28=
rdevReads.28=0
rdevWrites.28=0
rdevNumErrors.28=0
diskNumber.29=29
diskName.29=
diskSize.29=0
diskState.29=4
diskId.29=
rdevNumber.29=29
rdevStatus.29=DISK_NP_DSBL
rdevName.29=
rdevOffset.29=0
rdevSize.29=0
rdevId.29=
rdevReads.29=0
rdevWrites.29=0
rdevNumErrors.29=0

Test the specific df command that's causing issues:
Original problematic command

df -B1 /mnt/disk* /mnt/cache /mnt/* 2>/dev/null

Filesystem          1B-blocks         Used      Available Use% Mounted on
/dev/md1p1      3998833471488 503451566080  3495381905408  13% /mnt/disk1
/dev/md2p1      3998833471488  27914158080  3970919313408   1% /mnt/disk2
/dev/md3p1      3998833471488  27914158080  3970919313408   1% /mnt/disk3
tmpfs                 1048576            0        1048576   0% /mnt/disks
/dev/nvme1n1p1   500106813440  79750955008   418362228736  17% /mnt/cache
rootfs            16605904896    624128000    15981776896   4% /mnt
tmpfs                 1048576            0        1048576   0% /mnt/addons
/dev/nvme1n1p1   500106813440  79750955008   418362228736  17% /mnt/cache
/dev/md1p1      3998833471488 503451566080  3495381905408  13% /mnt/disk1
/dev/md2p1      3998833471488  27914158080  3970919313408   1% /mnt/disk2
/dev/md3p1      3998833471488  27914158080  3970919313408   1% /mnt/disk3
tmpfs                 1048576            0        1048576   0% /mnt/disks
tmpfs                 1048576            0        1048576   0% /mnt/remotes
tmpfs                 1048576            0        1048576   0% /mnt/rootshare
shfs           11996500414464 559279882240 11437220532224   5% /mnt/user
shfs           11996500414464 559279882240 11437220532224   5% /mnt/user0

New proposed command

df -B1 /mnt/disk[0-9]* /mnt/cache* /mnt/pool* 2>/dev/null

Filesystem         1B-blocks         Used     Available Use% Mounted on
/dev/md1p1     3998833471488 503451566080 3495381905408  13% /mnt/disk1
/dev/md2p1     3998833471488  27914158080 3970919313408   1% /mnt/disk2
/dev/md3p1     3998833471488  27914158080 3970919313408   1% /mnt/disk3
/dev/nvme1n1p1  500106813440  79750971392  418362212352  17% /mnt/cache

Test each pattern separately

df -B1 /mnt/disk[0-9]*

Filesystem         1B-blocks         Used     Available Use% Mounted on
/dev/md1p1     3998833471488 503451566080 3495381905408  13% /mnt/disk1
/dev/md2p1     3998833471488  27914158080 3970919313408   1% /mnt/disk2
/dev/md3p1     3998833471488  27914158080 3970919313408   1% /mnt/disk3

df -B1 /mnt/cache*

Filesystem        1B-blocks        Used    Available Use% Mounted on
/dev/nvme1n1p1 500106813440 79750987776 418362195968  17% /mnt/cache

df -B1 /mnt/pool*

df: '/mnt/pool*': No such file or directory
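The runs above show the difference: the bare /mnt/* glob in the original command also matches the /mnt rootfs mount and the tmpfs system paths, while the new patterns do not. A quick sanity check of the globbing behavior, sketched with Python's fnmatch (mount list taken from the df output above):

```python
import fnmatch

# Mount points taken from the df output above
mounts = ["/mnt/disk1", "/mnt/disk2", "/mnt/disks", "/mnt/addons",
          "/mnt/cache", "/mnt/user", "/mnt"]

# Patterns from the proposed df command
patterns = ["/mnt/disk[0-9]*", "/mnt/cache*", "/mnt/pool*"]

matched = [m for m in mounts
           if any(fnmatch.fnmatch(m, p) for p in patterns)]
# "/mnt/disks" is excluded because "s" is not in [0-9];
# "/mnt" and "/mnt/user" match no pattern at all
print(matched)  # ['/mnt/disk1', '/mnt/disk2', '/mnt/cache']
```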

Debug the device path resolution:
For each disk, test the findmnt command

findmnt -n -o SOURCE /mnt/disk1

/dev/md1p1

findmnt -n -o SOURCE /mnt/disk2

/dev/md2p1

findmnt -n -o SOURCE /mnt/cache

/dev/nvme1n1p1

tjorim (Author) commented Jan 10, 2025

[screenshot]

(after adjusting the code)

domalab (Owner) commented Jan 10, 2025

[screenshot]

(after adjusting the code)

Do you still see errors in the logs?

The integration is not supposed to create /mnt sensors, so if one was created before the code update, it can be deleted.

You should now only see sensors for the individual disks, array, cache, and custom pool devices.
