ravicorning changed the title from "Updated Allocatable resources" to "Allocatable resources not being updated" on Mar 16, 2021.
I would assume that after a Pod is allocated its requested resources, those amounts would be subtracted from the node's allocatable resources. However, I see the same amounts before and after the Pod is scheduled. This is my Pod spec:
[root@corningopenness testApp]# more busybox.yml
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    app: busybox1
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-openness
spec:
  nodeName: opennesswkn-1
  containers:
  - command:
    env:
    - value: "/host/proc"
    imagePullPolicy: IfNotPresent
    name: busybox
    resources:
      requests:
        memory: 4Gi
        hugepages-1Gi: 4Gi
        cmk.intel.com/exclusive-cores: 10
        intel.com/intel_sriov_netdevice: '1'
        intel.com/intel_fec_5g: '1'
      limits:
        hugepages-1Gi: 4Gi
        memory: 4Gi
        cmk.intel.com/exclusive-cores: 10
        intel.com/intel_sriov_netdevice: '1'
        intel.com/intel_fec_5g: '1'
  restartPolicy: Never
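(The containers section above is truncated in the issue: the image line and the command/args are cut off, and the env entry is missing its name. For reference only, a self-contained version of that section might look like the sketch below; the busybox image and sleep command are purely hypothetical placeholders, and the CMK_PROC_FS name is taken from the kubectl describe output further down.)

  containers:
  - name: busybox
    image: busybox                    # hypothetical; the real image line is cut off above
    command: ["sleep", "infinity"]    # hypothetical; the real command is cut off above
    imagePullPolicy: IfNotPresent
    env:
    - name: CMK_PROC_FS               # name taken from the kubectl describe output below
      value: "/host/proc"
    resources:
      requests:
        memory: 4Gi
        hugepages-1Gi: 4Gi
        cmk.intel.com/exclusive-cores: 10
        intel.com/intel_sriov_netdevice: '1'
        intel.com/intel_fec_5g: '1'
      limits:
        memory: 4Gi
        hugepages-1Gi: 4Gi
        cmk.intel.com/exclusive-cores: 10
        intel.com/intel_sriov_netdevice: '1'
        intel.com/intel_fec_5g: '1'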
Before applying the pod spec:
[root@corningopenness testApp]# kubectl get node opennesswkn-1 -o json | jq '.status.allocatable'
{
  "cpu": "46",
  "devices.kubevirt.io/kvm": "110",
  "devices.kubevirt.io/tun": "110",
  "devices.kubevirt.io/vhost-net": "110",
  "ephemeral-storage": "96589578081",
  "hugepages-1Gi": "20Gi",
  "intel.com/intel_fec_5g": "2",
  "intel.com/intel_sriov_netdevice": "12",
  "memory": "110453496Ki",
  "pods": "110"
}
After applying the pod spec:
[root@corningopenness testApp]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
busybox1   1/1     Running   0          49s
[root@corningopenness openness-experience-kits-master]# kubectl describe pods busybox1
--snip--
    Limits:
      cmk.intel.com/exclusive-cores:    10
      hugepages-1Gi:                    4Gi
      intel.com/intel_fec_5g:           1
      intel.com/intel_sriov_netdevice:  1
      memory:                           4Gi
    Requests:
      cmk.intel.com/exclusive-cores:    10
      hugepages-1Gi:                    4Gi
      intel.com/intel_fec_5g:           1
      intel.com/intel_sriov_netdevice:  1
      memory:                           4Gi
    Environment:
      CMK_PROC_FS:    /host/proc
      CMK_PROC_FS:    /host/proc
      CMK_NUM_CORES:  10
    Mounts:
[root@corningopenness testApp]# kubectl get node opennesswkn-1 -o json | jq '.status.allocatable'
{
  "cpu": "46",
  "devices.kubevirt.io/kvm": "110",
  "devices.kubevirt.io/tun": "110",
  "devices.kubevirt.io/vhost-net": "110",
  "ephemeral-storage": "96589578081",
  "hugepages-1Gi": "20Gi",
  "intel.com/intel_fec_5g": "2",
  "intel.com/intel_sriov_netdevice": "12",
  "memory": "110453496Ki",
  "pods": "110"
}
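As far as I understand, .status.allocatable is a static figure (node capacity minus system and kube reservations) and is not decremented as pods are scheduled; what the scheduler has actually accounted for per pod is summed under the "Allocated resources" section of kubectl describe node instead. Assuming the same kubectl access as above, that running total could be checked with:

kubectl describe node opennesswkn-1 | grep -A 15 "Allocated resources"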
When I check the list of pods on the worker node, I see that one of the pods is having issues:
[root@corningopenness testApp]# kubectl get pods
NAME                                  READY   STATUS             RESTARTS   AGE
busybox1                              1/1     Running            0          84s
intel-rmd-operator-78c8d6b47c-csbbm   1/1     Running            5          7h11m
rmd-node-agent-opennesswkn-1          1/1     Running            0          6h52m
rmd-opennesswkn-1                     0/1     CrashLoopBackOff   85         6h52m
Is it because of that rmd pod? How do I fix it?
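For the CrashLoopBackOff, a generic first step (not specific to RMD) would be to pull the crashed container's previous logs and the pod events:

kubectl logs rmd-opennesswkn-1 --previous   # logs from the last failed container run
kubectl describe pod rmd-opennesswkn-1      # recent events and restart reason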