
install nsm problem #11930

Open
316953425 opened this issue May 14, 2024 · 3 comments

Comments


316953425 commented May 14, 2024

hi @glazychev-art
I installed NSM v1.13.0-release.
First I installed SPIRE following https://github.com/networkservicemesh/deployments-k8s/tree/release/v1.13.0/examples/spire/single_cluster, but it failed:

spire-server cannot start.

Error log:
Warning FailedScheduling 46s default-scheduler 0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

The v1.11.0 release has no such problem.
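
For reference, a quick way to confirm what this scheduler message points at (a sketch, assuming the default spire namespace and the PVC name that appears later in this thread):

kubectl get pvc -n spire
kubectl describe pvc spire-data-spire-server-0 -n spire   # shows why the claim is unbound
kubectl get storageclass                                   # a claim stays Pending if no StorageClass/provisioner can satisfy it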

VitalyGushin moved this to Todo in Backlog on May 14, 2024
@denis-tingaikin (Member) commented:

As far as I know, we started to use a PersistentVolumeClaim for spire in v1.12.

The problem comes from e541301.

To fix it locally, you may play with:

  volumeClaimTemplates:
    - metadata:
        name: spire-data
        namespace: spire
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

Most likely, you will need to remove the storage request, or make sure that your cluster provider has this storage.
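
One concrete way to handle the "make sure your cluster provider has this storage" part, if the cluster already ships a provisioner but has no default StorageClass, is to mark an existing class as default so the spire-data claim (which sets no storageClassName) can bind; a sketch, with <your-storageclass> standing in for whatever your cluster actually lists:

kubectl get storageclass
kubectl patch storageclass <your-storageclass> \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'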


denis-tingaikin commented May 14, 2024

Also, since NSM is backward compatible, you may try using spire from v1.11.0 and deploying NSM v1.13.0. I suppose that should be a quick workaround for now.
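
A sketch of that workaround, assuming the release/v1.11.0 branch has the same spire/single_cluster example layout as the v1.13.0 link above and that the directory can be applied with kustomize:

git clone --branch release/v1.11.0 https://github.com/networkservicemesh/deployments-k8s.git
kubectl apply -k deployments-k8s/examples/spire/single_cluster   # spire from v1.11.0 (no PVC)
# ...then continue with the NSM v1.13.0 deployment itself as usual.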

@p4lik4ri

As @denis-tingaikin mentioned, you have to fix it locally. For me, I just edited the PVC that spire automatically created and added the following line: storageClassName: openebs-hostpath. In your case, replace openebs-hostpath with the storage provisioner you have configured in your cluster.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: openebs.io/local
    volume.kubernetes.io/selected-node: ubuntu
    volume.kubernetes.io/storage-provisioner: openebs.io/local
  creationTimestamp: "2024-05-14T13:47:55Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: spire-server
  name: spire-data-spire-server-0
  namespace: spire
  resourceVersion: "34038569"
  uid: 303d94b0-fa07-4289-8f20-78affc0192fe
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: openebs-hostpath
  volumeMode: Filesystem
  volumeName: pvc-303d94b0-fa07-4289-8f20-78affc0192fe
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound
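
After an edit like this, a quick check that the claim binds and spire-server can schedule (sketch):

kubectl get pvc -n spire    # spire-data-spire-server-0 should become Bound
kubectl get pods -n spire   # spire-server-0 should leave Pending once the volume binds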
