Update Ingress API Version (extensions/v1beta1 -> networking.k8s.io/v1) #154
Conversation
Addresses #140.

The change was made according to the k8s deprecation guide (https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingress-v122), taking the analogous change in the apache pulsar helm chart (apache/pulsar-helm-chart@83bb8bd6) as an example.

The correctness of the change was only syntactically verified by the following command, run against k8s server versions v1.18.20 and v1.19.16 (with the respective kubectl versions). The helm version used for both runs is v3.6.3.

```
helm install "pulsar" helm-chart-sources/pulsar/ --namespace pulsar --create-namespace --values examples/dev-values.yaml --set "kube-prometheus-stack.enabled=false,extra.pulsarSQL=true,pulsarSQL.ingress.enabled=true,extra.pulsarAdminConsole=true,pulsarAdminConsole.ingress.enabled=true,proxy.ingress.enabled=true" --dry-run --debug | grep -A 30 -B 2 "kind: Ingress"
```

It should be noted that `kube-prometheus-stack.enabled=false` was necessary in order to avoid the error:

```
Error: unable to build kubernetes objects from release manifest: [unable to recognize no matches for kind "PrometheusRule" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"]
helm.go:88: [debug] [unable to recognize no matches for kind "PrometheusRule" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"]
```
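For readers following along, the field-level difference between the two API versions looks roughly like the sketch below. The resource, host, and service names are hypothetical placeholders, not excerpts from the chart templates:

```
# Old API (extensions/v1beta1), removed in Kubernetes v1.22
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress          # hypothetical name
spec:
  rules:
    - host: example.local        # hypothetical host
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: 8080

# New API (networking.k8s.io/v1), available since Kubernetes v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.local
      http:
        paths:
          - path: /
            pathType: Prefix     # pathType is required in v1
            backend:
              service:
                name: example-service
                port:
                  number: 8080
```

If the chart also sets the kubernetes.io/ingress.class annotation, v1 additionally offers spec.ingressClassName as the preferred replacement, which is worth checking during the migration.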
@MMirelli - this is a good change. Have you tested it? You can use minikube to quickly test against multiple k8s versions.
Oh I see, apparently minikube has a metallb addon, which could easily be used. Will do, thank you for the review @michaeljmarshall.
Mh, not sure why this wasn't failing earlier and now it is: the latest changes are not major. On minikube it seems to be working after enabling the Ingress addon.
I wonder whether a similar addon should be activated on kind.
Mh, this PR seems to have the same problem. It looks like something in the CI broke a couple of days ago. This is probably something to investigate further, right @michaeljmarshall? I can try to do it next week.
@MMirelli - please rebase. Your tests should hopefully pass then.
ingress:
  enabled: true
I addressed this in #162 by changing the service to a ClusterIP. I think the underlying issue is with MetalLB, but I'm not sure we even need to deploy that daemonset in our tests; there is no need to test k8s load-balancer behavior there.
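As a rough illustration of that approach (the names and ports here are hypothetical, not copied from #162 or the chart), the test service would simply be declared as a ClusterIP so that no MetalLB address allocation is involved:

```
apiVersion: v1
kind: Service
metadata:
  name: pulsar-proxy            # hypothetical name
spec:
  type: ClusterIP               # no external load balancer needed; the Ingress routes to it
  selector:
    app: pulsar-proxy           # hypothetical selector
  ports:
    - name: http
      port: 8080
      targetPort: 8080
```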
@MMirelli - I ended up rebasing it through the GitHub UI because it was simple. LGTM
Great job @michaeljmarshall, now I see what the problem was! Thank you!