Trait apply should use update instead of patch #272
Comments
Maybe I can help, could you please assign this issue to me?
@captainroy-hy sure, thanks!
I wonder why patch can't do "remove"?
@ryanzhang-oss `patch` will not remove unrelated fields.
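For context, this matches RFC 7386 JSON merge patch semantics: fields omitted from the patch are left untouched on the server, and only fields explicitly set to null are removed. A minimal sketch of those semantics in plain Go (a toy helper, not the oam-runtime code):

```go
package main

import "fmt"

// mergePatch applies an RFC 7386-style JSON merge patch to a document.
// Keys absent from the patch are preserved; keys set to nil are removed.
func mergePatch(doc, patch map[string]interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	for k, v := range doc {
		out[k] = v
	}
	for k, v := range patch {
		if v == nil {
			delete(out, k) // an explicit null removes the field
			continue
		}
		if sub, ok := v.(map[string]interface{}); ok {
			if cur, ok := out[k].(map[string]interface{}); ok {
				out[k] = mergePatch(cur, sub) // merge nested objects recursively
				continue
			}
		}
		out[k] = v
	}
	return out
}

func main() {
	existing := map[string]interface{}{"auto": true, "rules": "a", "others": "x"}
	// The new trait CR no longer contains "rules", so the patch simply omits it.
	desired := map[string]interface{}{"auto": true}
	result := mergePatch(existing, desired)
	fmt.Println(result["rules"]) // the omitted field survives: a patch cannot remove it
}
```

This is why dropping a field from the trait CR never deletes it on the server when applying via merge patch.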
Use APIUpdatingApplicator instead of APIPatchingApplicator (crossplane-runtime 0.9.0 -> 0.10.0).
Thanks for your kind suggestion!
Just to be clear, the real problem is not just removing the fields that are not present in the new CR, but also preserving the fields that are not controlled by the trait. If that is the real goal, then I don't think anyone can solve this unless the trait declares what fields it owns.
@ryanzhang-oss Right. Let me add another example:
Per our discussion, this is a well-known two-way merge problem: https://medium.com/flant-com/3-way-merge-patches-helm-werf-beb7eccecdfe The correct way to solve it is via a three-way merge, which is what kubectl apply does. Basically, we need to emulate kubectl apply in the Apply() function; even though it is called Apply(), it does not really apply yet. Nowadays k8s supports server-side apply, which drastically simplifies the client-side implementation. Here are some related materials I could find, though they only provide examples in a scattered way:
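To make the three-way idea concrete, here is a rough sketch of what kubectl apply's client-side merge does, under simplifying assumptions (flat string maps only, no strategic-merge list handling): delete keys that were in the last-applied configuration but are gone from the new one, set the keys the new configuration declares, and leave everything else alone.

```go
package main

import "fmt"

// threeWayMerge sketches kubectl apply's client-side behavior on flat maps.
// lastApplied is what we sent previously, desired is the new config, and
// live is the current cluster state. Fields we applied before but dropped
// from desired are deleted; fields in desired are set; fields set by other
// actors are preserved untouched.
func threeWayMerge(lastApplied, desired, live map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range live {
		out[k] = v
	}
	// Remove fields we used to manage but no longer declare.
	for k := range lastApplied {
		if _, still := desired[k]; !still {
			delete(out, k)
		}
	}
	// Set the fields we declare now.
	for k, v := range desired {
		out[k] = v
	}
	return out
}

func main() {
	lastApplied := map[string]string{"auto": "true", "rules": "a"}
	desired := map[string]string{"auto": "true"} // "rules" dropped from the new CR
	live := map[string]string{"auto": "true", "rules": "a", "others": "x"} // "others" set by someone else
	merged := threeWayMerge(lastApplied, desired, live)
	fmt.Println(merged) // "rules" is removed, "others" is preserved
}
```

The key observation: a two-way diff (desired vs. live) cannot distinguish "field I dropped" from "field someone else owns"; the last-applied record is what disambiguates them.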
Server-side apply can solve this problem after k8s v1.19. Before 1.19, it doesn't remove the field in the case shown here.
I don't get what the comment said. Could you elaborate in more detail, and provide an example?
About this case: because this resource already exists, if the user switches from patch to update, that is the semantics of update. Whether to use update or patch depends on the business logic of the traits. Is there no uniform standard?
SS apply this deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
```

SS apply below with the same field manager:

```yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  # replicas: 3 <===== unset
  template:
    spec:
      containers:
      - name: nginx
        # image: nginx:1.14.2 <===== unset
```

In api-server v1.19, we will get:

```yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  replicas: 1 # <===== reset to default value
  template:
    spec:
      containers:
      - name: nginx
        # image: nginx:1.14.2 <===== removed because it has no default value
```

But in api-server v1.16-v1.18, we will get:

```yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  replicas: 3 # <===== unchanged
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2 # <===== unchanged
```

Related to this fix merged into k8s v1.19.
@allenhaozi This is what a three-way merge patch handles, and it is how kubectl apply is implemented. We are going to align with upstream to fix the problem correctly.
@captainroy-hy
I don't think any patch method can solve this problem without what @allenhaozi described. The fact that the trait owns the "auto" and "rules" fields but not the "others" field has to be explicitly stated somewhere. This is not k8s-related.
That's a wrong assumption. I believe this is what kubectl apply addresses.
working on this PR, refactor |
Some related issues w.r.t. the client-side implementation of kubectl apply:
Note that this is an issue with kubectl since it is controlled by users directly. We can avoid and control the risk by isolating it from end users. For now we can implement the three-way merge on the client side by using our own keyword in an annotation. Once k8s 1.19 is more popular, we can switch the implementation to make use of server-side apply.
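A sketch of that annotation mechanism (the annotation key below is a made-up placeholder, not the keyword the project actually chose): the configuration being applied is serialized into an annotation on the object, so the next apply can read it back as the "last applied" base for the three-way merge.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// hypothetical annotation key; the real keyword would be project-specific
const lastAppliedKey = "example.oam.dev/last-applied-configuration"

// setLastApplied records the spec we are about to apply in the object's
// annotations, so a future apply can diff against it.
func setLastApplied(annotations map[string]string, spec map[string]string) error {
	raw, err := json.Marshal(spec)
	if err != nil {
		return err
	}
	annotations[lastAppliedKey] = string(raw)
	return nil
}

// getLastApplied reads the previously applied spec back, returning an empty
// map on the first apply (no annotation yet).
func getLastApplied(annotations map[string]string) (map[string]string, error) {
	spec := map[string]string{}
	raw, ok := annotations[lastAppliedKey]
	if !ok {
		return spec, nil
	}
	err := json.Unmarshal([]byte(raw), &spec)
	return spec, err
}

func main() {
	ann := map[string]string{}
	_ = setLastApplied(ann, map[string]string{"auto": "true", "rules": "a"})
	prev, _ := getLastApplied(ann)
	fmt.Println(prev["rules"]) // what we applied last time is recoverable
}
```

This is the same trick kubectl uses with its own last-applied-configuration annotation, kept under our control instead of the user's.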
I met a problem when restarting the controllers.
@allenhaozi Could you clarify the problem in more detail, and explain why it is related here? Or maybe create another issue?
In simple words: restarting the controllers leads to the resource's pods restarting. For example, when restarting the controllers, if the deployment already exists, we will find the following event:
It's a problem of how a Workload and a Trait can work together to update a resource when the resource cannot be patched, so I mention it here.
@allenhaozi A possible workaround: maybe the ServerWorkload controller should show no opinion on
But of course, it's still a problem to deal with "field management by multiple controllers", as @ryanzhang-oss said in this issue.
Fixed in kubevela/kubevela#857
Hello!
Hi, @Ananya-1106 Welcome! This repo is more like a library of the OAM implementation; we suggest you get started from KubeVela and the
Problem Statement
Currently, when oam-runtime applies traits, it does a MergePatch (code link1 and link2). Basically, it takes the trait to apply (A) and the existing trait object in cache (B), and computes the difference between A and B to create a patch to send to the APIServer.
This will lead to the following problem:
In the above scenario, we expect the `rules` field to be removed, but when we apply it, the trait actually becomes:
Proposed Solution
To solve the problem, we should just update the spec of the trait instead of patching it.
Regarding field-safety concerns: traits are operational data, so they will only be managed by the OAM platform; therefore updating is OK in this case. Workloads, on the contrary, can be changed by end users, so we should think more carefully about how to update them.
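The difference the proposal relies on can be shown with a toy model on plain maps (not the real client code): an update replaces the stored spec wholesale, so fields dropped from the new CR disappear, while a merge patch leaves them behind.

```go
package main

import "fmt"

// updateSpec models client.Update: the stored spec is replaced wholesale.
func updateSpec(stored, desired map[string]string) map[string]string {
	return desired
}

// mergePatchSpec models a JSON merge patch: fields missing from the desired
// spec stay untouched in the stored one.
func mergePatchSpec(stored, desired map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range stored {
		out[k] = v
	}
	for k, v := range desired {
		out[k] = v
	}
	return out
}

func main() {
	stored := map[string]string{"auto": "true", "rules": "a"}
	desired := map[string]string{"auto": "true"} // "rules" removed from the CR

	_, stale := mergePatchSpec(stored, desired)["rules"]
	_, kept := updateSpec(stored, desired)["rules"]
	fmt.Println(stale, kept) // patch keeps the stale field; update drops it
}
```

Since the trait's spec is wholly owned by the OAM platform, replacing it outright is safe here, which is exactly why update is acceptable for traits but needs more thought for workloads.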