Add max job count field in CRD ResourceQuotaClaim to limit job count in namespaces #17

Open · wants to merge 1 commit into base: master
12 changes: 7 additions & 5 deletions README.MD
@@ -57,6 +57,7 @@ ex: a namespace cannot claim more than 1/3 of the cluster
is claimable compared to the actual resources.
ex: In a development environment you could choose to allow reserving more resources than are actually usable.
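
As a rough illustration of both ratios (a sketch, assuming a cluster with 90Gi of allocatable memory; the cluster size and values are illustrative):

```yaml
# With 90Gi of allocatable memory in the cluster:
ratioMaxAllocationMemory: "0.33"   # a single namespace may claim at most ~30Gi
ratioOverCommitMemory: "1.3"       # claims are checked against 90Gi * 1.3 = 117Gi
```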


In order to facilitate the adoption of _ResourceQuotaClaims_, it is possible to enforce a default claim for namespaces.
The feature will be activated on namespaces that carry the label __quota=managed__.
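
For example, a namespace opts in simply by carrying that label (a minimal sketch; the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-demo      # illustrative name
  labels:
    quota: managed     # Kotary will enforce the default claim on this namespace
```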

@@ -83,7 +84,7 @@ kubectl apply -f https://raw.githubusercontent.com/ca-gip/kotary/master/artifact

| Name | Description | Mandatory | Type | Default |
| :-------------- | :--------------------------------------------------------: | :---------: | :-------------: |:---------------------- |
| **defaultClaimSpec** | *Default claim that will be added to a watched Namespace* | `no` | `ResourceList` | cpu:2 <br /> memory: 6Gi |
| **defaultClaimSpec** | *Default claim that will be added to a watched Namespace* | `no` | `ResourceList` | cpu:2 <br /> memory: 6Gi <br /> count/jobs.batch: 5 |
| **ratioMaxAllocationMemory** | *Maximum amount of Memory claimable by a Namespace* | `no` | `Float` | 1 |
| **ratioMaxAllocationCPU** | *Maximum amount of CPU claimable by a Namespace* | `no` | `Float` | 1 |
| **ratioOverCommitMemory** | *Memory over-commitment* | `no` | `Float` | 1 |
@@ -92,7 +93,7 @@ kubectl apply -f https://raw.githubusercontent.com/ca-gip/kotary/master/artifact
##### Example

In the following sample configuration we set:
* A default claim of 2 CPU and 10Gi of Memory
* A default claim of 2 CPU, 10Gi of Memory and 5 jobs
* 33% of the total amount of resources can be claimed by a namespace
* An over-commit of 130%

@@ -104,6 +105,7 @@ data:
defaultClaimSpec: |
cpu: "2"
memory: "10Gi"
count/jobs.batch: 5
ratioMaxAllocationMemory: "0.33"
ratioMaxAllocationCPU: "0.33"
ratioOverCommitMemory: "1.3"
@@ -131,7 +133,7 @@ kubectl apply -f https://raw.githubusercontent.com/ca-gip/kotary/master/artifact

### Update a ResourceQuota

To update a _ResourceQuotas_ you will have to create a _ResourceQuotaClaims_ with specification for CPU and Memory.
To update a _ResourceQuota_ you will have to create a _ResourceQuotaClaim_ with specifications for CPU, Memory and job count.
You can use the same units as the ones available in Kubernetes;
please refer to the [official documentation](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
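
For instance, a claim that also caps the number of jobs could look like the sketch below (the namespace and values are illustrative, and the API group/version shown is an assumption; see artifacts/example-quotaclaim.yml and artifacts/crd.yml for the exact header used by the project):

```yaml
apiVersion: cagip.github.com/v1   # assumed group/version, check artifacts/crd.yml
kind: ResourceQuotaClaim
metadata:
  name: demo
  namespace: team-demo            # illustrative namespace
spec:
  cpu: "5"
  memory: 16Gi
  count/jobs.batch: 5             # desired maximum number of jobs in the namespace
```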

@@ -168,8 +170,8 @@ demo 5 20Gi REJECTED Exceeded Memory allocation limit claiming 20Gi bu

```bash
$ kubectl get quotaclaim
NAME CPU RAM STATUS DETAILS
demo 5 16Gi PENDING Awaiting lower CPU consumption claiming 16Gi but current total of CPU request is 18Gi
NAME CPU RAM MAX JOB COUNT STATUS DETAILS
demo 5 16Gi 5 PENDING Awaiting lower CPU consumption claiming 16Gi but current total of CPU request is 18Gi
```

### Default claim
1 change: 1 addition & 0 deletions artifacts/cm.yml
@@ -8,6 +8,7 @@ data:
defaultClaimSpec: |
cpu: "2"
memory: "10Gi"
count/jobs.batch: 10
ratioMaxAllocationMemory: "0.33"
ratioMaxAllocationCPU: "0.33"
ratioOverCommitMemory: "1.3"
7 changes: 7 additions & 0 deletions artifacts/crd.yml
@@ -28,6 +28,9 @@ spec:
memory:
x-kubernetes-int-or-string: true
pattern: '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
count/jobs.batch:
pattern: '^[0-9]+$'
x-kubernetes-int-or-string: true
status:
type: object
properties:
@@ -46,6 +49,10 @@ spec:
type: string
description: Desired amount of RAM
jsonPath: .spec.memory
- description: Desired maximum number of jobs
jsonPath: .spec.count/jobs\.batch
name: MAX JOB COUNT
type: string
- name: Status
type: string
description: Status of the claim
3 changes: 2 additions & 1 deletion artifacts/example-quotaclaim.yml
@@ -5,4 +5,5 @@ metadata:
namespace: native-development
spec:
memory: 24Gi
cpu: 14000
cpu: 14000
count/jobs.batch: 12
19 changes: 19 additions & 0 deletions internal/controller/claim.go
@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"math"
"reflect"

"github.com/ca-gip/kotary/internal/utils"
"k8s.io/apimachinery/pkg/api/errors"
@@ -185,6 +186,8 @@ func (c *Controller) updateResourceQuota(claim *cagipv1.ResourceQuotaClaim) erro
} else if !quota.Equals(resourceQuota.Status.Hard, claim.Spec) {
// If this spec of the ResourceQuota is not the desired one we update it
klog.V(4).Infof("ResourceQuota not synced, updating for ns %s", claim.Namespace)
klog.V(4).Infof("Call Job count add to claim, updating for ns %s", claim.Namespace)
claim = addJobCountifnotexistinClaim(claim, resourceQuota)
_, err := c.resourcequotaclientset.CoreV1().ResourceQuotas(claim.Namespace).Update(context.TODO(), newResourceQuota(claim), metav1.UpdateOptions{})
if err != nil {
klog.Errorf("Could not update ResourceQuotas for ns %s ", claim.Annotations)
@@ -355,6 +358,7 @@ func newResourceQuota(claim *cagipv1.ResourceQuotaClaim) *v1Core.ResourceQuota {
labels := map[string]string{
"creator": utils.ControllerName,
}

return &v1Core.ResourceQuota{
ObjectMeta: metav1.ObjectMeta{
Name: utils.ResourceQuotaName,
@@ -366,3 +370,18 @@ func newResourceQuota(claim *cagipv1.ResourceQuotaClaim) *v1Core.ResourceQuota {
},
}
}

/*
addJobCountIfNotExistInClaim keeps the job count from being dropped on update:
if the new claim does not specify count/jobs.batch, we copy the value already
held by the existing (managed) ResourceQuota into the claim.
*/
func addJobCountIfNotExistInClaim(claim *cagipv1.ResourceQuotaClaim, managedQuota *v1Core.ResourceQuota) *cagipv1.ResourceQuotaClaim {
klog.V(4).Infof("Checking job count on claim for ns %s", claim.Namespace)
// A zero Quantity means the claim carries no job count: reuse the one from the managed quota.
if reflect.ValueOf(claim.Spec["count/jobs.batch"]).IsZero() {
claim.Spec["count/jobs.batch"] = managedQuota.Spec.Hard["count/jobs.batch"]
klog.V(4).Infof("Job count copied from existing ResourceQuota for ns %s", claim.Namespace)
}
return claim
}
116 changes: 102 additions & 14 deletions internal/controller/claim_test.go
@@ -739,8 +739,9 @@ func TestTotalResourceQuota(t *testing.T) {
},
Spec: v1.ResourceQuotaSpec{
Hard: v1.ResourceList{
v1.ResourceMemory: resource.MustParse("8Gi"),
v1.ResourceCPU: resource.MustParse("3k"),
v1.ResourceMemory: resource.MustParse("8Gi"),
v1.ResourceCPU: resource.MustParse("3k"),
"count/jobs.batch": resource.MustParse("5"),
},
},
},
@@ -750,8 +751,9 @@ },
},
Spec: v1.ResourceQuotaSpec{
Hard: v1.ResourceList{
v1.ResourceMemory: resource.MustParse("8Gi"),
v1.ResourceCPU: resource.MustParse("3k"),
v1.ResourceMemory: resource.MustParse("8Gi"),
v1.ResourceCPU: resource.MustParse("3k"),
"count/jobs.batch": resource.MustParse("5"),
},
},
},
@@ -761,15 +763,17 @@ },
},
Spec: v1.ResourceQuotaSpec{
Hard: v1.ResourceList{
v1.ResourceMemory: resource.MustParse("8Gi"),
v1.ResourceCPU: resource.MustParse("3k"),
v1.ResourceMemory: resource.MustParse("8Gi"),
v1.ResourceCPU: resource.MustParse("3k"),
"count/jobs.batch": resource.MustParse("5"),
},
},
},
},
expect: &v1.ResourceList{
v1.ResourceMemory: resource.MustParse("16Gi"),
v1.ResourceCPU: resource.MustParse("6k"),
v1.ResourceMemory: resource.MustParse("16Gi"),
v1.ResourceCPU: resource.MustParse("6k"),
"count/jobs.batch": resource.MustParse("5"),
},
},
"1 quota": {
@@ -785,15 +789,17 @@ },
},
Spec: v1.ResourceQuotaSpec{
Hard: v1.ResourceList{
v1.ResourceMemory: resource.MustParse("8Gi"),
v1.ResourceCPU: resource.MustParse("3k"),
v1.ResourceMemory: resource.MustParse("8Gi"),
v1.ResourceCPU: resource.MustParse("3k"),
"count/jobs.batch": resource.MustParse("5"),
},
},
},
},
expect: &v1.ResourceList{
v1.ResourceMemory: resource.MustParse("8Gi"),
v1.ResourceCPU: resource.MustParse("3k"),
v1.ResourceMemory: resource.MustParse("8Gi"),
v1.ResourceCPU: resource.MustParse("3k"),
"count/jobs.batch": resource.MustParse("5"),
},
},
}
@@ -824,8 +830,9 @@ func TestDeleteResourceQuotaClaim(t *testing.T) {
Namespace: metav1.NamespaceDefault,
},
Spec: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("1"),
v1.ResourceMemory: resource.MustParse("1Mi"),
v1.ResourceCPU: resource.MustParse("1"),
v1.ResourceMemory: resource.MustParse("1Mi"),
"count/jobs.batch": resource.MustParse("3"),
},
}
f := newFixture(t)
@@ -985,3 +992,84 @@ func TestCanDownscaleQuota(t *testing.T) {
}

}
func TestAddJobCountIfNotExistInClaim(t *testing.T) {
testCases := map[string]struct {
claim *cagipv1.ResourceQuotaClaim
managedQuota *v1.ResourceQuota
expect *cagipv1.ResourceQuotaClaim
}{
"job count exists in claim": {
claim: &cagipv1.ResourceQuotaClaim{
Spec: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("1"),
v1.ResourceMemory: resource.MustParse("1Gi"),
"count/jobs.batch": resource.MustParse("3"),
},
},
managedQuota: &v1.ResourceQuota{
Spec: v1.ResourceQuotaSpec{
Hard: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("2"),
v1.ResourceMemory: resource.MustParse("2Gi"),
"count/jobs.batch": resource.MustParse("5"),
},
},
},
expect: &cagipv1.ResourceQuotaClaim{
Spec: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("1"),
v1.ResourceMemory: resource.MustParse("1Gi"),
"count/jobs.batch": resource.MustParse("3"),
},
},
},
"job count does not exist in claim": {
claim: &cagipv1.ResourceQuotaClaim{
Spec: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("1"),
v1.ResourceMemory: resource.MustParse("1Gi"),
},
},
managedQuota: &v1.ResourceQuota{
Spec: v1.ResourceQuotaSpec{
Hard: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("2"),
v1.ResourceMemory: resource.MustParse("2Gi"),
"count/jobs.batch": resource.MustParse("5"),
},
},
},
expect: &cagipv1.ResourceQuotaClaim{
Spec: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("1"),
v1.ResourceMemory: resource.MustParse("1Gi"),
"count/jobs.batch": resource.MustParse("5"),
},
},
},
"empty resources": {
claim: &cagipv1.ResourceQuotaClaim{
Spec: v1.ResourceList{},
},
managedQuota: &v1.ResourceQuota{
Spec: v1.ResourceQuotaSpec{
Hard: v1.ResourceList{
"count/jobs.batch": resource.MustParse("5"),
},
},
},
expect: &cagipv1.ResourceQuotaClaim{
Spec: v1.ResourceList{
"count/jobs.batch": resource.MustParse("5"),
},
},
},
}

for testName, testCase := range testCases {
t.Run(testName, func(t *testing.T) {
result := addJobCountIfNotExistInClaim(testCase.claim, testCase.managedQuota)
assert.DeepEqual(t, result.Spec, testCase.expect.Spec)
})
}
}