
Stateful best practices #16

Open
wants to merge 1 commit into master
Conversation

danielepolencic (Contributor)

I guess this could be a section on its own.

weibeld (Collaborator) commented Dec 2, 2019

This addresses this section of the TODO. I will have a closer look at it soon.

danielepolencic (Author)

Also, a single geographical region (this also applies to cluster nodes).

danielepolencic (Author)


In particular, you should consider using:

- the [Service Catalog](https://kubernetes.io/docs/concepts/extend-kubernetes/service-catalog/), if you're cluster is deployed in the cloud or has access to a Service that exposes the Open Service Broker API.

Suggested change
- the [Service Catalog](https://kubernetes.io/docs/concepts/extend-kubernetes/service-catalog/), if you're cluster is deployed in the cloud or has access to a Service that exposes the Open Service Broker API.
- the [Service Catalog](https://kubernetes.io/docs/concepts/extend-kubernetes/service-catalog/), if your cluster is deployed in the cloud or has access to a Service that exposes the Open Service Broker API.
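
For context, provisioning through the Service Catalog happens via ServiceInstance and ServiceBinding objects that talk to an Open Service Broker. A minimal sketch, assuming a broker is already registered with the catalog; the class and plan names below are hypothetical, so list the real ones with `kubectl get clusterserviceclasses` and `kubectl get clusterserviceplans`:

```yaml
# Provision a managed service through a broker registered with the Service Catalog.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-database
  namespace: default
spec:
  clusterServiceClassExternalName: example-dbaas   # hypothetical broker class
  clusterServicePlanExternalName: small            # hypothetical plan
---
# Bind to the instance; the broker writes the connection credentials into a Secret.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-database-binding
  namespace: default
spec:
  instanceRef:
    name: my-database
  secretName: my-database-credentials
```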


You should always have:

- a StroageClass named `default` defined in your cluster and

Suggested change
- a StroageClass named `default` defined in your cluster and
- a StorageClass named `default` defined in your cluster and
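
A cluster's default StorageClass is the one carrying the `storageclass.kubernetes.io/is-default-class` annotation. A minimal sketch of such a class; the provisioner is a placeholder, so use the one your cluster actually supports (most managed clusters ship a default class already):

```yaml
# A StorageClass marked as the cluster default. PersistentVolumeClaims that omit
# storageClassName are assigned this class by the DefaultStorageClass admission controller.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs   # placeholder: replace with your cluster's provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```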


The admission controller adds the _default_ StorageClass if the PersistentVolumeClaim doesn't have one.

You can continue reading about the [default behaviour in StorageClasses on the official documentation](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#defaulting-behavior).

Suggested change
You can continue reading about the [default behaviour in StorageClasses on the official documentation](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#defaulting-behavior).
You can continue reading about the [default behaviour in StorageClasses in the official documentation](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#defaulting-behavior).
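
To illustrate the defaulting behaviour referenced above, here is a sketch of a claim that omits `storageClassName` and relies on the admission controller to fill it in (name and size are arbitrary):

```yaml
# This claim omits storageClassName, so the DefaultStorageClass admission controller
# assigns the cluster's default StorageClass before the claim is persisted.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```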


Pods share resources such as CPU and memory with other Pods on the same Node.

If a Pods is using more resources than requested (but still less than the limits), it might end up committing for resources.

Suggested change
If a Pods is using more resources than requested (but still less than the limits), it might end up committing for resources.
If a Pod is using more resources than requested (but still less than the limits), it might end up committing for resources.
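
A common way to limit this kind of contention for a stateful workload is to set requests equal to limits, which places the Pod in the Guaranteed QoS class. A minimal sketch with arbitrary values:

```yaml
# Requests and limits are identical, so the Pod gets the Guaranteed QoS class
# and cannot use more CPU or memory than it reserved.
apiVersion: v1
kind: Pod
metadata:
  name: database
spec:
  containers:
    - name: postgres
      image: postgres:13
      resources:
        requests:
          cpu: "1"
          memory: 2Gi
        limits:
          cpu: "1"
          memory: 2Gi
```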


If a Pods is using more resources than requested (but still less than the limits), it might end up committing for resources.

The challenge is more problematic when Pods compete for disk I/O, particularly in Pods that uses storage.

Suggested change
The challenge is more problematic when Pods compete for disk I/O, particularly in Pods that uses storage.
The challenge is more problematic when Pods compete for disk I/O, particularly in Pods that use storage.
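
Kubernetes has no request or limit for disk I/O, so a frequently used mitigation (not something the text above prescribes) is to reserve a node pool for storage-heavy Pods with a taint plus a matching toleration and nodeSelector. A sketch assuming a hypothetical `workload=database` label and taint:

```yaml
# Run the database on nodes reserved for storage-heavy workloads so it does not
# compete for disk I/O with unrelated Pods. The label and taint are hypothetical:
#   kubectl label nodes <node> workload=database
#   kubectl taint nodes <node> workload=database:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: database
spec:
  nodeSelector:
    workload: database
  tolerations:
    - key: workload
      operator: Equal
      value: database
      effect: NoSchedule
  containers:
    - name: postgres
      image: postgres:13
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data
```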

danielepolencic (Author)

https://dbexamstudy.blogspot.com/2021/11/is-kubernetes-slowing-down-my-database.html

Things that you can control in your Kubernetes cluster:

◉ Whether the Linux kernel uses 4KB, 2MB or 1GB pages on your x86_64 Kubernetes nodes
◉ How many Linux huge pages [2MB or 1GB] you configure
◉ The requests and limits for the memory and huge page resources of your Kubernetes applications
   ◉ A database is considered an application in Kubernetes
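
For the third bullet, huge pages are requested like any other resource, and their requests must equal their limits. A minimal sketch assuming the nodes have 2MB huge pages pre-allocated (e.g. via the `hugepages=512` kernel boot parameter); the image and sizes are arbitrary:

```yaml
# Requests 2MB huge pages alongside regular memory. The node must have enough
# huge pages pre-allocated for the Pod to schedule.
apiVersion: v1
kind: Pod
metadata:
  name: database
spec:
  containers:
    - name: postgres
      image: postgres:13
      resources:
        requests:
          memory: 2Gi
          hugepages-2Mi: 1Gi
        limits:
          memory: 2Gi
          hugepages-2Mi: 1Gi
      volumeMounts:
        - name: hugepages
          mountPath: /hugepages
  volumes:
    # Optional: only needed if the application maps files on a hugetlbfs mount.
    - name: hugepages
      emptyDir:
        medium: HugePages
```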
