Sero

Scale to zero in your cluster - but simple.

Sero works as a small interceptor that watches connections. Whenever a connection to sero is opened, it tries to forward it to your application. If the application is scaled to zero, sero first scales it to one. Everything else is then handled by the Kubernetes autoscaler.

How to use

  1. Set up sero
  2. Install sero for the desired component
  3. Send your traffic to sero
  4. 🎉

Example

Imagine an app 'a' that is reachable in the cluster at 'a-svc:80' and deployed via the deployment 'a-deploy'.

In this case, a possible configuration looks like this:

target:
  host: 'a-svc'
  port: 80
  protocol: tcp
  deployment: a-deploy
  timeout:
    forward: 200 # maximum waiting time when forwarding a request
    scaleUP: 3000 # maximum waiting time after a 'scale up' event
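
To make clients actually reach sero instead of the app directly (step 3 above), one option is a separate Service in front of the sero pods. The following is a minimal sketch, assuming sero runs as pods labelled 'app: sero' and listens on port 80; the Service name 'a-sero' and the labels are made up for this example. Clients would then use 'a-sero:80', while sero forwards to 'a-svc:80' as configured above.

apiVersion: v1
kind: Service
metadata:
  name: a-sero         # clients connect here instead of 'a-svc'
spec:
  selector:
    app: sero          # assumed label on the sero pods
  ports:
    - port: 80
      targetPort: 80   # assumed port sero listens on
      protocol: TCP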

To consider

  • Sero has not yet been tested for production use. (If you have, feel free to contact the project team.)
  • Ideally, sero and the target pod should run on the same node; pod affinities can help here (see the sketch after this list).
  • Sero itself needs some resources to relay connections. Its requests/limits or scalers must therefore be tailored to your application's load growth.
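
A minimal sketch of the last two points, assuming sero is deployed as its own Deployment and the target app's pods carry the label 'app: a'. The label, the image placeholder, and the concrete resource values are illustrative, not taken from the project, and should be tuned to your workload.

# fragment of sero's pod spec (Deployment template); values are examples only
affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname   # prefer the same node as the target pods
          labelSelector:
            matchLabels:
              app: a                            # assumed label of the target app's pods
containers:
  - name: sero
    image: <sero-image>                         # placeholder, use the actual image
    resources:
      requests:
        cpu: 50m        # example values, tune to your application's load growth
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 128Mi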
