A multi-protocol edge and service proxy, designed for event-driven architectures. Securely interface web apps, IoT clients, and microservices to Apache Kafka® via declaratively defined, stateless APIs.


A multi-protocol proxy, designed for event-driven architectures

Zilla abstracts Apache Kafka® for web applications, IoT clients and microservices. With Zilla, Kafka topics can be securely and reliably exposed via user-defined REST, Server-Sent Events (SSE), MQTT, or gRPC APIs.

Zilla has no external dependencies and does not rely on the Kafka Consumer/Producer API or Kafka Connect. Instead, it natively supports the Kafka wire protocol and uses advanced protocol mediation to establish stateless API entry points into Kafka. Zilla also addresses security enforcement, observability and connection offloading on the data path.

When Zilla is deployed alongside Apache Kafka®, achieving an extensible yet streamlined event-driven architecture becomes much easier.


The fastest way to try out Zilla is via the Quickstart, which walks you through publishing and subscribing to Kafka through REST, gRPC, and SSE API endpoints. The Quickstart uses Aklivity’s public Postman Workspace with pre-defined API endpoints and a Docker Compose stack running pre-configured Zilla and Kafka instances to make things as easy as possible.

REST-Kafka Proxying

  • Correlated Request-Response (sync) — HTTP request-response over a pair of Kafka topics with correlation. Supports synchronous interaction, blocking while waiting for the correlated response.
  • Correlated Request-Response (async) — HTTP request-response over a pair of Kafka topics with correlation. Supports asynchronous interaction, returning immediately with 202 Accepted plus a location to retrieve the correlated response. Supports prefer: wait=N to return the correlated response as soon as it becomes available, with no need for client polling.
  • Oneway — Produce an HTTP request payload to a Kafka topic, extracting the message key and/or headers from segments of the HTTP path if needed.
  • Cache — Retrieve a message from a Kafka topic, filtered by message key and/or headers, with key and/or header values extracted from segments of the HTTP path if needed. Returns an etag header with the HTTP response. Supports conditional GET with if-none-match, returning 304 if not modified or 200 with a new etag header if modified. Supports prefer: wait=N to respond as soon as a message becomes available, with no need for client polling.
  • Authorization — Routed requests can be guarded to enforce required client privileges.
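As a rough illustration, the behaviors above are wired up as routes on an http-kafka binding in zilla.yaml. The sketch below is an unverified example: the binding names, topic, paths, and details such as the capability and filters keys are assumptions made for this sketch; consult the Zilla documentation for the exact schema.

```yaml
north_http_kafka_mapping:        # hypothetical binding name
  type: http-kafka
  kind: proxy
  routes:
    - when:
        - method: GET
          path: /items/{id}
      exit: north_kafka_cache_client
      with:
        capability: fetch        # cached read, filtered by message key
        topic: items
        filters:
          - key: ${params.id}    # key extracted from the HTTP path
    - when:
        - method: POST
          path: /items
      exit: north_kafka_cache_client
      with:
        capability: produce      # oneway: HTTP payload becomes a Kafka message
        topic: items
```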

SSE-Kafka Proxying

  • Filtering — Streams messages from a Kafka topic, filtered by message key and/or headers, with key and/or header values extracted from segments of the HTTP path if needed.
  • Reliable Delivery — Supports event-id and last-event-id headers to recover from an interrupted stream without message loss, and without the client needing to acknowledge message receipt.
  • Continuous Authorization — Supports a challenge event that triggers the client to send up-to-date authorization credentials, such as a JWT, before expiration. The response stream is terminated if the authorization expires. Multiple SSE streams on the same HTTP/2 connection, authorized by the same JWT, can be reauthorized by a single challenge event response.
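An sse-kafka mapping follows the same declarative pattern. The fragment below is only a sketch (the binding names, topic, and the exact filters syntax are assumptions; see the Zilla documentation for the real schema):

```yaml
north_sse_kafka_mapping:         # hypothetical binding name
  type: sse-kafka
  kind: proxy
  routes:
    - when:
        - path: /events/{id}
      exit: north_kafka_cache_client
      with:
        topic: events
        filters:
          - key: ${params.id}    # stream only messages matching this key
```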

gRPC-Kafka Proxying

  • Correlated Request-Response (sync) — gRPC request-response over a pair of Kafka topics with correlation. All forms of gRPC communication are supported: unary, client streaming, server streaming, and bidirectional streaming. Supports synchronous interaction, blocking while waiting for the correlated response.
  • Reliable Delivery (server streaming) — Supports a message-id field and last-message-id request metadata to recover from an interrupted stream without message loss, and without the client needing to acknowledge message receipt.
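A grpc-kafka mapping might look roughly like the fragment below. Treat every key here as an assumption for illustration (the service name, topics, and in particular the reply-to key are invented for this sketch; the Zilla documentation has the authoritative schema):

```yaml
north_grpc_kafka_mapping:        # hypothetical binding name
  type: grpc-kafka
  kind: proxy
  routes:
    - when:
        - method: example.EchoService/*
      exit: north_kafka_cache_client
      with:
        capability: produce
        topic: echo-requests
        reply-to: echo-responses # correlated responses fetched from this topic
```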

Deployment, Performance & Other

  • Realtime Cache — Local cache synchronized with Kafka for specific topics, even when no clients are connected. The cache is stateless and recovers automatically. It is consistent across different Zilla instances without peer communication.
  • Filtering — Local cache indexes message key and headers upon retrieval from Kafka, supporting efficient filtered reads from cached topics.
  • Fan-in, Fan-out — Local cache uses a small number of connections to interact with Kafka brokers, independent of the number of connected clients.
  • Authorization — Specific routed topics can be guarded to enforce required client privileges.
  • Helm Chart — Generic Zilla Helm chart available.
  • Auto-reconfigure — Detects changes in zilla.yaml and reconfigures Zilla automatically.
  • Prometheus Integration — Export Zilla metrics to Prometheus for observability and auto-scaling.
  • Declarative Configuration — API mappings and endpoints inside Zilla are declaratively configured via YAML.
  • Kafka Security — Connect Zilla to Kafka over PLAINTEXT, TLS/SSL, TLS/SSL with Client Certificates, SASL/PLAIN, and SASL/SCRAM.
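For example, a kafka client binding can carry the SASL credentials for the broker connection. This is a sketch only; the sasl option keys and the environment-variable expression syntax shown are assumptions, so check the Zilla documentation before use:

```yaml
south_kafka_client:              # hypothetical binding name
  type: kafka
  kind: client
  options:
    sasl:
      mechanism: scram-sha-256   # or plain
      username: ${{env.KAFKA_USER}}
      password: ${{env.KAFKA_PASS}}
  exit: south_tls_client         # wrap the broker connection in TLS
```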

📚 Read the docs

  • Zilla Documentation: Guides, tutorials and references to help understand how to use Zilla and configure it for your use case.
  • Zilla Examples: A repo of sample Zilla configurations for different use cases running on Kubernetes.
  • Todo Application: Follow the tutorial and see how Zilla and Kafka can be used to build the quintessential "Todo app," but based on streaming and CQRS.
  • Product Roadmap: Check out our plan for upcoming releases.


Inside Zilla, every protocol, whether it is TCP, TLS, HTTP, Kafka, gRPC, etc., is treated as a stream, so mediating between protocols simplifies to mapping protocol-specific metadata.

Zilla’s declarative configuration defines a routed graph of protocol decoders, transformers, encoders and caches that combine to provide a secure and stateless API entry point into an event-driven architecture. This “routed graph” can be visualized and maintained with the help of the Zilla VS Code extension.
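As a minimal illustration of such a routed graph, bindings in zilla.yaml chain together via their exit fields, decoding TCP, then HTTP, then mediating to the Kafka wire protocol. All names and options below are invented for this sketch; the Zilla documentation defines the exact schema:

```yaml
name: example
bindings:
  north_tcp_server:              # decode TCP
    type: tcp
    kind: server
    options:
      host: 0.0.0.0
      port: 8080
    exit: north_http_server
  north_http_server:             # decode HTTP
    type: http
    kind: server
    exit: north_http_kafka_mapping
  north_http_kafka_mapping:      # mediate HTTP <-> Kafka
    type: http-kafka
    kind: proxy
    exit: south_kafka_client
  south_kafka_client:            # speak the Kafka wire protocol
    type: kafka
    kind: client
    exit: south_tcp_client
  south_tcp_client:              # connect out to the broker
    type: tcp
    kind: client
    options:
      host: kafka
      port: 9092
```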

Zilla has been designed from the ground up to be very high-performance. Inside, all data flows over shared memory as streams with back pressure between CPU cores, allowing Zilla to take advantage of modern multi-core hardware. The code base is written in system-level Java and uses low-level, high-performance data structures, with no locks and no object allocation on the data path.

You can get a sense of the internal efficiencies of Zilla by running the BufferBM microbenchmark for the internal data structure that underpins all data flow inside the Zilla runtime.

git clone https://github.com/aklivity/zilla
cd zilla
./mvnw clean install
cd runtime/engine/target
java -jar ./engine-develop-SNAPSHOT-shaded-tests.jar BufferBM

Note: with Java 16 or higher, add --add-opens=java.base/java.io=ALL-UNNAMED just after java to avoid errors caused by reflective access across Java module boundaries when running the benchmark.

Benchmark                  Mode  Cnt         Score        Error  Units
BufferBM.batched          thrpt   15  15315188.949 ± 198360.879  ops/s
BufferBM.multiple         thrpt   15  18366915.039 ± 420092.183  ops/s
BufferBM.multiple:reader  thrpt   15   3884377.984 ± 112128.903  ops/s
BufferBM.multiple:writer  thrpt   15  14482537.055 ± 316551.083  ops/s
BufferBM.single           thrpt   15  15111915.264 ± 294689.110  ops/s

This benchmark was executed on a 2019 MacBook Pro with a 2.3 GHz 8-core Intel Core i9 and 16 GB of DDR4 RAM, showing roughly 14-15 million messages per second.

Is Zilla production-ready?

Yes, Zilla has been built with the highest performance and security considerations in mind, and the Zilla engine has been deployed inside enterprise production environments. If you are looking to deploy Zilla for a mission-critical use case and need enterprise support, please contact us.

Does Zilla only work with Apache Kafka?

Currently, yes, although nothing about Zilla is Kafka-specific — Kafka is just another protocol in Zilla's transformation pipeline. Besides expanding on the list of supported protocols and mappings, we are in the process of adding more traditional proxying capabilities, such as rate-limiting and security enforcement, for existing Async and OpenAPI endpoints. See the Zilla Roadmap for more details.

Another REST-Kafka Proxy? How is this one different?

Take a look at our blog post, where we go into detail about how Zilla is different. TL;DR: Zilla supports creating application-style REST APIs on top of Kafka, as opposed to providing just a system-level HTTP API. This unlocks capabilities such as correlated request-response over Kafka topics.

What does Zilla's performance look like?

Please see the note above on performance.

What's on the roadmap for Zilla?

Please review the Zilla Roadmap. If you have a request or feedback, we would love to hear it! Get in touch through any of the channels.

🌱 Community

Looking to contribute to Zilla? Check out the Contributing to Zilla guide. ✨ We value all contributions, whether source code, documentation, bug reports, feature requests or feedback!
