JsChart demo (#26)
Migrate ApexCharts chart mechanism.

* Remove custom plugin
abendt authored Nov 12, 2024
1 parent 7769917 commit 1e16b03
Showing 6 changed files with 291 additions and 42 deletions.
76 changes: 76 additions & 0 deletions _includes/footer.html
@@ -0,0 +1,76 @@
<footer class="site-footer h-card">
<data class="u-url" href="{{ "/" | relative_url }}"></data>

<div class="wrapper">

<div class="footer-col-wrapper">
<div class="footer-col">
<p class="feed-subscribe">
<a href="{{ site.feed.path | default: 'feed.xml' | absolute_url }}">
<svg class="svg-icon orange">
<use xlink:href="{{ 'assets/minima-social-icons.svg#rss' | relative_url }}"></use>
</svg><span>Subscribe</span>
</a>
</p>
{%- if site.author %}
<ul class="contact-list">
{% if site.author.name -%}
<li class="p-name">{{ site.author.name | escape }}</li>
{% endif -%}
{% if site.author.email -%}
<li><a class="u-email" href="mailto:{{ site.author.email }}">{{ site.author.email }}</a></li>
{%- endif %}
</ul>
{%- endif %}
</div>
<div class="footer-col">
<p>{{ site.description | escape }}</p>
</div>
</div>

<div class="social-links">
{%- include social.html -%}
</div>

</div>

</footer>

{% if page.apexcharts %}

<script src="https://cdn.jsdelivr.net/npm/apexcharts"></script>
<script>
window.addEventListener('load', function() {
  const elements = document.querySelectorAll('.language-apexchart');

  elements.forEach(function(element) {
    let options;

    // Chart definitions may be strict JSON or a JavaScript object literal.
    try {
      options = JSON.parse(element.textContent);
    } catch (e) {
      options = new Function("return " + element.textContent)();
    }

    // The syntax highlighter renders the block as pre > code,
    // so navigate up to the parent of the <pre> element.
    const preElement = element.parentElement;
    const parent = preElement.parentElement;

    // Create a container div for the chart.
    const newDiv = document.createElement('div');
    newDiv.classList.add('new-chart-container');

    // Replace the original <pre> element with the chart container.
    parent.replaceChild(newDiv, preElement);

    const chart = new ApexCharts(newDiv, options);
    chart.render();
  });
});
</script>

{% endif %}
4 changes: 0 additions & 4 deletions _includes/head.html
@@ -12,8 +12,4 @@
<meta name="image" property="og:image" content="/assets/logo_wide.png">
<link rel="stylesheet" href="/assets/css/styles.css">

{% if page.apexcharts %}
<script src="https://cdn.jsdelivr.net/npm/apexcharts"></script>
{% endif %}

</head>
12 changes: 12 additions & 0 deletions _includes/jekyll-apexcharts/example.json
@@ -0,0 +1,12 @@
{
"chart": {
"type": "line"
},
"series": [{
"name": "sales",
"data": [30,40,35,50,49,60,70,91,125]
}],
"xaxis": {
"categories": [1991,1992,1993,1994,1995,1996,1997, 1998,1999]
}
}
22 changes: 0 additions & 22 deletions _plugins/apex_diagram.rb

This file was deleted.

32 changes: 16 additions & 16 deletions _posts/2024-11-11-springboot-cpu.md
@@ -153,7 +153,7 @@ The limit parameter (number of prime calculations) was set to _1000, 10,000, 50,

## Tomcat

{% apex %}
```apexchart
{
series: [{
name: "Regular",
@@ -212,13 +212,13 @@ The limit parameter (number of prime calculations) was set to _1000, 10,000, 50,
}
}
}
{% endapex %}
```

> ⬆ Note: The y-axis uses a logarithmic scale to highlight differences on the right side of the chart.
With lower CPU work (fewer prime number calculations), using a separate worker thread actually performs worse than direct execution, as the cost of switching threads outweighs any gains. However, as CPU work increases, offloading doubles the RPS for both the `CompletableFuture` and `Dispatchers.Default` versions.
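
For orientation, the two variants being compared look roughly like this on the Kotlin side. This is a minimal sketch, assuming a `countPrimes` helper as the CPU-bound work; it is not the post's actual benchmark code.

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Illustrative stand-in for the benchmark's CPU-bound prime calculation.
fun countPrimes(limit: Int): Int =
    (2..limit).count { n -> (2 until n).none { n % it == 0 } }

// "Regular": the calculation runs directly on the request-handling thread.
suspend fun handleDirect(limit: Int): Int = countPrimes(limit)

// "Dispatchers.Default": the calculation is offloaded to the shared worker pool,
// freeing the request-handling thread at the cost of a thread switch.
suspend fun handleOffloaded(limit: Int): Int =
    withContext(Dispatchers.Default) { countPrimes(limit) }
```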

{% apex %}
```apexchart
{
series: [{
name: "Regular",
@@ -277,7 +277,7 @@ With lower CPU work (fewer prime number calculations), using a separate worker t
}
}
}
{% endapex %}
```

Examining p99 latency shows that as CPU work increases, latency also rises. However, with a separate thread pool, latency grows at a much slower rate compared to direct execution.

@@ -286,7 +286,7 @@ Offloading CPU work only helps with high CPU loads. When CPU work is light, swit

## WebFlux

{% apex %}
```apexchart
{
series: [{
name: "Regular",
@@ -373,13 +373,13 @@ Offloading CPU work only helps with high CPU loads. When CPU work is light, swit
}
}
}
{% endapex %}
```

> ⬆ Note: The y-axis uses a logarithmic scale to highlight differences on the right side of the chart.
The results are similar in the WebFlux stack, but the difference between direct execution and offloaded computation is less pronounced than in the Tomcat stack. Offloading CPU-heavy computation still improves performance, but the RPS gains from offloading don't seem as significant.

{% apex %}
```apexchart
{
series: [{
name: "Regular",
@@ -473,7 +473,7 @@ The results are similar in the WebFlux stack, but the difference between direct
}
}
}
{% endapex %}
```

However, the impact is more noticeable when we look at latency. The latency values for direct executions are worse than those observed in the Tomcat stack. We interpret this as the combined cost of frequent context switches and the added latency from performing work on the event loop of the reactive stack[^2].

@@ -514,7 +514,7 @@ We modified the code to call [yield()][yield] every `batchSize` elements, allowi

This change should help reduce p99 latency by processing tasks across multiple coroutines more evenly. Next, we’ll measure the effects of varying `batchSize` values on performance.
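
As a rough illustration of that change (the loop shape and `isPrime` helper are assumed for this sketch, not taken from the post):

```kotlin
import kotlinx.coroutines.yield

// Illustrative primality check standing in for the benchmark's CPU work.
fun isPrime(n: Int): Boolean = n > 1 && (2 until n).none { n % it == 0 }

// Cooperative variant: suspend via yield() every batchSize elements so other
// coroutines on the same dispatcher get a chance to run between batches.
suspend fun countPrimesCooperatively(limit: Int, batchSize: Int): Int {
    var count = 0
    for (n in 2..limit) {
        if (isPrime(n)) count++
        if (n % batchSize == 0) yield()
    }
    return count
}
```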

{% apex %}
```apexchart
{
series: [
{
@@ -595,11 +595,11 @@ This change should help reduce p99 latency by processing tasks across multiple c
}
}
}
{% endapex %}
```

The change positively impacts RPS, showing improvements over the plain suspend baseline. Larger batch sizes are advantageous, likely because they avoid the overhead of frequent `yield()` calls.

{% apex %}
```apexchart
{
series: [
{
@@ -687,7 +687,7 @@ The change positively impacts RPS, showing improvements over the plain suspend b
}
}
}
{% endapex %}
```

A similar trend appears in p99 latency values, where the cooperative change consistently reduces latency. Larger batch sizes once again seem beneficial.

@@ -715,7 +715,7 @@ By restricting the number of active concurrent calculations with a Semaphore, we

Here, the Semaphore argument is set to match the number of CPU cores. This ensures that only as many tasks as there are CPU cores can run concurrently.
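
A minimal sketch of this pattern, using the coroutine `Semaphore` from kotlinx.coroutines; the handler and `countPrimes` helper are illustrative assumptions, not the post's code:

```kotlin
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

// Illustrative stand-in for the benchmark's CPU-bound prime calculation.
fun countPrimes(limit: Int): Int =
    (2..limit).count { n -> (2 until n).none { n % it == 0 } }

// Allow at most one CPU-bound calculation per core at any given time.
val cpuPermits = Semaphore(Runtime.getRuntime().availableProcessors())

suspend fun handleLimited(limit: Int): Int =
    cpuPermits.withPermit { countPrimes(limit) }
```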

{% apex %}
```apexchart
{
series: [{
name: "Regular (baseline)",
@@ -772,11 +772,11 @@ Here, the Semaphore argument is set to match the number of CPU cores. This ensur
}
}
}
{% endapex %}
```

As expected, we observe a positive impact on RPS.

{% apex %}
```apexchart
{
series: [{
name: "Regular (baseline)",
@@ -833,7 +833,7 @@ As expected, we observe a positive impact on RPS.
}
}
}
{% endapex %}
```

We also see improvements in latency. In general, performance gains here come from lowering concurrency in the CPU-intensive part. This can be done with a dedicated thread pool or, as in this example, by using a semaphore to limit simultaneous tasks.
