Improvements in CLI generated Webpack config #969

Closed · chrisckc opened this issue Nov 13, 2018 · 33 comments

I'm submitting a feature request
Improvement of the generated webpack config in relation to bundle splitting, long-term cacheability and other aspects, in line with the following document:
https://developers.google.com/web/fundamentals/performance/webpack/

  • Loader/bundler:
    Webpack (v4)

Current behavior:
Webpack config is not optimised

  • What is the expected behavior?
    Webpack config should have some basic level of optimisation and documentation should cover more advanced optimisations.

  • What is the motivation / use case for changing the behavior?
    To provide an improved out-of-the-box development experience.

Following on from #958, I have learned more about webpack optimisations and tested some configs against different codebases, both Aurelia and non-Aurelia.

Chunk splitting seems to be one of the most important aspects of optimisation, provided that we are already minifying both js and css.

The CLI generated webpack config minifies js in production mode, as that is enabled by default in webpack v4, but it does not minify CSS without an additional plugin.
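For the CSS side, a minimal sketch of one common webpack 4 approach, assuming optimize-css-assets-webpack-plugin and terser-webpack-plugin are installed:

```js
const TerserPlugin = require('terser-webpack-plugin');
const OptimizeCSSAssetsPlugin = require('optimize-css-assets-webpack-plugin');

module.exports = {
  optimization: {
    // overriding optimization.minimizer replaces webpack's default JS minifier,
    // so TerserPlugin has to be re-added explicitly alongside the CSS minifier
    minimizer: [
      new TerserPlugin(),           // JS minification (the webpack 4 default)
      new OptimizeCSSAssetsPlugin() // CSS minification (uses cssnano by default)
    ]
  }
};
```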

I will talk about chunk splitting first with some test results.

@chrisckc commented Nov 13, 2018

In webpack v4, chunk splitting is controlled by the optimization.splitChunks setting, and the best configuration is highly dependent on network speed (latency and transfer time) and whether or not HTTP/2 is used.

HTTP/1.1 allows up to six simultaneous connections in Chrome/Firefox/Safari.
HTTP/2 uses multiplexing on a single connection to download many files simultaneously, and compresses headers, etc.

The default behaviour of splitChunks is described in the webpack docs:
https://webpack.js.org/plugins/split-chunks-plugin/#optimization-splitchunks

> By default it only affects on-demand chunks, because changing initial chunks would affect the script tags the HTML file should include to run the project.

The above statement may be relevant in some situations, such as using vanilla js or a different framework, but not for Aurelia, due to the way the Aurelia webpack build is configured.

Using the default config would result in single js and css chunks containing everything; this has no upsides, only downsides. It does not take advantage of browser caching, as the slightest change to the app would result in the whole js bundle getting a new filename hash and being downloaded again.
Also, large files are more likely to fail during download on unreliable connections.

For the simple hello-world app which the CLI generates, the default splitChunks config with just 2 simple changes, enabling splitting of both initial and async chunks and separating out the webpack runtime, would be sufficient:

```js
optimization: {
  runtimeChunk: true,
  splitChunks: {
    chunks: "all"
  }
}
```

This would create separate app and vendor js and css chunks and a separate webpack runtime chunk, increasing cacheability.

But wait: for the webpack-skeleton-navigation source, more work is required.

I have noticed that users.ts lazy loads aurelia-fetch-client and conditionally lazy loads the isomorphic-fetch polyfill. Using the default webpack splitChunks config results in a separate 'fetch' chunk being generated, but not a separate chunk for aurelia-fetch-client.

I discovered this during testing; the fetch chunk is created because of the magic comment "webpackChunkName:" in this code in users.ts:

```js
const fetchPolyfill = !self.fetch
  ? import('isomorphic-fetch' /* webpackChunkName: 'fetch' */)
  : Promise.resolve(self.fetch);
```

However, the default splitChunks config bundles aurelia-fetch-client with the rest of the vendor modules, removing most of the advantage of lazy loading it.

It's also not worth adding the code complexity of lazy loading if it's not going to provide the full benefit of reducing the initial download size. With this in mind, the webpack config should take this into account and make sure that aurelia-fetch-client is separated from the vendor bundle.

Taking into account the fact that the CLI is being updated to output the skeleton source files, we need a webpack config that does the following:

  • splits chunks in a manner that suits the skeleton source code
  • minifies css as well as js in production builds
  • respects the lazy loading example in users.ts
  • provides other optimisations according to the Google document I linked above

Maybe the webpack config should depend on what the CLI is outputting, if it's going to be optional?

Chunk splitting performance test results to follow next....

@chrisckc commented Nov 13, 2018

HTTP/1.1 Test results

Here are the results of tests using HTTP/1.1 and different webpack configs.

The tests were conducted using a CLI generated project with the Skeleton source files transplanted, which might be very close to what the CLI will output soon.
The repo for the project is here: https://github.com/chrisckc/aurelia-cli-skeleton-typescript-webpack
I commented out the bootstrap sass import and replaced it with the bootstrap css import to simplify testing.
I stripped the optimisation section out of the config and used the different optimisation configs described below.

I used `au build --env prod` to build, served the result using `npm start -- serve`, and then checked the timings in Chrome DevTools when requesting the home page.

Test 1 (default config)

Using the default splitChunks config, the following 4 files were downloaded by the browser:
- index.html: 893 bytes
- single app/vendor chunk: 524KB
- single app/vendor css file: 157KB (containing font-awesome & bootstrap)
- font-awesome woff2 font: 75.8KB

This is the request waterfall on Slow 3G: [screenshot]

Chrome DevTools timings with network throttling enabled:

Custom 4G (4.0Mb/s 50ms latency)
4 requests | 757KB transferred | finish 1.6s | Load 1.75s

Fast 3G
4 requests | 757KB transferred | finish 5.25s | Load 5.40s

Slow 3G
4 requests | 757KB transferred | finish 19.20s | Load 19.35s

Summary: most of the time is taken by one large request.

Comments:
I noticed that when using the Slow 3G emulation, the loading splash markup does not show for quite some time. When it does eventually appear, the loading spinner shows a square rather than the font-awesome spinner for most of the remaining loading duration.

This is caused by the fact that the large css bundle has to finish downloading before the browser renders the splash markup. Once the browser has the css file, it then knows it needs to download the woff2 font file. As the woff2 file is 75KB and only starts to download late in the load cycle, the app js file, downloading in a parallel request, is almost finished by the time the font arrives, so the loading screen does not work very well.

Test 2 (separate runtime chunk and splitChunks all)

Using the following splitChunks config:

```js
optimization: {
  runtimeChunk: true,
  splitChunks: {
    chunks: "all"
  }
}
```

The following 7 files are downloaded by the browser:
- index.html: 893 bytes
- single app chunk: 23.8KB
- single vendor chunk: 498KB
- single app css file: 2.5KB
- single vendor css file: 155KB
- runtime chunk: 2.7KB
- font-awesome woff2 font: 75.8KB

This is the request waterfall on Slow 3G: [screenshot]

Chrome DevTools timings:

Custom 4G (4.0Mb/s 50ms latency)
7 requests | 759KB transferred | finish 1.6s | Load 1.74s

Fast 3G
7 requests | 759KB transferred | finish 5.29s | Load 5.45s

Slow 3G (TTFB 2 seconds)
7 requests | 759KB transferred | finish 19.24s | Load 19.41s

Summary: win-win; negligible difference in load time, much better cacheability, and more chance of success on unreliable connections.

Test 3 (new config with separate runtime chunk)

Using the config provided in the recently merged pull request, with the addition of runtimeChunk: true to match the above tests.

11 files are downloaded by the browser:
This is the request waterfall on Slow 3G: [screenshot]

Chrome DevTools timings:

Custom 4G (4.0Mb/s 50ms latency)
11 requests | 762KB transferred | finish 1.66s | Load 1.68s (tiny bit faster)

Fast 3G
11 requests | 762KB transferred | finish 5.68s | Load 5.69s (tiny bit slower)

Slow 3G (TTFB 2 seconds)
11 requests | 762KB transferred | finish 20.65s | Load 20.66s (a bit slower)

Summary: arguably not quite as good load times as the previous test (except on 4G), but better cacheability when upgrading node modules and more chance of success on unreliable connections.

Comments:
Looking at the waterfall, it seems that downloading the font file is holding up load completion: the large css file containing both font-awesome and bootstrap needs to be downloaded before the browser knows it needs to get the font file, and the maximum number of simultaneous requests has been used up by then, so the request is queued.

Let's fix that by separating font-awesome and bootstrap into separate css and js bundles.

Test 4 (new config with separate runtime chunk and additional splits)

Using the config provided in the merged pull request, with the addition of runtimeChunk: true to match the above tests, plus the following cacheGroups:

```js
fontawesome: {
  name: 'vendor.font-awesome',
  test: /[\\/]node_modules[\\/]font-awesome[\\/]/,
  enforce: true,
  reuseExistingChunk: true,
  priority: 40
},
bootstrap: {
  test: /[\\/]node_modules[\\/]bootstrap[\\/]/,
  name: "vendor.bootstrap",
  enforce: true,
  priority: 30
},
```

14 files are downloaded by the browser:
This is the request waterfall on Slow 3G: [screenshot]

Chrome DevTools timings:

Custom 4G (4.0Mb/s 50ms latency)
14 requests | 764KB transferred | finish 1.75s | Load 1.86s (tiny bit slower than 7 requests)

Fast 3G
14 requests | 764KB transferred | finish 5.33s | Load 5.35s (tiny bit faster than 7 requests)

Slow 3G (TTFB 2 seconds)
14 requests | 764KB transferred | finish 19.45s | Load 19.46s (tiny bit slower than 7 requests)

Summary: results are more or less the same as the 7-request test, but again with better cacheability when upgrading node modules and more chance of success on unreliable connections.

Overall summary

  • Splitting out the bundle results in a very slightly larger overall size, but the difference is small enough to have hardly any effect.

  • A few more files do not result in any significant reduction in load times (up to the tested 15 files, anyway).

  • Splitting out the bundle allows better long-term cacheability, as long as it is combined with other measures such as using HashedModuleIdsPlugin and the correct server headers; refer to the discussion in #965 (see the sketch after this list).

  • Splitting the bundle into 6 parts seems to be the theoretical sweet spot for HTTP/1.1 in Chrome.
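As a minimal sketch of those long-term-caching measures for webpack 4 (the filename patterns are illustrative, and the cache headers themselves remain a server concern):

```js
const webpack = require('webpack');

module.exports = {
  output: {
    filename: '[name].[chunkhash].bundle.js',    // content-based hash in the filename
    chunkFilename: '[name].[chunkhash].chunk.js'
  },
  plugins: [
    // keeps module ids stable between builds so unchanged chunks keep their hash
    new webpack.HashedModuleIdsPlugin()
  ]
};
```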

@chrisckc commented Nov 13, 2018

Splitting out the bundles can also be achieved using splitChunks.maxSize, but this gives no control over the naming of the output bundles; everything is named "app~<>.chunk.js". Using maxSize: 100000 results in 19 requests with the largest js file being 89KB; however, it can't split the css bundle to under 100KB, because the bootstrap css alone is larger than 100KB.

Also note that none of the above configs split out the lazy loaded aurelia-fetch-client in users.ts, so it is downloaded during the initial download. This only adds 7.4KB to the initial download, but nonetheless, if lazy loading is used, it should be safe to assume the intention is to defer the download until the module is required.

Looking at the default splitChunks config (https://webpack.js.org/plugins/split-chunks-plugin/#optimization-splitchunks), it will only try to split async chunks, so it looks like it should pick up aurelia-fetch-client and add it to an async 'vendors' bundle, because that particular node module is being loaded async (assuming the lazy loading code in users.ts is correct).

Unfortunately that does not happen; only isomorphic-fetch is split out, due to the dynamic import code, and its name is controlled by the magic comment /* webpackChunkName: 'fetch' */.
I am not exactly sure why this is the case, as I don't know enough about lazy loading and webpack yet, but I suspect it is because aurelia-fetch-client still has to be imported in users.ts like so:

```ts
import {HttpClient} from 'aurelia-fetch-client';
```

This is so that it can be injected into the Users class, which means it is statically imported by a sync/initial-loaded module, and so it also becomes an initial module according to the webpack docs.

@chrisckc commented Nov 13, 2018

To fix the above issue with lazy loading aurelia-fetch-client, the following needs to be done:
Edit app.ts and update the users route by adding a second parameter to PLATFORM.moduleName, as follows:

```ts
{ route: 'users', name: 'users', moduleId: PLATFORM.moduleName('./users', 'users'), nav: true, title: 'Github Users' },
```

The second parameter tells webpack to create a separate on-demand/async chunk for the 'users' module, with the supplied name. This means that aurelia-fetch-client is no longer imported from an 'initial' module.

Now, assuming that you are using the default cacheGroups and just want to add the minimum to make this work, use the following config to add an extra cacheGroup:

```js
optimization: {
  runtimeChunk: true,
  splitChunks: {
    chunks: "all",
    cacheGroups: {
      aureliaFetchClient: {
        test: /[\\/]node_modules[\\/]aurelia-fetch-client[\\/]/,
        name: "vendor.aurelia-fetch-client",
        chunks: 'async',
        enforce: true,
        priority: 29
      }
    }
  }
}
```

Note that enforce needs to be true unless we alter minSize from the default 30 KB to less than the size of the module (7.4 KB). Priority can be anything greater than 0.
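A sketch of that minSize alternative: drop enforce and lower the threshold below the ~7.4 KB module size instead (per-cacheGroup overrides of minSize are supported in webpack 4; the 5000 here is an illustrative value):

```js
aureliaFetchClient: {
  test: /[\\/]node_modules[\\/]aurelia-fetch-client[\\/]/,
  name: "vendor.aurelia-fetch-client",
  chunks: 'async',
  minSize: 5000, // below the module size, so the chunk is still created
  priority: 29
}
```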

Notice that the 'fetch' and 'aurelia-fetch-client' bundles are now not downloaded in the initial download, and that when visiting the users route, new requests are made for the 'users', 'fetch' and 'aurelia-fetch-client' chunks.

There is one issue with the above config though: the 'users' and 'aurelia-fetch-client' chunks are both under 10 KB, so they would probably be better off as a single chunk.

Edit: after testing the above code changes using the webpack config in the recent pull request, I found that the aurelia-fetch-client module is included in the separate 'users' chunk, which avoids an additional request. Note that with this config, if another async view also used aurelia-fetch-client, that module would then be shared by 2 or more modules and would be split into a separate chunk by the 'common' cacheGroup entry. This would take us back to having separate requests again, but only when loading the first view that uses aurelia-fetch-client; after that it's already in the browser.
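For context, the 'common' cacheGroup in that config is along these lines (paraphrased; the exact values in the PR may differ):

```js
common: {
  name: 'common',
  minChunks: 2,            // modules shared by at least two chunks
  chunks: 'async',
  priority: 10,
  reuseExistingChunk: true,
  enforce: true            // split regardless of minSize
}
```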

If enforce: true were removed from the 'common' cacheGroup, aurelia-fetch-client would instead be embedded in each view's chunk, resulting in some duplication. This duplication would not be much of an issue though, because it exists in an on-demand chunk, and downloading an extra 7.4KB with each view's chunk is not really going to make a noticeable difference.

With enforce: true removed, aurelia-fetch-client would need to be 30KB or larger to be split into a separate chunk by the 'common' entry in the cacheGroups.

This clearly shows that optimising lazy loaded modules is not easy. I think that the default minSize of 30 KB suits initial chunks, but not async/on-demand chunks, where duplicating a 29 KB module across many async chunks would be a bit wasteful. Maybe something in the range of 10 to 15 KB would be a better minSize for async chunks when considering slower connections.
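A sketch of that idea: keep the 30 KB default for initial chunks, but give on-demand chunks a lower threshold via a per-cacheGroup override (the 12000 value and the 'asyncVendors' name are illustrative, not from any existing config):

```js
optimization: {
  splitChunks: {
    chunks: 'all',
    minSize: 30000,        // webpack 4 default, fine for initial chunks
    cacheGroups: {
      asyncVendors: {
        test: /[\\/]node_modules[\\/]/,
        chunks: 'async',
        minSize: 12000,    // lower threshold for async/on-demand chunks
        priority: 25
      }
    }
  }
}
```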

@chrisckc

Note that I am working on a more generic config which avoids having to call out specific modules in the webpack config; however, this config might be more suited to HTTP/2, as it's going to result in more chunks. In the meantime, I'll post some results of using HTTP/2 with the same tests that I did above.

@EisenbergEffect

@jods4 Just want to make sure you've got your eyes on this to provide feedback as @chrisckc puts this work in. Lots of good info here.

@chrisckc commented Nov 14, 2018

HTTP/2 results:

I used the following package to get an HTTP/2 server: https://www.npmjs.com/package/static-http2-server, with a self-signed certificate (HTTP/2 requires HTTPS).

For each test, the HTTP/2 server was restarted to avoid "HTTP/2 Server Push" coming into play when re-requesting the page. Obviously the browser cache was also disabled, as in the previous tests.

The results were a little surprising.
I repeated the previous tests and also created a bundle consisting of 40 chunks as a further test.

  • I observed that even at 40 requests, there was still hardly any difference between HTTP/1.1 and HTTP/2; the results were much the same as tests 3 and 4.
  • I also observed that with both HTTP/1.1 and HTTP/2 there is only a very small difference in load time between making 4 requests and 40 requests.

Looking at the request waterfall, the lack of benefit from HTTP/2 seems to be because the large bootstrap css file, and the separate request for the font-awesome font, hold up the download and page load completion. At 40 chunks, all of the other chunk files are much smaller than the bootstrap css; it looks like if that could be split up, and the font-awesome font requirement removed, HTTP/2 would be able to provide benefits.

Although in theory the Chrome DevTools network throttling should mean there is no difference, I think it would be good to try a separate server over a network link to rule out any side effects of running the server on localhost. I will report back when I get around to that sanity test.

Ok, so here are the results for comparison:

Test 1 (default config)

no differences

Test 2 (separate runtime chunk and splitChunks all)

no differences

Test 3 (new config with separate runtime chunk)

Custom 4G (4.0Mb/s 50ms latency)
11 requests | 762KB transferred | finish 1.7s | Load 1.71s (tiny bit slower)

Fast 3G
11 requests | 762KB transferred | finish 5.8s | Load 5.81s (tiny bit slower)

Slow 3G (TTFB 2 seconds)
11 requests | 762KB transferred | finish 21.01s | Load 21.02s (a bit slower)

As noted in the HTTP/1.1 test results, it is clear from the waterfall that the font-awesome font is holding up load completion; this doesn't stop the page from being rendered though.
The load time reported in DevTools is for all assets, but in reality, once the js and css have been downloaded, the page is ready.

Test 4 (new config with separate runtime chunk and additional splits)

Custom 4G (4.0Mb/s 50ms latency)
15 requests | 765KB transferred | finish 1.62s | Load 1.64s (tiny bit faster)

Fast 3G
15 requests | 765KB transferred | finish 5.67s | Load 5.68s (tiny bit slower)

Slow 3G (TTFB 2 seconds)
15 requests | 765KB transferred | finish 20.57s | Load 20.58s (a bit slower)

In the HTTP/1.1 waterfall, it is clear that the additional split separating the bootstrap and font-awesome chunks did not have the desired effect, due to the maxed-out concurrent request limit.
For HTTP/2, I anticipated that this would not be a problem; however, testing shows that although the font-awesome css is available sooner, the font-awesome font file is still not requested until all of the css has finished downloading.

Unfortunately the bootstrap css is quite big, so this prevents the spinner from appearing until the very last moment of the whole load cycle. This is because the css content of a page is part of the critical rendering path: https://developers.google.com/web/fundamentals/performance/critical-rendering-path/render-tree-construction

In reality, the loading spinner is not going to be used as-is in a real app, and it would be better to use something that does not rely on an external font. There are nicer ways to show something while waiting; this is just an example of the complexity involved here.

This does, however, highlight the issue of showing some UI while the Aurelia app loads when you have a large amount of css: it doesn't matter how the css is split, the browser needs all of it before it renders the page.

Maybe this could be used to help solve that problem: https://github.com/filamentgroup/loadCSS/
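Usage is minimal (a sketch, assuming the loadCSS helper from that repo is included on the page; the filename here is hypothetical):

```js
// injects a stylesheet <link> asynchronously so the css does not
// block the first render of the page
loadCSS('vendor.bootstrap.css');
```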
I'm getting sidetracked here though; this should be a topic for a separate issue.

Overall Summary

  • To realise the benefits of HTTP/2, careful attention needs to be paid to chunk sizes.
  • The performance benefit of HTTP/2 with many small chunks is cancelled out if there is also a large chunk in the bundle.

@chrisckc

Here is the HTTP/2 waterfall for the test I carried out with 40 chunks: [screenshot]

As can be seen, the larger chunks appear to hold up the download, and the last request at the bottom is a separate one for the font file.

@chrisckc commented Nov 14, 2018

Ok, so I removed font-awesome and split the bootstrap css into 2 separate chunks using this method:

https://github.com/webpack-contrib/sass-loader/issues/455#issuecomment-438676470

I then carried out the tests with both a 41-chunk split and a 4-chunk split, using both HTTP/1.1 and HTTP/2.

41 Chunk Split:

HTTP/1.1

Custom 4G (4.0Mb/s 50ms latency)
41 requests | 648KB transferred | finish 1.45s | Load 1.58s

Fast 3G
41 requests | 648KB transferred | finish 5.83s | Load 5.98s

Slow 3G (TTFB 2 seconds)
41 requests | 648KB transferred | finish 20.94s | Load 21.13s

HTTP/2

Custom 4G (4.0Mb/s 50ms latency)
41 requests | 648KB transferred | finish 1.44s | Load 1.68s

Fast 3G
41 requests | 648KB transferred | finish 4.75s | Load 4.95s

Slow 3G (TTFB 2 seconds)
41 requests | 648KB transferred | finish 17.24s | Load 17.40s

4 Chunk Split:

HTTP/1.1

Custom 4G (4.0Mb/s 50ms latency)
4 requests | 625KB transferred | finish 1.34s | Load 1.53s

Fast 3G
4 requests | 648KB transferred | finish 4.54s | Load 4.69s

Slow 3G (TTFB 2 seconds)
4 requests | 648KB transferred | finish 16.56s | Load 16.72s

HTTP/2

Custom 4G (4.0Mb/s 50ms latency)
4 requests | 624KB transferred | finish 1.34s | Load 1.49s

Fast 3G
4 requests | 624KB transferred | finish 4.54s | Load 4.68s

Slow 3G (TTFB 2 seconds)
4 requests | 624KB transferred | finish 16.50s | Load 16.64s

Summary

These results can't be directly compared with all the previous results, as less is being downloaded.

  • Splitting into fewer chunks is always slightly faster, regardless of network speed (assuming the connection is reliable).

  • HTTP/2 is marginally faster than HTTP/1.1, regardless of network speed.

  • HTTP/2 allows the bundle to be split into many chunks with very little performance penalty.

The outcome of this seems to be:

Use HTTP/2 with many small chunks to achieve better long-term cacheability and increased resilience on unreliable connections.

@EisenbergEffect

Nice work @chrisckc! I'm not a Webpack expert, so ultimately I need to rely on other members of my team to provide feedback. @jods4, can you review the extensive info above and share your thoughts? Also @huochunpeng, since you are working on the CLI, can you review as well?

@3cp commented Nov 15, 2018

Webpack is over my head, I cannot judge. But I can see @chrisckc has got pretty comprehensive test results here. 👍

@chrisckc

The long and short of it is that many of the web performance best practices become anti-patterns when used with HTTP/2. Maybe there should be an option in the CLI which asks what the webpack config should be optimised for?

Use of HTTP/2 is on the rise: https://w3techs.com/technologies/details/ce-http2/all/all
Some of this will be off the back of the rise of HTTPS, due to increased web security awareness and regulations such as GDPR.

@EisenbergEffect

This is tricky because there's a fine balance between the UX of the CLI and the questions that we ask. Perhaps for the default we use our existing setup, but if you choose custom and Webpack, then we let you decide between the two setups. @huochunpeng @JeroenVinke, what do you all think? If that sounds good, @chrisckc would you be willing to tweak a PR or two to submit the change in that form?

@3cp commented Nov 16, 2018

Or put some alternative config in the webpack.config template, but comment it out with clear instructions on what it can offer. Users can unlock the config when they are more experienced with webpack.

This way it's simpler for the cli, and it also retains all the possible alternative config in the user's local app.
If we ask users to choose upfront, some (me, for example) may not have enough knowledge to make the correct decision, and they will hunt for new config snippets when they want something more in future.

It's better to have a lengthy, well self-documented webpack.config template.

For example, I think our current webpack splitChunks config could put the following chunks behind comments, with documentation on why you might want them and a link to this GitHub issue for people to check performance benchmarks (see the sketch after this list):

bluebird
aureliaBinding
aureliaTemplating
aurelia
common
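Something like this, as a sketch (the values shown are illustrative, not the final template):

```js
// uncomment to split bluebird into its own chunk; see aurelia/cli#969 for benchmarks
// bluebird: {
//   test: /[\\/]node_modules[\\/]bluebird[\\/]/,
//   name: 'vendor.bluebird',
//   enforce: true,
//   priority: 30
// },
```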

@chrisckc commented Nov 20, 2018

@EisenbergEffect I can see benefits to both approaches; asking about HTTP/2 in the CLI menu under the custom choice will prompt users to think about this. This is good, because an app would ideally need to be designed with HTTP/2 in mind from the start to take the most advantage of it.

But as @huochunpeng stated, it does require more upfront knowledge/learning to answer the CLI questions. Just adding the extra config as a commented-out block also removes the need to update the CLI menu to implement it.

As a best-of-both approach, we could update the CLI menu adding a new option for HTTP/2, and if the HTTP/2 option is not chosen, the generated webpack config could contain a comment about HTTP/2 and a link to the webpack config in the CLI source code ("/aurelia-cli/lib/resources/content/webpack.config.template.js") where the required config snippet can be obtained. This avoids having a large block of commented-out code which the user may just delete initially.

@chrisckc

Btw, I have updated my test repo:
https://github.com/chrisckc/aurelia-cli-skeleton-typescript-webpack

It now contains a more generic webpack config. The new config contains a "vendorSplit" cacheGroup, which works by using a function for the name property.

This results in each node module being split individually if its size is bigger than the specified minSize (20000 bytes):

```js
// generic 'initial/sync' vendor node module splits:
vendorSplit: { // each node module as a separate chunk file if the module is bigger than minSize
  test: /[\\/]node_modules[\\/]/,
  name(module) {
    //console.log('vendor module.context: ', module.context);
    // extract the name of the package from the path segment after node_modules
    const packageName = module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/)[1];
    return `vendor.${packageName.replace('@', '')}`;
  },
  priority: 20,
  minSize: 20000 // only create if 20k or larger
},
aurelia: { // picks up any remaining aurelia modules from node_modules that are < 20KB
  test: /[\\/]node_modules[\\/]aurelia-.*[\\/]/,
  name: "vendor.aurelia",
  enforce: true,
  priority: 19
},
vendors: { // picks up everything else being used from node_modules that is < 20KB
  test: /[\\/]node_modules[\\/]/,
  name: "vendors",
  priority: 15,
  enforce: true // create chunk regardless of the size of the chunk
},
```

Any aurelia modules smaller than 20k are then gathered up into a single "vendor.aurelia" chunk, and any non-aurelia modules are gathered up into a "vendors" chunk.

With this config it is not necessary to have separate entries for bluebird and jquery in my test repo. It still needs the separate entry for bootstrap, as that is responsible for separating the compiled css emitted by sass-loader.

I have left the separate entries for bluebird and jquery in the config for now, as it affects the order of the scripts injected into index.html, and I am not yet sure if this matters. I suspect not, as I don't think the webpack runtime executes the entry chunk until it has downloaded all of the initial chunks.

@EisenbergEffect commented Nov 21, 2018

@chrisckc I like your idea. So, if custom is selected, and Webpack, we can ask about HTTP/2. If they choose no, we include the link to GitHub. We can also include the link if they take the default setup.

Would you be interested in submitting a PR to make these changes? Perhaps @huochunpeng could help guide you as to where/how to make them.

@chrisckc commented Nov 22, 2018

No problem. First we have to agree on how many chunks should be created for the non-HTTP/2 config and how many for the HTTP/2 config. We also need to decide whether we are adding the following config, which enables long-term caching to work properly; this would be added to both configs:

```js
runtimeChunk: true,
// moduleIds is the replacement for HashedModuleIdsPlugin and NamedModulesPlugin, which were
// deprecated in https://github.com/webpack/webpack/releases/tag/v4.16.0
// in production mode this changes module ids to hashes based on the relative path of the
// module; otherwise the module name is used
moduleIds: production ? 'hashed' : 'named',
```

HTTP/1.1
The config in the recent PR creates 6 chunks, so maybe use this as the default HTTP/1.1 config.

HTTP/2
The config in the repo I linked to above, which splits each node module bigger than 20k into its own chunk by using minSize: 20000, would create 11 chunks, so maybe we use that for the HTTP/2 config? This config is also maintenance-free, as it automatically splits out any node module that is bigger than 20k. The only issue is that adding many small modules will result in the 'vendors' chunk getting larger, as they won't be split out. I'm thinking minSize: 10000 might be better. The maxSize option is supposed to deal with that, but in testing I found that it has issues.

MaxSize issues:

The maxSize option seems to limit the size of a collection of node modules by splitting it up into smaller chunks, each containing fewer modules.

In the above example, "vendors.{hash}.chunk.js" would be split as follows:
"vendors~{shorthash}.{hash}.chunk.js"
"vendors~{shorthash}.{hash}.chunk.js"
"vendors~{shorthash}.{hash}.chunk.js"
...

However, it is unable to split a single large module; it seems to try, as a tiny, almost empty chunk is created for the single large module. I tried applying maxSize to individual cacheGroups instead of at the global level, but this caused the webpack build to hang, which looks like the result of a loop.

I don't think maxSize is production-ready yet; it was only introduced quite recently in webpack, and I seem to have found 2 bugs.

@chrisckc

I logged an issue for the maxSize problems over at the webpack repo:
webpack/webpack#8407

@EisenbergEffect

Seems to make sense to go ahead and add the long-term caching for both, and keep the 6 chunks for HTTP/1.1. We can go with your recommendation for HTTP/2 as well. It seems good to me. Should we do this now, or wait on the webpack issue to be resolved?

@chrisckc

The webpack issue has already been started on, and they seem to have regular releases, so I think it would be worth waiting so that we can use the maxSize option. The package.json will need to specify the minimum webpack version.

@chrisckc

The documentation also requires updating; I just commented on this:
aurelia/documentation#348

@EisenbergEffect

On the documentation: we should be getting a fix for that in soon. @chrisckc, I saw that there's some discussion ongoing for the webpack issue you raised. Do you just want to ping us back here when that's resolved and you think we should move this forward? We're flexible. You're doing some great work here, and you're more knowledgeable about Webpack than I am, so I'm happy to defer to you on what you think is the best option here.

@chrisckc

The webpack issue (webpack/webpack#8407) has now been fixed and released in Webpack v4.26.1.

I will work on suitable configs later this week, with a view to creating a pull request.

@EisenbergEffect

Excellent!

@chrisckc commented Nov 30, 2018

I have updated my test repo again; there is now a separate webpack config for HTTP/2:
https://github.com/chrisckc/aurelia-cli-skeleton-typescript-webpack

The normal HTTP/1.1 config creates 8 vendor chunks, by using the default minSize: 30000 and splitChunks.maxSize: 200000.
(The total number of files downloaded is a bit more than 8, due to other files such as the app bundle, css and webpack runtime.)

The HTTP/2 config creates 21 chunks, by using minSize: 10000 and maxSize: 40000, along with a more aggressive splitChunks config.
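For reference, a minimal sketch of those HTTP/2-oriented settings (the full config with all its cacheGroups lives in the repo above; the env handling here is illustrative):

```js
// webpack.config.js (sketch)
module.exports = ({ production } = {}) => ({
  optimization: {
    runtimeChunk: true,
    moduleIds: production ? 'hashed' : 'named',
    splitChunks: {
      chunks: 'all',
      minSize: 10000, // split node modules as small as 10 KB
      maxSize: 40000  // break up chunk groups larger than 40 KB where possible
      // ...plus the more aggressive cacheGroups from the repo
    }
  }
});
```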

The HTTP/2 config would create more chunks if some of the larger modules could be split up.

The largest modules are:
Bluebird
jQuery
aurelia-binding
aurelia-templating

This happens when modules are either implemented as a single module or are pre-bundled, so they can't be split by webpack.

Note: the updated configs require a minor update to the CLI generated index.cshtml that was changed in pull request #987, at least until the dotnet template is updated to use the SpaProxy method.

There still remains an issue with Webpack in relation to empty chunks being created when using mini-css-extract-plugin (webpack/webpack#7300), but it has been ongoing for some time and looks like it is only going to be fixed in Webpack v5.

@EisenbergEffect

@chrisckc Thanks for your continued work on this, and especially all the measuring. It seems like we're ready for a PR, if you'd like to put one together. Do you need some assistance on where to put what, or how to proceed?

@chrisckc commented Dec 1, 2018

@EisenbergEffect After a quick look, I can see that the menu structure is driven by "lib/commands/new/new-application.json", so it looks like I need to add a new question and state config in there and reference it in "lib/resources/content/webpack.config.template.js".

The new option should probably be specified right after the "loaderBundler" question, such that if Webpack is specified, a new "activity" asks which HTTP protocol to optimise the Webpack bundle-splitting config for, either HTTP/1.1 or HTTP/2, and then assigns a state to be used later.

In webpack.config.template.js I could then use this to select the alternative config:

```js
// @if httpProtocol.id='http2'
```
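As a sketch, assuming the template preprocessor pairs @if with @endif as it does elsewhere in the CLI templates (httpProtocol is the new state proposed above, not an existing CLI property):

```js
splitChunks: {
  chunks: 'all',
  // @if httpProtocol.id='http1'
  maxSize: 200000,
  // @endif
  // @if httpProtocol.id='http2'
  minSize: 10000,
  maxSize: 40000
  // @endif
}
```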

BTW, I have published a blog post which explains the webpack optimisations covered here and in my test repo:
https://www.chrisclaxton.me.uk/chris-claxtons-blog/webpack-chunksplitting-deepdive

@EisenbergEffect

@chrisckc Sounds like you've got a good understanding of what's needed. Also, if you'd like to cross-post to the official Aurelia blog or do a guest blog post after we get this all merged and released, we'd love to have you.

@chrisckc commented Dec 5, 2018

@EisenbergEffect Creating a PR today. I will also modify the package.json to resolve this issue:

#996

Are you ok with me updating karma-webpack to an RC version ("karma-webpack": "^4.0.0-rc.3" in package.json) to fix that issue, and also adding node: { fs: 'empty' } to the webpack config template? Or would you rather have a separate PR for those?

chrisckc added a commit to chrisckc/cli that referenced this issue Dec 5, 2018:

Improvement of generated webpack config in relation to bundle splitting, long-term cacheability and other aspects.
Added a new question for the HTTP protocol to be used in the webpack optimisation config.
The new question is only accessible when following the 'custom' path.
Choices are HTTP/1.1 or HTTP/2; the default is HTTP/1.1 when the question is not presented.

PR for feature: aurelia#969
@chrisckc commented Dec 5, 2018

I left the fix for #996 out of the above PR and created a separate PR for it.

@magnusdanielson

Excellent work! I have always wondered what the load stats would look like.

@3cp commented May 15, 2019

@chrisckc pls close this issue if it's considered fixed, thx.

3cp closed this as completed Oct 7, 2020
gp-slick-coder pushed a commit to Styrit/shopadoo.styrit.com that referenced this issue Jun 3, 2021