Improvements in CLI generated Webpack config #969
In webpack v4, chunk splitting is controlled by the optimization.splitChunks setting, and the best configuration is highly dependent on network speed (latency and transfer time) and whether or not HTTP/2 is used. HTTP/1.1 allows up to six simultaneous connections per origin in Chrome/Firefox/Safari. The default behaviour of splitChunks is described in the webpack docs:
The above statement may be relevant in some situations, such as plain vanilla js or a different framework, but not for Aurelia, due to the way the Aurelia webpack build is configured. Using the default config would result in single js and css chunks containing everything, which has no upsides, only downsides. It fails to take advantage of browser caching, as the slightest change to the app results in the whole js bundle getting a new filename hash and being downloaded again. For the simple hello world app which the CLI generates, the default splitChunks config with just two simple changes (enabling splitting of both initial and async chunks, and separating out the webpack runtime) would be sufficient.
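Those two changes can be sketched as follows (a minimal fragment, assuming webpack v4; `runtimeChunk: true` matches the setting used in the later tests in this thread):

```javascript
// webpack.config.js fragment (sketch): the two changes to the default config
// described above: split initial as well as async chunks, and emit the
// webpack runtime as its own chunk so app changes don't invalidate it.
const optimization = {
  runtimeChunk: true,      // separate runtime chunk per entry point
  splitChunks: {
    chunks: 'all'          // default is 'async'; 'all' also splits initial chunks
  }
};

module.exports = { optimization };
```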
This would create separate app and vendor js and css chunks plus a separate webpack runtime chunk, increasing cacheability. But wait: for the webpack-skeleton-navigation source, more work is required. I have noticed that users.ts lazy loads aurelia-http-client and conditionally lazy loads the isomorphic-fetch polyfill. Using the default webpack splitChunks config results in a separate 'fetch' chunk being generated, but not a separate chunk for aurelia-http-client. I discovered during testing that the fetch chunk is created because of the magic comment "webpackChunkName:" in this code in users.ts:
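The code in question is along these lines (a sketch rather than the exact skeleton source; the function name is illustrative):

```typescript
// users.ts (sketch): the polyfill is only loaded when the browser lacks fetch;
// the magic comment names the resulting async chunk 'fetch'.
async function ensureFetch(): Promise<void> {
  if (typeof fetch === 'undefined') {
    await import(/* webpackChunkName: "fetch" */ 'isomorphic-fetch');
  }
}
```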
However, the default splitChunks config bundles aurelia-http-client with the rest of the vendor modules, removing most of the advantage of lazy loading it. It's also not worth adding the code complexity of lazy loading if it's not going to provide the full benefit of reducing the initial download size. With this in mind, the webpack config should take this into account and make sure that aurelia-http-client is separated from the vendor bundle. Given that the CLI is being updated to output the skeleton source files, we need a webpack config that does the following:
Maybe the webpack config should depend on what the CLI is outputting, if it's going to be optional? Chunk splitting performance test results to follow next...
HTTP/1.1 Test results

Here are the results of tests using HTTP/1.1 and different webpack configs. The tests were conducted using a project consisting of a CLI generated project with the Skeleton source files transplanted, which might be very close to what the CLI will output soon.

Test 1 (default config)

Using the default splitChunks config, the following 4 files were downloaded by the browser. This is the request waterfall on Slow 3G. Chrome DevTool timings with network throttling enabled: Custom 4G (4.0Mb/s 50ms latency), Fast 3G, Slow 3G. Summary: most of the time is taken by one large request. Comments: this is caused by the fact that the large css bundle has to finish downloading before the browser renders the splash markup. Once the browser has the css file, it then knows that it needs to download the woff2 font file. As the woff2 file is 75KB and only starts to download later in the load cycle, the app js file is almost finished downloading in a parallel request by the time the font is finished, resulting in the loading screen not working very well.

Test 2 (separate runtime chunk and splitChunks all)

Using a splitChunks config with chunks: 'all' and a separate runtime chunk:
The following 7 files are downloaded by the browser. This is the request waterfall on Slow 3G. Chrome DevTool timings: Custom 4G (4.0Mb/s 50ms latency), Fast 3G, Slow 3G (TTFB 2 seconds). Summary: win-win; negligible difference in load time, much better cacheability, and more chance of success on unreliable connections.

Test 3 (new config with separate runtime chunk)

Using the config provided in the recently merged pull request, with the addition of 'runtimeChunk: true' to match the above tests, 11 files are downloaded by the browser. Chrome DevTool timings: Custom 4G (4.0Mb/s 50ms latency), Fast 3G, Slow 3G (TTFB 2 seconds). Summary: arguably not quite as good load times as the previous test (except 4G), but better cacheability when upgrading node modules and more chance of success on unreliable connections. Comments: let's fix that by separating out font-awesome and bootstrap into separate css and js bundles.

Test 4 (new config with separate runtime chunk and additional splits)

Using the config provided in the merged pull request, with the addition of 'runtimeChunk: true' to match the above tests, and with the addition of:
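The additional splits were along these lines (a sketch; the cacheGroup and chunk names here are illustrative, while the package paths match the npm modules):

```javascript
// Extra cacheGroups (sketch) to pull bootstrap and font-awesome out of the
// generic vendors chunk into their own js/css bundles.
const splitChunks = {
  chunks: 'all',
  cacheGroups: {
    bootstrap: {
      test: /[\\/]node_modules[\\/]bootstrap[\\/]/,
      name: 'vendor.bootstrap',
      priority: 20,
      enforce: true   // create the chunk even if it is under the default minSize
    },
    fontAwesome: {
      test: /[\\/]node_modules[\\/]font-awesome[\\/]/,
      name: 'vendor.font-awesome',
      priority: 20,
      enforce: true
    }
  }
};

module.exports = { optimization: { splitChunks } };
```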
14 files are downloaded by the browser. Chrome DevTool timings: Custom 4G (4.0Mb/s 50ms latency), Fast 3G, Slow 3G (TTFB 2 seconds). Summary: results are more or less the same as the test where 7 requests were made, but again with better cacheability when upgrading node modules and more chance of success on unreliable connections.

Overall summary
Splitting out the bundles can also be achieved using splitChunks.maxSize, but this gives no control over the naming of the outputted bundles; everything is named "app~<>.chunk.js". Using maxSize: 100000 results in 19 requests with the largest js file being 89KB; however, it can't split the css bundle to under 100KB because the bootstrap css is larger than 100KB. Also note that none of the above configs split out the lazy loaded aurelia-fetch-client in users.ts, resulting in it being downloaded during the initial download. This only adds 7.4KB to the initial download, but nonetheless, if using lazy loading then it should be safe to assume that the intention is to defer the download until the module is required. Looking at the default splitChunks config: unfortunately that does not happen; only isomorphic-fetch is split out, due to the dynamic import code, and its name is controlled by the magic comment.
This is so that it can be injected into the Users class, which means it is statically 'imported' in a sync/initial loaded module, so it also becomes an initial module according to the webpack docs.
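The maxSize approach mentioned above looks like this (a sketch; maxSize is a size hint in bytes, applied best-effort):

```javascript
// splitChunks.maxSize (sketch): webpack tries to split any chunk larger than
// maxSize into smaller chunks, but the generated chunk names can't be
// controlled, and a single module larger than maxSize cannot be split further.
const optimization = {
  splitChunks: {
    chunks: 'all',
    maxSize: 100000   // ~100 KB target per chunk
  }
};

module.exports = { optimization };
```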
To fix the above issue with lazy loading aurelia-fetch-client, the following needs to be done:
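The first change can be sketched as follows (an illustrative Aurelia route entry; the exact route config in the skeleton may differ):

```typescript
// app.ts (sketch): the second argument to PLATFORM.moduleName names a
// separate on-demand chunk for the 'users' module.
import { PLATFORM } from 'aurelia-pal';

export function configureRouter(config: any): void {
  config.map([
    { route: 'users', name: 'users', moduleId: PLATFORM.moduleName('users', 'users') }
  ]);
}
```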
The second parameter tells webpack to create a separate on-demand/async chunk for the 'users' module with the supplied name. This means that aurelia-fetch-client is no longer imported from an 'initial' module. Now, assuming that you are using the default cacheGroups and just want to add the minimum to make this work, use the following config to add an extra cacheGroup:
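That extra cacheGroup can be sketched like this (assuming the default cacheGroups are left in place; the group key is illustrative):

```javascript
// Extra cacheGroup (sketch): pull aurelia-fetch-client out of the vendors
// bundle so it is only downloaded when an async chunk imports it.
const splitChunks = {
  chunks: 'all',
  cacheGroups: {
    aureliaFetchClient: {
      test: /[\\/]node_modules[\\/]aurelia-fetch-client[\\/]/,
      name: 'aurelia-fetch-client',
      priority: 10,     // anything greater than 0 beats the default vendors group
      enforce: true     // required: the module (~7.4 KB) is under the default 30 KB minSize
    }
  }
};

module.exports = { optimization: { splitChunks } };
```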
Note that enforce needs to be true unless we alter minSize from the default 30KB to less than the size of the module (7.4KB). Priority can be anything greater than 0. Now notice that the 'fetch' and 'aurelia-fetch-client' bundles are not downloaded in the initial download, and when visiting the users route, some new requests are created for the 'users', 'fetch' and 'aurelia-fetch-client' chunks. There is one issue with the above config though: the 'users' and 'aurelia-fetch-client' chunks are both under 10KB, so would probably be better off as a single chunk.

Edit: After testing the above code changes using the webpack config in the recent pull request, I found that the aurelia-fetch-client module is included in the separate 'users' chunk, which avoids an additional request.

Note that with this config, if another async view also used 'aurelia-fetch-client', that module would then be shared by 2 or more modules and would be split into a separate chunk by the 'common' cacheGroup entry. This would take us back to having separate requests again, but only when loading the first view that uses 'aurelia-fetch-client'; after that it's already in the browser. If 'enforce: true' were removed from the 'common' cacheGroup, 'aurelia-fetch-client' would instead be embedded in each view's chunk, resulting in some duplication. This duplication would not be much of an issue though, because it exists in an on-demand chunk, and downloading an extra 7.4KB when each view's chunk is downloaded is not really going to make a noticeable difference. With 'enforce: true' removed, aurelia-fetch-client would need to be 30KB or larger to be split into a separate chunk by the 'common' entry in the cacheGroups. This clearly shows that optimising lazy loaded modules is not easy. I think that the default minSize of 30KB suits initial chunks, but not async/on-demand chunks, where duplicating a 29KB module across many async chunks would be a bit wasteful.
Maybe something in the range of 10 to 15 KB would be a better minSize for async chunks when considering slower connections.
Note that I am working on a more generic config which avoids having to call out specific modules in the webpack config; however, this config might be more suited to HTTP/2, as it's going to result in more chunks. In the meantime I'll post some results of using HTTP/2 with the same tests that I did above.
HTTP/2 results

I used the following package to get an HTTP/2 server: https://www.npmjs.com/package/static-http2-server with a self-signed certificate (HTTP/2 requires HTTPS). For each test the HTTP/2 server was restarted to avoid "HTTP/2 Server Push" coming into play when re-requesting the page. Obviously the browser cache was also disabled, as in the previous tests. The results were a little surprising:
The lack of benefit from HTTP/2 seems to be because, looking at the request waterfall, the large bootstrap css file and the separate request for the font-awesome font are holding up the download and page load completion. At 40 chunks, all of the other chunk files are much smaller than the bootstrap css; it looks like if this could be split up and we removed the font-awesome font requirement, HTTP/2 would be able to provide benefits. Although in theory the Chrome DevTools network throttling should mean that there is no difference, I think it would be good to try a separate server over a network link to rule out any side effects of running the server on localhost. I will report back when I get around to that sanity test.

Ok, so here are the results for comparison:

Test 1 (default config): no differences.
Test 2 (separate runtime chunk and splitChunks all): no differences.
Test 3 (new config with separate runtime chunk): Custom 4G (4.0Mb/s 50ms latency), Fast 3G, Slow 3G (TTFB 2 seconds). As noted in the HTTP/1.1 test results, it is clear from the waterfall that the font-awesome font is holding up the load completion; this doesn't stop the page from being rendered though.
Test 4 (new config with separate runtime chunk and additional splits): Custom 4G (4.0Mb/s 50ms latency), Fast 3G, Slow 3G (TTFB 2 seconds). In the HTTP/1.1 waterfall, it is clear that the additional split separating the bootstrap and font-awesome chunks did not have the desired effect, due to the maxed out concurrent request limit. Unfortunately the bootstrap css is quite big, so this prevents the spinner from appearing until the very last moment in the whole load cycle.
This is because the css content of a page is part of the critical rendering path: https://developers.google.com/web/fundamentals/performance/critical-rendering-path/render-tree-construction In reality the loading spinner is not going to be used in a real app, and it would be better to use something that does not rely on an external font. There are nicer ways to show something while waiting; this is just an example of the complexity involved here. This does however highlight the issue of showing some UI while the Aurelia app loads when you have a large amount of css: it doesn't matter how it's split, the browser needs all of the css before it renders the page. Maybe this could be used to help solve that problem: https://github.com/filamentgroup/loadCSS/

Overall Summary
Ok, so I removed font-awesome and split the bootstrap css into 2 separate chunks using this method:
I then carried out the tests with both a 41 chunk split and a 4 chunk split, using both HTTP/1.1 and HTTP/2.

41 Chunk Split:
HTTP/1.1: Custom 4G (4.0Mb/s 50ms latency), Fast 3G, Slow 3G (TTFB 2 seconds)
HTTP/2: Custom 4G (4.0Mb/s 50ms latency), Fast 3G, Slow 3G (TTFB 2 seconds)

4 Chunk Split:
HTTP/1.1: Custom 4G (4.0Mb/s 50ms latency), Fast 3G, Slow 3G (TTFB 2 seconds)
HTTP/2: Custom 4G (4.0Mb/s 50ms latency), Fast 3G, Slow 3G (TTFB 2 seconds)

Summary

Those results can't be directly compared to all the previous results, as less is being downloaded.
The outcome of this seems to be: use HTTP/2 with many small chunks to achieve better long term cacheability and increased resilience on unreliable connections.
Webpack is over my head, I cannot judge. But I can see @chrisckc got pretty comprehensive test results here. 👍
The long and short of it is that many of the web performance best practices become anti-patterns when used with HTTP/2. Maybe there should be an option in the CLI which asks what the webpack config should be optimised for? Use of HTTP/2 is on the rise: https://w3techs.com/technologies/details/ce-http2/all/all
This is tricky because there's a fine balance with the UX of the CLI and the questions that we ask. Perhaps for the default, we use our existing setup, but if you choose custom and Webpack, then we let you decide between the two setups. @huochunpeng @JeroenVinke what do you all think? If that sounds good, @chrisckc, would you be willing to tweak a PR or two to submit the change in that form?
Or put some alternative config in the webpack.config template, but comment it out with clear instructions on what it can offer. Users can unlock the config when they are more experienced with webpack. In this way it's simpler for the CLI, and it also retains all the possible alternative configs in the user's local app. It's better to have a lengthy, well self-documented webpack.config template. For example, I think our current webpack splitChunks config could put the following chunks behind comments, with documentation on why you might want them and a link to this GitHub issue for people to check performance benchmarks.
@EisenbergEffect I can see benefits to both approaches; asking about HTTP/2 in the CLI menu, under the custom choice, will prompt users to think about this. This is good because an app would ideally need to be designed with HTTP/2 in mind from the start to take the most advantage of it. But as @huochunpeng stated, it does require more upfront knowledge and learning to answer the CLI questions. Just adding the extra config as a commented-out block also removes the need to update the CLI menu to implement it. As a best of both methods, we could update the CLI menu adding a new option for HTTP/2, and if the HTTP/2 option is not chosen, the generated webpack config could contain a comment about HTTP/2 and a link to the webpack config in the CLI source code.
Btw, I have updated my test repo: it now contains a more generic webpack config. The new config contains a "vendorSplit" cacheGroup, which works by using a function for the name property of the cacheGroup. This results in each node module being split individually if its size is bigger than the specified minSize (20000 bytes).
Any node modules smaller than 20k are then gathered up into a single "aurelia" chunk, and any non-aurelia modules are gathered up into a "vendors" chunk. With this config it is not necessary to have separate entries for bluebird and jquery in my test repo. It still needs the separate entry for bootstrap, as that is responsible for separating the compiled css emitted by sass-loader. I have left the separate entries for bluebird and jquery in the config for now, as it affects the order of the scripts injected into index.html and I am not yet sure if this matters. I suspect not, as I don't think the webpack runtime executes the entry chunk until it has downloaded all of the initial chunks.
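The approach can be sketched like this (a rough sketch of the idea, not the exact config in the test repo; the group names and priorities are illustrative, and webpack's exact fall-through behaviour between cacheGroups depends on priority and per-group size checks):

```javascript
// Generic vendor splitting (sketch): a name function derives a chunk name per
// npm package, so each sufficiently large node module gets its own chunk;
// lower-priority groups with enforce gather up the small remainder.
const splitChunks = {
  chunks: 'all',
  minSize: 20000,   // packages smaller than ~20 KB are not split individually
  cacheGroups: {
    vendorSplit: {
      test: /[\\/]node_modules[\\/]/,
      priority: 30,
      name(module) {
        // e.g. '/project/node_modules/jquery/dist/jquery.js' -> 'vendor.jquery'
        const pkg = module.context.match(/[\\/]node_modules[\\/]([^\\/]+)/)[1];
        return `vendor.${pkg}`;
      }
    },
    aurelia: {
      test: /[\\/]node_modules[\\/]aurelia-/,
      name: 'aurelia',
      priority: 20,
      enforce: true   // gather small aurelia-* modules regardless of size
    },
    vendors: {
      test: /[\\/]node_modules[\\/]/,
      name: 'vendors',
      priority: 10,
      enforce: true   // everything else that was too small to split individually
    }
  }
};

module.exports = { optimization: { splitChunks } };
```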
@chrisckc I like your idea. So, if custom is selected and Webpack, we can ask about HTTP/2. If they choose no, we include the link to GitHub. We can also include the link if they take the default setup. Would you be interested in submitting a PR to make these changes? Perhaps @huochunpeng could help guide you as to where/how to make them.
No problem. First we have to agree on how many chunks should be created for the non-HTTP/2 config and how many for the HTTP/2 config. We also need to decide if we are adding the following config, which enables long term caching to work properly; this would be added to both configs.
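The config in question wasn't shown above; a common webpack v4 recipe for stable long term caching (an assumption on my part, not necessarily the exact option set being discussed) is:

```javascript
// Long term caching (sketch): hashed module ids keep chunk hashes stable when
// unrelated modules are added or removed, and a separate runtime chunk keeps
// the frequently-changing chunk manifest out of the vendor bundles.
const optimization = {
  moduleIds: 'hashed',     // webpack v4: content-hash based ids instead of numeric order
  runtimeChunk: 'single'
};

module.exports = { optimization };
```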
HTTP/1.1
HTTP/2

MaxSize issues: The maxSize option seems to limit the size of a collection of node modules by splitting it up into smaller chunks, each containing fewer modules. In the above example, "vendors.{hash}.chunk.js" would be split as follows. However, it is unable to split a single large module; it seems to try, as it results in a tiny, almost empty chunk being created for a single large module. I tried applying maxSize to individual cacheGroups instead of at the global level, but this caused the webpack build to hang, which looks like the result of a loop. I don't think maxSize is production ready yet; it was only introduced quite recently in webpack and I seem to have found 2 bugs.
I logged an issue for the maxSize problem over at the webpack repo:
Seems to make sense to go ahead and add the long-term cache for both, and keep the 6 chunks for HTTP/1.1. We can go with your recommendation for HTTP/2 as well. It seems good to me. Should we do this now or wait on the webpack issue to be resolved?
The webpack issue has already been started on, and they seem to have regular releases, so I think it would be worth waiting so that we can use the maxSize option. The package.json will need to specify the minimum webpack version.
Documentation also requires updating; I just commented on this:
On the documentation: we should be getting a fix for that in soon. @chrisckc I saw that there's some discussion ongoing for the webpack issue you raised. Do you just want to ping us back here when that's resolved and you think we should move this forward? We're flexible. You're doing some great work here, and you're more knowledgeable about Webpack than I am, so I'm happy to defer to you for what you think is the best option here.
The webpack issue: webpack/webpack#8407 has now been fixed and released in Webpack v4.26.1. I will work on suitable configs later this week with a view to creating a pull request.
Excellent!
I have updated my test repo again; there is now a separate webpack config for HTTP/2. The normal HTTP/1.1 config creates 8 vendor chunks by using the default minSize: 30000 and splitChunks.maxSize: 200000. The HTTP/2 config creates 21 chunks by using minSize: 10000 and maxSize: 40000, along with a more aggressive splitChunks config. The HTTP/2 config would create more chunks if some of the larger modules could be split up. The largest modules are: This happens when modules are either implemented as a single module or are pre-bundled, so they can't be split by webpack. Note: The updated configs would require a minor update to the CLI generated index.cshtml that was updated in this pull request: #987, that is, until the dotnet template is updated to use the SpaProxy method. There still remains an issue with Webpack in relation to empty chunks being created when using mini-css-extract-plugin: webpack/webpack#7300, but it has been ongoing for some time and it looks like it will only be fixed in Webpack v5.
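Putting the numbers quoted above into config form (a sketch; only the size-related options are shown, and webpack v4.26.1+ is assumed for working maxSize):

```javascript
// The two splitChunks variants described above (sketch):
// HTTP/1.1 favours fewer, larger chunks; HTTP/2 tolerates many small ones.
const http1SplitChunks = { chunks: 'all', minSize: 30000, maxSize: 200000 };
const http2SplitChunks = { chunks: 'all', minSize: 10000, maxSize: 40000 };

module.exports = { http1SplitChunks, http2SplitChunks };
```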
@chrisckc Thanks for your continued work on this, and esp. all the measuring. It seems like we're ready for a PR, if you'd like to put one together. Do you need some assistance on where to put what or how to proceed?
@EisenbergEffect After a quick look, I can see that the menu structure is driven by "lib/commands/new/new-application.json", so it looks like I need to add a new question and state config in there and reference it in "lib/resources/content/webpack.config.template.js". The new option should probably be specified right after the "loaderBundler" question, such that if Webpack is specified, a new "activity" asks which HTTP protocol to optimise the Webpack bundle splitting config for, either HTTP/1.1 or HTTP/2, and then assigns a state to be used later. In webpack.config.template.js I could then use this to select the alternative config:
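The template switch could look something like this (hypothetical: the feature-flag name and the CLI's exact preprocessor syntax would need checking against the existing templates):

```javascript
// webpack.config.template.js (sketch): the CLI preprocessor keeps only the
// branch matching the answer to the new HTTP protocol question.
// @if feat.http2
const splitChunks = { chunks: 'all', minSize: 10000, maxSize: 40000 };
// @endif
// @if !feat.http2
const splitChunks = { chunks: 'all', minSize: 30000, maxSize: 200000 };
// @endif
```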
BTW, I have published a blog post which explains the webpack optimisations covered here and in my test repo:
@chrisckc Sounds like you've got a good understanding of what's needed. Also, if you'd like to cross-post to the official Aurelia blog or do a guest blog post after we get this all merged and released, we'd love to have you.
@EisenbergEffect Creating a PR today. I will also modify the package.json to resolve this issue: Are you ok with me updating karma-webpack to an RC version to fix that issue? And also adding:
Improvement of the generated webpack config in relation to bundle splitting, long term cacheability and other aspects. Added a new question for the HTTP protocol to be used in the webpack optimisation config. The new question is only accessible when following the 'custom' path. Choices are HTTP/1.1 or HTTP/2. The default is HTTP/1.1 when the question is not presented. PR for feature: aurelia#969
I left the fix for 996 out of the above PR and created a separate PR for it.
Excellent work! I have always wondered what the load stats would look like.
@chrisckc pls close this issue if it's considered fixed, thx. |
I'm submitting a feature request
Improvement of the generated webpack config in relation to bundle splitting, long term cacheability and other aspects, in line with the following document:
https://developers.google.com/web/fundamentals/performance/webpack/
Webpack (v4)
Current behavior:
Webpack config is not optimised
What is the expected behavior?
Webpack config should have some basic level of optimisation and documentation should cover more advanced optimisations.
What is the motivation / use case for changing the behavior?
To provide an improved out of the box development experience.
Following on from this: #958, I have learned more about webpack optimisations and tested some configs against different code, both Aurelia and non-Aurelia.
Chunk splitting seems to be one of the most important aspects of optimisation, provided that we are already minifying both js and css.
The CLI generated webpack config does js minification in production mode, as it is enabled by default in webpack v4, but it does not perform CSS minification without an additional plugin.
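A common way to add CSS minification in webpack v4 (a sketch using the optimize-css-assets-webpack-plugin package; note that overriding optimization.minimizer also requires re-adding the JS minimizer):

```javascript
// Production CSS minification (sketch): webpack v4 minifies JS by default,
// but CSS extracted by mini-css-extract-plugin needs an explicit minimizer.
const TerserPlugin = require('terser-webpack-plugin');
const OptimizeCSSAssetsPlugin = require('optimize-css-assets-webpack-plugin');

module.exports = {
  optimization: {
    minimizer: [
      new TerserPlugin(),            // re-add JS minification, lost when minimizer is overridden
      new OptimizeCSSAssetsPlugin()  // runs cssnano over the extracted css assets
    ]
  }
};
```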
I will talk about chunk splitting first with some test results.