cmake build multiple targets #103
Conversation
I am happy that my experiment won't go to waste! It's great that you were able to adapt it to the needs of the Forte developer(s) and make the CMake configuration more compact. My only suggestion is to consider renaming …
I recently observed the following behavior: Ninja seems to build multiple targets concurrently, while Make cannot (e.g. make will wait for BUILD_CORE to finish, and only then start BUILD_EXE, etc., while Ninja will compile all sources at the same time). As a result, with highly parallel builds … Note: Ninja doesn't seem to tell gcc how many jobs it should use for LTO.
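A hedged side note on that last point (this is not part of the PR, and the target name `b2_exe` is borrowed from the discussion below purely for illustration): GCC accepts an explicit LTO job count via `-flto=<n>`, or `-flto=jobserver` to share GNU Make's job slots. Since Ninja historically provides no jobserver for gcc to query, a build that wants parallel LTO under Ninja can cap the job count explicitly, e.g.:

```cmake
# Hypothetical sketch: cap GCC's LTO parallelism explicitly, since Ninja
# historically exposes no jobserver for gcc to query (with GNU Make one
# could instead pass -flto=jobserver to share Make's job slots).
include(ProcessorCount)
ProcessorCount(NPROC)   # number of logical cores; 0 if detection fails
if(NPROC EQUAL 0)
  set(NPROC 4)          # conservative fallback
endif()
# -flto=<n> lets gcc run up to <n> LTO jobs during the link step.
target_link_options(b2_exe PRIVATE -flto=${NPROC})
```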
It does seem like the speedup (or lack thereof) depends on many factors. Probably the compiler version also has an effect, but I've only tried gcc 12. Out of curiosity, did you do these tests on a very powerful machine?
OK.
So according to your description, Ninja is smarter than Make at building multiple targets concurrently, but Make is smarter than Ninja at ensuring concurrency in LTO. Personally I use Make all the time, so I do not have much to say about Ninja.
So let's say we just use Make. Previously, …
This is on an old machine that I normally use for code development, with gcc 9.2, GNU Make 3.82, and cmake 3.26. … I guess it should be mainly a disk IO problem, because the IO speed of this machine is kind of slow. The precompiled header file is 342 MB, much larger than the source files, so saving and loading precompiled headers can cost additional IO time. For github actions (with gcc 9.4), there is a significant speedup. With …
I see! That is a nice way to do it. It looks like main.cpp also got the same treatment. Then there should be no difference between make and ninja.
Thanks for sharing those test details. I'm surprised that the precompiled header file is so big; GitHub Actions must have pretty fast disk IO!
@chillenb
This PR handles the multiple-target build, based on your doublebuild branch. This is now a necessary feature because another user asked me about the following case. Their code is written in C++, depends on the `block2` shared library (`libblock2.so`), and also has an interface to `dmrgscf` (based on `pyscf`), which uses the Python extension of `block2`. So they need two targets of `block2` for their github actions.

There are some adjustments (856b42e) to address a few new problems due to this change:
1. … the `pylib_obj` intermediate target.
2. … the `cmake` options used). But on github actions it does save a significant amount of time for building wheels. So the default is set to ON.
3. `TARGET_PRECOMPILE_HEADERS(b2_exe REUSE_FROM b2_core)` is avoided, because it would prevent the parallelization introduced in 1, and re-precompiling the headers multiple times in parallel does not cost any extra time.
4. `BUILD_CORE`. If `BUILD_CORE=OFF`, it recovers the previous scheme, namely, every target builds its own instantiation part (for backward compatibility). If `BUILD_CORE=ON` and all other targets are off, this can be useful for syntax checking after modifying the C++ code. The default is `ON`, to save time when building multiple targets.
5. `CMakeLists.txt` is 27 lines shorter after introducing 1, 2, and 3 (for temperance in adding code complexity).

Thanks for your work making this possible, and if you have any additional suggestions, please let me know.
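For readers following along, here is a minimal sketch of the two PCH strategies discussed above. This is not the actual block2 `CMakeLists.txt`; the header path is a placeholder, and only the target/option names come from the discussion:

```cmake
# Approach the PR avoids: reusing b2_core's PCH in b2_exe. REUSE_FROM adds a
# dependency edge, so b2_exe cannot start compiling until b2_core's PCH exists.
#   target_precompile_headers(b2_exe REUSE_FROM b2_core)

# Approach described above: each target precompiles the headers independently,
# so the targets can be built fully in parallel.
target_precompile_headers(b2_core PRIVATE src/pch.hpp)  # hypothetical path
target_precompile_headers(b2_exe  PRIVATE src/pch.hpp)

# Sketch of the BUILD_CORE switch: ON builds the shared instantiation part
# once; OFF recovers the old scheme where every target instantiates its own.
option(BUILD_CORE "Build the shared instantiation target" ON)
```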