this post was submitted on 05 Sep 2024
13 points (63.3% liked)
Programming
There's no precompiler in C++. There is a preprocessor, but that's something entirely different, and it's typically not a slow part of the compile process.
C++ is getting to the point where modules might work well enough to do something useful with them, and they remove the need for #include preprocessor directives to share code.
OP clearly means "preprocessor", not "precompiler". You're right that preprocessing itself isn't slow, but the header/impl split can actually cause some slowness at build time.
Slow compared to what exactly...?
The worst part about headers is needing to reprocess the whole header from scratch in every translation unit that includes it ... but precompiled headers largely solve that (or just using smaller, more targeted header files).
Even in those cases there's something to be said for the extreme parallelism of a C++ build. You give some of that up with modules in exchange for better code organization. In some cases modules do help build times, but I've heard that in others they hurt (a fair bit of that might just be inexperience with the feature and best practices, plus immature implementations, but alas).
Slow compared to just chucking everything into a single source file, actually: https://github.com/j-jorge/unity-build
That's only true for clean builds, and even then it isn't universally true; and of course there are other reasons not to do unity builds. But the existence of the technique, and the fact that it has historically sped up build times enough for various projects to adopt it, does show that the C++ model of headers and separate compilation units has some inherent inefficiency.
Sure, there's a cost to breaking things up; all multiprocessing and multithreading comes at a cost. That said, in my evaluation, single-file unity builds are garbage; sometimes a few unity files are used to get some multiprocessing back (as the GitHub repo you mentioned references).
They're mostly a way to minimize the number of translation units, so you avoid the "I changed a central header that all my files include and now I need to rebuild the world" problem, where the world consists of many, many small translation units. (This is arguably worse on Windows, where process spawning is more expensive.)
Unity builds as a whole are very, very niche, and you're almost always better off doing a more targeted analysis of where your build (or, often more importantly, your incremental build) is expensive and making appropriate changes. Note that large C++ projects like LLVM, Chromium, etc. do NOT use unity builds, almost certainly because they are not more efficient in any sense.
I'm not even sure how they got started; presumably they were mostly a way to get LTO without LTO. They're absolutely awful for incremental builds.
Yeah, I mean, I tried to be explicit that I wasn't recommending unity builds. I'm just pointing out that OP, while misinformed and misguided in various ways, isn't actually wrong about header files being one source of slowness for C++.