That's an interesting experiment! I too deal with a large C++ codebase that is slow to compile due to templates and other cruft, so I feel the author's pain. I have to wonder, though, whether a precompiled header containing the C++ stuff would solve some of that problem. Obviously, for templates you need to recompile for each type you instantiate, but if he's using the same types in different source files, then it should be a win overall.
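Roughly what I'm picturing, as a minimal sketch using GCC's mechanism (the file names are made up; MSVC and Clang have their own equivalents):

    // pch.h: collect the heavy, rarely-changing standard includes
    #include <vector>
    #include <string>
    #include <unordered_map>
    #include <algorithm>

    $ g++ -std=c++17 -x c++-header pch.h   # emits pch.h.gch
    $ g++ -std=c++17 -c widget.cpp         # picks up pch.h.gch automatically,
                                           # provided widget.cpp starts with
                                           # #include "pch.h" and the flags match

You pay the cost of parsing those headers once instead of once per translation unit.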
I'd also be very hesitant to replace something like std::vector with my own implementation, though, as the extra programming overhead of bounds checking and pointer arithmetic is easy to get wrong and difficult to maintain. It sounds like a better choice would have been std::array, which has a fixed size and doesn't reallocate memory while you're working with it. Just as with the hashing and sorting stuff they did – it was choosing the right tool that made it better, not necessarily moving away from C++.
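To illustrate, assuming the element count really is known up front (just a sketch):

    #include <array>
    #include <cstddef>

    int main() {
        // Fixed size known at compile time: the storage is inline, so there
        // is no heap allocation and no reallocation while you work with it.
        std::array<int, 64> buf{};
        for (std::size_t i = 0; i < buf.size(); ++i)
            buf[i] = static_cast<int>(i);  // unchecked, like a C array
        return buf.at(10);                 // checked, throws std::out_of_range
    }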
Look at the code, though. There are still bounds checks. He uses assert for that.
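Schematically it's the usual pattern, something like this (my sketch, not the actual code from the article):

    #include <cassert>
    #include <cstddef>

    // Hand-rolled container with assert-based bounds checks. The invariant
    // lives in the assert, so it compiles away in release builds and has to
    // be kept correct by hand as the container evolves.
    template <typename T>
    struct Buffer {
        T*          data;
        std::size_t count;

        T& operator[](std::size_t i) {
            assert(i < count && "Buffer index out of range");
            return data[i];
        }
    };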
> I'd also be very hesitant to replace something like std::vector with my own implementation, though, as the extra programming overhead of bounds checking and pointer arithmetic is easy to get wrong and difficult to maintain. (…)

> Look at the code, though. There are still bounds checks. He uses assert for that.
Oh sure, but what I'm saying is that they now have to maintain that themselves. That's a lot of extra work. Any time they update the code for it, they have to make sure the assert is still valid – that it still checks what they originally wanted to check and that it doesn't miss any new edge cases due to the changes. It can be done, it's just extra maintenance that you don't need to do yourself when using a library built by someone else. It may be totally worth it in this case if it gives enough extra performance, but I'd rather put the burden on someone else if I have a choice.