I like the ideas in this piece, but not the way they were presented. It was way too light on details. For example, they mention testing your assumptions but don't bother to describe what an assertion is or how pre-conditions work in some languages. Why not? Those are the standard ways to test your assumptions about your code. I mean, you can log stuff or stop on a breakpoint, but in a function that's called a million times a second, that's not going to be helpful.
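To make that concrete, here's roughly what I mean by an assertion. A minimal sketch in Python; the function and the invariant are made up for illustration:

```python
def advance(position: float, velocity: float, dt: float) -> float:
    # Pre-condition: encode the assumption instead of logging it.
    # Unlike a log line or a breakpoint, this stays silent until the
    # assumption is actually violated, so it's usable in a function
    # that gets called a million times a second.
    assert dt > 0.0, f"expected a positive timestep, got {dt}"
    return position + velocity * dt
```

And in Python, running with `python -O` strips asserts entirely (same idea as `NDEBUG` in C), so they cost nothing in production. That's exactly the kind of detail the piece should have covered.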
I also don't think they made the distinction clear between what they call "the standard strategy" and their better method. The standard strategy of "think about where the problem is likely to be, then change the code and see if that fixes it" seems pretty similar to "think about what your assumptions are, then log them and see if they're correct." The only difference is whether you're logging or making a change to test the outcome; either way, you're doing a kind of investigation into what went wrong. And in what the author calls "investigation mode" you hit the exact same wall: you test your assumptions, and if they're all correct, you're in the same position as the person whose change failed to fix the problem. Whatever you thought the issue was, it wasn't.
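Here's the similarity I'm getting at, as a sketch: the same hypothesis tested both ways. Everything here (the cache, the names, the bug) is invented for illustration:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

cache: dict[str, int] = {"ab": 1}   # pretend this can go stale

def recompute(key: str) -> int:
    return len(key)                 # stand-in for the real computation

# "Standard strategy": change the code and see if the bug goes away.
def lookup_changed(key: str) -> int:
    return recompute(key)           # bypass the suspect cache entirely

# "Investigation mode": leave the code alone and log whether the
# assumption ("the cache agrees with a fresh computation") holds.
def lookup_logged(key: str) -> int:
    cached = cache.get(key, recompute(key))
    fresh = recompute(key)
    if cached != fresh:
        log.debug("assumption violated for %r: cache=%r, fresh=%r",
                  key, cached, fresh)
    return cached

lookup_logged("ab")   # logs: assumption violated for 'ab': cache=1, fresh=2
```

If the log never fires and the bug persists, you're stuck in exactly the same place as the person whose code change didn't help.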
Anyway, I'm grumpy today, I guess. It wasn't a bad piece, but I thought they could have presented something more in-depth.