Tell me about your early experiences with debugging and software QA
Are you an “old timer” in the computer industry? I’m writing a story about the things programmers (and QA people) had to do to test their software. It’s meant to be a nostalgic piece that’ll remind people about old methods — for good or ill.
For example, there was a point where the only way to insert a breakpoint in the code was to insert “printfs” that said “I got to this place in the code!” And all testing was manual testing. Nothing was automated. If you wanted a bug tracking system, you built your own.
So tell me your stories. Tell me what you had to do to test software, way back when, and compare it to today. What tools did you use -- or build? Is there anything you miss? Anything that makes you especially glad that the past is past?
C’mon, you know you wanted a “remember when”!
Oh man, what do you want to know? So many random things come to mind.
I started my career in earnest in the early 90s (though I had done some professional programming before that time). My first job was writing the software for point-of-sale terminals running on MS-DOS. We used Visual C (not Visual Studio, which didn't exist yet) and it was all fairly straightforward. There was a source debugger, and it worked much like today's in that you could step through your code a line at a time and set breakpoints easily by clicking on the line of code where you wanted it to stop. Nothing too drastic. But what we didn't have was all the other tools.
There was no built-in static analyzer. No leak detector. No automatic way to write over memory you freed so you could detect a use-after-free. You had to write those yourself. I read a book called No Bugs, written by a Microsoft engineer, that laid out how to wrap malloc() and free() (and new and delete in C++) so you could do all this stuff by hand.

In your malloc() wrapper, you'd actually allocate more memory than requested, fill it with a bit pattern that would cause a crash if dereferenced as a pointer, and return a pointer a few bytes in. This let you detect buffer overruns. When you went to free the memory, you'd check the first 4 or 8 bytes and the last 4 or 8 bytes to see if they still had the magic pattern in them. If so, you hadn't overrun that buffer. If not, someone somewhere wrote past the beginning or end of an array!

This is all handled automatically today by setting environment variables while debugging. We have things like MallocScribble, MallocGuardEdges, Guard Malloc, etc. (at least on macOS and iOS; I assume Windows has some equivalent). We also have protected memory, so you know immediately when you dereference a NULL pointer. Back then, if it was a read, you just read some bogus data, and your app might or might not crash. If it was a write, it might do nothing, or it might crash the whole machine.

A few years later I got out of the Windows/DOS world and found a paying job doing my passion: writing image processing software for the Mac. At the time it was Mac OS 7, and then shortly after, Mac OS 8. There was a tool called Spotlight (not to be confused with the Spotlight feature in today's macOS). It was some sort of hack that inserted itself into your process (because there was no protected memory) and checked your stack and heap for problems like leaked resources, use-after-free, etc.
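For the curious, here's a minimal sketch of the malloc()/free() wrapper scheme described a couple of paragraphs up, reconstructed from memory rather than taken from the book. The names dbg_malloc and dbg_free, the guard size, and the specific byte patterns are my own choices, not the book's:

/* Each block is laid out as [size header][front guard][user bytes][rear guard].
 * The guards are filled with a magic byte; if either guard has changed by the
 * time the block is freed, something overran the buffer. Freed memory is
 * scribbled over so use-after-free reads return obvious garbage. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define GUARD_SIZE 8
#define GUARD_BYTE 0xFD  /* unlikely to be part of a valid pointer */
#define FREED_BYTE 0xDD  /* marks memory that has been freed */

void *dbg_malloc(size_t size)
{
    /* A production version would pad the header for worst-case alignment;
     * this sketch skips that for brevity. */
    unsigned char *raw = malloc(sizeof(size_t) + 2 * GUARD_SIZE + size);
    if (raw == NULL)
        return NULL;
    memcpy(raw, &size, sizeof(size_t));                  /* remember user size */
    memset(raw + sizeof(size_t), GUARD_BYTE, GUARD_SIZE);                      /* front guard */
    memset(raw + sizeof(size_t) + GUARD_SIZE + size, GUARD_BYTE, GUARD_SIZE);  /* rear guard */
    return raw + sizeof(size_t) + GUARD_SIZE;            /* pointer "a few bytes in" */
}

static int guard_intact(const unsigned char *guard)
{
    for (int i = 0; i < GUARD_SIZE; i++)
        if (guard[i] != GUARD_BYTE)
            return 0;
    return 1;
}

void dbg_free(void *ptr)
{
    if (ptr == NULL)
        return;
    unsigned char *raw = (unsigned char *)ptr - GUARD_SIZE - sizeof(size_t);
    size_t size;
    memcpy(&size, raw, sizeof(size_t));
    if (!guard_intact(raw + sizeof(size_t)))
        fprintf(stderr, "buffer underrun before %p\n", ptr);
    if (!guard_intact(raw + sizeof(size_t) + GUARD_SIZE + size))
        fprintf(stderr, "buffer overrun past %p\n", ptr);
    /* Scribble the whole block before releasing it. */
    memset(raw, FREED_BYTE, sizeof(size_t) + 2 * GUARD_SIZE + size);
    free(raw);
}

In a debug build you'd route every allocation through the checked versions, for example with something like #define malloc(s) dbg_malloc(s) in a common header (one common approach; I don't recall exactly which mechanism the book used). Today, MallocGuardEdges and MallocScribble on macOS give you essentially this behavior without writing a line of it.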
All of these solutions were helpful and made me a better programmer, but they all suffered from one huge problem: most of them found the problem only after it had occurred. So unless it was blindingly obvious what the bad data was, you now knew you had a problem, but you still had to hunt it down by hand. That meant using the source debugger and various bits of logging to see if you could catch it in the act. Don't get me wrong: knowing there's a problem is the first step to fixing it, so it did help enormously. But it was also only half the information you needed!
I do not miss those days at all. Part of it was that I wasn't yet very experienced, but the tools were also pretty awful. I'm very impressed with the tools that are available today. And as mentioned, the OS has protected memory, so you're much more likely to catch memory errors than before. In addition to debugging tools, there are plenty of tools for performance tuning, testing, etc.
You mean to tell me I'm not supposed to be doing this still....??
In times past, you would have a web app daemon running, and it would log to a plain text file. When troubleshooting, you would SSH into the server the daemon was running on and inspect the log file. You could use tail, grep, sed, awk, and all the rest of the classic UNIX suite of text processing tools.

Nowadays, there seems to be this infatuation with third-party services that purport to replace this classic debugging workflow. Either the service parses your logs, or your app hits an API instead of logging to a file. Then you're supposed to use the third-party service's web UI to get supposed "features" that are "better" than the old way. "Quickly search through terabytes of logs!" they say. "Filter lines with our powerful query language!" they say. "Query based on metadata like source host or geo IP!" they say.
Well, [far] more often than not, I am stymied, handcuffed, and otherwise thwarted whenever I try to use these accursed services, and I can't find what I'm looking for. It's infuriating, because I know that if I could just get at the log files the old way, I'd be done with my debugging task in minutes.

True story: one time, I was having no success, so I wrote some new code that would take a string and send it to me in an email. I literally used this code as a replacement for printf (or Rails.logger.debug, or console.log), and read the debugging strings via email (there's a sketch of the idea at the end of this post).

I can't even adequately describe these services as "they don't work". It's worse than that: they make the debugging flow even worse, because they hide or remove the actual plain text files that would otherwise be available. Examples of services I'm talking about are Loggly and Sumo Logic.
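The email hack was only a few lines. Here's a rough C equivalent of the idea (the original was presumably in Ruby or JavaScript, given the logger names above); it assumes a working mail(1) on the host, and the function name and recipient address are made up for illustration:

/* Pipes a debug string into mail(1) so it arrives as an email instead of a
 * log line. Purely illustrative; assumes mail(1) exists and can deliver. */
#include <stdio.h>

void debug_email(const char *msg)
{
    FILE *mail = popen("mail -s 'debug trace' dev@example.com", "w");
    if (mail == NULL)
        return;
    fprintf(mail, "%s\n", msg);  /* the debug string becomes the message body */
    pclose(mail);
}

Crude, slow, and absurd next to grep on a plain text file, which was exactly the point.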