What's the benefit of avoiding the debugger?
I've included the entire answer below as I've noticed that stackexchange and stackoverflow responses (and links to them) have been breaking as of late, especially when you link to a specific answer.
I am one of those who is rather decent at programming (mind you, I started programming at six), but I have never really, earnestly, used a debugger. I have had many senior-level, old-hat devs harp on me for never using the debugger, some who are astonished at the speed of development and problem solving without one (logs! println! tests... although I'm still not good at writing those...), and their incredible irritation when a solution would be patently obvious w/ a debugger but I've decided to go down a far different Mario pipe to discover a solution.
Recently (today), I decided to finally look into them after two decades of programming, and found this response that just nailed why I have always tried them for a short time, given up, and gone back to my style of 'debugging in the mind'. Figured others would enjoy this answer.
Still going to try to use them, just because it would be beneficial to at least know how to use one properly in a codebase I am a stranger to.
I don't love the answer or the theory.
I've been supporting my own code for about 7ish years at my current company, and it taught me a LOT about best practices and using technology correctly. The vast majority of work that proper practices try to eliminate isn't development but maintenance. How much are you going to hate yourself when you look at this in 2 years and don't remember what you were thinking, or how easily can you hand it to a total stranger?
A major reason I became such a huge proponent of languages like F# (a functional language, but still willing to let you do OOP/side effects, unlike, say, Haskell) is that domain-driven design makes 90% of bugs flat-out impossible. The code cannot compile if you don't handle everything, and the compiler is vastly better than I am at checking every possible end point of my functions and making sure I'm handling all cases. I feel like this is very similar to your mental debugging, but the nice thing about it is that, since it's part of the code, I can hand the base off to someone else and they get the same insights/power without having to have a similar thought process.
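To make that concrete, a purely illustrative sketch (the domain and names here are made up, not from any real project):

```fsharp
// Illustrative only: model the legal states explicitly, and the compiler
// insists every one of them is handled wherever they're consumed.
type PaymentState =
    | Pending
    | Authorized of amount: decimal
    | Declined of reason: string

let describe state =
    match state with
    | Pending -> "waiting on the processor"
    | Authorized amount -> sprintf "authorized for %M" amount
    | Declined reason -> sprintf "declined: %s" reason
    // Dropping any case above triggers an incomplete-match warning
    // (an error with warnings-as-errors on), so "forgot a state" bugs don't ship.
```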
Likewise, for the data that can't be caught there (IO), debuggers are wildly important. I deal in data that's often in the hundreds of millions, if not billions, of records. Print statements are functionally useless at that volume, except for very specific kinds of problems which likely should have been caught by proper design in the first place.
There is a middle ground where, yes, during early testing I might do some cheeky print statements, but that's also because it's trivial to throw code into the F# REPL from your code base or an .fsx script, at which point, again, I'm probably going to be better served by using more proper practices.
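For example, something as quick as this scratch script (purely illustrative, the names are made up):

```fsharp
// exploratory.fsx - run with `dotnet fsi exploratory.fsx`, or send lines
// straight to F# Interactive from the editor.
let parseRecord (line: string) =
    match line.Split(',') with
    | [| recId; amount |] -> Some (recId, amount)
    | _ -> None

// Quick interactive check, instead of sprinkling printfn through the code base:
[ "42,19.99"; "garbage" ]
|> List.map parseRecord
|> printfn "%A"
```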
In short I think my theory boils down to:
Yes, this is where my increased need for looking into debugging has come into play: as the days go on, more and more people are modifying and working on my code after me. A debugger has been beneficial when going into larger projects - I just never really had a need once I'm in the codebase, and so never learned a deeper 'debugger' mindset.
This is exactly how I code. I write in a rather functional way and have never been a fan of OOP in the slightest; I have always bent even my OOP-required projects into a more modular or procedural shape. I spent a lot of time in Erlang/Elixir, Haskell, and Fortran, and without realizing it I always forced my C code (my first love) to be modular and more functional. I briefly had a fling with F#, but that was before the open-sourcing of the .NET ecosystem, so it has been a few years.
I have a strict standard of "if it compiles, it should always run." In my industry a failure could mean catastrophic physical failure, so I focus on making things as safe as you possibly can: proper allocations, minimal up-front designations, strict performance testing and regulation (this is one of the many reasons I really dislike GCs), and ensuring each piece in play is as small and singular in focus as possible.
I currently write nearly all my new projects in Odin (especially as I run a robotics and security company), which, while a newer systems language, is procedural - very similar in many ways to functional programming, but not exactly. Before settling on Odin, I spent a great deal of time as an Ada developer: safe languages that can drop down to the system level and lean more functional.
My quip about print statements was mostly a joke: while useful, they're incredibly annoying. Certainly useful at the start, horrific once anything is in production and under ongoing development - they create more problems than they solve. Do you know how many buffered IO writes I have seen (or made) end up blocked? Even when you disable them? Some of the boards I work on can't even handle writing anything (even if it's non-existent! or blocked!) to any sort of interface without cascading failures.
Yeah, I realized after I saw your LISP topic that I'd misunderstood your approach and we're basically on the same page. Your reference to competitive coding threw me off, as I've met some people who were into that but not really great at practical coding, because they cared more about developing quickly in the moment than in the long term. Obviously that's not what you were going for.
Odin's actually been on my docket to look into for a while as a natural progression from F#, but god, I don't want to give up whitespace, as it keeps the files so clean and readable, and I despise seeing curly braces taking up space.
I'm not a huge fan of strictly functional programming, but I totally get the "if it compiles, it runs" mindset. It's actually one of the reasons why I still like writing code in Java vs Python or JavaScript - or really just about any dynamically typed language, for that matter. Java doesn't catch everything, of course, but all the boilerplate exists for a reason, and it's to help you, the programmer! That, plus it has perhaps the single best error-catching system I've seen in any language, in the form of its extremely robust and flexible exceptions - something I feel is often imitated but never really bested.
I should probably pick up Rust again and try to build something substantial this time around.
I haven’t used Java, but I think typescript with ‘any’ blocked is pretty similar. If there is a type I haven’t covered in the logic, I get squiggles and TS doesn’t compile. But TS error handling is pretty terrible.
From what you describe, you might also like Swift. It’s a modern language, strongly typed, and requires that you annotate functions that can throw, and requires that errors be handled or at least acknowledged in some way. And, no, it is not exclusive to Apple platforms.
TypeScript is basically a must-have when I'm working in JS (which, for my current situation, is basically never). But one thing I sorely miss in other languages is the concept of a checked exception. Basically, if it's possible or likely that a method will cause an error, you can use the throws keyword to force the calling method to either catch or re-declare the specific type of exception it could cause.
I've tried Swift. While it has a lot of features I think are clever and useful, I never really got the knack for it.
Honestly, I find that somewhat surprising to read. Checked exceptions are probably the #1 thing people complain about in Java from a language design perspective. In fact, it's gotten to the point where modern Java has developed many ways to avoid checked exceptions, and any existence of checked exceptions just fundamentally breaks modern Java language primitives (for example, you can't have checked exceptions in lambdas).
It's generally considered one of the worst parts of the language, like C making strings null-terminated.
Java is a very opinionated language. If you can agree with it, it's your best friend. If not, you're going to have problems.
I'm also lucky in that, in my career, I haven't had to deal with other people's shitty legacy code: the few times I've had to work with others, I've been in charge. I can understand why people might not like Java's exception system. The flexibility and extensibility can make it into a double-edged sword, and I can see poorly implemented exceptions being a real pain to deal with.
I wouldn't really say it's just about being opinionated or not. Like I said, even Java doesn't agree with checked exceptions anymore. Much of the language's evolution since Java 8 has been about streams and lambdas, and those just won't compile if you call a method with a checked exception inside them.
This implicitly nudges all future Java code away from checked exceptions, since otherwise, if you have a framework or library with checked exceptions, it can't be used with significant parts of Java's standard library anymore.
The fundamental issue is that checked exceptions aren't exceptions - they're really information that should be encoded in the return type. And indeed, that's how modern languages do it. In Rust or Swift you won't find checked exceptions, because the replacement is some form of Optional<> or Result<> type. For any case where certain failures are expected (like file I/O where the filepath does not point to a file), it should be encoded in the return type itself. If you really want to get deep into language specifics, this is in a way treating them as a monad.
Encoding it as part of the exception system caused many issues. It made the syntax more verbose. It conflated two logical paths that shouldn't be mixed (true exceptions and expected failures). And it causes many issues with control inversion in the type system (which is why lambdas can't have checked exceptions).
Today, Java also has Optional (see: https://docs.oracle.com/javase/8/docs/api/java/util/Optional.html), plus Result-style types via libraries, and it's generally considered best practice to use them and avoid checked exceptions, even in the language of origin.
I feel like this is try/catch with a match expression in modern languages? F# syntax would be roughly
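(sketching from memory here; the file-reading example and the exception types are just illustrative)

```fsharp
// Illustrative: handle the failures you expect, like match cases.
let readConfig (path: string) =
    try
        System.IO.File.ReadAllText(path)
    with
    | :? System.IO.FileNotFoundException -> ""         // an expected failure, handled in place
    | :? System.UnauthorizedAccessException -> ""
    | _ -> reraise ()                                   // anything unexpected keeps bubbling up
```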
Which you'd often wrap in a Result type of Ok/Error.
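Still just a sketch, but the wrapped version would look something like:

```fsharp
// Same illustrative example, with the expected failures encoded in the return type.
let readConfigSafe (path: string) : Result<string, string> =
    try
        Ok (System.IO.File.ReadAllText(path))
    with
    | :? System.IO.FileNotFoundException -> Error "config file not found"
    | :? System.UnauthorizedAccessException -> Error "no permission to read config"

// Callers are then pushed (by exhaustiveness checking) to deal with both outcomes:
match readConfigSafe "app.config" with
| Ok text -> printfn "loaded %d chars" text.Length
| Error msg -> eprintfn "could not load config: %s" msg
```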
I might just be misunderstanding.
No, what I mean is that if I call a method that has a checked exception, you, the programmer, must manually handle that exception or Java will refuse to compile your code. It's an imperfect tool, though: as long as you catch that exception, the code will compile regardless of whether you actually handle it.
That’s what the match case would force though?
But yeah I’m clearly not getting something. I’ll look into it myself and see where my mental gap is
There's likely a greater chance that I'm not understanding what you meant instead. I'll freely admit that I know very little about F# or other languages that are strongly focused on functional programming.
Scala has pattern matching that looks to be the same as your F# example. At least in Scala, a non-exhaustive match will compile and run; anything not matched in the catch will continue bubbling up, and you can choose not to catch anything. I think Akir is saying that in TS you can specify "this function throws X, Y, Z and you MUST catch it", and then if you don't, or your match is not exhaustive, it will fail to compile at all.
I guess what makes that weird to me is that if you KNOW the function throws X, Y, Z...then why wouldn't you write the match to include those right then and there?
What I'm seeing here is:
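(everything below is hypothetical - XError/YError/ZError and doTheThing are stand-ins I made up, and F# has no "throws" annotation)

```fsharp
// Hypothetical stand-ins for "a function that can throw x, y, z":
exception XError of string
exception YError of string
exception ZError of string

let doTheThing (input: string) : int =
    if input = "" then raise (XError "empty input") else input.Length

// The checked-exception idea, as I read it: riskyCall would be annotated as
// "throws x, y, z", and a caller like this never handles any of them...
let riskyCall (input: string) : int =
    doTheThing input
```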
so this would not compile, because I'm not explicitly handling x, y, z in your example.
But I don't see a world where I put the type annotation on that function and don't write the with statement right then, like this?
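(continuing the same hypothetical sketch from above)

```fsharp
// Continuing the hypothetical sketch: write the with statement right there,
// so every case the call can throw is handled where it's used.
let riskyCallHandled (input: string) : Result<int, string> =
    try
        Ok (doTheThing input)
    with
    | XError msg -> Error ("x failed: " + msg)
    | YError msg -> Error ("y failed: " + msg)
    | ZError msg -> Error ("z failed: " + msg)
```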
I don't understand how having some annotation saying "the below function must catch x/y/z" would do much, because... well, if you know you need x/y/z, you'd type it like so, and if you're dealing with someone who might delete those cases... well, I feel like they're about as likely to delete some annotation?
To be clear, it's pretty hard to not do an exhaustive match in F#, as the compiler will throw a warning, and you can put it in strict mode and say ALL matches must be exhaustive (which I do).
Obviously, when you're matching on a string or something, you always throw in the "wildcard" case at the end, which is something like
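(roughly - this is the tail end of a match or with block, and the messages are just examples)

```fsharp
    // the "else" case, when you don't care what the remaining value was:
    | _ -> Error "unhandled case"
    // or, when you do want the exception itself in hand:
    | ex -> Error (sprintf "unexpected failure: %s" ex.Message)
```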
To be clear, these always go at the end of the statement, and thus are essentially the "else" case, catching all remaining cases in situations where the state space is unreasonable to enumerate. The _ example is where you just don't care what it returned or which exception it was; the ex form is just a variable binding for when you do care (as you often should).
I'm much the same way, but not being good at using a debugger can be a real setback when you work with complex systems. Fixing bugs in a video game, for instance, would be pretty much impossible. But one of the things I try to teach my students as an important skill is simply running the code in their brain to find out why it's not working. Only about a quarter actually "get" it, though.
I think there is a lot of value in building tools that help you build tools. Debuggers are good but sometimes they are way, way, way too generic. Some problems are better solved with Construction Sets.
See the Direct Manipulation craze of the 1980s for more examples; or Bret Victor:
But some problems are efficiently solved with a generic debugger. And some problems can be solved by logging and aggregating statistics over logs. It's best to learn the basics of each strategy and learn to reach for the right tool when the shape of the problem is different.
One thing that a lot of systems are missing is introspection and observability. Just because there is no error does not guarantee a system is doing the right thing. Debuggers are not a great fit here, and often logs aren't either - but well-designed custom tools can bubble up the same kind of information that debuggers can, so that would be one reason to avoid generic debuggers.
I haven't used a debugger since I started using vim to write C# (probably 15 years or so), with the primary driver being that getting netcoredbg (or whatever the Xamarin/Mono equivalent to that was back then, if there even was one) to work with vim was a nightmarish hassle, and I didn't want to deal with it. Prior to the switch I used the debugger in Visual Studio a fair amount, so it took a little while to get used to troubleshooting without it, but I feel like all this experience thinking through the code and identifying possible causes and then testing those manually (usually by adding additional debug logging) has made me a far better programmer than I'd be otherwise.
Working this way makes debugging feel a lot more like a "science," in the sense that you have to come up with theories and then figure out the most efficient way to prove or disprove them. I think working on and improving that alternate way of approaching problems (what the OP calls "debugging in your mind"), instead of always relying on a debugger to do it for you, is a skill that has much wider benefits across the whole sphere of programming - not just in debugging.
I will be honest: as a person who works with legacy systems that have existed, and are still under active development, for 20+ and 15+ years, I kinda don't understand what's wrong with using a debugger.
Logging, debugging, printing to the console, mind debugging, reading code - these are just tools, and sometimes they're helpful, sometimes not. In some situations debugging doesn't make sense; in others it makes perfect sense. Debugging doesn't make sense if you already have a general understanding of why and what happens at this point in the code. But if the code is complex and unfamiliar, and there are a lot of variables coming from different points in the code that define its behaviour, and from the outside it's not clear what exactly happens here, then a debugger will definitely help you see the execution flow. Upd: also, debugging doesn't make much sense in multithreading environments, of course.
One reason I haven't used debuggers all that much is that in a new programming environment, it often takes time to learn how to set them up and use them, and debugging with print statements or by improving logging is often easier. I used them the most when I was a Java programmer, and sometimes in Dart or when using Chrome Devtools to debug a website.
That excuse goes away now that we have coding agents. Even if you don't know how to set up and use the debugger, your coding agent probably does, and they are fairly good at debugging, or so I hear.
So far I haven't seen a coding agent try to use a debugger, though; instead it runs little programs from the command line to test things using 'deno eval.' It will also connect to SQLite and execute SQL queries to learn what's in the database. And that seems good enough.
I believe it's just a matter of asking it to use a debugger, though? I haven't asked yet.
As a meta comment on step-through debuggers: they are a tool, and if you're a professional software developer, a key aspect of your job is to accomplish it as efficiently as possible. 90% of the time that means using whatever you're most comfortable with, but 10% of the time it can be more effective to take a different approach.
(if you're a hobbyist, just have fun, and do whatever piques your interest)
Also bear in mind that this conflates very different parts of one's day to day, such as:
all of which combine with your skillset, domain, and existing integrated toolsets to indicate which specific tool is best suited for the job. For example: a new coworker once complained that we relied on logging in our servers, instead of simply allowing developers to place breakpoints into live systems.
That's absurd, of course, but they'd been doing it at their old gig for long enough that alternatives to that process had never occurred to them. The rest of the world uses logs or fancy tracing suites, but for them, that was the path of least resistance. Similarly, when working on embedded devices, you might not even have access to the source code you're debugging, let alone a step-through debugger, so it might be faster to whip out an oscilloscope than to prod at your vendor's proprietary library w/ GDB.
As a final note: there are some sibling threads talking about how, in their preferred language, programs either run or fail to compile. That discounts logical errors, which (ime) are the majority of bugs in the modern era, and which can't always be lifted into the type system -- and even then, your encoding of the problem into types could have a logical error as well! Even the most advanced type system isn't a silver bullet -- you could write your entire spec in Rocq, extract it out, and still have "bugs" because you didn't understand your customer's requirements correctly (or you forgot to put in a liveness constraint, etc.). Imo, the goal should be to find the right tool, for the right job, and the right people.
As someone who avoided debuggers for a long time - none.
Once in a blue moon a sys out or using my wits (divide and conquer, etc.) will do something for me that a debugger can't, BUT with a lot more work.
I'm not against debuggers in principle, but I think they can be a bit of a crutch.
The things that I like about printf/logging-style debugging are that:
Basically, I think knowing how to debug without a debugger is a useful skill - essentially a lowest common denominator sort of thing.
That said, I do really like debuggers for an initial triage of crashes - for loading up a program and seeing where it segfaults or panics and then walking up the call stack and inspecting the state to determine how the program got there. I think there's value in both.
One thing here is that there are a ton of different ways to use the debugger. If you just want to ask "where exactly did it crash?", maybe your system tells you that, or maybe you need a debugger to know. And then there's what counts as a debugger. Is valgrind a debugger? Are the various -fsanitize options?
Overall, I do agree broadly that in all things to do with coding, you need to engage your brain in the process. You need to be thinking. If using the debugger is a substitute for thinking, it's a poor one. If it can turn up some evidence that helps that thinking, then that's great.
Sometimes you can't debug because your app needs to run in a deployed environment. I have had success remote debugging using Telepresence to inject a container from my instance into a Kubernetes deployment, but it was A LOT of work to set up and maintain. And if the bug is something that cuts across several services (or is a timing bug), that still might not be enough. In this case, good logging can be a solution IF your problem is reproducible, so that you can steadily add more logging in the relevant areas until you figure it out.
What I love a debugger for is troubleshooting failing unit tests. The execution path is usually pretty short, and it's not valuable to add a bunch of print statements to a test.
To go back to the first scenario, if you can use the log data to reproduce the bug in a unit test, then it's great to debug it.
It also depends on the language and tooling available to you:
I never learned any debugger while I was taking university classes in Java. There are a couple of bigger reasons why it makes sense to learn Java at uni:
- it was extremely influential, which means that what you learn in it will transfer fairly easily
- it is object-oriented to the core, which means that when they start covering OOP concepts, students will have already been instantiating objects and using methods
- it's low-level enough that a student can learn about how a computer stores data, without having to deal with memory management; additionally, static typing helps to illuminate errors that can feel opaque to students starting out
But it all depends on your teaching philosophy. The main reason why colleges might teach programming with Python or JS is that it allows a student to produce working code first: a "top-down" approach, where students learn the lower-level concepts only after covering the higher-level ones.
I still have a Knuth-inspired belief that a "bottom-up" approach is better - that is to say, teach the low-level stuff first and build up. There are some very good reasons why few institutions do this: it's not the most practical approach, since the low-level stuff isn't terribly marketable for the majority of the tech job marketplace, and it's also a pretty good way to make dropping out an appealing option.
Oh hey, I'm one of those that dropped out because they taught Java and C first! I came back after self-studying Ruby; everything started clicking once I started thinking about logic rather than implementation details. The school did try building blocks first, but when I came back, I saw through the veil and understood that they were really bad at it, thus laying a poor foundation. I'd guess at least 2/3 of all professors are not very good at teaching programming.
It's worth noting that Python didn't really start coming into its own until the mid-2000s, and JavaScript, and web development in general, was a far cry from where it is today. Most of it was server-side processing with form submission a la CGI / ASP / PHP.
I agree with your points in the main. Java was very much a sought-after skill set and "the new hotness" at the time.
In 2003 (when I graduated from undergrad), I would have agreed that the bottom up approach was better. My undergraduate degree is actually in electrical engineering, which is taking that to an extreme :).
I'm not sure I would agree now. From an "old man yelling at clouds" point of view, yes, having touched asm and done memory management is really useful. But lots of things are useful, and the landscape has really changed since then.
I'm pretty out of touch with CS/ECE curricula these days, so I'd be curious to hear from newer grads what your learning experience was like and how it translated into the working world.
My experience is extremely nonstandard. I'm not working in tech right now; I'm doing a part-time teaching job with young kids. I've always been doing my own thing with a small business, so I had somewhat more choice in how I would solve problems than others would have. I also got hired before I had any degree beyond high school, and my university classes have all been after the fact. Most recently I had to take another "advanced Java" class (credit didn't transfer between schools), which did not cover debugging at all, but I couldn't tell you if it was covered in any of the lower-level classes.
In any case, I would say that for more than 90% of the kids I am teaching, the top-down method is necessary. They are literally kids, and for some of them having something that works, even if they do not necessarily understand how it works, is the single thing making them come to class. But those kids are also not training for reality; they are doing it for the romanticized ideal of programming they and their parents have for it. Nobody is expected to have a job after taking our classes; at the most they will just have an easier time when they get to college.
One pattern that is becoming increasingly apparent is that when we exhaust our Python curricula, students tend to take Java classes, and those who do have something of a dice roll for how well they will get it. Generally, if they are students I have taught Python to, they do extremely well and we blast through it, which I credit to my approach of trying to expose some of the inner workings that Python hides from the programmer.
Interesting! Can you tell me what curriculum you use? By coincidence, today I am setting up a python environment for my daughter (10yo).
My current plan is to do a hill climb on writing a terminal-based madlib program. For example, naively write every prompt and the whole output. Then introduce functions to let the user choose a madlib. Then get into saving the output each time. Then get into data structures and loops to make defining the madlib easier.
It’s proprietary so I can’t share it with you, I’m afraid. But it’s also very simple, so honestly you could probably do better just going on with what you’re doing. I actually worked with a student making a mad libs program earlier this week, and another first-term student is making an ELIZA style chatbot for their final project.
Thanks!
I came into programming from outside CSE, so I may also be the wrong person to opine here, but imo the problem with a bottom-up approach is that the first step to learning how to program is learning a certain way of thinking about how to go from "I want to do this thing" to "here are instructions that will make the computer do the thing". Most people who are already knowledgeable and coding for fun or work don't really remember what it was like to not have this instinct, but it is something that needs to be taught to most people (and something that becomes more sophisticated as one gains experience coding). Learning this thought process is the thing that's actually beneficial about introducing kids to things like Scratch or games like Human Resource Machine, and I think throwing someone into low-level programming without being sure they have learned a baseline level of this type of thinking is going to be deeply frustrating for them.