I've included the entire answer below as I've noticed that stackexchange and stackoverflow responses (and links to them) have been breaking as of late, especially when you link to a specific answer.
What looks like guessing from the outside often turns out to be what I call "debugging in your mind". In a way, this is similar to grandmasters' ability to play chess without looking at a chess board.
It is by far the most efficient debugging technique I know, because it does not require a debugger at all. Your brain explores multiple code paths at the same time, yielding better turnaround than you could possibly get with a debugger.
I was not conscious of this technique before briefly entering the world of competitive programming, where using a debugger meant losing precious seconds. After about a year of competing, I started using this technique almost exclusively as my initial line of defense, followed by debug logging, with an actual debugger in distant third place. One useful side effect of this practice was that I started adding new bugs at a slower pace, because "debugging in my mind" did not stop as I wrote new code.
Of course this method has its limitations, due mostly to the limitations of one's mind at visualizing multiple paths through the code. I learned to respect these limitations of my mind, turning to a debugger for fixing bugs in more advanced algorithms.
I am one of those who is rather decent at programming (mind you, I started programming at six), but I have never really, earnestly, used a debugger. I have had many senior-level old-hat devs harp on me for never using the debugger, and some who are astonished at the speed of development and problem solving without them (logs! println! tests... although still not good at writing them....), and their incredible irritation when a solution would be patently obvious w/ a debugger but I've decided to go down a far different mario-pipe to discover a solution.
Recently (today), I decided to finally look into them after two decades of programming, and found this response that just nailed why I have always tried them for a short time, given up, and gone back to my style of 'debugging in the mind'. Figured others would enjoy this answer.
Still going to try to use them, just because it would be beneficial to at least know how to use one properly in a codebase I am a stranger to.
I don't love the answer or the theory.
I've been supporting my own code for about 7ish years at my current company, and it taught me a LOT about best practices and using technology correctly. The vast majority of work that proper practices try to eliminate isn't development but maintenance. How much are you going to hate yourself when you look at this in 2 years and don't remember what you were thinking, or how easily can you hand it to a total stranger?
A major reason I became such a huge proponent of languages like F# (a functional language, but still willing to let you do OOP/side effects, unlike say Haskell) is that domain-driven design makes 90% of bugs flat-out impossible. The code cannot compile if you don't handle everything, and the compiler is vastly better at checking every possible end point of my functions and making sure I'm handling all cases. I feel like this is very similar to your mental debugging, but the nice thing about it is since it's part of the code I can hand the base off to someone else and they get the same insights/power without having to have a similar thought process.
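(A rough TypeScript sketch of the idea, for readers who don't know F#; the EmailAddress type here is a hypothetical example, not from the comment. A value of the domain type can only be obtained through its parser, so downstream code never has to re-check it, and the invalid case can't be silently ignored.)

```typescript
// Hypothetical domain type: the private constructor means the only way to get an
// EmailAddress is through parse(), which forces the invalid case into the open.
class EmailAddress {
  private constructor(readonly value: string) {}

  static parse(raw: string): EmailAddress | Error {
    return raw.includes("@")
      ? new EmailAddress(raw.trim().toLowerCase())
      : new Error(`not an email address: ${raw}`);
  }
}

function sendWelcome(to: EmailAddress): void {
  // `to` is known-valid by construction; no defensive re-checking needed here.
  console.log(`sending welcome mail to ${to.value}`);
}

const parsed = EmailAddress.parse("user@example.com");
if (parsed instanceof Error) {
  // The compiler will not let `parsed` be used as an EmailAddress until this
  // branch narrows the error case away.
  console.error(parsed.message);
} else {
  sendWelcome(parsed);
}
```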
Likewise, for the data that can't be caught there (IO), debuggers are wildly important. I deal in data that's often in the hundreds of millions, if not billions, of records. Print statements are functionally useless at that volume, except for very specific kinds of problems which likely should have been caught by proper design in the first place.
There is a middle ground where, yes, during early testing I might do some cheeky print statements, but that's also because it's trivial to throw code from your code base into the F# REPL or an .fsx script, at which point again I'm probably going to be better served by using more proper practices.
In short I think my theory boils down to:
For all business logic, you should ideally have an "if it compiles, it runs" style of coding. Some languages make this easier than others. I haven't had a non-IO runtime error in F# in 99% of my code in years. The only spot it does happen is a small helper library where I used some reflection and didn't know back then how to properly wrap the failure (and really should clean that up).
For IO, you probably need tests and debugging skills if you're planning on dealing with any sort of serious volume and future proofing.
Print statements are a quick and dirty way to solve small mid-development problems (why won't this line compile?), but I don't think they make sense beyond that initial stage.
I feel like this is very similar to your mental debugging, but the nice thing about it is since it's part of the code I can hand the base off to someone else and they get the same insights/power without having to have a similar thought process.
Yes, this is where my increased need to look into debugging has come into play: as the days go on, more and more people are modifying and working on my code after me. Going into larger projects, a debugger has been beneficial - I just never really had a need for one once I am in the codebase, and so never developed a deeper 'debugger' mindset.
[..] For all business logic, you should ideally have an "if it compiles, it runs" style of coding.
This is exactly how I code. I write in a rather functional way, and have never been a fan of OOP in the slightest; I have always bent even my OOP-required projects toward a more modular or procedural style. I spent a lot of time in Erlang/Elixir, Haskell, and Fortran, and without realizing it I always forced my C code (my first love) to be modular and more functional. Briefly had a fling with F#, but that was before the open sourcing of the .NET ecosystem, so it has been a few years.
I have a strict tolerance of "if it compiles, it should always run": in my industry a failure could mean catastrophic physical failure, so I focus on ensuring it's as safe as you possibly can make it - proper allocations, minimal upfront designations, strict performance testing and regulation (this is one of the many reasons I really dislike GCs), and ensuring each piece in play is as small and singular in focus as possible.
I currently write nearly all my new projects (especially as I run a robotics and security company) in Odin, which, while a newer systems language, is procedural - very similar in many ways to functional programming, but not exactly. Before settling on Odin, I spent a great deal of time as an Ada developer: safe languages that can drop down to the system level and are more functional in style.
Print statements are a quick and dirty way
My quip about print statements was mostly a joke: while useful, they're incredibly annoying. Certainly useful at the start, horrific once anything is in production and under ongoing development - they create more problems than they solve. Do you know how many buffered IO writes I have seen (or made) block? Even if you disable them? Some of these boards I work on can't even handle writing anything (even if it's non-existent! or blocked!) to any sort of interface without cascading failures.
Yeah, I realized after I saw your LISP topic that I'd misunderstood your approach and we're basically on the same page. Your reference to competitive coding threw me off, as I've met some people who were into that but not really great at practical coding, because they cared more about developing quickly in the moment than in the long term. Obviously that's not what you were going for.
Odin's actually been on my docket to look into for a while as a natural progression from F#, but god I don't want to give up whitespace, as it keeps the files so clean and readable, and I despise seeing curly braces taking up space.
I’m not a huge fan of strictly functional programming, but I totally get the “if it compiles it runs” mindset. It’s actually one of the reasons why I still like writing code in Java vs Python or JavaScript - or really just about any dynamically typed language, for that matter. Java doesn’t catch everything, of course, but all the boilerplate exists for a reason and it’s to help you, the programmer! That, plus it has perhaps the single best error catching system I’ve seen in any language in the form of its extremely robust and flexible exceptions, something I feel is often imitated but never really bested.
I should probably pick up Rust again and try to build something substantial this time around.
I haven’t used Java, but I think TypeScript with ‘any’ blocked is pretty similar. If there is a type I haven’t covered in the logic, I get squiggles and TS doesn’t compile. But TS error handling is pretty terrible.
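(A minimal sketch of the behaviour being described, assuming "strict": true, which includes noImplicitAny, in tsconfig.json; the Shape union is a made-up example.)

```typescript
// Made-up discriminated union; with strict settings the compiler insists every
// member is handled before the function can return a number.
type Shape =
  | { kind: "rect"; width: number; height: number }
  | { kind: "circle"; radius: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case "rect":
      return shape.width * shape.height;
    case "circle":
      return Math.PI * shape.radius ** 2;
    default: {
      // If a new member is added to Shape and not handled above, this assignment
      // stops type-checking -- the "squiggles and TS doesn't compile" moment.
      const unhandled: never = shape;
      return unhandled;
    }
  }
}

console.log(area({ kind: "circle", radius: 2 })); // ~12.566
```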
From what you describe, you might also like Swift. It’s a modern language, strongly typed, and requires that you annotate functions that can throw, and requires that errors be handled or at least acknowledged in some way. And, no, it is not exclusive to Apple platforms.
TypeScript is basically a must-have when I’m working in JS (which for my current situation is basically never). But one thing I sorely miss in other languages is the concept of a checked exception. Basically, if it’s possible or likely a method will cause an error, you can use the throws keyword to force the method that calls it to catch the specific type of exception it could cause.
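(TypeScript has no checked exceptions, but as a rough stand-in, with hypothetical names throughout, a typed result union gets some of the same effect: the caller can't reach the value without acknowledging the specific failure cases, much like the exception types listed in a Java throws clause.)

```typescript
// Hypothetical parser whose failure modes are spelled out in the return type.
type ParseResult =
  | { ok: true; value: number }
  | { ok: false; error: "empty-input" | "not-a-number" };

function parseQuantity(raw: string): ParseResult {
  if (raw.trim() === "") return { ok: false, error: "empty-input" };
  const n = Number(raw);
  return Number.isNaN(n)
    ? { ok: false, error: "not-a-number" }
    : { ok: true, value: n };
}

const result = parseQuantity("12");
if (result.ok) {
  console.log(result.value * 2);
} else {
  // The possible errors are enumerated in the type, so handling (or at least
  // acknowledging) them is not optional for the caller.
  console.error(`bad quantity: ${result.error}`);
}
```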
I’ve tried Swift. While it has a lot of features I think are clever and useful, I never really got the knack for it.
I’m much the same way, but not being good at using a debugger can be a real setback when you work with complex systems. Fixing bugs in a video game, for instance, would be pretty much impossible. But one of the things I try to teach my students as an important skill is simply trying to run the code in their head to find out why it’s not working. Only about a quarter actually “get” it though.
I haven't used a debugger since I started using vim to write C# (probably 15 years or so), with the primary driver being that getting netcoredbg (or whatever the Xamarin/Mono equivalent to that was back then, if there even was one) to work with vim was a nightmarish hassle, and I didn't want to deal with it. Prior to the switch I used the debugger in Visual Studio a fair amount, so it took a little while to get used to troubleshooting without it, but I feel like all this experience thinking through the code and identifying possible causes and then testing those manually (usually by adding additional debug logging) has made me a far better programmer than I'd be otherwise.
Working this way makes debugging feel a lot more like a "science," in the sense that you have to come up with theories and then figure out the most efficient way to prove or disprove them. I think working on and improving that alternate way of approaching problems (what the OP calls "debugging in your mind"), instead of always relying on a debugger to do it for you, is a skill that has much wider benefits across the whole sphere of programming--not just in debugging.
I think there is a lot of value in building tools that help you build tools. Debuggers are good but sometimes they are way, way, way too generic. Some problems are better solved with Construction Sets.
See the Direct Manipulation craze of the 1980s for more examples; or Bret Victor:
But some problems are efficiently solved with a generic debugger. And some problems can be solved by logging and aggregating statistics over logs. It's best to learn the basics of each strategy and learn to reach for the right tool when the shape of the problem is different.
One thing that a lot of systems are missing is introspection and observability. Just because there is no error does not guarantee a system is doing the right thing. Debuggers are not a great fit here, but often logs aren't either--well-designed custom tools, though, can bubble up the same kind of information that debuggers can, so that would be one reason to avoid generic debuggers.
One reason I haven't used debuggers all that much is that in a new programming environment, it often takes time to learn how to set them up and use them, and debugging with print statements or by improving logging is often easier. I used them the most when I was a Java programmer, and sometimes in Dart or when using Chrome Devtools to debug a website.
That excuse goes away now that we have coding agents. Even if you don't know how to set up and use the debugger, your coding agent probably does, and they are fairly good at debugging, or so I hear.
So far I haven't seen a coding agent try to use a debugger, though; instead it runs little programs from the command line to test things using 'deno eval.' It will also connect to SQLite and execute SQL queries to learn what's in the database. And that seems good enough.
I believe it's just a matter of asking it to use a debugger, though? I haven't asked yet.
As someone who avoided debuggers for a long time - none.
Once in a blue moon a sys out or using my wits (divide and conquer, etc.) will do something for me that a debugger can't, BUT with a lot more work.
I'm not against debuggers in principle, but I think they can be a bit of a crutch.
The things that I like about printf/logging style debugging are that:
It works almost anywhere, even if I don't have a debugger available or know how the debugger for that particular environment works.
I can save the traces/logs to disk and search through them in my editor. I know there are time-travel debuggers that will let you rewind time, but being able to search backwards through a log can be pretty darn effective too. The traces can give me a nice temporal overview of the evolving state.
If I have saved the traces/logs to various files under conditions like working, broken, and/or with attempted fixes, then I can diff those traces (see the sketch after this list). That can often help reveal where something starts going off the rails. I don't think that I've ever seen a debugger equivalent to diffing traces like that.
Occasionally those diffs have helped me to debug race conditions, where I can see that events interleaved in a different order in the failing runs. (Some care needs to be taken here since prints are often guarded by internal mutexes, so observation this way can sometimes alter the thread order. But sometimes that actually helps tickle a race condition instead of suppressing it.)
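(A minimal sketch of that trace-diffing workflow in TypeScript on Node; the helper and file names are hypothetical. Write one trace file per scenario, keep each line deterministic, and compare the runs with plain diff afterwards.)

```typescript
// Hypothetical trace helper: one diff-friendly line per event, one file per scenario.
import { appendFileSync } from "node:fs";

const scenario = process.env.TRACE_SCENARIO ?? "default"; // e.g. "working" or "broken"
const traceFile = `trace-${scenario}.log`;

function trace(event: string, data: unknown): void {
  // Avoid timestamps, pointers, or other values that differ between runs for
  // reasons unrelated to the bug, otherwise the diff fills up with noise.
  appendFileSync(traceFile, `${event} ${JSON.stringify(data)}\n`);
}

// Sprinkled through the code under suspicion:
trace("enqueue", { id: 42, state: "pending" });
trace("worker-pick", { id: 42 });

// Then run the program once per scenario and diff where the runs first diverge:
//   TRACE_SCENARIO=working node app.js
//   TRACE_SCENARIO=broken  node app.js
//   diff trace-working.log trace-broken.log
```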
Basically, I think knowing how to debug without a debugger is a useful skill - essentially a lowest common denominator sort of thing.
That said, I do really like debuggers for an initial triage of crashes - for loading up a program and seeing where it segfaults or panics and then walking up the call stack and inspecting the state to determine how the program got there. I think there's value in both.
Sometimes you can't debug because your app needs to run in a deployed environment. I have had success remote debugging using telepresence to inject a container from my instance into a Kubernetes deployment. But it was A LOT of work to set up and maintain. And if it's something that cuts across several services (or is a timing bug), that still might not be enough. In this case, good logging can be a solution IF your problem is reproducible, so that you can steadily add more logging in the relevant areas until you figure it out.
What I love a debugger for is for troubleshooting failing unit tests. The execution path is usually pretty short, and it's not valuable to add a bunch of print statements to a test.
To go back to the first scenario, if you can use the log data to reproduce the bug in a unit test, then it's great to debug it.
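(A small sketch of that last step, assuming Node's built-in test runner; handleOrder is a stand-in for the real service code, and the payload is the kind of thing you would copy out of the production logs.)

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Stand-in for the code under test (in reality imported from the service).
function handleOrder(order: { orderId: string; items: string[] }): number {
  if (order.items.length === 0) throw new Error("order has no items");
  return order.items.length;
}

test("reproduces the failure seen in the production logs", () => {
  // Copied from the log line that accompanied the error in the deployed system.
  const loggedPayload = { orderId: "A-1042", items: [] as string[] };

  // Once this fails the same way the service did, a debugger (or more targeted
  // logging) has a short, local execution path to work with.
  assert.throws(() => handleOrder(loggedPayload), /no items/);
});
```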
It also depends on the language and tooling available to you:
For a long time, gdb was the only free choice for C/C++ debugging, and just getting the binary compiled with debug symbols is sometimes a hair-pulling exercise, especially if it's a big codebase or you have lots of library dependencies.
Java has always had good debugger support, and this, to me, was one of its strengths. I also think this was why it was taught in a lot of university intro-to-programming courses in the early 2000s.
Modern languages like Go and TypeScript have great debugger support and great tool integration with platforms like VS Code.
As a meta comment on step-through debuggers: they're a tool, and if you're a professional software developer, a key aspect of your job is to accomplish it as efficiently as possible. 90% of the time that means using whatever you're most comfortable with, but 10% of the time it can be more effective to take a different approach.
(if you're a hobbyist, just have fun, and do whatever piques your interest)
Also bear in mind that this conflates very different parts of one's day to day, such as:
Initially authoring work,
Modifying existing works,
Operating/fixing works in production,
all of which combine with your skillset, domain, and existing integrated toolsets to indicate which specific tool is best suited for the job. For example: a new coworker once complained that we relied on logging in our servers, instead of simply allowing developers to place breakpoints into live systems.
That's absurd, of course, but they'd been doing it at their old gig for long enough that alternatives to that process had never occurred to them. The rest of the world uses logs or fancy tracing suites, but for them, that was the path of least resistance. Similarly, when working on embedded devices, you might not even have access to the "source code" you're debugging, let alone a step through debugger, so it might be faster to whip out an oscilloscope than to prod w/GDB in your vendor's proprietary library.
As a final note: there are some sibling threads talking about how, in their preferred language, programs either run or fail to compile. That discounts logical errors, which (ime) are the majority of bugs in the modern era, and which can't always be lifted into the type system -- and even then, your encoding of the problem into types could have a logical error as well! Even the most advanced type system isn't a silver bullet -- you could write your entire spec in Rocq, extract it out, and still have "bugs" because you didn't understand your customer's requirements correctly (or you forgot to put in a liveness constraint, etc.). Imo, the goal should be to find the right tool, for the right job, and the right people.
Will print till I die!