13 votes

How much testing do you guys do?

Pretty straightforward question, but some background: I was watching a discussion panel the other day about the ethics of self-driving cars. A topic came up about people writing crappy code, and more than that, people not testing their code, and when they do, only doing spot testing. I am in my last semester of uni and I am working with some companies where we are doing pretty extensive testing: happy flows, a lot of alternate flows, and UI/UX testing. So I wanted to put the question to you: do you do testing, and what type? How much do you focus on it? And do you love it or hate it?

15 comments

  1. davidb

    Depends on what it is.

    How important is it that this thing works? How much does it cost (or will it hurt anyone) if it doesn't?

    Early on in my career I worked for various semiconductor companies (AMD, Sun Microsystems, Intel, IBM). At a couple of those, it was my entire job to write tests and try to make things fail. We had whole departments (Functional and Formal Verification) and teams (unit and integration) dedicated to it. And, although I haven't seen cost breakdowns, it's a pretty safe assumption that the company spent way more money on testing than it ever did on research or design. Even when I advanced into platform architecture roles, I still spent a good portion of my time writing tests compared to coming up with new ideas and designs, or implementing them.

    Now, I do more software stuff - web and mobile apps, with some firmware and OS programming on occasion. The cost of making a mistake is much less (with the software I make).

    When I'm just starting out with a new program, I have almost zero testing (unless I'm using some boilerplate that includes good code coverage out of the box - even then I rarely extend it). I do this because I care more about creating something that works. If the program is just a simple script, all I care about is that it gets the task done. If it's an app for some business product, it's usually just a minimum viable product (MVP). In that case, the application itself is a test of the market demand. I care more that the application exists and is collecting feedback from customers than I do that it works flawlessly. As the program matures, I go back often and refactor my code, adopting a sort of hybrid test driven design. Once the app matures to the point where my customers (or I) are dependent on it, I go for full code coverage and write all sorts of testing.

    6 votes
  2. [3]
    somewaffles

    I'm still pretty early in my career and have only just started my second job out of school. At my first job, we were encouraged to dev test, but it really wasn't a big deal; we had a small QA team that did happy-path integration tests. At my new job we have unit tests for almost everything on the backend. Our QA has integration tests set up for regression testing, as well as a pretty strict regimen for all new tickets that come through. After they give the okay, we have UAT/business testing, all before it hits production.

    I think how much testing you actually do depends on the company, but there is a clear difference in product quality between the two companies I've worked for. My university's CS program was geared towards people on the path to be software devs, and I don't think we ever once went over anything about testing your code. Honestly, at my new company, I HATED the amount of dev testing we were expected to do before committing our code, but it has made me a much better developer. It's a very desirable skill in a dev, and I really wish my school had gone over it at least a little, especially because I had a class or two that revolved around software development (as opposed to the other, more CS-driven courses).

    5 votes
    1. [2]
      NecrophiliaChocolate

      Oh, that is really interesting. Seems like our university experiences were quite different. Testing has been drilled into us since our second CS class. We were required to write some tests, albeit basic ones, alongside the professor's unit tests. This eventually became a habit, and after a couple of classes we were writing tests on our own so we could commit better code. Right now, for my senior project, we are using Apex for CI testing, and oh my god, our development process is so much better.

      Why do you think there wasn't much testing going on in your first job? Was it just laziness, didn't seem important, or something else?

      3 votes
      1. somewaffles

        Not entirely sure, but I think all my professors were academics and none of them had any actual professional development experience. I think that, in combination with the course material they were trying to get through, it was just something that got left out.

  3. orangse (edited)

    My thoughts on testing and what I do currently are in the last paragraph. Testing was a major part of our curriculum in college. In almost every single class you had to write tests: unit tests in the lower-level classes (specifically the two intro courses), and then, in the upper levels, more general tests that were just inputs to the program (not sure what those would qualify as). The instructors would prepare somewhere between 10 and 30 hidden buggy implementations of the project, and your tests would be run against them; if they produced different output than when run on a correct implementation, you got the points. None of this was shown to us, just a small message that said "test_2 exposed bugs A, C, F, etc." This was slightly different for unit tests, where normally you'd just count the errors and print that at the end.

    This also factors into how the projects were graded, as a small aside: the instructors would prepare something like 10 to 90 hidden test cases (heavily dependent on how complex the code was), and then you could send your code to an autograder that would run it against them. In almost all situations there was basically no feedback if you failed a hidden test case, other than the line of the test that differed from the solution, so your own tests were the only thing you could look at to see where you were going wrong!
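    A toy sketch of how I imagine that harness working, in Python (all names here are invented; a test "exposes" a bug when the student test's output differs between a buggy implementation and the correct one):

```python
# Reference (correct) implementation the instructor keeps hidden.
def reference_sort(xs):
    return sorted(xs)

# Hidden buggy implementations the student tests are run against.
buggy_impls = {
    "A": lambda xs: xs,                       # forgets to sort
    "B": lambda xs: sorted(xs, reverse=True), # sorts the wrong way
}

def run_student_test(impl):
    # A student-written test: run the implementation on one chosen input.
    return impl([3, 1, 2])

# A bug is "exposed" when the buggy output differs from the reference output.
exposed = [name for name, impl in sorted(buggy_impls.items())
           if run_student_test(impl) != run_student_test(reference_sort)]
print(f"test_1 exposed bugs {', '.join(exposed)}")
```

A real autograder would of course run many tests against many mutants, but the comparison at its core could plausibly be this simple.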

    All in all, I'd say this made me hyper-aware of the testing process, and maybe a better developer? For personal projects I'll generally write some sort of unit tests for each header file that gets exposed, and then a few general test cases. The company I'm going to work for doesn't require a whole lot, but I did some testing on my own during an internship, and tbh I think that got me the job. I enjoy testing; the tests I have the most fun writing are fuzzing tests, where you generate random input and see if it crashes. Loads of fun imo.
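    A rough sketch of what I mean by a fuzz test, in Python (the `fuzz` helper and the targets are made up for illustration):

```python
import random
import string

def fuzz(target, runs=1000, max_len=50, seed=0):
    """Throw random strings at `target` and collect any input that crashes it."""
    rng = random.Random(seed)  # fixed seed so failures are reproducible
    failures = []
    for _ in range(runs):
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randrange(max_len)))
        try:
            target(s)
        except ValueError:
            pass  # rejecting bad input cleanly is correct behavior
        except Exception as exc:  # anything else is a bug worth a look
            failures.append((s, exc))
    return failures

# int() rejects junk with a clean ValueError, so it survives the fuzzer.
assert fuzz(int) == []
```

Real fuzzers (AFL, libFuzzer, Hypothesis) are far smarter about generating inputs, but the crash-hunting idea is the same.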

    4 votes
  4. [3]
    admicos

    I don't. I really should, but I just cannot figure out how. There are a lot of guides about how to use the libraries, and about testing a simple calculator or whatever, but I never understood what to do if you don't have a public API to test, or if you rely on an external service, and that kind of more complicated stuff.

    The most I managed was testing an "emulator" for the Minecraft mod ComputerCraft, but that required a lot of refactoring that I just don't think I can apply to anything else. (It definitely improved the code, though, so that's nice.)

    1 vote
    1. [2]
      somewaffles

      but I never understood what to do if you don't have a public API to test, or what if you rely on an external service, and that kind of more complicated stuff.

      Mocking. There are lots of different types of testing but it sounds like you are after unit tests. If you're writing unit tests, the idea is to only test what that specific function is doing and returning, not what is passed to it. If you want to write efficient tests, you should be familiar with dependency injection. Once your external service is injected, you can set up your tests so that anytime your external service is called, it will return mocked data that you define rather than actually calling the service or API.

      You usually write multiple tests per function that feed it all sorts of data to get good coverage. There are many ways to do this depending on your language/library. The whole idea was super confusing to me at first as well, but it's actually not as complicated as it seems once you get past injecting the services that need mocking.
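      To make that concrete, here's a minimal Python sketch (the `CheckoutService` and its pricing client are invented names; `unittest.mock` is the standard-library mocking tool):

```python
import unittest
from unittest.mock import Mock

class CheckoutService:
    """Invented example: the price service is injected, not hard-coded."""
    def __init__(self, price_client):
        self.price_client = price_client  # dependency injection point

    def total(self, item_ids):
        # In production this would call a real (possibly remote) service.
        return sum(self.price_client.get_price(i) for i in item_ids)

class CheckoutServiceTest(unittest.TestCase):
    def test_total_sums_prices(self):
        mock_client = Mock()
        # Canned data instead of a real network call.
        mock_client.get_price.side_effect = lambda item_id: {"a": 3, "b": 7}[item_id]
        service = CheckoutService(mock_client)
        self.assertEqual(service.total(["a", "b"]), 10)
        # The mock also records how the dependency was used.
        mock_client.get_price.assert_any_call("a")
```

Run it with `python -m unittest`. Because the client is injected, the test never touches a real service, and the mock doubles as a record of how the dependency was called.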

      6 votes
      1. NecrophiliaChocolate

        Yes! This is what I am doing. I am developing Alexa skills, and we use the Bespoken API, which is essentially an offline virtual Alexa; we just have to feed it the correct JSON from Postman.

        1 vote
  5. Octofox

    Historically I have been really lazy about testing, and my previous job allowed it. As a result, it was really common for a new change to break something else on the website. At my current job there was an actual QA tester who would test the shit out of any change, and all of my PRs were getting thrown back at me for edge cases that broke them, and for rough edges I couldn't be bothered fixing and didn't think anyone would notice. After a few weeks at the new job I had to change my mindset, make sure I tested as much as possible before sending anything away, and throw away the attitude of "it's good enough".

    Recently, after a few bad bugs hit production due to new features breaking existing ones, changes were put in place so that no PR gets accepted until it has unit tests covering all of it. This, combined with a new peer review system, has meant that the latest release went out perfectly, which is awesome.

    Unit tests were a huge pain in the ass to start with, and they feel like they're bogging you down, but once you get to know how to use them, they're such a powerful tool that saves you so much time. It's an absolutely wonderful feeling to update a library in your app, run the tests, and have them point out the exact bits of your code that broke due to the update, whereas without unit tests you would have to test every feature on your site repeatedly and likely still miss bits.

    1 vote
  6. mcavocado

    I don’t do TDD, but where I work we require extensive unit and functional testing. Every change is accompanied by tests that verify that it does what it should. It’s a little more time-consuming up front, but in the long run it saves so much more time, especially when you have multiple teams sharing a codebase.

    It’s really quite liberating - we have faith in our test suite, and if the pipeline passes then it’s safe to deploy. Combined with small features and CI/CD, we deploy to production multiple times a day with confidence.

    If there’s one area that we’re weak on, it’s integration tests between web/mobile apps and APIs. It’s a thing that we know we should improve, but it hasn’t really bitten us much and so nobody has pushed to prioritize it.

    1 vote
  7. lazer

    At work it varies wildly by team, but I would say not enough. At home with my hobby project I spend most of my time writing and fixing tests, which drive further feature development. I would not call this 'TDD' as that seems to be a much more strictly defined methodology, but I test a lot, with most of my tests being integration tests. For me it's almost like a way of gamifying my development process - chasing after a higher coverage %. I could talk for hours about coverage as a metric and how it is not always useful and how differently I'd be approaching testing if my hobby project was something with an actual scope and timeline.

  8. Emerald_Knight

    This depends largely on the complexity of the project I'm working on and how frequently I make potentially breaking changes. Tests take time to write, and the time it takes to write them affects feature velocity. When you work for a startup, feature velocity is essential. That being said, sometimes you find yourself making changes that require you to constantly go back and thoroughly test multiple parts of your system. Manually.

    When you find yourself doing a lot of manual testing like this, it's important to bite the proverbial bullet and write yourself some unit tests, otherwise the amount of time spent manually testing your changes will far outweigh the amount of time it would have taken you to write the automated tests. Additionally, you avoid being worn down by the prospect of having to perform even more manual testing, as well as avoiding the very real possibility of forgetting a test case that results in a bug making its way into production.

    Manually testing your code can thus be considered a form of technical debt. You can't avoid technical debt and some amount of technical debt is a good thing, but eventually you have to pay it down as it eventually becomes too expensive to leave alone.

    So, to answer your questions: I only do as much automated testing as is needed to free up development bottlenecks. I fucking hate having to write unit tests, but I hate having to do the same, monotonous manual testing over and over again and not being able to get any real work done, so I do my best to strike a balance between writing the tests I need and deferring other tests until they're actually needed. In between, I just do as much manual testing as is necessary to be confident that my code isn't suddenly broken. I will say, though, that once the unit tests are actually in place, I absolutely love them. It's nice being able to do a small refactor and have all of my unit tests tell me either "no problem, you're good to go" or "lol yeah, you broke something, moron". It makes my life a hell of a lot easier.
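    As a tiny illustration of that trade-off (the `slugify` helper here is invented), one round of manual checking, captured once as a unit test that then runs for free on every change:

```python
import re
import unittest

def slugify(title: str) -> str:
    """Invented helper: lowercase, strip punctuation, hyphenate whitespace."""
    title = re.sub(r"[^a-z0-9\s-]", "", title.lower())
    return re.sub(r"[\s-]+", "-", title).strip("-")

class SlugifyTest(unittest.TestCase):
    # Each case here replaces one round of checking the output by hand.
    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  a   b "), "a-b")
```

Run with `python -m unittest`; once it exists, that manual check never has to be repeated by a human again.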

  9. izik1

    To the title: "Depends"
    Expanding on that: I'll test any application I make to make sure it at least does the right thing on the happy path. If it's a one-time-use script, I'll stop there; otherwise, depending on the scope (and if it's a personal project), I'll have my own usage be the testing. If it's bigger, I'll write a couple of tests for the more complex things. Bigger still (my compiler project, for example) and I'll write a test for everything that makes sense to have one, as long as I can think of one. Using said compiler as an example, I have tests for lexing (is it possible to crash the program by running the lexer? does the lexer lex things correctly?), parsing (pretty much all of these are "do I get a correct AST" tests), and fuzzing if something looks a bit hairy.

    I won't write tests for trivial methods because they are, well... trivial. Like, I won't create a (unit) test to make sure add2(num: i32) does the correct thing, because if I have such a method it should be used in other places, and at some point something that uses it should be tested; if that trivial method (which is probably a foundation at this point) is incorrect, that test shouldn't pass.

    I'm not sure I'd advocate my way of doing things, though; I probably don't test enough. On the other hand, I normally have a great compiler to tell me when I'm doing something wrong, such as returning a Foo instead of a Bar, which is a much more frequent pattern than you might think (consider your language's equivalent of Maybe<T>, Option<T>, or T?). Sometimes it even untangles some complicated thing that would otherwise end up causing a data race I wouldn't know about.

    I use this as much as I can, in order to make the code physically unable to encode incorrect states whenever possible. (Why worry about what happens when you get a null pointer if you can't get a null pointer, or about what happens when somebody calls your functions in the wrong order if they can't?)
    With that being said, I actually need to add some unit tests to my Game Boy emulator to ensure that I don't accidentally make stuff run for 8 M-cycles when it should be running for 8 T-cycles (2 M-cycles, if anyone cares).
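    A Python-flavored sketch of that "wrong order is impossible" idea (the socket-ish names are made up): each state is its own type, and only the methods that make sense in that state exist at all.

```python
from dataclasses import dataclass

@dataclass
class Connected:
    host: str
    def send(self, data: bytes) -> int:
        # Stand-in for real I/O: pretend every byte was written.
        return len(data)

@dataclass
class Disconnected:
    def connect(self, host: str) -> Connected:
        return Connected(host)

sock = Disconnected()
# sock.send(b"hi")  # AttributeError: a Disconnected socket has no send()
sock = sock.connect("example.org")
assert sock.send(b"hi") == 2
```

A static type checker (or a compiler, in a statically typed language) would flag the send-before-connect call before the program ever runs; the state that encodes the mistake simply doesn't exist.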

  10. gabelanglais

    I don’t necessarily love testing, but it’s a necessity. I usually write unit tests for most of my code as a sanity check, and then some larger integration tests that exercise large components of the project as a whole. I’ve never had to mess with UI/UX testing, so I can’t comment on that.

  11. KilledByAPixel

    Test as much as you need to. Try to do everything a potential user could do. If you wrote the code, try to think of where it could potentially fail, then try to do those things. Write clean code and occasionally take the time to refactor it; you will discover bugs and save time in the long run.

    Some really important advice: when you add something new, test it as much as possible, as soon as possible. It's much easier to fix bugs when you know the problem is probably caused by something you just did.