Looking for advice on a CI / regression testing platform
I'm looking for some advice on how to set up a basic CI / regression-testing suite. This isn't my full-time job, but a side project my group at work wants to spin up to... shall we say, give us more real-time monitoring of the functionality and performance regressions coming out of the underlying software stack's development (long story).
As none of us are particularly automation experts, I was looking for some advice from my fellow Tilderinos. Please forgive me if any of the below is obvious and/or silly.
A few basic requirements I had in mind:
1. Can handle different execution environments: essentially different versions of the software stack, both in Docker form and (eventually) via Lmod or some other module-file approach (e.g., Tcl), plus sensible handling of a node list.
2. Related to point 1: supports using the products of builds as execution environments. Ideally we'd like a build step that compiles the stack and installs it to an NFS share, from which we can load it as a module.
3. Simple to add tests. Again, this isn't our full-time job -- we mostly want to drop a quick bash script / makefile / source file into the tests when we run into an issue, before we forget about it.
4. Related: we should be able to store the entire thing as a git repo. I have seen this to some extent with Travis, but my experience with Jenkins was... sub-par (is there a history? A changelog? Any way at all of backing up the test config?).
5. Some sort of post-processing capability. At a glance we need to be able to see the top-line performance numbers for 20-30 apps across the different build environments. Bonus points if there's a graph showing performance vs. build version or the like, but honestly a CSV log file is good enough.
6. Whatever CI software we pick has to be able to run locally. Lots of these are internal-only numbers / codes. FOSS preferred.
7. A web UI for scheduling runs / visualizing results would be nice, but again this could be a bash script and none of us would bat an eye.
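To give a concrete sense of the level of tooling we have in mind, something like the sketch below would honestly cover most of it (all the names here -- tests/<name>/run.sh, STACK_VERSION, results.csv -- are made-up conventions for illustration, not anything a particular CI tool requires):

```shell
#!/bin/sh
# Rough sketch of a test runner: each test lives in tests/<name>/run.sh,
# the stack under test is identified by the STACK_VERSION env var, and
# each run appends one line per test to a CSV log.
RESULTS="${RESULTS:-results.csv}"
STACK_VERSION="${STACK_VERSION:-unknown}"

# Write the CSV header once.
[ -f "$RESULTS" ] || echo "test,stack,status,seconds" > "$RESULTS"

for dir in tests/*/; do
    [ -d "$dir" ] || continue
    name=$(basename "$dir")
    start=$(date +%s)
    # Run the test in its own directory, capturing all output to a log.
    if ( cd "$dir" && sh ./run.sh ) > "log_${name}.txt" 2>&1; then
        status=pass
    else
        status=fail
    fi
    end=$(date +%s)
    echo "${name},${STACK_VERSION},${status},$((end - start))" >> "$RESULTS"
done
```

Whichever CI system we end up with would then just need to invoke a script like this per environment; the CSV covers the post-processing requirement.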
Any thoughts would be greatly appreciated. Thanks!
I'd just like to recommend sourcehut's CI pipelines. It's very easy and quick to get set up, and integrates with the sourcehut platform very well. Sourcehut is a git platform developed by Drew DeVault, the author of sway.
Sourcehut build documentation: https://man.sr.ht/builds.sr.ht/
What is sourcehut? https://sourcehut.org/
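A build manifest is just a YAML file (`.build.yml`) in the repo root, so it's versioned alongside the code. Roughly sketched (the image, repo URL, and task contents are placeholders):

```yaml
# Sketch of a builds.sr.ht manifest -- see the docs linked above for
# the full set of keys. Repo URL and commands are placeholders.
image: debian/stable
packages:
  - build-essential
sources:
  - https://git.sr.ht/~example/project
tasks:
  - build: |
      cd project
      make
  - test: |
      cd project
      make test
```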
Have a look at https://www.gocd.org/ I have only used it briefly at a previous gig, but I've heard good things about it since. As with any tool there is an initial learning curve and you'll probably not get your pipelines "right" immediately, so expect to have to fiddle a bit to find a good approach.
I like the look of this one, thanks!
I would heartily recommend GoCD as well, but honestly not in response to this post (cc @archevel). It sounds like you haven't really set up a CI/CD pipeline before, so I would not recommend jumping straight into deploying and maintaining your own CI/CD system.
Start small. Use Gitlab CI or Github Actions depending on where your project is hosted. Just run some tests with it. Then add performance tests to it (most test runners support or have plugins for that). Both GH and GL support saving "build artifacts", which are arbitrary files you choose by path from the build; so just save the logs, test outputs, performance results, whatever.
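For example, a minimal `.gitlab-ci.yml` along those lines might look like this (the script and file names are placeholders; GitHub Actions has an equivalent artifacts mechanism):

```yaml
# Minimal GitLab CI sketch: run the tests, keep the logs and CSV as
# build artifacts. run_tests.sh / results.csv are hypothetical names.
test:
  image: ubuntu:22.04        # or an image with your software stack baked in
  script:
    - ./run_tests.sh
  artifacts:
    when: always             # keep results even when the job fails
    paths:
      - results.csv
      - logs/
```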
Start small. Build incrementally. Try some scuba diving before building your own submarine.
I'm not advocating for Jenkins, because Jenkins can be a huge pain in the butt sometimes, but it does have two types of pipeline files (scripted and declarative Jenkinsfiles) that can be used to check the steps of your build job into version control. You still have to manually create the jobs themselves, though.
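A declarative Jenkinsfile checked into the repo looks roughly like this (stage names and shell commands are placeholders):

```groovy
// Sketch of a declarative Jenkinsfile; the stages and commands
// here are placeholders, not a recommended layout.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
        stage('Test') {
            steps {
                sh './run_tests.sh'
            }
        }
    }
    post {
        always {
            // Keep results around even when a stage fails.
            archiveArtifacts artifacts: 'results.csv', allowEmptyArchive: true
        }
    }
}
```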
On a related note, does anyone automate management of Jenkins jobs in any fashion?