Fun With Infrastructure

My group was in a lull of release-related work, so we put in some time on other needful things. Overhauling regression test environments, and retiring some of them. Some test suites needed maintenance, some needed to move to a different execution cadence, and some needed to be retired. Tools work in preparation for near-future test requirements. And we wanted improvements in test reporting; that one was for me!

Our classic product has a huge and valuable regression test library, implemented with an in-house testing framework. Traditionally it runs command line scripts (as cron jobs), and results get stored as database records. Reporting is via a few simple in-house applications that display test results based on those database records.

We wanted to move the official regression test execution to Jenkins, and get improved reporting for it. (I have a sneaking suspicion that the wish for improved reporting was partly triggered by envy of the reporting that the Robot Framework Jenkins plugin provides for my next-generation-product Robot tests.) Well, Jenkins can execute scripts just as well as cron can, so moving the execution is no problem. The information provided by the Test Results Analyzer Jenkins plugin looks useful, but it works from JUnit test data. We’d also like to make it easy to answer questions such as “is this test failing only on certain platforms?” and “which tests are reporting failures, and who needs to look at them?”. One constraint on all this: the work has to fit in between working on releases, so improvements have to be incremental.


Migrating test execution to Jenkins is pretty easy: take the shell scripts that call the specialized testrunner CLI scripts, and turn them into Jenkins pipelines. Yeah, that’s using Jenkins as a cron clone, but it’s a start, and we’ll improve on it during the next window for this kind of work. Better reporting is a better use of the currently available time.
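
For the cron-clone stage, a pipeline can be as minimal as the sketch below. This is a generic declarative Jenkinsfile, not our actual pipeline; the script name, suite name, and schedule are hypothetical placeholders.

    // Minimal declarative pipeline: run the existing testrunner shell
    // script on a schedule, just as the old cron job did.
    pipeline {
        agent any
        triggers {
            // Roughly nightly; H lets Jenkins spread the start time
            cron('H 2 * * *')
        }
        stages {
            stage('Run regression suite') {
                steps {
                    // Same invocation the cron job used (hypothetical name)
                    sh './run_regression_suite.sh --suite classic-nightly'
                }
            }
        }
    }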


To make it easier to improve things in small pieces and then pause, I made a wrapper for the existing CLI testrunner. The wrapper calls the testrunner, then “does stuff” to process the results of that testrunner execution. So, as we convert the shell scripts run by cron to Jenkins pipelines, we change each testrunner invocation to an equivalent invocation through the wrapper. Then we can change what the wrapper does to process results without needing to edit all the pipelines.
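
In shape, the wrapper is something like this minimal Python sketch; the testrunner command and the post-processing hook are hypothetical stand-ins for our in-house pieces.

    #!/usr/bin/env python3
    # Sketch of a testrunner wrapper: run the existing CLI testrunner
    # unchanged, then post-process its results in one place. The
    # "testrunner" command and the post-processing step are hypothetical.
    import subprocess
    import sys

    def postprocess_results() -> None:
        # All result processing lives here, so changing it never
        # requires touching the Jenkins pipelines that call the wrapper.
        print("processing results of the testrunner execution")

    def main() -> int:
        # Pass arguments straight through, so the wrapper is a drop-in
        # replacement for the testrunner in every pipeline.
        result = subprocess.run(["testrunner", *sys.argv[1:]])
        postprocess_results()
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())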


First step for improved reporting: make the wrapper get the database records produced by its testrunner execution, and export that information as JUnit-format XML files. Then we can use the JUnit test reporting available for Jenkins, such as the JUnit publisher and the Test Results Analyzer plugin. Making that fake JUnit data turned out to be fairly easy, so now we have pretty charts of our test results history.
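
The conversion is roughly the sketch below. The record fields are hypothetical stand-ins for our database rows; the XML shape is the standard JUnit format that the Jenkins JUnit publisher consumes.

    # Sketch: turn testrunner database records into JUnit-format XML.
    # The record structure here is hypothetical; the XML layout is the
    # standard one the Jenkins JUnit publisher reads.
    import os
    import xml.etree.ElementTree as ET

    records = [  # stand-ins for rows fetched from the results database
        {"suite": "classic-regression", "test": "login_basic",
         "status": "pass", "seconds": 3.2},
        {"suite": "classic-regression", "test": "report_export",
         "status": "fail", "message": "timeout waiting for export"},
    ]

    failures = [r for r in records if r["status"] == "fail"]
    suite = ET.Element("testsuite", name="classic-regression",
                       tests=str(len(records)), failures=str(len(failures)))
    for r in records:
        case = ET.SubElement(suite, "testcase", classname=r["suite"],
                             name=r["test"], time=str(r.get("seconds", 0)))
        if r["status"] == "fail":
            ET.SubElement(case, "failure", message=r.get("message", ""))

    os.makedirs("results", exist_ok=True)
    ET.ElementTree(suite).write("results/classic-regression.xml",
                                encoding="utf-8", xml_declaration=True)

In the pipeline, a junit 'results/*.xml' step then publishes those files, and the Test Results Analyzer plugin picks them up from the build history.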


Second step for improved reporting: take one of those in-house reporting applications and modify it so it only finds records for tests executed by the Jenkins user, rearrange the data presentation a bit, and restrict the filtering options. Then add it to our Jenkins Dashboard as an “iframe portlet”, which basically tells the dashboard to display a specified URL. It was my first experience working with PHP, but really, when you start with something already close to what you need, some modifications are obvious no matter what the language is, and you can look up syntax for the rest. I made it work.


So now our Jenkins Dashboard is a single source of information about the status of our actual regression tests, uncluttered by noise from experimental test execution. It is much easier to tell what is and isn’t reporting green, and who needs to investigate whether a failure is a product bug or a broken test. And it was fun to build!


I have some ideas for improvements to the testrunner wrapper, for the next time I can devote some effort to it. And I’m sure there will be requests for other reporting features. More fun with infrastructure to come, but for now, back to testing product.