Making good use of the Newbie

I am happily settling into what I still (after 8 months) think of as “the new job”. By about 5 months in, I understood the current product well enough to be useful helping with regression testing a release, but not well enough to test a new feature efficiently. So – of course! – assign me to the Next Generation product currently under development.

The current and nextgen products both monitor the same data and produce lots and lots of reports about it, and one of the things any existing customer is going to do with a nextgen system is point it at the same data their current system is looking at, and compare the reports. So we’d like to test nextgen using the extensive archive of automated data-correctness tests we have for the current product. Those tests are all implemented as: run a test-specific report, export the report to CSV, and compare that CSV to an archived CSV file that was carefully vetted for that report when the test was first implemented.
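The shape of one of those archived tests, as a minimal sketch – the report command line, file paths, and the straight text compare here are all invented placeholders, not our actual framework (and as described below, the real compare is structural, not textual):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Run the test-specific report, export it to CSV, and compare against the
# vetted archive. Command line and paths are hypothetical placeholders.
sub run_report_test {
    my ($report, $archive_csv) = @_;

    my $observed_csv = "observed/$report.csv";
    system("report_cli --report $report --export-csv $observed_csv") == 0
        or die "report run failed for $report\n";

    return slurp($observed_csv) eq slurp($archive_csv);
}

sub slurp {
    my ($path) = @_;
    open my $fh, '<', $path or die "can't read $path: $!\n";
    local $/;                 # read the whole file in one go
    my $content = <$fh>;
    close $fh;
    return $content;
}
```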

First nextgen project: do a proof of concept for implementing automated tests for nextgen this way. Make a nextgen report that matches a specific current report, run the report and export to CSV, and compare to the CSV archived for the current report test. Complications: while both products can trigger “run a report and get CSV” from the command line, those command lines are quite different; and, more importantly, the CSV files the two products produce are different.

Part of the project was figuring out a procedure to compose the automated test cases for nextgen, each corresponding to an existing test for the current product. Yes, doable, and, as expected, a manual but straightforward process for a human with brain engaged. (Also, it is still unclear whether it will be possible to make equivalent reports for *all* those existing tests, but at least we can get most of them.)

As for executing those test cases – making the test framework run the new commands is pretty easy. The compare is the interesting part: fortunately (and it makes sense for what we do with the data), the observed-to-archive CSV compare is not line-by-line; instead, each data section gets parsed into a data structure, and the data structures get compared. So the job is to modify the test framework to parse the nextgen observed CSV into current-style data structures, and compare those to the specified archive. The CSV differences include some trivial formatting stuff, like how to identify the start of a data section, but also different database column IDs for the same data, a different layout for the CSV data behind a certain kind of chart, and some database columns that don’t map directly – like “concatenate the values of nextgen columns A and C to get the corresponding values for current column X”. The test framework is written in Perl, CSV files are just text, and Perl was made for text manipulation, so I waded in and got something working. Fun!
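To give the flavor of that parsing-and-mapping step, here is a minimal sketch, not the real framework code: the column names and mapping table are invented (except for the A-and-C-to-X concatenation mentioned above), and the naive comma split glosses over CSV quoting, which the real parser has to handle.

```perl
use strict;
use warnings;

# Parse one nextgen CSV data section into the current-style row structure
# the compare code expects. Column names and the mapping are hypothetical.
sub parse_nextgen_section {
    my @lines = @_;
    chomp @lines;

    # Hypothetical nextgen -> current column-ID renames.
    my %column_map = (NG_ID => 'ID', NG_VALUE => 'Value');

    my @header = split /,/, shift @lines;
    my @rows;
    for my $line (@lines) {
        my %by_id;
        @by_id{@header} = split /,/, $line;   # nextgen column ID => value

        my %row;
        # Straight renames from nextgen column IDs to current column IDs.
        for my $ng (keys %column_map) {
            $row{ $column_map{$ng} } = $by_id{$ng} if exists $by_id{$ng};
        }
        # Columns that don't map directly: current column X is the
        # concatenation of nextgen columns A and C.
        $row{X} = ($by_id{A} // '') . ($by_id{C} // '');

        push @rows, \%row;
    }
    return \@rows;   # same shape the current-product CSV parser produces
}
```

Once the observed data is in that shape, the existing structure-to-structure compare doesn’t need to care which product the data came from.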

I really enjoyed that. I was able to show how to go about building the nextgen report test cases to match the current ones, and how to run them from the (modified) test framework the same way you run the current test cases. If the nextgen product is doing the right thing, the nextgen observed report data matches the current product’s archived data.

And then my working prototype test framework code got handed off to the guy who wrote the original code I started from – he understands all the special cases and other stuff I simplified to get that prototype working. So he’s reworking it into production-quality test code, a job he can do better than I can, while I go on to another proof-of-concept project. We’ll be able to compose and run those nextgen test cases when we need to.