Robot Framework offers support for data-driven test cases: write the test once, then make test cases by feeding it multiple data sets. You identify a keyword (aka function/subroutine) as your Test Template; your test cases are then just names followed by the arguments expected by that template keyword.
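Here's a minimal sketch of what that looks like (the addition-checking keyword and the data are invented for illustration, not from my project):

```robotframework
*** Settings ***
Test Template     Check Addition    # every test case below runs this keyword

*** Test Cases ***
# Each row: test name, then the template keyword's arguments
Two Plus Two Is Four       2    2    4
Zero Plus Zero Is Zero     0    0    0

*** Keywords ***
Check Addition
    [Arguments]    ${a}    ${b}    ${expected}
    ${sum}=    Evaluate    ${a} + ${b}
    Should Be Equal As Integers    ${sum}    ${expected}
```

Each line under the Test Cases section becomes its own test, with its own pass/fail in the report.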
My current test project is data correctness tests (REST based, implemented using Robot) for a feature in our next-generation product: check that each user-defined instance of the feature produces the data expected for that customization. I could have implemented data-driven test cases from the get-go, with test cases named for the test instances; it wouldn't have been too bad for the first dozen. But keeping that test case list in sync with my test instances as I added the expected several hundred more? That's tedious, error prone, and asking for trouble, and dealing with it up front would have delayed my first deployment. Get something working first, then make it better. So I put off the data-driven test case problem, made just one test case with a FOR loop to check all the test instances, and got my initial tests into production.
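For the curious, the one-big-test-case approach looked roughly like this (keyword names are invented; the real instance list comes from our JSON file, and the real checking keyword does the REST calls):

```robotframework
*** Test Cases ***
Check All Feature Instances
    # One test case covers every instance, so a single failure fails the whole
    # run, and the log is the only way to tell which instance broke.
    @{instances}=    Create List    instance-a    instance-b    # really: parsed from JSON
    FOR    ${name}    IN    @{instances}
        Check Instance Data    ${name}
    END

*** Keywords ***
Check Instance Data
    [Arguments]    ${name}
    Log    Would fetch and verify the REST data for ${name}    # stand-in for the real checks
```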
OK, now it's time to deal with the problem, before I get to those several hundred additional instances. Data-driven test cases as Robot normally offers them would not work well here. I figured what I needed to search for was pointers on automatically generating Robot data-driven test cases for dynamic data, maybe from a CSV file. That is not an easy thing for Robot to do out of the box, but yeah, other people have wanted it too. I found some stuff in forums, a blog post, and (score!) a library that does it for you: https://github.com/Snooz82/robotframework-datadriver
My test instances are stored as JSON data, and I have to rebuild that file anyway whenever I add instances to my test set, so it was easy to add a second step to the rebuild: parse the JSON file for the names of the instances and stuff them into a .csv.
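If I'm reading the DataDriver docs right, its default CSV reader wants a header row whose first column is *** Test Cases *** followed by the variable names, with a semicolon as the default delimiter, so my generated file looks something like this (instance names invented):

```
*** Test Cases ***;${instance_name}
Verify instance alpha;alpha
Verify instance beta;beta
```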
Using that library, I refactored the FOR Loop Of Doom into data-driven test cases automatically generated and named for each instance. It was EASY. The result is better reporting, more maintainable code, and easier problem isolation when a test instance produces an unexpected result. Thank you, Snooz82 (whoever you are), for that library!
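The refactored suite boils down to something like this minimal sketch (the file and keyword names are mine, and the real checking keyword is obviously more involved):

```robotframework
*** Settings ***
Library           DataDriver    file=instances.csv    # the CSV generated from our JSON data
Test Template     Check Instance Data

*** Test Cases ***
# Placeholder test: DataDriver replaces it with one generated test per CSV row,
# substituting each row's ${instance_name} into the test name.
Verify instance ${instance_name}    placeholder

*** Keywords ***
Check Instance Data
    [Arguments]    ${instance_name}
    Log    Would fetch and verify the REST data for ${instance_name}    # stand-in for the real checks
```

Now every instance shows up as its own named test case in the report, exactly as if I had written them all by hand.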